\section{Introduction}
User equipment (UE), e.g., smartphones, tablets, wearable devices, and digital cameras, plays an important role in new application scenarios such as virtual reality, augmented reality and surveillance systems. Meanwhile, the resource constraints of UE components such as the CPU, GPU, memory, storage, and battery have driven a dramatic surge in developing new paradigms to handle computation-intensive tasks \cite{5445167}. For example, computationally intensive applications requiring a large amount of computing resources are not suitable to run on mobile or portable devices.
Mobile cloud computing (MCC) \cite{kumar2013survey} provides a solution where the UE offloads computationally intensive tasks to a remote, resourceful cloud (e.g. EC2), thereby saving processing power and energy. Furthermore, Kumar \emph{et al.} \cite{5445167} have identified a computing-communication trade-off, and concluded that mobile task offloading is beneficial when the computing intensity of the task in question is high and when the network resources required to transfer the task to the remote mobile cloud are relatively low. Kumar \emph{et al.} have also emphasised the need for high-bandwidth wireless networks for task offloading to be efficient.
\begin{figure}[htp]
\includegraphics[scale=0.5]{mobile_cloud_controller.pdf}
\caption{\label{chap:1:fig:C-RAN_MC} Overview of the architecture, showing the interaction between the mobile device, the mobile cloud and the C-RAN.}
\end{figure}
Subsequently, the offloading algorithm of the offloading framework that resides on the mobile device may make decisions on \emph{what} (which tasks), \emph{when} (when it is beneficial to offload) and \emph{where} to offload. Various authors have proposed different architectures for offloading frameworks and have provided their implementations; examples include MAUI \cite{Cuervo:2010:MMS:1814433.1814441}, ThinkAir \cite{6195845}, CloneCloud \cite{Chun:2011:CEE:1966445.1966473} and Cuckoo \cite{kemp2012cuckoo}. These architectures mostly depend on the offloading type/level (thread level, method level, code level) and the implementation platform (programming language, application libraries used). However, the partitioning methods of task offloading are out of the scope of this paper.
To ensure highly efficient network operation and flexible service delivery when handling surging mobile internet traffic, the Cloud Radio Access Network (C-RAN) \cite{mobile2011c} brings cloud computing technologies into mobile networks by centralising the baseband processing units (BBU) of the radio access network. It moves the BBU from the traditional base station to the cloud and leaves the remote radio heads (RRH) distributed
geographically. The RRHs are connected to the BBU pool
via high bandwidth and low-latency fronthaul. The BBU pool
can be realised in data centres,
and the centralised baseband processing enables BBU to be
dynamically configured and shared on demand \cite{7105959}. In this
case, with the transition from conventional hardware-based
environment to a software-based infrastructure, C-RAN can
achieve flexible management of BBU resources.
Figure \ref{chap:1:fig:C-RAN_MC} shows the hybrid deployment
of C-RAN with a mobile cloud for computation offloading, as opposed to the traditional mobile cloud hosted on the internet. Connected
with geographically distributed RRHs and centralised BBUs,
UEs get access to the VMs (i.e., mobile clones) in
a mobile cloud for computation offloading. For computation
offloading requests, data is first transmitted to the base station (RRH and BBU) via the uplink. Once processed by a clone (virtual machine) in the mobile cloud, the results are returned to the UEs via the downlink. However, our work largely focuses on the mobile cloud side.
As shown in Figure \ref{chap:1:fig:C-RAN_MC}, a new kind of resource has been introduced into the traditional mobile operator's network. This is not only beneficial to the mobile users (UEs) for offloading computationally intensive tasks to the cloud; there are also a number of aspects that operators can benefit from. Operators are no longer only network pipe providers, as they can also offer computing services to their subscribers. Moreover, the operators may charge subscribers on top of the current price plan for the extra computing services they receive, by introducing new price plans for mobile task offloading. One may introduce new components into existing systems, but such components still have to be managed so that resources are utilised efficiently.
Software Defined Networking (SDN) \cite{6812298} is a centralised architecture that separates the control plane from the data plane of traditional networks. Centralised controller nodes are introduced into the network for managing resources. Such SDN-based architectures and management controllers have been applied to wireless networks in several works \cite{6845049} \cite{6702534}. Their main focus has been on defining SDN modules and interfaces (northbound and southbound) for wireless networks \cite{6845049}, enabling efficient network service deployment. Moreover, SDN has been proposed for Long Term Evolution (LTE) wireless network control \cite{6845049}, e.g. interference mitigation and network access selection.
It follows that a centralised controller needs to be introduced for managing both the computing resources in the mobile cloud and the communication resources in C-RAN. Such a controller may still control wireless network resources, while also managing computing resources when mobile tasks are offloaded to the cloud. Figure \ref{chap:1:fig:C-RAN_MC} depicts the proposed controller, which manages the mobile cloud and the baseband resources in C-RAN. The dotted lines illustrate the connectivity to both the BBU pool and the mobile cloud for sending control signals. In this architecture, a signalling protocol needs to be designed for task offloading and for managing wireless resources in the operator's network (Communication Manager) and computing resources in the mobile cloud (Compute Manager). This paper focuses mainly on mobile task offloading, on managing mobile cloud computing resources, and on developing a prototype of such a system architecture.
The cloud resources in the mobile cloud have to be managed efficiently while also taking the delay constraints of compute (task) offloading into consideration. Auto-scaling \cite{Zhan:2015:CCR:2775083.2788397} enables cloud administrators to dynamically scale the computing resources serving their applications to adapt to workload fluctuations. There are two types of auto-scaling: 1) horizontal scaling adds and removes virtual machines (VMs) from an existing VM pool that serves an application; 2) vertical scaling adds and removes virtual resources from an existing VM. Horizontal scaling has been used more often than vertical scaling in the literature. Moreover, horizontal scaling allows the application to achieve higher throughput per addition, but its deployment cost is greater than that of vertical scaling \cite{Dutta:2012:SAA:2353730.2353802}.
When scaling vertically, resource provisioning introduces delay, which means the desired effect may arrive too late. Yet few previous auto-scaling techniques have taken these delays into consideration; \cite{Lorido-Botran:2014:RAT:2693546.2693559} stresses the need for future work on auto-scaling that accounts for auto-scaling time delays. Auto-scaling may scale virtual disk, memory or CPU resources (virtual CPU is the most commonly scaled resource), or a combination of resource types. One may also scale VMs up and down depending on the current workload. Moreover, when scaling resources, one may scale ``continuously'' by adding/removing a single resource unit from the existing VM, or add or remove more than one resource unit at a time (``non-continuously''). These scenarios should be taken into consideration when designing efficient offloading techniques. It is therefore important to understand the trends of such resize delays before designing auto-scaling algorithms, especially when auto-scaling for delay-sensitive applications.
The remainder of this paper is organised as follows. Section \ref{sec:UOP} introduces the proposed task offloading and resource management (i.e. unified) protocol. Section \ref{sec:eval} then presents the implemented prototype of the system described above and a performance analysis of the computing resource management operations using vertical auto-scaling. Conclusions are drawn in section \ref{sec:conclusion}, followed by the acknowledgement in section \ref{sec:ack}.
\section{Unified Protocol}
\label{sec:UOP}
Cloud Radio Access Networks allow cellular networks to process the baseband tasks of multiple cells/RRHs as well as to allocate cellular resources to subscribers. In conventional mobile task offloading systems, the offloading framework and the mobile network are unaware of each other: the network treats offloading data as any other data, and the offloading destination resides outside the network operator's network. The offloading process could be more efficient if the network and cloud resources could be dynamically allocated to fit offloading requirements.
First, we design a protocol for communication between the controller, the BBU and the mobile cloud, enabling resource allocation when offloading. We assume that the UE has already discovered its BBU and its clone, and propose a uniform PDU (Protocol Data Unit) format for both resource allocation and task offloading. The proposed Unified Offloading Protocol (UOP) operates in the Application Layer of the Internet Protocol Suite (OSI Layer 7). The Mobile Cloud Controller manages, and is the main point of contact for, both the user (service consumer) and the services on the service provider's side. We assume that mobile cloud controllers are discoverable throughout the network using web service discovery protocols, e.g., Universal Description, Discovery and Integration. Once a mobile device discovers the most suitable mobile cloud to offload to, it directly connects to the corresponding controller using the offloading protocol.
\subsubsection{Protocol Data Unit format}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{offloadpacketformat}
\caption{Unified Offloading Protocol PDU. (The ticks represent number of bits)}
\label{fig:offloadpacketformat}
\end{figure}
\begin{table*}[]
\centering
\caption{UOP attribute definition}
\label{tab:sec3:UOP_attribute_definition}
\begin{tabular}{|l|l|l|l|}
\hline
Field Name & Value Type & Size (Bytes) & Description \\ \hline
PDU type & Unsigned Integer & 4 & \begin{tabular}[c]{@{}l@{}}An integer value that indicates the type of PDU. \\ Refer to Table \ref{tab:sec3:pdutypes} for the list of PDU types.\end{tabular} \\ \hline
Request ID & Unsigned Integer & 4 & \begin{tabular}[c]{@{}l@{}}An identifier to match requests with replies. The \\ mobile device sets the Request ID in the request\\ PDU and then is copied by the controller and\\ the clone in the response PDU when offloading. \\ The controller sets this attribute when sending \\ resource management messages.\end{tabular} \\ \hline
ACK & Unsigned Integer & 4 & \begin{tabular}[c]{@{}l@{}}0 if response packet, 1 if Acknowledgement packet \\ and, request rejection packet if the value \textgreater 1\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Object\\ Binding\end{tabular} & Object & Variable & \begin{tabular}[c]{@{}l@{}}A set of name-value pairs identifying application\\ objects to execute, user data and error messages\\ with their corresponding object references. Refer to\\ Table \ref{sec:3:definitions_of_object_binding_name-value_pair} for object definition.\end{tabular} \\ \hline
\end{tabular}
\end{table*}
\begin{table}[]
\centering
\caption{PDU Types (some examples)}
\label{tab:sec3:pdutypes}
\begin{tabular}{|l|l|}
\hline
PDU Type value & PDU type \\ \hline
0000 & Offload\_Req \\ \hline
0001 & Offload\_Accept \\ \hline
0002 & Offload\_Denied \\ \hline
0003 & Offload\_Start \\ \hline
0004 & App\_Register \\ \hline
0005 & App\_Request \\ \hline
0006 & App\_Data \\ \hline
0007 & App\_Response \\ \hline
0008 & Offload\_FIN \\ \hline
0009 & Manage\_Compute \\ \hline
0010 & Manage\_BBU \\ \hline
\end{tabular}
\end{table}
\begin{table*}[]
\centering
\caption{Definitions of Object Binding name-value pair}
\label{sec:3:definitions_of_object_binding_name-value_pair}
\begin{tabular}{|l|l|l|l|}
\hline
Subfield Name & Value Type & Size (Bytes) & Description \\ \hline
Object\_Name & Sequence of Integer & Variable & \begin{tabular}[c]{@{}l@{}}Object identifier (code, user,\\ data, application error)\end{tabular} \\ \hline
Object\_Value & Object & Variable & \begin{tabular}[c]{@{}l@{}}Contains the values of the \\ specified object type.\end{tabular} \\ \hline
\end{tabular}
\end{table*}
The format of the PDU (Protocol Data Unit) for both offloading and resource allocation is shown in Figure \ref{fig:offloadpacketformat}. The offloading PDU is passed down to the TCP layer and encapsulated with the TCP packet header. The offloading packet contains application control fields and a payload. The payload may contain offloading code, user data and application errors, if any. The definitions of the fields are given in Table \ref{tab:sec3:UOP_attribute_definition}, the PDU types in Table \ref{tab:sec3:pdutypes} and the definitions of the Object Binding name-value pair in Table \ref{sec:3:definitions_of_object_binding_name-value_pair}.
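As a concrete illustration, the three fixed header fields of Table \ref{tab:sec3:UOP_attribute_definition} (4-byte unsigned integers each) can be serialised ahead of a variable-length Object Binding. The sketch below assumes JSON as the encoding of the name-value pairs; the actual wire encoding of the Object Binding is not prescribed by the protocol, so this is only one possible choice.

```python
import json
import struct

# PDU type, Request ID, ACK: three unsigned 32-bit integers, network byte order
HEADER = struct.Struct("!III")

def pack_pdu(pdu_type, request_id, ack, binding):
    """Serialise a UOP PDU: fixed header followed by the Object Binding
    (JSON-encoded name-value pairs -- an illustrative assumption)."""
    payload = json.dumps(binding).encode("utf-8")
    return HEADER.pack(pdu_type, request_id, ack) + payload

def unpack_pdu(data):
    """Parse a UOP PDU back into its header fields and Object Binding."""
    pdu_type, request_id, ack = HEADER.unpack_from(data)
    body = data[HEADER.size:]
    binding = json.loads(body) if body else {}
    return pdu_type, request_id, ack, binding
```

For example, an \emph{Offload\_Req} (PDU type 0000) carrying a code object would be built with `pack_pdu(0, request_id, 0, {"code": ...})` and parsed symmetrically on the receiving side.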
We have avoided basing the offloading protocol on top of other web application protocols such as HTTP and SOAP due to their complexity and large overhead \cite{gray2005performance}. Hence, it operates on raw Transport Layer sockets; in our case, the offloading protocol is designed on top of TCP. Depending on the Transport Layer protocol (TCP/UDP) that the offloading protocol implements, the Client Handler in the controller and the offloading framework in the clone listen on a known port for messages sent from the offloading mobile device. Implementing the protocol on top of TCP automatically inherits TCP's reliable delivery, congestion control and error correction, and the ability to add optional Transport Layer Security (TLS/SSL) layers.
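A minimal sketch of such a listener is shown below. The port number is an illustrative placeholder (the protocol only requires a known port), and handling one PDU per connection is a simplifying assumption, not part of the specification.

```python
import socket

UOP_PORT = 50311  # placeholder; the actual well-known port is deployment-specific

def serve_one_request(handle_pdu, host="127.0.0.1", port=UOP_PORT):
    """Accept a single TCP connection on the known port, read the raw PDU
    bytes and reply with the handler's response bytes -- a sketch of the
    Client Handler's (or clone-side framework's) listener."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            conn.settimeout(30.0)       # per-connection timeout
            request = conn.recv(65536)  # one PDU per connection in this sketch
            conn.sendall(handle_pdu(request))
```

A production listener would loop over connections and frame PDUs explicitly; the sketch keeps only the raw-socket aspect discussed above.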
Time-outs play an important role in the protocol, assuring the timeliness of individual transactions. Such timeouts are conventionally implemented within applications; instead, the offloading protocol handles timeouts and indicates to the applications when the protocol has timed out waiting for service responses.
Each transaction (offloading or resource management procedure) must succeed; in the case of a failure, the system should roll back to its previous state. The acknowledgement messages assure the completeness of transactions. If a task or a set of tasks fails, an error message is sent back instead of an acknowledgement. The protocol treats an offloading task as one transaction. Within this main transaction there are multiple sub-transactions, namely the communication resource allocation task, the compute resource allocation task and the remote code execution tasks. If any of these sub-transactions fails, the main transaction is also considered failed, and the communication and computing resources that were allocated are released back into the pool of globally available resources. Finally, the code will have to be executed locally by the mobile device if the delay restrictions do not allow it to resend an offloading request. From the start to the end of an offloading transaction, the mobile cloud controller keeps track of all protocol and application states.
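The roll-back behaviour above can be sketched as a list of (do, undo) sub-transaction pairs: if any sub-transaction fails, the completed ones are undone in reverse order, releasing the allocated resources. The helper below is illustrative and not part of the protocol specification.

```python
def run_transaction(steps):
    """Execute sub-transactions in order. `steps` is a list of (do, undo)
    callables. On any failure, undo the completed steps in reverse order so
    the system rolls back to its previous state; return False so the caller
    (e.g. the mobile device) can fall back to local execution."""
    undo_stack = []
    for do, undo in steps:
        try:
            do()
        except Exception:
            for undo_done in reversed(undo_stack):
                undo_done()   # release already-allocated resources
            return False
        undo_stack.append(undo)
    return True
```

A main offloading transaction would then be, e.g., `run_transaction([(alloc_bbu, free_bbu), (alloc_clone, free_clone), (execute_remote, noop)])`, with the callables supplied by the controller.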
We have kept the protocol and the packet format as simple as possible to reduce overhead and processing complexity. As depicted in Figure \ref{fig:initialisation_protocol}, the protocol separates management functions (C-RAN/mobile cloud resource allocation) from services (task offloading) to increase scalability and centralise the management of services. It also integrates application error reporting into the offloading protocol (e.g. by including the error code in the ACK field of the packet when the ThinkAir offloading framework has thrown an error). The offloading protocol is independent of the mobile operating system and the offloading framework. The payload of the packet can carry more than one offloading task.
\subsubsection{Working Procedure}
\begin{figure*}[]
\centering
\includegraphics[scale=0.3]{initialisation_protocol.pdf}
\caption{\label{fig:initialisation_protocol} Unified Offloading Protocol: Procedure when successfully allocates resources}
\end{figure*}
Figure \ref{fig:initialisation_protocol} shows an instance where the protocol successfully instructs the UE to offload computationally intensive tasks to the mobile cloud. Once the UE receives the \emph{Offload\_Start} message with the corresponding information about the offloading location (the clone), it carries out the offloading tasks. In the figure, the acknowledgement messages are shown by appending ``\_ACK'' to the name of the corresponding originating message, indicating that the ACK field has been set to 1 in the packet header. Resource monitoring is out of the scope of this protocol; it is assumed that the controller uses existing monitoring protocols for monitoring C-RAN and mobile cloud resources. Moreover, C-RAN resource allocation, estimation and prediction algorithms are out of the scope of this paper.
\section{Evaluation}
\label{sec:eval}
\subsection{Testbed Implementation}
Figure \ref{fig:crantestbed} shows the test environment set up for evaluating the proposed architecture. Two USRP N210s and one X300 have been set up as the RRHs of Amarisoft LTE 100 and OpenAirInterface (OAI) software base stations, respectively. All nodes in the network are connected via a Gigabit Ethernet switch. The soft base stations are deployed on a Dell PowerEdge R210 rack server. OpenStack, with the Kernel Virtual Machine (KVM) as the hypervisor, has been deployed with no shared storage for hosting the mobile clones, which run the Android-x86 operating system. ThinkAir \cite{6195845} has been used as the offloading framework; its server components have been installed on the clone in the mobile cloud, while the client components are installed on the UE running the Android 4.4 operating system. The wireless bandwidth of the base stations has been set to 5 MHz. Each BBU is connected to its Mobility Management Entity (MME) via its S1-MME link using the S1 Application Protocol (S1AP), which provides signalling between the E-UTRAN and the evolved packet core (EPC). Host sFlow and sFlow-RT, monitoring and analytics tools, have been deployed on the controller as part of the Resource Monitor for monitoring resources in the mobile cloud. We have also developed a monitoring module and a dashboard for monitoring wireless resources in the Mobile Cloud-RAN.
\begin{figure}
\centering
\includegraphics[scale=0.7]{crantestbed.pdf}
\caption{\label{fig:crantestbed} C-RAN with Mobile Cloud testbed}
\end{figure}
\subsection{Vertical Scaling for Resource Management in Mobile Cloud}
Prior to sending resource management instructions to the mobile cloud using the proposed protocol, the resource management decisions are made in the controller. Auto-scaling in IaaS (Infrastructure as a Service) clouds has been studied extensively in the literature for allocating computing resources optimally while assuring Service Level Agreement (SLA) requirements and keeping the overall cost to a minimum. Although there is an abundance of work on cloud auto-scaling, fundamental aspects of the underlying technologies and algorithms that affect auto-scaling performance still need improvement. Known aspects of cloud computing that require improvement include \cite{Lorido-Botran:2014:RAT:2693546.2693559} \cite{Zhan:2015:CCR:2775083.2788397} \cite{7053814}: real-time hypervisors, real-time operating systems (OS), OS support for cloudification (e.g. native cloud orchestration support in OSs), improved host resource sharing among guests (e.g. reduced latency), user-friendly cost models and auto-scaling techniques for real-time applications. Specifically, when adopting cloud technologies in mobile systems, addressing the issues above is imperative due to high QoS/QoE requirements and the dynamic nature of wireless networks and mobile applications \cite{6616117} \cite{Kumar2015191}. However, addressing them is out of the scope of this paper. Our focus is on providing an extensive performance analysis of cloud vertical scaling, for improving auto-scaling algorithms for real-time delay-sensitive applications (e.g. mobile task offloading) on existing cloud environments.
Cloud auto-scaling techniques are divided into predictive and reactive categories. Throughout the literature, authors have used various reactive and proactive techniques for horizontal auto-scaling \cite{Bodik:2009:SML:1855533.1855545} \cite{CPE:CPE2864} \cite{6211900} \cite{6103960} \cite{5557965}, adding or removing VMs to/from a cloud application. Similarly, vertical scaling \cite{6217477} \cite{Rao:2009:VRL:1555228.1555263} \cite{5071892} \cite{Shen:2011:CER:2038916.2038921} is done by adding or removing resources to/from existing VMs. Although threshold-rule-based techniques appear to be the most popular, reinforcement learning, queuing theory, control theory and time series analysis have also been widely used. For both horizontal and vertical scaling, resource provisioning introduces a delay; the desired effect may therefore arrive when it is too late. Current literature stresses the need for future work on auto-scaling that focuses on reducing the time required to provision new VMs (or to resize VMs when scaling vertically) \cite{Lorido-Botran:2014:RAT:2693546.2693559}. Moreover, horizontal scaling has been used predominantly both in the literature and by cloud service providers.
Vertical scaling is known to have a smaller scaling range \cite{Dutta:2012:SAA:2353730.2353802}, and for the changed resources to take effect, the VMs have to be restarted \cite{Lorido-Botran:2014:RAT:2693546.2693559}. Even if the underlying virtualisation technology allows VMs to be scaled without restarting (e.g. Xen CPU hot-plug), it may take 5-10 minutes for the changes to take effect and stabilise \cite{Rao:2009:VRL:1555228.1555263} \cite{6005367}. This is mainly because most conventional operating systems do not allow real-time dynamic configuration of VMs without rebooting \cite{Lorido-Botran:2014:RAT:2693546.2693559}; \cite{Rao:2009:VRL:1555228.1555263} reports that it can also be partially due to the backlog of requests in prior intervals. Authors have nevertheless proposed a number of vertical scaling algorithms which assume that vertical scaling actions can be performed in a timely manner, or which rely on hot-plugging \cite{5071892} \cite{6005367} \cite{Rao:2009:VRL:1555228.1555263} \cite{6217477}. However, the latter is not possible with most other popular hypervisors (e.g. Kernel Virtual Machine), cloud platforms (e.g. OpenStack) and cloud service providers (e.g. Amazon EC2 \cite{7463864}) \cite{Lorido-Botran:2014:RAT:2693546.2693559}. For these reasons, horizontal scaling has been more attractive to the community and has been adopted by existing cloud service providers instead of vertical scaling \cite{6217477} \cite{7463864}. The Elastic Application Container \cite{6184989} proposes an alternative provisioning mechanism to heavyweight VM provisioning, but it requires altering the underlying infrastructure.
Furthermore, some authors have proposed hybrid horizontal-vertical scaling systems in which both approaches are used simultaneously to increase performance by exploiting the benefits of both methodologies \cite{6481061} \cite{6217477} \cite{6212070} \cite{5609349} \cite{Dutta:2012:SAA:2353730.2353802}.
It is considered wasteful to use a VM as the smallest resource unit when allocating computing resources to a cloud application. The new VMs assigned to an application are not used immediately after creation, and each VM consumes resources that are not directly utilised by the application \cite{6217477}. As the number of VMs increases, the total amount of resources consumed just for hosting and keeping the VMs alive also increases. Furthermore, adding or removing a whole VM is not always required in many real-world scenarios \cite{6217477}; subtle changes, such as adding or removing individual resources, are often sufficient. Dutta \emph{et al.} \cite{Dutta:2012:SAA:2353730.2353802} further claim that, while horizontal scaling allows the application to achieve higher throughput per addition, its deployment cost is greater than that of vertical scaling. This led us to investigate further the performance trends of cloud vertical scaling.
The prototype described above has been left with out-of-the-box configurations for most software components (e.g. OpenStack), to make the evaluation results as generic as possible. The following analysis of VM creation and resizing times has been carried out by repeating each experiment 60 times to reduce noise due to other unavoidable influences within the environment. If the proposed protocol is used for vertical scaling, the \emph{Manage\_Compute} PDU type can be used with the desired vCPU amount set in the \emph{Object\_Binding} field of the packet (e.g. vcpu=4).
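With 60 repetitions per experiment, each plotted point can be summarised by its sample mean and standard error of the mean, as in the standard-error bars shown in the later figures. A minimal helper:

```python
import math

def mean_and_stderr(samples):
    """Sample mean and standard error of the mean for repeated delay
    measurements (here, each experiment is repeated 60 times)."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((s - mean) ** 2 for s in samples) / (n - 1)  # sample variance
    return mean, math.sqrt(variance / n)
```

For instance, 60 resize-delay samples would yield one (mean, standard error) pair per VM size plotted.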
\begin{figure*}
\centering
\includegraphics[scale=0.3]{vmstarttimevstotalvms-fig2.png}
\caption{\label{fig:chap5:vmstarttimevstotalvms} The VM start time as the number of instances increases in the cloud globally}
\end{figure*}
There are two basic functions that one may perform on VMs when scaling either horizontally or vertically: one may instantiate and terminate VMs behind a load balancer, or one may use a resizing function provided by the underlying cloud platform (hypervisor). In this case, the Kernel Virtual Machine (KVM) environment allows the user to instantiate and terminate VMs. Figure \ref{fig:chap5:vmstarttimevstotalvms} depicts how the VM instantiation time is influenced by all other VMs created within the same cloud environment. We have applied a moving average function to the data to smooth the gathered results and highlight the increasing trend. The graph corroborates the observations made by \cite{6253534}, where (horizontal) auto-scaling actions typically incur delays in the order of minutes on public cloud service providers (Amazon EC2, Azure and Rackspace). This is mainly due to the increased request backlog and the time it takes to assign a physical server to deploy a VM, move the VM image to it and get it fully operational. It further shows that the instance start time changes depending on the number of active VMs in the cloud at a given time. Moreover, when predicting VM start times, knowledge of historical start delays may help. We stress that, although the trend in the data may be similar, the exact numbers at given points may change depending on the cloud environment; one may therefore expect future cloud service providers to provide such data for users to use when auto-scaling.
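The smoothing applied to Figure \ref{fig:chap5:vmstarttimevstotalvms} can be reproduced with a simple trailing moving average; the window length used for the figure is not specified, so it is a parameter in this sketch.

```python
def moving_average(values, window):
    """Trailing moving average used to smooth noisy delay measurements;
    early points average over however many samples are available."""
    smoothed = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        smoothed.append(sum(values[lo:i + 1]) / (i - lo + 1))
    return smoothed
```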
\begin{figure}
\centering
\includegraphics[scale=0.3]{cpu_continuousVSstart.pdf}
\caption{\label{fig:chap5:cpu_continuousVSstart} Mean CPU upscale delay as the size of the base VM increases. The standard error shows the variation in results}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{regressioncpu_continuous.pdf}
\caption{\label{fig:chap5:regressioncpu_continuous} Second order polynomial function of mean CPU upscale time in continuous scenario}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{regressioncpu_startfrom1.pdf}
\caption{\label{fig:chap5:regressioncpu_startfrom1} Second order polynomial function of mean CPU upscale time in non-continuous scenario}
\end{figure}
We categorised auto-scaling into two scenarios: 1) resizing continuously, where resources are added or removed one at a time in each iteration; and 2) resizing where resources are added or removed in amounts other than one (e.g. +2, +3, +4). For the sake of the discussion, we call the latter ``non-continuous''. The former method may be suitable for algorithms that decide \emph{when} to scale (and then scale by adding/removing only one resource unit), while the latter may be appropriate for algorithms that decide the \emph{amount} of resources to be added or removed at a given time. To avoid after-effects of the previous iteration of auto-scaling commands on the current iteration, we have added a 5-second sleep between commands.
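The two scenarios can be sketched as a single resize loop, where a step of one vCPU gives the continuous case and larger steps give the non-continuous case. The `resize` callback standing in for the cloud platform's resize command is hypothetical, and the pause mirrors the 5-second sleep described above.

```python
import time

def scale_vcpus(resize, start, target, step=1, pause=5.0):
    """Issue resize commands from `start` vCPUs towards `target`.
    step=1 reproduces the continuous scenario; larger steps are
    non-continuous. `resize` is a (hypothetical) callback into the
    cloud platform; the pause between commands avoids after-effects
    of the previous resize."""
    sizes = []
    current = start
    direction = 1 if target > start else -1
    while current != target:
        nxt = current + direction * min(step, abs(target - current))
        resize(nxt)           # e.g. send a Manage_Compute PDU with vcpu=nxt
        sizes.append(nxt)
        current = nxt
        if current != target:
            time.sleep(pause)
    return sizes
```

For example, `scale_vcpus(resize, 1, 4)` issues three one-vCPU upscale commands, whereas `scale_vcpus(resize, 1, 4, step=3)` issues a single +3 command.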
We applied polynomial curve fitting \cite{bishop2006pattern} to the empirical data to further demonstrate how up- and down-scaling delays vary across different resource types and scaling modes (continuous vs. non-continuous). Specifically, we have used a function of the form in equation \ref{sec:5:least_square_ploly} to fit the data, where $M$ denotes the order of the polynomial, given a training set comprising $N$ observations $x \equiv (x_1,\dots,x_N)$ with corresponding observations $t \equiv (t_1,\dots,t_N)$. The vector $W$ contains the polynomial coefficients $w_0,\dots,w_M$.
\begin{equation}
\label{sec:5:least_square_ploly}
y(x, W) = w_0 + w_1x + w_2x^2 + \dots + w_Mx^M = \sum_{j=0}^{M} w_jx^j
\end{equation}
The coefficients are calculated by fitting the polynomial to the provided scaling delay data by minimising the squared error $E(W)$, as shown in equation \ref{sec:5:least_square_ploly_error}.
\begin{equation}
\label{sec:5:least_square_ploly_error}
E(W) = \sum_{n=1}^{N} | y(x_n, W) - t_n|^2
\end{equation}
where $t_n$ denotes the target value corresponding to $x_n$; $E(W)$ measures the misfit between the function $y(x, W)$, for any given value of $W$, and the scaling delay data points. We have calculated the second-order polynomial function using a least squares polynomial fit \cite{bishop2006pattern}, only to illustrate trends in the data and to show differences in values, as shown in the second-order polynomial equation \ref{sec:5:secondorder_reg}. If such a model is adapted to the architecture proposed in this paper, where the polynomial function is used in the mobile cloud controller for predicting resize delays (when making auto-scaling decisions), the order of the polynomial should be chosen to best fit the data. To provide further insight, Table \ref{sec:5:polynomial_coefficients} lists the polynomial coefficients that can be used with equation \ref{sec:5:secondorder_reg} for evaluating vertical scaling algorithms in the future. In equation \ref{sec:5:secondorder_reg}, the coefficients $w_2$, $w_1$, $w_0$ can be looked up from Table \ref{sec:5:polynomial_coefficients}, while $x$ denotes the amount of resources to evaluate.
\begin{equation}
\label{sec:5:secondorder_reg}
y(x) = w_2x^2 + w_1x + w_0
\end{equation}
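Using the coefficients in Table \ref{sec:5:polynomial_coefficients}, equation \ref{sec:5:secondorder_reg} can be evaluated directly, and the same second-order fit can be reproduced from measured data with an off-the-shelf least squares routine. The coefficients below are those of the continuous CPU upscale scenario (Figure \ref{fig:chap5:regressioncpu_continuous}).

```python
import numpy as np

# Continuous CPU upscale scenario (w2, w1, w0 from the coefficient table)
w2, w1, w0 = 0.0109, 0.2013, 50.2

def predict_delay(x):
    """Evaluate y(x) = w2*x^2 + w1*x + w0: the predicted upscale delay
    (in the units of the measured data) for a VM currently at x vCPUs."""
    return w2 * x ** 2 + w1 * x + w0

def fit_second_order(xs, ts):
    """Least squares second-order fit of measured (x, t) delay pairs;
    returns coefficients ordered [w2, w1, w0]."""
    return np.polyfit(xs, ts, deg=2)
```

A controller could call `predict_delay` when deciding whether a vertical resize can complete within an application's delay budget, refitting the coefficients as new delay measurements accumulate.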
\begin{table}[]
\centering
\caption{Polynomial coefficients of second order polynomial function for the scaling scenarios}
\label{sec:5:polynomial_coefficients}
\begin{tabular}{l|l|l|l|}
\cline{2-4}
& \multicolumn{3}{c|}{\textbf{Coefficients}} \\ \hline
\multicolumn{1}{|l|}{\textbf{Scaling Scenario}} & $w_2$ & $w_1$ & $w_0$ \\ \hline
\multicolumn{1}{|l|}{Figure \ref{fig:chap5:regressioncpu_continuous}} & 0.0109 & 0.2013 & 50.2 \\ \hline
\multicolumn{1}{|l|}{Figure \ref{fig:chap5:regressioncpu_startfrom1}} & -0.0002161 & 0.05584 & 49.32 \\ \hline
\multicolumn{1}{|l|}{Figure \ref{fig:chap5:rev_regressioncpu_continuous}} & 0.01358 & 0.3637 & 51.61 \\ \hline
\multicolumn{1}{|l|}{Figure \ref{fig:chap5:rev_regressioncpu_startfrom1}} & 0.0003889 & 0.04552 & 66.85 \\ \hline
\multicolumn{1}{|l|}{Figure \ref{fig:chap5:regressiondisk_continuous}} & -1.159e-05 & 0.1038 & 46.92 \\ \hline
\multicolumn{1}{|l|}{Figure \ref{fig:chap5:regressiondisk_startfrom1}} & 2.837e-05 & 0.008834 & 47.05 \\ \hline
\multicolumn{1}{|l|}{Figure \ref{fig:chap5:regressionmem_continuous}} & -0.002184 & 0.04266 & 49.07 \\ \hline
\multicolumn{1}{|l|}{Figure \ref{fig:chap5:regressionmem_startfrom1}} & 0.007889 & 0.1701 & 50.31 \\ \hline
\multicolumn{1}{|l|}{Figure \ref{fig:chap5:rev_regressionmem_continuous}} & 0.1402 & 2.375 & 56.91 \\ \hline
\multicolumn{1}{|l|}{Figure \ref{fig:chap5:rev_regressionmem_startfrom1}} & 0.03394 & 0.6107 & 53.26 \\ \hline
\end{tabular}
\end{table}
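For illustration, a minimal Python sketch (assuming NumPy is available) shows how the coefficients in Table \ref{sec:5:polynomial_coefficients} can be plugged into equation \ref{sec:5:secondorder_reg} to predict a resize delay; the function name and the chosen VM size are our own illustrative choices.

```python
import numpy as np

# Second-order coefficients from the table above (continuous CPU upscaling
# case, Figure regressioncpu_continuous): y(x) = w2*x^2 + w1*x + w0.
w2, w1, w0 = 0.0109, 0.2013, 50.2

def predict_resize_delay(x):
    """Predicted resize delay in seconds for a VM currently of size x (vCPUs)."""
    # np.polyval expects coefficients ordered from highest degree to lowest.
    return np.polyval([w2, w1, w0], x)

# Refitting from raw measurements would use, e.g.:
#   w2, w1, w0 = np.polyfit(vm_sizes, measured_delays, deg=2)

print(predict_resize_delay(8))   # predicted delay for an 8-vCPU VM (~52.5 s)
```

The same lookup works for any row of the table by substituting the corresponding coefficients.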
CPU resizing is used more often than resizing other resources. Figure \ref{fig:chap5:cpu_continuousVSstart} shows that, in the continuous auto-scaling scenario, the time taken to add resources varies with the number of vCPUs the VM has before resizing. Another observation from the graph is that the VM resizing time increases as the size of the VM increases. In the non-continuous scenario the change is much smaller as the scale of the VM grows, albeit still with an increasing trend. These statements are further clarified in Figures \ref{fig:chap5:regressioncpu_continuous} and \ref{fig:chap5:regressioncpu_startfrom1}, which show second-order regression functions of the results for the continuous and non-continuous scenarios respectively, depicting that the time delay increases with the VM size. The graphs also show the standard error at each point to indicate how the results varied.
\begin{figure}
\centering
\includegraphics[scale=0.3]{rev_cpu_continuousVSstart.pdf}
\caption{\label{fig:chap5:rev_cpu_continuousVSstart} Mean CPU downscale delay as the size of the base VM increases. The standard error shows variations in results}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{rev_regressioncpu_continuous.pdf}
\caption{\label{fig:chap5:rev_regressioncpu_continuous} Second order polynomial function of mean CPU downscale time in continuous scenario}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{rev_regressioncpu_startfrom1.pdf}
\caption{\label{fig:chap5:rev_regressioncpu_startfrom1} Second order polynomial function of mean CPU downscale time in non-continuous scenario}
\end{figure}
Similarly to VM CPU upscaling, Figures \ref{fig:chap5:rev_cpu_continuousVSstart}, \ref{fig:chap5:rev_regressioncpu_continuous} and \ref{fig:chap5:rev_regressioncpu_startfrom1} show continuous and non-continuous cloud performance for CPU downscaling. Specifically, Figure \ref{fig:chap5:rev_cpu_continuousVSstart} compares delay times when virtual CPUs are scaled down continuously and non-continuously. The error bars denote the standard error at each point, showing the variation of the gathered results. One may observe the opposite (decreasing) trend to the VM CPU upscaling delay in Figure \ref{fig:chap5:cpu_continuousVSstart}: the delay time decreases as more CPUs are removed from the VM. One may conclude that the resize time is greatly influenced by the size of the VM when scaling computing resources (i.e. the number of CPUs the VM has before resizing). Moreover, Figures \ref{fig:chap5:rev_regressioncpu_continuous} and \ref{fig:chap5:rev_regressioncpu_startfrom1} show second-order polynomial fits of the mean CPU downscaling delay in the continuous and non-continuous scenarios respectively. These observations suggest that future work designing auto-scaling algorithms for CPU resources should account for the time differences between continuous and non-continuous scaling.
\begin{figure}
\centering
\includegraphics[scale=0.3]{disk_continuousVSstart.pdf}
\caption{\label{fig:chap5:disk_continuousVSstart} Mean disk upscale delay as the size of the base VM increases. The standard error shows variations (error) in data}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{regressiondisk_continuous.pdf}
\caption{\label{fig:chap5:regressiondisk_continuous} Second order polynomial function of mean disk upscale time in continuous scenario}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{regressiondisk_startfrom1.pdf}
\caption{\label{fig:chap5:regressiondisk_startfrom1} Second order polynomial function of mean disk upscale time in non-continuous scenario}
\end{figure}
According to our experiments, the second most influential resource on the resize time delay is the VM disk size. However, in the test environment used, disk downscaling is not supported by the cloud framework (OpenStack with KVM), so only upscaling results are presented. Figure \ref{fig:chap5:disk_continuousVSstart} shows the opposite trend to the CPU resize time delays in Figure \ref{fig:chap5:cpu_continuousVSstart}: in this experiment, the non-continuous resizing time delays are higher than when resizing continuously. Moreover, the resize time does not increase or decrease significantly as the amount of disk space increases in the non-continuous scenario. The second-order polynomial regression analysis in Figure \ref{fig:chap5:regressiondisk_startfrom1} further clarifies this observation.
It is clear that the resize delay increases with the VM's disk size in the continuous scaling scenario, as shown in Figure \ref{fig:chap5:regressiondisk_continuous}. This is expected: when resizing, some hypervisors (e.g. KVM) take a snapshot of the running VM, create a new resized VM from the snapshot with the changed configuration (often on a different compute node), and then delete the old VM. In the non-continuous auto-scaling experiment, by contrast, we chose 1GB as the base VM disk size, so the effect of the base disk size on each resize step is even: the base disk size stays constant when scaling non-continuously, even though the VM is resized to a different disk size at each step.
\begin{figure}
\centering
\includegraphics[scale=0.3]{mem_continuousVSstart.pdf}
\caption{\label{fig:chap5:mem_continuousVSstart} Mean RAM upscale delay as the size of the base VM increases. The standard error shows variations (error) in data}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{regressionmem_continuous.pdf}
\caption{\label{fig:chap5:regressionmem_continuous} Second order polynomial function of mean RAM upscale time in continuous scenario}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{regressionmem_startfrom1.pdf}
\caption{\label{fig:chap5:regressionmem_startfrom1}Second order polynomial function of mean RAM upscale time in non-continuous scenario}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{rev_mem_continuousVSstart.pdf}
\caption{\label{fig:chap5:rev_mem_continuousVSstart} Mean RAM downscale delay as the size of the base VM increases. The standard error shows variations (error) in data}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{rev_regressionmem_continuous.pdf}
\caption{\label{fig:chap5:rev_regressionmem_continuous} Second order polynomial function of mean RAM downscale time in continuous scenario}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{rev_regressionmem_startfrom1.pdf}
\caption{\label{fig:chap5:rev_regressionmem_startfrom1} Second order polynomial function of mean RAM downscale time in non-continuous scenario}
\end{figure}
Despite the clear trends above, no distinctive trends were found when resizing RAM on VMs in either the continuous or the non-continuous scenario, as shown in Figure \ref{fig:chap5:mem_continuousVSstart} for upscaling and Figure \ref{fig:chap5:rev_mem_continuousVSstart} for downscaling. This could be because the test-bed's maximum memory limit is 18GB and the sample is not large enough to reveal clear trends. The second-order polynomial regression analyses in Figures \ref{fig:chap5:regressionmem_continuous} and \ref{fig:chap5:regressionmem_startfrom1} show decreasing trends for the continuous and non-continuous upscaling scenarios respectively. Moreover, Figures \ref{fig:chap5:rev_regressionmem_continuous} and \ref{fig:chap5:rev_regressionmem_startfrom1} show second-order polynomial regression analyses of continuous and non-continuous RAM downscaling respectively.
From the above empirical VM resize performance analysis, it is evident that careful attention should be paid to the time delays incurred when resizing the CPU and storage resources of VMs used by real-time applications. Due to the significant delay introduced by the cloud framework (e.g. over 50 seconds in Figure \ref{fig:chap5:regressioncpu_continuous}), it may not be practical to auto-scale in real time for delay-constrained applications. Instead, the controller may initiate the VM scaling process before task execution (or migrate tasks to another VM during the resize process). The auto-scaling delay trends may vary depending on the underlying technologies deployed (e.g. virtualization and physical resource management). This analysis makes the community aware that auto-scaling delays depend not only on the resource type but also on the auto-scaling method, i.e., upscaling, downscaling, continuous and non-continuous. We stress that auto-scaling algorithm design should consider CPU and storage resizing delay trends under the conditions of both the VM itself (e.g. VM size) and the environment hosting it (e.g. total VM count in the cloud). We have found that future vertical scaling algorithms should use knowledge of the VM itself and of the cloud environment together.
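The pre-scaling idea above can be sketched as a hypothetical controller rule; the function names, the deadline threshold and the action labels below are our own illustrations, not part of the implemented system.

```python
# Hypothetical sketch: use the fitted delay model to decide whether a
# vertical resize can finish before a delay-constrained task must start.

def polynomial_delay(x, w2, w1, w0):
    """Predicted resize delay in seconds for current VM size x."""
    return w2 * x * x + w1 * x + w0

def scaling_decision(vm_size, deadline_s, coeffs):
    """Return 'scale-now' if the predicted delay fits within the deadline,
    otherwise 'pre-scale-or-migrate' (resize ahead of time, or move the task)."""
    predicted = polynomial_delay(vm_size, *coeffs)
    if predicted <= deadline_s:
        return "scale-now"
    return "pre-scale-or-migrate"

# Continuous CPU upscaling coefficients taken from the coefficients table.
print(scaling_decision(vm_size=4, deadline_s=60, coeffs=(0.0109, 0.2013, 50.2)))
# prints "scale-now": the ~51 s predicted delay fits a 60 s deadline
```

A production controller would additionally fold in environment state (e.g. total VM count), per the observations above.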
\section{Conclusion}
\label{sec:conclusion}
This paper has introduced a protocol that uses a simple unified packet header for both resource management and task offloading. It has also introduced a new logical controller that receives instantaneous monitoring information from both the computing and communication sides and makes efficient resource management decisions on both the C-RAN and the mobile cloud for mobile task offloading.
We conducted an analysis of the scale-up and scale-down performance of MCC resources. The analysis shows that the auto-scaling performance of storage, CPU and RAM resources all vary when scaling vertically. We can also conclude that auto-scaling in real time is impractical due to the high auto-scaling delay. The results further revealed that the scaling time delay depends on the amount of resources added to or removed from the VM at each step. One may conclude that, given the complex nature of today's cloud systems, it is necessary to analyse the auto-scaling performance of each cloud platform; being aware of the scaling delay trends may help in making effective auto-scaling decisions.
\section{Acknowledgment}
\label{sec:ack}
This work was supported by the UK EPSRC NIRVANA project (EP/L026031/1) and the EU Horizon 2020 iCIRRUS project (GA-644526).
\bibliographystyle{ieeetr}
\section{Introduction}
\vspace{-0.5cm}
Stochastic optimal control problems involving Markov processes have been extensively studied \cite{bertsekas:measurableselection, elliott:hmm, soner:controlledprocess, gihman:controlledprocess, zhenting:controlledprocess}. The most common methods in solving these control problems are the dynamic programming principle and the stochastic maximum principle. Stochastic modelling under a Markovian regime-switching environment incorporates jumps in prices due to changes in the state of the economy, which is modelled by a Markov chain. In the context of stochastic control theory, regime-switching state equations have also been investigated. Maximum principles of regime-switching state equations are developed in several papers \cite{ref1:opt,ref2:opt,elliott:stochmaxprin}. A dynamic programming principle under regime-switching is discussed in \cite{azevedo:control}.
This paper aims to discuss bid and ask prices by formulating a control problem. These two prices are modelled as different interpretations of dynamics for the asset by supposing the coefficients in the dynamics are functions of a continuous-time finite state Markov chain whose evolution depends on parameters which can be thought of as control variables. In turn, these parameters give rise to a family of probability measures which can be considered as possible future scenarios.
The paper is organized as follows: the price process dynamics are introduced in Section \ref{ustart} as a modified geometric Brownian motion where the drift and diffusion coefficients depend on the Markov chain, together with a jump term. Section \ref{controlsection} introduces controlled Markov chains. The chain is then controlled by the first change of measure, discussed in Section \ref{change of measure for markov chains}, which allows changes in the diffusion coefficient by changing the rate matrix of the chain.
The second measure change in Section \ref{esscher} involves the use of the Esscher transform to change the drift coefficient and to find risk-neutral dynamics for the underlying process. The price of a European type asset is then determined using a system of partial differential equations and the homotopy analysis method in Section \ref{uend}. Finally, the bid and ask prices are characterized as solutions of optimal control problems in Section \ref{bid-ask price models}.
\section{Regime-Switching Process}\label{ustart}
\vspace{-0.2cm}
Consider a complete probability space $(\Omega,\mc{F},P)$, where $P$ is a real-world probability measure. Write $\mc{T}:=[0,T]$. Let $\mathbf{W}:=\{W_t\}_{t\in \mc{T}}$ be a standard Brownian motion on $(\Omega,\mc{F},P)$. Assume that states in an economy are modelled by a continuous-time Markov chain $\mathbf{X}:=\{X_t\}_{t\in\mc{T}}$ on $(\Omega,\mc{F},P)$ with finite state space. Without loss of generality we can identify the state space with $\mc{S}:=\{e_1,e_2,\ldots,e_N \}\subset \mathbb{R}^N$, where $e_i\in\mathbb{R}^N$ and the $j$th component of $e_i$ is the Kronecker delta $\delta_{ij}$ for each $i,j=1,\ldots,N$.
We suppose $\mathbf{X}$ and $\mathbf{W}$ are independent processes. In this section, we introduce a model for a price process whose drift and diffusion coefficients depend on the chain $\mathbf{X}$ and in addition has a jump term.
Suppose that the dynamics of a process $\{\ol{S}_t\}_{t\in\mc{T}}$ are described by
\begin{align*}
d\ol{S}_t=\mu_t\ol{S}_tdt+\sigma_t\ol{S}_tdW_t,\quad \ol{S}_0=s.
\end{align*}
Here, $\{\mu_t\}_{t\in\mc{T}}$ and $\{\sigma_t\}_{t\in\mc{T}}$ depend on $\mathbf{X}$ and are given, respectively, by
$\mu_t:=\mu(t,X_t)=\langle\mu,X_t\rangle$ and $\sigma_t:=\sigma(t,X_t)=\langle\sigma,X_t\rangle$,
where $\mu=(\mu_1,\mu_2,\ldots,\mu_N)^{\top}\in \mathbb{R}^N$ and $\sigma=(\sigma_1,\sigma_2,\ldots,\sigma_N)^{\top}\in \mathbb{R}^N$ with $\sigma_i>0$ for all $i=1,\ldots, N$. Let $\ol{Y}_t:=\ln(\ol{S}_t/\ol{S}_0)$ denote the logarithmic return of the process $\{\ol{S}_t\}_{t\in\mc{T}}$. Then, by It\^{o}'s formula, the process $\{\ol{S}_t\}_{t\in\mc{T}}$ can be written as \[\ol{S}_t=\ol{S}_0\exp \ol{Y}_t,\] where $d\ol{Y}_t=\left(\mu_t-\frac{1}{2}\sigma^2_t\right)dt+\sigma_tdW_t.$
Consider now a model for a price process $S_t:=\ol{S}_t\langle\alpha,X_t\rangle$, where $\alpha:=(\alpha_1,\ldots,\alpha_N)^{\top}\in\mathbb{R}^N$ with $\alpha_i>0$ for all $i=1,\ldots,N$. Furthermore, for every $i=1,\ldots,N$, the components $\alpha_i$ satisfy exactly one of the following: (a) $0<\alpha_i<1$, (b) $\alpha_i=1$, or (c) $\alpha_i>1$. The process $S_t$ can be interpreted as a scaled version of the original process. This serves as a pure jump component of the price process.
Let $A:=[a_{ji}]_{i,j=1,\ldots,N}$ be the rate matrix of the chain $\mathbf{X}$ under $P$, where $a_{ji}$ is the instantaneous transition intensity of the chain from state $e_i$ to state $e_j$. Note that for $i,j=1,\ldots,N$, the $a_{ji}$ satisfy (1) $a_{ji}\geq 0$ for $i\neq j$ and (2) $\sum_{j=1}^Na_{ji}=0$. The second condition implies that $a_{ii}\leq 0$ for $i=1,\ldots, N$.
Let $\mathbb{F}^{\mathbf{X}}:=\{\mc{F}_t^{\mathbf{X}}\}_{t\in\mc{T}}$ be the $P$-augmented natural filtration generated by $\mathbf{X}$. Then the semimartingale dynamics for $\mathbf{X}$ under $P$ are given by
\begin{equation}\label{chain}
X_t=X_0+\int_{0}^{t} AX_sds+M_t,
\end{equation}
where $\mathbf{M}:=\{M_t\}_{t\in \mc{T}}$ is an $\mathbb{R}^N$-valued, square-integrable, $(\mathbb{F}^{\mathbf{X}},P)$-martingale \cite{elliott:hmm}.
The dynamics of $\{S_t\}_{t\in\mc{T}}$ are then described by
\begin{align}\label{prior change of measure}
dS_t&= \langle\alpha,X_t\rangle d\ol{S}_t +\ol{S}_{t-}\langle\alpha,dX_t\rangle\nonumber\\
&=\mu_tS_tdt+\sigma_tS_tdW_t +\ol{S}_t\langle\alpha,AX_t\rangle dt+\ol{S}_t\langle\alpha,dM_t\rangle.
\end{align}
Note that $\ol{S}_{t-}=\ol{S}_t$ since the process $\{\ol{S}_t\}_{t\in\mc{T}}$ is continuous.
\section{Controlled Markov chains}\label{controlsection}
We review here ideas relating to the control of a Markov chain as discussed in \cite{soner:controlledprocess, miller:control, miller:controlmarkov, miller:control2}. Consider the same complete probability space $(\Omega,\mc{F},P)$. Suppose $\textsf{U}$ is a set of control values where \textsf{U} is a compact subset of $\mathbb{R}^N_+$. Define the set of admissible control functions, denoted by $\mc{U}$, as the set of $\mathbb{F}^{\mathbf{X}}$-predictable functions $u:\mc{T}\rightarrow\textsf{U}$. We wish to consider the situation where the rate matrix $A$ in \eqref{chain} is replaced by a family of rate matrices $B(u):=[b_{ji}(u)]_{i,j=1,\ldots,N}$.
As above, for $i,j=1,\ldots,N$ and $u\in\mc{U}$, the terms $b_{ji}(u)$ satisfy (1) $b_{ji}(u)\geq0$, for $i\neq j$ and (2) $\sum_{j=1}^Nb_{ji}(u)=0$. The second condition then implies that $b_{ii}(u)\leq0$ for $i=1,\ldots, N$.
In the next section, we will show how for any control function $u\in\mc{U}$, the chain $\mathbf{X}=\mathbf{X}^u=\{X^u_t\}_{t\in\mc{T}}$ then has dynamics
\begin{equation}\label{chainb}
X^u_t=X^u_0+\int_0^t B(u(s))X^u_sds+M^u_t,
\end{equation}
where $\mathbf{M}^u:=\{M^u_t\}_{t\in \mc{T}}$ is an $\mathbb{R}^N$-valued, square-integrable martingale with initial condition $M^u_0=0$ and (matrix) predictable quadratic variation
\begin{align}\label{quadchar}
\langle M^{u}\rangle_t&=\int_0^t\left\{\mbox{diag}[B(u(s))X^u_s]-B(u(s))\mbox{diag}[X^u_s]-\mbox{diag}[X^u_s]B(u(s))^{\top} \right\}ds.
\end{align}
The chain $\mathbf{X}^u$ is then a controlled Markov chain. In the context of asset pricing under regime-switching, the control $u(t)$ can be interpreted as a factor that influences the rate matrix at time $t$; the state probabilities of the chain then change depending on $u(t)$.
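As a concrete illustration (not part of the formal development), the following Python sketch simulates a two-state controlled chain with a toy rate matrix $B(u)$, showing that larger control values produce faster regime switching; all numerical values are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def B(u):
    """A toy controlled rate matrix for a 2-state chain: the control u = (u1, u2)
    scales the exit rate of each state. Off-diagonal entries are non-negative
    and each column sums to zero, as required of a rate matrix."""
    u1, u2 = u
    return np.array([[-u1,  u2],
                     [ u1, -u2]])

def simulate_jump_count(T, u, state=0):
    """Gillespie-style simulation of the controlled chain on [0, T]; returns the
    number of jumps (holding times are exponential with the state's exit rate)."""
    t, jumps = 0.0, 0
    while True:
        exit_rate = -B(u)[state, state]
        t += rng.exponential(1.0 / exit_rate)
        if t > T:
            return jumps
        state = 1 - state  # with two states, every jump goes to the other state
        jumps += 1

# Larger control values mean faster switching between regimes.
print(simulate_jump_count(100.0, (0.5, 0.5)), simulate_jump_count(100.0, (2.0, 2.0)))
```

Scaling the rates through $u$ changes how quickly the chain revisits states, which is exactly the channel through which the controller acts on the asset dynamics.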
\section{Change of Measure for Markov Chains}\label{change of measure for markov chains}
In this section, we provide a new approach, showing how the dynamics \eqref{chainb} can be constructed via a Girsanov-type measure change using methods from \cite{dufour:filter}. The results are presented without proofs, as these are similar to the proofs in \cite{dufour:filter}. We begin by considering a control function $u(t)=(u_1(t),\ldots, u_N(t))^{\top}\in\mc{U}$ and, as above, the family of rate matrices $B(u(t)):=[b_{ji}(u_i(t))]_{i,j=1,\ldots,N}$ for the chain $\mathbf{X}$.
Recall that the off-diagonal entries of the rate matrix represent the rate of switching between states. Having large-valued off-diagonal entries implies that jumping to another state takes place faster. Therefore, the chain can model rapid changes in asset prices.
Suppose now $a_{ji}>0$ for all $i,j=1,\ldots,N$. Define, for each $t\in\mc{T}$,
\begin{align*}
D(u(t)):=\left[\frac{b_{ji}(u_i(t))}{a_{ji}}\right]_{i,j=1,\ldots,N}=[d_{ji}(u_i(t))]_{i,j},
\end{align*}
and write
\begin{align*}
D_0(u(t))&:=D(u(t))-\mbox{diag}(d_{11}(u_1),\ldots,d_{NN}(u_N)).
\end{align*}
Similarly, write
\begin{align*}
A_0:=A-\mbox{diag}(a_{11},a_{22},\ldots,a_{NN}).
\end{align*}
These matrices can be thought of as the original matrices $D(u(t))$ and $A$, but with zeroes as their diagonal entries.
\begin{mypr}
Let $\mathbf{J}:=\{J(t)\}_{t\in\mc{T}}$ be a vector-valued counting process on $(\Omega,\mathcal{F},P)$, where for each $t\in\mc{T}$, \[J(t):=\int_0^t(I-\mathbf{diag}[X_{s-}])dX_s.\] Then $J_i(t)$ counts the number of jumps of the chain $\mathbf{X}$ to state $e_i$ up to time $t$, for each $i=1,\ldots,N$.
\end{mypr}
\begin{proof}
Let $0< s\leq t$. Suppose $X_{s-}=e_i$ and write $\Delta X_s:=X_s-X_{s-}$. The vector $\Delta X_s$ is the zero vector unless a jump to another state occurs at $s$, in which case $(I-\mathbf{diag}[X_{s-}])\Delta X_s=X_s$. Summing over all $s\in(0,t]$ yields the result.
\end{proof}
\begin{mypr}\label{Jbar}
The process $\overline{\mathbf{J}}:=\{\overline{J}(t)\}_{t\in\mc{T}}$ defined by
\[\overline{J}(t):=J(t)-\int_0^tA_0X_sds,\quad t\in\mc{T}\] is an $\mathbb{R}^N$-valued, $(\mathbb{F}^{\mathbf{X}},P)$-martingale.
\end{mypr}
\begin{proof}
\begin{align*}
(I-\mbox{diag}[X_{t-}])dX_t &=(I-\mbox{diag}[X_{t-}])(AX_tdt+dM_t) \\
dJ(t)&=A_0X_tdt+(I-\mbox{diag}[X_{t-}])dM_t \\
&=dJ(t)-d\overline{J}(t)+(I-\mbox{diag}[X_{t-}])dM_t.
\end{align*}
Rearranging the equation yields
\begin{align*}
d\overline{J}(t) &=(I-\mbox{diag}[X_{t-}])dM_t,
\end{align*}
which proves the result.
\end{proof}
Define for each $t\in\mc{T}$ and $u\in\mc{U}$,
\[\Lambda^u_1(t):=1+\int_0^t\Lambda^u_1(s-)[D_0(u(s-))X_{s-}-\mathbf{1}]^{\top}d\overline{J}(s),\] where $\mathbf{1}:=(1,\ldots,1)^{\top}\in\mathbb{R}^N$. From Proposition \ref{Jbar}, $\{\Lambda^u_1(t)\}_{t\in\mc{T}}$ is an $(\mathbb{F}^{\mathbf{X}},P)$-local martingale. Since $u(t)$ is bounded, $\{\Lambda^u_1(t)\}_{t\in\mc{T}}$ is an $(\mathbb{F}^{\mathbf{X}},P)$-martingale.
The control of the chain $\mathbf{X}$ will be described by the following change of measure using the density $\Lambda_1^u$. For each $u\in\mc{U}$, define a new probability measure $P^{(u)}\sim P$ on $\mc{F}_T$:
\[\frac{dP^{(u)}}{dP}\bigg|_{\mc{F}_T}:=\Lambda^u_1(T).\]
\begin{mypr}\label{girsanovchain}
For each $u\in\mc{U}$, the chain $\mathbf{X}$ is a Markov chain with rate matrix $B(u)$ under $P^{(u)}$.
\end{mypr}
\begin{proof}
Write $B_0(u(t)):=B(u(t))-\mbox{diag}(b_{11}(u_1(t)),\ldots,b_{NN}(u_N(t)))$. Write
\begin{align*}
J^*(t)=J(t)-\int_0^tB_0(u(s))X_sds.
\end{align*}
The dynamics of $\{\Lambda^u_1(t)J^*(t)\}_{t\in\mc{T}}$ are then given by
\begin{align*}
\Lambda^u_1(t)J^*(t)
&= \ds\sum_{0<s\leq t}\Lambda^u_1(s-)\Delta J(s)-\int_0^t\Lambda^u_1(s)B_0(u(s))X_{s}ds+\int_0^tJ^*(s-)d\Lambda^u_1(s) \\
&\quad+\ds\sum_{0<s\leq t}\Lambda^u_1(s-)[D_0(u(s-))X_{s-}-\mathbf{1}]^{\top}\Delta J(s)\Delta J(s)^{\top} \\
&=\ds\ds\sum_{0<s\leq t}\Lambda^u_1(s-)\Delta J(s)-\int_0^t\Lambda^u_1(s)B_0(u(s))X_{s}ds+\int_0^tJ^*(s-)d\Lambda^u_1(s)\\
&\quad+\ds\int_0^t\Lambda^u_1(s-)\mbox{diag}[\Delta J(s)][D_0(u(s-))X_{s-}-\mathbf{1}] \\
&=\ds\ds\sum_{0<s\leq t}\Lambda^u_1(s-)\Delta J(s)-\int_0^t\Lambda^u_1(s)B_0(u(s))X_{s}ds+\int_0^tJ^*(s-)d\Lambda^u_1(s)\\
&\quad+\ds\int_0^t\Lambda^u_1(s-)\mbox{diag}[A_0X_{s-}][D_0(u(s))X_{s-}-\mathbf{1}]ds \\
&\quad+\ds\int_0^t\Lambda^u_1(s-)\mbox{diag}[\Delta\overline{J}(s)][D_0(u(s))X_{s-}-\mathbf{1}].
\end{align*}
Since $\mbox{diag}[A_0X_s][D_0(u(s))X_s-\mathbf{1}]=B_0(u(s))X_s-A_0X_s$, then
\begin{align*}
\Lambda^u_1(t)J^*(t)&=\ds\int_0^t\Lambda^u_1(s-)d\overline{J}(s)+\int_0^tJ^*(s-)d\Lambda^u_1(s)\\
&\quad+\ds\int_0^t\Lambda^u_1(s-)\mbox{diag}[d\overline{J}(s)][D_0(u(s-))X_{s-}-\mathbf{1}].
\end{align*}
Therefore, $\{\Lambda^u_1(t)J^*(t)\}_{t\in\mc{T}}$ is an $(\mathbb{F}^{\mathbf{X}},P)$-martingale. This implies that, under $P^{(u)}$, $\{J^*(t)\}_{t\in\mc{T}}$ is an $(\mathbb{F}^{\mathbf{X}},P^{(u)})$-martingale. Then,
\begin{align*}
(I-\mbox{diag}[X_{t-}])dX_t&=A_0X_tdt+(I-\mbox{diag}[X_{t-}])dM_t \\
&=dJ^*(t)-d\overline{J}(t)+B_0(u(t))X_tdt+(I-\mbox{diag}[X_{t-}])dM_t\\
&=dJ^*(t)+B_0(u(t))X_tdt.
\end{align*}
Pre-multiplying by $(I-X_{t-}\mathbf{1}^{\top})$ gives
\begin{align*}
(I-X_{t-}\mathbf{1}^{\top})(I-\mbox{diag}[X_{t-}])dX_t&=(I-X_{t-}\mathbf{1}^{\top})dJ^*(t)+(I-X_{t-}\mathbf{1}^{\top})B_0(u(t))X_{t-}dt.
\end{align*}
Then,
\begin{align*}
dX_t&=(I-X_{t-}\mathbf{1}^{\top})dJ^*(t)+B(u(t))X_tdt.
\end{align*}
Hence, the result.
\end{proof}
Consequently, under $P^{(u)}$, the chain $\mathbf{X}=\mathbf{X}^u$ has the following semimartingale dynamics
\begin{align*}
X^u_t=X^u_0+\int_0^tB(u(s))X^u_sds+M^{u}_t.
\end{align*}
Here, $\mathbf{M}^u:=\{M^u_t\}_{t\in\mc{T}}$ is an $\mathbb{R}^N$-valued, $(\mathbb{F}^{\mathbf{X}},P^{(u)})$-martingale with initial condition $M^u_0=0$ and predictable quadratic variation \eqref{quadchar}.
\section{An Example of a Rate Matrix B(u)}\label{example of rate matrix B}
In this section, we provide an example of a family of rate matrices $B(u(t))$. Suppose $u_i:\mc{T}\times\Omega\rightarrow\textsf{U}_i$ for each $i=1,\ldots,N$. Write $u(t)=(u_1(t),\ldots,u_N(t))^{\top}$ and define
\[B(u(t)):=A\mathbf{diag}[u(t)].\]
Then, for all $i,j=1,\ldots,N$ with $i\neq j$, $b_{ji}(u_i(t))=u_i(t)a_{ji}\geq 0$, $t\in\mc{T}$. Furthermore,
\[\sum_{j=1}^{N}b_{ji}(u_i(t))=u_i(t)\sum_{j=1}^{N}a_{ji}=0,\] which implies that $b_{ii}(u_i(t))\leq 0$ for each $i=1,\ldots,N$. This proves that $B(u(t))$ is indeed a family of rate matrices.
Further, for each $t\in\mc{T}$, \[D(u(t)):=\left[\frac{b_{ji}(u_i(t))}{a_{ji}}\right]_{i,j=1,\ldots,N}=\left[{\begin{array}{cccc}
u_1(t)&u_1(t)&\cdots&u_1(t)\\
u_2(t)&u_2(t)&\cdots&u_2(t)\\
\vdots &\vdots &\ddots&\vdots\\
u_N(t)&u_N(t)&\cdots&u_N(t)\\
\end{array}}\right]\]
and
\[D_0(u(t)):=D(u(t))-\mbox{diag}[u(t)].\]
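The rate-matrix properties of $B(u(t))=A\,\mathbf{diag}[u(t)]$ can also be checked numerically. The following Python sketch uses an assumed $3$-state matrix $A$ and control value $u$, under the convention that columns of $A$ sum to zero, and verifies the two conditions together with the fact that each ratio $b_{ji}/a_{ji}$ equals $u_i$.

```python
import numpy as np

# An assumed base rate matrix A for N = 3 states, with all off-diagonal
# entries a_{ji} > 0 and each column summing to zero.
A = np.array([[-3.0,  1.0,  2.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  1.0, -3.0]])

u = np.array([0.5, 2.0, 1.0])   # an assumed admissible control value in U

B = A @ np.diag(u)              # B(u) = A diag[u], the example of this section
D = B / A                       # elementwise ratios d_{ji} = b_{ji} / a_{ji}

print(B.sum(axis=0))            # columns of B(u) sum to zero: condition (2)
print(B - np.diag(np.diag(B)))  # off-diagonal entries of B(u): all non-negative
print(D)                        # each ratio equals the corresponding u_i
```

Since $B(u)$ simply rescales column $i$ of $A$ by $u_i$, both rate-matrix conditions are inherited from $A$, as the analytic argument above shows.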
\section{Esscher Transform}\label{esscher}
An earlier change of measure adopted in actuarial science \cite{gerbershiu:esscher} is the Esscher transform. It was first introduced by Esscher in \cite{esscher:esscher}. We extend this to our regime-switching dynamics. Write $\mathbb{F}^{\mathbf{W}}:=\{\mc{F}_t^{\mathbf{W}}\}_{t\in\mc{T}}$ for the $P$-augmentation of the natural filtration generated by $\mathbf{W}$. For each $t\in\mc{T}$, we define $\mc{G}_t:=\mc{F}_t^{\mathbf{X}}\lor \mc{F}_t^{\mathbf{W}}$, the minimal augmented $\sigma$-field generated by the two $\sigma$-fields $\mc{F}_t^{\mathbf{X}}$ and $\mc{F}_t^{\mathbf{W}}$. Write $\mathbb{G}:=\{\mc{G}_t\}_{t\in\mc{T}}$. Let $\theta_t:=\theta(t,X_t)$ be a regime switching Esscher parameter such that
\begin{align*}
\theta_t=\langle\theta(t),X_t\rangle,
\end{align*}
where $\theta(t):=(\theta_1(t),\ldots,\theta_N(t))^{\top}\in\mathbb{R}^N$. Then, the regime switching Esscher transform on $\mc{G}_t$ with respect to $\{\theta_s \}_{s\in [0,t]}$ is given by:
\begin{equation*}
\Lambda_2(t):=\frac{\exp\left(\int_0^t\theta_s d\ol{Y}_s\right)}{E^P\left[\exp\left(\int_0^t\theta_s d\ol{Y}_s\right)\bigg| \mc{F}^{\mathbf{X}}_T\right]}, \quad t\in\mc{T}.
\end{equation*}
Write $Z_t:=\exp\left(\int_0^t\theta_s d\ol{Y}_s\right)$ for the numerator of $\Lambda_2(t)$. Applying It\^{o}'s formula yields
\begin{align*}
Z_t&=Z_0+\int_0^t\left[\theta_s\left(\mu_s-\frac{1}{2}\sigma^2_s\right)+\frac{1}{2}\theta^2_s\sigma^2_s\right]Z_sds+\int_0^t\theta_s\sigma_sZ_sdW_s.
\end{align*}
Taking the conditional expectation of $Z_t$ given $\mc{F}^{\mathbf{X}}_T$ yields
\begin{align*}
E^P\left[Z_t|\mc{F}^{\mathbf{X}}_T\right]&=Z_0+\int_0^t\left[\theta_s\left(\mu_s-\frac{1}{2}\sigma^2_s\right)+\frac{1}{2}\theta^2_s\sigma^2_s\right]E^P[Z_s|\mc{F}^{\mathbf{X}}_T]ds\\
&=Z_0\exp\left[\ds\int_0^t\theta_s\left(\mu_s-\frac{1}{2}\sigma_s^2\right)ds+\frac{1}{2}\int_0^t\theta^2_s\sigma^2_sds \right] .
\end{align*}
Thus, for $t\in\mc{T}$, the Radon-Nikodym derivative of the regime switching Esscher transform is given by
\begin{align*}
\Lambda_2(t)&=\frac{\exp\left[\ds\int_0^t\theta_s\left(\mu_s-\frac{1}{2}\sigma_s^2\right)ds+\int_0^t\theta_s\sigma_sdW_s\right]}{\exp\left[\ds\int_0^t \theta_s\left(\mu_s-\frac{1}{2}\sigma^2_s\right)ds+\frac{1}{2}\int_0^t\theta^2_s\sigma^2_sds \right]}\\
&=\exp\left(\int_0^t\theta_s\sigma_s dW_s-\frac{1}{2}\int_0^t\theta^2_s\sigma^2_sds\right).
\end{align*}
Since $\theta_t$ and $\sigma_t$ are bounded, $E^P\left[\exp\left(\frac{1}{2}\int_0^t|\theta_s\sigma_s|^2ds\right)\right]<\infty$, so Novikov's condition implies that $\{\Lambda_2(t)\}_{t\in\mc{T}}$ is a $(\mathbb{G},P)$-martingale. Using the density $\Lambda_2$, we can implement a measure change for the continuous part of the price process in \eqref{prior change of measure}.
Consider now the $\mathbb{G}$-adapted process $\Lambda^u:=\{\Lambda^u(t)\}_{t\in\mc{T}}$ defined by
\begin{align*}
\Lambda^u(t):=\Lambda^u_1(t)\cdot\Lambda_2(t), \quad t\in\mc{T}.
\end{align*}
Define a new probability measure $Q^u\sim P$ on $\mc{G}_T$:
\begin{align*}
\frac{dQ^u}{dP}\bigg|_{\mc{G}_T}:=\Lambda^u(T).
\end{align*}
As $\mathbf{X}$ and $\mathbf{W}$ are independent, the standard Girsanov theorem for Brownian motion shows that the process $\mathbf{W}^{\theta}:=\{W^{\theta}_t\}_{t\in\mc{T}}$ defined by
$W^{\theta}_t:=W_t-\int_0^t\theta_s\sigma_sds$ is a $(\mathbb{G},Q^u)$-standard Brownian motion. Furthermore, by Proposition \ref{girsanovchain}, the chain $\mathbf{X}$ remains a Markov chain with rate matrix $B(u(t))$ under $Q^u$.
The dynamics under $Q^u$ of $S_t$ can then be rewritten as
\begin{align*}
dS_t=(\mu_t+\theta_t\sigma_t)S_tdt+\sigma_tS_tdW^{\theta}_t +\ol{S}_t\langle\alpha,B(u(t))X^u_t\rangle dt+\ol{S}_{t}\langle\alpha,dM^{u}_t\rangle.
\end{align*}
By the fundamental theorem of asset pricing for semimartingales bounded from below, arbitrage opportunities do not exist if and only if there exists an equivalent local martingale measure under which the discounted price process is a martingale \cite{delbaenschachermayer:FTOAP}. Proposition \ref{emm} below gives a condition that allows the discounted price process to be a martingale. To establish this result, we first state the following lemma.
\begin{mylm}\label{Sbar in terms of S}
Define $\alpha^{-1}:=(\alpha_1^{-1},\alpha_2^{-1},\ldots,\alpha_N^{-1})^{\top}\in\mathbb{R}^N$. Then
\begin{align*}
\ol{S}_t\langle\alpha,B(u(t))X^u_t\rangle=S_t\left\langle\alpha,B(u(t))\mathbf{diag}(\alpha^{-1})X^u_t\right\rangle.
\end{align*}
\end{mylm}
\begin{proof}Suppose $X^u_t=e_i$. Then
\begin{align*}
\ol{S}_t\langle\alpha,B(u(t))e_i\rangle&=S_t\langle\alpha^{-1},e_i\rangle\langle\alpha,B(u(t))e_i\rangle\\
&=S_t\left(\alpha_i^{-1}\right)\left(\sum_{j=1}^N\alpha_jb_{ji}\right)\\
&=S_t\left(\sum_{j=1}^N\alpha_jb_{ji}\alpha_i^{-1}\right)\\
&=S_t\left\langle\alpha,B(u(t))\mathbf{diag}(\alpha^{-1})e_i\right\rangle,
\end{align*}
which proves the result.
\end{proof}
Using It\^{o}'s product rule and Lemma \ref{Sbar in terms of S} yields the following proposition.
\begin{mypr}\label{emm}
Define a risk-free interest rate by $r_t:=\langle r,X_t\rangle$, where $r=(r_1,\ldots,r_N)^{\top}\in\mathbb{R}^N$. For each $t\in\mc{T}$ and $u\in\mc{U}$, let the discounted price process $\{\widetilde{S}_t \}_{t\in\mc{T}}$ be defined by
\begin{align*}
\widetilde{S}_t=e^{-\int_0^tr_sds}S_t.
\end{align*}
For $i=1,\ldots,N$, define
\begin{align*}
\theta^u_i(t)=\frac{1}{\sigma_i}\left(r_i-\mu_i-\sum_{j=1}^N\alpha_jb_{ji}(u_i(t))\alpha_i^{-1}\right).
\end{align*}
Then $\{\widetilde{S}_t \}_{t\in\mc{T}}$ is a $(\mathbb{G},Q^u)$-martingale if and only if\[\theta_t=\langle\theta^u(t),X_t^u\rangle, \]
where $\theta^u(t)=(\theta^u_1(t),\ldots,\theta^u_N(t))^{\top}\in\mathbb{R}^N$.
\end{mypr}
\begin{proof}
Using It\^{o}'s product rule and Lemma \ref{Sbar in terms of S},
\begin{align*}
d\widetilde{S}_t&= e^{-\int_0^tr_udu}\bigg[-r_tS_tdt+(\mu_t+\theta_t\sigma_t)S_tdt+\sigma_tS_tdW^{\theta}_t+\ol{S}_t\langle\alpha,B(u(t))X^u_t\rangle dt+\ol{S}_t\langle\alpha,dM^{u}_t\rangle\bigg] \\
&=\bigg[\mu_t+\theta_t\sigma_t-r_t+\left\langle\alpha,B(u(t))\mathbf{diag}(\alpha^{-1})X^u_t\right\rangle\bigg]\widetilde{S}_tdt+\sigma_t\widetilde{S}_tdW^{\theta}_t+\ol{S}_te^{-\int_0^tr_udu}\langle\alpha,dM^{u}_t\rangle.
\end{align*}
This is a $(\mathbb{G},Q^u)$-martingale if and only if the finite variation term is indistinguishable from the zero process.
\end{proof}
With the appropriate choice of $\theta_t$ such that $\widetilde{S}_t$ is a $(\mathbb{G},Q^u)$-martingale, the dynamics under $Q^u$ of $S_t$ can be written as:
\begin{equation*}
dS_t=r_tS_tdt+\sigma_tS_tdW^{\theta}_t +\ol{S}_t\langle\alpha,dM^{u}_t\rangle
\end{equation*}
or
\begin{align}\label{sbar2 chapter 3}
dS_t&=r_tS_tdt+\sigma_tS_tdW^{\theta}_t+S_{t-}\langle\alpha^{-1},X^u_{t-}\rangle\langle\alpha,dX^u_t\rangle-S_t\left\langle\alpha,B(u(t))\mathbf{diag}(\alpha^{-1})X^u_t\right\rangle dt.
\end{align}
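The dynamics \eqref{sbar2 chapter 3} admit a simple simulation: between switches, $S$ evolves as a geometric Brownian motion with regime-$i$ coefficients and compensator drift $-\langle\alpha,B(u(t))\mathbf{diag}(\alpha^{-1})e_i\rangle$, while at a switch from $e_i$ to $e_j$ the price is multiplied by $\alpha_j/\alpha_i$. The Python sketch below (a minimal Monte Carlo with illustrative parameter values, not part of the formal development) checks that the discounted price is a martingale in the mean:

```python
import numpy as np

rng = np.random.default_rng(1)
T, S0 = 1.0, 100.0
alpha = np.array([1.0, 1.3])      # regime weights alpha_i
r = np.array([0.03, 0.06])        # regime-dependent short rates
sigma = np.array([0.15, 0.35])    # regime-dependent volatilities
B = np.array([[-0.8, 0.5],        # B[j, i] = b_{ji}, rate of jumping i -> j
              [ 0.8, -0.5]])
# compensator c_i = <alpha, B diag(alpha^{-1}) e_i>, which offsets the jumps
c = (alpha @ B) / alpha

def discounted_terminal_price():
    t, i, S, int_r = 0.0, 0, S0, 0.0
    while True:
        hold = rng.exponential(1.0 / -B[i, i])    # exact holding time in regime i
        dt = min(hold, T - t)
        z = rng.standard_normal()
        # exact log-normal step with risk-neutral drift r_i - c_i
        S *= np.exp((r[i] - c[i] - 0.5 * sigma[i] ** 2) * dt
                    + sigma[i] * np.sqrt(dt) * z)
        int_r += r[i] * dt
        if hold >= T - t:
            return np.exp(-int_r) * S
        t += dt
        j = 1 - i                                 # two-state chain: switch regime
        S *= alpha[j] / alpha[i]                  # price jump at the switch
        i = j

est = np.mean([discounted_terminal_price() for _ in range(20000)])
# E[e^{-int_0^T r} S_T] should equal S0 up to Monte Carlo error
```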
\section{Pricing European Call Options via Homotopy}\label{uend}
In this section, we determine the price of a European call option. We first show that the price satisfies a system of partial differential equations (PDEs), which we then solve using the homotopy analysis method. The discussion follows closely that of \cite{elliottchansiu:homotopyoptionpricing}.
Consider a vanilla European call option with underlying $S$, strike price $K$, and maturity $T$. Given $S_t=s$ and $X_t^u=x$, the call price at time $t$ is given by
\begin{align*}
C(t,s,x):=E^{Q^u}_{t,s,x}\left[e^{-\int_t^Tr_udu}(S_T-K)^{+}\right],
\end{align*}
where $Q^u$ is a risk-neutral measure and $E^{Q^u}_{t,s,x}$ denotes conditional expectation under $Q^u$ given $S_t=s$ and $X_t=x$. Since $S_t:=\ol{S}_t\langle\alpha,X_t^u \rangle$, we have
\begin{align*}
C(t,s,x)&=E^{Q^u}_{t,s,x}\left[e^{-\int_t^Tr_{u}du}(S_T-K)^{+}\right]\\ &=E^{Q^u}_{t,s,x}\left[e^{-\int_t^Tr_{u}du}\left(\ol{S}_T\langle\alpha,X^u_T\rangle-K\right)^{+}\right]\\
&=:\ol{C}(t,\ol{s},x).
\end{align*}
\begin{mypr}
For each $i=1,\ldots,N$, write $\ol{C}_i:=\ol{C}_i(t,\ol{s})=\ol{C}(t,\ol{s},e_i)$ and $\mathbf{\ol{C}}:=(\ol{C}_1,\ldots,\ol{C}_N)^{\top}\in\mathbb{R}^N$. Let $\mc{O}:=(0,T)\times(0,\infty)$ be an open set. Suppose that for each $i=1,\ldots,N$, $\ol{C}_i\in {\mc{C}}^{1,2}(\mc{O})$, where ${\mc{C}}^{1,2}(\mc{O})$ is the space of functions $f(t,\ol{s})$ such that $f$ is continuously differentiable in $t$ and twice continuously differentiable in $\ol{s}$. Then, for each $i=1,\ldots,N$ and each $(t,\ol{s})\in\mc{O}$, $\ol{C}_i$ satisfies the following system of PDEs:
\begin{align}\label{pde for cbar}
0&=\frac{\partial \ol{C}_i}{\partial t}+r_i\ol{s}\frac{\partial \ol{C}_i}{\partial \ol{s}}+\frac{1}{2}\sigma_i^2\ol{s}^2\frac{\partial^2 \ol{C}_i}{\partial \ol{s}^2}-r_i\ol{C}_i+\langle \mathbf{\ol{C}},B(u(t))e_i\rangle,
\end{align}
with terminal condition $\ol{C}_i(T,\ol{s}):=\ol{C}(T,\ol{s},e_i)=(\ol{s}-K)^+$.
\end{mypr}
\begin{proof}
For each $i=1,\ldots,N$, write $\ol{C}_i:=\ol{C}_i(t,\ol{s})=\ol{C}(t,\ol{s},e_i)$ and $\mathbf{\ol{C}}:=(\ol{C}_1,\ldots,\ol{C}_N)^{\top}\in\mathbb{R}^N$. Then for $x\in\mc{S}$, $\ol{C}(t,\ol{s},x)=\langle\mathbf{\ol{C}},x\rangle$. For $i=1,\ldots,N$, by It\^{o}'s formula,
\begin{align*}
d\ol{C}_i&=\frac{\partial \ol{C}_i}{\partial t}dt+\frac{\partial \ol{C}_i}{\partial \ol{s}}d\ol{S}_t+\frac{1}{2}\sigma_i^2\ol{s}^2\frac{\partial^2 \ol{C}_i}{\partial \ol{s}^2}dt\\
&=\frac{\partial \ol{C}_i}{\partial t}dt+\frac{\partial \ol{C}_i}{\partial \ol{s}}(r_i\ol{s}dt+\sigma_i\ol{s}dW_t^{\theta})+\frac{1}{2}\sigma_i^2\ol{s}^2\frac{\partial^2 \ol{C}_i}{\partial \ol{s}^2}dt\\
&=\left(\frac{\partial \ol{C}_i}{\partial t}+r_i\ol{s}\frac{\partial \ol{C}_i}{\partial \ol{s}}+\frac{1}{2}\sigma_i^2\ol{s}^2\frac{\partial^2 \ol{C}_i}{\partial \ol{s}^2}\right)dt+\sigma_i\ol{s}\frac{\partial \ol{C}_i}{\partial \ol{s}}dW_t^{\theta}.
\end{align*}
Write $d\mathbf{\ol{C}}(t,\ol{s}):=(d\ol{C}_1,\ldots,d\ol{C}_N)^{\top}\in\mathbb{R}^N$. Then,
\begin{align}\label{dCbar}
d\ol{C}(t,\ol{s},X^u_t)=\langle \mathbf{\ol{C}},dX_t^u\rangle+\langle d\mathbf{\ol{C}}(t,\ol{s}),X_t^u\rangle.
\end{align}
Since $\{(\ol{S}_t,X_t^u)\}_{t\in\mc{T}}$ is jointly Markovian with respect to $\mathbb{G}$ under $Q^u$, for each $t\in\mc{T}$ the discounted price of a European call option given $\ol{S}_t=\ol{s}$ and $X_t^u=x$ is given by
\begin{align*}
\widetilde{C}(t,\ol{s},x)&=e^{-\int_0^tr_{u}du}\ol{C}(t,\ol{s},x)\\
&=e^{-\int_0^tr_{u}du}E^{Q^u}\left[e^{-\int_t^Tr_{u}du}\left(\ol{S}_T\langle\alpha,X^u_T\rangle-K\right)^{+}|\ol{S}_t=\ol{s},X^u_t=x\right]\\
&=E^{Q^u}\left[e^{-\int_0^Tr_{u}du}\left(\ol{S}_T\langle\alpha,X^u_T\rangle-K\right)^{+}|\mc{G}_t\right].
\end{align*}
Hence, $\{\widetilde{C}(t,\ol{s},X^u_t)\}_{t\in\mc{T}}$ is a $(\mathbb{G},Q^u)$-martingale.
For each $i=1,\ldots,N$, write $\widetilde{C}_i:=\widetilde{C}_i(t,\ol{s})=\widetilde{C}(t,\ol{s},e_i)$ and $\mathbf{\widetilde{C}}:=(\widetilde{C}_1,\ldots,\widetilde{C}_N)^{\top}\in\mathbb{R}^N$. Then for $x\in\mc{S}$, $\widetilde{C}(t,\ol{s},x)=\langle\mathbf{\widetilde{C}},x\rangle$.
From \eqref{dCbar},
\begin{align*}
d\widetilde{C}(t,\ol{s},X^u_t)&=e^{-\int_0^tr_{u}du}\bigg[-r_t\langle\mathbf{\ol{C}},X^u_t\rangle dt+d\ol{C}(t,\ol{s},X^u_t)\bigg]\\
&=e^{-\int_0^tr_{u}du}\bigg[-r_t\langle\mathbf{\ol{C}},X^u_t\rangle dt+\langle \mathbf{\ol{C}},dX_t^u\rangle+\langle d\mathbf{\ol{C}}(t,\ol{s}),X_t^u\rangle\bigg]\\
&=e^{-\int_0^tr_{u}du}\bigg[-r_t\langle\mathbf{\ol{C}},X^u_t\rangle dt+\langle \mathbf{\ol{C}},B(u(t))X_t^udt+dM_t^u\rangle+\langle d\mathbf{\ol{C}}(t,\ol{s}),X_t^u\rangle\bigg].
\end{align*}
Then, for each $i=1,\ldots,N$,
\begin{align*}
d\widetilde{C}(t,\ol{s},e_i)&=e^{-\int_0^tr_{u}du}\bigg[-r_i\ol{C}_i dt+\langle \mathbf{\ol{C}},B(u(t))e_i\rangle dt+\langle \mathbf{\ol{C}},dM_t^u\rangle\\
&\quad+\left(\frac{\partial \ol{C}_i}{\partial t}+r_i\ol{s}\frac{\partial \ol{C}_i}{\partial \ol{s}}+\frac{1}{2}\sigma_i^2\ol{s}^2\frac{\partial^2 \ol{C}_i}{\partial \ol{s}^2}\right)dt+\frac{\partial \ol{C}_i}{\partial \ol{s}}\sigma_i\ol{s}dW_t^{\theta} \bigg]\\
&=e^{-\int_0^tr_{u}du}\bigg[\left(\frac{\partial \ol{C}_i}{\partial t}+r_i\ol{s}\frac{\partial \ol{C}_i}{\partial \ol{s}}+\frac{1}{2}\sigma_i^2\ol{s}^2\frac{\partial^2 \ol{C}_i}{\partial \ol{s}^2}-r_i\ol{C}_i+\langle \mathbf{\ol{C}},B(u(t))e_i\rangle\right)dt\\
&\quad+\langle \mathbf{\ol{C}},dM_t^u\rangle+\sigma_i\ol{s}\frac{\partial \ol{C}_i}{\partial \ol{s}}dW_t^{\theta} \bigg].
\end{align*}
Since $\{\widetilde{C}(t,\ol{s},X^u_t)\}_{t\in\mc{T}}$ is a $(\mathbb{G},Q^u)$-martingale, the finite variation term must be indistinguishable from the zero process. The result then follows.
\end{proof}
Since $S_t:=\ol{S}_t\langle\alpha,X_t^u \rangle$ and $C(t,s,x)=\ol{C}(t,\ol{s},x)$, the following corollary holds.
\begin{mycor}
Write ${C}_i:={C}_i(t,{s})={C}(t,{s},e_i)$ for each $i=1,\ldots,N$ and the vector $\mathbf{{C}}:=({C}_1,\ldots,{C}_N)^{\top}\in\mathbb{R}^N$. Then, for each $i=1,\ldots,N$, ${C}_i$ satisfies the following system of PDEs:
\begin{align}\label{C as pde}
0&=\frac{\partial {C}_i}{\partial t}+r_i{s}\frac{\partial {C}_i}{\partial {s}}+\frac{1}{2}\sigma_i^2{s}^2\frac{\partial^2 {C}_i}{\partial {s}^2}-r_i{C}_i+\langle \mathbf{{C}},B(u(t))e_i\rangle,
\end{align}
with terminal condition ${C}_i(T,{s}):={C}(T,{s},e_i)=({s}-K)^+$.
\end{mycor}
\begin{proof}
Rewriting ${\partial \ol{C}_i}/{\partial \ol{s}}$ and ${\partial^2 \ol{C}_i}/{\partial \ol{s}^2}$ in terms of $s=\ol{s}\langle\alpha,x\rangle$ yields:
\begin{align*}
\frac{\partial \ol{C}_i}{\partial \ol{s}}=\frac{\partial {C}_i}{\partial {s}}\cdot\frac{\partial s}{\partial \ol{s}}=\frac{\partial {C}_i}{\partial {s}}\cdot\frac{\partial}{\partial \ol{s}}\left[\ol{s}\langle\alpha,x\rangle\right]=\frac{\partial {C}_i}{\partial {s}}\cdot\langle\alpha,x\rangle
\end{align*}
and
\begin{align*}
\frac{\partial^2 \ol{C}_i}{\partial \ol{s}^2}=\frac{\partial}{\partial \ol{s}}\left[\frac{\partial {C}_i}{\partial {s}}\cdot\langle\alpha,x\rangle\right]=\langle\alpha,x\rangle\frac{\partial}{\partial {s}}\left[\frac{\partial {C}_i}{\partial {s}}\right]\cdot\frac{\partial s}{\partial \ol{s}}=\langle\alpha,x\rangle^2\frac{\partial^2{C}_i}{\partial {s}^2}.
\end{align*}
Hence, \eqref{pde for cbar} can be expressed as
\begin{align*}
\frac{\partial {C}_i}{\partial t}+r_i{s}\langle\alpha^{-1},x\rangle\frac{\partial {C}_i}{\partial {s}}\cdot\langle\alpha,x\rangle+\frac{1}{2}\sigma_i^2{s}^2\langle\alpha^{-1},x\rangle^2\langle\alpha,x\rangle^2\frac{\partial^2{C}_i}{\partial {s}^2}-r_i{C}_i+\langle \mathbf{{C}},B(u(t))e_i\rangle=0,
\end{align*}
which yields the result.
\end{proof}
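As a numerical companion to the system \eqref{C as pde}, the following Python sketch implements a simple explicit finite-difference scheme (grid sizes and parameter values are illustrative only, and the scheme is a sketch rather than a production solver). When $B(u(t))\equiv 0$ the regimes decouple and each $C_i$ reduces to the classical Black--Scholes price, which provides a convenient correctness check:

```python
import numpy as np
from math import erf, exp, log, sqrt

def bs_call(s, K, r, sig, tau):
    # closed-form Black-Scholes call, used as a benchmark when B = 0
    d1 = (log(s / K) + (r + 0.5 * sig ** 2) * tau) / (sig * sqrt(tau))
    d2 = d1 - sig * sqrt(tau)
    N = lambda y: 0.5 * (1.0 + erf(y / sqrt(2.0)))
    return s * N(d1) - K * exp(-r * tau) * N(d2)

def fd_system(r, sig, B, K=100.0, T=1.0, smax=300.0, M=300, steps=12000):
    # explicit scheme, backward in time, for the system
    # 0 = dC_i/dt + r_i s dC_i/ds + 0.5 sig_i^2 s^2 d2C_i/ds2 - r_i C_i + sum_j b_{ji} C_j
    s = np.linspace(0.0, smax, M + 1)
    ds = s[1] - s[0]
    dt = T / steps
    C = np.tile(np.maximum(s - K, 0.0), (len(r), 1))   # terminal condition (s - K)^+
    for n in range(steps):
        Cs = (C[:, 2:] - C[:, :-2]) / (2.0 * ds)       # central first derivative
        Css = (C[:, 2:] - 2.0 * C[:, 1:-1] + C[:, :-2]) / ds ** 2
        coupling = B.T @ C[:, 1:-1]                    # <C, B e_i> = sum_j b_{ji} C_j
        C[:, 1:-1] += dt * (r[:, None] * s[None, 1:-1] * Cs
                            + 0.5 * sig[:, None] ** 2 * s[None, 1:-1] ** 2 * Css
                            - r[:, None] * C[:, 1:-1] + coupling)
        tau_left = (n + 1) * dt                        # time to maturity at new level
        C[:, 0] = 0.0
        C[:, -1] = smax - K * np.exp(-r * tau_left)    # asymptotic boundary
    return s, C

# with a zero generator the system decouples into plain Black-Scholes equations
r = np.array([0.05, 0.05])
sig = np.array([0.2, 0.3])
s_grid, C = fd_system(r, sig, B=np.zeros((2, 2)))
fd_price = float(np.interp(100.0, s_grid, C[0]))
```

The time step is chosen small enough for the explicit scheme to be stable at the largest grid value of $\tfrac{1}{2}\sigma_i^2s^2$.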
We solve the above PDEs using the homotopy analysis method (HAM), first introduced in \cite{liao:homotopyanalysismethod}, which expresses the price of a European call option as an infinite series. The discussion below follows closely that of \cite{elliottchansiu:homotopyoptionpricing, parkkim:homotopyoptionpricing}.
Using the transformation $x=\log(s)$,
it can be shown that
\begin{align*}
\frac{\partial {C}_i}{\partial s}(t,s)
=\frac{\partial {C}_i}{\partial {x}}(t,x)\cdot\frac{1}{s}
\end{align*}
and
\begin{align*}
\frac{\partial^2 {C}_i}{\partial s^2}(t,s)
=\frac{1}{s^2}\cdot \frac{\partial^2 C_i}{\partial x^2}(t,x)-\frac{1}{s^2}\cdot\frac{\partial C_i}{\partial x}(t,x).
\end{align*}
Then \eqref{C as pde} can be rewritten as
\begin{align}\label{C as pde in x}
0=\frac{\partial {C}_i}{\partial t}(t,x)+r_i\frac{\partial {C}_i}{\partial {x}}(t,x)+\frac{1}{2}\sigma_i^2\left[\frac{\partial^2 {C}_i}{\partial {x}^2}(t,x)-\frac{\partial C_i}{\partial x}(t,x)\right]-r_i{C}_i(t,x)+\langle \mathbf{{C}}(t,x),B(u(t))e_i\rangle,
\end{align}
with terminal condition ${C}_i(T,x):={C}(T,{x},e_i)=(e^x-K)^+$. Write
\begin{align*}
\mc{L}_i:=\frac{\partial}{\partial t}+r_i\left(\frac{\partial}{\partial x}-1\right)+\frac{1}{2}\sigma_i^2\left(\frac{\partial^2}{\partial x^2}-\frac{\partial}{\partial x}\right).
\end{align*}
Then, \eqref{C as pde in x} can be written more compactly as
\begin{align*}
\mc{L}_iC_i(t,x)+\langle\mathbf{{C}}(t,x),B(u(t))e_i\rangle=0.
\end{align*}
We then have the following result.
\begin{myth}
For each $i=1,\ldots,N$, $C_i(t,x)$ can be represented as
\begin{align*}
C_i(t,x)=\sum_{m=0}^{\infty}\frac{\hat{C}^m_i(t,x)}{m!},
\end{align*}
where the initial term is given by
\begin{align*}
\hat{C}_i^0(t,x)=e^xN(d_{1,i})-Ke^{-r_i(T-t)}N(d_{2,i}),
\end{align*}
where
\begin{align*}
d_{1,i}=\frac{\log\left(\frac{e^x}{K}\right)+\left(r_i+\frac{1}{2}\sigma_i^2\right)(T-t)}{\sigma_i\sqrt{T-t}},
\end{align*}
$d_{2,i}=d_{1,i}-\sigma_i\sqrt{T-t}$, and $N(\cdot)$ is the cumulative distribution function of a standard normal random variable. Write $\tau:=T-t$ and $\beta_i:=\frac{2r_i}{\sigma_i^2}$. Then $\hat{C}^m_i(t,x)$, $m=1,2,\ldots$, is recursively given by
\begin{align*}
&\hat{C}^m_i(t,x)=\frac{\breve{C}^m_i(t,x)}{\gamma_i(\tau,x)},\\
&\breve{C}^m_i(t,x)=\int_t^{T}\int_{\mathbb{R}}G^m_i(s,\xi)H_i(s-t,x,\xi)\, d\xi\, ds,\\
&G^m_i(t,x)=\gamma_i(T-t,x)\langle\mathbf{\hat{C}}^{m-1}(t,x),B(u(t))e_i\rangle,\\
&H_i(t,x,\xi)=\frac{1}{\sqrt{2\pi\sigma_i^2t}}\exp\left[\frac{-(x-\xi)^2}{2\sigma_i^2t}\right],\\
&\gamma_i(t,x)=\exp\left[\frac{1}{2}(\beta_i-1)x+\frac{1}{8}\sigma_i^2(\beta_i+1)^2t\right].
\end{align*}
\end{myth}
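The initial term $\hat{C}^0_i$ is simply the Black--Scholes price evaluated with the regime-$i$ coefficients in log-price coordinates. A direct transcription in Python (an illustrative sketch; the parameter values are arbitrary):

```python
from math import erf, exp, log, sqrt

def N(y):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

def C0_hat(x, t, K, T, r_i, sigma_i):
    # Black-Scholes price in log-price coordinates: hat{C}^0_i(t, x), x = log(s)
    tau = T - t
    d1 = (log(exp(x) / K) + (r_i + 0.5 * sigma_i ** 2) * tau) / (sigma_i * sqrt(tau))
    d2 = d1 - sigma_i * sqrt(tau)
    return exp(x) * N(d1) - K * exp(-r_i * tau) * N(d2)

price = C0_hat(log(100.0), 0.0, 100.0, 1.0, 0.05, 0.2)   # at-the-money call, ~10.45
```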
\begin{proof}
We construct a homotopy of \eqref{C as pde in x} by considering an embedding parameter $p\in[0,1]$ and unknown functions $\hat{C}_i(t,x,p)$ which satisfy the following system:
\begin{align}\label{homotopy of pde}
(1-p)\mc{L}_i\left[\hat{C}_i(t,x,p)-\hat{C}_i^0(t,x)\right]+p\left[\mc{L}_i\hat{C}_i(t,x,p)+\langle\mathbf{\hat{C}}(t,x,p),B(u(t))e_i\rangle\right]=0,
\end{align}
with terminal condition $\hat{C}_i(T,x,p):=\hat{C}(T,{x},p,e_i)=(e^x-K)^+$ and $\hat{C}_i^0(t,x)=\hat{C}_i(t,x,0)$ for $i=1,\ldots,N$. The function $\hat{C}_i^0(t,x)$ serves as an initial guess for $C_i(t,x)$.
We choose $\hat{C}_i^0(t,x)$ as the Black-Scholes-Merton option price. This implies that $\hat{C}_i^0(t,x)$ satisfies
\begin{align*}
\begin{cases}
\mc{L}_i\hat{C}_i^0(t,x)=0\\
\hat{C}_i^0(T,x)=(e^x-K)^+.
\end{cases}
\end{align*}
The system in \eqref{homotopy of pde} can then be rewritten as
\begin{align}\label{simplified homotopy of pde}
\begin{cases}
\mc{L}_i\hat{C}_i(t,x,p)+p\langle\mathbf{\hat{C}}(t,x,p),B(u(t))e_i\rangle=0\\
\hat{C}_i(T,x,p)=(e^x-K)^+,
\end{cases}
\end{align}
Note that setting $p=1$ gives
\begin{align}\label{homotopy p=1}
\begin{cases}
\mc{L}_i\hat{C}_i(t,x,1)+\langle\mathbf{\hat{C}}(t,x,1),B(u(t))e_i\rangle=0\\
\hat{C}_i(T,x,1)=(e^x-K)^+.
\end{cases}
\end{align}
Consider the Taylor expansion of $\hat{C}_i(t,x,p)$ about $p=0$ given by
\begin{align}\label{taylor expansion for c hat}
\hat{C}_i(t,x,p)=\sum_{m=0}^{\infty}p^m\frac{\hat{C}^m_i(t,x)}{m!},
\end{align}
where
\begin{align*}
\hat{C}^m_i(t,x)=\frac{\partial^m}{\partial p^m}\hat{C}_i(t,x,p)\bigg|_{p=0}.
\end{align*}
Using \eqref{C as pde in x}, \eqref{homotopy p=1}, and \eqref{taylor expansion for c hat} yields
\begin{align*}
C_i(t,x)=\lim_{p\to 1^-}\hat{C}_i(t,x,p)=\sum_{m=0}^{\infty}\frac{\hat{C}^m_i(t,x)}{m!}.
\end{align*}
We now solve for $\hat{C}^m_i(t,x)$. Substituting \eqref{taylor expansion for c hat} into \eqref{simplified homotopy of pde} and matching powers of $p$ yields the following recursive relations for $m=1,2,\ldots$:
\begin{align}\label{recursive relations}
\begin{cases}
\mc{L}_i\hat{C}^m_i(t,x)+\langle\mathbf{\hat{C}}^{m-1}(t,x),B(u(t))e_i\rangle=0\\
\hat{C}^m_i(T,x)=0.
\end{cases}
\end{align}
Write $\breve{C}^m_i(t,x)=\gamma_i(\tau,x)\hat{C}_i^m(t,x).$ Using \eqref{recursive relations} yields
\begin{align*}
\frac{\partial\breve{C}^m_i}{\partial\tau}-\frac{1}{2}\sigma_i^2\frac{\partial^2\breve{C}^m_i}{\partial x^2}&=-\gamma_i(\tau,x)\mc{L}_i\hat{C}^m_i(t,x)\\
&=\gamma_i(\tau,x)\langle\mathbf{\hat{C}}^{m-1}(T-\tau,x),B(u(T-\tau))e_i\rangle,
\end{align*}
with initial condition $\breve{C}^m_i(0,x)=0$. This type of nonhomogeneous diffusion equation has a known solution (see \cite{parkkim:homotopydiffusion} or Chapter 1.3 of \cite{kevorkian:pde}) given by
\begin{align*}
\breve{C}^m_i(t,x)=\int_t^{T}\int_{\mathbb{R}}G^m_i(s,\xi)H_i(s-t,x,\xi)\, d\xi\, ds,
\end{align*}
which proves the result.
\end{proof}
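The reduction used in the proof rests on the identity $\gamma_i\mc{L}_i(w/\gamma_i)=-\left(\partial_\tau w-\tfrac{1}{2}\sigma_i^2\partial^2_xw\right)$ with $\tau=T-t$. On exponential test functions $w(t,x)=e^{a_1x+c_1t}$ both sides are multiples of $w$, so the identity can be checked exactly by comparing the multipliers. The short Python sketch below does this (test-function parameters are arbitrary):

```python
from math import isclose

r, sigma, T = 0.05, 0.2, 1.0
beta = 2 * r / sigma ** 2
a = 0.5 * (beta - 1)                       # gamma = exp(a x + b tau), tau = T - t
b = 0.125 * sigma ** 2 * (beta + 1) ** 2

for a1, c1 in [(0.3, 0.1), (-1.2, 0.7), (2.0, -0.4)]:
    # u = w / gamma = exp(A x + Cc t + const) with A = a1 - a, Cc = c1 + b
    A, Cc = a1 - a, c1 + b
    # gamma * L_i u, divided by w: the exponentials cancel termwise
    lhs = Cc + r * (A - 1) + 0.5 * sigma ** 2 * (A ** 2 - A)
    # -(w_tau - 0.5 sigma^2 w_xx) / w, using w_tau = -c1 w and w_xx = a1^2 w
    rhs = c1 + 0.5 * sigma ** 2 * a1 ** 2
    assert isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-12)
```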
\section{Bid and Ask Prices}\label{bid-ask price models}
Uncertainty in this model enters through the dynamics of the chain $\mathbf{X}$, which in turn represents uncertainty in both the jump and diffusion coefficients of the price process. This is motivated by the buying or selling agent supposing that the market acts in the way most adverse to their interests. A model for the ask price is therefore given by the supremum over possible trajectories of $\mathbf{X}$, and a model for the bid price by the infimum. These are described by maximum and minimum principles, respectively, for related control problems. A dynamic programming principle and a verification theorem are also given below.
As in \cite{cohen:bsde}, define $\Psi_t$ to be the positive semidefinite matrix given by
\begin{equation}\label{psi}
\Psi_t:=\mbox{diag}(AX_{t-})-A\mbox{diag}(X_{t-})-\mbox{diag}(X_{t-})A^{\top}
\end{equation}
and, for $Y\in\mathbb{R}^N$,
\begin{equation*}
\|Y \|^2_{X_{t-}}=Y^{\top}\Psi_tY.
\end{equation*}
Then $\|\cdot\|_{X_{t-}}$ is a stochastic seminorm on $\mathbb{R}^N$ satisfying
\begin{equation*}
Y_t^{\top}d\langle M\rangle_tY_t=\|Y_t\|_{X_{t-}}^2dt,
\end{equation*}
where $\langle M\rangle$ is the predictable quadratic variation of the martingale defined in \eqref{chain}.
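It can be checked directly that $\Psi_t$ in \eqref{psi} is symmetric and positive semidefinite: for $X_{t-}=e_j$, a short computation gives $y^{\top}\Psi_ty=\sum_{i\neq j}a_{ij}(y_i-y_j)^2\geq 0$. The Python sketch below verifies this for an arbitrary rate matrix (chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N, j = 5, 2
A = rng.uniform(0.0, 1.0, size=(N, N))     # A[i, k] = a_{ik}, columns sum to zero
np.fill_diagonal(A, 0.0)
A -= np.diag(A.sum(axis=0))

X = np.eye(N)[:, j]                        # X_{t-} = e_j
Psi = np.diag(A @ X) - A @ np.diag(X) - np.diag(X) @ A.T

assert np.allclose(Psi, Psi.T)                   # symmetric
assert np.linalg.eigvalsh(Psi).min() > -1e-8     # positive semidefinite

y = rng.standard_normal(N)
quad = y @ Psi @ y                               # the seminorm ||y||_{X_{t-}}^2
explicit = sum(A[i, j] * (y[i] - y[j]) ** 2 for i in range(N) if i != j)
assert np.isclose(quad, explicit)
```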
Below $(\textsf{U},d)$ will be a Polish metric space. We adapt some methods used in \cite{elliott:stochastic}. Suppose the maps $f,\sigma,\eta:\mc{T}\times\mathbb{R}\times\mc{S}\mapsto\mathbb{R}$ and $g:\mathbb{R}\times\mc{S}\mapsto\mathbb{R}$ satisfy the following assumption:
\begin{myass}\label{assumption:Lipschitz}
For each $x\in\mc{S}$ and $t\in\mc{T}$, there exists a constant $C_1>0$ such that for all $z_1,z_2\in{\mathbb{R}}$,
\begin{align*}
|f(t,z_1,x)-f(t,z_2,x)|&\leq C_1|z_1-z_2|,\\
|\sigma(t,z_1,x)-\sigma(t,z_2,x)|&\leq C_1|z_1-z_2|,\\
|\eta(t,z_1,x)-\eta(t,z_2,x)|&\leq C_1|z_1-z_2|,\\
|g(z_1,x)-g(z_2,x)|&\leq C_1|z_1-z_2|.
\end{align*}
\end{myass}
Consider a process $\{Z_t\}_{t\in\mc{T}}$ given by the following dynamics:
\begin{align}\label{stateequation}
dZ_t&=f(t,Z_t,X_t)dt+\sigma(t,Z_t,X_t)dW_t+\eta(t,Z_{t-},X_{t-})\langle\alpha,dX_t\rangle.
\end{align}
\begin{mylm}\label{unique}
Suppose Assumption \ref{assumption:Lipschitz} holds. Then for any $(t,z,x)\in[0,T)\times\mathbb{R}\times\mc{S}$, the SDE \eqref{stateequation} admits a unique solution with initial conditions $Z_t=z$ and $X_t=x$.
\end{mylm}
\begin{proof}
See Theorem 16.3.11 of \cite{elliott:stochastic} or Theorem 6 of \cite{protter:stochastic} for the proof.
\end{proof}
Now take $Z_t=S_t$. From \eqref{prior change of measure}, the dynamics of $\{Z_t\}_{t\in\mc{T}}$ are given by
\begin{equation}\label{zdynamics}
dZ_t=\mu_tZ_tdt+\sigma_tZ_tdW_t +Z_{t-}\langle\alpha^{-1},X_{t-}\rangle\langle\alpha,dX_t\rangle.
\end{equation}
Note that \eqref{zdynamics} is of the form \eqref{stateequation} with $f(t,Z_t,X_t)=\mu_tZ_t$, $\sigma(t,Z_t,X_t)=\sigma_tZ_t$ and $\eta(t,Z_{t-},X_{t-})=\langle\alpha^{-1},X_{t-}\rangle Z_{t-}$. The following lemma is a consequence of Lemma \ref{unique}.
\begin{mylm}
For any $(t,z,x)\in[0,T)\times\mathbb{R}\times\mc{S}$, the SDE \eqref{zdynamics} admits a unique solution with initial conditions $Z_t=z$ and $X_t=x$.
\end{mylm}
\begin{proof}
For each $x\in\mc{S}$ and $t\in\mc{T}$, choose $C_1=\ds\max\{|\langle\mu,x\rangle|,|\langle\sigma,x\rangle|,|\langle\alpha^{-1},x\rangle|\}$. Then, Assumption \ref{assumption:Lipschitz} holds. By Lemma \ref{unique}, the result follows.
\end{proof}
We denote the solution of \eqref{zdynamics} with initial conditions $z$ and $x$ by $\{Z_s^{t,z,x}\}_{s\in[t,T]}$.
Consider an objective functional given by
\begin{equation}\label{objfunc}
\mc{J}(t,z,x,u)=\expoc\left[g(Z_T^u,X_T^u)\right].
\end{equation}
Here, $\expoc(\cdot)$ is the conditional expectation given $Z^u_{t}=z$ and $X^u_{t}=x$ under $Q^u$. By Assumption \ref{assumption:Lipschitz}, the objective functional \eqref{objfunc} is well-defined. Write $E^{Q^u}[g(\z_t,\x_t)]\allowbreak=\expoc\left[g(Z_t^u,X_t^u)\right]$.
We shall consider predictable processes $\varphi_1:\Omega\times\mc{T}\mapsto\mathbb{R}$ and $\varphi_2:\Omega\times\mc{T}\mapsto\mathbb{R}^N$ which satisfy the following assumption.
\begin{myass}\label{square integrable}
For each $x\in\mc{S}$, $t\in\mc{T}$ and $u\in\textsf{U}$,
\begin{align*}
E\bigg[\int_0^T(\|\mc{J}(t,z,x,u)\|^2+\|\varphi_1(t)\|^2+\|\varphi_2(t)\|_{X_{t-}}^2)dt\bigg]<\infty.
\end{align*}
\end{myass}
For the rest of the section, we assume that Assumption \ref{assumption:Lipschitz} and Assumption \ref{square integrable} hold.
We shall show that \eqref{objfunc} is a solution to a backward stochastic differential equation (BSDE). We first prove the following lemmas.
Recall from Section \ref{change of measure for markov chains} that $D(u(t))=[b_{ji}(u_i)/a_{ji}]_{i,j=1,\ldots,N}$ and
\begin{align*}
D_0(u)=D(u)-\mbox{diag}(d_{11}(u_1),\ldots,d_{NN}(u_N)).
\end{align*}
Furthermore, from Section \ref{esscher}, we have a regime switching parameter $\theta_t=\langle\theta,X_t^u\rangle$, where $\theta:=(\theta_1,\ldots,\theta_N)^{\top}$. The following lemma can be proved by using the fact that $\theta$ and $\sigma$ are bounded by some constant $K$ and expanding the second term via the stochastic seminorm definition.
\begin{mylm}\label{boundedness}
For each $x\in\mc{S}$, $t\in\mc{T}$ and $u\in\textsf{U}$, there exists a constant $C_2>0$ such that
\begin{align*}
|\langle\theta\sigma,x\rangle|^2+\|D_0(u)X_{t-}-\mathbf{1}\|^2_{X_{t-}}<C_2,
\end{align*}
where
\begin{align*}
C_2&=K+\max_{u\in\textsf{U}}\left\{\sum_{\substack{k=1\\k\neq j}}^N\frac{(b_{kj}(u_j))^2}{a_{kj}}\right\}.
\end{align*}
\end{mylm}
\begin{proof}
Since $\theta$ and $\sigma$ are bounded, the first term is bounded by some constant $K$. Suppose $X_{t-}=e_j\in\mc{S}$. For $i,m,n=1,\ldots,N$ and some vector $\mathbf{a}=(a_1,\ldots,a_N)^{\top}$, define the $N\times N$ matrices $\mbox{col}_i(\mathbf{a})=[c_{i,mn}]$ and $\mbox{row}_i(\mathbf{a})=[r_{i,mn}]$ such that
\begin{equation*}
c_{i,mn}=r_{i,nm}=\begin{cases}
a_m,&\mbox{if $n=i$}\\
0,&\mbox{otherwise}.
\end{cases}
\end{equation*}
Then,
\begin{align*}
D_0(u)X_{t-}-\mathbf{1}&=(d_{1j}-1,\ldots,-1,\ldots,d_{Nj}-1)^{\top}\\
\mbox{diag}(AX_{t-})&=\mbox{diag}(a_{1j},\ldots,a_{Nj})\\
A\mbox{diag}(X_{t-})&=\mbox{col}_j(a_{1j},\ldots,a_{Nj})\\
\mbox{diag}(X_{t-})A^{\top}&=\mbox{row}_j(a_{1j},\ldots,a_{Nj}).
\end{align*}
From \eqref{psi}, the diagonal entries, the $j$th column and the $j$th row of $\Psi_t$ have entries of the form $\pm a_{mj}$ and zero elsewhere. That is,
\begin{align*}
\Psi_t=\left[{\begin{array}{cccccc}
a_{1j}&0&\cdots&-a_{1j}&\cdots&0\\
0&a_{2j}&\cdots&-a_{2j}&\cdots&0\\
\vdots &\vdots &\ddots&\vdots&\vdots&\vdots\\
-a_{1j}&-a_{2j}&\cdots&-a_{jj}&\cdots&-a_{Nj}\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\
0&0&\cdots&-a_{Nj}&\cdots&a_{Nj}
\end{array}}\right].
\end{align*}
Multiplying $(D_0(u)X_{t-}-\mathbf{1})$ to the right of $\Psi_t$ yields
\begin{align*}
\Psi_t(D_0(u)X_{t-}-\mathbf{1})=\left[{\begin{array}{c}
d_{1j}(u)a_{1j}-a_{1j}+a_{1j}\\
d_{2j}(u)a_{2j}-a_{2j}+a_{2j}\\
\vdots\\
-\ds\sum_{\substack{k=1\\k\neq j}}^Nd_{kj}(u)a_{kj}+\ds\sum_{k=1}^Na_{kj}\\
\vdots\\
d_{Nj}(u)a_{Nj}-a_{Nj}+a_{Nj}
\end{array}}\right].
\end{align*}
From Sections 3 and 4, we have $\sum_{k=1}^Na_{kj}=0$ and $d_{kj}(u)a_{kj}=b_{kj}(u_j)$. Then, multiplying $(D_0(u)X_{t-}-\mathbf{1})^{\top}$ to the left of the above vector yields
\begin{align*}
\|D_0(u)X_{t-}-\mathbf{1}\|^2_{X_{t-}}&=\ds\sum_{\substack{k=1\\k\neq j}}^N[b_{kj}(u_j)d_{kj}(u)-b_{kj}(u_j)]+\ds\sum_{\substack{k=1\\k\neq j}}^Nb_{kj}(u_j)\\
&=\ds\sum_{\substack{k=1\\k\neq j}}^N\frac{(b_{kj}(u_j))^2}{a_{kj}}.
\end{align*}
Hence, $\|D_0(u)X_{t-}-\mathbf{1}\|^2_{X_{t-}}$ is also bounded. Choose
\begin{align*}
C_2&=K+\max_{u\in\textsf{U}}\left\{\sum_{\substack{k=1\\k\neq j}}^N\frac{(b_{kj}(u_j))^2}{a_{kj}}\right\}.
\end{align*}
The result then follows.
\end{proof}
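The closed form derived in the proof can also be confirmed numerically: with $X_{t-}=e_j$, the seminorm of $D_0(u)X_{t-}-\mathbf{1}$ collapses to $\sum_{k\neq j}(b_{kj}(u_j))^2/a_{kj}$. A small Python check with arbitrary rate matrices (illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
N, j = 4, 1

def generator(rng):
    # random rate matrix with off-diagonals in [0.1, 1] and columns summing to zero
    G = rng.uniform(0.1, 1.0, size=(N, N))
    np.fill_diagonal(G, 0.0)
    return G - np.diag(G.sum(axis=0))

A = generator(rng)        # reference rates a_{kj}
Bm = generator(rng)       # controlled rates b_{kj}(u)

D = Bm / A                # entrywise D = [b_{kj}/a_{kj}]
D0 = D - np.diag(np.diag(D))

X = np.eye(N)[:, j]
Psi = np.diag(A @ X) - A @ np.diag(X) - np.diag(X) @ A.T
v = D0 @ X - np.ones(N)
seminorm_sq = v @ Psi @ v
closed_form = sum(Bm[k, j] ** 2 / A[k, j] for k in range(N) if k != j)
assert np.isclose(seminorm_sq, closed_form)
```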
The following lemma is a result of using the predictable quadratic variation of $\mc{E}(\Theta)$, Gr\"{o}nwall's inequality, and Lemma \ref{boundedness}.
\begin{mylm}\label{squareintegrablemartingale}
Define
\begin{equation*}
\Theta_t=\int_0^t\langle\theta\sigma,X_s\rangle dW_s+\int_0^t\langle D_0(u)X_{s-}-\mathbf{1}, dM_s\rangle.
\end{equation*}
Then the stochastic exponential $\mc{E}(\Theta)$ is a square integrable martingale for $t\leq T$.
\end{mylm}
\begin{proof}
The predictable quadratic variation of $\mc{E}(\Theta)$ is given by
\begin{equation*}
\langle\mc{E}(\Theta)\rangle_t=\int_0^t\mc{E}(\Theta)^2_{s-}\left(|\langle\theta\sigma,X_s\rangle|^2+\|D_0(u)X_{s-}-\mathbf{1}\|^2_{X_{s-}}\right)ds.
\end{equation*}
Since $\mc{E}(\Theta)_{0}=1$, It\^{o}'s formula yields
\begin{align*}
\mc{E}(\Theta)_{t}^2=1+2\int_0^t\mc{E}(\Theta)_{s-}d\mc{E}(\Theta)_{s}+[\mc{E}(\Theta)]_t,
\end{align*}
and taking expectations replaces the quadratic variation $[\mc{E}(\Theta)]$ by its compensator $\langle\mc{E}(\Theta)\rangle$.
Hence, for a localizing sequence $T_n\uparrow\infty$ such that the stopped processes $\mc{E}(\Theta)_{\cdot\wedge T_n}$ and $\langle\mc{E}(\Theta)\rangle_{\cdot\wedge T_n}$ are bounded,
\begin{align*}
E\left[1_{\{t\leq T_n\}}\mc{E}(\Theta)^2_t\right]&\leq E\left[\mc{E}(\Theta)_{t\wedge T_n}^2\right]\\
&=E\left[1+\langle\mc{E}(\Theta)\rangle_{t\wedge T_n}\right]\\
&=1+\int_0^t E\bigg[1_{\{s\leq T_n\}}\mc{E}(\Theta)^2_{s-}\left(|\langle\theta\sigma,X_s\rangle|^2+\|D_0(u)X_{s-}-\mathbf{1}\|^2_{X_{s-}}\right)\bigg]ds.
\end{align*}
By Gr\"onwall's Inequality and Lemma \ref{boundedness},
\begin{align*}
E\left[1_{\{t\leq T_n\}}\mc{E}(\Theta_t)^2\right]&\leq\exp\left[\int_0^t\left(|\langle\theta\sigma,X_s\rangle|^2+\|D_0(u)X_{s-}-\mathbf{1}\|^2_{X_{s-}}\right)ds\right]\\
&\leq\exp\left[\int_0^tC_2\phantom{|}ds\right]=e^{C_2t}.
\end{align*}
By monotone convergence, it follows that
\begin{align*}
E\left[\mc{E}(\Theta)_t^2\right]\leq e^{C_2T}<\infty,
\end{align*}
which proves the result.
\end{proof}
\begin{mydef}\label{balanced driver}
Consider a driver $F:\mc{T}\times\mc{S}\times\textsf{U}\times\mathbb{R}\times\mathbb{R}^N\mapsto\mathbb{R}$ and the maps $\varphi_1:\Omega\times\mc{T}\mapsto\mathbb{R}$ and $\varphi_2,\varphi_2':\Omega\times\mc{T}\mapsto\mathbb{R}^N$. Suppose there exists a map $\beta:\mc{T}\times\mc{S}\times\textsf{U}\times\mathbb{R}\times\mathbb{R}^N\times\mathbb{R}^N\mapsto\mathbb{R}^N$ such that
\begin{itemize}
\item $\beta$ is predictable in $(t,x,u)\in\mc{T}\times\mc{S}\times\textsf{U}$ and Borel measurable in $(\varphi_1,\varphi_2,\varphi_2')\in\mathbb{R}\times\mathbb{R}^N\times\mathbb{R}^N$;
\item $\beta>-\mathbf{1}$ for all $(\varphi_1,\varphi_2,\varphi_2')$ and $dP\times dt$ almost all $t$; and
\item for $dP\times dt$ almost all $t$, and for all $(\varphi_1,\varphi_2,\varphi_2')$,
\begin{align*}
\ds\sum_{m=1}^N(\varphi^{(m)}_2-\varphi'^{(m)}_2)\beta(t,x,u,\varphi_1,\varphi_2,\varphi_2')^{(m)}\Psi_t^{(m)}=F(t,x,u,\varphi_1,\varphi_2)-F(t,x,u,\varphi_1,\varphi_2'),
\end{align*}
\end{itemize}
where $\varphi^{(m)}_2,\varphi'^{(m)}_2,\beta(t,x,u,\varphi_1,\varphi_2,\varphi_2')^{(m)}$ and $\Psi_t^{(m)}$ are the $m$th element of their respective vectors. Then, $F$ is \textbf{balanced}.
\end{mydef}
Let $M^2(\mc{T};\mathbb{R},\mc{G}_t)$ be the set of $\mathbb{R}$-valued $\mc{G}_t$-adapted square-integrable processes over $\Omega\times\mc{T}$, and $P^2(\mc{T};\mathbb{R}^{N},\mc{G}_t)$ be the set of $\mathbb{R}^N$-valued $\mc{G}_t$-predictable processes $\{\varphi_t\}_{t\in\mc{T}}$ such that $E(\int_0^t\lVert\varphi_s\rVert_{X_{s-}}^2ds)<\infty$. Write
\begin{align*}
F(t,x,u,\varphi_1,\varphi_2)&=\varphi_1\langle\theta\sigma,x\rangle+\sum_{m=1}^N\varphi^{(m)}_2[D_0(u)x-\mathbf{1}]^{(m)}\Psi_t^{(m)}.
\end{align*}
We then have the following proposition.
\begin{mypr}\label{J as solution to BSDE}
For a given control $u\in\mc{U}$, the process
\begin{equation*}
\mc{J}(t,z,x,u)=\expoc\left[g(Z_T^u,X_T^u)\right]
\end{equation*}
is the unique solution to the BSDE
\begin{equation*}
\begin{cases}
d\mc{J}(t,z,x,u)&=-F\left(t,\x_t,u(t),\varphi_1(t),\varphi_2(t)\right)dt+\varphi_1(t)dW_t+\langle\varphi_2(t-),dM_t\rangle,\\
\mc{J}(T,z,x,u)&=g(\z_T,\x_T),
\end{cases}
\end{equation*}
such that $(\varphi_1,\varphi_2)\in M^2(\mc{T};\mathbb{R},\mc{G}_t)\times P^2(\mc{T};\mathbb{R}^{N},\mc{G}_t)$.
\end{mypr}
\begin{proof}
We know that $F$ is Lipschitz in $\varphi_1$ and $\varphi_2$ since their coefficients are bounded. In addition, $F$ is balanced since $\beta=D_0(u)\x_{t-}-\mathbf{1}$ satisfies the conditions in Definition \ref{balanced driver}.
By the product rule,
\begin{align*}
d\left(\mc{J}(t,z,x,u)\mc{E}(\Theta)_{t}\right)&=\mc{J}(t-,z,x,u)d\mc{E}(\Theta)_{t}+\mc{E}(\Theta)_{t-}d\mc{J}(t,z,x,u)+d[\mc{J},\mc{E}(\Theta)]_t.
\end{align*}
Since $d\langle M\rangle_t=\Psi_tdt$, then
\begin{align*}
\frac{d\left(\mc{J}(t,z,x,u)\mc{E}(\Theta)_{t}\right)}{\mc{E}(\Theta)_{t-}}
&=\mc{J}(t-,z,x,u)d\Theta_t+d\mc{J}(t,z,x,u)+d[\mc{J},\Theta]_t\\
&=\mc{J}(t-,z,x,u)\left[\langle\theta\sigma,x\rangle dW_t+\langle D_0(u)x-\mathbf{1}, dM_t\rangle\right]\\
&\quad-\bigg[\varphi_1(t)\langle\theta\sigma,x\rangle+\sum_{m=1}^N\varphi^{(m)}_2(t)[D_0(u)x-\mathbf{1}]^{(m)}\Psi_t^{(m)}\bigg]dt\\
&\quad+\varphi_1(t)dW_t+\langle\varphi_2(t-),dM_t\rangle+\varphi_1(t)\langle\theta\sigma,x\rangle dt\\
&\quad+\sum_{m=1}^N\varphi^{(m)}_2(t)[D_0(u)x-\mathbf{1}]^{(m)}\Psi_t^{(m)}dt\\
&=\left[\varphi_1(t)+\mc{J}(t,z,x,u)\langle\theta\sigma,x\rangle\right]dW_t\\
&\quad+\langle\varphi_2(t-)+ \mc{J}(t-,z,x,u)(D_0(u)x-\mathbf{1}),dM_t\rangle.
\end{align*}
The right-hand side of the last equality is a local martingale, since $\{W_t\}_{t\in\mc{T}}$ and $\{M_t\}_{t\in\mc{T}}$ are martingales. Therefore, $\mc{J}(t,z,x,u)\mc{E}(\Theta)_{t}$ is also a local martingale.
Write
\begin{align*}
\ol{\mc{J}}&=\left(\int_0^T\|\mc{J}(t,Z_t,X_t,u(t))\langle\theta\sigma,X_{t}\rangle\|^2dt\right)^{1/2}.
\end{align*}
Then,
\begin{align*}
\ol{\mc{J}}&\leq N\left(\max_{X_t\in\mc{S}}\sup_{t\in\mc{T}}|\mc{J}(t,Z_t,X_t,u(t))|^2\right)^{1/2}\left(\int_0^T\|\langle\theta\sigma,X_t\rangle\|^2dt\right)^{1/2}\\
&\leq\frac{N}{2}\left(\max_{X_t\in\mc{S}}\sup_{t\in\mc{T}}|\mc{J}(t,Z_t,X_t,u(t))|^2+\int_0^T\|\langle\theta\sigma,X_t\rangle\|^2dt\right)<\infty.
\end{align*}
Write $\mc{H}^1$ for the set of integrable martingales. By the Burkholder-Davis-Gundy (BDG) Inequality,
$$\left\{\int_0^s(\varphi_1(s)+\mc{J}(s,Z_s,X_s,u(s))\langle\theta\sigma,X_s\rangle)dW_s\right\}_{s\in(0,T]}\in\mc{H}^1.$$
Using a similar argument,
\begin{align*}
\bigg\{\int_0^s\langle\varphi_2(s-)+ \mc{J}(s-,Z_{s-},X_{s-},u(s-))(D_0(u)X_{s-}-\mathbf{1}),dM_s\rangle\bigg\}_{s\in(0,T]}\in\mc{H}^1.
\end{align*}
By Lemma \ref{squareintegrablemartingale}, we know that $\mc{E}(\Theta)_t$ is a square integrable martingale. Thus,
\begin{align*}
\{\mc{J}(t,Z_t,X_t,u(t))\mc{E}(\Theta)_t\}_{t\geq 0}\in\mc{H}^1.
\end{align*}
It then follows that
\begin{align*}
E^P\left[\mc{E}(\Theta)_Tg(\z_T,\x_T)|\mc{G}_t\right]&=E^P\left[\mc{E}(\Theta)_T\mc{J}(T,z,x,u)|\mc{G}_t\right]\\
&=\mc{E}(\Theta)_t\mc{J}(t,z,x,u).
\end{align*}
By Bayes' Rule,
\begin{align*}
\mc{J}(t,z,x,u)&=\frac{1}{\mc{E}(\Theta)_t}E^P[\mc{E}(\Theta)_Tg(\z_T,\x_T)|\mc{G}_t]\\
&=E^{Q^{u}}[g(\z_T,\x_T)|\mc{G}_t].
\end{align*}
\end{proof}
Define the following value processes
\begin{equation}\label{valinf}
\ul{V}(t,z,x):=\essinf_{u\in\mc{U}}\mc{J}(t,z,x,u)
\end{equation}
and
\begin{equation}\label{valsup}
\ol{V}(t,z,x):=\esssup_{u\in\mc{U}}\mc{J}(t,z,x,u).
\end{equation}
The next objective is to show that the value processes \eqref{valinf} and \eqref{valsup} have c\`adl\`ag modifications which are solutions to some BSDEs. We first state the following lemma.
\begin{mylm}\label{existence of inf and sup}
Recall the standard balanced driver $F$ from Proposition \ref{J as solution to BSDE}. Then, for each $t\in\mc{T}$, $x\in\mc{S}$, $\varphi_1\in\mathbb{R}$, $\varphi_2\in\mathbb{R}^N$, and $u\in\textsf{U}$,
\begin{enumerate}
\item[(i)] the maps $(x,\varphi_1,\varphi_2)\mapsto F(t,x,u,\varphi_1,\varphi_2)$ have a common uniform Lipschitz constant $K$;
\item[(ii)] $\essinf_{u\in\textsf{U}}\left[\frac{b_{ji}(u)}{a_{ji}}-1\right]> -1$ for $i\neq j$, where $D=\left[\frac{b_{ji}(u)}{a_{ji}}\right]$ as defined in Section \ref{change of measure for markov chains};
\item[(iii)] $\sup_{u\in\textsf{U}}\{|F(t,x,u,0,0)|^2\}$ is bounded by a predictable $dt\times dP$-integrable process;
\item[(iv)] the maps $u\mapsto\frac{b_{ji}(u)}{a_{ji}}-1$ are continuous, for fixed $(t,x,\varphi_1,\varphi_2)$, and $\textsf{U}$ is a countable union of compact metrizable subsets of itself.
\end{enumerate}
Furthermore, there is a version of the mappings
\begin{align*}
\ul{F}(t,x,\varphi_1,\varphi_2)&=\essinf_{u\in\textsf{U}}F(t,x,u,\varphi_1,\varphi_2)\\
\ol{F}(t,x,\varphi_1,\varphi_2)&=\esssup_{u\in\textsf{U}}F(t,x,u,\varphi_1,\varphi_2)
\end{align*}
which are standard balanced BSDE drivers.
\end{mylm}
\begin{proof}
See Lemma 19.3.8 of \cite{elliott:stochastic} for the proof.
\end{proof}
Using the previous lemma, we have the following result.
\begin{mylm}\label{existence of H}
Define the functions
\begin{equation*}
\ul{H}(t,z,x,\varphi_1,\varphi_2)=\essinf_{u\in\mc{U}}F(t,x,u,\varphi_1,\varphi_2)
\end{equation*}
and
\begin{equation*}
\ol{H}(t,z,x,\varphi_1,\varphi_2)=\esssup_{u\in\mc{U}}F(t,x,u,\varphi_1,\varphi_2).
\end{equation*}
Then there are versions of $\ul{H}$ and $\ol{H}$ which are balanced Lipschitz drivers for some BSDEs.
\end{mylm}
\begin{proof}
This is a direct application of Lemma \ref{existence of inf and sup}.
\end{proof}
\begin{mypr}[Dynamic Programming Principle]\label{V has version}
The value processes $\ul{V}$ and $\ol{V}$ have c\`adl\`ag modifications, which are the respective solutions to the BSDEs
\begin{equation}\label{V inf}
\begin{cases}
d\ul{V}(t,z,x)&=-\ul{H}(t,\x_t,\varphi_1(t),\varphi_2(t))dt+\varphi_1(t)dW_t+\langle\varphi_2(t-),dM_t\rangle\\
\ul{V}(T,z,x)&=g(\z_T,\x_T)
\end{cases}
\end{equation}
and
\begin{equation}\label{V sup}
\begin{cases}
d\ol{V}(t,z,x)&=-\ol{H}(t,\x_t,\varphi_1(t),\varphi_2(t))dt+\varphi_1(t)dW_t+\langle\varphi_2(t-),dM_t\rangle\\
\ol{V}(T,z,x)&=g(\z_T,\x_T).
\end{cases}
\end{equation}
\end{mypr}
\begin{proof}
As discussed in \cite{pardoux:bsde} for Brownian motions and \cite{cohen:bsde} for Markov chains, BSDEs with drivers $\ul{H}$ and $\ol{H}$ defined in Lemma \ref{existence of H} have c\`adl\`ag solutions, which we denote by $\ul{Y}$ and $\ol{Y}$, respectively.
By definition, for all $u\in\mc{U}$
\begin{equation*}
\ul{H}(t,x,\varphi_1,\varphi_2)\leq F(t,x,u,\varphi_1,\varphi_2)
\end{equation*}
and
\begin{equation*}
\ol{H}(t,x,\varphi_1,\varphi_2)\geq F(t,x,u,\varphi_1,\varphi_2).
\end{equation*}
By the comparison theorem for BSDEs with Brownian motion and Markov chains \cite{cohen:bsde,peng:comparison,cohen:generalcomparison} and Proposition \ref{J as solution to BSDE}, up to indistinguishability,
\begin{equation*}
\ul{Y}(t,z,x)\leq \mc{J}(t,z,x,u)
\end{equation*}
and
\begin{equation*}
\ol{Y}(t,z,x)\geq \mc{J}(t,z,x,u),
\end{equation*}
for all $u\in\mc{U}$. By Filippov's Implicit Function Theorem (see Theorem 21.3.4 of \cite{elliott:stochastic}), for every $\epsilon>0$, there exist predictable controls $\ul{u}^{\epsilon},\ol{u}^{\epsilon}\in\mc{U}$ such that
\begin{equation*}
F(t,x,\ul{u}^{\epsilon},\varphi_1,\varphi_2)\leq \ul{H}(t,x,\varphi_1,\varphi_2)+\epsilon
\end{equation*}
and
\begin{equation*}
F(t,x,\ol{u}^{\epsilon},\varphi_1,\varphi_2)\geq \ol{H}(t,x,\varphi_1,\varphi_2)-\epsilon.
\end{equation*}
Since $\ul{Y}(t,z,x)+\epsilon(T-t)$ and $\ol{Y}(t,z,x)-\epsilon(T-t)$ are the respective solutions to BSDEs with drivers $\ul{H}(t,x,\varphi_1,\varphi_2)+\epsilon$ and $\ol{H}(t,x,\varphi_1,\varphi_2)-\epsilon$, by the comparison theorem we have, up to indistinguishability,
\begin{equation*}
\mc{J}(t,z,x,\ul{u}^{\epsilon})\leq\ul{Y}(t,z,x)+\epsilon(T-t)
\end{equation*}
and
\begin{equation*}
\mc{J}(t,z,x,\ol{u}^{\epsilon})\geq\ol{Y}(t,z,x)-\epsilon(T-t).
\end{equation*}
Letting $\epsilon\rightarrow 0$, for every $t\in\mc{T}$ we obtain
\begin{equation*}
\ul{Y}(t,z,x)=\essinf_{u\in\mc{U}}\mc{J}(t,z,x,u)=\ul{V}(t,z,x)
\end{equation*}
and
\begin{equation*}
\ol{Y}(t,z,x)=\esssup_{u\in\mc{U}}\mc{J}(t,z,x,u)=\ol{V}(t,z,x).
\end{equation*}
Thus, $\ul{Y}$ and $\ol{Y}$ are versions of $\ul{V}$ and $\ol{V}$, respectively.
\end{proof}
The following proposition states the minimum and maximum principles for the control problems.
\begin{mypr}[Minimum/Maximum Principles]\label{minmax principle}
Let $\big(\ul{V},\ul{\varphi}_1,\ul{\varphi}_2\big)$ and $\big(\ol{V},\ol{\varphi}_1,\allowbreak\ol{\varphi}_2\big)$ be the respective solutions to the BSDEs with drivers $\ul{H}$ and $\ol{H}$ and terminal value $g(\z_T,\x_T)$. The controls $\ul{u},\ol{u}\in\mc{U}$ are optimal if and only if
\begin{equation*}
F\big(t,\x_t,\ul{u},\ul{\varphi}_1,\ul{\varphi}_2\big)=\ul{H}\big(t,\x_t,\ul{\varphi}_1,\ul{\varphi}_2\big)
\end{equation*}
and
\begin{equation*}
F\big(t,\x_t,\ol{u},\ol{\varphi}_1,\ol{\varphi}_2\big)=\ol{H}\big(t,\x_t,\ol{\varphi}_1,\ol{\varphi}_2\big).
\end{equation*}
\end{mypr}
\begin{proof}
By definition, $\mc{J}(t,z,x,u)\geq\ul{V}(t,z,x)$ and $\mc{J}(t,z,x,u)\leq\ol{V}(t,z,x)$ for all $u\in\mc{U}$ and $(t,z,x)\in\mc{T}\times\mathbb{R}\times\mc{S}$, with equality if and only if $u$ is optimal for the respective control problem.
Suppose that we have controls $\ul{u}$ and $\ol{u}$ such that
\begin{equation*}
F\big(t,\x_t,\ul{u}(t),\ul{\varphi}_1(t),\ul{\varphi}_2(t)\big)=\ul{H}\big(t,\x_t,\ul{\varphi}_1(t),\ul{\varphi}_2(t)\big)
\end{equation*}
and
\begin{equation*}
F\big(t,\x_t,\ol{u}(t),\ol{\varphi}_1(t),\ol{\varphi}_2(t)\big)=\ol{H}\big(t,\x_t,\ol{\varphi}_1(t),\ol{\varphi}_2(t)\big).
\end{equation*}
It follows that $\big(\ul{V},\ul{\varphi}_1,\ul{\varphi}_2\big)$ and $\big(\ol{V},\ol{\varphi}_1,\ol{\varphi}_2\big)$ solve the BSDEs with drivers $F(\cdot,\cdot,\ul{u}(t),\cdot,\cdot)$ and
$F(\cdot,\cdot,\ol{u}(t),\cdot,\cdot)$, respectively. By uniqueness of solutions of BSDEs,
\begin{align*}
\mc{J}(t,z,x,\ul{u})=\ul{V}(t,z,x)\quad\mbox{and}\quad \mc{J}(t,z,x,\ol{u})=\ol{V}(t,z,x).
\end{align*}
This proves that $\ul{u}$ and $\ol{u}$ are optimal controls.
Conversely, suppose $\ul{u}$ and $\ol{u}$ are optimal. Then from Proposition \ref{J as solution to BSDE}, for some $\ul{\varphi}'_1$, $\ul{\varphi}'_2$, $\ol{\varphi}'_1$, and $\ol{\varphi}'_2$, the triples $(\mc{J}(t,z,x,\ul{u}),\ul{\varphi}'_1,\ul{\varphi}'_2)$ and $(\mc{J}(t,z,x,\ol{u}),\ol{\varphi}'_1,\ol{\varphi}'_2)$ are solutions to BSDEs with drivers $F(\cdot,\cdot,\ul{u}(t),\cdot,\cdot)$ and $F(\cdot,\cdot,\ol{u}(t),\cdot,\cdot)$, respectively. Furthermore,
\begin{equation*}
F(t,x,u,\varphi_1,\varphi_2)\geq\ul{H}(t,x,\varphi_1,\varphi_2)\quad\mbox{and}\quad F(t,x,u,\varphi_1,\varphi_2)\leq\ol{H}(t,x,\varphi_1,\varphi_2).
\end{equation*}
Using the definition of $\ul{V}$ and $\ol{V}$ in Proposition \ref{V has version} and the comparison theorem, for all $s\in\mc{T}$ we have
\begin{equation*}
\mc{J}(0,z,x,\ul{u})=\ul{V}(0,z,x)\quad\mbox{and}\quad \mc{J}(0,z,x,\ol{u})=\ol{V}(0,z,x)
\end{equation*}
if and only if
\begin{equation*}
\mc{J}(s,z,x,\ul{u})=\ul{V}(s,z,x)\quad\mbox{and}\quad \mc{J}(s,z,x,\ol{u})=\ol{V}(s,z,x).
\end{equation*}
Since these processes have a unique canonical semimartingale decomposition, we have, up to indistinguishability,
\begin{equation*}
F\big(t,\x_t,\ul{u}(t),\ul{\varphi}'_1(t),\ul{\varphi}'_2(t)\big)=\ul{H}\big(t,\x_t,\ul{\varphi}_1(t),\ul{\varphi}_2(t)\big)
\end{equation*}
and
\begin{equation*}
F\big(t,\x_t,\ol{u}(t),\ol{\varphi}'_1(t),\ol{\varphi}'_2(t)\big)=\ol{H}\big(t,\x_t,\ol{\varphi}_1(t),\ol{\varphi}_2(t)\big).
\end{equation*}
Furthermore,
\begin{equation*}
\int_0^T(\ul{\varphi}_1'(t)dW_t+\ul{\varphi}_2'(t)d\ol{J}(t))=\int_0^T(\ul{\varphi}_1(t)dW_t+\ul{\varphi}_2(t)d\ol{J}(t))
\end{equation*}
and
\begin{equation*}
\int_0^T(\ol{\varphi}_1'(t)dW_t+\ol{\varphi}_2'(t)d\ol{J}(t))=\int_0^T(\ol{\varphi}_1(t)dW_t+\ol{\varphi}_2(t)d\ol{J}(t)).
\end{equation*}
Since $(\varphi_1,\varphi_2)\in M^2(\mc{T};\mathbb{R}^{K\times N},\mc{G}_t)\times P^2(\mc{T};\mathbb{R}^{K\times N},\mc{G}_t)$, by the uniqueness part of the martingale representation theorem we have
\begin{equation*}
\|\ul{\varphi}_1-\ul{\varphi}'_1\|^2=0,\quad\|\ul{\varphi}_2-\ul{\varphi}'_2\|^2_{X_{t-}}=0,\quad\|\ol{\varphi}_1-\ol{\varphi}'_1\|^2=0,\quad\mbox{and}\quad\|\ol{\varphi}_2-\ol{\varphi}'_2\|^2_{X_{t-}}=0.
\end{equation*}
Since $F$ and $H$ are continuous with respect to these norms, the result follows.
\end{proof}
The following proposition states that an optimal feedback control exists if an optimal control exists. Optimal feedback controls are controls that depend only on the current values of the state variables $(t,\z_t,\x_t)$.
\begin{mypr}
Suppose that there exist $\ul{u},\ol{u}\in\textsf{U}$ such that
\begin{equation*}
F(t,x,\ul{u},\varphi_1,\varphi_2)=\essinf_{u\in\textsf{U}}F(t,x,u,\varphi_1,\varphi_2)
\end{equation*}
and
\begin{equation*}
F(t,x,\ol{u},\varphi_1,\varphi_2)=\esssup_{u\in\textsf{U}}F(t,x,u,\varphi_1,\varphi_2).
\end{equation*}
Then there exist feedback controls $\ul{u}^*,\ol{u}^*:\mc{T}\times\mathbb{R}\times\mc{S}\mapsto\textsf{U}$ such that
\begin{align*}
\ul{u}^*(t,\z_t,\x_t)\quad\mbox{and}\quad \ol{u}^*(t,\z_t,\x_t)
\end{align*}
are optimal among all predictable controls.
\end{mypr}
\begin{proof}
From Proposition \ref{minmax principle}, a control is optimal if and only if it minimizes or maximizes $F(t,x,u,\varphi_1,\varphi_2)$. Since $\varphi_1$ and $\varphi_2$ come from the solution of a Markovian BSDE, they are Borel measurable functions. Using Filippov's Implicit Function Theorem, there are $\mc{B}(\mc{T}\times\mathbb{R}\times\mc{S})$-measurable maps $\ul{u}^*$ and $\ol{u}^*$ which minimize and maximize, respectively, $F(t,x,u,\varphi_1(t,z,x),\varphi_2(t,z,x))$ for all $(z,x)\in\mathbb{R}\times\mc{S}$ and almost all $t\in\mc{T}$.
\end{proof}
We now state the verification theorem.
\begin{mypr}[Verification Theorem]\label{verification theorem}
Define the integro-differential operator $\mc{L}$ by
\begin{align*}
\mc{L}v(t,z,x)&=f(t,z,x)v_z(t,z,x)+\frac{1}{2}\sigma^2(t,z,x)v_{zz}(t,z,x).
\end{align*}
Write $\mathbf{v}:=(v(t,z,e_1),\ldots,v(t,z,e_N))^{\top}$. Consider the following Hamilton-Jacobi-Bellman (HJB) equations:
\begin{equation}\label{hjbinf}
\begin{cases}
v_t(t,z,x)+\mc{L}v(t,z,x)+\ul{H}(t,x,v_z\sigma,\mathbf{v}+\eta(t,z))+\langle\mathbf{v}+\eta(t,z),Ax\rangle=0,\\
v(T,z,x)=g(\z_T,\x_T)
\end{cases}
\end{equation}
and
\begin{equation}\label{hjbsup}
\begin{cases}
v_t(t,z,x)+\mc{L}v(t,z,x)+\ol{H}(t,x,v_z\sigma,\mathbf{v}+\eta(t,z))+\langle\mathbf{v}+\eta(t,z),Ax\rangle=0,\\
v(T,z,x)=g(\z_T,\x_T),
\end{cases}
\end{equation}
where $\eta(t,z):=(\eta(t,z,e_1),\ldots,\eta(t,z,e_N))^{\top}\in\mathbb{R}^N$.
Suppose the HJB equations \eqref{hjbinf} and \eqref{hjbsup} admit $C^{1,2}([0,T)\times\mathbb{R})$ solutions $\ul{v}$ and $\ol{v}$, which satisfy the growth bound condition:
\begin{align*}
\|\ul{v}(t,z,x)\|^2+\|\ul{v}_z(t,z,x)\sigma(t,z,x)\|^2+\|\mathbf{\ul{v}}+\eta(t,z)\|_{X_{t-}}^2\leq(1+|z|^2)
\end{align*}
and
\begin{align*}
\|\ol{v}(t,z,x)\|^2+\|\ol{v}_z(t,z,x)\sigma(t,z,x)\|^2+\|\mathbf{\ol{v}}+\eta(t,z)\|_{X_{t-}}^2\leq(1+|z|^2).
\end{align*}
Then, for each $x\in\mc{S}$, $\ul{v}$ and $\ol{v}$ coincide with the respective value functions of the control problems, that is, $\ul{V}(t,z,x)=\ul{v}(t,z,x)$ and $\ol{V}(t,z,x)=\ol{v}(t,z,x)$.
\end{mypr}
\begin{proof}
We will prove the infimum case. Using It\^o's formula,
\begin{align*}
d\ul{v}(t,z,x)&=\big[\ul{v}_t(t,z,x)+\mc{L}\ul{v}(t,z,x)+\langle\mathbf{\ul{v}}+\eta(t,z),Ax\rangle]dt+\ul{v}_z(t,z,x)\sigma(t,z,x)dW_t+\langle\mathbf{\ul{v}}+\eta(t,z),dM_t\rangle.
\end{align*}
Since $\ul{v}$ is the solution to \eqref{hjbinf}, then
\begin{align*}
d\ul{v}(t,z,x)&=-\ul{H}(t,x,v_z(t,z,x)\sigma(t,z,x),\mathbf{\ul{v}}+\eta(t,z))dt+\ul{v}_z(t,z,x)\sigma(t,z,x)dW_t+\langle\mathbf{\ul{v}}+\eta(t,z),dM_t\rangle.
\end{align*}
It follows that $\ul{v}$ solves \eqref{V inf}. Since $\ul{v}$ satisfies the growth bound condition, by uniqueness $\ul{v}$ and $\ul{V}$ must agree; similarly, $\varphi_1$ and $\varphi_2$ agree in their respective $M^2(\mc{T};\mathbb{R}^{K\times N},\mc{G}_t)$ and $P^2(\mc{T};\mathbb{R}^{K\times N},\mc{G}_t)$ norms.
\end{proof}
Propositions \ref{V has version}, \ref{minmax principle}, and \ref{verification theorem} yield the following result.
\begin{myth}
The value processes $\ul{V}$ and $\ol{V}$ defined in \eqref{valinf} and \eqref{valsup} provide models of the bid and ask prices of the European call option.
\end{myth}
The result above can provide a connection to risk measures and nonlinear expectations. From \cite{ref2:conic,ref15:conic}, coherent risk measures can be represented as the suprema of expectations over a set of probability measures. Based on this result, the bid and ask prices can be represented as the infimum or supremum, respectively, of expectations over a convex set of probability measures, which is one of the main assumptions in conic finance \cite{ref1:conic}.
On the other hand, from \cite{ref7:sublinear}, nonlinear expectations can be represented as solutions to BSDEs if the driver satisfies certain conditions. One type of nonlinear expectation is the sublinear expectation, which satisfies subadditivity and positive homogeneity. Sublinear expectations can be represented as a supremum of a family of expectations \cite{ref2:sublinear,peng:sublinear}.
\section{Conclusion} A pricing formula for a European call option, whose underlying asset has dynamics governed by the state process, is obtained from a system of partial differential equations. Modelling the bid and ask prices as the infimum and supremum, respectively, of the objective functional in \eqref{objfunc}, we have shown through a dynamic programming principle that these prices can be described as solutions to the BSDEs \eqref{V inf} and \eqref{V sup}. Optimality conditions for the drivers of the BSDEs are then proved in a minimum/maximum principle. Under suitable conditions, we have also shown through a verification theorem that the bid and ask prices are solutions to the HJB equations \eqref{hjbinf} and \eqref{hjbsup}.
Further work may be done to discuss model estimation and model calibration. Numerical methods may be used to obtain estimates of the semi-analytical European option price and the BSDEs associated to the stochastic control problems.
\section*{Acknowledgements} The authors would like to thank the Australian Research Council and NSERC for continuing support.
\bibliographystyle{siam}
\section{Introduction} \label{int}
Cabling on classical braids has been used for establishing the fundamental connections between homotopy groups and the theory of Brunnian braids~\cite{BCWW}, as well as a relationship between associators (for quasi-triangular quasi-Hopf algebras) and (a variant of) the Grothendieck-Teichm\"uller group~\cite{Bar-Natan}. In the paper \cite{BM}, cabling for braids was used to study some properties of the Burau representation. A similar operation (called \textit{naive cabling}) on framed links has been explored in~\cite{LLW}, producing simplicial groups arising from link groups.
The purpose of this article is to explore cabling for virtual braids. Following the ideas in~\cite{CW} on cabling for classical braids, one obtains a cabling operation on the virtual pure braid group $VP_n$ that yields new generators for $VP_n$. More precisely, for $n\geq 3$, the group $VP_n$ is generated by the $n$-strand virtual braids obtained by taking the $(k,l)$-cabling of the standard generators $\lambda_{1,2}$ and $\lambda_{2,1}$ of $VP_2$ and adding $n-k-l$ trivial strands at the end, for $1\leq k\leq n-1$ and $2\leq k+l\leq n$, where the $(k,l)$-cabling of a $2$-strand virtual braid means taking the $k$-cabling of the first strand and the $l$-cabling of the second strand.
In contrast to the classical situation~\cite{CW}, where the $n$-strand braids cabled from the standard generator $A_{1,2}$ of $P_2$ generate a free group of rank $n-1$, the subgroup of $VP_n$ generated by the $n$-strand virtual braids cabled from $\lambda_{1,2}$ and $\lambda_{2,1}$, which is denoted by $T_{n-1}$, is no longer free for $n\geq3$.
For the first nontrivial case $n=3$, a presentation of $T_2$ has been obtained, together with a decomposition theorem for $VP_3$ in terms of cabled generators~\cite{BMVW}.
Our main work in this article is to introduce a new generating set for $VP_n$, define a simplicial group $T_*$, and extend the results on $VP_3$ in~\cite{BMVW} to $VP_4$. The main result is Theorem~\ref{t4.11}, which describes $VP_4$ as an HNN-extension. As a consequence, we obtain a presentation for the group $T_3$ in Theorem~\ref{T_3}. In the subsequent article \cite{BMW} we prove a lifting theorem for virtual braids. It follows from this theorem that if we know the structure of $VP_4$, $T_3$ or $P_4$, then using degeneracy maps we can find the structure of $VP_n$, $T_n$ or $P_n$ for all larger $n$.
The article is organized as follows. In Section \ref{virt}, we give a review of braid groups and virtual braid groups. The simplicial structure on virtual pure braid groups is discussed in Section~\ref{simplicial}. In Section~\ref{s41}, we discuss the cabling operation on the classical pure braid group $P_n$ as a subgroup of $VP_n$. In particular, we give a new presentation of the Artin pure braid group $P_4$ in terms of the cabled generators in Proposition~\ref{prop4.1}. We explore the structures of $VP_3$ and $VP_4$ in Section~\ref{s4}.
\subsection{Acknowledgements}
This article was written while the first author was visiting the College of Mathematics and Information Science, Hebei Normal University. He thanks the administration for good working conditions. The authors would like to thank Roman Mikhailov for interesting ideas and useful discussions, and Yu. Mikhal'chishina, who made the figure.
\section{Braid and virtual braid groups} \label{virt}
\subsection{Braid group} The braid group $B_n$ on $n$ strings is generated by $\sigma_1,\, \sigma_2, \, \ldots , \, \sigma_{n-1}$ and is defined by relations
\begin{align*}
& \sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1},\\
& \sigma_i \sigma_{j} = \sigma_j \sigma_{i},~~|i-j|>1.
\end{align*}
Let $S_n$, $n \geq 1$ be the symmetric group which is generated by $\rho_1, \, \rho_2, \, \ldots , \, \rho_{n-1}$ and is defined by relations
\begin{align*}
& \rho_i^2 = 1,~~~i = 1, 2, \ldots, n-1,\\
& \rho_i \rho_{i+1} \rho_i = \rho_{i+1} \rho_i \rho_{i+1},~~~i = 1, 2, \ldots, n-2,\\
& \rho_i \rho_{j} = \rho_j \rho_{i},~~|i-j|>1.
\end{align*}
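As an elementary sanity check (our illustration, not part of the original text), these relations can be verified numerically for the adjacent transpositions of $S_4$, with permutations encoded as tuples:

```python
# Verify the stated relations for the adjacent transpositions rho_i in S_4.
# A permutation is a tuple p with p[k] = image of k (0-based positions).

def compose(p, q):
    """(p * q)(k) = p(q(k))."""
    return tuple(p[q[k]] for k in range(len(q)))

def transposition(i, n):
    """rho_i: swap positions i-1 and i (1-based generator index i)."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

n = 4
identity = tuple(range(n))
rho = {i: transposition(i, n) for i in range(1, n)}

for i in range(1, n):
    assert compose(rho[i], rho[i]) == identity            # rho_i^2 = 1
for i in range(1, n - 1):
    lhs = compose(rho[i], compose(rho[i + 1], rho[i]))
    rhs = compose(rho[i + 1], compose(rho[i], rho[i + 1]))
    assert lhs == rhs                                     # braid relation
for i in range(1, n):
    for j in range(1, n):
        if abs(i - j) > 1:
            assert compose(rho[i], rho[j]) == compose(rho[j], rho[i])
# all relations hold in S_4
```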
There is a homomorphism $B_n \to S_n$, which sends $\sigma_i$ to $\rho_i$. Its kernel is the pure braid group $P_n$. This group is generated by elements $A_{i,j}$, $1 \leq i < j \leq n$, where
$$
A_{i,i+1} = \sigma_i^2,
$$
$$
A_{i,j} = \sigma_{j-1} \sigma_{j-2} \ldots \sigma_{i+1} \sigma_i^2 \sigma_{i+1}^{-1} \ldots \sigma_{j-2}^{-1} \sigma_{j-1}^{-1},~~~i+1 < j \leq n,
$$
and is defined by relations (where $\varepsilon = \pm 1$):
\begin{align*}
& A_{ik}^{-\varepsilon} A_{kj} A_{ik}^{\varepsilon} = (A_{ij} A_{kj})^{\varepsilon} A_{kj} (A_{ij} A_{kj})^{-\varepsilon},\\
& A_{km}^{-\varepsilon} A_{kj} A_{km}^{\varepsilon} = (A_{kj} A_{mj})^{\varepsilon} A_{kj} (A_{kj} A_{mj})^{-\varepsilon},~~m < j, \\
& A_{im}^{-\varepsilon} A_{kj} A_{im}^{\varepsilon} = [A_{ij}^{-\varepsilon}, A_{mj}^{-\varepsilon}]^{\varepsilon} A_{kj} [A_{ij}^{-\varepsilon}, A_{mj}^{-\varepsilon}]^{-\varepsilon}, ~~i < k < m,\\
& A_{im}^{-\varepsilon} A_{kj} A_{im}^{\varepsilon} = A_{kj}, ~~k < i, m < j~\mbox{or}~ m < k.
\end{align*}
Here and further $[a,b] = a^{-1} b^{-1} a b$ is the commutator of $a$ and $b$, $a^b = b^{-1} a b$ is the conjugation of $a$ by $b$.
There is an epimorphism of $P_n$ onto $P_{n-1}$ given by removing the $n$-th string. Its kernel $U_n = \langle A_{1n}, A_{2n}, \ldots, A_{n-1,n} \rangle$ is a free group of rank $n-1$ and $P_n = U_n \leftthreetimes P_{n-1}$ is a semi-direct product of $U_n$ and $P_{n-1}$. Hence,
$$
P_n = U_n \leftthreetimes (U_{n-1} \leftthreetimes (\ldots \leftthreetimes (U_3 \leftthreetimes U_2)) \ldots ),
$$
is a semi-direct product of free groups and $U_2 = \langle A_{12}\rangle$ is the infinite cyclic group.
\subsection{Virtual braid group}
The virtual braid group $VB_n$ is generated by elements
$$
\sigma_1,\, \sigma_2, \, \ldots , \, \sigma_{n-1}, \, \rho_1, \, \rho_2, \, \ldots , \, \rho_{n-1},
$$
where $\sigma_1,\, \sigma_2, \, \ldots , \, \sigma_{n-1}$ generate the classical braid group $B_n$ and
the elements $\rho_1$, $\rho_2$, $\ldots $, $\rho_{n-1}$ generate the symmetric group
$S_n$. Hence, $VB_n$ is defined by relations of $B_n$, relations of $S_n$
and mixed relations:
$$
\sigma_i \rho_j = \rho_j \sigma_i,~~~|i-j| > 1,
$$
$$
\rho_i \rho_{i+1} \sigma_i = \sigma_{i+1} \rho_i \rho_{i+1}.
$$
As for the classical braid groups, there exists a canonical
epimorphism $VB_n\to S_n$ of $VB_n$ onto the symmetric group, whose
kernel is called the {\it virtual pure braid group} $VP_n$. So we have a
short exact sequence
\begin{equation*}
1 \to VP_n \to VB_n \to S_n \to 1.
\end{equation*}
Define the following elements in $VP_n$:
$$
\lambda_{i,i+1} = \rho_i \, \sigma_i^{-1},~~~
\lambda_{i+1,i} = \rho_i \, \lambda_{i,i+1} \, \rho_i = \sigma_i^{-1} \, \rho_i,
~~~i=1, 2, \ldots, n-1,
$$
$$
\lambda_{ij} = \rho_{j-1} \, \rho_{j-2} \ldots \rho_{i+1} \, \lambda_{i,i+1} \, \rho_{i+1}
\ldots \rho_{j-2} \, \rho_{j-1},
$$
$$
\lambda_{ji} = \rho_{j-1} \, \rho_{j-2} \ldots \rho_{i+1} \, \lambda_{i+1,i} \, \rho_{i+1}
\ldots \rho_{j-2} \, \rho_{j-1}, ~~~1 \leq i < j-1 \leq n-1.
$$
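Since $\sigma_i^{\pm1}$ and $\rho_i$ have the same image (the transposition $\rho_i$) under $VB_n\to S_n$, each $\lambda_{ij}$ indeed lies in the kernel $VP_n$. The following sketch (our illustration, with permutations encoded as tuples) checks this numerically for $n=5$:

```python
# Both sigma_i and rho_i map to the transposition (i, i+1) under VB_n -> S_n,
# so every lambda_{ij} maps to the identity permutation, i.e. lies in VP_n.

def t(i, n):
    """Transposition image of sigma_i, sigma_i^{-1} and rho_i."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def image(word, n):
    """Compose the transposition images of a word of generator indices."""
    out = tuple(range(n))
    for i in word:
        ti = t(i, n)
        out = tuple(ti[out[k]] for k in range(n))
    return out

n = 5
identity = tuple(range(n))
for i in range(1, n):
    for j in range(i + 1, n + 1):
        # lambda_{ij} = rho_{j-1}...rho_{i+1} (rho_i sigma_i^{-1}) rho_{i+1}...rho_{j-1};
        # lambda_{ji} has the same letters, hence the same image.
        conj = list(range(j - 1, i, -1))  # indices j-1, ..., i+1
        word = conj + [i, i] + conj[::-1]
        assert image(word, n) == identity
```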
It is shown in \cite{B} that the group $VP_n\ (n\geq 2)$ admits a
presentation with the generators $\lambda_{ij},\ 1\leq i\neq j\leq n,$
and the following relations:
\begin{align}
& \lambda_{ij}\lambda_{kl}=\lambda_{kl}\lambda_{ij} \label{rel},\\
&
\lambda_{ki}\lambda_{kj}\lambda_{ij}=\lambda_{ij}\lambda_{kj}\lambda_{ki}
\label{relation},
\end{align}
where distinct letters stand for distinct indices.
Like the classical pure braid groups, the groups $VP_n$ admit
semi-direct product decompositions \cite{B}: for $n\geq 2,$ the
$n$-th virtual pure braid group can be decomposed as
\begin{equation}
VP_n=V_{n-1}^*\rtimes VP_{n-1},~~n \geq 2,
\label{eq:s_d_dec}
\end{equation}
where $V_{n-1}^*$ is a subgroup of $VP_{n}$, $V_1^* = F_2$, and $VP_1$ is understood
to be the trivial group.
\section{Simplicial groups $VP_*$ and $T_*$}\label{simplicial}
\subsection{Simplicial sets and simplicial groups} Recall the definition of a simplicial set (see \cite[p.~300]{MP} or \cite{BCWW}); a simplicial group is a simplicial object in the category of groups. A sequence of sets $\mathcal{X} = \{ X_n \}_{n \geq 0}$ is called a
{\it simplicial set} if there are face maps:
$$
d_i : X_n \longrightarrow X_{n-1} ~\mbox{for}~0 \leq i \leq n
$$
and degeneracy maps
$$
s_i : X_n \longrightarrow X_{n+1} ~\mbox{for}~0 \leq i \leq n,
$$
that satisfy the following simplicial identities:
\begin{enumerate}
\item $d_i d_j = d_{j-1} d_i$ if $i < j$,
\item $s_i s_j = s_{j+1} s_i$ if $i \leq j$,
\item $d_i s_j = s_{j-1} d_i$ if $i < j$,
\item $d_j s_j = id = d_{j+1} s_j$,
\item $d_i s_j = s_{j} d_{i-1}$ if $i > j+1$.
\end{enumerate}
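A minimal concrete model in which these identities can be checked directly (our illustrative sketch, not part of the theory): take $X_n$ to be sequences of length $n+1$, with $d_i$ deleting and $s_i$ duplicating the $i$-th entry.

```python
# d_i deletes the i-th entry of a sequence, s_i duplicates it; the five
# simplicial identities can then be verified by direct computation.

def d(i, x):
    return x[:i] + x[i + 1:]

def s(i, x):
    return x[:i + 1] + x[i:]

x = (10, 20, 30, 40, 50)  # an element of X_4
n = len(x) - 1

for j in range(n + 1):
    for i in range(n + 1):
        if i < j:
            assert d(i, d(j, x)) == d(j - 1, d(i, x))   # (1)
        if i <= j:
            assert s(i, s(j, x)) == s(j + 1, s(i, x))   # (2)
        if i < j:
            assert d(i, s(j, x)) == s(j - 1, d(i, x))   # (3)
        if i > j + 1:
            assert d(i, s(j, x)) == s(j, d(i - 1, x))   # (5)
    assert d(j, s(j, x)) == x == d(j + 1, s(j, x))      # (4)
```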
\subsection{The cablings of virtual pure braid groups}
By using the same ideas as in the work~\cite{BCWW,CW} on classical braids, we have a simplicial group
$$
\VAP_* :\ \ \ \ldots\ \begin{matrix}\longrightarrow\\[-3.5mm] \ldots\\[-2.5mm]\longrightarrow\\[-3.5mm]
\longleftarrow\\[-3.5mm]\ldots\\[-2.5mm]\longleftarrow \end{matrix}\ VP_4 \ \begin{matrix}\longrightarrow\\[-3.5mm]\longrightarrow\\[-3.5mm]\longrightarrow\\[-3.5mm]\longrightarrow\\[-3.5mm]\longleftarrow\\[-3.5mm]
\longleftarrow\\[-3.5mm]\longleftarrow
\end{matrix}\ VP_3\ \begin{matrix}\longrightarrow\\[-3.5mm] \longrightarrow\\[-3.5mm]\longrightarrow\\[-3.5mm]
\longleftarrow\\[-3.5mm]\longleftarrow \end{matrix}\ VP_2\ \begin{matrix} \longrightarrow\\[-3.5mm]\longrightarrow\\[-3.5mm]
\longleftarrow \end{matrix}\ VP_1$$
on pure virtual braid groups with $\VAP_n=VP_{n+1}$, the face homomorphism
$$
d_i : \VAP_n=VP_{n+1} \longrightarrow \VAP_{n-1}=VP_n
$$
given by deleting the $(i+1)$th strand for $0\leq i\leq n$, and the degeneracy homomorphism
$$
s_i : \VAP_n=VP_{n+1} \longrightarrow \VAP_{n+1}=VP_{n+2}
$$
given by doubling the $(i+1)$th strand for $0\leq i\leq n$.
The idea of cabling comes from the geometric description, which can be regarded as the formal definition; see Figure 1.
\begin{figure}[h]
\noindent\centering{\includegraphics[height=0.3\textwidth]{s3}}
\caption{Degeneracy map $s_1$ }
\end{figure}
The proof of the following proposition is straightforward.
\begin{prop} \label{p3.1}
The sequence of groups $\VAP_*$ with $\VAP_n=VP_{n+1}$ for $n\geq0$ is a simplicial group under the faces
$d_i : \VAP_{n-1}=VP_{n} \longrightarrow \VAP_{n-2}=VP_{n-1}$, $0\leq i\leq n-1$, and
degeneracies $s_i : \VAP_{n-1}=VP_{n} \longrightarrow \VAP_{n}=VP_{n+1}$, $0\leq i\leq n-1$, which are the group homomorphisms acting on the generators $\lambda_{k,l}$ and $\lambda_{l,k}$, $1 \leq k < l \leq n$, of $VP_{n}$ by the rules
$$
s_i (\lambda_{k,l}) = \left\{
\begin{array}{lcl}
\lambda_{k+1,l+1} & \textrm{ for }&i < k-1,\\
\lambda_{k,l+1} \lambda_{k+1,l+1} & \textrm{ for } &i = k-1, \\
\lambda_{k,l+1} & \textrm{ for } &k-1 < i < l-1,\\
\lambda_{k,l+1} \, \lambda_{k,l} & \textrm{ for }& i = l-1, \\
\lambda_{k,l} & \textrm{ for } &i > l-1,
\end{array}
\right.
$$
$$
s_i (\lambda_{l,k}) = \left\{
\begin{array}{lcl}
\lambda_{l+1,k+1} &\textrm{ for }&i < k-1,\\
\lambda_{l+1,k+1} \lambda_{l+1,k} &\textrm{ for }&i = k-1, \\
\lambda_{l+1,k} & \textrm{ for }&k-1 < i < l-1,\\
\lambda_{l,k} \, \lambda_{l+1,k} & \textrm{ for }&i = l-1, \\
\lambda_{l,k} & \textrm{ for }&i > l-1,
\end{array}
\right.
$$
$$
d_i (\lambda_{k,l}) = \left\{
\begin{array}{lcl}
\lambda_{k-1,l-1} & \textrm{ for } &0 \leq i < k-1, \\
1 & \textrm{ for } & i =k-1,\\
\lambda_{k,l-1} & \textrm{ for } &k-1 < i < l-1, \\
1 & \textrm{ for } & i =l-1,\\
\lambda_{k,l} & \textrm{ for } &l-1 < i \leq n-1, \\
\end{array}
\right.
$$
$$
d_i (\lambda_{l,k}) = \left\{
\begin{array}{lcl}
\lambda_{l-1,k-1} & \textrm{ for } &0 \leq i < k-1, \\
1 & \textrm{ for } & i = k-1,\\
\lambda_{l-1,k} & \textrm{ for } &k-1 < i < l-1, \\
1 & \textrm{ for } & i = l-1,\\
\lambda_{l,k} & \textrm{ for } &l-1 < i \leq n-1. \\
\end{array}
\right.
$$
\hfill $\Box$
\end{prop}
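The case formulas in Proposition~\ref{p3.1} can be checked mechanically. The following sketch (our illustration; generators $\lambda_{a,b}$ encoded as ordered pairs, words as lists of pairs) implements $s_i$ and $d_i$ on generators and verifies the simplicial identities $d_js_j=\mathrm{id}=d_{j+1}s_j$ on all generators of $VP_5$:

```python
# Encode lambda_{a,b} as the ordered pair (a, b) of distinct strands.
# s_i doubles strand m = i+1, d_i deletes strand m = i+1, following the
# case formulas of Proposition 3.1.

def s(i, g):
    a, b = g
    m = i + 1
    if a != m and b != m:
        sh = lambda x: x + 1 if x > m else x
        return [(sh(a), sh(b))]
    if a < b:  # lambda_{k,l}
        return [(a, b + 1), (a + 1, b + 1)] if a == m else [(a, b + 1), (a, b)]
    # lambda_{l,k}
    return [(a + 1, b + 1), (a + 1, b)] if b == m else [(a, b), (a + 1, b)]

def d(i, g):
    a, b = g
    m = i + 1
    if a == m or b == m:
        return []  # the generator is sent to 1
    sh = lambda x: x - 1 if x > m else x
    return [(sh(a), sh(b))]

def apply_d(i, word):
    return [h for g in word for h in d(i, g)]

n = 5  # check on the generators of VP_5
gens = [(a, b) for a in range(1, n + 1) for b in range(1, n + 1) if a != b]
for g in gens:
    for j in range(n):
        assert apply_d(j, s(j, g)) == [g]      # d_j s_j = id
        assert apply_d(j + 1, s(j, g)) == [g]  # d_{j+1} s_j = id
```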
Let $T_*$ be the smallest simplicial subgroup of $\VAP_*$ with the $1$-simplex group $T_1=\VAP_1=VP_2$. It is routine to see that the group $T_n$ as a subgroup of $VP_{n+1}$ can be constructed recursively as follows:
\begin{enumerate}
\item[] $T_0=\{1\}$, $T_1=VP_2$, and $T_{n+1} = \langle s_0(T_n), s_1(T_n), \ldots,
s_{n}(T_n) \rangle. $
\end{enumerate}
Let
\begin{equation}\label{a_{kl}}
\begin{array}{rcl}
a_{k,n+1-k}
&=&s_{n-1}s_{n-2}\cdots s_{k}\hat{s}_{k-1}s_{k-2}\cdots s_0\lambda_{1,2},\\
\end{array}
\end{equation}
\begin{equation}\label{b_{kl}}
\begin{array}{rcl}
b_{k,n+1-k}
&=&s_{n-1}s_{n-2}\cdots s_{k}\hat{s}_{k-1}s_{k-2}\cdots s_0\lambda_{2,1}\\
\end{array}
\end{equation}
be the elements of $VP_{n+1}$ for $1\leq k\leq n$, where the hat $\hat{s}_{k-1}$ indicates that the factor $s_{k-1}$ is omitted.
By direct computations, we have the following formulae.
\begin{equation}\label{equation3.1}
a_{n-k,k} = \left\{
\begin{array}{lr}
\lambda_{1n} \lambda_{2n} \ldots \lambda_{n-1,n} & \mbox{for}~k = 1,\\
\lambda_{1n} \lambda_{2n} \ldots \lambda_{n-k,n} a_{n-k,k-1} & \mbox{for}~1 < k < n, \\
\lambda_{1n} a_{1,n-2} & \mbox{for}~k = n-1,
\end{array}
\right.
\end{equation}
\begin{equation}\label{equation3.2}
b_{n-k,k} = \left\{
\begin{array}{lr}
\lambda_{n,n-1} \lambda_{n,n-2} \ldots \lambda_{n1} & \mbox{for}~k = 1,\\
b_{n-k,k-1} \lambda_{n,n-k} \lambda_{n,n-k-1} \ldots \lambda_{n1} & \mbox{for}~1 < k < n, \\
b_{1,n-2} \lambda_{n1} & \mbox{for}~k = n-1.
\end{array}
\right.
\end{equation}
Moreover the generators $\lambda_{ij}$ can be written in terms of $a_{k,l}$ and $b_{s,t}$ as follows:
\begin{equation}\label{equation3.3}
\lambda_{kn} = \left\{
\begin{array}{lr}
a_{1,n-1} \, a_{1,n-2}^{-1} & \mbox{for}~k = 1,\\
a_{k-1,n-k} \, a_{k-1,n-k+1}^{-1} \, a_{k,n-k} \, a_{k,n-k-1}^{-1} & \mbox{for}~1 < k < n, \\
a_{n-2,1} \, a_{n-2,2}^{-1} \, a_{n-1,1} & \mbox{for}~k = n-1,
\end{array}
\right.
\end{equation}
\begin{equation}\label{equation3.4}
\lambda_{nk} = \left\{
\begin{array}{lr}
b_{1,n-2}^{-1} \, b_{1,n-1} & \mbox{for}~k = 1,\\
b_{k,n-k-1}^{-1} \, b_{k,n-k} \, b_{k-1,n-k+1}^{-1} \, b_{k-1,n-k} & \mbox{for}~1 < k < n, \\
b_{n-1,1} \, b_{n-2,2}^{-1} \, b_{n-2,1} & \mbox{for}~k = n-1.
\end{array}
\right.
\end{equation}
From the above formulae, we have the following proposition.
\begin{prop} \label{p3.3}
Consider $VP_k$ as a subgroup of $VP_{k+1}$ by adding a trivial strand at the end. Then
\begin{enumerate}
\item The subgroup $T_{n-1}$ of $VP_n$, $n \geq 3$, is generated by elements $a_{k,l}$, $b_{k,l}$, $k+l = n$.
\item The group $VP_n=\la T_1, T_2,\ldots, T_{n-1}\ra$ is generated by $a_{k,l}$ and $b_{k,l}$ for $2\leq k+l\leq n$, $1\leq k,l\leq n-1$.
\item $VP_{n+1}=\la VP_n, s_0VP_n, s_1 VP_n,\ldots, s_{n-1}VP_n \ra$ for $n\geq 2$.\hfill $\Box$
\end{enumerate}
\end{prop}
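The formulae \eqref{equation3.1}--\eqref{equation3.4} can be tested against the definitions \eqref{a_{kl}}--\eqref{b_{kl}} by iterating the degeneracy maps of Proposition~\ref{p3.1}. The following sketch (our illustration) computes $a_{k,l}$ and $b_{k,l}$ as words in the generators $\lambda_{a,b}$ (encoded as ordered pairs) and checks several small instances at the level of words, which suffices for these cases:

```python
# Compute a_{k,n+1-k} = s_{n-1}...s_k s_{k-2}...s_0 (lambda_{1,2}) (factor
# s_{k-1} omitted) and its analogue b_{k,n+1-k} from lambda_{2,1}, using the
# degeneracy formulas of Proposition 3.1.

def s(i, g):
    a, b = g
    m = i + 1
    if a != m and b != m:
        sh = lambda x: x + 1 if x > m else x
        return [(sh(a), sh(b))]
    if a < b:
        return [(a, b + 1), (a + 1, b + 1)] if a == m else [(a, b + 1), (a, b)]
    return [(a + 1, b + 1), (a + 1, b)] if b == m else [(a, b), (a + 1, b)]

def cabled(k, n, base):
    """Word for a_{k,n+1-k} (base=(1,2)) or b_{k,n+1-k} (base=(2,1)) in VP_{n+1}."""
    w = [base]
    for i in list(range(k - 1)) + list(range(k, n)):  # s_0..s_{k-2}, s_k..s_{n-1}
        w = [h for g in w for h in s(i, g)]
    return w

# a_{2,1} = lambda_{13} lambda_{23} and a_{1,2} = lambda_{13} lambda_{12} in VP_3
assert cabled(2, 2, (1, 2)) == [(1, 3), (2, 3)]
assert cabled(1, 2, (1, 2)) == [(1, 3), (1, 2)]
# b_{2,1} = lambda_{32} lambda_{31} and b_{1,2} = lambda_{21} lambda_{31}
assert cabled(2, 2, (2, 1)) == [(3, 2), (3, 1)]
assert cabled(1, 2, (2, 1)) == [(2, 1), (3, 1)]
# a_{3,1} and b_{3,1} in VP_4
assert cabled(3, 3, (1, 2)) == [(1, 4), (2, 4), (3, 4)]
assert cabled(3, 3, (2, 1)) == [(4, 3), (4, 2), (4, 1)]
# a_{1,3} is lambda_{14} followed by a_{1,2}, so a_{1,3} a_{1,2}^{-1} = lambda_{14}, cf. (3.3)
a13, a12 = cabled(1, 3, (1, 2)), cabled(1, 2, (1, 2))
assert a13[:1] == [(1, 4)] and a13[1:] == a12
```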
Let $c_{ij} = b_{ij} a_{ij}$. Put
$$
T_m^c = \langle c_{ij}~|~i+j = m+1 \rangle,~~m = 1, 2, \ldots, n-1.
$$
Notice that
$$
c_{1,1}=b_{1,1}a_{1,1}=\lambda_{2,1}\lambda_{1,2}=\sigma_1^{-1}\rho_1\rho_1\sigma_1^{-1}=\sigma_1^{-2}
$$
is a generator for $P_2$ as a subgroup of $VP_2$. The cabled braid $c_{i,j}$ lies in $P_{i+j}\leq VP_{i+j}$. It is straightforward to see that the following proposition holds for classical braids.
\begin{prop}\label{proposition3.3}
Consider $P_k$ as a subgroup of $P_{k+1}$ by adding a trivial strand at the end. Then
\begin{enumerate}
\item The subgroup $T^c_{n-1}$ of $P_n$, $n \geq 3$, is generated by elements $c_{k,l}$, $k+l = n$.
\item The group $P_n=\la T^c_1, T^c_2,\ldots, T^c_{n-1}\ra$ is generated by $c_{k,l}$ for $2\leq k+l\leq n$, $1\leq k,l\leq n-1$.
\item $P_{n+1}=\la P_n, s_0P_n, s_1 P_n,\ldots, s_{n-1}P_n \ra$ for $n\geq 2$.\hfill $\Box$
\end{enumerate}
\end{prop}
\section{Cabling of the classical pure braid group} \label{s41}
In the present section we find a set of defining relations of $P_4$ in terms of the generators $c_{ij}$ from Proposition~\ref{proposition3.3}.
\begin{prop}\label{prop4.1} The group $P_4$ is generated by elements
$$
c_{11},~~ c_{21},~~ c_{12},~~ c_{31},~~c_{22},~~ c_{13}
$$
and is defined by relations (where $\varepsilon = \pm 1$):
$$
c_{21}^{c_{11}^{\varepsilon}} = c_{21},~~~c_{12}^{c_{11}^{\varepsilon}} = c_{12}^{c_{21}^{-\varepsilon}},~~~c_{31}^{c_{11}^{\varepsilon}} = c_{31},~~~c_{22}^{c_{11}^{\varepsilon}} = c_{22},~~~c_{13}^{c_{11}^{\varepsilon}} = c_{13}^{c_{22}^{-\varepsilon}},
$$
$$
c_{31}^{c_{21}^{\varepsilon}} = c_{31},~~~c_{22}^{c_{21}^{\varepsilon}} = c_{22}^{c_{31}^{-\varepsilon}},~~~c_{13}^{c_{21}^{\varepsilon}} = c_{13}^{c_{22}^{\varepsilon} c_{31}^{-\varepsilon}},
$$
$$
c_{31}^{c_{12}^{\varepsilon}} = c_{31},~~~c_{13}^{c_{12}^{\varepsilon}} = c_{13}^{c_{31}^{-\varepsilon}},
$$
$$
c_{22}^{c_{12}^{-1}} = [c_{31}, c_{13}^{-1}] \, [c_{13}^{-1}, c_{22}] \, c_{22} \, [c_{21}^2, c_{12}^{-1}] = c_{13}^{c_{31}} c_{13}^{-c_{22}} c_{22} [c_{21}^2, c_{12}^{-1}],
$$
$$
c_{22}^{c_{12}} = [c_{12}, c_{21}^{-2}] \, c_{22} \, [c_{22}^{-3}, c_{13}] \, [c_{13}, c_{31}^{-1}] = [c_{12}, c_{21}^{-2}] \, c_{13}^{-c_{22}^{-2}} \, c_{22} \, c_{13}^{c_{31}^{-1}} .
$$
\end{prop}
\begin{proof}
We first rewrite the generators $c_{ij}$ in terms of the standard generators of $P_4$. We have
$$
c_{11} = b_{11} a_{11} = \lambda_{21} \lambda_{12} = \sigma_1^{-1} \rho_1 \rho_1 \sigma_1^{-1} = \sigma_1^{-2} = A_{12}^{-1},
$$
$$
c_{21} = b_{21} a_{21} = \lambda_{32} (\lambda_{31} \lambda_{13}) \lambda_{23} = \sigma_2^{-1} \lambda_{21} \lambda_{12} \sigma_2^{-1} = \sigma_2^{-1} A_{12}^{-1} \sigma_2^{-1} = \sigma_2^{-2} \sigma_2 A_{12}^{-1} \sigma_2^{-1} = A_{23}^{-1} A_{13}^{-1},
$$
$$
c_{12} = b_{12} a_{12} = \lambda_{21} (\lambda_{31} \lambda_{13}) \lambda_{12} = \sigma_1^{-1} \rho_1 \rho_2 \lambda_{21} \lambda_{12} \rho_2 \rho_1 \sigma_1^{-1} = \sigma_1^{-1} \rho_1 \lambda_{31} \lambda_{13} \rho_1 \sigma_1^{-1} =
$$
$$
= \sigma_1^{-1} \lambda_{32} \lambda_{23} \sigma_1^{-1} =
\sigma_1^{-1} \sigma_2^{-1} \sigma_2^{-1} \sigma_1^{-1} =
\sigma_1^{-1} A_{23}^{-1} \sigma_1^{-1} = (\sigma_1^{-1} A_{23}^{-1} \sigma_1) A_{12}^{-1} = A_{13}^{-1} A_{12}^{-1}.
$$
And analogously,
$$
c_{31} = A_{34}^{-1} A_{24}^{-1} A_{14}^{-1},~~~c_{22} = A_{24}^{-1} A_{14}^{-1} A_{23}^{-1} A_{13}^{-1},~~~c_{13} = A_{14}^{-1} A_{13}^{-1} A_{12}^{-1}.
$$
In particular, we see that
$$
P_2 = T_1^c = \langle A_{12} \rangle,
$$
$$
P_3 = \langle T_1^c, T_2^c \rangle = \langle c_{11}, c_{21}, c_{12} \rangle,
$$
$$
P_4 = \langle T_1^c, T_2^c, T_3^c \rangle = \langle c_{11}, c_{21}, c_{12}, c_{31}, c_{22}, c_{13} \rangle.
$$
To find a set of defining relations, we express the old generators in the new ones:
$$
A_{12} = c_{11}^{-1},~~A_{13} = c_{11} c_{12}^{-1},~~A_{23} = c_{12} c_{21}^{-1} c_{11}^{-1}.
$$
$$
A_{14} = c_{12} c_{13}^{-1},~~A_{24} = c_{13} c_{12}^{-1} c_{21} c_{22}^{-1},~~A_{34} = c_{22} c_{21}^{-1} c_{31}^{-1}.
$$
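These substitution formulas can be sanity-checked by free reduction alone: for every expression above except the one for $A_{23}$ (which holds only modulo the relations of $P_4$), substituting the words for the $c_{ij}$ in the $A_{kl}$ makes the right-hand side reduce freely to the stated generator. A minimal sketch using SymPy's free groups (the variable names are ours):

```python
from sympy.combinatorics.free_groups import free_group

# free group on the standard generators A_{kl} of P_4
F, A12, A13, A14, A23, A24, A34 = free_group("A12, A13, A14, A23, A24, A34")

# the generators c_{ij} expressed in the A_{kl} (formulas obtained above)
c11 = A12**-1
c21 = A23**-1 * A13**-1
c12 = A13**-1 * A12**-1
c31 = A34**-1 * A24**-1 * A14**-1
c22 = A24**-1 * A14**-1 * A23**-1 * A13**-1
c13 = A14**-1 * A13**-1 * A12**-1

# the inverse expressions reduce freely to the old generators ...
assert c11**-1 == A12
assert c11 * c12**-1 == A13
assert c12 * c13**-1 == A14
assert c13 * c12**-1 * c21 * c22**-1 == A24
assert c22 * c21**-1 * c31**-1 == A34
# ... except A_{23}, which requires the defining relations of P_4
assert c12 * c21**-1 * c11**-1 != A23
print("ok")
```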
Rewriting the defining relations of $P_4$ in the new generators, we obtain the required set of defining relations.
Let us prove the formulas for $c_{22}^{c_{12}^{-1}}$ and $c_{22}^{c_{12}}$, assuming that all the other formulas hold; the proofs of the remaining formulas are not difficult. Take the relation
$$
A_{13}^{-1} A_{24} A_{13} = [A_{14}^{-1}, A_{34}^{-1}] A_{24} [A_{34}^{-1}, A_{14}^{-1}].
$$
After cancellations, in the new generators this relation takes the form
$$
\left( c_{13} c_{12}^{-1} c_{21} c_{22}^{-1} \right)^{c_{11}} = c_{13}^{-1} c_{22} c_{21}^{-1} c_{31}^{-1} c_{13} c_{12}^{-1} c_{31} c_{21} c_{22}^{-1} c_{13} c_{31}^{-1} c_{13}^{-1} c_{31} c_{21} c_{22}^{-1} c_{13}.
$$
Using the formulas of conjugation by $c_{11}^{-1}$, we get
$$
c_{22} c_{13} c_{22}^{-1} c_{21} c_{12}^{-1} c_{22}^{-1} = c_{13}^{-1} c_{22} (c_{21}^{-1} c_{31}^{-1} c_{13}) c_{12}^{-1} c_{31} c_{21} c_{22}^{-1} c_{13} c_{31}^{-1} c_{13}^{-1} c_{31} (c_{21} c_{22}^{-1} c_{13}).
$$
Rewrite the terms in the brackets in the form
$$
c_{21}^{-1} c_{31}^{-1} c_{13} = (c_{31}^{-1} c_{13})^{c_{21}} c_{21}^{-1} = c_{22}^{-1} c_{13} c_{22} c_{31}^{-1} c_{21}^{-1},
$$
$$
c_{21} c_{22}^{-1} c_{13} = (c_{22}^{-1} c_{13})^{c_{21}^{-1}} c_{21} = c_{31}^{-1} c_{13} c_{22}^{-1} c_{31} c_{21},
$$
we get
$$
(c_{13} c_{22}^{-1} c_{21}) c_{12}^{-1} c_{22}^{-1} = (c_{31}^{-1} c_{21}^{-1} c_{12}^{-1} c_{31}) c_{21} (c_{22}^{-1} c_{13} c_{31}^{-1} c_{22}^{-1} c_{31} c_{21}).
$$
Using the conjugation rules, rewrite the term in the brackets in the form
$$
c_{13} c_{22}^{-1} c_{21} = c_{21} (c_{13} c_{22}^{-1})^{c_{21}} = c_{21} c_{31} c_{22}^{-1} c_{13} c_{31}^{-1},~~~
c_{31}^{-1} c_{21}^{-1} c_{12}^{-1} c_{31} = (c_{21}^{-1} c_{12}^{-1})^{c_{31}} = c_{21}^{-1} c_{12}^{-1},
$$
$$
c_{22}^{-1} c_{13} c_{31}^{-1} c_{22}^{-1} c_{31} c_{21}= c_{21} (c_{22}^{-1} c_{13} c_{31}^{-1} c_{22}^{-1} c_{31})^{c_{21}} = c_{21} c_{31} c_{22}^{-2} c_{13} c_{22} c_{31}^{-1} c_{22}^{-1},
$$
then
$$
c_{21} c_{31} c_{22}^{-1} c_{13} c_{31}^{-1} c_{12}^{-1} = c_{21}^{-1} c_{12}^{-1} c_{21}^{2} c_{31} c_{22}^{-2} c_{13} c_{22} c_{31}^{-1}.
$$
Conjugating both sides by $c_{31}$ and using the fact that it commutes with $c_{12}$ and $c_{21}$, we get
\begin{equation} \label{conj}
c_{21} (c_{22}^{-1} c_{13} c_{12}^{-1}) = c_{21}^{-1} c_{12}^{-1} c_{21}^{2} c_{22}^{-2} c_{13} c_{22}.
\end{equation}
Transforming the expression in the brackets, we get
$$
c_{21} c_{12}^{-1} c_{22}^{-c_{12}^{-1}} c_{31}^{-1} c_{13} c_{31} = c_{21}^{-1} c_{12}^{-1} c_{21}^{2} c_{22}^{-2} c_{13} c_{22}.
$$
Multiplying both sides by $c_{21}^{-2} c_{12} c_{21}$ on the left and by $c_{31}^{-1} c_{13}^{-1} c_{31}$ on the right, we get
$$
[c_{21}^{2}, c_{12}^{-1}] c_{22}^{-c_{12}^{-1}} = c_{22}^{-1} [c_{22}, c_{13}^{-1}] [c_{13}^{-1}, c_{31}].
$$
From this relation it follows that
$$
c_{22}^{c_{12}^{-1}} = [c_{31}, c_{13}^{-1}] \, [c_{13}^{-1}, c_{22}] \, c_{22} \, [c_{21}^2, c_{12}^{-1}].
$$
\bigskip
To find the conjugation formula for $c_{22}^{c_{12}}$, we use (\ref{conj}):
$$
c_{21} c_{22}^{-1} c_{13} c_{12}^{-1} = c_{21}^{-1} c_{12}^{-1} c_{21}^{2} (c_{22}^{-2} c_{13} c_{22}).
$$
Since
$$
(c_{22}^{-1} c_{13})^{c_{11}^{-1}} = c_{22}^{-2} c_{13} c_{22},
$$
then
$$
c_{21} (c_{22}^{-1} c_{13}) c_{12}^{-1} = c_{21}^{-1} c_{12}^{-1} c_{21}^{2} (c_{22}^{-1} c_{13})^{c_{11}^{-1}}.
$$
Multiplying both sides by $c_{21}^{-2} c_{12} c_{21}$ on the left:
$$
[c_{21}^2, c_{12}^{-1}] (c_{22}^{-1} c_{13})^{c_{12}^{-1}} = (c_{22}^{-1} c_{13})^{c_{11}^{-1}} \Leftrightarrow
(c_{22}^{-1} c_{13})^{c_{12}^{-1}} = [c_{12}^{-1}, c_{21}^{2}] (c_{22}^{-1} c_{13})^{c_{11}^{-1}}.
$$
Using the relation
$$
[c_{12}^{-1}, c_{21}^{2}] = [c_{12}^{-1}, c_{11}^{-2}]
$$
we have
$$
c_{22}^{-1} c_{13} c_{12}^{-1} = c_{11}^{2} c_{12}^{-1} c_{11}^{-1} (c_{22}^{-1} c_{13}) c_{11}^{-1}.
$$
Multiplying both sides by $c_{12}$ on the right:
$$
c_{22}^{-1} c_{13} = c_{11}^{2} c_{12}^{-1} (c_{11}^{-1} c_{22}^{-1} c_{13}) c_{11}^{-1} c_{12}.
$$
Rewrite the expression in the brackets:
$$
c_{11}^{-1} c_{22}^{-1} c_{13} = (c_{22}^{-1} c_{13})^{c_{11}} c_{11}^{-1} = (c_{13} c_{22}^{-1}) c_{11}^{-1},
$$
then
$$
c_{22}^{-1} c_{13} = c_{11}^{2} c_{12}^{-1} (c_{13} c_{22}^{-1}) c_{11}^{-2} c_{12} \Leftrightarrow
c_{22}^{-1} c_{13} = c_{11}^{2} c_{13}^{c_{12}} c_{22}^{-c_{12}} c_{12}^{-1} c_{11}^{-2} c_{12}.
$$
Multiplying both sides by $c_{13}^{-c_{12}} c_{11}^{-2}$ on the left and by $c_{12}^{-1} c_{11}^{2} c_{12}$ on the right, we get
$$
c_{13}^{-c_{12}} (c_{11}^{-2} c_{22}^{-1} c_{13}) c_{12}^{-1} c_{11}^{2} c_{12} = c_{22}^{-c_{12}}.
$$
Evaluate the expression in the brackets:
$$
c_{11}^{-2} c_{22}^{-1} c_{13} = (c_{22}^{-1} c_{13})^{c_{11}^{2}} c_{11}^{-2} = c_{22}^{-1} c_{13}^{c_{22}^{-2}} c_{11}^{-2}.
$$
Then
$$
c_{13}^{-c_{12}} c_{22}^{-1} c_{13}^{c_{22}^{-2}} [c_{11}^{2}, c_{12}] = c_{22}^{-c_{12}}.
$$
Using the relations
$$
c_{13}^{-c_{12}} = c_{13}^{-c_{31}^{-1}},~~~[c_{11}^{2}, c_{12}] = [c_{21}^{-2}, c_{12}],
$$
we get
$$
c_{13}^{-c_{31}^{-1}} c_{22}^{-1} c_{13}^{c_{22}^{-2}} [c_{21}^{-2}, c_{12}] = c_{22}^{-c_{12}}.
$$
From this relation the required relation follows.
\end{proof}
\subsection{Decomposition of $P_4$} In the paper \cite{CW} it was proved that the Milnor simplicial group $F[S^1]$ embeds into the simplicial group $AP_*$. The main difficulty in that theorem is the proof that the groups $T_n^c$, $n = 2, 3, \ldots$, are free; to do this, the authors used some Lie algebras. In this section we prove that $T_2^c$ and $T^c_3$ are free groups using group-theoretic methods. Note that $T_1^c$ is infinite cyclic.
From Proposition \ref{prop4.1} it follows that $P_3$ has the following presentation:
$$
P_3 = \langle c_{11},~~ c_{21},~~ c_{12}~||~
c_{21}^{c_{11}^{\varepsilon}} = c_{21},~~~c_{12}^{c_{11}^{\varepsilon}} = c_{12}^{c_{21}^{-\varepsilon}} \rangle.
$$
Hence, $P_3 = T_2^c \leftthreetimes \mathbb{Z}$, where $T_2^c = \langle c_{21}, c_{12} \rangle$ is a free group and $\mathbb{Z} = \langle c_{11} \rangle$.
To prove that $T_3^c$ is free,
we define a homomorphism of $P_4$ onto the free abelian group of rank 2:
$$
\varphi : P_4 \to \langle x, y ~||~ x y = y x \rangle = \mathbb{Z}^2,
$$
by the rule:
$$
\varphi(c_{11}) = x,~~\varphi(c_{21}) = y,~~\varphi(c_{12}) = e,~~\varphi(c_{31}) = e,~~\varphi(c_{22}) = e,~~\varphi(c_{13}) = e,
$$
where $e$ is the identity element of the abelian group.
Note that the subgroup of $P_4$ generated by $c_{11}$ and $c_{21}$ is free abelian of rank 2. Hence, for the short exact sequence
$$
1 \to Ker (\varphi) \to P_4 \to \mathbb{Z}^2 \to 1,
$$
there exists a section $s : \mathbb{Z}^2 \to P_4$, $s(x) = c_{11}$, $s(y) = c_{21}$, and we have the decomposition $P_4 = Ker (\varphi) \leftthreetimes \mathbb{Z}^2$ of $P_4$ into a semidirect product.
Let us find a set of generators and defining relations for $Ker (\varphi)$. Put
$$
\Lambda = \{ c_{11}^k c_{21}^l ~|~k, l \in \mathbb{Z} \}
$$
which is a set of coset representatives of $P_4$ with respect to $s(\mathbb{Z}^2)$. Then $Ker (\varphi)$ is generated by the elements
$$
c_{12}^{\lambda},~~c_{31}^{\lambda},~~c_{22}^{\lambda},~~c_{13}^{\lambda},~~\mbox{where}~~\lambda \in \Lambda.
$$
Using the following defining relations of $P_4$:
$$
c_{21}^{c_{11}^{\varepsilon}} = c_{21},~~~c_{12}^{c_{11}^{\varepsilon}} = c_{12}^{c_{21}^{-\varepsilon}},~~~c_{31}^{c_{11}^{\varepsilon}} = c_{31},~~~c_{22}^{c_{11}^{\varepsilon}} = c_{22},~~~c_{13}^{c_{11}^{\varepsilon}} = c_{13}^{c_{22}^{-\varepsilon}},
$$
$$
c_{31}^{c_{21}^{\varepsilon}} = c_{31},~~~c_{22}^{c_{21}^{\varepsilon}} = c_{22}^{c_{31}^{-\varepsilon}},~~~c_{13}^{c_{21}^{\varepsilon}} = c_{13}^{c_{22}^{\varepsilon} c_{31}^{-\varepsilon}},
$$
rewrite the generators of $Ker (\varphi)$ in the form
$$
c_{12}^{c_{11}^k c_{21}^l} = c_{12}^{c_{21}^{l-k}},~~~c_{31}^{c_{11}^k c_{21}^l} = c_{31},~~~c_{22}^{c_{11}^k c_{21}^l} = c_{22}^{c_{31}^{-l}},~~~c_{13}^{c_{11}^k c_{21}^l} = c_{13}^{c_{22}^{l-k} c_{31}^{-l}}.
$$
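For example, for $c_{13}$ with $k = 0$, $l = 2$ these formulas are obtained by iterating the conjugation rules:
$$
c_{13}^{c_{21}^{2}} = \left( c_{13}^{c_{22} c_{31}^{-1}} \right)^{c_{21}}
= \left( c_{13}^{c_{21}} \right)^{\left( c_{22} c_{31}^{-1} \right)^{c_{21}}}
= c_{13}^{c_{22} c_{31}^{-1} \cdot c_{31} c_{22} c_{31}^{-2}}
= c_{13}^{c_{22}^{2} c_{31}^{-2}},
$$
since $c_{22}^{c_{21}} = c_{22}^{c_{31}^{-1}} = c_{31} c_{22} c_{31}^{-1}$ and $c_{31}^{c_{21}} = c_{31}$; the general case follows by induction on $k$ and $l$.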
Hence, $Ker (\varphi)$ is generated by $c_{31}$, $c_{22}$, $c_{13}$ and the infinite set of elements $c_{12}^{c_{21}^{m}}$, $m \in \mathbb{Z}$. For simplicity we denote $d_m = c_{12}^{c_{21}^{m}}$.
To find a set of defining relations of $Ker (\varphi)$, we take the last relations of $P_4$:
$$
c_{31}^{c_{12}} = c_{31},~~~c_{13}^{c_{12}} = c_{13}^{c_{31}^{-1}}.
$$
$$
c_{22}^{c_{12}^{-1}} = c_{13}^{c_{31}} c_{13}^{-c_{22}} c_{22} [c_{21}^2, c_{12}^{-1}].
$$
For simplicity, instead of the last relation we take relation
(\ref{conj}):
$$
c_{21} c_{22}^{-1} c_{13} c_{12}^{-1} = c_{21}^{-1} c_{12}^{-1} c_{21}^{2} (c_{22}^{-2} c_{13} c_{22}),
$$
which is equivalent to the last one. Conjugating these relations by coset representatives $\lambda \in \Lambda$, we get a set of defining relations for $Ker (\varphi)$.
1) Conjugating the relation $c_{12}^{-1} c_{31} c_{12} = c_{31}$ by $c_{11}^k c_{21}^l$, we get
$$
c_{12}^{-c_{21}^{l-k}} c_{31} c_{12}^{c_{21}^{l-k}} = c_{31}.
$$
Putting $m = l-k$, we get the set of relations
$$
d_m^{-1} c_{31} d_m = c_{31},~~m \in \mathbb{Z}.
$$
2) Conjugating the relation $c_{12}^{-1} c_{13} c_{12} = c_{31} c_{13} c_{31}^{-1}$ by $c_{11}^k c_{21}^l$, we get
$$
c_{12}^{-c_{21}^{l-k}} c_{13}^{c_{22}^{l-k} c_{31}^{-l}} c_{12}^{c_{21}^{l-k}} = c_{31} c_{13}^{c_{22}^{l-k} c_{31}^{-l}} c_{31}^{-1}.
$$
Conjugating this relation by $c_{31}^{l}$ and putting $m = l-k$, we get the set of relations
$$
d_m^{-1} c_{13}^{c_{22}^{m} } d_m = c_{13}^{c_{22}^{m} c_{31}^{-1}},~~m \in \mathbb{Z}.
$$
3) Conjugating the relation $c_{22}^{-1} c_{13} c_{12}^{-1} = c_{12}^{- c_{21}^{2}} c_{22}^{-2} c_{13} c_{22}$ by $c_{11}^k c_{21}^l$, we get
$$
c_{22}^{-c_{31}^{-l}} c_{13}^{c_{22}^{l-k} c_{31}^{-l}} c_{12}^{-c_{21}^{l-k}} = c_{12}^{- c_{21}^{l-k+2}} \left( c_{22}^{c_{31}^{-l}} \right)^{-2} c_{13}^{c_{22}^{l-k} c_{31}^{-l}} c_{22}^{c_{31}^{-l}}.
$$
Conjugating this relation by $c_{31}^{l}$ and putting $m = l-k$, we get the set of relations
$$
c_{22}^{-1} c_{13}^{c_{22}^{m}} d_m^{-1} = d_{m+2}^{-1} c_{22}^{-2} c_{13}^{c_{22}^{m}} c_{22}.
$$
Hence, we have proved
\begin{lem}
$Ker (\varphi)$ is generated by
$$
c_{31},~~c_{22},~~c_{13},~~d_m,~~m \in \mathbb{Z},
$$
and is defined by relations\\
1)~~~ $d_m^{-1} \, c_{31} \, d_m = c_{31},$\\
2)~~~ $ d_m^{-1} \, c_{13}^{c_{22}^{m}} \, d_m = c_{13}^{c_{22}^{m} c_{31}^{-1}},$\\
3)~~~ $d_{m+2} = c_{22}^{-1} \, c_{13}^{c_{22}^{m+1}} \, d_m \, c_{13}^{-c_{22}^{m}} \, c_{22}$\\
for $m \in \mathbb{Z}$.
\end{lem}
Using the set of relations 3), we express all generators $d_m$, $m \not= 0, 1$, as words in the generators
$$
d_0, d_1, c_{31}, c_{22}, c_{13}.
$$
If $m = 2 m_1 \geq 0$, then from 3) we have
$$
d_m = c_{22}^{-m_1} c_{13}^{c_{22}^{m_1}} c_{13}^{c_{22}^{m_1-1}} \ldots c_{13}^{c_{22}} \, d_0 \, c_{13}^{-1} \, c_{13}^{-c_{22}} \ldots c_{13}^{-c_{22}^{m_1-1}} c_{22}^{m_1}.
$$
If $m = 2 m_1 < 0$, then rewrite 3) in the form
$$
d_{m} = c_{22} \, c_{13}^{-c_{22}^{m+2}} \, d_{m+2} \, c_{13}^{c_{22}^{m+1}} \, c_{22}^{-1}
$$
and by induction we get
$$
d_m = c_{22}^{-m_1} \, c_{13}^{-c_{22}^{m_1+1}} \, c_{13}^{-c_{22}^{m_1+2}} \ldots c_{13}^{-c_{22}^{-1}} \, c_{13}^{-1} \, d_0 \, c_{13}^{c_{22}^{-1}} \, c_{13}^{c_{22}^{-2}} \ldots c_{13}^{c_{22}^{m_1}} c_{22}^{m_1}.
$$
Substituting these formulas into relations 2), we get
\begin{lem} \label{l4.3}
The set of relations 2) for the even indexes $m = 2 m_1$ is equivalent to the following set of relations:
$$
\left( c_{13}^{(c_{13} \, c_{22})^{m_1}} \right)^{d_0} = c_{13}^{c_{22}^{m} \, c_{31}^{-1} \, c_{22}^{-m} \,(c_{22} c_{13})^{m_1}}.
$$
\end{lem}
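Both the closed form for $d_m$ with even $m \geq 0$ and the equivalences stated in Lemma \ref{l4.3} (and in Lemma \ref{l4.5} below, for even indexes) involve only free cancellations, so they can be checked mechanically: the substitution turns relations 1) and 2) into conjugates of the corresponding relations of the lemmas. A sketch using SymPy's free groups (the helper names are ours):

```python
from sympy.combinatorics.free_groups import free_group

F, c31, c22, c13, d0 = free_group("c31, c22, c13, d0")

def conj(x, g):                       # x^g = g^{-1} x g
    return g**-1 * x * g

def d_rec(m):                         # relation 3): d_{m+2} = c22^{-1} c13^{c22^{m+1}} d_m c13^{-c22^m} c22
    if m == 0:
        return d0
    k = m - 2
    return c22**-1 * conj(c13, c22**(k + 1)) * d_rec(k) * conj(c13, c22**k)**-1 * c22

def d_closed(m):                      # closed form for even m = 2*m1 >= 0, as a pair: d_m = W * d0 * Wp
    m1 = m // 2
    W = c22**-m1
    for j in range(m1, 0, -1):        # c13^{c22^{m1}} ... c13^{c22}
        W = W * conj(c13, c22**j)
    Wp = F.identity
    for j in range(0, m1):            # c13^{-1} c13^{-c22} ... c13^{-c22^{m1-1}}
        Wp = Wp * conj(c13, c22**j)**-1
    Wp = Wp * c22**m1
    return W, Wp

for m in range(0, 12, 2):
    m1 = m // 2
    W, Wp = d_closed(m)
    dm = W * d0 * Wp
    assert dm == d_rec(m)             # the closed form satisfies recursion 3)
    # relation 2) and the relation of Lemma l4.3, written as words equal to 1
    rel2 = conj(conj(c13, c22**m), dm) * conj(c13, c22**m * c31**-1)**-1
    lem3 = conj(conj(c13, (c13 * c22)**m1), d0) * \
        conj(c13, c22**m * c31**-1 * c22**-m * (c22 * c13)**m1)**-1
    assert rel2 == conj(lem3, Wp)     # the two relations are conjugate, hence equivalent
    # relation 1) and the even-index relation of Lemma l4.5
    rel1 = conj(c31, dm) * c31**-1
    lem5 = conj(conj(c31, c22**-m * (c13 * c22)**m1), d0) * \
        conj(c31, c22**-m * (c22 * c13)**m1)**-1
    assert rel1 == conj(lem5, Wp)
print("ok")
```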
Now consider the odd indexes. If $m = 2 m_1 +1 > 0$, then from 3) we have
$$
d_m = c_{22}^{-m_1} c_{13}^{c_{22}^{m_1+1}} c_{13}^{c_{22}^{m_1}} \ldots c_{13}^{c_{22}^2} \, d_1 \, c_{13}^{-c_{22}} \, c_{13}^{-c_{22}^2} \ldots c_{13}^{-c_{22}^{m_1}} c_{22}^{m_1}.
$$
If $m = 2 m_1 + 1 < 0$, then
$$
d_m = c_{22}^{-m_1} \, c_{13}^{-c_{22}^{m_1+2}} \, c_{13}^{-c_{22}^{m_1+3}} \ldots c_{13}^{-c_{22}^{-1}} \, c_{13}^{-1} \, c_{13}^{-c_{22}} \, d_1 \, c_{13} \, c_{13}^{c_{22}^{-1}} \ldots c_{13}^{c_{22}^{m_1+1}} c_{22}^{m_1}.
$$
Substituting these formulas into relations 2), we get
\begin{lem} \label{l4.4}
The set of relations 2) for the odd indexes $m = 2 m_1+1$ is equivalent to the following set of relations:
$$
\left( c_{13}^{(c_{22} \, c_{13})^{m_1-1} c_{22}^2} \right)^{d_1} = c_{13}^{c_{22}^{m} \, c_{31}^{-1} \, c_{22}^{-(m-1)} \,(c_{13} c_{22})^{m_1}}.
$$
\end{lem}
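The odd-index case can be checked in the same way: the closed form for $d_m$, $m = 2 m_1 + 1 \geq 1$, satisfies recursion 3), and relations 1) and 2) become conjugates of the relations of Lemma \ref{l4.4} and the odd-index part of Lemma \ref{l4.5} below. A sketch under the same conventions as before (the helper names are ours):

```python
from sympy.combinatorics.free_groups import free_group

F, c31, c22, c13, d1 = free_group("c31, c22, c13, d1")

def conj(x, g):                       # x^g = g^{-1} x g
    return g**-1 * x * g

def d_rec(m):                         # relation 3), started from d_1
    if m == 1:
        return d1
    k = m - 2
    return c22**-1 * conj(c13, c22**(k + 1)) * d_rec(k) * conj(c13, c22**k)**-1 * c22

def d_closed(m):                      # closed form for odd m = 2*m1 + 1 >= 1, as d_m = W * d1 * Wp
    m1 = (m - 1) // 2
    W = c22**-m1
    for j in range(m1 + 1, 1, -1):    # c13^{c22^{m1+1}} ... c13^{c22^2}
        W = W * conj(c13, c22**j)
    Wp = F.identity
    for j in range(1, m1 + 1):        # c13^{-c22} ... c13^{-c22^{m1}}
        Wp = Wp * conj(c13, c22**j)**-1
    Wp = Wp * c22**m1
    return W, Wp

for m in range(1, 13, 2):
    m1 = (m - 1) // 2
    W, Wp = d_closed(m)
    dm = W * d1 * Wp
    assert dm == d_rec(m)             # the closed form satisfies recursion 3)
    rel2 = conj(conj(c13, c22**m), dm) * conj(c13, c22**m * c31**-1)**-1
    lem4 = conj(conj(c13, (c22 * c13)**(m1 - 1) * c22**2), d1) * \
        conj(c13, c22**m * c31**-1 * c22**(1 - m) * (c13 * c22)**m1)**-1
    assert rel2 == conj(lem4, Wp)     # relation 2) is conjugate to the relation of Lemma l4.4
    rel1 = conj(c31, dm) * c31**-1
    lem5 = conj(conj(c31, c22**(-m - 1) * (c22 * c13)**m1 * c22**2), d1) * \
        conj(c31, c22**(1 - m) * (c13 * c22)**m1)**-1
    assert rel1 == conj(lem5, Wp)     # relation 1), odd-index part of Lemma l4.5
print("ok")
```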
Considering relations 1) and substituting the expressions for $d_m$ into them, we get the following.
\begin{lem} \label{l4.5}
The set of relations 1) is equivalent to the union of the following sets of relations:
if $m = 2 m_1$ is even, then
$$
\left( c_{31}^{ c_{22}^{-m} \, (c_{13} \, c_{22})^{m_1} } \right)^{d_0} = c_{31}^{c_{22}^{-m} \,(c_{22} c_{13})^{m_1}};
$$
if $m = 2 m_1 + 1$ is odd, then
$$
\left( c_{31}^{ c_{22}^{-(m+1)} \, (c_{22} \, c_{13})^{m_1} c_{22}^2} \right)^{d_1} = c_{31}^{c_{22}^{-(m-1)} \,(c_{13} c_{22})^{m_1}}.
$$
\end{lem}
Hence, we have proved
\begin{prop}
$Ker (\varphi)$ is generated by
$$
c_{31},~~c_{22},~~c_{13},~~d_0,~~d_1
$$
and is defined by relations from Lemmas \ref{l4.3} - \ref{l4.5}.
\end{prop}
Now we are going to prove that $Ker (\varphi)$ is obtained from the group $T_3^c = \langle c_{31}, c_{22}, c_{13} \rangle$ by two consecutive HNN-extensions. To this end, define subgroups $A_0, B_0, A_1, B_1$ of $T_3^c$. If $m=2m_1$ is an even number, then $A_0$ is generated by the elements
$$
c_{13}^{ (c_{13} c_{22})^{m_1}},~~ c_{31}^{ c_{22}^{-m} (c_{13} c_{22})^{m_1}};
$$
$B_0$ is generated by elements
$$
c_{13}^{ c_{22}^m c_{31}^{-1} c_{22}^{-m} (c_{22} c_{13})^{m_1}},~~ c_{31}^{ c_{22}^{-m} (c_{22} c_{13})^{m_1}}.
$$
Define a map $\psi_0 : A_0 \to B_0$ on the generators:
$$
c_{13}^{ (c_{13} c_{22})^{m_1}} \to c_{13}^{ c_{22}^m c_{31}^{-1} c_{22}^{-m} (c_{22} c_{13})^{m_1}},~~~~ c_{31}^{ c_{22}^{-m} (c_{13} c_{22})^{m_1}} \to c_{31}^{ c_{22}^{-m} (c_{22} c_{13})^{m_1}}.
$$
From Lemmas \ref{l4.3} and \ref{l4.5} it follows that $\psi_0$ is induced by conjugation by $d_0$ in $Ker (\varphi)$ and hence is an isomorphism.
Analogously, if $m=2m_1 + 1$ is an odd number, then $A_1$ is generated by the elements
$$
c_{13}^{ (c_{22} c_{13})^{(m_1-1)} c_{22}^2},~~ c_{31}^{ c_{22}^{-(m+1)} (c_{22} c_{13})^{m_1} c_{22}^2};
$$
$B_1$ is generated by elements
$$
c_{13}^{ c_{22}^m c_{31}^{-1} c_{22}^{-(m-1)} (c_{13} c_{22})^{m_1}},~~ c_{31}^{ c_{22}^{-(m-1)} (c_{13} c_{22})^{m_1}}.
$$
Define a map $\psi_1 : A_1 \to B_1$ on the generators:
$$
c_{13}^{ (c_{22} c_{13})^{(m_1-1)} c_{22}^2}
\to c_{13}^{ c_{22}^m c_{31}^{-1} c_{22}^{-(m-1)} (c_{13} c_{22})^{m_1}},~~~~ c_{31}^{ c_{22}^{-(m+1)} (c_{22} c_{13})^{m_1} c_{22}^2} \to c_{31}^{ c_{22}^{-(m-1)} (c_{13} c_{22})^{m_1}}.
$$
From Lemmas \ref{l4.4} and \ref{l4.5} it follows that $\psi_1$ is induced by conjugation by $d_1$ in $Ker (\varphi)$ and hence is an isomorphism. In these notations we have
\begin{thm} \label{t4.7}
$Ker (\varphi)$ is obtained from the base group $T_3^c$ by two consecutive HNN-extensions:
$$
Ker (\varphi) = \langle T_3^c, d_0, d_1~||~d_0^{-1} A_0 d_0 = B_0, \psi_0; ~~d_1^{-1} A_1 d_1 = B_1, \psi_1 \rangle.
$$
\end{thm}
\begin{cor}
The group $T_3^c = \langle c_{31}, c_{22}, c_{13} \rangle$ is free of rank 3.
\end{cor}
\begin{proof}
The group $T_3^c$ is a subgroup of $Ker (\varphi)$. From Theorem \ref{t4.7} it follows that all relations of $Ker (\varphi)$ are the relations defining it as an HNN-extension; hence $T_3^c$ has no defining relations and is free.
\end{proof}
\section{Structure of $VP_3$ and $VP_4$} \label{s4}
The main purpose of this section is to find sets of defining relations for $T_2$ and $T_3$. Note that $VP_3$ contains $T_2$ and has no commutativity relations, while $VP_4$ contains $T_3$ and has commutativity relations.
Relations of $T_n$ for $n > 3$ can be found using the degeneracy maps $s_i$.
\subsection{The group $VP_3$}
In the generators
$$
\lambda_{12}, \lambda_{21}, \lambda_{13}, \lambda_{23}, \lambda_{31}, \lambda_{32},
$$
$VP_3$ is defined by the following 6 relations
$$
\lambda_{12} \lambda_{13} \lambda_{23} = \lambda_{23} \lambda_{13} \lambda_{12},~~~
\lambda_{21} \lambda_{23} \lambda_{13} = \lambda_{13} \lambda_{23} \lambda_{21},~~~
\lambda_{13} \lambda_{12} \lambda_{32} = \lambda_{32} \lambda_{12} \lambda_{13},
$$
$$
\lambda_{31} \lambda_{32} \lambda_{12} = \lambda_{12} \lambda_{32} \lambda_{31},~~~
\lambda_{23} \lambda_{21} \lambda_{31} = \lambda_{31} \lambda_{21} \lambda_{23},~~~
\lambda_{32} \lambda_{31} \lambda_{21} = \lambda_{21} \lambda_{31} \lambda_{32}.
$$
In the generators
$$
a_{11}, b_{11}, a_{21}, a_{12}, b_{21}, b_{12}
$$
$VP_3$ is defined by the following 6 relations
$$
[a_{21}, a_{12}] = 1,~~~b_{11} a_{11} a_{21} a_{11}^{-1} = a_{21} b_{11},~~~a_{12} b_{21} b_{12}^{-1} b_{11} = b_{21} b_{12}^{-1} b_{11} a_{11} a_{12} a_{11}^{-1},
$$
$$
b_{11}^{-1} b_{21} b_{11} a_{11} = a_{11} b_{21},~~~a_{11} a_{12}^{-1} a_{21} b_{12} = b_{11}^{-1} b_{12} b_{11} a_{11} a_{12}^{-1} a_{21},~~~
[b_{21}, b_{12}] = 1.
$$
In the paper \cite{BMVW} the following decomposition of $VP_3$ was found.
\begin{prop} \label{p4.1}
(\cite{BMVW}) The group $VP_3$ is generated by elements
$$
a_{11},~~ c_{11},~~ a_{21},~~ a_{12},~~
b_{21},~~ b_{12}
$$
and is defined by relations
$$
[a_{21}, a_{12}] = [b_{21}, b_{12}] = 1,
$$
$$
a_{21}^{c_{11}} = a_{21},~~~b_{21}^{c_{11}} = b_{21},~~~b_{12}^{c_{11}} =
b_{12}^{a_{21}^{-1} a_{12}},~~~ a_{12}^{c_{11}} = a_{12}^{b_{12}
a_{21}^{-1} a_{12} b_{21}^{-1}},
$$
i.~e. $VP_3 = \langle T_2, c_{11} \rangle * \langle a_{11} \rangle$,
$\langle T_2, c_{11} \rangle = T_2 \leftthreetimes \langle c_{11} \rangle.$
\end{prop}
In this proposition $c_{11}$ acts on $b_{12}$ and on $a_{12}$ in different ways. Let us show that in fact these actions are equal. Indeed, since $a_{21}^{-1} a_{12} = a_{12} a_{21}^{-1}$, we have
$$
a_{12}^{c_{11}} = a_{12}^{b_{12} a_{12} a_{21}^{-1} b_{21}^{-1}} \Leftrightarrow
a_{12}^{c_{11}} = a_{12}^{c_{12} c_{21}^{-1}}.
$$
Similarly, rewrite the conjugation rule
$$
b_{12}^{c_{11}} =
b_{12}^{a_{21}^{-1} a_{12}}
$$
in the form
$$
b_{12}^{c_{11}} =
b_{12}^{b_{12} a_{12} a_{21}^{-1}}.
$$
Conjugating both sides of this relation by $b_{21}^{-1}$ and using the fact that $c_{11} b_{21}^{-1} = b_{21}^{-1} c_{11}$ and $b_{12} b_{21} = b_{21} b_{12}$, we get
$$
b_{12}^{c_{11}} =
b_{12}^{c_{12} c_{21}^{-1}}.
$$
Hence, we have proven
\begin{cor} \label{c4.2}
The group $VP_3$ is generated by elements
$$
a_{11},~~ c_{11},~~ a_{21},~~ a_{12},~~
b_{21},~~ b_{12}
$$
and is defined by relations
$$
[a_{21}, a_{12}] = [b_{21}, b_{12}] = 1,
$$
$$
a_{21}^{c_{11}} = a_{21},~~~b_{21}^{c_{11}} = b_{21},~~~b_{12}^{c_{11}} =
b_{12}^{c_{12} c_{21}^{-1}},~~~ a_{12}^{c_{11}} = a_{12}^{c_{12} c_{21}^{-1}}.
$$
\end{cor}
Also, we can change the generators $b_{ij}$ to the generators $c_{ij}$.
\begin{cor} \label{c4.3}
The group $VP_3$ is generated by elements
$$
a_{11},~~ c_{11},~~ a_{21},~~ a_{12},~~
c_{21},~~ c_{12}
$$
and is defined by relations
$$
[a_{21}, a_{12}] = [c_{21} a_{21}^{-1}, c_{12} a_{12}^{-1}] = 1,
$$
$$
a_{21}^{c_{11}} = a_{21},~~~c_{21}^{c_{11}} = c_{21},~~~a_{12}^{c_{11}} =
a_{12}^{c_{12} c_{21}^{-1}},~~~ c_{12}^{c_{11}} = c_{12}^{c_{21}^{-1}}.
$$
\end{cor}
\medskip
To find a set of defining relations of $T_2$, consider the homomorphism $\varphi : \langle T_2, c_{11} \rangle \to \langle c_{11} \rangle$ which sends all generators of $T_2$ to $1$ and sends $c_{11}$ to $c_{11}$. To find the kernel of this homomorphism,
we use the Reidemeister-Schreier method \cite[Section 2.3]{MKS}. The kernel is generated by the elements
$$
S_{\lambda,a} = \lambda a \cdot (\overline{\lambda a})^{-1},\quad \lambda
\in \langle c_{11} \rangle,\quad
a \in \{ a_{12}, a_{21}, b_{12}, b_{21}, c_{11} \}
$$
that are equal to
$$
c_{11}^{-k} a_{12} c_{11}^k,~~c_{11}^{-k} a_{21} c_{11}^k,~~c_{11}^{-k} b_{12} c_{11}^k,~~c_{11}^{-k} b_{21} c_{11}^k,~~k \in \mathbb{Z}.
$$
Defining relations of $Ker(\varphi)$ have the form
$$
c_{11}^{-k} \tau (r) c_{11}^k,~~k \in \mathbb{Z},
$$
where $r$ is a defining relation of the group $\langle T_2, c_{11} \rangle$ and $\tau$ is the rewriting process (see \cite[Section 2.3]{MKS}). If $r$ runs through the defining relations which are conjugation rules, then we can use these relations to remove all generators of $Ker (\varphi)$ except the four generators
$$
a_{12}, a_{21}, b_{12}, b_{21}.
$$
This means that the kernel is equal to $T_2$.
Hence, we have only relations
$$
[a_{21}, a_{12}]^{c_{11}^k} = [b_{21}, b_{12}]^{c_{11}^k} = 1, ~~~k \in \mathbb{Z}.
$$
We have proved
\begin{prop}
The group $T_2$ is generated by elements $a_{12}, a_{21}, b_{12}, b_{21}$ and is defined by relations
$$
[a_{21}, a_{12}]^{c_{11}^k} = [b_{21}, b_{12}]^{c_{11}^k} = 1, ~~~k \in \mathbb{Z}.
$$
\end{prop}
Using the conjugation rules in $VP_3$ one can prove
\begin{lem} \label{l4.12}
In $VP_3$ the following formulas hold
$$
a_{12}^{c_{11}^k} = a_{12}^{c_{12}^k c_{21}^{-k}},~~~b_{12}^{c_{11}^k} = b_{12}^{c_{12}^k c_{21}^{-k}},~~~k \in \mathbb{Z}.
$$
\end{lem}
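For example, for $k = 2$, using $a_{12}^{c_{11}} = a_{12}^{c_{12} c_{21}^{-1}}$, $c_{12}^{c_{11}} = c_{12}^{c_{21}^{-1}}$ and $c_{21}^{c_{11}} = c_{21}$, we obtain
$$
a_{12}^{c_{11}^{2}} = \left( a_{12}^{c_{12} c_{21}^{-1}} \right)^{c_{11}}
= \left( a_{12}^{c_{11}} \right)^{c_{12}^{c_{11}} c_{21}^{-1}}
= \left( a_{12}^{c_{12} c_{21}^{-1}} \right)^{c_{21} c_{12} c_{21}^{-2}}
= a_{12}^{c_{12}^{2} c_{21}^{-2}},
$$
and the general case follows by induction on $k$.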
Using these formulas we can give another description of $T_2$.
\begin{prop} \label{l4.13}
$T_2$ is generated by elements
$$
a_{21}, a_{12}, b_{21}, b_{12}
$$
and is defined by the relations
$$
[a_{21}^{c_{21}^k}, a_{12}^{c_{12}^k}] = [b_{21}^{c_{21}^k}, b_{12}^{c_{12}^k}] = 1, ~~~k \in \mathbb{Z}.
$$
\end{prop}
\begin{proof}
As we know $T_2$ is defined by the relations
$$
[a_{21}, a_{12}]^{c_{11}^k} = [b_{21}, b_{12}]^{c_{11}^k} = 1.
$$
Using Lemma \ref{l4.12} and conjugation rules we can rewrite these relations in the form
$$
[a_{21}, a_{12}^{c_{12}^k c_{21}^{-k}}] = [b_{21}, b_{12}^{c_{12}^k c_{21}^{-k}}] = 1.
$$
Conjugating both sides of these relations by $c_{21}^{k}$, we get the required relations.
\end{proof}
\medskip
\subsection{$VP_4$ and its subgroup $T_3$} The group $VP_4$ is generated by elements
$$
\lambda_{12}, ~~\lambda_{21}, ~~\lambda_{13}, ~~\lambda_{23}, ~~\lambda_{31}, ~~\lambda_{32}, ~~
\lambda_{14}, ~~\lambda_{24}, ~~\lambda_{34}, ~~\lambda_{41}, ~~\lambda_{42}, ~~\lambda_{43}.
$$
On the other hand, $VP_4 = \langle T_1, T_2, T_3 \rangle$, where
$$
T_1 = \langle a_{11}, b_{11} \rangle,~~T_2 = \langle a_{21}, a_{12}, b_{21}, b_{12} \rangle,~~
T_3 = \langle a_{31}, a_{22}, a_{13}, b_{31}, b_{22}, b_{13} \rangle.
$$
We have found expressions of the new generators $a_{ij}$ and $b_{ij}$ as words in the standard generators of $VP_4$. Let us find expressions of the old generators
as words in the new ones:
$$
\lambda_{12} = a_{11}, ~~\lambda_{21} = b_{11}, ~~\lambda_{13} = a_{12} a_{11}^{-1}, ~~\lambda_{23} = a_{11} a_{12}^{-1} a_{21}, ~~
\lambda_{31} = b_{11}^{-1} b_{12},~~
\lambda_{32} = b_{21} b_{12}^{-1} b_{11},
$$
$$
\lambda_{14} = a_{13} a_{12}^{-1},
~~\lambda_{24} = a_{12} a_{13}^{-1} a_{22} a_{21}^{-1}, ~~\lambda_{34} = a_{21} a_{22}^{-1} a_{31},
$$
$$
\lambda_{41} = b_{12}^{-1} b_{13}, ~~\lambda_{42} = b_{21}^{-1} b_{22} b_{13}^{-1} b_{12}, ~~\lambda_{43} = b_{31} b_{22}^{-1} b_{21}.
$$
To find a presentation of $VP_4$ in the new generators, we can act on $VP_3$ by the degeneracy maps $s_0, s_1, s_2$. We will use the presentation of $VP_3$ from Corollary \ref{c4.2}. Then
1) The group $s_0(VP_3)$ is generated by elements
$$
a_{21},~~ c_{21},~~ a_{22},~~ a_{31},~~
b_{22},~~ b_{31}
$$
and is defined by relations
$$
[a_{31}, a_{22}] = [b_{31}, b_{22}] = 1,
$$
$$
a_{31}^{c_{21}} = a_{31},~~~b_{31}^{c_{21}} = b_{31},~~~a_{22}^{c_{21}} =
a_{22}^{c_{22} c_{31}^{-1}},~~~ b_{22}^{c_{21}} = b_{22}^{c_{22} c_{31}^{-1}}.
$$
2) The group $s_1(VP_3)$ is generated by elements
$$
a_{12},~~ c_{12},~~ a_{13},~~ a_{31},~~
b_{13},~~ b_{31}
$$
and is defined by relations
$$
[a_{31}, a_{13}] = [b_{31}, b_{13}] = 1,
$$
$$
a_{31}^{c_{12}} = a_{31},~~~b_{31}^{c_{12}} = b_{31},~~~a_{13}^{c_{12}} =
a_{13}^{c_{13} c_{31}^{-1}},~~~ b_{13}^{c_{12}} = b_{13}^{c_{13} c_{31}^{-1}}.
$$
3) The group $s_2(VP_3)$ is generated by elements
$$
a_{11},~~ c_{11},~~ a_{13},~~ a_{22},~~
b_{13},~~ b_{22}
$$
and is defined by relations
$$
[a_{22}, a_{13}] = [b_{22}, b_{13}] = 1,
$$
$$
a_{22}^{c_{11}} = a_{22},~~~b_{22}^{c_{11}} = b_{22},~~~a_{13}^{c_{11}} =
a_{13}^{c_{13} c_{22}^{-1}},~~~ b_{13}^{c_{11}} = b_{13}^{c_{13} c_{22}^{-1}}.
$$
The defining relations of the groups $VP_3$ and $s_i(VP_3)$, $i = 0, 1, 2$, do not form the full set of defining relations of $VP_4$. We need to add
the commutativity relations:
\begin{equation}
[\lambda_{34}^{*}, \lambda_{12}^{*}] = [\lambda_{24}^{*}, \lambda_{13}^{*}] = [\lambda_{14}^{*}, \lambda_{23}^{*}] = 1,
\label{r4.1}
\end{equation}
where $\lambda_{ij}^{*}$ is any element from the set $\{ \lambda_{ij}, \lambda_{ji} \}$.
To find defining relations of $T_3$ we need to understand which relations of $VP_4$ give relations in $T_3$. To do this, we present $VP_4$ as an HNN-extension with some base group $G_4$ and stable letter $a_{11}$; the defining relations of $T_3$ then come from the defining relations of $G_4$.
We will analyze the relations (\ref{r4.1}) and show that six of these relations are rules of conjugation by $a_{11}$ and can be used in a presentation of $VP_4$ as an HNN-extension, while the other relations can be written as defining relations of $G_4$.
{\it Commutativity relations } $[\lambda_{34}^*, \lambda_{12}^*] = 1$. These relations have the form
$$
[\lambda_{34}, \lambda_{12}] = 1 \Leftrightarrow [a_{21} a_{22}^{-1} a_{31}, a_{11}] = 1,
$$
$$
[\lambda_{34}, \lambda_{21}] = 1 \Leftrightarrow [a_{21} a_{22}^{-1} a_{31}, b_{11}] = 1,
$$
$$
[\lambda_{43}, \lambda_{12}] = 1 \Leftrightarrow [b_{31} b_{22}^{-1} b_{21}, a_{11}] = 1,
$$
$$
[\lambda_{43}, \lambda_{21}] = 1 \Leftrightarrow [b_{31} b_{22}^{-1} b_{21}, b_{11}] = 1.
$$
Write the first and the third relations in the form
$$
\left( a_{21} a_{22}^{-1} a_{31} \right)^{a_{11}} = a_{21} a_{22}^{-1} a_{31},~~~\left( b_{31} b_{22}^{-1} b_{21} \right)^{a_{11}} = b_{31} b_{22}^{-1} b_{21}. $$
Then from the second and the fourth relations it follows that
\begin{equation} \label{r4.3}
\left( a_{21} a_{22}^{-1} a_{31} \right)^{c_{11}} = a_{21} a_{22}^{-1} a_{31},~~~\left( b_{31} b_{22}^{-1} b_{21} \right)^{c_{11}} = b_{31} b_{22}^{-1} b_{21}.
\end{equation}
In $VP_3$ we have relations $a_{21}^{c_{11}} = a_{21}$, $b_{21}^{c_{11}} = b_{21}$, and in $s_2(VP_3)$ we have relations $a_{22}^{c_{11}} = a_{22}$, $b_{22}^{c_{11}} = b_{22}$. Hence, from (\ref{r4.3}) we get
$$
a_{31}^{c_{11}} = a_{31},~~~ b_{31}^{c_{11}} = b_{31}.
$$
We have proved
\begin{lem} \label{l4.6}
From the relations $[\lambda_{34}^*, \lambda_{12}^*] = 1$ in $VP_4$ follow formulas of conjugation by $a_{11}$:
$$
\left( a_{21} a_{22}^{-1} a_{31} \right)^{a_{11}} = a_{21} a_{22}^{-1} a_{31},~~~\left( b_{31} b_{22}^{-1} b_{21} \right)^{a_{11}} = b_{31} b_{22}^{-1} b_{21}, $$
and formulas of conjugation by $c_{11}$:
$$
a_{31}^{c_{11}} = a_{31},~~~ b_{31}^{c_{11}} = b_{31}.
$$
\end{lem}
{\it Commutativity relations } $[\lambda_{24}^*, \lambda_{13}^*] = 1$. These relations have the form
$$
[\lambda_{24}, \lambda_{13}] = 1 \Leftrightarrow [a_{12} a_{13}^{-1} a_{22} a_{21}^{-1}, a_{12} a_{11}^{-1}] = 1,
$$
$$
[\lambda_{24}, \lambda_{31}] = 1 \Leftrightarrow [a_{12} a_{13}^{-1} a_{22} a_{21}^{-1}, b_{11}^{-1} b_{12}] = 1,
$$
$$
[\lambda_{42}, \lambda_{13}] = 1 \Leftrightarrow [b_{21}^{-1} b_{22} b_{13}^{-1} b_{12}, a_{12} a_{11}^{-1}] = 1,
$$
$$
[\lambda_{42}, \lambda_{31}] = 1 \Leftrightarrow [b_{21}^{-1} b_{22} b_{13}^{-1} b_{12}, b_{11}^{-1} b_{12}] = 1.
$$
From the first relation we have the following formula of conjugation by $a_{11}$:
\begin{equation} \label{r4.4}
\left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{a_{11}} = \left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{a_{12}}.
\end{equation}
The second relation
has the form
$$
\left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{b_{11}^{-1}} = \left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{b_{12}^{-1}}.
$$
Since $b_{ij}^{-1} = a_{ij} c_{ij}^{-1}$ we have
$$
\left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{a_{11} c_{11}^{-1}} = \left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{a_{12} c_{12}^{-1}}.
$$
Using (\ref{r4.4}) rewrite this relation in the form
$$
\left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{a_{12} c_{11}^{-1}} = \left( a_{13}^{-1} a_{22} a_{21}^{-1} a_{12} \right)^{c_{12}^{-1}}.
$$
That is equivalent to the relation
\begin{equation}
\left( a_{13}^{-1} a_{22} \right)^{ c_{12}^{-1}} = \left( a_{13}^{-1} a_{22} \right)^{ c_{11}^{-1}} \left( a_{21}^{-1} a_{12}\right)^{c_{11}^{-1}}
\left( a_{12}^{-1} a_{21}\right)^{c_{12}^{-1}}.
\end{equation}
Similarly, from the third relation
\begin{equation} \label{r4.5}
\left( b_{21}^{-1} b_{22} b_{13}^{-1} b_{12} \right)^{a_{11}} = \left( b_{21}^{-1} b_{22} b_{13}^{-1} b_{12}\right)^{a_{12}}.
\end{equation}
It is a formula of conjugation by $a_{11}$.
The fourth relation has the form
$$
\left( b_{21}^{-1} b_{22} b_{13}^{-1} b_{12} \right)^{b_{11}^{-1}} = \left( b_{21}^{-1} b_{22} b_{13}^{-1} b_{12}\right)^{b_{12}^{-1}}.
$$
Using the equality $b_{11}^{-1} = a_{11} c_{11}^{-1}$, rewrite the last relation in the form
$$
\left( b_{21}^{-1} b_{22} b_{13}^{-1} b_{12} \right)^{a_{11} c_{11}^{-1}} = b_{12} b_{21}^{-1} b_{22} b_{13}^{-1},
$$
and using (\ref{r4.5}) we get
$$
\left( b_{21}^{-1} b_{22} b_{13}^{-1} b_{12} \right)^{a_{12} c_{11}^{-1}} = b_{12} b_{21}^{-1} b_{22} b_{13}^{-1}.
$$
Since $a_{12} = b_{12}^{-1} c_{12}$, we have
$$
\left( b_{12} b_{21}^{-1} b_{22} b_{13}^{-1} \right)^{c_{12}} = \left( b_{12} b_{21}^{-1} b_{22} b_{13}^{-1} \right)^{c_{11}}.
$$
This relation is equivalent to
\begin{equation}
\left( b_{22} b_{13}^{-1} \right)^{c_{12}} = \left( b_{21} b_{12}^{-1} \right)^{c_{12}} \left( b_{12} b_{21}^{-1} \right)^{c_{11}}
\left( b_{22} b_{13}^{-1} \right)^{c_{11}}.
\end{equation}
Hence, we have
\begin{lem} \label{l4.7}
From the relations $[\lambda_{24}^*, \lambda_{13}^*] = 1$ in $VP_4$ follow formulas of conjugation by $a_{11}$:
$$
\left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{a_{11}} = \left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{a_{12}},~~~
\left( b_{21}^{-1} b_{22} b_{13}^{-1} b_{12} \right)^{a_{11}} = \left( b_{21}^{-1} b_{22} b_{13}^{-1} b_{12}\right)^{a_{12}},
$$
and formulas of conjugation by $c_{12}^{-1}$ and by $c_{12}$:
$$
\left( a_{13}^{-1} a_{22} \right)^{ c_{12}^{-1}} = \left( a_{13}^{-1} a_{22} \right)^{ c_{11}^{-1}} \left( a_{21}^{-1} a_{12}\right)^{c_{11}^{-1}}
\left( a_{12}^{-1} a_{21}\right)^{c_{12}^{-1}},
$$
$$
\left( b_{22} b_{13}^{-1} \right)^{c_{12}} = \left( b_{21} b_{12}^{-1} \right)^{c_{12}} \left( b_{12} b_{21}^{-1} \right)^{c_{11}}
\left( b_{22} b_{13}^{-1} \right)^{c_{11}}.
$$
\end{lem}
Let us show that the last two relations of this lemma can be simplified.
\begin{lem} \label{l4.81}
In $VP_4$ the following relations hold
1) $\left( a_{13}^{-1} a_{22} \right)^{ c_{12}^{-1}} = \left( a_{13}^{-1} a_{22} \right)^{ c_{11}^{-1}} [c_{21}, c_{12}^{-1}],$\\
2) $ \left( b_{22} b_{13}^{-1} \right)^{ c_{12}} = [c_{12}, c_{21}^{-1}] \left( b_{22} b_{13}^{-1} \right)^{ c_{11}}.$
\end{lem}
\begin{proof}
1) To prove the first relation, we need to prove the equality
$$
\left( a_{21}^{-1} a_{12}\right)^{c_{11}^{-1}}
\left( a_{12}^{-1} a_{21}\right)^{c_{12}^{-1}} = [c_{21}, c_{12}^{-1}].
$$
We have
$$
c_{11}^{-1} a_{12} a_{21}^{-1} c_{12}^{-1} = (c_{21}^{-1} c_{21}) c_{11}^{-1} a_{12} a_{21}^{-1} c_{12}^{-1},
$$
where we have inserted the identity element $1 = c_{21}^{-1} c_{21}$.
Since $c_{21} c_{11}^{-1} = c_{11}^{-1} c_{21}$ and $c_{21} a_{21}^{-1} \cdot a_{12} c_{12}^{-1} = a_{12} c_{12}^{-1} \cdot c_{21} a_{21}^{-1}$, the last expression has the form
$$
c_{21}^{-1} c_{11}^{-1} (c_{21} a_{21}^{-1} \cdot a_{12} c_{12}^{-1}) = c_{21}^{-1} c_{11}^{-1} (a_{12} c_{12}^{-1} \cdot c_{21} a_{21}^{-1}) =
c_{21}^{-1} (a_{12} c_{12}^{-1} \cdot c_{21} a_{21}^{-1})^{c_{11}} c_{11}^{-1} =
$$
$$
= c_{21}^{-1} a_{12}^{c_{12} c_{21}^{-1}} c_{12}^{-c_{21}^{-1}} c_{21} a_{21}^{-1} c_{11}^{-1} = c_{12}^{-1} a_{12} a_{21}^{-1} c_{11}^{-1}.
$$
Hence, we have the relation
$$
c_{11}^{-1} a_{12} a_{21}^{-1} c_{12}^{-1} = c_{12}^{-1} a_{12} a_{21}^{-1} c_{11}^{-1}.
$$
From this relation it follows that
$$
a_{12} a_{21}^{-1} c_{12}^{-1} = (c_{12}^{-1} a_{12} a_{21}^{-1})^{c_{11}^{-1}} \Leftrightarrow
(a_{12} a_{21}^{-1})^{c_{12}^{-1}} = c_{12} c_{12}^{-c_{11}^{-1}} (a_{12} a_{21}^{-1})^{c_{11}^{-1}}\Leftrightarrow
$$
$$
\Leftrightarrow [c_{21}, c_{12}^{-1}] = ( a_{21}^{-1} a_{12})^{c_{11}^{-1}} ( a_{12}^{-1} a_{21})^{c_{12}^{-1}}.
$$
The first relation of the lemma follows from the last relation.
2) Let us prove the equality
$$
\left( b_{21} b_{12}^{-1} \right)^{c_{12}}
\left( b_{12} b_{21}^{-1} \right)^{c_{11}} = [c_{12}, c_{21}^{-1}].
$$
Similarly to the previous case, we have
$$
c_{12} b_{12}^{-1} b_{21} c_{11} = c_{12} b_{12}^{-1} b_{21} c_{11} (c_{21}^{-1} c_{21}) = (c_{12} b_{12}^{-1} \cdot b_{21} c_{21}^{-1}) c_{11} c_{21} =
$$
$$
= c_{11} ( b_{21} c_{21}^{-1} \cdot c_{12} b_{12}^{-1})^{c_{11}} c_{21} = c_{11} b_{21} c_{21}^{-1} \cdot c_{12}^{c_{21}^{-1}} b_{12}^{-c_{12} c_{21}^{-1}} c_{21} = c_{11} b_{21} b_{12}^{-1} c_{12}.
$$
Hence, we have found the relation
$$
c_{12} b_{12}^{-1} b_{21} c_{11} = c_{11} b_{21} b_{12}^{-1} c_{12}.
$$
From this relation
$$
c_{12}^{c_{11}} ( b_{12}^{-1} b_{21})^{c_{11}} = b_{21} b_{12}^{-1} c_{12} \Leftrightarrow
c_{12}^{-1} c_{12}^{c_{21}^{-1}} ( b_{12}^{-1} b_{21})^{c_{11}} = (b_{21} b_{12}^{-1})^{c_{12}}.
$$
This relation is equivalent to the required relation.
\end{proof}
\medskip
\begin{cor}
In $VP_4$ the following formulas hold:\\
1) $a_{22}^{c_{12}^{-1}} = a_{13}^{c_{13}^{-1} c_{31}} a_{13}^{-c_{13}^{-1} c_{22}} a_{22} [c_{21}, c_{12}^{-1}],$\\
2) $b_{22}^{c_{12}} = [c_{12}, c_{21}^{-1}] b_{22} b_{13}^{-c_{13} c_{22}^{-1}} b_{13}^{c_{13} c_{31}^{-1}},$\\
3) $a_{22}^{c_{12}} = [c_{12}, c_{21}^{-1}] a_{13}^{-c_{13} c_{22}^{-1}} a_{22} a_{13}^{c_{13} c_{31}^{-1}},$\\
4) $b_{22}^{c_{12}^{-1}} = b_{13}^{c_{13}^{-1} c_{31}} b_{22} b_{13}^{-c_{13}^{-1} c_{22}} [c_{21}, c_{12}^{-1}].$\\
\end{cor}
\begin{proof}
1) Let us prove the first formula; the proof of the second one is analogous. Take the first relation of Lemma \ref{l4.81}:
$$
\left( a_{13}^{-1} a_{22} \right)^{ c_{12}^{-1}} = \left( a_{13}^{-1} a_{22} \right)^{ c_{11}^{-1}} [c_{21}, c_{12}^{-1}].
$$
Using the conjugation formulas, we get
$$
a_{13}^{-c_{13}^{-1} c_{31}} a_{22}^{c_{12}^{-1}} = a_{13}^{-c_{13}^{-1} c_{22}} a_{22} [c_{21}, c_{12}^{-1}].
$$
From this relation we get the first formula:
$$
a_{22}^{c_{12}^{-1}} = a_{13}^{c_{13}^{-1} c_{31}} a_{13}^{-c_{13}^{-1} c_{22}} a_{22} [c_{21}, c_{12}^{-1}].
$$
3) Let us prove the third formula; the proof of the fourth one is analogous. Take the first relation of Lemma \ref{l4.81}:
$$
\left( a_{13}^{-1} a_{22} \right)^{ c_{12}^{-1}} = \left( a_{13}^{-1} a_{22} \right)^{ c_{11}^{-1}} [c_{21}, c_{12}^{-1}].
$$
Since
$$
[c_{21}, c_{12}^{-1}] = c_{11} c_{12} c_{11}^{-1} c_{12}^{-1},
$$
the relation takes the form
$$
a_{13}^{-1} a_{22} = c_{12}^{-1} c_{11} (a_{13}^{-1} a_{22}) c_{12} c_{11}^{-1}.
$$
Conjugating both sides by $c_{11}$ we get
$$
a_{13}^{-c_{11}} a_{22} = [c_{11}, c_{12}] a_{13}^{-c_{12}} a_{22}^{c_{12}}.
$$
Since $a_{13}$ and $a_{22}$ commute, we have
$$
a_{13}^{-c_{11}} a_{22} = [c_{11}, c_{12}] a_{22}^{c_{12}} a_{13}^{-c_{12}}
$$
or
$$
a_{22}^{c_{12}} = [c_{12}, c_{11}] a_{13}^{-c_{11}} a_{22} a_{13}^{c_{12}}.
$$
Using the formulas
$$
[c_{12}, c_{11}] = [c_{12}, c_{21}^{-1}],~~~a_{13}^{c_{11}} = a_{13}^{c_{13} c_{22}^{-1}},~~~a_{13}^{c_{12}} = a_{13}^{c_{13} c_{31}^{-1}},
$$
we get the required relation.
\end{proof}
\medskip
{\it Commutativity relations } $[\lambda_{14}^*, \lambda_{23}^*] = 1$. These relations have the form
$$
[\lambda_{14}, \lambda_{23}] = 1 \Leftrightarrow [a_{13} a_{12}^{-1}, a_{11} a_{12}^{-1} a_{21}] = 1,
$$
$$
[\lambda_{14}, \lambda_{32}] = 1 \Leftrightarrow [a_{13} a_{12}^{-1}, b_{21} b_{12}^{-1} b_{11}] = 1,
$$
$$
[\lambda_{41}, \lambda_{23}] = 1 \Leftrightarrow [b_{12}^{-1} b_{13}, a_{11} a_{12}^{-1} a_{21}] = 1,
$$
$$
[\lambda_{41}, \lambda_{32}] = 1 \Leftrightarrow [b_{12}^{-1} b_{13}, b_{21} b_{12}^{-1} b_{11}] = 1.
$$
The first relation
gives the following conjugation rule by $a_{11}$:
\begin{equation} \label{r4.21}
\left( a_{13} a_{12}^{-1} \right)^{a_{11}} = \left( a_{13} a_{12}^{-1} \right)^{a_{21}^{-1} a_{12}}.
\end{equation}
The third relation
gives the following conjugation rule
\begin{equation} \label{r4.22}
\left( b_{12}^{-1} b_{13} \right)^{a_{11}} = \left( b_{12}^{-1} b_{13} \right)^{a_{21}^{-1} a_{12}}.
\end{equation}
Since $b_{ij} = c_{ij} a_{ij}^{-1}$, we have
$$
b_{21} b_{12}^{-1} b_{11} = c_{21} a_{21}^{-1} a_{12} c_{12}^{-1} c_{11} a_{11}^{-1},
$$
and the second relation:
$$
[a_{13} a_{12}^{-1}, b_{21} b_{12}^{-1} b_{11}] = [a_{13} a_{12}^{-1}, c_{21} a_{21}^{-1} a_{12} c_{12}^{-1} c_{11} a_{11}^{-1}] = 1
$$
gives the following relation
$$
\left( a_{13} a_{12}^{-1} \right)^{a_{11} c_{11}^{-1} c_{12} a_{12}^{-1} a_{21} c_{21}^{-1}} = a_{13} a_{12}^{-1}.
$$
Since $c_{12} a_{12}^{-1} \cdot a_{21} c_{21}^{-1} = a_{21} c_{21}^{-1} \cdot c_{12} a_{12}^{-1}$, then
$$
\left( a_{13} a_{12}^{-1} \right)^{a_{11} c_{11}^{-1} a_{21} c_{21}^{-1} \cdot c_{12}} = (a_{13} a_{12}^{-1})^{ a_{12}}.
$$
Using (\ref{r4.21})
rewrite this relation in the form
$$
\left( a_{13} a_{12}^{-1} \right)^{a_{21}^{-1} a_{12} c_{11}^{-1} a_{21} c_{21}^{-1} c_{12}} = \left( a_{13} a_{12}^{-1} \right)^{ a_{12}}.
$$
Since $a_{21}^{-1} a_{12} = a_{12} a_{21}^{-1}$, it is equivalent to
$$
\left( a_{12}^{-1} a_{13} \right)^{a_{21}^{-1} c_{11}^{-1} a_{21} c_{21}^{-1} c_{12}} = a_{12}^{-1} a_{13}.
$$
Since $[a_{21}, c_{11}] = 1$, then
$$
\left( a_{12}^{-1} a_{13} \right)^{ c_{11}^{-1} c_{21}^{-1} c_{12}} = a_{12}^{-1} a_{13}.
$$
Using a conjugation formula by $c_{11}^{-1}$ we get
$$
\left( a_{12}^{-c_{12}^{-1} c_{21}} a_{13}^{c_{13}^{-1} c_{22}} \right)^{c_{21}^{-1} c_{12}} = a_{12}^{-1} a_{13}.
$$
Hence
\begin{equation} \label{r3}
a_{13}^{c_{13}^{-1} c_{22}} = a_{13}^{c_{12}^{-1} c_{21}}.
\end{equation}
Similarly, the fourth relation
has the form
$$
\left( b_{12}^{-1} b_{13} \right)^{a_{11} c_{11}^{-1} c_{12} a_{12}^{-1} a_{21} c_{21}^{-1}} = b_{12}^{-1} b_{13}.
$$
Using (\ref{r4.22})
rewrite the fourth relation in the form
$$
\left( b_{12}^{-1} b_{13} \right)^{a_{21}^{-1} a_{12} c_{11}^{-1} c_{12} a_{12}^{-1} a_{21} c_{21}^{-1}} = b_{12}^{-1} b_{13}.
$$
Using the relation $c_{12} a_{12}^{-1} \cdot a_{21} c_{21}^{-1} = a_{21} c_{21}^{-1} \cdot c_{12} a_{12}^{-1}$ and $b_{12}^{-1} b_{13} = a_{12} c_{12}^{-1} c_{13} a_{13}^{-1} $ we can present this relation in the form
$$
\left( c_{12}^{-1} c_{13} a_{13}^{-1} a_{12} \right)^{a_{21}^{-1} c_{11}^{-1} a_{21} c_{21}^{-1} c_{12}} = c_{12}^{-1} c_{13} a_{13}^{-1} a_{12}.
$$
Since $[a_{21}, c_{11}] = 1$, we have
$$
\left( c_{12}^{-1} c_{13} a_{13}^{-1} a_{12} \right)^{ c_{11}^{-1} c_{21}^{-1} c_{12}} = c_{12}^{-1} c_{13} a_{13}^{-1} a_{12}.
$$
Using the formulas of conjugating by $c_{11}^{-1}$ we get
$$
c_{13}^{c_{22}} a_{13}^{-c_{13}^{-1} c_{22}} = \left( c_{13} a_{13}^{-1} \right)^{c_{12}^{-1} c_{21}}.
$$
Using (\ref{r3}) we have
$$
c_{13}^{c_{22}} = c_{13}^{c_{12}^{-1} c_{21}}.
$$
Using a conjugation formula by $c_{12}^{-1}$
$$
c_{13}^{c_{22}} = \left( c_{13}^{c_{31}} \right)^{c_{21}}.
$$
Conjugating both sides by $c_{21}^{-1}$
$$
c_{22}^{-c_{21}^{-1}} c_{13}^{c_{21}^{-1}} c_{22}^{c_{21}^{-1}} = c_{13}^{c_{31}}.
$$
Using the conjugation rules by $c_{21}^{-1}$
$$
c_{22}^{-c_{31}} c_{13}^{c_{21}^{-1}} c_{22}^{c_{31}} = c_{13}^{c_{31}},
$$
or
\begin{equation} \label{r41}
c_{13}^{c_{21}^{-1}} = c_{13}^{c_{22}^{-1} c_{31}}.
\end{equation}
\smallskip
Now come back to the relation (\ref{r3}) and write it in the form
$$
a_{13}^{c_{13}^{-1} c_{22} c_{21}^{-1}} = a_{13}^{c_{12}^{-1}}.
$$
Using the conjugation formulas to rewrite the left-hand side, we arrive at the relation
$$
(c_{22}^{-1})^{c_{21}^{-1}} c_{13}^{c_{21}^{-1}} a_{13}^{c_{21}^{-1}} c_{13}^{-c_{21}^{-1}} c_{22}^{c_{21}^{-1}} = a_{13}^{c_{12}^{-1}}.
$$
Using the conjugation rules, we get
$$
c_{22}^{-c_{31}} c_{13}^{c_{22}^{-1} c_{31}} a_{13}^{c_{21}^{-1}} c_{13}^{-c_{22}^{-1} c_{31}} c_{22}^{c_{31}} = a_{13}^{c_{13}^{-1} c_{31}}.
$$
It is equivalent to
$$
a_{13}^{c_{21}^{-1}} = c_{13}^{-c_{22}^{-1} c_{31}} c_{22}^{c_{31}} a_{13}^{c_{13}^{-1} c_{31}} c_{22}^{-c_{31}} c_{13}^{c_{22}^{-1} c_{31}}
$$
and after cancellations
$$
a_{13}^{c_{21}^{-1}} = a_{13}^{c_{22}^{-1} c_{31}}.
$$
Since $b_{13} = c_{13} a_{13}^{-1}$, then using the last relation and relation (\ref{r41}), we get
$$
b_{13}^{c_{21}^{-1}} = b_{13}^{c_{22}^{-1} c_{31}}.
$$
Hence, we have proved
\begin{lem}
The commutativity relations $[\lambda_{14}^*, \lambda_{23}^*] = 1$ in $VP_4$ give the following conjugation formulas by $a_{11}$:
$$
\left( a_{13} a_{12}^{-1} \right)^{a_{11}} = \left( a_{13} a_{12}^{-1} \right)^{ a_{21}^{-1} a_{12}},~~~
\left( b_{12}^{-1} b_{13} \right)^{a_{11}} = \left( b_{12}^{-1} b_{13} \right)^{a_{21}^{-1} a_{12}},
$$
and the conjugation formulas by $c_{21}^{-1}$:
$$
a_{13}^{c_{21}^{-1}} = a_{13}^{c_{22}^{-1} c_{31}},~~~
b_{13}^{c_{21}^{-1}} = b_{13}^{c_{22}^{-1} c_{31}}.
$$
\end{lem}
\subsection{$VP_4$ as an HNN-extension} From the commutativity relations in $VP_4$ we have obtained the following formulas for conjugation by the element $a_{11}$:
$$
\left( a_{21} a_{22}^{-1} a_{31} \right)^{a_{11}} = a_{21} a_{22}^{-1} a_{31},~~~\left( b_{31} b_{22}^{-1} b_{21} \right)^{a_{11}} = b_{31} b_{22}^{-1} b_{21},
$$
$$
\left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{a_{11}} = \left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{a_{12}},~~~
\left( b_{21}^{-1} b_{22} b_{13}^{-1} b_{12} \right)^{a_{11}} = \left( b_{21}^{-1} b_{22} b_{13}^{-1} b_{12}\right)^{a_{12}},
$$
$$
\left( a_{13} a_{12}^{-1} \right)^{a_{11}} = \left( a_{13} a_{12}^{-1} \right)^{ a_{21}^{-1} a_{12}},~~~
\left( b_{12}^{-1} b_{13} \right)^{a_{11}} = \left( b_{12}^{-1} b_{13} \right)^{a_{21}^{-1} a_{12}}.
$$
Denote
$$
A_a = \langle a_{21} a_{22}^{-1} a_{31},~~a_{12} a_{13}^{-1} a_{22} a_{21}^{-1},~~ a_{13} a_{12}^{-1} \rangle.
$$
We see that
$$
A_a = \langle \lambda_{34},~~\lambda_{24},~~ \lambda_{14} \rangle.
$$
Denote
$$
A_b = \langle b_{31} b_{22}^{-1} b_{21},~~b_{21}^{-1} b_{22} b_{13}^{-1} b_{12},~~ b_{12}^{-1} b_{13} \rangle.
$$
We see that
$$
A_b = \langle \lambda_{43},~~\lambda_{42},~~ \lambda_{41} \rangle.
$$
Also denote
$$
B_a = \langle a_{21} a_{22}^{-1} a_{31},~~\left( a_{12} a_{13}^{-1} a_{22} a_{21}^{-1} \right)^{a_{12}},~~ \left( a_{13} a_{12}^{-1} \right)^{ a_{21}^{-1} a_{12}} \rangle = \langle \lambda_{34},~~\lambda_{24}^{\lambda_{12}},~~ \lambda_{14}^{\lambda_{12}} \rangle
$$
and
$$
B_b = \langle b_{31} b_{22}^{-1} b_{21},~~\left( b_{21}^{-1} b_{22} b_{13}^{-1} b_{12}\right)^{a_{12}},~~ \left( b_{12}^{-1} b_{13} \right)^{a_{21}^{-1} a_{12}} \rangle = \langle \lambda_{43},~~\lambda_{42}^{\lambda_{12}},~~ \lambda_{41}^{\lambda_{12}}\rangle.
$$
We see that $B_a = A_a^{a_{11}}$, $B_b = A_b^{a_{11}}$. Put $A = \langle A_a, A_b \rangle$, $B = \langle B_a, B_b \rangle$. Since $B$ is conjugate to $A$, the subgroup $A$ is isomorphic to $B$, and we get
\begin{thm} \label{t4.11}
$VP_4$ is an HNN-extension with base group
$$
G_4 = \langle c_{11}, a_{21}, a_{12}, c_{21}, c_{12}, a_{31}, a_{22}, a_{13}, b_{31}, b_{22}, b_{13} \rangle
$$
associated subgroups $A$ and $B$, and stable letter $a_{11}$. The group $G_4$ is defined by the following relations
(here $\varepsilon = \pm 1$):
1) conjugations by $c_{11}^{\varepsilon}$
$$
a_{21}^{c_{11}^{\varepsilon}} = a_{21},~~~a_{12}^{c_{11}^{\varepsilon}} = a_{12}^{c_{12}^{\varepsilon} c_{21}^{-\varepsilon}},~~~c_{21}^{c_{11}^{\varepsilon}} = c_{21},~~~c_{12}^{c_{11}^{\varepsilon}} = c_{12}^{c_{21}^{-\varepsilon}},
$$
$$
a_{31}^{c_{11}^{\varepsilon}} = a_{31},~~~a_{22}^{c_{11}^{\varepsilon}} = a_{22},~~~
a_{13}^{c_{11}^{\varepsilon}} = a_{13}^{c_{13}^{\varepsilon} c_{22}^{-\varepsilon}},~~~b_{31}^{c_{11}^{\varepsilon}} = b_{31},~~~
b_{22}^{c_{11}^{\varepsilon}} = b_{22},~~~
b_{13}^{c_{11}^{\varepsilon}} = b_{13}^{c_{13}^{\varepsilon} c_{22}^{-\varepsilon}},
$$
2) conjugations by $c_{21}^{\varepsilon}$
$$
a_{31}^{c_{21}^{\varepsilon}} = a_{31},~~~a_{22}^{c_{21}^{\varepsilon}} = a_{22}^{c_{22}^{\varepsilon} c_{31}^{-\varepsilon}},~~~
a_{13}^{c_{21}^{\varepsilon}} = a_{13}^{c_{22}^{\varepsilon} c_{31}^{-\varepsilon}},~~~b_{31}^{c_{21}^{\varepsilon}} = b_{31},~~~
b_{22}^{c_{21}^{\varepsilon}} = b_{22}^{c_{22}^{\varepsilon} c_{31}^{-\varepsilon}},~~~
b_{13}^{c_{21}^{\varepsilon}} = b_{13}^{c_{22}^{\varepsilon} c_{31}^{-\varepsilon}},
$$
3) conjugations by $c_{12}^{\varepsilon}$
$$
a_{31}^{c_{12}^{\varepsilon}} = a_{31},~~~
a_{13}^{c_{12}^{\varepsilon}} = a_{13}^{c_{13}^{\varepsilon} c_{31}^{-\varepsilon}},~~~b_{31}^{c_{12}^{\varepsilon}} = b_{31},~~~
b_{13}^{c_{12}^{\varepsilon}} = b_{13}^{c_{13}^{\varepsilon} c_{31}^{-\varepsilon}},
$$
$$
a_{22}^{c_{12}^{-1}} = a_{13}^{c_{13}^{-1} c_{31}} a_{13}^{-c_{13}^{-1} c_{22}} a_{22} [c_{21}, c_{12}^{-1}],~~
a_{22}^{c_{12}} = [c_{12}, c_{21}^{-1}] a_{13}^{-c_{13} c_{22}^{-1}} a_{22} a_{13}^{c_{13} c_{31}^{-1}},
$$
$$
b_{22}^{c_{12}^{-1}} = b_{13}^{c_{13}^{-1} c_{31}} b_{22} b_{13}^{-c_{13}^{-1} c_{22}} [c_{21}, c_{12}^{-1}],~~
b_{22}^{c_{12}} = [c_{12}, c_{21}^{-1}] b_{22} b_{13}^{-c_{13} c_{22}^{-1}} b_{13}^{c_{13} c_{31}^{-1}}.
$$
4) commutativity relations
$$
[a_{21}, a_{12}] = [a_{31}, a_{22}] = [a_{31}, a_{13}] = [a_{22}, a_{13}] = 1,
$$
$$
[c_{21} a_{21}^{-1}, c_{12} a_{12}^{-1}] = [b_{31}, b_{22}] = [b_{31}, b_{13}] = [b_{22}, b_{13}] = 1.
$$
\end{thm}
Hence, to find defining relations of $T_3$ we need to study $G_4$.
Define the following subgroup of $G_4$:
$$
Q = \langle a_{21}, a_{12}, c_{21}, c_{12}, a_{31}, a_{22}, a_{13}, b_{31}, b_{22}, b_{13} \rangle.
$$
It follows from relations 1) of Theorem \ref{t4.11} that $Q$ is normal in $G_4$ and is the kernel of the homomorphism
$$
G_4 \longrightarrow \langle c_{11} \rangle
$$
which sends $c_{11}$ to $c_{11}$ and sends all other generators to 1.
Similarly to the case of $VP_3$, one can see that $Q$ is defined by the relations obtained from
relations 2) -- 4) of Theorem \ref{t4.11} by conjugation by $c_{11}^k$, $k \in \mathbb{Z}$. Using the defining relations of $G_4$, one can prove that all conjugates of relations 2) -- 3) are equivalent to relations 2) -- 3). Hence,
\begin{lem} \label{l5.13}
The group $Q$ is defined by relations 2) -- 3) of Theorem \ref{t4.11} and relations
$$
[a_{21}, a_{12}]^{c_{11}^k} = [a_{31}, a_{22}]^{c_{11}^k} = [a_{31}, a_{13}]^{c_{11}^k} = [a_{22}, a_{13}]^{c_{11}^k} = 1,
$$
$$
[c_{21} a_{21}^{-1}, c_{12} a_{12}^{-1}]^{c_{11}^k} = [b_{31}, b_{22}]^{c_{11}^k} = [b_{31}, b_{13}]^{c_{11}^k} = [b_{22}, b_{13}]^{c_{11}^k} = 1,
$$
that can be written in the form
$$
[a_{21}, a_{12}^{c_{12}^k c_{21}^{-k}}] = [a_{31}, a_{22}] = [a_{31}, a_{13}^{c_{13}^k c_{22}^{-k}}] = [a_{22}, a_{13}^{c_{13}^k c_{22}^{-k}}] = 1,
$$
$$
[c_{21} a_{21}^{-1}, c_{12}^{c_{21}^{-k}} a_{12}^{-c_{12}^k c_{21}^{-k}}] = [b_{31}, b_{22}] = [b_{31}, b_{13}^{c_{13}^k c_{22}^{-k}}] = [b_{22}, b_{13}^{c_{13}^k c_{22}^{-k}}] = 1,
$$
for all integer numbers $k$.
\end{lem}
Now we can prove the main result of the present paper.
\begin{thm}\label{T_3}
The group
$$
T_3 = \langle a_{31},~~a_{22},~~a_{13},~~b_{31},~~b_{22},~~b_{13} \rangle
$$
is defined by relations
$$
[a_{31}, a_{22}^{c_{22}^{m} c_{31}^{-m}}] = [a_{31}, a_{13}^{c_{13}^{k} c_{22}^{m-k} c_{31}^{-m}}] = [a_{22}^{c_{22}^m c_{31}^{-m}}, a_{13}^{c_{13}^k c_{22}^{m-k} c_{31}^{-m}}] = 1,
$$
$$
[b_{31}, b_{22}^{c_{22}^{m} c_{31}^{-m}}] = [b_{31}, b_{13}^{c_{13}^{k} c_{22}^{m-k} c_{31}^{-m}}] = [b_{22}^{c_{22}^m c_{31}^{-m}}, b_{13}^{c_{13}^k c_{22}^{m-k} c_{31}^{-m}}] = 1,
$$
where $k, m \in \mathbb{Z}$.
\end{thm}
\begin{proof}
In Lemma \ref{l5.13} we found a set of defining relations for $Q$. It follows from this set that $Q$ is the free product of the subgroups
$Q_1 = \langle c_{21}, c_{12}, a_{21}, a_{12} \rangle$ and $Q_2 = \langle c_{21}, c_{12}, a_{31}, a_{22}, a_{13}, b_{31}, b_{22}, b_{13} \rangle$ with amalgamated
subgroup $Q_c = \langle c_{21}, c_{12} \rangle$. Hence, we have a set of defining relations for $Q_2$: it consists of the relations of $Q$ that contain only generators of $Q_2$.
Now consider a homomorphism
$$
\varphi : Q_2 \longrightarrow \langle c_{21} \rangle,
$$
that is defined by the formulas
$$
\varphi (c_{21}) = c_{21},~~ \varphi (c_{12}) = \varphi (a_{31}) = \varphi (a_{22}) = \varphi (a_{13}) = \varphi (b_{31}) = \varphi (b_{22}) = \varphi (b_{13}) = 1.
$$
To find a presentation of $Ker(\varphi)$ take the set of coset representatives of this kernel in $Q_2$:
$$
\Lambda = \{ c_{21}^m ~|~ m \in \mathbb{Z} \}.
$$
Then $Ker(\varphi)$ is generated by elements
$$
c_{12}^{c_{21}^m},~~a_{31}^{c_{21}^m},~~a_{22}^{c_{21}^m},~~a_{13}^{c_{21}^m},~~
b_{31}^{c_{21}^m},~~b_{22}^{c_{21}^m},~~b_{13}^{c_{21}^m}.
$$
Let us denote
$$
d_m = c_{12}^{c_{21}^m},~~~m \in \mathbb{Z}.
$$
Using the conjugation formulas by $c_{21}$ from Theorem \ref{t4.11}, we get
$$
a_{31}^{c_{21}^m} = a_{31},~~a_{22}^{c_{21}^m} = a_{22}^{c_{22}^m c_{31}^{-m}},~~~a_{13}^{c_{21}^m} = a_{13}^{c_{22}^m c_{31}^{-m}},
$$
$$
b_{31}^{c_{21}^m} = b_{31},~~b_{22}^{c_{21}^m} = b_{22}^{c_{22}^m c_{31}^{-m}},~~~b_{13}^{c_{21}^m} = b_{13}^{c_{22}^m c_{31}^{-m}}.
$$
Hence $Ker(\varphi)$ is generated by elements
$$
d_m, m \in \mathbb{Z}, ~a_{31}, a_{22}, a_{13}, b_{31}, b_{22}, b_{13}.
$$
To find a set of defining relations for $Ker(\varphi)$ we have to take the following relations in $Q$:
$$
a_{31}^{c_{12}} = a_{31},~~~
a_{13}^{c_{12}} = a_{13}^{c_{13} c_{31}^{-1}},~~~b_{31}^{c_{12}} = b_{31},~~~
b_{13}^{c_{12}} = b_{13}^{c_{13} c_{31}^{-1}},
$$
$$
a_{22}^{c_{12}^{-1}} = a_{13}^{c_{13}^{-1} c_{31}} a_{13}^{-c_{13}^{-1} c_{22}} a_{22} [c_{21}, c_{12}^{-1}],~~
b_{22}^{c_{12}^{-1}} = b_{13}^{c_{13}^{-1} c_{31}} b_{22} b_{13}^{-c_{13}^{-1} c_{22}} [c_{21}, c_{12}^{-1}],
$$
$$
[a_{21}, a_{12}^{c_{12}^k c_{21}^{-k}}] = [a_{31}, a_{22}] = [a_{31}, a_{13}^{c_{13}^k c_{22}^{-k}}] = [a_{22}, a_{13}^{c_{13}^k c_{22}^{-k}}] = 1,
$$
$$
[c_{21} a_{21}^{-1}, c_{12}^{c_{21}^{-k}} a_{12}^{-c_{12}^k c_{21}^{-k}}] = [b_{31}, b_{22}] = [b_{31}, b_{13}^{c_{13}^k c_{22}^{-k}}] = [b_{22}, b_{13}^{c_{13}^k c_{22}^{-k}}] = 1,
$$
and conjugate them by $c_{21}^m$.
At first consider the relations
$$
a_{22}^{c_{12}^{-1}} = a_{13}^{c_{13}^{-1} c_{31}} a_{13}^{-c_{13}^{-1} c_{22}} a_{22} [c_{21}, c_{12}^{-1}],~~
b_{22}^{c_{12}^{-1}} = b_{13}^{c_{13}^{-1} c_{31}} b_{22} b_{13}^{-c_{13}^{-1} c_{22}} [c_{21}, c_{12}^{-1}].
$$
These relations are equivalent to the following relations from Lemma \ref{l4.81}
\begin{equation} \label{r5.11}
\left( a_{13}^{-1} a_{22} \right)^{ c_{12}^{-1}} = \left( a_{13}^{-1} a_{22} \right)^{ c_{11}^{-1}} [c_{21}, c_{12}^{-1}],
\end{equation}
\begin{equation} \label{r5.12}
\left( b_{22} b_{13}^{-1} \right)^{ c_{12}} = [c_{12}, c_{21}^{-1}] \left( b_{22} b_{13}^{-1} \right)^{ c_{11}}.
\end{equation}
Using the formulas of conjugations by $c_{11}^{-1}$, rewrite the relation (\ref{r5.11}) in the form
$$
d_0 \, a_{13}^{-1} \, a_{22} = a_{13}^{-c_{13}^{-1} c_{22}} \, a_{22} \, d_1.
$$
Conjugating it by $c_{21}^m$, we get
$$
d_m \, a_{13}^{-c_{22}^{m} c_{31}^{-m}} a_{22}^{c_{22}^{m} c_{31}^{-m}} = a_{13}^{-c_{13}^{-1} c_{22}^{m} c_{22} c_{31}^{-m}} a_{22}^{c_{22}^{m} c_{31}^{-m}} d_{m+1}.
$$
Conjugating both sides of this relation by $c_{31}^{m}$, we have
$$
d_m \, \left( a_{13}^{-1} a_{22}\right)^{c_{22}^{m}} = a_{13}^{-c_{13}^{-1} c_{22}^{m+1}} a_{22}^{c_{22}^{m}} d_{m+1}.
$$
From these relations we have
$$
d_{m+1} = \left( a_{22}^{-1} a_{13}^{c_{13}^{-1} c_{22}} \right)^{c_{22}^{m}} \, d_m \left( a_{13}^{-1} a_{22} \right)^{c_{22}^{m}}~~\mbox{for}~m \geq 0,
$$
$$
d_{m} = \left( a_{13}^{-c_{13}^{-1} c_{22}} a_{22} \right)^{c_{22}^{m}} \, d_{m+1} \left( a_{22}^{-1} a_{13} \right)^{c_{22}^{m}}~~\mbox{for}~m < 0.
$$
Analogously, from (\ref{r5.12}) we get the following formulas
$$
d_{m} = \left( b_{13} b_{22}^{-1} \right)^{c_{22}^{m}} \, d_{m-1} \left( b_{22} b_{13}^{-c_{13} c_{22}^{-1}} \right)^{c_{22}^{m}}~~\mbox{for}~m \geq 1,
$$
$$
d_{m-1} = \left( b_{22} b_{13}^{-1} \right)^{c_{22}^{m}} \, d_{m} \left( b_{13}^{c_{13} c_{22}^{-1}} b_{22}^{-1} \right)^{c_{22}^{m}}~~\mbox{for}~m < 1.
$$
For further calculations introduce the notations
$$
A_1^{(l)} = \left( a_{22}^{-1} \, a_{13}^{c_{13}^{-1} c_{22}} \right)^{c_{22}^{l}},~~~A_2^{(l)} = \left( a_{13}^{-1} \, a_{22} \right)^{c_{22}^{l}},~~\overline{A}_i^{(l)} = \left( A_i^{(l)} \right)^{-1},
$$
$$
B_1^{(l)} = \left( b_{22} \, b_{13}^{-c_{13} c_{22}^{-1}} \right)^{c_{22}^{l}},~~~B_2^{(l)} = \left( b_{13} \, b_{22}^{-1} \right)^{c_{22}^{l}},~~\overline{B}_i^{(l)} = \left( B_i^{(l)} \right)^{-1},
$$
for all integers $l$.
Using these notations, we express $d_m$, $m \not= 0$, as words that depend only on $d_0^{\pm 1}$ and the other generators of $Ker(\varphi)$. Using induction on $m$ we get:
for $m \geq 1$
$$
d_m = A_1^{(m)} \, A_1^{(m-1)} \ldots A_1^{(0)} \, d_0 \, A_2^{(0)} \, A_2^{(1)} \ldots A_2^{(m)},
$$
$$
d_m = B_2^{(m)} \, B_2^{(m-1)} \ldots B_2^{(1)}\, d_0 \, B_1^{(1)} \, B_1^{(2)} \ldots B_1^{(m)},
$$
for $m \leq -1$
$$
d_m = \overline{A}_1^{(m)} \overline{A}_1^{(m+1)} \ldots \overline{A}_1^{(-1)} \, d_0 \, \overline{A}_2^{(-1)} \overline{A}_2^{(-2)} \ldots \overline{A}_2^{(m)},
$$
$$
d_m = \overline{B}_2^{(m+1)} \overline{B}_2^{(m+2)} \ldots \overline{B}_2^{(0)} \, d_0 \, \overline{B}_1^{(0)} \overline{B}_1^{(-1)} \ldots \overline{B}_1^{(m+1)}.
$$
Since the left-hand sides of these relations are equal, equating the right-hand sides gives the relations:
for $m \geq 1$
$$
d_0^{-1} \left( \overline{A}_1^{(0)} \, \overline{A}_1^{(1)} \ldots \overline{A}_1^{(m)} \cdot B_2^{(m)} \, B_2^{(m-1)} \ldots B_2^{(1)} \right) d_0 =
$$
$$
=A_2^{(0)} \, A_2^{(1)} \ldots A_2^{(m)} \cdot \overline{B}_1^{(m)} \overline{B}_1^{(m-1)} \ldots \overline{B}_1^{(1)},
$$
for $m \leq -1$
$$
d_0^{-1} \left( A_1^{(-1)} \, A_1^{(-2)} \ldots A_1^{(m)} \cdot \overline{ B}_2^{(m+1)} \, \overline{B}_2^{(m+2)} \ldots \overline{B}_2^{(0)} \right) d_0 =
$$
$$
=\overline{A}_2^{(-1)} \, \overline{A}_2^{(-2)} \ldots \overline{A}_2^{(m)} \cdot B_1^{(m+1)} B_1^{(m+2)} \ldots B_1^{(0)}.
$$
Let us consider other relations.
1) Take the relation $c_{12}^{-1} a_{31} c_{12} = a_{31}$. Conjugating it by $c_{21}^m$ we get $d_{m}^{-1} a_{31} d_{m} = a_{31}$. Substituting the expressions for $d_m$, we get:
$$
d_0^{-1} \left( \overline{A}_1^{(0)} \overline{A}_1^{(1)} \ldots \overline{A}_1^{(m)} \, a_{31} \, A_1^{(m)} A_1^{(m-1)} \ldots A_1^{(0)} \right) d_0 =
$$
$$
= A_2^{(0)} \, A_2^{(1)} \ldots A_2^{(m)} \, a_{31} \, \overline{A}_2^{(m)} \overline{A}_2^{(m-1)} \ldots \overline{A}_2^{(0)},~~\mbox{for}~~m \geq 1,
$$
and
$$
d_0^{-1} \left( A_1^{(-1)} A_1^{(-2)} \ldots A_1^{(m)} \, a_{31} \, \overline{A}_1^{(m)} \overline{A}_1^{(m+1)} \ldots \overline{A}_1^{(-1)} \right) d_0 =
$$
$$
= \overline{A}_2^{(-1)} \, \overline{A}_2^{(-2)} \ldots \overline{A}_2^{(m)} \, a_{31} \, A_2^{(m)} A_2^{(m+1)} \ldots A_2^{(-1)},~~\mbox{for}~~m \leq -1.
$$
Analogously, from relation $c_{12}^{-1} b_{31} c_{12} = b_{31}$ we get relations
$$
d_0^{-1} \left( \overline{B}_2^{(1)} \overline{B}_2^{(2)} \ldots \overline{B}_2^{(m)} \, b_{31} \, B_2^{(m)} B_2^{(m-1)} \ldots B_2^{(1)} \right) d_0 =
$$
$$
= B_1^{(1)} \, B_1^{(2)} \ldots B_1^{(m)} \, b_{31} \, \overline{B}_1^{(m)} \overline{B}_1^{(m-1)} \ldots \overline{B}_1^{(1)},~~\mbox{for}~~m \geq 1,
$$
and
$$
d_0^{-1} \left( B_2^{(0)} B_2^{(-1)} \ldots B_2^{(m+1)} \, b_{31} \, \overline{B}_2^{(m+1)} \overline{B}_2^{(m+2)} \ldots \overline{B}_2^{(0)} \right) d_0 =
$$
$$
= \overline{B}_1^{(0)} \, \overline{B}_1^{(-1)} \ldots \overline{B}_1^{(m+1)} \, b_{31} \, B_1^{(m+1)} B_1^{(m+2)} \ldots B_1^{(0)},~~\mbox{for}~~m \leq -1.
$$
2) Take the relation
$$
a_{13}^{c_{12}} = a_{13}^{c_{13} c_{31}^{-1}}.
$$
Conjugating it by $c_{21}^m$ we get
$$
d_{m}^{-1} a_{13}^{c_{22}^m} d_{m} = a_{13}^{c_{13} c_{22}^m c_{31}^{-1}}.
$$
Substituting the expressions for $d_m$, we get:
$$
d_0^{-1} \left( \overline{A}_1^{(0)} \overline{A}_1^{(1)} \ldots \overline{A}_1^{(m)} \, a_{13}^{c_{22}^m} \, A_1^{(m)} A_1^{(m-1)} \ldots A_1^{(0)} \right) d_0 =
$$
$$
= A_2^{(0)} \, A_2^{(1)} \ldots A_2^{(m)} \, a_{13}^{c_{13} c_{22}^m c_{31}^{-1}} \, \overline{A}_2^{(m)} \overline{A}_2^{(m-1)} \ldots \overline{A}_2^{(0)},~~\mbox{for}~~m \geq 1,
$$
and
$$
d_0^{-1} \left( A_1^{(-1)} A_1^{(-2)} \ldots A_1^{(m)} \, a_{13}^{c_{22}^m} \, \overline{A}_1^{(m)} \overline{A}_1^{(m+1)} \ldots \overline{A}_1^{(-1)} \right) d_0 =
$$
$$
= \overline{A}_2^{(-1)} \, \overline{A}_2^{(-2)} \ldots \overline{A}_2^{(m)} \, a_{13}^{c_{13} c_{22}^m c_{31}^{-1}} \, A_2^{(m)} A_2^{(m+1)} \ldots A_2^{(-1)},~~\mbox{for}~~m \leq -1.
$$
Analogously, the relation
$$
b_{13}^{c_{12}} = b_{13}^{c_{13} c_{31}^{-1}}
$$
is equivalent to the relations
$$
d_0^{-1} \left( \overline{B}_2^{(1)} \overline{B}_2^{(2)} \ldots \overline{B}_2^{(m)} \, b_{13}^{c_{22}^m} \, B_2^{(m)} B_2^{(m-1)} \ldots B_2^{(1)} \right) d_0 =
$$
$$
= B_1^{(1)} \, B_1^{(2)} \ldots B_1^{(m)} \, b_{13}^{c_{13} c_{22}^m c_{31}^{-1}} \, \overline{B}_1^{(m)} \overline{B}_1^{(m-1)} \ldots \overline{B}_1^{(1)},~~\mbox{for}~~m \geq 1,
$$
and
$$
d_0^{-1} \left( B_2^{(0)} B_2^{(-1)} \ldots B_2^{(m+1)} \, b_{13}^{c_{22}^m} \, \overline{B}_2^{(m+1)} \overline{B}_2^{(m+2)} \ldots \overline{B}_2^{(0)} \right) d_0 =
$$
$$
= \overline{B}_1^{(0)} \, \overline{B}_1^{(-1)} \ldots \overline{B}_1^{(m+1)} \, b_{13}^{c_{13} c_{22}^m c_{31}^{-1}} \, B_1^{(m+1)} B_1^{(m+2)} \ldots B_1^{(0)},~~\mbox{for}~~m \leq -1.
$$
3) Conjugating the commutativity relations by $c_{21}^m$ we get
$$
[a_{31}, a_{22}^{c_{22}^{m} c_{31}^{-m}}] = [a_{31}, a_{13}^{c_{13}^{k} c_{22}^{m-k} c_{31}^{-m}}] = [a_{22}^{c_{22}^m c_{31}^{-m}}, a_{13}^{c_{13}^k c_{22}^{m-k} c_{31}^{-m}}] = 1,
$$
$$
[b_{31}, b_{22}^{c_{22}^{m} c_{31}^{-m}}] = [b_{31}, b_{13}^{c_{13}^{k} c_{22}^{m-k} c_{31}^{-m}}] = [b_{22}^{c_{22}^m c_{31}^{-m}}, b_{13}^{c_{13}^k c_{22}^{m-k} c_{31}^{-m}}] = 1.
$$
Now we show that $Ker (\varphi)$ is an HNN-extension with base group $T_3$ and stable letter $d_0$. Introduce the following subgroups $A$ and $B$ of $Ker (\varphi)$. The subgroup $A$ is generated by the elements:
for $m \geq 1$
$$
\overline{A}_1^{(0)} \, \overline{A}_1^{(1)} \ldots \overline{A}_1^{(m)} \cdot B_2^{(m)} \, B_2^{(m-1)} \ldots B_2^{(1)},
$$
$$
\overline{A}_1^{(0)} \overline{A}_1^{(1)} \ldots \overline{A}_1^{(m)} \, X_1 \, A_1^{(m)} A_1^{(m-1)} \ldots A_1^{(0)},~~\mbox{where}~X_1 \in \{ a_{31}, a_{13}^{c_{22}^m} \},
$$
$$
\overline{B}_2^{(1)} \overline{B}_2^{(2)} \ldots \overline{B}_2^{(m)} \, Y_1 \, B_2^{(m)} B_2^{(m-1)} \ldots B_2^{(1)},~~\mbox{where}~Y_1 \in \{ b_{31}, b_{13}^{c_{22}^m} \},
$$
for $m \leq -1$
$$
A_1^{(-1)} \, A_1^{(-2)} \ldots A_1^{(m)} \cdot \overline{ B}_2^{(m+1)} \, \overline{B}_2^{(m+2)} \ldots \overline{B}_2^{(0)},
$$
$$
A_1^{(-1)} A_1^{(-2)} \ldots A_1^{(m)} \, X_1 \, \overline{A}_1^{(m)} \overline{A}_1^{(m+1)} \ldots \overline{A}_1^{(-1)},
$$
$$
B_2^{(0)} B_2^{(-1)} \ldots B_2^{(m+1)} \, Y_1 \, \overline{B}_2^{(m+1)} \overline{B}_2^{(m+2)} \ldots \overline{B}_2^{(0)}.
$$
The subgroup $B$ is generated by the elements:
for $m \geq 1$
$$
A_2^{(0)} \, A_2^{(1)} \ldots A_2^{(m)} \cdot \overline{B}_1^{(m)} \overline{B}_1^{(m-1)} \ldots \overline{B}_1^{(1)},
$$
$$
A_2^{(0)} \, A_2^{(1)} \ldots A_2^{(m)} \, X_2 \, \overline{A}_2^{(m)} \overline{A}_2^{(m-1)} \ldots \overline{A}_2^{(0)},~~\mbox{where}~X_2 \in \{ a_{31}, a_{13}^{c_{13} c_{22}^m c_{31}^{-1}} \},
$$
$$
B_1^{(1)} \, B_1^{(2)} \ldots B_1^{(m)} \, Y_2 \, \overline{B}_1^{(m)} \overline{B}_1^{(m-1)} \ldots \overline{B}_1^{(1)},~~\mbox{where}~Y_2 \in \{ b_{31}, b_{13}^{c_{13} c_{22}^m c_{31}^{-1}} \},
$$
for $m \leq -1$
$$
\overline{A}_2^{(-1)} \, \overline{A}_2^{(-2)} \ldots \overline{A}_2^{(m)} \cdot B_1^{(m+1)} B_1^{(m+2)} \ldots B_1^{(0)},
$$
$$
\overline{A}_2^{(-1)} \, \overline{A}_2^{(-2)} \ldots \overline{A}_2^{(m)} \, X_2 \, A_2^{(m)} A_2^{(m+1)} \ldots A_2^{(-1)},
$$
$$
\overline{B}_1^{(0)} \, \overline{B}_1^{(-1)} \ldots \overline{B}_1^{(m+1)} \, Y_2 \, B_1^{(m+1)} B_1^{(m+2)} \ldots B_1^{(0)}.
$$
The isomorphism $\psi : A \to B$ is defined by conjugation by $d_0$, and we see that all relations of $Ker(\varphi)$, excluding the commutativity relations from 3), define this conjugation. Hence,
$Ker(\varphi)$ is an HNN-extension with base group $T_3$, stable letter $d_0$, and associated subgroups $A$ and $B$:
$$
Ker(\varphi) = \langle T_3, d_0 ~|~rel(T_3),~~d_0^{-1} A d_0 = B, \psi \rangle.
$$
From the properties of HNN-extensions it follows that the set of defining relations $rel(T_3)$ is the set of commutativity relations from 3).
\end{proof}
% arXiv:2204.10004
\section{Introduction}
\subsection{Background}
A Seifert surface for an oriented knot $K\subset S^3$ is a connected compact oriented surface $F\subset S^3$ with $\partial F=K$. Recall \cite[Chapter~VIII]{Ro90} that the corresponding Seifert form is defined as
\[ \begin{array}{rcl} \h_1(F)\times \h_1(F)&\to & \mathbb{Z}\\
([\gamma],[\delta])&\mapsto & \operatorname{lk}(\gamma^+,\delta),\end{array}\]
where $\gamma^+$ denotes a positive push-off of $\gamma$ and $\operatorname{lk}$ denotes the linking number of disjoint oriented curves in $S^3$. Any choice of a basis for $\h_1(F)$
now defines a corresponding matrix, called a Seifert matrix. Seifert matrices
have played an important role in knot theory
ever since they were introduced by Herbert Seifert \cite{Se34}.
For example, a Seifert matrix~$A$ can be used to calculate the Alexander polynomial $\Delta_K(t)=\operatorname{det}(At-A^T)\in \mathbb{Z}[t^{\pm 1}]$ and
it can be used to define the
Levine-Tristram signature function $\sigma(K)\colon S^1\to \mathbb{Z}$
by setting $\sigma_z(K):=\operatorname{sign}(A(1-z)+A^T(1-\overline{z}))$.
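To make these formulas concrete, here is a short computation (an illustration of our own, not taken from any of the cited algorithms) of the Alexander polynomial and of one value of the Levine-Tristram signature, starting from a Seifert matrix of the trefoil; note that sign conventions for Seifert matrices vary between sources.

```python
import sympy as sp

t = sp.symbols('t')

# A Seifert matrix of the trefoil with respect to a standard genus-1
# Seifert surface (a common textbook choice).
A = sp.Matrix([[-1, 1],
               [0, -1]])

# Alexander polynomial: Delta_K(t) = det(A t - A^T).
delta = sp.expand((A * t - A.T).det())

def levine_tristram(A, z):
    """sigma_z(K) = sign(A(1-z) + A^T(1-conj(z))) for z on the unit circle;
    the matrix is Hermitian, so its spectrum is real."""
    H = A * (1 - z) + A.T * (1 - sp.conjugate(z))
    sig = 0
    for ev, mult in H.eigenvals().items():
        ev = sp.re(sp.simplify(ev))
        if ev > 0:
            sig += mult
        elif ev < 0:
            sig -= mult
    return sig

print(delta)                               # t**2 - t + 1
print(levine_tristram(A, sp.Integer(-1)))  # -2, the ordinary signature
```

Evaluating the signature at $z=-1$ recovers the classical signature $\operatorname{sign}(A+A^T)$.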
There are various algorithms for computing Seifert matrices for a knot
\cite{O'B02,Col16}. In particular, Julia Collins gave an algorithm to determine a Seifert matrix from a given braid description \cite{Col16}. An implementation of this algorithm is available online \cite{CKL16}.
Less known are the generalizations of Seifert surfaces and Seifert matrices to links.
Let $L=L_1\sqcup \dots\sqcup L_\mu\subset S^3$ be a $\mu$-colored oriented link, i.e.\ $L$ is a disjoint union of finitely many oriented knots that get grouped into $\mu$ sets.
Daryl Cooper \cite{Coo82} and David Cimasoni \cite{Ci04} introduced the notion of a C-complex for $L$. A C-complex consists, roughly speaking, of $\mu$ embedded compact oriented surfaces $S_1\cup \dots\cup S_\mu$ with $\partial S_i=L_i$ and a few restrictions on how the $S_i$ are allowed to intersect. We postpone the definition to Section~\ref{section:c-complex}, but we hope that Figure~\ref{fig.clasp_complex} gives at least an idea of the concept.
\begin{figure}[h]
\centering
\def\svgwidth{0.45 \linewidth}
\input{clasp_complex.pdf_tex}
\caption{A C-complex for a $3$-component link in $\mu = 2$ colors. The Seifert surface for the trefoil knot (in red) and the Seifert surface for the Hopf link (in blue) intersect along a clasp in the center of the picture.}
\label{fig.clasp_complex}
\end{figure}
Given a C-complex $S$ and given a basis for $\h_1(S)$
one obtains for any choice of $\epsilon\in \{\pm 1\}^\mu$ a generalized Seifert matrix $A^\epsilon$.
These $2^\mu$ generalized Seifert matrices
can be used to define and calculate the Conway potential function, which in turn determines the multivariable Alexander polynomial \cite[p.~128]{Ci04} \cite{DMO21}. We recall the formulae in
Theorem~\ref{thm:conway-potential-function}.
Furthermore, David Cimasoni and Vincent Florens used generalized Seifert matrices to define a generalization of the Levine-Tristram signature function \cite{CF08}, namely the Cimasoni-Florens signature function $\sigma_L\colon (S^1)^\mu\to \mathbb{Z}$.
The generalized Seifert matrices can also be used to determine the Blanchfield pairing of a colored link \cite{CFT18,Con18}.
\subsection{Our results}
Our goal was to come up with an algorithm that computes generalized Seifert matrices of colored links, and implement it as a computer program. To formulate an algorithm one first has to settle on the input.
The usual proof of Alexander's Theorem (see for example the account of Burde-Zieschang-Heusener \cite[Proposition~2.12]{BZH14}) shows that every oriented $\mu$-colored link is the closure of a $\mu$-colored braid (i.e.\ a braid together with an integer $0\le k < \mu$ associated to each component of its closure). The description of braids as sequences of elementary crossings makes them a convenient input type for a computer program.
\begin{framed}
We give an algorithm that takes as input a colored braid, produces a C-complex~$S$ for its closure, and computes the associated generalized Seifert matrices $A^\epsilon$ (with respect to some basis of~$\h_1(S)$).
\end{framed}
In this paper, our algorithm is explained in natural language. Despite its geometrical flavor, it is formal enough to be implemented as a computer program:
\begin{framed}
We provide a computer program, called \clasper, implementing our algorithm. \clasper\ displays a visualization of the constructed C-complex and outputs a family of generalized Seifert matrices. It also computes the Conway potential function, the multivariable Alexander polynomial, and the Cimasoni-Florens signatures of the colored link.
\end{framed}
\clasper\ was programmed by the second author. A Windows installer, as well as the Python source code, can be downloaded at \url{https://github.com/Chinmaya-Kausik/py_knots}. Figure~\ref{fig.screenshots} shows the user interface of \clasper.
\begin{figure}[h]
\centering
\includegraphics[width=0.95 \linewidth]{clasper_interface.png}\medskip
\includegraphics[width=0.85 \linewidth]{Clasper_visualization.png}
\caption{The \clasper\ interface. Top: \clasper\ takes as input a braid given as its sequence of crossings, number of strands, and information about the coloring of its closure, and outputs a family of generalized Seifert matrices thereof. \clasper\ also computes the Conway potential function, multivariable Alexander polynomial, and Cimasoni-Florens signature (at a user-specified point). Bottom: \clasper\ displays a diagram of the input colored braid and a schematic of the C-complex from which the generalized Seifert matrices were produced. See Subsection~\ref{sec.fillin} for how to interpret such a schematic, and Figure~\ref{fig.multicolorspine} (right) for the C-complex represented in this screenshot.}
\label{fig.screenshots}
\end{figure}
\subsection{Organization of the article}
In Section~\ref{section:c-complex-matrix} we give the definition of C-complexes and we show how they can be used to define generalized Seifert matrices. Furthermore we recall how generalized Seifert matrices can be used to compute the multivariable Alexander polynomial and how they can be used to define the Cimasoni-Florens signatures.
In Section~\ref{section:algorithm} we explain our algorithm for computing generalized Seifert matrices for colored links given by a braid description.
Section~\ref{section:implementation} contains additional remarks on the technical details of the implementation by the second author.
\subsection*{Acknowledgments}
SF and JPQ were supported by the SFB 1085 ``higher invariants'' at the University of Regensburg, funded by the DFG. We also wish to thank Lars Munser for helpful conversations.
\section{C-complexes and generalized Seifert matrices}\label{section:c-complex-matrix}
\subsection{C-complexes}\label{section:c-complex}
\begin{definition}
A \textbf{C-complex} (where ``C'' is short for ``clasp'') for a colored oriented link $L =L_1\cup \dots \cup L_\mu\subset S^3$ is a collection of surfaces $S_1,\dots,S_\mu$ such that:
\begin{itemize}
\item each $S_i$ is a compact oriented surface in $S^3$ with $\partial S_i=L_i$, where we demand that the equality is an equality of oriented 1-manifolds,
\item every two distinct $S_i, S_j$ intersect transversely along a (possibly empty) finite disjoint union of intervals, each with one endpoint in~$L_i$ and the other in~$L_j$, but otherwise disjoint from~$L$. Such an intersection component is called a \textbf{clasp} -- see Figure~\ref{fig.clasp_complex} for an illustration,
\item the union $S_1\cup\dots\cup S_\mu$ is connected, and
\item there are no triple points: for distinct $i,j,k$, we have $S_i \cap S_j \cap S_k = \emptyset$.
\end{itemize}
\end{definition}
C-complexes were introduced for 2-component links by Cooper \cite{Coo82}
and in the general case by Cimasoni \cite{Ci04}.
Cimasoni \cite[Lemma~1]{Ci04} showed that every colored oriented link admits a C-complex.
In this paper we give a different proof: we outline an algorithm which takes as input a braid description of a colored oriented link and produces an explicit C-complex.
We remark that any two C-complexes for a colored link are related by a finite sequence of moves of certain types, a complete list of which was given by Davis-Martin-Otto \cite[Theorem~1.3]{DMO21} building on work of Cimasoni \cite{Ci04}. We will however not make use of this fact.
\subsection{Push-offs of curves and generalized Seifert matrices}\label{sec.seifertmatrices}
Let $L$ be a $\mu$-colored oriented link and let $S_1,\dots,S_\mu$ be a C-complex for $L$.
Following Cooper \cite{Coo82} and Cimasoni \cite{Ci04} we will now associate to this data generalized Seifert pairings and generalized Seifert matrices. The approach to defining generalized Seifert pairings and matrices is quite similar to the more familiar definition for knots recalled in the introduction.
First note that the orientation of the link $L$ induces an orientation on each Seifert surface~$S_i$ in our C-complex~$S$, which in turn induces an orientation of the normal bundle of~$S_i$. Now, each of the $2^\mu$ tuples of signs $\epsilon = (\epsilon_1, \ldots, \epsilon_\mu)$, with $\epsilon_i = \pm 1$, prescribes a way of pushing a point $p\in S$ off of~$S$: at the Seifert surface $S_i$ and away from the clasps, we let the sign~$\epsilon_i$ determine whether to push $p$ in the positive or negative direction of the normal bundle of~$S_i$, and at a clasp between $S_i$~and~$S_j$, we move~$p$ in the ``diagonal'' direction specified by $\epsilon_i, \epsilon_j$. The push-off of a path~$\gamma$ in~$S$ specified by a tuple $\epsilon \in \{\pm 1\}^\mu$ will be denoted by $\gamma^\epsilon$; see Figure~\ref{fig.clasp_pushoff}.
\begin{figure}[h]
\centering
\def\svgwidth{0.8\linewidth}
\input{clasp_pushoff.pdf_tex}
\caption{Push-offs of a path~$\gamma$ in a C-complex determined by a $\mu$-tuple~$\epsilon$. Arrows indicate the framings of the normal bundles of the Seifert surfaces induced by their orientations. In this example, the C-complex is comprised of $\mu = 2$ Seifert surfaces. We show all its push-offs~$\gamma^\epsilon$, determined by the four possible pairs $\epsilon \in \{\pm 1\}^\mu$.}
\label{fig.clasp_pushoff}
\end{figure}
Given any $\epsilon\in \{\pm 1\}^\mu$, Cimasoni defines the generalized Seifert pairing
\[ \begin{array}{rcl}\alpha^\epsilon\colon \h_1(S) \times \h_1(S)& \to &\mathbb{Z}\\
([\gamma],[\delta])&\mapsto & \operatorname{lk}(\gamma^\epsilon, \delta).\end{array}\]
Finally we pick a basis of $\h_1(S)$.
The collection of matrices $A^\epsilon$ of $\alpha^\epsilon$ with respect to this basis is called a collection of \textbf{generalized Seifert matrices} of $L$.
\subsection{The Conway potential function, the multivariable Alexander polynomial and generalized signatures}\label{sec.invariants}
In this section we turn to the discussion of several applications of generalized Seifert matrices.
In 1970 John Conway \cite{Con70} associated to any $\mu$-colored link~$L$ a rational function in $\mu$ variables, now called the Conway potential function $\nabla_L(t_1,\dots,t_\mu)\in \mathbb{Q}(t_1,\dots,t_\mu)$.
Cimasoni showed that the Conway potential function can be computed using generalized Seifert matrices. To state Cimasoni's Theorem we need the following definition.
\begin{definition}
Let $S=(S_1,\dots,S_\mu)$ be a C-complex for a $\mu$-colored link. We define the \textbf{sign of a clasp}~$C$ by choosing one of its endpoints~$P$, and then taking the sign of the intersection between the Seifert surface and the link component at~$P$. This definition is independent of the choice of endpoint~$P$; see Figure~\ref{fig.clasp_sign}.
The \textbf{sign of a C-complex}~$S$ is defined to be the product of the signs of all clasps, and denoted by~$\operatorname{sgn}(S)$.
\end{definition}
\begin{figure}[h]
\centering
\def\svgwidth{0.45 \linewidth}
\input{clasp_sign.pdf_tex}
\caption{The sign of a clasp~$C$. Link orientations and induced normal framings on the Seifert surfaces are indicated by arrows. Left: a negative, or left-handed clasp. Right: a positive, or right-handed clasp.}
\label{fig.clasp_sign}
\end{figure}
The following theorem of Cimasoni \cite[p.~128]{Ci04} gives a way to calculate the Conway potential function.
\begin{theorem}\label{thm:conway-potential-function}
Let $L$ be a $\mu$-colored oriented link and let $S$ be a C-complex for $L$.
Choose a basis for $\h_1(S)$ and use it to define the $2^\mu$ generalized Seifert matrices
$A^\epsilon$ with $\epsilon=(\epsilon_1,\dots,\epsilon_\mu)\in\{\pm 1\}^\mu$.
Then
\[\nabla_L(t_1,\dots,t_\mu) = \operatorname{sgn}(S)\cdot \prod\limits_{i=1}^\mu
(t_i-t_i^{-1})^{-1+\chi\big(\bigcup_{j\ne i} S_j \big)}\cdot \operatorname{det}\Big(-\sum_{\epsilon \in \{\pm 1\} ^\mu} A^\epsilon\cdot \epsilon_1 \dots \epsilon_\mu\cdot t_1^{\epsilon_1}\dots t_\mu^{\epsilon_\mu}\Big).\]
\end{theorem}
In fact the right-hand side of Theorem~\ref{thm:conway-potential-function} can be used as a definition of the Conway potential function \cite[Lemma~4]{Ci04} \cite[Lemma~2.1]{DMO21}. Also note that for $\mu=1$ the above definition differs from the definition of the one-variable Conway polynomial~$\nabla_L(z)$
given by Lickorish \cite{Li97} and LinkInfo \cite{LM22} by the substitution $z=t-t^{-1}$.
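To illustrate Theorem~\ref{thm:conway-potential-function}, here is a minimal Python sketch, independent of the \clasper\ source. It assumes the sympy library and takes the generalized Seifert matrices $A^\epsilon$, the Euler characteristics $\chi(\bigcup_{j\ne i}S_j)$, and the sign $\operatorname{sgn}(S)$ as input; all function and argument names are ours.

```python
import sympy as sp
from itertools import product

def conway_potential(seifert, chis, sign_S, mu):
    """Evaluate the right-hand side of Cimasoni's formula.

    seifert -- dict mapping each sign tuple eps in {+1,-1}^mu to the
               generalized Seifert matrix A^eps (a sympy Matrix)
    chis    -- chis[i] = Euler characteristic of the union of S_j, j != i
    sign_S  -- sign of the C-complex (product of the clasp signs)
    """
    t = sp.symbols(f"t1:{mu + 1}")          # symbols t1, ..., t_mu
    size = seifert[(1,) * mu].shape[0]
    M = sp.zeros(size, size)
    for eps in product((1, -1), repeat=mu):
        coeff = sp.Integer(1)
        for i in range(mu):
            coeff *= eps[i] * t[i] ** eps[i]
        M += coeff * seifert[eps]
    pref = sp.Integer(sign_S)
    for i in range(mu):
        pref *= (t[i] - 1 / t[i]) ** (-1 + chis[i])
    return sp.simplify(pref * (-M).det())

# Trefoil knot (mu = 1): A^+ is a genus-1 Seifert matrix, A^- its transpose;
# there are no clasps, so sgn(S) = 1, and chi of the empty union is 0.
A = sp.Matrix([[-1, 1], [0, -1]])
nabla = conway_potential({(1,): A, (-1,): A.T}, [0], 1, 1)
```

On this input the sketch returns $(t-t^{-1}) + (t-t^{-1})^{-1}$, which matches $\frac{1}{t-t^{-1}}\Delta_L(t^2)$ for the trefoil polynomial $\Delta_L(t)=t-1+t^{-1}$, in line with the relation recalled below.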
Next recall that given a $\mu$-colored oriented link $L$ we can use presentation matrices for the Alexander module $\h_1(S^3\setminus L;\mathbb{Z}[t_1^{\pm 1},\dots,t_\mu^{\pm 1}])$
to define the multivariable Alexander polynomial $\Delta_L(t_1,\dots,t_\mu)$, which is well-defined only up to multiplication by a term of the form $\pm t_1^{k_1}\dots t_\mu^{k_\mu}$, with $k_i \in \mathbb{Z}$.
The Conway potential function can be viewed as a refinement of the multivariable Alexander polynomial $\Delta_L(t_1,\dots,t_\mu)$. More precisely Conway shows \cite[p.~338]{Con70} that, up to the above indeterminacy, the following equality holds
\[ \nabla_L(t_1,\dots,t_\mu)\,\,=\,\, \left\{ \begin{array}{ll}
\frac{1}{t_1-t_1^{-1}}\cdot \Delta_L(t_1^2), &\mbox{ if }\mu=1,\\
\Delta_L(t_1^2,\dots,t_\mu^2), &\mbox{ if }\mu \geq 2.\end{array}\right.\]
It follows from this equality that the multivariable Alexander polynomial $\Delta_L(t_1,\dots,t_\mu)$ is determined by the Conway potential function $\nabla_L(t_1,\dots,t_\mu)$. In particular, in light of Theorem~\ref{thm:conway-potential-function}, it can be determined by the generalized Seifert matrices.
\begin{definition}\label{dfn.CFsignatures}
Let $L$ be a $\mu$-colored oriented link and let $S$ be a C-complex for $L$.
We pick a basis for $\h_1(S)$ and we use it to define the generalized Seifert matrices
$A^\epsilon$. Following Cimasoni-Florens \cite{CF08} we define
\begin{center}
$\displaystyle H(\omega) := \prod_{i=1}^\mu (1 - \overline{\omega}_i) \cdot A(\omega_1,\dots,\omega_\mu)$,
\end{center}
where
\[A(t_1, \ldots , t_\mu) := \sum_{\epsilon \in \{\pm 1\}^\mu} \epsilon_1 \cdots \epsilon_\mu \cdot t_1^\frac{1-\epsilon_1}{2} \cdots t_\mu^\frac{1-\epsilon_\mu}{2} \cdot A^\epsilon\]
and
$\omega = (\omega_1,\dots,\omega_\mu) \in (S^1 \setminus \{1\})^\mu$. We define the \textbf{generalized signature} of $L$ at $\omega$ as
\[ \sigma_L(\omega)\,\,=\,\, \mbox{signature of the hermitian matrix $H(\omega)$}\]
and we define the \textbf{nullity} of $L$ at $\omega$ as
\[ \eta_L(\omega)\,\,=\,\,b_0(S)-1+ \mbox{nullity of the matrix $H(\omega)$}.\]
\end{definition}
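Definition~\ref{dfn.CFsignatures} is straightforward to evaluate numerically. The following Python sketch is our own illustration (not the \clasper\ source); it assumes the numpy library, and the matrices $A^\epsilon$ are supplied by the caller. On a Seifert matrix of the trefoil it recovers the classical signature $\sigma(-1)=-2$.

```python
import numpy as np
from itertools import product

def cf_signature_nullity(seifert, omegas, b0=1, tol=1e-9):
    """Signature and nullity of the hermitian matrix H(omega).

    seifert -- dict mapping eps in {+1,-1}^mu to the matrix A^eps
    omegas  -- tuple of unit complex numbers, each different from 1
    b0      -- number of connected components of the C-complex
    """
    mu = len(omegas)
    n = np.asarray(seifert[(1,) * mu]).shape[0]
    A = np.zeros((n, n), dtype=complex)
    for eps in product((1, -1), repeat=mu):
        coeff = 1 + 0j
        for i in range(mu):
            # t_i^((1-eps_i)/2) is 1 for eps_i = +1 and omega_i for eps_i = -1
            coeff *= eps[i] * omegas[i] ** ((1 - eps[i]) // 2)
        A += coeff * np.asarray(seifert[eps], dtype=complex)
    H = np.prod([1 - np.conj(w) for w in omegas]) * A
    eig = np.linalg.eigvalsh(H)             # H is hermitian
    sig = int(np.sum(eig > tol)) - int(np.sum(eig < -tol))
    null = b0 - 1 + int(np.sum(np.abs(eig) <= tol))
    return sig, null

# Trefoil knot (mu = 1) evaluated at omega = -1.
A_plus = [[-1, 1], [0, -1]]
A_minus = [[-1, 0], [1, -1]]
sig, null = cf_signature_nullity({(1,): A_plus, (-1,): A_minus}, (-1 + 0j,))
```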
We have the following theorem due to Cimasoni--Florens \cite[Theorem~2.1]{CF08}
and Davis--Martin--Otto \cite[Theorem~3.2]{DMO21}.
\begin{theorem}
Let $\omega = (\omega_1,\dots,\omega_\mu) \in (S^1 \setminus \{1\})^\mu$.
The signature and nullity of a $\mu$-colored oriented link at $\omega$ are invariant under isotopy of colored links.
\end{theorem}
As is explained in \cite{CF08}, the signature and nullity invariants encode deep information about links:
\begin{enumerate}
\item If $L$ is a $\mu$-colored oriented link that is isotopic to its mirror image,
then the signature function is identically zero \cite[Corollary~2.11]{CF08}.
\item Two $m$-component oriented links $L=L_1\sqcup\dots\sqcup L_m$ and $J=J_1\sqcup\dots\sqcup J_m$ are called \textbf{smoothly concordant} if there exist disjoint properly smoothly embedded oriented annuli $A_1,\dots,A_m\subset S^3\times [0,1]$ such that
$\partial A_i=L_i\times \{0\}\cup (-J_i)\times \{1\}$. If we treat $L$ and $J$ as $m$-colored links in the obvious way, then, by \cite[Theorem~7.1]{CF08},
for every prime power $q=p^k$ and every $\omega_1,\dots,\omega_m\in S^1\setminus \{1\}$ with $\omega_1^q=\dots=\omega_m^q=1$ we have
\[ \sigma_L(\omega_1,\dots,\omega_m)\,=\, \sigma_J(\omega_1,\dots,\omega_m).\]
\end{enumerate}
We also point the reader towards two other applications:
\begin{enumerate}
\item If $L$ is a $\mu$-colored link and $S_1,\dots,S_\mu$ is a C-complex, such that for any $i\ne j$ we have $S_i\cap S_j\ne \emptyset$, then the generalized Seifert matrices
can be used to give an explicit presentation matrix for the multivariable
Alexander module $\h_1(S^3\setminus L;\Lambda_\mu)$, where $\Lambda_\mu:=\mathbb{Z}[t_1^{\pm 1},\dots,t_\mu^{\pm 1},(1-t_1)^{-1},\dots,(1-t_\mu)^{-1}]$ \cite[Theorem~3.2]{CF08}.
\item Generalized signatures can be used to calculate Casson-Gordon invariants \cite{CG75,CG78} (which are a special case of Atiyah-Patodi-Singer invariants \cite{APS75}) of surgeries $M$ on links, corresponding to characters $\chi\colon \operatorname{H}_1(M)\to S^1$ \cite[Theorem~6.4]{CF08}.
\end{enumerate}
\section{Explanation of the algorithm}\label{section:algorithm}
We now describe our algorithm, which takes as input a colored braid, and produces:
\begin{enumerate}
\item a C-complex for it, encoded as a graph with decorations (which we will call a ``decorated spine''), and
\item a family of generalized Seifert matrices for that C-complex (with respect to some homology basis).
\end{enumerate}
With these matrices in hand, the computation of the Conway potential function and the Cimasoni-Florens signatures is conceptually straightforward using Theorem~\ref{thm:conway-potential-function} and Definition~\ref{dfn.CFsignatures}.
In Subsection~\ref{sec.warmup}, we warm up by explaining how to produce a Seifert surface for the closure of a braid on only one color, and how to encode this Seifert surface as a decorated spine. This should help the reader familiarize themselves with our conventions. Subsection~\ref{sec.fullalg} lays out the full algorithm for braids on multiple colors, exemplifying the construction of the C-complex on a running example, and its encoding as a decorated spine. Finally, Subsection~\ref{sec.readmatrix} explains how to produce a homology basis for the C-complex and construct its associated generalized Seifert matrices. This is essentially an analysis of how to read off the relevant linking numbers from the decorated spines.
\subsection{Constructing Seifert surfaces}\label{sec.warmup}
We warm up to the construction and encoding of the C-complex with the case where there is a single color -- in other words, we construct a Seifert surface for a link given as the closure of a braid.
More concretely, the input data is a number~$n$ of strands, together with a sequence $s = (s_1, \ldots, s_m)$ of integers in~$\{-(n-1), \ldots, n -1 \}\backslash \{0\}$ specifying the crossings. We will adopt the convention of orienting the strands from left to right, and numbering the $n$~positions from bottom to top; see Figure \ref{fig.1colorbraid} (left) for an example. Each integer~$s_i$ then represents a crossing between the strands in positions $|s_i|$~and~$|s_i|+1$, with a positive sign indicating that the over-crossing strand goes down one position (right-handed crossing), and a negative sign meaning that it goes up (left-handed crossing).
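To make these conventions concrete, the following Python sketch (our own illustration, not the \clasper\ source) computes the permutation of positions induced by a braid word. For the braid of Figure~\ref{fig.1colorbraid}, $s=(-1,2,-1)$ on $n=4$ strands, the strands starting in positions $1$ and $3$ swap, while the other two return to their starting positions.

```python
def braid_permutation(n, word):
    """Map each starting position to the ending position of its strand.
    The sign of a crossing affects only its handedness, not the induced
    permutation, so only |s| matters here."""
    # pos[k] = starting position of the strand currently at position k+1
    pos = list(range(1, n + 1))
    for s in word:
        k = abs(s) - 1
        pos[k], pos[k + 1] = pos[k + 1], pos[k]
    return {start: end + 1 for end, start in enumerate(pos)}

# The braid from the single-color example: n = 4, s = (-1, 2, -1).
perm = braid_permutation(4, [-1, 2, -1])
```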
\begin{figure}[h]
\centering
\def\svgwidth{0.9 \linewidth}
\input{1colorbraid.pdf_tex}
\caption{Constructing a surface from a braid. Left: an example of a single-color braid. The numbers on the left indicate our convention for numbering the positions in a braid, and the numbers above the braid indicate the ordering of the crossings. Here, the number of strands is $n = 4$, and the sequence representing the braid is $s=(-1, 2, -1)$. Center: a diagram for the closure of the braid. Right: the surface obtained by applying Seifert's algorithm, visualized as a stack of Seifert disks connected by half-twisted bands. In this example, the surface is not connected.}
\label{fig.1colorbraid}
\end{figure}
We close the braid by drawing $n$ arcs above it as illustrated in Figure~\ref{fig.1colorbraid} (center), and apply Seifert's algorithm to the resulting diagram. Explicitly, this means that each of the $n$ positions gives rise to a Seifert disk, and a crossing between the strands in positions $k$ and $k+1$ translates into a half-twisted band connecting the corresponding disks, the sign of the crossing determining the handedness of the twist. It is often convenient to visualize the Seifert disks as a ``stack of pancakes'', as in Figure~\ref{fig.1colorbraid} (right). Since all half-twisted bands connect Seifert disks in a manner that respects their top and bottom sides, the resulting surface is orientable.
The surface we produced might not yet be a Seifert surface, because it could be disconnected. We remedy this by adding, for each pair of adjacent disks that are in different connected components, a pair of half-twisted bands with opposite handedness -- see Figure~\ref{fig.connected}. This is a harmless modification, as the diagram for the braid closure changes by a Reidemeister move of type II.
\begin{figure}[h]
\centering
\def\svgwidth{0.4\linewidth}
\input{connected.pdf_tex}
\caption{Adding half-twisted bands to ensure connectedness. We obtain a Seifert surface from the surface in Figure~\ref{fig.1colorbraid} (right) by adding a pair of half-twisted bands with opposite handedness connecting the disks in positions $3$~and~$4$. This does not change the underlying link.}
\label{fig.connected}
\end{figure}
The resulting Seifert surface~$S$ can be encoded as a graph~$G$ with $n$ vertices, each corresponding to a Seifert disk, and one edge for each half-twisted band. We call this graph the \textbf{spine} of the Seifert surface. We can fully encode $S$ by decorating each edge of the spine with either a plus sign or a minus sign to record the handedness of the corresponding half-twist, and by remembering the vertical ordering of the Seifert disks and the ordering of the edges around the stack of disks. See Figure~\ref{fig.1colorspine} (left) for an example. We will refer to this package of data as the \textbf{decorated spine} for the surface. We also remark that the spine embeds naturally as a strong deformation retract of the surface, as illustrated in Figure~\ref{fig.1colorspine} (right). We draw the embedding in such a way that the vertices of the spine all lie to the right of the edges. Later, it will turn out to be convenient that we have adopted one such choice.
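The construction so far is easy to mechanize. The sketch below (an illustration under our conventions, not the \clasper\ source; the function name is ours) lists the signed edges of the spine coming from the crossings, and then appends the opposite-handed band pairs that make the surface connected. On the example braid $s=(-1,2,-1)$ it adds exactly the pair between disks $3$~and~$4$ shown in Figure~\ref{fig.connected}.

```python
def single_color_spine(n, word):
    """Signed edge list of a decorated spine, in left-to-right order.
    Each crossing s yields an edge between disks |s| and |s|+1 carrying
    its sign; pairs of opposite-handed bands are then appended between
    adjacent disks lying in different connected components."""
    edges = [((abs(s), abs(s) + 1), 1 if s > 0 else -1) for s in word]
    parent = list(range(n + 1))           # union-find over the disks
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (a, b), _ in edges:
        parent[find(a)] = find(b)
    for k in range(1, n):
        if find(k) != find(k + 1):
            edges += [((k, k + 1), 1), ((k, k + 1), -1)]
            parent[find(k)] = find(k + 1)
    return edges

spine = single_color_spine(4, [-1, 2, -1])
```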
\begin{figure}[h]
\centering
\def\svgwidth{0.5\linewidth}
\input{1colorspine.pdf_tex}
\caption{A decorated spine. Left: the spine for the Seifert surface in Figure~\ref{fig.connected}. Edges are labeled with signs indicating the handedness of the corresponding half-twisted bands, the ordering of the vertices is to be read from bottom to top, and the ordering of the edges from left to right. Right: an embedding of the spine as a strong deformation retract of the Seifert surface.}
\label{fig.1colorspine}
\end{figure}
\subsection{Constructing C-complexes}\label{sec.fullalg}
We now explain how to generalize the previous construction to colored links. This time, besides the input data of the number~$n$ of strands in the braid and the sequence $s = (s_1, \ldots, s_m)$ of crossings, we also have the data of a $\mu$-coloring of the braid. What this means in practice is that if $\sigma \in \Sigma_n$ is the permutation induced by the braid, then to each orbit of $\sigma$ we associate a color in~$\{0, \ldots, \mu-1\}$. See Figure~\ref{fig.multicolorbraid} (top) for an example.
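In code, the coloring data can be organized by computing the orbits of the induced permutation; a coloring then simply assigns an integer in $\{0,\dots,\mu-1\}$ to each orbit. The following self-contained sketch (ours, with hypothetical names) does this; for the braid of Figure~\ref{fig.multicolorbraid}, the closure has the three components $\{1,4\}$, $\{2\}$, $\{3\}$, matching the $\mu=3$ colors of the figure.

```python
def closure_components(n, word):
    """Orbits of the permutation induced by a braid word; each orbit is
    one component of the braid closure.  Positions are numbered 1..n."""
    pos = list(range(1, n + 1))
    for s in word:
        k = abs(s) - 1
        pos[k], pos[k + 1] = pos[k + 1], pos[k]
    perm = {start: end + 1 for end, start in enumerate(pos)}
    seen, orbits = set(), []
    for i in range(1, n + 1):
        if i not in seen:
            orbit, j = [], i
            while j not in seen:
                seen.add(j)
                orbit.append(j)
                j = perm[j]
            orbits.append(orbit)
    return orbits

# Running example: n = 4, crossing sequence (-2, -3, 2, -3, -1, -2, -3).
components = closure_components(4, [-2, -3, 2, -3, -1, -2, -3])
```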
\begin{figure}[h]
\centering
\def\svgwidth{0.5\linewidth}
\input{multicolorbraid.pdf_tex}
\caption{Sorting the strands by color. Top: a braid on $n=4$ strands and $\mu = 3$ colors, with crossing sequence $(-2, -3, 2, -3, -1, -2, -3)$. Numbers on the left indicate the color of the strand starting on that position. Middle: dragging the $0$-colored strands across the back produces hooks at the points where they crossed over strands of different color. Bottom: repeating the procedure for all colors yields a diagram with crossings between strands of the same color and hooks between strands of different colors. Here, crossing~$\#5$ is preserved, crossings $\#2$ and $\#4$ do not show up in the final diagram because the higher-colored strand crosses over a lower-colored strand, crossing~$\#3$ produces a right-handed hook, and crossings $\#1$,~$\#6$~and~$\#7$ produce left-handed hooks. In the original braid, crossing $\#3$ occurs below the $1$-colored strand, so the resulting hook appears in front of that strand. In contrast, crossing~$\#7$ occurs above the $1$-colored strand, and hence gives rise to a hook passing behind that strand.}
\label{fig.multicolorbraid}
\end{figure}
\subsubsection{From crossings to hooks}\label{sec.dragdown}
To understand the translation of the input data into a decorated graph, it is useful to draw the braid diagram with the strands sorted by color, as we now explain. We start by considering all the strands with color~$0$, and isotope them as a stack to the bottom of the braid, keeping everything else fixed in place. We adopt the convention that the strands are to be moved \emph{across the back} of the braid. This is not an isotopy relative to the endpoints of the braid, but the endpoints are moved in parallel, so it extends to an isotopy of the braid closure. This modification does not affect crossings between $0$-colored strands; however, some of these strands might get caught in differently-colored strands, leaving hook formations where some of the crossings used to be -- see Figure \ref{fig.multicolorbraid} (middle). Specifically, each time a $0$-colored strand crosses \emph{over} a strand with a different color, that crossing will appear as a hook in the modified diagram. The handedness of this hook depends on whether the $0$-colored strand was moving one position up or down. On the other hand, crossings of $0$-colored strands \emph{under} strands of different colors will not manifest in the final picture.
Having moved the $0$-colored strands to the bottom of the picture, we then proceed by moving the $1$-colored strands, as a stack, to the space above the $0$-colored strands, and so on until all colors have been moved. In the end, we obtain a diagram for a braid whose closure is isotopic to that of our starting braid, and where each position contains only strands of a fixed color -- see Figure~\ref{fig.multicolorbraid} (bottom). Moreover, the only interactions between different strands are either crossings between strands of the same color (which may be positive or negative), or hooks of some strand around a strand of a higher color. These hooks have a handedness, as already explained, may span several positions in the braid, and may travel across the front or the back of the strands in intermediate positions. This last distinction is a reflection of whether the crossing that originated the hook occurred above or below the intermediate strand.
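The fate of each crossing under the sorting isotopy depends only on the colors of the two strands involved and on which of them crosses over. The following sketch (our illustration, ignoring handedness; the coloring $0,1,2,0$ of the strands starting at positions $1$--$4$ is the one we read off Figure~\ref{fig.multicolorbraid}) reproduces the bookkeeping of the figure caption: crossing \#5 survives as a same-color crossing, crossings \#1, \#3, \#6, \#7 become hooks, and \#2, \#4 disappear.

```python
def classify_crossings(n, word, color_of_start):
    """For each crossing, decide whether the sorting isotopy keeps it as
    a same-color crossing, turns it into a hook, or removes it.
    color_of_start[i] -- color of the strand starting at position i."""
    pos = list(range(1, n + 1))   # starting position of the strand at each slot
    out = []
    for s in word:
        k = abs(s) - 1
        lower, upper = pos[k], pos[k + 1]
        # positive s: the over-crossing strand comes from the upper position
        over, under = (upper, lower) if s > 0 else (lower, upper)
        co, cu = color_of_start[over], color_of_start[under]
        if co == cu:
            out.append("crossing")      # survives as a same-color crossing
        elif co < cu:
            out.append("hook")          # lower color crosses over higher color
        else:
            out.append("discarded")     # higher color crosses over lower color
        pos[k], pos[k + 1] = pos[k + 1], pos[k]
    return out

# Running example with colors 0, 1, 2, 0 for the strands starting at 1-4.
fates = classify_crossings(4, [-2, -3, 2, -3, -1, -2, -3],
                           {1: 0, 2: 1, 3: 2, 4: 0})
```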
\subsubsection{Filling in the surfaces}\label{sec.fillin}
In a similar fashion to what was done in the single-color case, we can now stare at such a diagram and visualize a collection of (possibly disconnected) orientable surfaces bounded by link components of the same color, with surfaces of different colors intersecting only along clasps or ribbons, as in Figure~\ref{fig.colorpancakes}. More explicitly, these surfaces are constructed by starting with a disk for each of the $n$ positions, filling in the crossings between strands of the same color with half-twisted bands, and filling in the hooks with protrusions of the disks, which we will call \textbf{fingers}. Each finger forms a clasp with the Seifert disk at the position where its boundary hooks, and whenever the finger passes behind some strand, it creates a ribbon intersection with the corresponding disk. A finger without ribbon intersections is said to be \textbf{clean}.
\begin{figure}[h]
\centering
\def\svgwidth{0.60\linewidth}
\input{colorpancakes.pdf_tex}
\caption{Constructing colored surfaces from a braid with strands sorted by color. Following up the example from Figure~\ref{fig.multicolorbraid}, we see a Seifert disk for each position in the braid, each now having a well-defined color. As in the single-color case, crossing~$\#5$ gives rise to a half-twisted band between two disks of the same color. The hooks from crossings $\#1, \#3, \#6$~and~$\#7$ yield fingers connecting disks of different colors, with a clasp intersection at the apex. Moreover, since the hook from crossing $\#7$ passes behind the $1$-colored strand, it also produces a ribbon intersection between the finger and the $1$-colored disk. The fingers arising from crossings~$\#1, \#3$~and~$\#6$ are clean.}
\label{fig.colorpancakes}
\end{figure}
\subsubsection{Removing ribbon intersections}
In order to turn this collection of surfaces into a C-complex, we need to exchange the ribbon intersections that show up by clasp intersections. It turns out that the framework developed so far can neatly handle this situation, because of the following observation:
\begin{obs*}
Let $1 \le k_1 < k_2 < k_3 \le n$. Suppose a finger from disk $\# k_1$ whose boundary hooks with disk~$\# k_3$ has its bottom-most ribbon intersection with disk $\# k_2$. Then the isotopy type of the link does not change when that ribbon intersection is removed, and two clean fingers from disk $\# k_1$ to disk~$\# k_2$ are added: a right-handed one to the left of the original finger, and a left-handed one to the right.
\end{obs*}
As illustrated in Figure~\ref{fig.cleanfingers}, by sequentially applying this observation to the ribbon inter\-sections in our colored surfaces from the bottom to the top, we reach a setting where all fingers are clean, and so there are only clasp intersections between the surfaces.
\begin{figure}[h]
\centering
\def\svgwidth{0.6\linewidth}
\input{cleanfingers.pdf_tex}
\caption{Exchanging ribbon intersections for clean fingers. Top: in the running example from Figure~\ref{fig.colorpancakes}, the only ribbon intersection occurs at the right-most finger. This ribbon intersection is removed at the cost of adding two clasp intersections given by clean fingers. Bottom: a more complicated example, where this trick is iterated over multiple ribbon intersections in a finger, from bottom to top.}
\label{fig.cleanfingers}
\end{figure}
\subsubsection{Cleaning up}
At this point, we might not yet be in the presence of a C-complex, because the surfaces in each color might fail to be connected, and thus fail to be Seifert surfaces. Moreover, the union of all surfaces might not be connected.
Before addressing that, however, we perform a clean-up step that is relevant only for reasons of computational efficiency: we simplify our surfaces by removing consecutive pairs of oppositely oriented half-twisted bands or fingers between the same disks (say, by scanning from left to right).
This step is carried out cyclically, that is, if the first and last bands or fingers connect the same pair of disks and have opposite signs, they are also removed. This step is repeated until no such redundant pairs remain. Clearly this does not change the isotopy type of the link. We exemplify on our running example in Figure~\ref{fig.clasp_cleanup}.
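On the edge-list encoding of the previous subsection, this cleanup amounts to a cyclic cancellation of adjacent opposite pairs. Here is a minimal sketch (ours; it cancels pairs adjacent in the global left-to-right order, which is a simplification of what an actual implementation might track):

```python
def cancel_pairs(edges):
    """Repeatedly delete adjacent pairs of edges with the same endpoints
    and opposite signs, treating the left-to-right list cyclically.
    edges -- list of ((disk1, disk2), sign) in left-to-right order."""
    edges = list(edges)
    changed = True
    while changed:
        changed = False
        i = 0
        while i + 1 < len(edges):
            (p1, s1), (p2, s2) = edges[i], edges[i + 1]
            if p1 == p2 and s1 == -s2:
                del edges[i:i + 2]
                changed = True
                i = max(i - 1, 0)       # a newly adjacent pair may cancel too
            else:
                i += 1
        if len(edges) >= 2:             # wrap-around: compare last and first edge
            (p1, s1), (p2, s2) = edges[-1], edges[0]
            if p1 == p2 and s1 == -s2:
                edges = edges[1:-1]
                changed = True
    return edges

reduced = cancel_pairs([((1, 2), 1), ((2, 3), 1), ((2, 3), -1), ((1, 2), -1)])
```

Note that cancelling one pair can create a new cancellable adjacency, which is why the scan is repeated until the list stabilizes.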
\begin{figure}[h]
\centering
\def\svgwidth{0.4\linewidth}
\input{clasp_cleanup.pdf_tex}
\caption{The result of removing the redundant pair of fingers from the surfaces in Figure~\ref{fig.cleanfingers} (top). In this example, this completes the cleanup step.}
\label{fig.clasp_cleanup}
\end{figure}
\subsubsection{Guaranteeing connectedness conditions}
Next, we make sure that all Seifert disks of the same color are connected by at least one half-twisted band. This is done exactly as in the single color case: whenever two adjacent disks of the same color are not connected to one another, add a pair of half-twisted bands with opposite handedness between them. In this way, we end up with a Seifert surface for each color.
The next step is to ensure that the C-complex itself is connected. To this end, we sort the Seifert surfaces into the connected components of the C-complex, and add a pair of consecutive oppositely oriented clean fingers between the bottom-most disk of the bottom-most component, and the bottom-most disk of each of the other components. As before, this does not change the isotopy type of the link. The result of the processing from this and the previous paragraph is exemplified with a toy example in Figure~\ref{fig.connectedcolors} (top).
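Both connectedness checks reduce to a union-find computation on the disks. A sketch (our own illustration, with hypothetical names) that returns, for each extra connected component, the pair of bottom-most disks between which connecting fingers should be inserted:

```python
def find_missing_connections(n, edges):
    """Given disks 1..n (ordered bottom to top) and the current band or
    finger edges, return one pair (i, j) per extra connected component:
    the bottom-most disk of the bottom-most component paired with the
    bottom-most disk of each other component."""
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for a, b in edges:
        union(a, b)
    reps = {}
    for v in range(1, n + 1):
        reps.setdefault(find(v), v)    # bottom-most disk of each component
    roots = list(reps.values())
    return [(roots[0], r) for r in roots[1:]]

# Two components {1, 2} and {3, 4}: one connecting pair is needed.
missing = find_missing_connections(4, [(1, 2), (3, 4)])
```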
\begin{figure}[h]
\centering
\def\svgwidth{0.8\linewidth}
\input{connectedcolors.pdf_tex}
\caption{Ensuring connectedness conditions. Top: the $0$-colored surface on the left is not connected, so we turn it into a Seifert surface by adding a pair of oppositely-oriented half-twisted bands between the two $0$-colored disks. The union of the resulting Seifert surfaces then has two connected components, which we connect by adding a pair of consecutive oppositely oriented fingers between the disks in positions $1$~and~$3$.
Bottom: if we insist that all Seifert surfaces in the C-complex have non-empty intersection, we should also introduce a pair of fingers between the disks in positions $3$~and~$4$.}
\label{fig.connectedcolors}
\end{figure}
As mentioned at the end of Section~\ref{sec.invariants}, one might wish to use the generalized Seifert matrices for computing a presentation matrix for the multivariable Alexander module $\h_1(S^3\setminus L;\Lambda_\mu)$ \cite[Theorem~3.2]{CF08}. This application, however, requires not only that the C-complex be connected, but in fact that every two Seifert surfaces have non-empty intersection. If this stronger condition is desired, our algorithm introduces, additionally, pairs of fingers between the bottom-most disks of every two disjoint Seifert surfaces, as exemplified in Figure~\ref{fig.connectedcolors} (bottom).
\subsubsection{Encoding the C-complex as a decorated spine}
We are now ready to encode our C-complex as a decorated graph, which we do by extending the definition of a decorated spine to the multi-colored setting. We have one vertex for each Seifert disk, remembering their bottom-to-top ordering, as well as their color. Now we add one edge for each half-twisted band or finger, remembering the left-to-right order. We don't have to explicitly remember whether each edge represents a half-twisted band or a finger, as that is determined by whether its endpoints are of the same color. We do however need to store a sign for each edge, as before, to encode the handedness of the corresponding half-twisted band or finger. In Figure~\ref{fig.multicolorspine} we give an example of a decorated spine, and also illustrate the fact that the spine embeds as a strong deformation retract of the clasp complex. In the schematic, we again drew the vertices of the embedded spine to the right of all edges, for reasons that will become apparent in the next subsection.
\begin{figure}[h]
\centering
\def\svgwidth{0.6\linewidth}
\input{multicolorspine.pdf_tex}
\caption{A decorated spine for a C-complex. Left: the decorated spine for the C-complex in Figure~\ref{fig.clasp_cleanup}. Vertices are to be read as ordered from bottom to top, and edges from left to right. This time, vertices are labeled with colors, which we indicate to the right. Edges represent half-twisted bands or fingers depending on whether their endpoints are the same color. Signs indicate the handedness of the half-twisted band or finger. Right: an embedding of the spine in the C-complex as a strong deformation retract.}
\label{fig.multicolorspine}
\end{figure}
\subsection{Reading off a Seifert matrix}\label{sec.readmatrix}
We now explain how to read off a Seifert matrix for the C-complex~$S$ from its decorated spine.
First, one must choose a basis for the first homology~$\h_1(S)$. Since $S$~strongly deformation retracts onto its spine~$G$, this amounts to finding a basis for the homology of a finite connected graph. This is a routine exercise, but we give a quick sketch: choose a maximal tree~$T$ for~$G$, and then $G/T$~is a wedge of, say, $r$~circles, with the collapse map $G \to G/T$ being a homotopy equivalence. After orienting each of the $r$~circles, the resulting collection of oriented loops gives a basis of~$\h_1(G/T)$. Now, each loop lifts to an edge~$e$ of~$G$ that is not in~$T$, say oriented from the vertex~$v_1$ to the vertex~$v_2$. The corresponding element of~$\h_1(G)$ is represented by the circuit obtained by concatenating~$e$ with the unique path in~$T$ from~$v_2$ to~$v_1$.
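The routine exercise above can be sketched in a few lines of Python (our own illustration, with hypothetical data conventions): build a spanning tree by breadth-first search, and for each non-tree edge return the circuit closing it up through the tree.

```python
from collections import deque

def cycle_basis(n, edges):
    """Circuits generating the first homology of a connected graph on
    vertices 0..n-1.  edges is a list of (u, v) pairs; the result holds
    one circuit per non-tree edge, as a list of (edge index, orientation)
    pairs, with +1 meaning "traversed from u to v"."""
    # Breadth-first spanning tree: parent[v] = (parent vertex, edge index).
    parent = {0: None}
    queue = deque([0])
    tree = set()
    while queue:
        u = queue.popleft()
        for i, (a, b) in enumerate(edges):
            if i not in tree and u in (a, b):
                w = b if a == u else a
                if w not in parent:
                    parent[w] = (u, i)
                    tree.add(i)
                    queue.append(w)

    def path_up(v):
        """Path from v up to the root, as (edge index, orientation) pairs."""
        out = []
        while parent[v] is not None:
            u, i = parent[v]
            # traversing the tree edge from v to its parent u
            out.append((i, +1 if edges[i] == (v, u) else -1))
            v = u
        return out

    basis = []
    for i, (u, v) in enumerate(edges):
        if i in tree:
            continue
        pv, pu = path_up(v), path_up(u)
        # strip the common tail so the tree path runs v -> lca -> u
        while pv and pu and pv[-1][0] == pu[-1][0]:
            pv.pop()
            pu.pop()
        basis.append([(i, +1)] + pv + [(j, -s) for (j, s) in reversed(pu)])
    return basis
```

For a triangle this yields the single expected circuit; a theta graph yields two.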
We are now armed with a collection of circuits in the spine~$G$, and wish to read off the linking numbers of the corresponding embedded curves in the C-complex~$S$, and their push-offs. We remind the reader that the linking number~$\lk(J,K)$ of two knots~$J, K$ can be computed from a diagram as the signed count of the number of crossings of~$K$ over~$J$, with the sign convention depicted in Figure \ref{fig.crossingsign}. We will use diagrams for the curves in our homology basis for~$S$ and their push-offs obtained from depictions as in Figure~\ref{fig.multicolorspine}. The local nature of the linking number computation is then well-suited to our description of~$\h_1(S)$ as a collection of circuits in~$G$, because we can simply count the contribution of each pair of edges. More explicitly, let $\gamma, \delta$~be circuits in~$G$ given as sequences of oriented edges $\gamma = (\alpha_{1}, \ldots, \alpha_{r}), \delta = (\beta_1, \ldots, \beta_{s})$. For a $\mu$-tuple of signs $\epsilon \in \{\pm 1\}^\mu$, we have
\[ \lk(\gamma^\epsilon, \delta) = \sum_{\substack{1 \le i \le r \\ 1 \le j \le s}} [ \alpha_i^\epsilon, \beta_j ], \label{linknumberformula} \]
where the $\epsilon$-superscript denotes the push-off dictated by~$\epsilon$, as described in Section~\ref{sec.seifertmatrices}, and for oriented edges~$\alpha, \beta$ of~$G$, the symbol~$[\alpha^\epsilon, \beta]$ denotes the signed number of crossings of~$\beta$ over~$\alpha^\epsilon$ in some fixed diagram for $\gamma$~and~$\delta$.
\begin{figure}[h]
\centering
\captionsetup{margin=20pt}
\def\svgwidth{0.25\linewidth}
\input{crossingsign.pdf_tex}
\caption{The contributions of the crossings in a diagram for the knots~$J, K$ to the linking number~$\lk(J, K)$.}
\label{fig.crossingsign}
\end{figure}
At this point, we have reduced the computation of the desired linking numbers $\lk(\gamma^\epsilon, \delta)$ to counting the signed crossings~$[\alpha^\epsilon, \beta]$ for oriented edges~$\alpha, \beta$ of~$G$. We emphasise that the symbol~$[\alpha^\epsilon, \beta]$ makes sense only under a choice of diagram, which should be globally fixed in order for the above formula for linking numbers to hold. This is where it is relevant that we insist on always drawing the vertices of the spine~$G$ to the right of the edges, as in Figures~\ref{fig.1colorspine}~and~\ref{fig.multicolorspine}. Having fixed this convention, we proceed to investigate how to read off the symbols~$[\alpha^\epsilon, \beta]$ from $G$~and its decorations.
Denote the endpoints of~$\alpha$ by $u_1, u_2$, with $\alpha$ oriented from~$u_1$ to~$u_2$, and similarly suppose $\beta$~goes from~$v_1$ to~$v_2$. We assume that $\alpha$~and~$\beta$ are both oriented ``upwards'' in~$G$, that is, for the total vertex order packaged into the decoration of~$G$, we have $u_1 < u_2$~and~$v_1 < v_2$. The other cases are recovered from the obvious identities $[\overline{\alpha}^\epsilon, \beta] = [\alpha^\epsilon, \overline{\beta}] = - [\alpha^\epsilon, \beta]$, where over-lines indicate orientation reversal.
Observe now that in our picture of the spine~$G$ embedded in~$S$, the four points in the set $P:=\{u_1^\epsilon, u_2^\epsilon, v_1, v_2\}$ are all distinct and vertically aligned. This is true even if one of the~$u_i$ equals one of the~$v_j$, because then the push-off given by the sign~$\epsilon_i$ moves~$u_i$ above or below~$v_j$. Reading these points from bottom to top, one thus obtains a total order on~$P$ that is easily recovered from the decoration of~$G$: we first order~$P$ partially by reading the indices of $u_1, u_2, v_1, v_2$, and then break potential ties, necessarily between a~$u_i$ and a~$v_j$, by reading the sign~$\epsilon_i$.
One now sees that $[\alpha^\epsilon, \beta]$~can only be non-zero if the~$u_i^\epsilon$ and the~$v_j$ are alternating, that is, if $u_1^\epsilon < v_1 < u_2^\epsilon < v_2 $ or $v_1 < u_1^\epsilon < v_2 < u_2^\epsilon$. Let us thus assume we are in this situation, and consider first the case where the edges $\alpha$~and~$\beta$ are distinct. In this case, it is clear that one among~$\alpha^\epsilon, \beta$ crosses over the other precisely once. The value of~$[\alpha^\epsilon, \beta]$ is then non-zero exactly if it is~$\beta$ crossing over~$\alpha^\epsilon$. Now, our convention of drawing the vertices of~$G$ to the right of all edges implies that $\beta$~crosses over~$\alpha^\epsilon$ precisely if in the total ordering of edges of~$G$ we have~$\alpha < \beta$. We then see by direct inspection that
\[ [\alpha^\epsilon, \beta] = \begin{cases} 1 &\text{if $ v_1 < u_1^\epsilon < v_2 < u_2^\epsilon$,} \\ -1 &\text{if $ u_1^\epsilon< v_1 < u_2^\epsilon < v_2$.}
\end{cases}\]
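In code, this case analysis for distinct upward-oriented edges reduces to two chained comparisons on the positions in the total order on~$P$; a minimal sketch:

```python
def crossing_contribution(u1, u2, v1, v2):
    """[alpha^eps, beta] for DISTINCT upward-oriented edges alpha, beta.

    Arguments are the positions of u1^eps, u2^eps, v1, v2 in the total
    (bottom-to-top) order on P; non-alternating endpoints contribute 0.
    """
    if v1 < u1 < v2 < u2:
        return 1
    if u1 < v1 < u2 < v2:
        return -1
    return 0
```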
If~$\alpha = \beta$, we must analyse several cases. Still under the assumption that $\alpha$~is oriented upwards and that $\alpha, \alpha^\epsilon$~have endpoints that alternate along the vertical direction, we have to consider:
\begin{itemize}
\item whether $\alpha$~corresponds to a half-twisted band or a finger,
\item the handedness of the half-twisted band or finger,
\item whether the sign of $\epsilon$~at the endpoints of~$\alpha$ is $+$~or~$-$ (in the case of fingers, the assumption that the endpoints of $\alpha,\alpha^\epsilon$ alternate implies that the push-off direction dictated by~$\epsilon$ is the same at both endpoints).
\end{itemize}
This amounts to eight cases, which are depicted in Figure~\ref{fig.pushoffs}. By direct inspection, we obtain the results in the following table:
\begin{center}
\begin{tabular}{r|l | l}
& left-handed & right-handed\\ \hline
\multirow{2}{*}{half-twisted band} & $[\alpha^+, \alpha] = +1$ & $[\alpha^+, \alpha] = 0$\\
& $[\alpha^-, \alpha] = 0$ & $[\alpha^-, \alpha] = -1$\\ \hline
\multirow{2}{*}{finger} & $[\alpha^+, \alpha] = +1$ & $[\alpha^+, \alpha] = 0$\\
& $[\alpha^-, \alpha] = 0$ & $[\alpha^-, \alpha] = -1$\\\end{tabular}
\end{center}
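Note that the table is identical for half-twisted bands and fingers, so a lookup needs only the handedness and the sign of~$\epsilon$ at the endpoints of~$\alpha$; an illustrative encoding:

```python
# [alpha^eps, alpha] from the table above, keyed by (eps, handedness);
# the same values apply to half-twisted bands and to fingers.
SELF_PAIRING = {(+1, "left"): +1, (+1, "right"): 0,
                (-1, "left"): 0,  (-1, "right"): -1}

def self_contribution(eps, handedness):
    """Self-pairing contribution of an upward-oriented edge alpha."""
    return SELF_PAIRING[(eps, handedness)]
```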
\begin{figure}[h]
\centering
\captionsetup{margin=20pt}
\def\svgwidth{0.55\linewidth}
\input{pushoffs.pdf_tex}
\caption{An upwards-oriented edge~$\alpha$ of the spine~$G$, together with its push-offs $\alpha^+, \alpha^-$. We consider the cases where $\alpha$~corresponds to a half-twisted band (top) or a finger (bottom), and whether this band/finger is left-handed (left) or right-handed (right).}
\label{fig.pushoffs}
\end{figure}
With this, we finish the explanation of how to determine the value of~$[\alpha^\epsilon, \beta]$ (in a diagram following our conventions) for any two oriented edges in~$G$, just by reading the decoration of $G$. Hence, by the linking number formula on page \pageref{linknumberformula}, we know how to compute $\lk(\gamma^\epsilon, \delta)$ for any circuits $\gamma, \delta$ in~$G$. Applying this to a homology basis for~$G$ we produce the desired generalized Seifert matrix.
\section{Additional comments on the implementation}\label{section:implementation}
In this brief section we say a few words about the actual computer implementation of our algorithm.
\subsection{Input format}
In \clasper, the input format for the braids follows the convention of the ``braid notation'' in LinkInfo \cite{LM22} and of the website ``Seifert Matrix Computations'' (SMC) \cite{CKL16}. Note that in the explanation of the notation in SMC, the positions in the braid are numbered from top to bottom, but the sign convention for left/right-handed crossings is the same as ours. Hence, given a sequence of crossings, the braid specified by our convention and the one specified as in SMC differ merely by a rotation of half a turn about a horizontal line in the projection plane, which is immaterial.
\subsection{Output format}
\clasper\ displays the colored link invariants on the graphical interface, and also allows the user to export them as \LaTeX\ code.
The button ``Export Seifert matrices'' allows the user to save a text file containing a presentation matrix for the multivariable Alexander module $\h_1(S^3\setminus L;\Lambda_\mu)$ (where $\Lambda_\mu:=\mathbb{Z}[t_1^{\pm 1},\dots,t_\mu^{\pm 1},(1-t_1)^{-1},\dots,(1-t_\mu)^{-1}]$), and the collection of generalized Seifert matrices used to compute it, in a format compatible with SageMath.
Above each generalized Seifert matrix is indicated the sign tuple $\epsilon \in \{\pm 1\}^\mu$ to which it corresponds. For the running example in this section, the output looks as follows.
\bigskip
\begin{quote}
\begin{footnotesize}
\begin{verbatim}
Presentation Matrix
Matrix([[0, t0*t1*t2 - t0*t2], [t1 - 1, -t0*t1*t2 + 1]])
Generalized Seifert Matrices
[-1, -1, -1]
Matrix([[0, -1], [0, 1]])
[-1, -1, 1]
Matrix([[0, 0], [0, 0]])
[-1, 1, -1]
Matrix([[0, -1], [0, 0]])
[-1, 1, 1]
Matrix([[0, 0], [0, 0]])
[1, -1, -1]
Matrix([[0, 0], [0, 0]])
[1, -1, 1]
Matrix([[0, 0], [-1, 0]])
[1, 1, -1]
Matrix([[0, 0], [0, 0]])
[1, 1, 1]
Matrix([[0, 0], [-1, 1]])
\end{verbatim}
\end{footnotesize}
\end{quote}
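Since the exported expressions are valid Python, they can be consumed directly. For instance, the following sketch (our own, using sympy with symbols \texttt{t0}, \texttt{t1}, \texttt{t2} as in the file) takes the determinant of the presentation matrix above, which gives the multivariable Alexander polynomial of the link up to multiplication by units of the ring:

```python
import sympy as sp

t0, t1, t2 = sp.symbols("t0 t1 t2")

# Presentation matrix as exported above
P = sp.Matrix([[0, t0*t1*t2 - t0*t2], [t1 - 1, -t0*t1*t2 + 1]])

# Determinant of a square presentation matrix: the multivariable
# Alexander polynomial, up to units of the coefficient ring
delta = sp.factor(P.det())
```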
\subsection{Optimizing determinant computations}
In our approach for determining the Conway potential function of a braid closure, the most computationally demanding step is the calculation of the determinant
\[\operatorname{det}\Big(-\sum_{\epsilon \in \{\pm 1\} ^\mu} A^\epsilon\cdot \epsilon_1 \dots \epsilon_\mu\cdot t_1^{\epsilon_1}\dots t_\mu^{\epsilon_\mu}\Big),\]
which appears in the formula in Theorem~\ref{thm:conway-potential-function}. We employ the Bareiss algorithm for efficient computation of determinants using integer arithmetic.
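For reference, the fraction-free elimination at the heart of the Bareiss algorithm looks as follows in its basic integer form (a sketch only; \clasper's actual computation involves polynomial entries):

```python
def bareiss_det(M):
    """Determinant of an integer matrix by fraction-free (Bareiss)
    elimination; all intermediate quantities remain integers."""
    M = [row[:] for row in M]        # work on a copy
    n = len(M)
    sign, prev = 1, 1
    for k in range(n - 1):
        if M[k][k] == 0:             # pivot by swapping in a nonzero row
            for i in range(k + 1, n):
                if M[i][k] != 0:
                    M[k], M[i] = M[i], M[k]
                    sign = -sign
                    break
            else:
                return 0             # whole column is zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # exact division is guaranteed by Sylvester's identity
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return sign * M[n - 1][n - 1]
```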
Moreover, in order to streamline this step, \clasper\ computes several different spines for the braid closure, obtained by randomly permuting the colors of the link. In other words, when performing the step described in Subsection~\ref{sec.dragdown}, we ``drag down'' the colors in different orders. The idea is to find C-complexes whose spines have homology of small rank, so that the determinant computation is performed on smaller matrices. We do this by trying out 500 randomly chosen permutations of the colors, and selecting one with minimal homology rank.
\subsection{Signature computations and floating point arithmetic}
The computation of Cimasoni-Florens signatures is carried out in floating point arithmetic. \clasper\ will consider to be~$0$ any eigenvalue of absolute value below $10^{-5}$, but will also display the computed eigenvalues of the relevant matrix~$H(\omega)$ from Definition~\ref{dfn.CFsignatures}.
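The thresholded signature computation amounts to the following sketch (our own, using numpy, with the same $10^{-5}$ cutoff quoted above):

```python
import numpy as np

def signature(H, tol=1e-5):
    """Signature of a Hermitian matrix H: eigenvalues of absolute value
    below tol are treated as zero, matching the cutoff quoted above."""
    evals = np.linalg.eigvalsh(H)
    return int(np.sum(evals > tol)) - int(np.sum(evals < -tol))
```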
\subsection{Libraries used and download location}
\clasper\ was written by the second author in Python 3 using the libraries numpy, matplotlib, tkinter and sympy.
A Windows installer and the Python source code are available at \url{https://github.com/Chinmaya-Kausik/py_knots}.
\section{Introduction}
\label{sec:Intro}
The time evolution of quantum systems is represented by non-equal time correlators of the form
\begin{equation}
\label{eqn:expectation}
\langle \hat{\mathcal{O}}(t_1) \hat{\mathcal{O}}(t_2) \rangle = \text{Tr}\left( \hat{\mathcal{O}}(t_1) \hat{\mathcal{O}}(t_2) \hat{\rho}\right),
\end{equation}
where $\hat{\rho}$ represents the density matrix. Separating out the time-evolution operators $U(t)=e^{i\hat{H}t}$ allows us to rewrite these quantum expectation values as path integrals in real time, with the field variable living on the Keldysh contour. This contour runs from the initial condition at $t=0$ to some finite time larger than both $t_1$ and $t_2$, and returns to $t=0$. In this Heisenberg picture, the density matrix simply represents the initial state at $t=0$.
In equilibrium systems, this density matrix may also be written in terms of the Hamiltonian $\hat{H}$ as $\hat{\rho} = e^{-\beta \hat{H}}/\text{Tr}(e^{-\beta \hat{H}})$. Further identifying the inverse temperature $\beta = 1/T$ with an imaginary time, this density matrix is formally equivalent to time evolution along the Keldysh contour extended to $-i\beta$, and may be included in the path integral formulation that way.
Fully non-perturbative evaluation of equal-time ($t_1=t_2$) correlators in thermal equilibrium is by now routine through the lattice discretization of field theory systems, and the application of numerical importance (Monte-Carlo, MC) sampling. This works because the weights of paths in the path integral, $e^{iS}$, are real (and positive) when evaluated only on the imaginary part of the time contour, $e^{iS}\rightarrow e^{-S_E}$, and these may therefore be sampled as a probability distribution. The statistical averaging converges well, since paths with large Euclidean action are exponentially suppressed.
Unfortunately, when evaluating non-equal time or non-equilibrium correlators, the time contour is no longer purely imaginary, the weights are complex and oscillating rather than real, and standard importance sampling techniques no longer apply. This is known as the ``sign problem'' of real-time lattice field theory.
Over the past several years, possible avenues to solving this problem have been explored. Some of these involve allowing the real field variables, for the purpose of evaluating the path integral only, to take on complex values (see \cite{Berges:2006xc} for an early work, and \cite{Alexandru:2020wrj} for a brief review). This renders the action complex and can in some cases make the path integral better convergent. Recently, the use of Picard-Lefschetz Thimbles or Generalised Thimble methods has successfully mitigated the sign problem for fully real-time processes, but still at a considerable computational cost \cite{mou2019real, mou2019quantum, tanizaki2014real}.
The central insight is that since the action is a smooth (typically polynomial) function of the field variables, the integral over these field variables is unchanged by deforming the integration region from the real axis to some other contour in the complex plane. An efficient evaluation becomes a question of finding the optimal (or just a good) deformation of $\mathbb{R}^n$ into $\mathbb{C}^n$, where $n$ is the number of field variables to be integrated over\footnote{We will work on a finite space-time lattice, and so $n$ is finite, although often large.}. Picard-Lefschetz thimbles or Generalised Thimble methods provide an algorithm for doing this.
Much attention has been given to finite density problems in QCD, where the sign problem may likely be alleviated through complexification of the variables using Complex Langevin dynamics (see for instance \cite{Aarts:2008rr,Sexty:2013ica}) and recently through the method of thimbles \cite{Cristoforetti:2012su,Cristoforetti:2013wha}.
The sign problem is even more severe for the real-time evolution out of equilibrium, but we were able to demonstrate that also for this case, the method of thimbles provides a significant improvement \cite{mou2019real}. In short, one may apply standard importance sampling to the initial density matrix for the field variables at $t=0$, and subsequently evaluate integrals over the $t>0$ variables using the thimble methods. This works for initial states corresponding to positive definite Wigner functions.
As the number of field variables increases, the computational effort of evaluating correlators grows exponentially due to the sign problem. In principle, complexifying the variables by means of the Picard-Lefschetz Thimbles and Generalised Thimble methods resolves this problem, but the computational cost is still substantial and grows as a power of the number of field variables. Hence, full field theory in 3+1 dimensions for any useful physical time-scale requires further analysis, diagnostics and optimisation of the bottlenecks of the numerical implementation. In the first part of the present paper, we will perform such an analysis in the context of field theory in 0+1 dimensions. This is a numerically manageable system, allowing us to better investigate the space of physical parameters as well as parameters of the numerical implementation. It also has the advantage that direct computation of the evolution using standard quantum mechanical evolution is possible very cheaply, providing something to compare our results to. In the second part, we will apply the Generalised Thimble method to a system of two interacting scalar fields. Mixing and decay of interacting fields in real time is an obvious (current and future) application of the Generalised Thimble method. In addition to confirming the applicability and accuracy of the method, we will be able to assess what size lattice, what amount of MC sampling, and consequently how much computing time is required to convincingly tackle relevant physical processes.
The paper is structured as follows: In Section 2, we present the Lefschetz Thimble and Generalised Thimble methods in some detail and offer a toy model example. We also set up the types of initial conditions we will be considering. In Section 3 we dive deeper into the technology of the numerical implementation and identify the primary bottlenecks. We then investigate ways of tuning the numerics for optimal convergence. In Section 4, we introduce a second field, and present results for the correlator for the decoupled case, for the case when the two fields mix, and for when they are quartically coupled allowing one species to decay into the other. We conclude in section 5.
Since we are in effect working in quantum mechanics (field theory in 0+1 dimensions), we present in Appendix A the (numerically much simpler) standard quantum mechanical method we will use for comparison.
\section{Path Integrals in the complex plane}
\label{sec:thimble}
Consider a path integral of the form
\begin{equation}
\label{eqn:path_integral}
A = \int_{\mathbb{R}^n} \mathcal{D}\varphi \; e^{-\mathcal{I}(\varphi)},
\end{equation}
with real, scalar variables $\varphi$. It is implied that there is a finite number $n$ of variables, so that $\mathcal{D}\varphi=\Pi_n d\varphi$. The action $-\mathcal{I}=iS$ is a function of all the $\varphi$, and most often imaginary, meaning the integrand is oscillatory with a constant unit amplitude but variable phase\footnote{Note that in the following, we are considering the object $\mathcal{I}=-iS$ rather than $S$ itself.}. This variable phase is the root cause of the ``sign problem'', and makes the integral difficult to evaluate even using numerical methods. A solution to this problem is provided by a multidimensional version of Cauchy's theorem. By promoting the real variables $\varphi$ to complex variables, denoted $\phi$, the integration manifold $\mathbb{R}^n$ can be deformed into an $n$-dimensional manifold in $\mathbb{C}^n$ without altering the value of the integral. The task is to select a manifold where the integrand has no, or at least better behaved, oscillations. We note that although this amounts to deforming the integration regions $\mathbb{R}$ for each of the $\varphi$ into the complex plane, the optimal common $n$-dimensional manifold may not necessarily follow from deforming the domain of each $\varphi$ independently.
\subsection{Lefschetz Thimbles}
\label{Lefschetz}
Picard-Lefschetz theory provides a flow equation to find an appropriate manifold, known as a Lefschetz thimble.
Given the action $\mathcal{I}(\{\phi_j\})$ and some initial values for all the variables $\phi_j$, we can ``flow'' them in (non-physical) time $\tau$, according to
\begin{equation}
\label{eqn:flow}
\frac{\text{d}\phi_j}{\text{d}\tau} = \overline{\frac{\partial \mathcal{I}}{\partial \phi_j}}.
\end{equation}
This is a coupled set of equations for all the variables $\phi_j$, for which the right-hand side is in general not real. So given an initial set of real values for all the $\varphi_j$ (a real configuration), we can flow them in $\tau$ to a corresponding set of complex values $\phi_j$ (a complex configuration). And we may evaluate the integrand $e^{-\mathcal{I}}$ at that configuration rather than the original real-valued one.
One property of this procedure is that the classical solution (or in this context, the critical point in the space of field configurations) satisfying
\begin{equation}
\label{eqn:critical}
\left. \frac{\partial \mathcal{I}}{\partial \phi_j} \right|_{critical} = 0 ,
\end{equation}
is a fixed point of the flow, and the corresponding integrand contribution is unchanged. For all other configurations, we see that as the flow proceeds, the action changes according to
\begin{equation}
\label{eqn:damp}
\begin{split}
\frac{\partial \mathcal{I}}{\partial \tau} &= \sum_j \frac{\partial \mathcal{I}}{\partial \phi_j}\frac{\partial \phi_j}{\partial \tau}
= \sum_j \frac{\partial \mathcal{I}}{\partial \phi_j} \overline{\frac{\partial \mathcal{I}}{\partial \phi_j}}
= \sum_j \left| \frac{\partial \mathcal{I}}{\partial \phi_j} \right|^2.
\end{split}
\end{equation}
This quantity is explicitly real, and consequently, the imaginary part of $\mathcal{I}$ is unchanged during the flow while a positive real part is acquired, exponentially suppressing the oscillating integrand contribution of the configuration. In principle, flowing to $\tau \rightarrow \infty$, the oscillations are removed completely, but this is rarely possible.
This procedure ensures that the full path integral over the entire $\mathbb{R}^n$ with equal amplitude integrands everywhere, now reduces to integrals over localised regions (thimbles) in configuration space close to the classical configurations (critical points). Care must be taken to include the contributions from all such thimbles, one for each classical solution (or stationary point) of the action.
\begin{figure}[htb!]
\centering
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width = \textwidth]{thimble_00.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width = \textwidth]{tau_00.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width = \textwidth]{thimble_001.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width = \textwidth]{tau_001.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width = \textwidth]{thimble_1.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width = \textwidth]{tau_1.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width = \textwidth]{thimble_PL.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width = \textwidth]{tau_PL.pdf}
\caption{}
\end{subfigure}
\caption{The thimbles (left) and the real and imaginary parts of the integrand along the generalized thimble (right), $A = \exp[-\mathcal{I}(\phi)]$, where $\mathcal{I}(\phi) = -\frac{1}{2}i\phi^2$, for a number of different flow times $\tau$.}
\label{fig:airy_demo}
\end{figure}
To illustrate the procedure for a very simple example, consider $\mathcal{I} = - \frac{1}{2}i \varphi^2$, with a single variable, $n=1$. The flow equation (\ref{eqn:flow}) gives
\begin{equation}
\frac{\text{d} \phi}{\text{d} \tau} = \overline{-i\phi}.
\end{equation}
Writing $\phi = a + ib$
\begin{equation}
\dot{a} + i\dot{b} = i(a - ib),
\end{equation}
and splitting into real and imaginary parts gives
\begin{equation}
\begin{split}
\dot{a} = b, \quad \dot{b} = a \rightarrow \ddot{a} = a, \quad \ddot{b} = b.
\end{split}
\end{equation}
The critical point is $\phi=0$, and for any real initial value $a_0$, we find
\begin{equation}
\label{eqn:a_b}
\begin{split}
a = a_0\cosh(\tau), \qquad b = a_0\sinh(\tau).
\end{split}
\end{equation}
i.e.\ a straight line through the origin in the $\phi$-plane, of gradient $\tanh\tau$.
The thimble follows from taking $\tau\rightarrow \infty$, where $a=b$, as illustrated in Figure \ref{fig:airy_demo}. In that figure we also show the integrand $A=e^{-\mathcal{I}}$, which in the limit is just $e^{-a^2}$, rather than the original ($\tau=0$) integrand value $e^{ia_0^2/2}$.
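The analytic solution offers a convenient check on a direct numerical integration of the flow equation; a standalone sketch with a simple Euler scheme (the step count is an arbitrary choice):

```python
import cmath

def flow(phi0, tau, steps=100000):
    """Euler integration of d(phi)/d(tau) = conj(dI/dphi)
    for the toy action I = -(i/2) phi^2, starting from a real phi0."""
    phi = complex(phi0)
    dtau = tau / steps
    for _ in range(steps):
        dI = -1j * phi               # dI/dphi for I = -(i/2) phi^2
        phi += dI.conjugate() * dtau
    return phi
```

The result can be compared against the exact $\phi(\tau) = a_0(\cosh\tau + i\sinh\tau)$.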
\subsection{Generalised Thimble Method}
\label{sec:GenThimbles}
For multi-variable systems it is seldom possible to find the Lefschetz thimble analytically. Instead, a numerical solution of the flow equation is required, which in practice means setting a finite maximum flow time $\tau_{max}$. In this way, one may flow the initial manifold $\mathcal{M}_0 = \mathbb{R}^n$ (real variables $\varphi_j$) to some other, complex manifold $\mathcal{M}_\tau$ (variables $\phi_j$). The transformation between the two is encoded in the Jacobian
\begin{equation}
\label{eqn:jacobian_definition}
J_{ij} = \frac{\partial \phi_i}{\partial \varphi_j} ,
\end{equation}
so that the path integral may be written
\begin{equation}
\label{eqn:jacobian_integral}
\begin{split}
\int_{\mathcal{M}_0} \mathcal{D}\varphi \; e^{-\mathcal{I}(\varphi)} &= \int_{\mathcal{M}_\tau} \mathcal{D}\phi \; e^{-\mathcal{I}(\phi)} = \int_{\mathcal{M}_0} \mathcal{D}\varphi \; \det(J) e^{-\mathcal{I}(\phi)}.
\end{split}
\end{equation}
This Jacobian has its own flow equation
\begin{equation}
\label{eqn:Jacobian_calculation}
\frac{\text{d}}{\text{d} \tau} J_{ij} = \sum_s \overline{\frac{\partial^2 \mathcal{I}}{\partial \phi_i \partial \phi_s} J_{sj}},
\end{equation}
with $J = \mathbb{I}$ at $\tau=0$. In Figure \ref{fig:airy_demo}, left-hand panel, we show the Generalised Thimble (orange) and the asymptotic thimble (blue), while the right-hand panel shows the path integral integrand $A$ (real/blue and imaginary/orange parts) at different flow times. For $\tau=0$, the field only takes real values, and $A$ is strongly oscillating. Flowing to $\tau=0.01$ makes little difference, as $\phi$ still takes only values close to the real axis, and $A$ is somewhat damped, but still very much an oscillating function. The situation is very different for $\tau=1$, where $A$ is strongly suppressed away from $\phi=0$, and $\phi$ takes values close to, but not yet on, the thimble. As $\tau\rightarrow \infty$, the integrand $A$ is purely real and Gaussian near the critical point $\phi=0$.
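For the toy model of the previous subsection, $\partial^2\mathcal{I}/\partial\phi^2 = -i$, so Eq. (\ref{eqn:Jacobian_calculation}) reads $\text{d}J/\text{d}\tau = \overline{-iJ}$ with $J = 1$ at $\tau = 0$. The sketch below (our own) integrates the field and the Jacobian together, and can be checked against $J(\tau) = \cosh\tau + i\sinh\tau$, which also follows by differentiating Eq. (\ref{eqn:a_b}) with respect to $a_0$:

```python
def flow_with_jacobian(phi0, tau, steps=100000):
    """Euler integration of the flow and its Jacobian for the toy
    action I = -(i/2) phi^2, where d2I/dphi2 = -i and J(0) = 1."""
    phi, J = complex(phi0), complex(1.0)
    dtau = tau / steps
    for _ in range(steps):
        dphi = (-1j * phi).conjugate()   # conj(dI/dphi)
        dJ = (-1j * J).conjugate()       # conj(d2I/dphi2 * J)
        phi += dphi * dtau
        J += dJ * dtau
    return phi, J
```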
\subsection{The field theory path integral}
\label{sec:Path_integral}
\begin{figure}
\centering
\includegraphics[scale = 1.2]{_real_time_sequence.pdf}
\caption{The discretized Schwinger-Keldysh contour.}
\label{fig:contour}
\end{figure}
The path integral formalism on the Schwinger-Keldysh contour can be used to evaluate expressions of the form
\begin{equation}
\label{eqn:path_integral_expectation}
\left\langle \hat{\mathcal{O}}(t) \right \rangle = \text{Tr}\left(\hat{\mathcal{O}}(t) \hat{\rho}(t_0) \right) = C \int \mathcal{D} \phi \; \mathcal{O}(t) \left \langle \phi^+_0; t_0| \hat{\rho} | \phi_0^-; t_0 \right\rangle \exp{\left(\frac{i}{\hbar}\int \text{d}t L\right)}.
\end{equation}
The top (bottom) branch field variables are indicated with $+$ ($-$), and $\hat{\rho}$ is the density matrix, whose matrix elements are taken at the initial time $t_0$. The Lagrangian $L$ is a function of $\phi^+_n$, $\phi^-_n$, and we will in the following have in mind that time is discretized as displayed in Figure \ref{fig:contour}, with a finite number of time steps $N_{tot} = 2m + 1$. $C$ is a constant.
The initial state is left unspecified, and our contour does not have an imaginary time extension. Both the top and bottom branches are exactly on the real axis and are only separated here for clarity.
Keeping in mind our discussion above and Eq. (\ref{eqn:jacobian_integral}), we may rewrite this expression as
\begin{equation}
\label{eqn:observable_samples}
\left \langle \hat{\mathcal{O}}(t) \right \rangle = \frac{\int \mathcal{D} \phi \; e^{-\mathcal{I}(\phi)} \hat{\mathcal{O}}}{\int \mathcal{D} \phi \; e^{-\mathcal{I}(\phi)}} = \frac{\left \langle e^{-i \text{Im}[\mathcal{I}(\phi)] + i\arg[\det(J)]} \hat{\mathcal{O}} \right \rangle_P}{\left \langle e^{-i \text{Im}[\mathcal{I}(\phi)] + i\arg[\det(J)]}\right \rangle_P},
\end{equation}
where the expectation values are evaluated over a distribution $P$, defined as
\begin{equation}
P(\phi) = e^{-\text{Re}[\mathcal{I}(\phi)] + \ln|\det(J)|}.
\end{equation}
\subsection{Initial density matrix for $n$-particle states}
\label{sec:n_particle_density_matrix}
The initial conditions are defined at the initial time $t=0$ through the variables $\phi_0$, $\dot{\phi}_0$, which in discretized time involves $\phi_0$ and $\phi_1$, $\dot{\phi}_0=(\phi_1-\phi_0)/dt$. The appropriate set of field variables to sample are the Keldysh basis \cite{aarts1998classical, fukuma2017parallel},
\begin{eqnarray}
\phi^{cl}_n &= \frac{1}{2}(\phi^+_n + \phi^-_n), \qquad
\phi^q_n &= \phi^+_n - \phi^-_n,
\end{eqnarray}
sometimes termed the ``classical'' and ``quantum'' field variables. The two variables $\phi_0^q$ and $\phi_1^q$ are not sampled but integrated out directly in the path integral, which reduces the contour variables as shown in Figure \ref{fig:reduced_contour}.
Following the method outlined in \cite{mou2019real}, initial conditions consistent with a non-interacting thermal density matrix may be integrated out of the path integral and instead sampled from distributions given by
\begin{align}
\label{eqn:initial_conditions}
\left \langle \phi_0^{cl}(p) \left( \phi_0^{cl}(p')\right)^\dagger \right \rangle &= \frac{\hbar}{\omega_p}\left( n_p + \frac{1}{2}\right) (2\pi)^d\delta^d(p-p'), \nonumber \\
\left \langle \dot{\phi}_0^{cl}(p) \left( \dot{\phi}_0^{cl}(p')\right)^\dagger \right \rangle &= \omega_p \hbar\left( n_p + \frac{1}{2}\right) (2\pi)^d\delta^d(p-p').
\end{align}
\begin{figure}
\centering
\includegraphics[scale = 1.2]{denotion.pdf}
\caption{Reduced contour by integrating out the initial conditions. $\Tilde{\phi}_n^{cl}$ represents the solution to the equation of motion for $\phi^{cl}$ for a given set of initial conditions.}
\label{fig:reduced_contour}
\end{figure}
The initial correlators in Eq. (\ref{eqn:initial_conditions}) are defined in momentum space, and the objects $n_p$ and $\omega_p$ are the particle number and mode energy (or dispersion relation), respectively. In thermal equilibrium, a free scalar field would have a Bose-Einstein distribution and a standard relativistic dispersion relation, $\omega_p^2=p^2+m^2$. But in principle, one may choose anything, and thereby define some Gaussian non-equilibrium initial states.
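In 0+1 dimensions there is a single mode, so Eq. (\ref{eqn:initial_conditions}) reduces to $\langle (\phi_0^{cl})^2\rangle = (\hbar/\omega)(n+\tfrac{1}{2})$ and $\langle (\dot{\phi}_0^{cl})^2\rangle = \hbar\omega(n+\tfrac{1}{2})$, i.e.\ two independent Gaussians; a sampling sketch (our own, with $\hbar=1$ by default):

```python
import numpy as np

def sample_initial_conditions(omega, n_particle, size, rng=None, hbar=1.0):
    """Draw (phi0, phidot0) pairs with the single-mode variances of the
    Gaussian initial density matrix, in 0+1 dimensions."""
    rng = rng or np.random.default_rng(0)
    var = hbar * (n_particle + 0.5)
    phi0 = rng.normal(0.0, np.sqrt(var / omega), size)
    phidot0 = rng.normal(0.0, np.sqrt(var * omega), size)
    return phi0, phidot0
```

For $\omega=2$, $n=0$ this gives sample variances near $0.25$ and $1.0$ respectively.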
As we will discuss further below, we have now separated the Monte-Carlo sampling of the complete real-time path integral into two parts. First, the initial conditions $\phi_0^{cl}$ and $\phi_1^{cl}$ are sampled by drawing real values from a Gaussian distribution Eq. (\ref{eqn:initial_conditions}). For each such initial condition, we subsequently perform Monte-Carlo simulations using the Generalised Thimble method as described above, for all the remaining field variables $\phi_{n>1}^{cl}$, $\phi_{n>1}^q$, but keeping $\phi_0^{cl}$ and $\phi_1^{cl}$ fixed. Each initial condition uniquely determines a classical solution/critical point. At the same time, scanning over all initial conditions ensures that we include all critical points/thimbles in the system.
\section{Numerical Simulations and Optimisations}
\label{sec:Numerics}
We have now set up a formalism to compute any real-time correlator exactly, up to the lattice discretization and the statistical error of the Monte-Carlo sampling. The sign problem is alleviated for finite $\tau$ and in principle resolved for $\tau\rightarrow\infty$, although this limit may not be reached in practice.
In the following, we will investigate the scope and limitations of the formalism, from the point of view of one and two scalar fields. Spatial extent is at this stage less essential than time-extent, and we will proceed in 0+1 dimensions. We will comment briefly on 1+1 and 3+1 dimensional simulations in the Conclusions.
We will in the present section consider a simple scalar field model, for which the continuum action is
\begin{align}
S = \int d^Dx \left[\frac{1}{2}(\partial_\mu\phi)^2-\frac{m_\phi^2}{2}\phi^2\right].
\end{align}
In section \ref{sec:Two_fields}, this will be extended into a model of two interacting scalar fields, by adding
\begin{align}
+\int d^Dx \left[\frac{1}{2}(\partial_\mu\chi)^2-\frac{m_\chi^2}{2}\chi^2-\frac{\lambda_1}{4}\phi\chi-\frac{\lambda_2}{4}\phi^2\chi^2\right].
\end{align}
Whereas the first interaction term (proportional to $\lambda_1$) is really a non-diagonal mass contribution, leading only to mixing, the second one (proportional to $\lambda_2$) is a true non-linear interaction leading to particle decay.
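To make the effect of the bilinear term explicit, one can diagonalize the quadratic potential. As a quick consistency check (standard algebra, not taken from the paper), writing the quadratic potential as $\frac{1}{2}\Phi^T M^2 \Phi$ with $\Phi=(\phi,\chi)^T$ gives
\begin{align}
M^2 = \begin{pmatrix} m_\phi^2 & \lambda_1/4 \\ \lambda_1/4 & m_\chi^2 \end{pmatrix},
\qquad
m_\pm^2 = \frac{m_\phi^2+m_\chi^2}{2} \pm \sqrt{\frac{(m_\chi^2-m_\phi^2)^2}{4}+\frac{\lambda_1^2}{16}},
\end{align}
so for $\lambda_1\neq 0$ the propagating states are mixtures of $\phi$ and $\chi$, which is the origin of the oscillations studied below.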
\subsection{Algorithm implementation}
\label{sec:Algorithm}
\begin{figure}
\centering
\includegraphics[width = 0.75\textwidth]{hist.pdf}
\caption{Histogram of 400 random numbers, sampling the Gaussian initial conditions.}
\label{fig:Gaussian_sampling}
\end{figure}
We first set out the algorithm used to generate samples to compute the observables in Eq. (\ref{eqn:observable_samples}) \cite{mou2019real,Alexandru:2016gsd,Alexandru:2017lqr}.
\begin{enumerate}
\item We assume that a discretized action for one or more scalar fields is given, involving a set of physical parameters (masses, couplings, ...). This action also involves non-physical parameters, such as the lattice spacings in time and space, the Keldysh contour time extent, and the chosen finite number of lattice points, in space and time.
\item We pick a maximum flow time $\tau_{max}$ and a flow time step $d\tau$. We also select a MC proposal width $\delta$.
\item We define an initial condition through selecting the particle numbers $n_p$ and the mode energies $\omega_p$, for each lattice momentum mode. This may or may not be a thermal or vacuum state.
\item As described above, we draw a set of $N_{init}$ values for the initial field variables $\phi_0^{cl}$, $\phi_1^{cl}$. Figure \ref{fig:Gaussian_sampling} shows an example of 400 initial values of $\phi_0^{cl}$ for one of the data sets described below.
\item For each of these initial conditions, we first solve the classical equation of motion for the entire time extent on the lattice. This gives us values $\phi^{cl}_j$ for all times $j$, making up an initial configuration $\Tilde{\phi}_j^{cl}$ from which to start our Monte-Carlo sampling. As we also discussed above, the classical solution $\Tilde{\phi}_j^{cl}$ is a fixed point of the thimble gradient flow, a critical "point" in the multi-dimensional space spanned by all the $n$ complex planes.
\item Now we construct a MC chain of configurations through a Metropolis-like algorithm. Given a current real "n-th" configuration $\varphi_j^n$ (which initially is the classical solution, $\Tilde{\phi}_j^{cl}$), we flow the entire configuration using Eq. (\ref{eqn:flow}) until $\tau=\tau_{max}$. That gives a complex-valued configuration $\phi_j^n$ including both $cl$ and $q$ variables. We then construct the Jacobian $J$ allowing for the transformation between complex-valued $\phi_j^n$ and real-valued $\varphi_j^n$.
\item We randomly generate a complex proposal vector $\eta$ for all variables in the configuration (although not $\phi_0^{cl}$, $\phi_1^{cl}$), by drawing real and imaginary parts from a Gaussian with width $\sigma=\sqrt{2}\delta$.
\item We transform this into a proposal vector $\Delta$ on the real axis using $\eta =J\Delta$. This defines a new configuration $\varphi_j^{n+1}=\varphi_j^n+Re(\Delta)$.
\item We flow $\varphi_j^{n+1}$ to $\tau_{max}$ to give a proposal $\phi_j^{n+1}$.
\item We accept/reject the proposal with a probability \cite{mou2019real}
\begin{eqnarray}
Pr = \text{min}\left(e^{-(A_{n+1}-A_n)}, 1\right),
\end{eqnarray}
where
\begin{eqnarray}
A_n=
\text{Re}(\mathcal{I}_n) - 2\ln|\det J_{n}| + \Delta^T(J_n^\dagger J_n) \Delta/\delta^2 .
\end{eqnarray}
Note that this involves not just the difference in action but also the Jacobian. $\mathcal{I}_{n,n+1}$ and $J_{n,n+1}$ are understood to be evaluated at $\varphi_j^{n,n+1}$ respectively.
\item We repeat from step 6, until a sufficiently long MC chain is generated, say $N_{MC}$ steps.
\item Then we start over from step 6 with a new initial classical configuration generated in steps 4 and 5.
\end{enumerate}
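The Metropolis kernel in steps 6--10 can be sketched as follows. This is schematic: \texttt{flow}, \texttt{action} and \texttt{jacobian} are problem-specific stand-ins, and the proposal-probability term is written in a simplified symmetric form, so this is not a faithful reproduction of the production code.

```python
import numpy as np

def metropolis_step(varphi, flow, action, jacobian, delta, rng):
    """One Metropolis-like update (steps 6-10), schematic.
    `flow` maps the real configuration to the flowed complex one,
    `action` returns the complex action I, and `jacobian` the flow
    Jacobian J; all three are problem-specific stand-ins here."""
    phi, J = flow(varphi), jacobian(varphi)
    n = len(varphi)
    # Complex Gaussian proposal eta, pulled back to the real axis: eta = J Delta.
    eta = (rng.normal(0.0, np.sqrt(2) * delta, n)
           + 1j * rng.normal(0.0, np.sqrt(2) * delta, n))
    Delta = np.linalg.solve(J, eta)
    varphi_new = varphi + Delta.real
    phi_new, J_new = flow(varphi_new), jacobian(varphi_new)

    def A(I, Jm):  # effective action incl. Jacobian and proposal terms
        quad = (Delta.conj() @ (Jm.conj().T @ Jm) @ Delta).real
        return I.real - 2.0 * np.log(np.abs(np.linalg.det(Jm))) + quad / delta**2

    dA = A(action(phi_new), J_new) - A(action(phi), J)
    if rng.random() < min(1.0, np.exp(-dA)):
        return varphi_new
    return varphi
```

With a trivial flow (identity map, unit Jacobian) the Jacobian and proposal terms cancel and this reduces to ordinary Metropolis sampling of $e^{-\text{Re}\,\mathcal{I}}$.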
All the parameters $\tau_{max}$, $d\tau$, $\delta$, $N_{MC}$, $N_{init}$ may be optimised for best statistical significance and minimal numerical wall time. The optimal values may depend on the physical parameters in the action and the lattice size and discretization.
\subsection{Optimisation with a single field}
\label{sec:Optimisation}
\begin{figure}
\centering
\includegraphics[scale = 0.7 ]{phi_correlator_example.pdf}
\caption{The correlator Eq. (\ref{eqn:correlator}) for a single free field.}
\label{fig:correlator}
\end{figure}
We will perform our investigation of optimizing simulations with a single scalar field in 0+1 dimensions. We will consider one concrete test correlator, the non-equal time two-point function (or propagator)
\begin{align}
\label{eqn:correlator}
\langle \phi_k^{cl} \phi_j^{cl} \rangle, \quad 0 \leq k, j \leq m,
\end{align}
and gauge the performance based on how accurately we are able to determine this correlator. An example is shown in Figure \ref{fig:correlator}.
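Correlators like this are estimated by a phase-reweighted average over the thimble samples. As a sketch (assuming, as is standard for generalized-thimble sampling, that the chain was generated with the modulus weight $|e^{-\mathcal{I}}\det J|$, leaving only the residual phase to be reweighted):

```python
import numpy as np

def reweighted_average(obs, I, detJ):
    """Phase-reweighted estimate of <O>, assuming the Markov chain was
    generated with weight |exp(-I) det J|, so only the residual phase
    exp(-i Im I) * detJ/|detJ| remains to be reweighted."""
    phase = np.exp(-1j * I.imag) * detJ / np.abs(detJ)
    return np.sum(obs * phase) / np.sum(phase)
```

When the residual phase is nearly constant over the samples (the sign problem is mild), this reduces to the plain sample mean.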
After integrating out $\phi_{0,1}^q$, the discretized free-field action reads \cite{mou2019real} (see Fig. \ref{fig:reduced_contour})
\begin{align}
\mathcal{I} = &\left( \frac{-i}{\hbar}\right) \left[ \frac{2\phi_1 \Tilde{\phi}_2^{cl}}{\text{d}t} - \frac{\phi_2 \Tilde{\phi}_1^{cl}}{\text{d} t} + \frac{\phi_{2m - 2} \Tilde{\phi}_1^{cl}}{\text{d} t} \right. \nonumber \\
&\left. + \sum_{i = 1}^{2m -2} \frac{(\phi_{i + 1} - \phi_i)^2}{2\Delta_i} - \left( \frac{\Delta_i + \Delta_{i + 1}}{2}\right)\left( \frac{1}{2}m_\phi^2\phi_i^2\right)\right],
\end{align}
where
\begin{equation}
\Delta_i = \left\lbrace \begin{array}{cl}
\text{d} t, & 1 \leq i < m\\
-\text{d} t,& m \leq i < 2m - 1,
\end{array}\right.
\end{equation}
The mass is taken to be $m_\phi=1$ in lattice units and $\hbar=1$ throughout.
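The contour spacings $\Delta_i$ can be tabulated directly; a small helper (illustrative only), which also makes explicit that the forward and return branches cancel when summed:

```python
import numpy as np

def contour_spacings(m, dt):
    """Time-step array Delta_i for the closed Keldysh contour:
    +dt on the forward branch (1 <= i < m) and -dt on the
    return branch (m <= i < 2m - 1)."""
    delta = np.empty(2 * m - 2)
    delta[: m - 1] = dt
    delta[m - 1:] = -dt
    return delta
```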
The free field has a number of simplifying properties. Firstly, we know the exact solution for the two-point function to be a nicely oscillating (and therefore well-behaved) function, so that amplitudes and errors will be comparable for all $j,k$ pairs. Second, since the equation of motion is linear in $\phi$, the right-hand side of (\ref{eqn:Jacobian_calculation}) does not depend on $\phi_i$. As a result, the Jacobian in the flow evolution is constant, and does not need to be recomputed at every Monte Carlo and time step. This reduces the computational cost by about 95\%.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{cl_cn.pdf}
\caption{}
\label{fig:cl_cn}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{cn_ft.pdf}
\caption{}
\label{fig:cn_ft}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{cn_dl.pdf}
\caption{}
\label{fig:cn_dl}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{cl_ft.pdf}
\caption{}
\label{fig:cl_ft}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{cl_dl.pdf}
\caption{}
\label{fig:cl_dl}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{ft_dl.pdf}
\caption{}
\label{fig:ft_dl}
\end{subfigure}
\caption{A comparison of the maximum error on a test correlator for various simulation parameters.}
\label{fig:optimisation_plots}
\end{figure}
We select $dt=0.75$ and the number of time steps to be $m=10$. We then vary the number of initial conditions $N_{init}$ (equivalently, the number of MC chains), the flow time $\tau_{max}$, the proposal width $\delta$ and the length of the MC chains $N_{MC}$.
For the purpose of optimisation, we define our figure of merit to be the statistical error on the propagator, selecting the largest value over the $m=10$ time points. In Figure \ref{fig:optimisation_plots} we show correlation plots of this figure of merit as we vary the parameters of the algorithm. We see that the performance improves with increasing MC chain length, increasing proposal size $\delta$ and increasing flow time $\tau$. The number of initial conditions is less important, provided it is large enough to convincingly sample the initial Gaussian distribution\footnote{For a purely classical simulation, the statistical error decreases as $N_{init}^{-1/2}$.} (see again Figure \ref{fig:Gaussian_sampling}).
\begin{figure}
\centering
\includegraphics[scale = 0.85 ]{acceptance_probability.pdf}
\caption{Probability of a proposal being accepted for $\tau = 1$. Note that while acceptance probability decreases with step size, the 'speed' around the manifold increases as the larger step size compensates.}
\label{fig:acceptance_rate}
\end{figure}
We also see that the effects are uncorrelated: there is no favoured combination of parameters that improves accuracy beyond the combined individual effects. The runtime depends linearly on $N_{init}$, $N_{MC}$ and $\tau_{max}$, since these simply set how many times the algorithm steps are run. On the other hand, the runtime does not depend on $\delta$. However, $\delta$ cannot be increased indefinitely, as shown in Figure \ref{fig:acceptance_rate}, which shows the acceptance rate of MC steps as $\delta$ increases. The acceptance rate drops substantially at a maximal value $\delta_{max}$ (in this case 3.15, for $\tau_{max}=1$).
In general, the flowed field manifold can have a very complicated geometry. Having knowledge of the curvature in different directions along the manifold would allow us to generate random increments with different $\delta$ along each direction, for optimal speed through field configuration space. However, without such detailed knowledge of the geometry, we are left with selecting one, global $\delta$. The hope is that for a given set of parameters, we are able to identify $\delta_{max}$. Once the error can no longer be improved by increasing $\delta$, further improvements must come from increasing the flow time $\tau_{max}$ and the chain length $N_{MC}$.
The error is expected to decrease with chain length as $N_{MC}^{-1/2}$, although autocorrelation times complicate this. The dependence on the flow time appears to be approximately linear, with the error $\propto 1/\tau_{max}$.
To summarize: the flow time should be increased until the reward is cancelled by the corresponding decrease in $\delta_{max}$. Once this has been optimized, any further computing power should be used to increase the chain length and the number of chains/initial conditions. The chain length should in any case be large enough that ergodicity is achieved, and much longer than the autocorrelation time. Similarly, the number of chains/initial conditions must be large enough that the initial condition distribution is well sampled. Each chain is independent, providing an excellent opportunity for parallelisation.
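The autocorrelation caveat can be monitored in practice with a standard binning (blocking) analysis, which is generic MC methodology rather than anything specific to this paper's code: the binned error estimate grows with bin size until the bins exceed the autocorrelation time, at which point it plateaus at the true error.

```python
import numpy as np

def binned_error(samples, bin_size):
    """Binning estimate of the statistical error of the mean.
    For correlated chains, increase bin_size until this plateaus."""
    samples = np.asarray(samples)
    n_bins = len(samples) // bin_size
    bins = samples[: n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    return bins.std(ddof=1) / np.sqrt(n_bins)
```

For uncorrelated samples the result is independent of bin size and equals $\sigma/\sqrt{N}$.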
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{phi_tau_0-01.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width = \textwidth]{varphi_tau_0-01.pdf}
\caption {}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{phi_tau_0-1.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{varphi_tau_0-1.pdf}
\caption {}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{phi_tau_1.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{varphi_tau_1.pdf}
\caption {}
\end{subfigure}
\caption{Complex domain of a single field variable during the MC sampling of a multivariable system (left) and the corresponding domain in terms of the un-flowed real variable (right). Top to bottom, $\tau_{max}=0.01, 0.1, 1$.}
\label{fig:onevar_thimble}
\end{figure}
For the simple one-variable example of Figure \ref{fig:airy_demo}, we were able to explicitly compute the Thimble and the Generalized Thimble to which the field manifold flows. For multiple variables this is highly non-trivial, and in a MC sampling of coupled variables, it is the entire multidimensional manifold, including initial conditions, that is sampled. Still, it is illustrative to show the domain in the complex plane that one single variable samples during the course of the entire MC simulation. This depends on the flow time $\tau_{max}$; for $\tau_{max}=0$, the domain is the real axis. In Figure \ref{fig:onevar_thimble}, we show this domain for three different flow times, $\tau_{max}=0.01, 0.1, 1$. We see that for larger flow times, a larger region of the complex plane is sampled (note the different scales on the axes). In the right-hand panels, we see the corresponding distribution of the real-valued variables $\varphi$. As the flow time becomes larger, they cluster around an ever smaller range near, but displaced from, the origin. This is qualitatively similar to the one-variable example.
\section{An interacting two-field system}
\label{sec:Two_fields}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{phi_example15.pdf}
\caption{Free field $\langle \phi \phi \rangle$ un-equal time correlator}
\label{fig:phi_analytic_comparison}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{chi_example15.pdf}
\caption{Free field $\langle \chi \chi \rangle$ un-equal time correlator}
\label{fig:chi_analytic_comparison}
\end{subfigure}
\newline
\vspace{0.1cm}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,0,L2,0,_heisenberg.pdf}
\caption{Free field $\phi$ and $\chi$ occupation numbers, analytic computation.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,0,L2,0,_thimble.pdf}
\caption {Free field $\phi$ and $\chi$ occupation numbers, Thimble computation.}
\end{subfigure}
\caption{The un-equal time correlator (top) and occupation number (bottom) for two free fields, comparing semi-analytic results to Thimble results.}
\label{fig:analytic_comparison}
\end{figure}
Using the optimisations outlined in section \ref{sec:Optimisation} allows us to add a second field to our simulations without exceeding our computational resources. This second field exists on the same lattice as our original field, and is implemented through the action
\begin{align}
\mathcal{I} = & \left( \frac{-i}{\hbar} \right)\left[ \frac{2\phi_1 \Tilde{\phi}_2^{cl}}{\text{d} t} - \frac{\phi_2 \Tilde{\phi}_1^{cl}}{\text{d} t} + \frac{\phi_{2m-2} \Tilde{\phi}_1^{cl}}{\text{d} t} +\frac{2\chi_1 \Tilde{\chi}_2^{cl}}{\text{d} t} - \frac{\chi_2 \Tilde{\chi}_1^{cl}}{\text{d} t} + \frac{\chi_{2m-2} \Tilde{\chi}_1^{cl}}{\text{d} t} \right. \nonumber
\\ & \left. + \sum_{i = 1}^{2m - 2}\frac{(\phi_{i + 1} - \phi_i)^2}{2\Delta_i} + \frac{(\chi_{i + 1} - \chi_i)^2}{2\Delta_i} - \left( \frac{\Delta_i + \Delta_{i + 1}}{2}\right)\left( \frac{1}{2}m_\phi^2 \phi_i^2 + \frac{1}{2}m_\chi^2 \chi_i^2 + \frac{\lambda_1}{4}\phi_i \chi_i+\frac{\lambda_2}{4}\phi_i^2 \chi_i^2\right)\right],
\end{align}
where $\chi$ represents the second field. We have included a bilinear mass mixing term parameterized by $\lambda_1$ and a quartic interaction parameterized by $\lambda_2$. When $\lambda_1=\lambda_2=0$, we recover two decoupled free systems, as in the previous section. When $\lambda_1\neq 0$, the system is still free, but $\phi$ and $\chi$ are no longer mass eigenstates, and oscillation between the two states is expected. When $\lambda_2\neq 0$, we can expect actual interactions, decay and scattering between the two, depending on parameter values and the initial condition.
We will focus on the case when $\chi$ is the heavier field, and initially occupied, and the $\phi$ is the lighter field and initially in vacuum. Concretely, we take $n_p = 0$ for $\phi$ and $n_p = 1$ for $\chi$, $m_\phi = 1$ and $m_\chi = 2$. Considering first $\lambda_1=\lambda_2=0$, we display the free correlators in Figure \ref{fig:analytic_comparison}. As the system is really two-variable quantum mechanics, we can in fact solve the system semi-analytically using the method described in appendix \ref{app:Heisenberg}, and use this for comparison. All thimble results below were generated with 400 chains of length $2 \times 10^6$, with $\tau = 1.5$, $\text{d} t = 0.5$ and $\delta = 0.27$.
\subsection{Mass mixing and field oscillations}
\label{sec:mixing}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,2,L2,0,_heisenberg.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,2,L2,0,_thimble.pdf}
\caption {}
\end{subfigure}
\vspace{0.2cm}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,4,L2,0,_heisenberg.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,4,L2,0,_thimble.pdf}
\caption {}
\end{subfigure}
\vspace{0.2cm}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,6,L2,0,_heisenberg.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,6,L2,0,_thimble.pdf}
\caption {}
\end{subfigure}
\caption{Semi-analytic (left) and Thimble (right) occupation numbers for two fields mixing with different values of the parameter $\lambda_1$, $\lambda_2=0$.}
\label{fig:n_results}
\end{figure}
We may introduce a useful measure of the time-dependent occupation number, extracted from the equal-time correlators,
\begin{equation}
\label{eqn:occ_number}
\left\langle n_{\phi \: i} \right\rangle = \frac{1}{\hbar}\left(\sqrt{\left\langle \phi_i \phi_i \right \rangle \left \langle \dot{\phi}_i \dot{\phi}_i \right \rangle} - \frac{1}{2}\right).
\end{equation}
We now compute this for a number of different values of $\lambda_1$, still keeping $\lambda_2=0$. This is shown in Figure \ref{fig:n_results}, and we see that the Thimble method provides a very good qualitative and quantitative match to the semi-analytic computation.
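With $\hbar=1$ as used throughout the paper, Eq. (\ref{eqn:occ_number}) is a one-line computation on the measured equal-time correlators. As a consistency check, for a free thermal mode with $\langle\phi\phi\rangle=(n+\tfrac12)/\omega$ and $\langle\dot\phi\dot\phi\rangle=\omega(n+\tfrac12)$ it returns $n$:

```python
import math

def occupation_number(phi_phi, phidot_phidot):
    """Occupation number from equal-time correlators,
    n = sqrt(<phi phi><phidot phidot>) - 1/2, in units hbar = 1
    (Eq. occ_number)."""
    return math.sqrt(phi_phi * phidot_phidot) - 0.5
```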
\subsection{Interactions and particle exchange}
\label{sec:4point}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,0,L2,2,_heisenberg.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,0,L2,2,_thimble.pdf}
\caption {}
\end{subfigure}
\vspace{0.2cm}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,0,L2,4,_heisenberg.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,0,L2,4,_thimble.pdf}
\caption {}
\end{subfigure}
\vspace{0.2cm}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,0,L2,6,_heisenberg.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{L1,0,L2,6,_thimble.pdf}
\caption {}
\end{subfigure}
\caption{Semi-analytic (left) and Thimble (right) occupation numbers for two fields interacting with different values of the parameter $\lambda_2$, $\lambda_1=0$.}
\label{fig:scattering_results}
\end{figure}
We now turn off the mass mixing, setting $\lambda_1=0$, and instead turn on interactions $\lambda_2\neq 0$. Again, we start out with non-zero occupation number in the $\chi$ field, and vacuum in the $\phi$ field. The $\chi$ is heavy and the $\phi$ is light, $m_\chi/m_\phi=2$. Figure \ref{fig:scattering_results} shows again the evolution in time, but now including quartic interactions. We see that instead of oscillations, the $\chi$ "particles" are slowly leaking into $\phi$ particles. This is a truly non-equilibrium, non-perturbative computation, captured within what is admittedly a quite small physical time interval. Clearly, for this quantum mechanical system, it is vastly more efficient to simply solve it using the semi-analytic method. But we can see that with moderate numerical effort, the Thimble approach provides accuracy good enough to distinguish a gradual exchange of particles between $\phi$ and $\chi$.
\section{Conclusions}
\label{sec:Conclusions}
Using the technique developed in \cite{mou2019real}, we have demonstrated that multiple fields can be simulated fully non-perturbatively in real-time, for time-scales where interesting real-time physics may begin to be explored. In the particular system considered here, the fields were made to interact through mass mixing as well as through a 4-point interaction allowing for exchange of particles. By comparing to a standard semi-analytic computation in quantum mechanics, this is a further demonstration that real-time Generalised Thimble methods, as introduced in \cite{mou2019real}, give correct and accurate results.
It is however clear that in order to improve the computational viability of this new technique for full field theory at long physical times, the parameters controlling the numerical implementation must be optimised for minimal statistical error. These are critical as the system scales in complexity, but also because of the increasing number of independent chains required to probe the initial condition parameter space as the number of fields and the number of dimensions increases. By ensuring that the optimal simulation parameters are used, the simulation time can be improved by a factor of 5 compared to previous attempts. Despite this, large scale multi-field $3 + 1$ dimensional simulations are probably out of reach for present computing power using this technique in its present form. As an example, consider a small classical-statistical real-time simulation in 3+1 D, which would typically involve $32^3$ spatial sites, $dt=0.05$ with a mass of $am=0.5$, running until a physical time of order $mt=100$. That is an eye-watering $40\times 100 \times 32^3\times 2= 262$ million variables, where the factor of two accounts for the two Keldysh branches. In the present simulations, we had up to 30. Even when straining the simulations in just 1+1D, using perhaps $dt=0.1$, $am=1$, $N_x=16$ and simulating to $mt=25$, this is still $250\times 16\times 2=8000$ variables. Inversion of matrices of this size is possible, but generating sufficiently long MC chains remains a challenge.
The MC chains/initial conditions may be trivially parallelised. The issue, as for standard MC simulations in four Euclidean dimensions, is the computation and inversion of large matrices, in this case the Jacobian $J$ \cite{Alexandru:2016lsn}. Dealing with large (sparse) matrices is a well-known problem in that field, and optimised algorithms exist. Using GPUs rather than CPUs would be one way forward. As mentioned above, even for the fairly small systems considered here, up to 95\% of the runtime is spent dealing with the Jacobian. Since that scales as the number of variables squared (or even cubed), it will be the vastly dominant bottleneck for large systems.
Hence, we propose further work be done improving parallelisation within each chain, and the use of optimised algorithms for standard mathematical tasks. There are two good candidates for this effort, the implementations of Eqs. (\ref{eqn:flow}) and (\ref{eqn:Jacobian_calculation}) which generate $Nm + (Nm)^2$ coupled complex equations for $N$ fields and $m$ total dynamical lattice sites, and the solution of the matrix equation in step (3) of Section \ref{sec:Algorithm}. This would make larger flow times or longer Markov chain lengths viable, improving accuracy in combination with increasing the number of chains/initial conditions. In turn, this could allow for effective simulations of multi-field models in higher dimensions, for longer physical times.
\acknowledgments
PMS and SW were supported by STFC Grant No. ST/L000393/1 and ST/P000703/1. AT is supported by a UiS-ToppForsk grant. The numerical work was performed on the Abel supercomputing cluster of the Norwegian computing network Notur.
\section{Introduction. Basic formulas and results}
The physical system on which we will focus is a free massless scalar quantum field theory defined over the finite interval $[0,L]$. The quantum Hamiltonian that describes the one particle states of this quantum field theory is given by the Laplace operator over the finite line $[0,L]$. It is a very well known fact that the Laplace operator over the finite line $[0,L]$ is not an essentially selfadjoint operator but instead admits an infinite set of selfadjoint extensions. We will denote by ${\cal M}$ the set of all the selfadjoint extensions of $\Delta$ over the finite line $[0,L]$. Physically speaking this means that there is an infinite set of possible quantum field theories that describe the behavior of a free quantum massless scalar field confined to propagate in the interval $[0,L]$. In order to respect the unitarity principle of quantum field theory we must only take into account those selfadjoint extensions of the Laplace operator that give rise to non-negative selfadjoint operators (see \cite{asor13-874-852}). As described in \cite{asor13-874-852} among the set of non-negative selfadjoint extensions we can distinguish between two different types:
\begin{enumerate}
\item Non-negative selfadjoint extensions of $\Delta$ over $[0,L]$ that are non-negative only for certain values of the finite length $L$ of the interval. Typically these selfadjoint extensions are non-negative for $L\geq L_0$ for a given $L_0$ that depends on the selfadjoint extension. When $L<L_0$ these selfadjoint extensions have negative eigenvalues and thus give rise to non unitary quantum field theories. We will call these selfadjoint extensions {\it weakly consistent selfadjoint extensions}.
\item Non-negative selfadjoint extensions of $\Delta$ over $[0,L]$ that are non-negative for any value of the finite length $L$ of the finite line. These selfadjoint extensions have only zero and positive eigenvalues for any value of $L\in(0,\infty)$. We will call these selfadjoint extensions {\it strongly consistent selfadjoint extensions} and following \cite{asor13-874-852} we denote by ${\cal M}_F$ the set of strongly consistent selfadjoint extensions.
\end{enumerate}
In this paper we will focus only on those free massless scalar quantum field theories defined by selfadjoint extensions of $\Delta$ over $[0,L]$ that are non-negative for all $L\in(0,\infty)$. Hence we will only be interested in the selfadjoint extensions contained in ${\cal M}_F$. This restriction is very natural from a quantum field theoretical point of view because the space ${\cal M}_F$ is stable under renormalization group transformations. On the other hand, the whole set of non-negative selfadjoint extensions for fixed length $L$ is not stable under renormalization group transformations, since there are non-negative selfadjoint extensions that will lose the non-negativity condition under a renormalization group transformation, giving rise to non-unitary quantum field theories (see Ref. \cite{aso-jpa07}). Typically one distinguishes separated and coupled boundary conditions \cite{zett05b}, but in the formulation
of \cite{asor13-874-852} this will not be necessary (an extension of the AIM formalism was first addressed in the first chapter of Ref. \cite{muca-phd09} and later on reformulated in a more rigorous approach in Ref. \cite{perez-pardo}).
In order to be able to characterize the selfadjoint extensions of ${\cal M}_F$ we will use the Asorey-Ibort-Marmo (AIM) formalism (see \cite{asor05-20-1001}) to characterize the selfadjoint extensions of $\Delta$ over the finite line $[0,L]$. From the first AIM theorem (see \cite{asor05-20-1001,asor13-874-852}) the set of selfadjoint extensions of $\Delta$ over $[0,L]$ is in one-to-one correspondence with the group ${\rm U}(2)$. Given any $U\in{\rm U}(2)$ we will denote the corresponding selfadjoint extension by $\Delta_U$. Each selfadjoint extension $\Delta_U$ is defined by its domain of functions ${\cal D}_U\subset H^2([0,L],\mathbb{C})$, where $H^2([0,L],\mathbb{C})$ is the Sobolev space of functions over the finite interval that are $L^2$ together with their derivatives up to second order. The domain ${\cal D}_U\subset H^2([0,L],\mathbb{C})$ that defines the selfadjoint extension $\Delta_U$ is given in terms of the matrix $U\in{\rm U}(2)$ (see \cite{asor05-20-1001,asor13-874-852}) by
\begin{equation}
{\cal D}_U=\left\{ \psi\in H^2([0,L],\mathbb{C})/ \,\, \varphi-i\dot\varphi=U(\varphi+i\dot\varphi)\right\},\label{dom}
\end{equation}
where $\varphi$ and $\dot\varphi$ are the boundary data\footnote{It is worth noting that for spacetime dimension higher than $1+1$ the maximal domain of the symmetric operator $\Delta$ defined over the compact manifold $M$ with smooth boundary $\partial M$ is the Sobolev space $H^2(M,\mathbb{C})$ and the domain of its adjoint $\Delta^\dagger$ is $H^2(M,\mathbb{C})\oplus\ker(\Delta^\dagger)$. This is of crucial importance from a physical point of view because boundary values of the fields that have singularities represent important physical situations as for example point charge distributions over the boundary. Nevertheless in the case of spacetime dimension $1+1$ and $M=[0,L]$ things become simpler because the space of boundary data is the linear vector space $\mathbb{C}^{2}$ (see Refs. \cite{muca-phd09,perez-pardo}). } for $\psi\in H^2([0,L],\mathbb{C})$:
\begin{equation}
\varphi\equiv\left(\begin{tabular}{c} $\psi(0)$ \\ $\psi(L)$ \end{tabular}\right),\quad \dot\varphi\equiv\left(\begin{tabular}{c} $-\psi'(0)$ \\ $\psi'(L)$ \end{tabular}\right).
\end{equation}
Following the notation in \cite{asor13-874-852} for any $\psi\in H^2([0,L],\mathbb{C})$ we introduce the 2 dimensional column vectors $\varphi_\pm(\psi)$:
\begin{equation}
\varphi_\pm(\psi)\equiv\left(\begin{tabular}{c} $\psi(0)\mp i\psi'(0)$ \\ $\psi(L)\pm i\psi'(L)$ \end{tabular}\right).\label{phipm}
\end{equation}
We can write the boundary condition given in eq. (\ref{dom}) in terms of $\varphi_\pm(\psi)$ as
\begin{equation}
\varphi_-(\psi)=U\cdot\varphi_+(\psi) .\label{bcphipm}
\end{equation}
Following the conventions and notation used in \cite{asor13-874-852} we parameterize the elements $U\in{\rm U}(2)$ by using 5 parameters:
\begin{equation}
U(\alpha,\beta,{\bf n})=e^{i\alpha}\left[ \cos(\beta)\mathbb{I}+i\sin(\beta) ({\bf n}\cdot \boldsymbol{\sigma})\right], \label{uparam}
\end{equation}
where $\mathbb{I}$ is the 2$\times$2 identity matrix, $\boldsymbol{\sigma}=(\sigma_1,\sigma_2,\sigma_3)$ are the Pauli matrices, ${\bf n}$ is a 3 dimensional unit vector ($n_1^2+n_2^2+n_3^2=1$) and the angles $\alpha$ and $\beta$ are such that
\begin{equation}
\alpha\in [-\pi,\pi];\quad\beta\in[-\pi/2,\pi/2].
\end{equation}
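As a side remark, the parametrization (\ref{uparam}) is easy to verify numerically. The following sketch (ours, not part of the original analysis; NumPy assumed) checks that $U(\alpha,\beta,{\bf n})$ is unitary for random admissible parameters:

```python
import numpy as np

# Sketch (ours, not from the paper): verify numerically that the
# parametrization U(alpha, beta, n) = e^{i alpha}[cos(beta) I
# + i sin(beta) n.sigma] always produces a unitary matrix when |n| = 1.
I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def U(alpha, beta, n):
    n_dot_sigma = sum(ni * si for ni, si in zip(n, sigma))
    return np.exp(1j * alpha) * (np.cos(beta) * I2
                                 + 1j * np.sin(beta) * n_dot_sigma)

rng = np.random.default_rng(0)
for _ in range(100):
    alpha = rng.uniform(-np.pi, np.pi)
    beta = rng.uniform(-np.pi / 2, np.pi / 2)
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)        # unit vector
    u = U(alpha, beta, n)
    assert np.allclose(u @ u.conj().T, I2)   # unitarity
```

In particular $U(\pi,0,{\bf n})=-\mathbb{I}$ (Dirichlet) and $U(0,0,{\bf n})=\mathbb{I}$ (Neumann).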
Using this parametrization we can characterize the non-zero part of the spectrum of any $\Delta_U\in{\cal M}$, including multiplicities of eigenvalues
(see the consistency lemma in \cite{asor13-874-852}). The secular equation
obtained in \cite{asor13-874-852} for any $\Delta_U\in{\cal M}$ is given by
\begin{eqnarray}
h_U(k)&=&\nonumber 2i e^{i\alpha}\left[ \sin(kL)\left((k^2-1)\cos(\beta)+(k^2+1)\cos(\alpha)\right)\right.\\
&-&\left.2k\sin(\alpha)\cos(kL)-2k n_1\sin(\beta)\right].\label{hspec}
\end{eqnarray}
The non-zero part $\tilde\sigma(\Delta_U)$ of the spectrum of $\Delta_U\in{\cal M}$ is given by
\begin{equation}
\tilde\sigma(\Delta_U)=\{k^2\in\mathbb{R}-\{0\}/\,\,h_U(k)=0\}=\{k^2\in\mathbb{R}-\{0\}/\,\,k\in{\rm Z}(h_U)-\{0\}\},
\end{equation}
where ${\rm Z}(h_U)$ denotes the set of zeroes of the function $h_U(k)$. For any non-zero root of $h_U(k)$ the multiplicity $d_U(k^2)$ of the corresponding eigenvalue is
\begin{equation}
\forall k\in{\rm Z}(h_U)-\{0\}:\quad d_U(k^2)=\left.{\rm Res}\left(\frac{d}{dz}\log(h_U(z))\right)\right|_{z=k}.
\end{equation}
Let us mention that the bound states of a given selfadjoint extension $\Delta_U\in{\cal M}$ are given by zeroes of $h_U(z)$ of the form $k=i\kappa$ with $\kappa>0$, i.e. $k^2<0$. Furthermore, note that from eq. (\ref{hspec}) it is easy to see that $\lim_{k\to 0}h_U(k)=0$. This fact alone does not ensure that the corresponding selfadjoint extension $\Delta_U$ admits zero modes. The question of which selfadjoint extensions in ${\cal M}_F$ admit zero modes will be answered in the next section.
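As a concrete sanity check (ours, not part of the original derivation; NumPy assumed), for Dirichlet boundary conditions, $U=-\mathbb{I}$, i.e. $\alpha=\pi$, $\beta=0$, the secular function reduces to $h_U(k)\propto\sin(kL)$, so its zeroes reproduce the familiar spectrum $k=n\pi/L$:

```python
import numpy as np

# Sanity check (ours): for Dirichlet conditions, U = -I, i.e.
# alpha = pi, beta = 0, one has h_U(k) ∝ sin(kL), so the zeroes of the
# secular function sit at k = n*pi/L.  g below is h_U(k)/(2 i e^{i alpha}).
def g(k, L, alpha, beta, n1):
    return (np.sin(k * L) * ((k**2 - 1) * np.cos(beta)
                             + (k**2 + 1) * np.cos(alpha))
            - 2 * k * np.sin(alpha) * np.cos(k * L)
            - 2 * k * n1 * np.sin(beta))

L = 1.3
for n in range(1, 6):
    assert abs(g(n * np.pi / L, L, np.pi, 0.0, 1.0)) < 1e-10
# no spurious zero halfway between consecutive eigenvalues:
assert abs(g(1.5 * np.pi / L, L, np.pi, 0.0, 1.0)) > 1e-3
```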
\par
Once all the selfadjoint extensions in ${\cal M}$ have been explicitly characterized using the AIM formalism (see \cite{asor05-20-1001} for details), following \cite{asor13-874-852} we can characterize all the selfadjoint extensions that belong to ${\cal M}_F$ and hence give rise to strongly consistent quantum field theories\footnote{Given that the AIM formalism (first AIM theorem in \cite{asor05-20-1001,asor13-874-852})
ensures the one-to-one correspondence between selfadjoint extensions of $\Delta$ over $[0,L]$ and unitary matrices of ${\rm U}(2)$, from now on we will not make a distinction between selfadjoint extensions $\Delta_U\in{\cal M}$ and unitary matrices $U\in{\rm U}(2)$ (see the appendix).}. One of the main results in \cite{asor13-874-852} is the characterization of the set ${\cal M}_F\subset{\cal M}$ of non-negative selfadjoint extensions $\forall L\in(0,\infty)$ (``strong consistency lemma''):
\begin{equation}
{\cal M}_F=\{U(\alpha,\beta,{\bf n})\in{\rm U}(2)={\cal M}/\quad 0\leq\alpha\pm\beta\leq\pi\}.\label{mf}
\end{equation}
In Figure \ref{mfpic} we can see a representation of the set ${\cal M}_F$ in the $\alpha\beta$-plane.
\begin{figure}[h]
\center{\includegraphics[width=6cm]{mf.jpg}}
\caption{\footnotesize{This graphic shows the set ${\cal M}_F$ in the $\alpha\beta$-plane. The top corner of the rhombus corresponds to Dirichlet boundary conditions and the bottom corner to Neumann boundary conditions, while the left and right corners correspond to periodic (left for $n_1=1$, right for $n_1=-1$) and anti-periodic (left for $n_1=-1$, right for $n_1=1$) boundary conditions.}}
\label{mfpic}
\end{figure}
Whereas extensive results on the spectral zeta functions and the heat kernel are available for the standard boundary conditions like Dirichlet, Neumann, Robin or periodic \cite{eliz94b,gilk95b,gilk04b,kirs02b,vass03-388-279},
general boundary conditions as described in (\ref{bcphipm}) have not been analyzed in comparable detail. This is the topic of the current paper.
Generic interest in the analysis of spectral functions stems from their relevance in global analysis \cite{gilk95b,ray71-7-145}
and quantum field theory topics such as the Casimir effect
\cite{blau88-310-163,bord09b,bord01-353-1,byts96-266-1,dowk76-13-3224,dowk78-11-895,hawk77-55-133,milt01b}.
The paper is organized as follows.
In Section 2 we answer the question of which selfadjoint extensions among the strongly consistent ones allow for zero modes. This is necessary as the details of the zeta function
analysis depend on this input. Based upon the function $h_U (k)$, eq. (\ref{hspec}), a contour integral representation of the zeta function for any strongly consistent
selfadjoint extension will be derived. As usual, residues and certain values of the zeta function determine the associated heat kernel coefficients. The cases with and without zero modes are
treated in different subsections of Section 3. Results for standard boundary conditions are verified as a check. In Section 4 we use the integral representation of the zeta function to
compute its derivative at $s=0$, once again for all possible cases. Checks for known results
are provided. In the conclusions we summarize the most important aspects of our work together with possible future directions of research.
\section{Zero modes of $\Delta_U\in{\cal M}_F$}
The purpose of this section is to study the zero mode structure of selfadjoint extensions contained in ${\cal M}_F$. In particular we will focus our attention on two main questions:
\begin{itemize}
\item Characterize the subset ${\cal M}^{(0)}_F\subset{\cal M}_F$ of selfadjoint extensions that have zero modes,
\begin{equation}
{\cal M}^{(0)}_F\equiv\left\{ \Delta_U\in{\cal M}_F/\quad 0\in\sigma(\Delta_U)\right\}.
\end{equation}
\item Study the zero mode structure and compute $\dim\left(\ker \Delta_U\right)$ of any $\Delta_U\in{\cal M}^{(0)}_F$.
\end{itemize}
The motivation to study these two questions about the zero modes of the selfadjoint extensions contained in ${\cal M}_F$ is to obtain the correct $a_{1/2}$ heat kernel coefficient, for which $\dim\left(\ker \Delta_U\right)$ must be known explicitly.
Zero modes do not contribute to the residues of the zeta function.
\par
The differential equation for the zero modes is
\begin{equation}
\frac{d^2}{dx^2}\psi_0(x)=0,\label{eqzm}
\end{equation}
and its general solution is given by
\begin{equation}
\psi_0(x)=a+b x,\label{genzm}
\end{equation}
where $a$ and $b$ are complex constants. Notice that:
\begin{itemize}
\item When $\Delta$ is defined over the whole real line, the only solution to eq. (\ref{eqzm}) of the form (\ref{genzm}) with finite ${\cal L}^2$ norm is the trivial one, $a=b=0$. Hence when $\Delta$ is defined over the real line there are no zero modes.
\item On the other hand, when $\Delta$ is defined over the finite interval $[0,L]$, the general solution (\ref{genzm}) always has finite ${\cal L}^2$ norm. Hence when $\Delta$ is defined over the finite interval there is the possibility of constant and linear zero modes.
\end{itemize}
Given a selfadjoint extension $\Delta_U\in{\cal M}_F$, in order to decide if it admits zero modes of the general form (\ref{genzm}) we must impose on (\ref{genzm}) the corresponding boundary condition given by (\ref{dom}). From eq. (\ref{phipm}) we obtain for $\psi_0(x)$
\begin{equation}
\varphi_\pm(\psi_0)\equiv\left(\begin{tabular}{c} $a\mp ib$ \\ $a+b(L\pm i)$ \end{tabular}\right)= \left(\begin{tabular}{cc} 1 & $\mp i$ \\ 1 & $L\pm i$ \end{tabular}\right)\cdot \left(\begin{tabular}{c} $a$ \\ $b$ \end{tabular}\right).\label{phizm}
\end{equation}
Using (\ref{phizm}) in the boundary condition (\ref{bcphipm}) we obtain the linear system
\begin{equation}
\left[\left(\begin{tabular}{cc} 1 & $i$ \\ 1 & $L- i$ \end{tabular}\right)-U\cdot \left(\begin{tabular}{cc} 1 & $- i$ \\ 1 & $L+ i$ \end{tabular}\right)\right]\cdot \left(\begin{tabular}{c} $a$ \\ $b$ \end{tabular}\right)=0.\label{bczm}
\end{equation}
This linear system is nothing but the boundary condition applied to the zero modes. Given its importance in this section, we denote the matrix of the linear system by $D_U$:
\begin{equation}
D_U=\left(\begin{tabular}{cc} 1 & $i$ \\ 1 & $L- i$ \end{tabular}\right)-U\cdot \left(\begin{tabular}{cc} 1 & $- i$ \\ 1 & $L+ i$ \end{tabular}\right) .\label{du}
\end{equation}
Next we investigate the solutions of the linear system (\ref{bczm}).
\subsection{The first question: characterization of ${\cal M}_F^{(0)}$}
From basic linear algebra we know that $\Delta_U\in{\cal M}_F$ admits zero modes if and only if the linear system (\ref{bczm}) has non-trivial solutions, i.e.
\begin{equation}
\ker(\Delta_U)\neq 0\,\,\Leftrightarrow\,\,\ker (D_U)\neq 0\,\,\Leftrightarrow\,\,\det(D_U)=0.
\end{equation}
Hence the characterization of ${\cal M}_F^{(0)}$ is given by
\begin{equation}
{\cal M}_F^{(0)}=\left\{ U\in{\cal M}_F/\quad \det(D_U)=0\right\}.
\end{equation}
To explicitly compute all the selfadjoint extensions contained in ${\cal M}_F^{(0)}$ we need to solve the secular equation of the linear system (\ref{bczm})
\begin{equation}
\det(D_U)=0 . \label{eqdu}
\end{equation}
Introducing the parametrization (\ref{uparam}) in (\ref{du}) and simplifying we obtain
\begin{equation}
\det(D_U)=2 e^{i\alpha}\left[ L\left( \cos(\alpha)-\cos(\beta)\right)-2\left( \sin(\alpha)+n_1\sin(\beta)\right)\right].\label{detdu}
\end{equation}
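The closed form (\ref{detdu}) can be cross-checked numerically against the determinant of $D_U$ computed directly from (\ref{du}); note in particular that it depends on ${\bf n}$ only through the first component $n_1$. A small sketch (ours, NumPy assumed):

```python
import numpy as np

# Cross-check (ours) of the closed form for det(D_U): it must agree with
# the determinant of D_U = A - U·B computed directly, and it depends on
# the unit vector n only through its first component n1.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def det_DU_direct(L, alpha, beta, n):
    A = np.array([[1, 1j], [1, L - 1j]])
    B = np.array([[1, -1j], [1, L + 1j]])
    U = np.exp(1j * alpha) * (np.cos(beta) * np.eye(2)
        + 1j * np.sin(beta) * sum(ni * si for ni, si in zip(n, sigma)))
    return np.linalg.det(A - U @ B)

def det_DU_closed(L, alpha, beta, n1):
    return 2 * np.exp(1j * alpha) * (L * (np.cos(alpha) - np.cos(beta))
                                     - 2 * (np.sin(alpha) + n1 * np.sin(beta)))

rng = np.random.default_rng(1)
for _ in range(50):
    L = rng.uniform(0.1, 5.0)
    alpha = rng.uniform(-np.pi, np.pi)
    beta = rng.uniform(-np.pi / 2, np.pi / 2)
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    assert np.isclose(det_DU_direct(L, alpha, beta, n),
                      det_DU_closed(L, alpha, beta, n[0]))
```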
Therefore, dropping the global factor $2 e^{i\alpha}$, which is never zero, the equation to solve is
\begin{equation}
L\left( \cos(\alpha)-\cos(\beta)\right)-2\left( \sin(\alpha)+n_1\sin(\beta)\right)=0\label{seceq}
\end{equation}
with the restrictions ensuring that the corresponding solution gives a matrix $U$ that is in ${\cal M}_F$:
\begin{equation}
n_1\in[-1,1];\,\,\alpha\in[0,\pi];\,\,\beta\in[-\pi/2,\pi/2];\,\, 0\leq\alpha\pm\beta\leq\pi \,.\label{mfcond}
\end{equation}
The simplest way to solve (\ref{seceq}) is by imposing $$\cos \alpha - \cos \beta =0 \Longrightarrow \alpha = \pm \beta,$$
which makes $$\sin \alpha + n_1 \sin \beta = \sin \alpha \pm n_1 \sin \alpha =0 \Longrightarrow n_1 = \mp 1$$ necessary. In fact,
these are all possible solutions: if $\cos \alpha - \cos \beta \neq 0$, then
$$ L = \frac{ 2\left(\sin \alpha + n_1 \sin \beta\right) }{ \cos \alpha - \cos \beta } $$
with $L>0$. However, with the parameters confined by the conditions in (\ref{mfcond}) one can show that the right-hand side is always non-positive, a contradiction.
As a consequence we have shown that
all the solutions to (\ref{seceq}) that satisfy conditions (\ref{mfcond}) are given by
\begin{equation}
{\cal M}_F^{(0)}=\left\{U\in{\cal M}_F/\quad n_1=\pm 1;\,\,\alpha\in[0,\pi/2];\,\,\beta=-n_1\alpha\right\} . \label{premf0}
\end{equation}
\begin{figure}[h]
\center{\includegraphics[width=6cm]{mfz+1.jpg}\qquad\includegraphics[width=6cm]{mfz-1.jpg}}
\caption{\footnotesize{Representation of ${\cal M}_F^{(0)}$ (red lines) in the $\alpha\beta$-plane.}}
\label{mfzeropic}
\end{figure}
\par
In terms of the parametrization given in (\ref{uparam}) the unitary matrices contained in ${\cal M}_F^{(0)}$ are given by
\begin{equation}
U\in{\cal M}_F^{(0)}\Rightarrow U=e^{i\alpha}\left[ \cos(\alpha)\mathbb{I}-i \sin(\alpha)\sigma_1 \right];\,\,\alpha\in[0,\pi/2], \label{umfz}
\end{equation}
where both branches $n_1=\pm 1$ of (\ref{premf0}) lead to the same matrix, since in either case $n_1\sin(\beta)=-\sin(\alpha)$.
Using the expression above for the matrices contained in ${\cal M}_F^{(0)}$ and the definition (\ref{du}), we find
\begin{equation}
D_U=\left(\begin{tabular}{cc} 0 & $i e^{i\alpha}\left(2\cos(\alpha)+L\sin(\alpha)\right)$ \\ 0 & $-i e^{i\alpha}\left(2\cos(\alpha)+L\sin(\alpha)\right)$ \end{tabular}\right)\quad\forall \,\,U\in{\cal M}_F^{(0)}.\label{dumfz}
\end{equation}
As can be seen from the expression above, when $U\in{\cal M}_F^{(0)}$ the matrix $D_U$ indeed has zero determinant. Figure \ref{mfzeropic} shows the set ${\cal M}_F^{(0)}$ in the $\alpha\beta$-plane.
\subsection{The second question: $\dim\left(\ker(\Delta_U)\right)$ for $\Delta_U\in{\cal M}_F^{(0)}$}
Taking into account eq. (\ref{bczm}) and the meaning of the constants $a$ and $b$, see eq. (\ref{genzm}),
the second question will be answered by studying the explicit solutions to (\ref{bczm}) when $D_U$ is given by expression (\ref{dumfz}), i.e. $U\in{\cal M}_F^{(0)}$.
We will answer this second question in two lemmas with their corresponding proofs.
\begin{lemma}\label{le1}
Any selfadjoint extension $\Delta_U\in{\cal M}_F^{(0)}$ admits a constant zero mode.
\end{lemma}
\paragraph{Proof.} To prove the lemma we only need to demonstrate that the column vector
\begin{equation}
v_c^{(0)}=\left(\begin{tabular}{c} a \\ 0 \end{tabular}\right),\quad a\neq 0,\label{vc}
\end{equation}
belongs to $\ker(D_U)$ for any $U\in{\cal M}_F^{(0)}$ (notice that according to (\ref{genzm}), when $b=0$ and $a\neq0$ the expression gives rise to the constant function over the interval $[0,L]$). For any $U\in{\cal M}_F^{(0)}$ the associated matrix $D_U$ is given by (\ref{dumfz}). Since the first column of (\ref{dumfz}) is identically zero, a direct calculation gives
\begin{equation}
D_U\cdot v_c^{(0)}=\left(\begin{tabular}{c} 0 \\ 0 \end{tabular}\right)\quad \forall\,\,U\in{\cal M}_F^{(0)}.
\end{equation}
Therefore $v_c^{(0)}$ is a solution to the linear system (\ref{bczm}). Hence, taking (\ref{genzm}) into account, for any $\Delta_U\in{\cal M}_F^{(0)}$ there exists a constant zero mode. $\blacksquare$
\par
This lemma ensures that any selfadjoint extension $\Delta_U\in{\cal M}_F^{(0)}$ has at least a constant zero mode, i.e.
\begin{equation}
\forall\,\,\Delta_U\in{\cal M}_F^{(0)},\quad \,\,\,\dim\left(\ker(\Delta_U)\right)=\dim\left(\ker(D_U)\right)\geq 1.\label{dimker1}
\end{equation}
Since any $\Delta_U\in{\cal M}_F^{(0)}$ has a constant zero mode, it only remains to explore whether there are selfadjoint extensions $\Delta_U\in{\cal M}_F^{(0)}$ that also admit a linear zero mode.
The condition for a selfadjoint extension $\Delta_U\in{\cal M}_F^{(0)}$ to admit a linear zero mode is given by
\begin{equation}
\dim\left(\ker(\Delta_U)\right)=\dim\left(\ker(D_U)\right)=2.\label{dimker2}
\end{equation}
Since $D_U$ is a $2\times 2$ complex matrix
\begin{equation}
\dim\left(\ker(D_U)\right)=2\,\,\Longleftrightarrow\,\, D_U=0.\label{duzero}
\end{equation}
That this condition characterizes the existence of a linear zero mode for a selfadjoint extension $\Delta_U\in{\cal M}_F^{(0)}$ follows from the following argument:
\begin{enumerate}[i.]
\item For any $\Delta_U\in{\cal M}_F^{(0)}$ there is a constant zero mode $\Rightarrow\,\,v_c^{(0)}$ given by (\ref{vc}) belongs to $\ker(D_U)$ for any $\Delta_U\in{\cal M}_F^{(0)}$.
\item $\Delta_U\in{\cal M}_F^{(0)}$ will admit a linear zero mode if and only if the matrix $D_U$ is such that there exists in addition to $v_c^{(0)}$ a solution to the linear system (\ref{bczm}) with $b\neq 0$ (see eq. (\ref{genzm})).
\item Hence $\Delta_U\in{\cal M}_F^{(0)}$ will admit a linear zero mode if and only if
\begin{equation}
\dim\left(\ker(D_U)\right)=2\,\,\Longleftrightarrow\,\, D_U=0,
\end{equation}
because any solution to (\ref{bczm}) with $b\neq 0$ will be linearly independent of the vector $v_c^{(0)}\in\ker(D_U)\,\,\forall\,\, \Delta_U\in{\cal M}_F^{(0)}$.
\end{enumerate}
\begin{lemma}\label{le2}
There are no selfadjoint extensions $\Delta_U\in{\cal M}_F^{(0)}$ that admit a linear zero mode.
\end{lemma}
\paragraph{Proof.} Given any $\Delta_U\in{\cal M}_F^{(0)}$, the necessary and sufficient condition for it to admit a linear zero mode is (\ref{duzero}). Since for $\Delta_U\in{\cal M}_F^{(0)}$ the associated matrix $D_U$ is given by (\ref{dumfz}), the condition $D_U=0$ reduces to the equation
\begin{equation}
2\cos(\alpha)+L\sin(\alpha)=0\,\,\Rightarrow\,\,\tan(\alpha)=-2/L.\label{dueqz}
\end{equation}
Since $L$ is the length of the interval, $L>0$ and hence $-2/L<0$, whereas $\tan(\alpha)\geq 0$ on $[0,\pi/2)$ and (\ref{dueqz}) reduces to $L=0$ at $\alpha=\pi/2$. Therefore there is no $\alpha\in[0,\pi/2]$ satisfying (\ref{dueqz})\footnote{Since $\Delta_U\in{\cal M}_F^{(0)}$ the angle $\alpha$ is restricted to lie in the interval $[0,\pi/2]$.}. Consequently no $\Delta_U\in{\cal M}_F^{(0)}$ can satisfy the condition $D_U=0$, i.e. no $\Delta_U\in{\cal M}_F^{(0)}$ admits a linear zero mode. $\blacksquare$
To conclude this section we compile all the results in the following theorem.
\begin{theor}
The space ${\cal M}_F^{(0)}\subset{\cal M}_F$ of non-negative selfadjoint extensions of the Laplace operator $\Delta$ over $[0, L]$ that admit zero modes is given by
\begin{equation}
{\cal M}_F^{(0)}=\left\{U\in{\cal M}_F/\quad n_1=\pm 1;\,\,\alpha\in[0,\pi/2];\,\,\beta=-n_1\alpha\right\}.\label{mf0}
\end{equation}
In addition $\dim\left(\ker(\Delta_U)\right)=1$ for any selfadjoint extension $\Delta_U\in{\cal M}_F^{(0)}$ and the unique zero mode is the constant function over the interval $[0,L]$.
\end{theor}
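A quick numerical illustration (ours, not part of the original analysis; NumPy assumed) of the theorem: scanning over both branches of ${\cal M}_F^{(0)}$, the matrix $D_U$ always has rank one, with kernel spanned by the constant mode $(a,0)^T$.

```python
import numpy as np

# Numerical illustration (ours) of the theorem: on both branches of
# M_F^(0) the matrix D_U has a one-dimensional kernel, spanned by the
# constant mode (a, 0)^T, so dim ker(Delta_U) = 1.
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)

def D_U(L, alpha, beta, n1):
    U = np.exp(1j * alpha) * (np.cos(beta) * np.eye(2)
                              + 1j * np.sin(beta) * n1 * sigma1)
    A = np.array([[1, 1j], [1, L - 1j]])
    B = np.array([[1, -1j], [1, L + 1j]])
    return A - U @ B

L = 2.0
for n1 in (1.0, -1.0):
    for alpha in np.linspace(0.0, np.pi / 2, 25):
        D = D_U(L, alpha, -n1 * alpha, n1)   # beta = -n1*alpha on M_F^(0)
        assert np.linalg.matrix_rank(D, tol=1e-10) == 1
        assert np.allclose(D @ np.array([1.0, 0.0]), 0.0)  # constant mode
```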
\subsection{A remark about the Von Neumann-Krein extension}
To complete the study of the zero-modes we will determine the minimal non-negative selfadjoint extension: the so-called Von Neumann-Krein extension. To introduce the general definition of the Von Neumann-Krein extension we need a quick overview of some general results (see Refs. \cite{alsi-jot80,teschl-am10,teschl-12}). Let $T_1$ and $T_2$ be two non-negative selfadjoint operators with dense domains in a Hilbert space ${\cal H}$. We say that
\begin{equation}
T_1\leq T_2
\end{equation}
if and only if
\begin{enumerate}[i]
\item ${\cal D}(T_2^{1/2})\subseteq {\cal D}(T_1^{1/2}),$
\item $\langle T_1^{1/2}\psi\vert T_1^{1/2}\psi \rangle_{L^2}\leq \langle T_2^{1/2}\psi\vert T_2^{1/2}\psi \rangle_{L^2}$ for all $\psi\in {\cal D}(T_2^{1/2}).$
\end{enumerate}
Let now $T$ be a non-negative symmetric operator over a Hilbert space ${\cal H}$. Then there exist two unique non-negative selfadjoint extensions $T_{min}$ and $T_{max}$ such that $T_{min}\leq T_{max}$ and every non-negative selfadjoint extension $S$ of $T$ satisfies
\begin{equation}
T_{min}\leq S\leq T_{max}.
\end{equation}
The minimal non-negative selfadjoint extension $T_{min}$ is the so-called Von Neumann-Krein (VNK) extension. From now on we will denote the Von Neumann-Krein extension by the subscript $VNK$.
Following subsection 11.1 in Ref. \cite{teschl-12} the VNK extension of the operator $T=-\Delta$ over the finite interval $[0,L]$ is characterized as the unique selfadjoint extension with a maximal number of zero modes. From
eq. (\ref{bczm}) the maximum number of zero-modes for the Laplace operator over the finite line is two: a constant zero-mode, and a linear zero mode. Therefore the condition that characterizes uniquely the VNK extension is
\begin{equation}
D_U=0 \Rightarrow U_{VNK}=\left(\begin{tabular}{cc} 1 & $i$ \\ 1 & $L- i$ \end{tabular}\right)\cdot \left(\begin{tabular}{cc} 1 & $- i$ \\ 1 & $L+ i$ \end{tabular}\right)^{-1},
\end{equation}
\begin{equation}
U_{VNK}=\frac{1}{L+2i}\left(\begin{tabular}{cc} $L$ & $2i$ \\ $2 i$ & $L$ \end{tabular}\right).\label{uvnk}
\end{equation}
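Both claims, unitarity of $U_{VNK}$ and $D_{U_{VNK}}=0$ (equivalently, the presence of both the constant and the linear zero mode), are easy to confirm numerically; the following sketch (ours, NumPy assumed) also verifies the parameters obtained below:

```python
import numpy as np

# Quick numerical confirmation (ours): U_VNK is unitary, satisfies
# D_{U_VNK} = 0 (so both the constant and the linear zero mode survive),
# and is reproduced by the parameters alpha = -beta = -arctan(2/L),
# n = (1, 0, 0).
L = 3.7
U_vnk = np.array([[L, 2j], [2j, L]]) / (L + 2j)
assert np.allclose(U_vnk @ U_vnk.conj().T, np.eye(2))      # unitarity

A = np.array([[1, 1j], [1, L - 1j]])
B = np.array([[1, -1j], [1, L + 1j]])
assert np.allclose(A - U_vnk @ B, 0.0)                     # D_{U_VNK} = 0

beta = np.arctan(2.0 / L)
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
U_param = np.exp(-1j * beta) * (np.cos(beta) * np.eye(2)
                                + 1j * np.sin(beta) * sigma1)
assert np.allclose(U_param, U_vnk)                         # parametrization
```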
It is straightforward to check that $U_{VNK}$ is a unitary matrix and therefore defines a selfadjoint extension of the Laplacian over the finite line. In order to determine
whether or not the VNK extension belongs to ${\cal M}_F$ we must compute the parameters $\{\alpha_{VNK},\beta_{VNK},{\bf n}_{VNK}\}$ that characterize the VNK extension in the parametrization given by (\ref{uparam}). Since (\ref{uvnk}) is a symmetric matrix we must have $n_2=0$, and since both diagonal elements are equal, $n_3=0$. Therefore we can assume without loss of generality that ${\bf n}_{VNK}=(1,0,0)$. Knowing that
\begin{equation}
\left.{\bf U}\right\vert_{n_1=1}=e^{i\alpha}\left(\begin{tabular}{cc} $\cos(\beta)$ & $i\sin(\beta)$ \\ $i\sin(\beta)$ & $\cos(\beta)$ \end{tabular}\right)
\end{equation}
and comparing with (\ref{uvnk}) we obtain the following two equations:
\begin{eqnarray}
e^{i\alpha}\cos(\beta)&=&\frac{L}{L+2i},\label{A}\\
e^{i\alpha}\sin(\beta)&=&\frac{2}{L+2i}.\label{B}
\end{eqnarray}
Dividing eq.~(\ref{B}) by eq.~(\ref{A}) it follows that $\tan(\beta_{VNK})=2/L$. Since the principal value of $\arctan(x)$ lies in the interval $(-\pi/2,\pi/2)$ we conclude that $\beta_{VNK}=\arctan\left(2/L\right)$.
In addition it is easy to see that $\sin(\beta_{VNK})=2/\sqrt{L^2+4}$ and $\cos(\beta_{VNK})=L/\sqrt{L^2+4}$. To determine $\alpha_{VNK}$ we sum (\ref{A})$+i$(\ref{B}) to obtain the equation $e^{i(\alpha_{VNK}+\beta_{VNK})}=1\Rightarrow\alpha_{VNK}=-\beta_{VNK}=-\arctan\left(2/L\right)$. Hence the VNK extension is characterized by:
\begin{eqnarray}
&& {\bf n}_{VNK}=(1,0,0),\quad \alpha_{VNK}(L)=-\beta_{VNK}(L),\\
&&\beta_{VNK}(L)=\arctan\left(2/L\right).
\end{eqnarray}
Taking into account that $\arctan\left(2/L\right)$ is a non-negative, monotonically decreasing function of $L$ that goes from the value $\pi/2$ for $L\rightarrow0$ to the value $0$ when $L\rightarrow\infty$, we conclude that $\beta_{VNK}\in(0,\pi/2)$ and $\alpha_{VNK}\in(-\pi/2,0)$ for any $L\in(0,\infty)$. In particular $\alpha_{VNK}-\beta_{VNK}=-2\arctan(2/L)<0$, which violates the condition $0\leq\alpha-\beta$ in (\ref{mf}); therefore $U_{VNK}\notin{\cal M}_F$ for any value of $L$. Nevertheless, since the VNK extension is non-negative we will be able to compute the heat kernel coefficients and the derivative at zero of the spectral zeta function with the methods we develop in the following sections.
\section{The heat kernel expansion of $\Delta_U\in{\cal M}_F$}
Using standard methods described for example in Ref. \cite{kirs02b} we will next compute all the coefficients of the asymptotic expansion of the heat kernel corresponding to any selfadjoint extension $\Delta_U\in{\cal M}_F$. Before carrying out the explicit calculation let us recall the general results from \cite{kirs02b} that will be needed.
Let $\hat{{\cal O}}$ be an elliptic non-negative selfadjoint second order differential operator (in one dimension) over a Hilbert space ${\cal H}$. Let $f_{\hat{{\cal O}}} (z)$ be a holomorphic function over the complex plane such that for $k\in\mathbb{R}$
\begin{equation}
\lim_{k\rightarrow 0}f_{\hat{{\cal O}}}(k)\neq 0,\,\infty,\label{specfzero}
\end{equation}
and such that the non-zero part of the spectrum of $\hat{{\cal O}}$ is given by\footnote{We will denote by $\sigma(\hat{{\cal O}})$ the spectrum of the operator $\hat{{\cal O}}$ and $\tilde{\sigma}(\hat{{\cal O}})$ the non zero part of $\sigma(\hat{{\cal O}})$. Given a function $f(z)$ over the complex plane we will denote by $Z(f)$ the set of its zeroes over the complex plane.}
\begin{equation}
\tilde{\sigma}(\hat{{\cal O}})=Z(f_{\hat{{\cal O}}}),
\end{equation}
where the multiplicities of eigenvalues are reflected in the order of the zeroes.
When $f_{\hat{{\cal O}}}$ satisfies the conditions stated above, the spectral zeta function of the operator $\hat{{\cal O}}$ can be written as:
\begin{equation}
\zeta_{\hat{{\cal O}}}(s)=\frac{\sin(\pi s)}{\pi}\int_0^\infty dk\cdot k^{-2 s}\partial_k\log\left(f_{\hat{{\cal O}}}(i k)\right).\label{zetagen}
\end{equation}
This approach has been used for many examples; see, e.g., \cite{kirs02b,kirs03-308-502}.
The integral in (\ref{zetagen}) in the current context will be convergent in the region $1/2 < \Re s <1$.
However, expression (\ref{zetagen}) admits an analytical continuation to the whole complex plane with, in general, poles at
\begin{equation}
s=\frac{1}{2}-n;\quad n=0,\, 1,\, 2,\, 3...\label{genzpoles}
\end{equation}
The heat kernel coefficients can be computed in terms of the residues at the poles and the values at non-positive integers of $\zeta_{\hat{{\cal O}}}(s)$ \cite{seel68-10-288}:
\begin{equation}
a_{1/2-z}(\hat{{\cal O}})=\Gamma(z){\rm Res}\left(\zeta_{\hat{{\cal O}}},s=z\right),\label{genhc1}
\end{equation}
\begin{equation}
a_{1/2+q}(\hat{{\cal O}})=(-1)^q\frac{\zeta_{\hat{{\cal O}}}(-q)}{\Gamma(q+1)}+\delta_{q,0}N_Z(\hat{{\cal O}}).\label{genhc2}
\end{equation}
In eq. (\ref{genhc2}), $N_Z(\hat{{\cal O}})$ denotes the number of zero modes of the operator $\hat{{\cal O}}$.
Hence, according to formulas (\ref{genhc1}) and (\ref{genhc2}), in order to know all the heat kernel coefficients we only need the residues at the poles and the values at the
non-positive integers of the spectral zeta function $\zeta_{\hat{{\cal O}}}(s)$. To apply formula (\ref{zetagen}) we need the secular
equation given in formula (\ref{hspec}). Note, however, that $k$ needs to be replaced by $ik$ when used in (\ref{zetagen}).
Directly from formula (\ref{hspec}) it is easy to see that
\begin{equation}
\lim_{k\rightarrow 0}h_U(k)=0.
\end{equation}
Therefore, formula (\ref{zetagen}) cannot be used directly with the function (\ref{hspec}) to compute the residues and the values at the non-positive integers of $\zeta_U(s)$ for any $\Delta_U\in{\cal M}_F$,
because (\ref{hspec}) does not satisfy the condition (\ref{specfzero}).
Hence we need to extract from (\ref{hspec}) a suitable function by studying the behaviour of $h_U(z)$ as $z\to 0$.
\subsection{Behaviour of $h_U(z)$ as $z\to 0$}
Performing a power series expansion around $k=0$ of the secular equation (\ref{hspec}) up to first order in $k$ we obtain
\begin{equation}
h_U(k)=2 ike^{i\alpha} \left(L \left(\cos (\alpha )-\cos (\beta )\right)-2\left(n_1 \sin (\beta )+ \sin (\alpha)\right)\right)+O\left(k^2\right).
\end{equation}
Taking into account eq. (\ref{detdu}) for any $\Delta_U\in{\cal M}_F$ we can write the power series expansion above as
\begin{equation}
h_U(k)=ik\det(D_U)+O\left(k^2\right).\label{hu1ordk}
\end{equation}
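Expansion (\ref{hu1ordk}) can be confirmed symbolically; the sketch below (ours, SymPy assumed) checks that the coefficient of $k$ in $h_U(k)$ equals $i\det(D_U)$ for arbitrary $\alpha$, $\beta$, $n_1$ and $L$:

```python
import sympy as sp

# Symbolic check (ours, SymPy) of the small-k expansion: the O(k)
# coefficient of h_U(k) equals i*det(D_U) for arbitrary parameters.
k, L, a, b, n1 = sp.symbols('k L alpha beta n1', real=True)
h = 2 * sp.I * sp.exp(sp.I * a) * (
    sp.sin(k * L) * ((k**2 - 1) * sp.cos(b) + (k**2 + 1) * sp.cos(a))
    - 2 * k * sp.sin(a) * sp.cos(k * L) - 2 * k * n1 * sp.sin(b))
detD = 2 * sp.exp(sp.I * a) * (L * (sp.cos(a) - sp.cos(b))
                               - 2 * (sp.sin(a) + n1 * sp.sin(b)))
linear_part = h.series(k, 0, 2).removeO()   # terms up to first order in k
assert sp.simplify(linear_part - sp.I * k * detD) == 0
```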
Hence for any $\Delta_U\in{\cal M}_F-{\cal M}_F^{(0)}$ the function that satisfies the required conditions to be used in the representation of the spectral zeta function given by eq. (\ref{zetagen}) is
\begin{equation}
\Delta_U\in{\cal M}_F-{\cal M}_F^{(0)}\quad\Rightarrow\quad f_U(k)=\frac{h_U(k)}{2ike^{i\alpha}}.\label{funoz}
\end{equation}
When the selfadjoint extension has a constant zero mode ($\Delta_U\in{\cal M}_F^{(0)}$) the first order in $k$ of the power expansion (\ref{hu1ordk}) vanishes.
Therefore we must expand $h_U$ up to order 3 (notice from eq. (\ref{hspec}) that the function $h_U$ is odd in $k$) to study the behavior at the origin:
\begin{equation}
\det(D_U)=0\Rightarrow h_U(k)=\frac{2ie^{i\alpha}k^3 L}{3} \left(L\left(2 \sin (\alpha )-n_1 \sin (\beta )\right)+3\left( \cos (\alpha )+ \cos
(\beta )\right)\right)+O(k^5).
\end{equation}
Hence, in order to obtain the function that satisfies the conditions under which (\ref{zetagen}) is valid, we must divide by an extra $k^2$ when $\Delta_U\in{\cal M}_F^{(0)}$:
\begin{equation}
\Delta_U\in{\cal M}_F^{(0)}\quad\Rightarrow\quad f^{(0)}_U(k)=\frac{h_U(k)}{2ik^3e^{i\alpha}}.\label{fusiz}
\end{equation}
\subsection{Heat kernel coefficients for $\Delta_U\in{\cal M}_F-{\cal M}^{(0)}_F$}
For this case, when $\cos (\alpha) + \cos (\beta) \neq 0$, the appropriate function is given by eq. (\ref{funoz}). Using (\ref{hspec}) we can rewrite (\ref{funoz}) for $k=ix$ as
\begin{eqnarray}
f_U(ix)&=&x e^{x L} \frac{\cos(\alpha)+\cos(\beta)}{2}\left[ 1+\frac{2}{x}\frac{\sin(\alpha)}{\cos(\alpha)+\cos(\beta)} +\frac{1}{x^2}\frac{\cos(\beta)-\cos(\alpha)}{\cos(\alpha)+\cos(\beta)}\right.\nonumber\\
&-&e^{-2 x L}\left( 1+\frac{x^{-2}(\cos(\beta)-\cos(\alpha))}{\cos(\alpha)+\cos(\beta)}-\frac{2x^{-1}\sin(\alpha)}{\cos(\alpha)+\cos(\beta)}\right)\nonumber\\
&+&\left.x^{-1}e^{-x L}\frac{4n_1\sin(\beta)}{\cos(\alpha)+\cos(\beta)}\right].\label{fuixnoz}
\end{eqnarray}
For positive $L$, the second and third line in eq. (\ref{fuixnoz}) represent exponentially damped terms as $x\to\infty$.
These terms do not contribute to the poles and to the values of $\zeta_{\Delta_U} (s)$ at non-positive integers.
Therefore, we can neglect them in the following formulas and denote them by $e.s.t.$ (exponentially small terms). Hence $\log(f_U(ix))$ will be given by
\begin{equation}
\log(f_U(ix))=\log\left(\frac{\cos(\alpha)+\cos(\beta)}{2}\right)+\log(x)+xL+\log(1+\tau_U(x)),
\end{equation}
where $\tau_U(x)$ is given by
\begin{equation}
\tau_U(x)=\frac{2}{x}\frac{\sin(\alpha)}{\cos(\alpha)+\cos(\beta)} +\frac{1}{x^2}\frac{\cos(\beta)-\cos(\alpha)}{\cos(\alpha)+\cos(\beta)} +e.s.t.
\end{equation}
Now if we take into account the series expansion
\begin{equation}
\log(1+\tau)=\sum_{n=1}^\infty(-1)^{n+1}\frac{\tau^n}{n},
\end{equation}
we can write
\begin{equation}
\log(1+\tau_U(x))=\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}\left(\frac{2}{x}\frac{\sin(\alpha)}{\cos(\alpha)+\cos(\beta)} +\frac{1}{x^2}\frac{\cos(\beta)-\cos(\alpha)}{\cos(\alpha)+\cos(\beta)}\right)^n+e.s.t.
\end{equation}
Using Newton's binomial formula we can write
\begin{equation}
\tau_U(x)^n/n=\sum_{j=0}^n\frac{\Gamma(n)2^{n-j}\sin^{n-j}(\alpha)}{\Gamma(j+1)\Gamma(n-j+1)}\frac{(\cos(\beta)-\cos(\alpha))^j}{(\cos(\alpha)+\cos(\beta))^n}x^{-(n+j)}.
\end{equation}
After reordering the double summation we obtain
\begin{eqnarray}
&&\log(1+\tau_U(x))=\sum_{m=1}^\infty b_mx^{-m},\\
&& b_m\equiv\sum_{j=0}^{[m/2]}(-1)^{m-j+1}\frac{2^{m-2j}\Gamma(m-j)\sin^{m-2j}(\alpha)}{\Gamma(j+1)\Gamma(m-2j+1)} \frac{(\cos(\beta)-\cos(\alpha))^j}{(\cos(\alpha)+\cos(\beta))^{m-j}},\label{jos1}
\end{eqnarray}
where $m=1,2,3,\ldots$ Finally, we obtain the following asymptotic series for $\partial_x\log(f_U(ix))$,
\begin{equation}
\partial_x\log(f_U(ix))=L+x^{-1}-\sum_{m=1}^\infty m b_mx^{-m-1}+e.s.t.
\end{equation}
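The coefficients $b_m$ of eq. (\ref{jos1}) can be checked against a direct series expansion of $\log(1+\tau_U(x))$; the sketch below (ours, SymPy assumed; the parameter values $\alpha=7/10$, $\beta=3/10$ are a sample choice of ours) compares the first five coefficients:

```python
import sympy as sp

# Check (ours, SymPy) of the coefficients b_m against a direct expansion
# of log(1 + tau_U(x)) in powers of 1/x, for sample parameter values.
alpha, beta = sp.Rational(7, 10), sp.Rational(3, 10)
A = 2 * sp.sin(alpha) / (sp.cos(alpha) + sp.cos(beta))
B = (sp.cos(beta) - sp.cos(alpha)) / (sp.cos(alpha) + sp.cos(beta))

def b(m):
    # eq. (jos1): Gamma(m-j) = (m-j-1)!, Gamma(j+1) = j!, etc.
    return sum((-1)**(m - j + 1) * 2**(m - 2*j) * sp.factorial(m - j - 1)
               * sp.sin(alpha)**(m - 2*j)
               / (sp.factorial(j) * sp.factorial(m - 2*j))
               * (sp.cos(beta) - sp.cos(alpha))**j
               / (sp.cos(alpha) + sp.cos(beta))**(m - j)
               for j in range(m // 2 + 1))

t = sp.symbols('t', positive=True)          # t plays the role of 1/x
series = sp.series(sp.log(1 + A * t + B * t**2), t, 0, 6).removeO()
for m in range(1, 6):
    assert abs(sp.N(series.coeff(t, m) - b(m))) < 1e-12
```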
Taking into account the integral representation (\ref{zetagen}) we can write for any selfadjoint extension $\Delta_U\in{\cal M}_F-{\cal M}^{(0)}_F$,
\begin{equation}
\zeta_{\Delta_U}(s)=\frac{\sin(\pi s)}{\pi}\int_0^1 d k\cdot k^{-2s}\partial_k\log(f_U(ik))+\frac{\sin(\pi s)}{\pi}\int_1^\infty d k\cdot k^{-2s}\partial_k\log(f_U(ik)) . \label{anyselfzet}
\end{equation}
With this splitting all the information about the poles and the values of $\zeta_{\Delta_U}(s)$ at the non-positive integers is contained in the integration from $1$ to $\infty$.
Therefore, in order to perform the analytic continuation of $\zeta_{\Delta_U}(s)$ to the complex plane, we have to perform the analytic continuation of
\begin{equation}
\frac{\sin(\pi s)}{\pi}\int_1^\infty d k\cdot k^{-2s}\partial_k\log(f_U(ik)) \label{kk3.24}
\end{equation}
to the complex plane. In order to do so we recall the following identities, valid for $\Re s$ sufficiently large:
\begin{eqnarray}
\int_1^\infty dz\cdot z^{-2s}&=&\frac{1/2}{s-1/2},\\
\int_1^\infty dz\cdot z^{-2s-1}&=&\frac{1/2}{s},\\
\int_1^\infty dz\cdot z^{-2s-m-1}&=&\frac{1/2}{s+m/2}.
\end{eqnarray}
Hence, the relevant information about the analytic continuation of (\ref{kk3.24}) is contained in
\begin{equation}
\frac{\sin(\pi s)}{\pi}\int_1^\infty d k\cdot k^{-2s}\partial_k\log(f_U(ik))=\frac{\sin(\pi s)}{\pi}\left(\frac{L/2}{s-1/2}+\frac{1/2}{s} -\sum_{m=1}^{N-1} b_m\frac{m/2}{s+m/2}
+A(s)\right),
\end{equation}
where $N\in\mathbb{N}$ and $A(s)$ in the bracket on the right-hand side represents a function of $s$ analytic for $\Re s > -N/2$. The integer $N$ can be chosen as large
as we wish, and the details of the function $A(s)$ are irrelevant for our purposes in this section.
Using this analytic continuation, the poles of $\zeta_{\Delta_U}(s)$ can be easily computed:
\begin{eqnarray}
&&{\rm res}\left(\zeta_{\Delta_U}(s),s=1/2\right)={\rm res}\left(L\sin(\pi s)/(2\pi(s-1/2)),s=1/2\right)\nonumber\\
&\Rightarrow &{\rm res}\left(\zeta_{\Delta_U}(s),s=1/2\right)=\frac{L}{2\pi},\label{reszhalf}\\
&&{\rm res}\left(\zeta_{\Delta_U}(s),s=-\frac{(2n+1)}{2}\right)={\rm res}\left(-\frac{(2n+1)b_{2n+1}\sin(\pi s)}{2\pi(s+(2n+1)/2)},s=-\frac{(2n+1)}{2}\right)\nonumber\\
&\Rightarrow &{\rm res}\left(\zeta_{\Delta_U}(s),s=-\frac{(2n+1)}{2}\right)=(-1)^{n}b_{2n+1}\frac{(2n+1)}{2\pi}, \,\,\,\,n=0,1,2,3...\label{resz-halfint}
\end{eqnarray}
Furthermore, the analytic continuation gives the values of $\zeta_{\Delta_U}(s)$ at the non-positive integers:
\begin{eqnarray}
&&\zeta_{\Delta_U}(0)={1\over 2}\lim_{s\rightarrow 0}\frac{\sin(\pi s)}{\pi s}=1/2,\label{zeta0}\\
&&\zeta_{\Delta_U}(-n)=-n b_{2n}\lim_{s\rightarrow -n}\frac{\sin(\pi s)}{\pi(s+n)}=(-1)^{n+1} nb_{2n},\,\,\,\,n=1,2,3...\label{zeta-n}
\end{eqnarray}
Given eqs. (\ref{reszhalf})-(\ref{zeta-n}), for any $\Delta_U\in{\cal M}_F-{\cal M}^{(0)}_F$ such that $\cos(\alpha)+\cos(\beta)\neq 0$ it is easy to compute the heat kernel coefficients using the general formulas (\ref{genhc1}) and (\ref{genhc2}). Namely, we find
\begin{eqnarray}
&&a_0=\frac{L}{2\sqrt{\pi}},\quad a_{n+1}=-\frac{4^n n! b_{2n+1}}{(2n)!\sqrt{\pi}}, \,\,\,n=0,1,2,3,...\label{hkcint-nozm}\\
&&a_{1/2}=1/2,\quad a_{n+1/2}=-\frac{b_{2n}}{(n-1)!},\,\,\,n=1,2,3,...\label{hkchalfint-nozm}
\end{eqnarray}
\subsubsection{The case of $\Delta_U\in{\cal M}_F-{\cal M}^{(0)}_F$ with $\cos(\alpha)+\cos(\beta)=0$}
For this case the appropriate function reads
\begin{eqnarray}
f^{(B)}_U(ix)&=&e^{x L} \left[ \sin(\alpha) +\frac{1}{2x}\left(\cos(\beta)-\cos(\alpha)\right)+e^{-x L}2n_1\sin(\beta)\right.\nonumber\\
&+&\left. e^{-2 x L}\left( \sin(\alpha)-\frac{1}{2x}(\cos(\beta)-\cos(\alpha))\right)\right].\label{fbuix}
\end{eqnarray}
Following the same procedure as in the general case we expand, for $\alpha \neq \pi$,
\begin{equation}
\log\left(f^{(B)}_U(ix)\right)=\log\left(\sin(\alpha)\right)+xL+\sum_{m=1}^\infty c_mx^{-m}+e.s.t.,
\end{equation}
\begin{equation}
c_m=-\frac{\cot^m(\alpha)}{m} .
\end{equation}
Again the analytical continuation of
\begin{equation}
\frac{\sin(\pi s)}{\pi}\int_1^\infty d k\cdot k^{-2s}\partial_k\log(f^{(B)}_U(ik))
\end{equation}
provides the residues at the half integers and the values at the non-positive integers of $\zeta^{(B)}_{\Delta_U}(s)$,
\begin{eqnarray}
&&{\rm res}\left(\zeta^{(B)}_{\Delta_U}(s),s=1/2\right)=\frac{L}{2\pi},\\
&&{\rm res}\left(\zeta^{(B)}_{\Delta_U}(s),s=-\frac{(2n+1)}{2}\right)=(-1)^{n}c_{2n+1}\frac{(2n+1)}{2\pi},\,\,\,\,n=0,1,2,3...,\\
&&\zeta^{(B)}_{\Delta_U}(0)=0,\\
&&\zeta^{(B)}_{\Delta_U}(-n)=(-1)^{n+1}nc_{2n},\,\,\,\,n=1,2,3...
\end{eqnarray}
Once we use formulas (\ref{genhc1}) and (\ref{genhc2}) we obtain the corresponding heat kernel coefficients,
\begin{eqnarray}
&&a^{(B)}_0=\frac{L}{2\sqrt{\pi}},\quad a^{(B)}_{n+1}=-\frac{4^n n! c_{2n+1}}{(2n)!\sqrt{\pi}}, \,\,\,n=0,1,2,3,...,\\
&&a^{(B)}_{1/2}=0,\quad a^{(B)}_{n+1/2}=-\frac{c_{2n}}{(n-1)!},\,\,\,n=1,2,3,...
\end{eqnarray}
Finally, the case $\alpha = \pi$, $\beta =0$, has to be treated separately and
$$\left.\partial_x \left( \ln f_U^{(B)} (ix) \right)\right|_{\alpha = \pi} = L - \frac 1 x + e.s.t.$$
From here,
$$ {\rm res} \left( \left.\zeta_{\Delta_U}^{(B)} (s) \right|_{\alpha =\pi}, s= \frac 1 2 \right) = \frac L {2\pi} , \quad \quad
\left.\zeta_{\Delta_U}^{(B)} (0) \right|_{\alpha = \pi} = - \frac 1 2 , $$
and \begin{eqnarray}\left. a_0^{(B)} \right|_{\alpha = \pi} = \frac L { 2 \sqrt \pi} , \quad \quad \left.a_{1/2} ^{(B)} \right|_{\alpha =\pi} = - \frac 1 2,\label{heatdir}\end{eqnarray}
with all other residues, relevant values, and heat kernel coefficients equal to zero.
\subsection{Heat kernel coefficients for $\Delta_U\in{\cal M}^{(0)}_F$}
Taking into account (\ref{mf0}) we can write (\ref{fusiz}) as
\begin{equation}
f_U^{(0)}(ix)=\frac{e^{xL}}{x}\cos(\alpha)\left[1+ \frac{\tan(\alpha)}{x}-2\frac{e^{-xL}\tan(\alpha)}{x} -e^{-2xL}\left(1-\frac{\tan(\alpha)}{x}\right)\right].
\end{equation}
Therefore
\begin{equation}
\log\left(f_U^{(0)}(ix)\right)=\log(\cos(\alpha))+xL-\log(x)+\sum_{n=1}^\infty\frac{(-1)^{n+1}\tan^n (\alpha)}{n} x^{-n}+ e.s.t.
\end{equation}
\begin{equation}
\Rightarrow\partial_x\log\left(f_U^{(0)}(ix)\right)=L-\frac{1}{x}+\sum_{n=1}^\infty(-1)^{n}\tan^n(\alpha) x^{-n-1}+ e.s.t.
\end{equation}
Hence the analytical continuation gives as before the required residues and values of $\zeta^{(0)}_{\Delta_U}(s)$,
\begin{eqnarray}
&&{\rm res}\left(\zeta^{(0)}_{\Delta_U}(s);s=1/2\right)=L/2\pi,\\
&&{\rm res}\left(\zeta^{(0)}_{\Delta_U}(s);s=-\frac{2n+1}{2}\right)=\frac{(-1)^n}{2\pi} \tan^{2n+1}(\alpha),\,\,\, n=0,1,2,3,...,\\
&&\zeta^{(0)}_{\Delta_U}(0)=-1/2,\\
&&\zeta^{(0)}_{\Delta_U}(-n)=\frac 1 2 (-1)^n\tan^{2n}(\alpha),\,\,\, n=1,2,3,...
\end{eqnarray}
To obtain the heat kernel coefficients, we must take into account that there is one zero mode in all
cases as demonstrated previously. Therefore we must add 1 to $a_{1/2}$:
\begin{eqnarray}
&& a^{(0)}_0=\frac L {2\sqrt{\pi}},\quad a^{(0)}_{1/2}=\frac 1 2 ,\label{heatneu1}\\
&& a^{(0)}_{n+1}=-\frac{4^n n! \tan^{2n+1}(\alpha)}{(2n+1)!\sqrt{\pi}},\,\,\,n=0,1,2,3,...,\label{heatneu2}\\
&& a^{(0)}_{n+1/2}=\frac 1 2 \frac{\tan^{2n}(\alpha)}{n!},\,\,\,n=1,2,3,...\label{heatneu3}
\end{eqnarray}
The heat kernel coefficients obtained above for $\Delta_U\in{\cal M}^{(0)}_F$ become singular for $\alpha=\pi/2$. In this case instead
\begin{equation}
\left.\partial_x\log\left(f_U^{(0)}(ix)\right)\right\vert_{\alpha=\pi/2}=L-\frac{2}{x}+ e.s.t.,
\end{equation}
and therefore the spectral zeta function $\left.\zeta^{(0)}_{\Delta_U}(s)\right\vert_{\alpha=\pi/2}$ will only have a residue at $s=1/2$ and non zero value at $s=0$,
\begin{eqnarray}
&&{\rm res}\left(\left.\zeta^{(0)}_{\Delta_U}(s)\right\vert_{\alpha=\pi/2};s=1/2\right)=L/2\pi,\\
&&\left.\zeta^{(0)}_{\Delta_U}(0)\right\vert_{\alpha=\pi/2}=-1.
\end{eqnarray}
Hence the only non-vanishing heat kernel coefficients are given by
\begin{eqnarray}\label{hcorners1}
&& \left. a^{(0)}_0\right\vert_{\alpha=\pi/2}=\frac L {2\sqrt{\pi}}.
\end{eqnarray}
\paragraph{The Von Neumann-Krein extension} The corresponding results for the VNK extension follow from
\begin{equation}
f_{VNK}(k)=\frac{h_{VNK}(k)}{2 i k^5 e^{i \alpha_{VNK}}}=\frac{\sin(\beta_{VNK}) }{k^{4}}(k L \sin (k L)+2 \cos (k L)-2) . \label{fvnk}
\end{equation}
Note, we divided by $k^5$ instead of $k^3$ as in eq.~(\ref{fusiz}); this is necessary because the VNK extension has two zero modes.
The large-$k$ expansion relevant for the heat kernel coefficients reads
\begin{equation}
f_{VNK} (ik) = \frac{ 2 \left( 1-\frac{kL} 2\right)} {k^4 \sqrt{L^2+4}} e^{kL} \left( 1 + e.s.t.\right),\label{fvnkinfty}
\end{equation}
and the coefficients follow along the lines explained above:
\begin{eqnarray}
&&a^{(VNK)}_0=\frac L {2\sqrt{\pi}},\quad a^{(VNK)}_{n+1}=\frac{ n! (4/L)^{2n+1}}{(2n+1)!2\sqrt{\pi}}, \,\,\,n=0,1,2,3,...,\\
&&a^{(VNK)}_{1/2}=\frac 1 2,\quad a^{(VNK)}_{n+1/2}=\frac 1 2 \frac{(2/L)^{2n}}{n!},\,\,\,n=1,2,3,...
\end{eqnarray}
Let us stress that in order to obtain $a_{1/2}^{(VNK)}$ we have added $+2$ to $\zeta_{VNK}^{(0)} (0)$, as required by the presence of two zero modes.
\subsection{Heat kernel coefficients for common boundary conditions.}
As a check of our calculations let us compare the results found for the heat kernel coefficients with the known ones for the most common boundary conditions.
\begin{itemize}
\item {\bf Periodic boundary conditions}. The periodic boundary conditions are usually written as\footnote{When it is required that the solutions of the Laplace equation are smooth functions the periodic boundary conditions are given by the condition $\psi(0)=\psi(L)$. However square integrable solutions of the Laplace equation are not necessarily smooth. Therefore the condition $\psi(0)=\psi(L)$ does not necessarily give rise to periodic boundary conditions. As an example it is worth mentioning the case of Dirac delta potentials (see references \cite{Bordag:2011aa,Guilarte:2010xn,Munoz-Castaneda:2013yga} for recent developments in the interpretation of Dirac delta potentials as boundary conditions and infinitely thin kinks), where the condition $\psi(0)=\psi(L)$ is satisfied but obviously the system does not satisfy periodic boundary conditions. Therefore, in order to distinguish periodic boundary conditions from other types of point interactions, it is necessary to include the second condition on the derivatives: $\psi'(0)=\psi'(L)$.}
\begin{equation}
\psi(0)=\psi(L);\quad \psi'(0)=\psi'(L).
\end{equation}
Equivalently we can write the following two independent equations for periodic boundary conditions
\begin{eqnarray*}
\psi(0)+i\psi'(0)&=&\psi(L)+i\psi'(L),\\
\psi(L)-i\psi'(L)&=&\psi(0)-i\psi'(0).
\end{eqnarray*}
Hence following the notation of eq. (\ref{phipm}) we can write the periodic boundary conditions in the form of (\ref{bcphipm}) as
\begin{equation}
\varphi_-(\psi)=\sigma_1\cdot\varphi_+(\psi),
\end{equation}
where $\sigma_1$ is the corresponding Pauli matrix. Therefore the unitary matrix that characterizes periodic boundary conditions is given by $U_p=\sigma_1\in\mathcal{M}_F^{(0)}
\Rightarrow\,\,\alpha=\pi/2,\,\beta=\pm\pi/2,\, n_1=\mp 1$.
The heat kernel coefficients are given by (\ref{hcorners1}).
\item {\bf Dirichlet boundary condition}. The usual form of the Dirichlet boundary condition for any manifold $M$ with boundary $\partial M$ is
\begin{equation}
\left.\psi\right|_{\partial M}=0.
\end{equation}
As can be seen the normal derivatives $\left.\partial_n\psi\right|_{\partial M}$ do not enter in the boundary condition. From eq. (\ref{dom}) the general boundary condition for those unitary operators $U\in\mathcal{M}$ such that $1\notin\sigma(U)$ can be written as
\begin{equation}
\left.\psi\right|_{\partial M}=i\frac{\mathbb{I}+U}{\mathbb{I}-U}\cdot \left.\partial_n\psi\right|_{\partial M}.\label{cayleybc1}
\end{equation}
From this last expression, it is immediate to notice that the Dirichlet boundary condition is obtained when $U=-\mathbb{I}$. Therefore the Dirichlet boundary condition is given by $U_D=-\mathbb{I}\in\mathcal{M}_F-\mathcal{M}_F^{(0)}
\Rightarrow\,\,\alpha=\pi,\,\beta=0$.
The heat kernel coefficients are given by (\ref{heatdir}).
\item {\bf Neumann boundary condition}. The usual form of the Neumann boundary condition for any manifold $M$ with boundary $\partial M$ is
\begin{equation}
\left.\partial_n\psi\right|_{\partial M}=0,
\end{equation}
where $\partial_n$ denotes the normal derivative to $\partial M$. As can be seen the boundary value $\left.\psi\right|_{\partial M}$ does not enter in the boundary condition. From eq. (\ref{dom}) the general boundary condition for those unitary operators $U\in\mathcal{M}$ such that $-1\notin\sigma(U)$ can be written as
\begin{equation}
\left.\partial_n\psi\right|_{\partial M}=-i\frac{\mathbb{I}-U}{\mathbb{I}+U}\cdot \left.\psi\right|_{\partial M}.\label{cayleybc2}
\end{equation}
From this last expression it is immediate to notice that the Neumann boundary condition is obtained when $U=\mathbb{I}$. Therefore the Neumann boundary condition is given by
$U_N=\mathbb{I}\in\mathcal{M}_F^{(0)}\Rightarrow\,\,\alpha=\beta=0$. It is of note that in this case $\tan(\alpha)=0$. Therefore from (\ref{heatneu1})-(\ref{heatneu3}),
\begin{eqnarray}
&& a_0^{(N)}=\frac{L}{2\sqrt{\pi}},\quad a_{1/2}^{(N)}=1/2,\\
&& a_{n+1/2}^{(N)}=0,\quad a_{n}^{(N)}=0,\,\,\, n=1,2,3,...
\end{eqnarray}
\item {\bf Robin boundary conditions}. The common expression for the family of Robin boundary conditions is given by (see for example reference \cite{Romeo:2001dd})
\begin{equation}
\left.\psi\right|_{\partial M}-g\left.\partial_n\psi\right|_{\partial M}=0,\quad g\in(-\infty,\infty).\label{robingen}
\end{equation}
For the case in which the boundary manifold $\partial M$ has several disjoint components $\partial M=\cup_i\Omega_i$ the family of Robin boundary conditions can be written as
\begin{equation}
\left.\psi\right|_{\Omega_i}-g_i\left.\partial_n\psi\right|_{\Omega_i}=0,\quad g_i\in(-\infty,\infty).\label{robingen2}
\end{equation}
The extreme values $g_i=0,\infty$ correspond to Dirichlet and Neumann boundary conditions respectively in the $i^{th}$ component of $\partial M$. Note that in the most general case the set of constants $g_i$ do not have to be the same for all the disjoint components $\Omega_i$ of $\partial M$. For $M=[0,L]$ the boundary is formed by two points and therefore it has two disjoint components. The simplest choice of Robin boundary conditions in this case is
\begin{equation}
-\psi'(0)=\tan\left(\frac{\alpha}{2}\right)\psi(0),\quad \psi'(L)=\tan\left(\frac{\alpha}{2}\right)\psi(L),\quad\alpha\in [0,\pi].\label{robin1d1}
\end{equation}
In a more compact notation we can write
\begin{equation}
\left.\tan \left( \frac{\alpha} 2 \right) \,\, \psi\right|_{\partial M}-\left.\partial_n\psi\right|_{\partial M}=0,\quad\alpha\in [0,\pi].\label{robin1d2}
\end{equation}
Taking into account eq. (\ref{cayleybc2}) and comparing it with expression (\ref{robin1d2}) the unitary operator $U_R$ for Robin boundary conditions satisfies the equation
\begin{equation}
\tan\left(\frac{\alpha}{2}\right)\mathbb{I}=-i\frac{\mathbb{I}-U_R}{\mathbb{I}+U_R}.
\end{equation}
Therefore the unitary operator that characterizes the family of Robin boundary conditions given by (\ref{robin1d1}) is $U_R=e^{i\alpha}\mathbb{I}$, as was first pointed out in references \cite{Asorey:2006pr,Asorey:2008xt}. Note that $U_R(\alpha=0)=\mathbb{I}=U_N$ and $U_R(\alpha=\pi)=-\mathbb{I}=U_D$. In the parametrization (\ref{uparam}) Robin boundary conditions correspond to $\beta=0$. For $\alpha\in(0,\pi)$, $U_R(\alpha)\in\mathcal{M}_F-\mathcal{M}_F^{(0)}$ with $\cos(\alpha)+\cos(\beta)\neq 0$. Therefore the heat kernel coefficients for Robin boundary conditions are determined by eqs.
(\ref{hkcint-nozm}) and (\ref{hkchalfint-nozm}). From eq. (\ref{jos1}) it is easy to obtain the coefficients $b_m$ for the Robin boundary conditions:
\begin{equation}
b_m^{(R)}=\tan^m\left(\frac{\alpha}{2}\right)\sum_{j=0}^{[m/2]}(-1)^{m-j+1}\frac{2^{m-2j}\Gamma(m-j)}{\Gamma(j+1)\Gamma(m-2j+1)},\,\,\,m=1,2,3,...
\end{equation}
Using now eqs. (\ref{hkcint-nozm}) and (\ref{hkchalfint-nozm}) it is immediate to compute the heat kernel coefficients for Robin boundary conditions to any desired order using any symbolic calculation software. As an example we show the first ten heat kernel coefficients:
\begin{eqnarray}
&&a_0^{(R)}=L/2\sqrt{\pi},\quad a_{1/2}^{(R)}=1/2,\\
&&a_1^{(R)}=-\frac{2 \tan \left(\frac{\alpha}{2}\right)}{\sqrt{\pi }},\quad a_{2}^{(R)}=-\frac{4 \tan ^3\left(\frac{\alpha}{2}\right)}{3\sqrt{\pi }},\\
&&a_3^{(R)}=-\frac{8 \tan ^5\left(\frac{\alpha}{2}\right)}{15\sqrt{\pi }},\quad a_{4}^{(R)}=-\frac{16 \tan ^7\left(\frac{\alpha }{2}\right)}{105\sqrt{\pi }},\\
&&a_{3/2}^{(R)}=\tan ^2\left(\frac{\alpha }{2}\right),\quad a_{5/2}^{(R)}=\frac{1}{2} \tan ^4\left(\frac{\alpha}{2}\right),\\
&&a_{7/2}^{(R)}=\frac{1}{6} \tan ^6\left(\frac{\alpha}{2}\right),\quad a_{9/2}^{(R)}=\frac{1}{24} \tan ^8\left(\frac{\alpha}{2}\right).
\end{eqnarray}
\end{itemize}
These results coincide with the results obtained by S. Dowker in Ref. \cite{dowk-cqg95} (equations (10) and (14) with $h_1=h_2=\tan(\alpha/2)$). More recently S. Fulling has studied the heat kernel coefficients for Robin boundary conditions in the Ref. \cite{full-2005}.
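As a cross-check of the coefficients listed above, a short sympy sketch implements $b_m^{(R)}$ together with eqs. (\ref{hkcint-nozm}) and (\ref{hkchalfint-nozm}) and reproduces, for instance, $a_1^{(R)}$, $a_2^{(R)}$ and $a_{3/2}^{(R)}$:

```python
import sympy as sp

alpha = sp.symbols('alpha', positive=True)
t = sp.tan(alpha/2)

def b(m):
    # expansion coefficients b_m^{(R)} for Robin boundary conditions
    return t**m * sum((-1)**(m - j + 1) * sp.Integer(2)**(m - 2*j)
                      * sp.gamma(m - j)
                      / (sp.gamma(j + 1)*sp.gamma(m - 2*j + 1))
                      for j in range(m//2 + 1))

def a_int(n):
    # integer-order coefficient a_{n+1}^{(R)}, n = 0, 1, 2, ...
    return sp.simplify(-4**n * sp.factorial(n) * b(2*n + 1)
                       / (sp.factorial(2*n)*sp.sqrt(sp.pi)))

def a_half(n):
    # half-integer coefficient a_{n+1/2}^{(R)}, n = 1, 2, 3, ...
    return sp.simplify(-b(2*n)/sp.factorial(n - 1))

assert sp.simplify(a_int(0) + 2*t/sp.sqrt(sp.pi)) == 0
assert sp.simplify(a_int(1) + 4*t**3/(3*sp.sqrt(sp.pi))) == 0
assert sp.simplify(a_half(1) - t**2) == 0
```

Higher coefficients are obtained by simply increasing $n$ in `a_int` and `a_half`.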
\section{The functional determinant of $\Delta_U$. Derivative at $s=0$ of the spectral zeta function}
In this section we compute the derivative of the zeta function at $s=0$ for each of the different cases considered in Sections 3.2 and 3.3.
As is well known, this derivative is a natural constituent when defining functional determinants of elliptic operators \cite{ray71-7-145}. As usual
we subtract and add back a suitable number of the asymptotic $k\to\infty$ terms of $\partial_k \log f_{\hat {\cal O}} (ik)$ in (\ref{zetagen}). In the current context
we have to subtract terms up to the order $1/k$ to make the integral well defined at $k=\infty$ once $s=0$ is set.
As a technical tool, at the start of this analysis it is convenient to consider a massive scalar field of mass $m$,
where $m$ will be sent to zero
at a suitable point of the computation. In this way we can avoid splitting the integral representing the zeta function into two pieces and the
computation becomes a little easier. The procedure is valid as in the limit $m\to 0$ the zeta function for the case with vanishing mass
is recovered. A presentation of (\ref{anyselfzet}) valid about $s=0$ is then given by
\begin{eqnarray}
\zeta_{\Delta_U} (s) &=& \frac{\sin \pi s} \pi \int\limits_m^\infty dk (k^2-m^2)^{-s} \partial_k \log \left[ \frac{2 f_U (ik) }{ke^{kL} (\cos \alpha + \cos \beta) } \right] \nonumber\\
& &+ \frac{\sin \pi s} \pi \int\limits_m^\infty dk (k^2 -m^2) ^{-s} \partial_k \log \left[ k e^{kL} \frac{ \cos \alpha + \cos \beta } 2 \right] . \nonumber\end{eqnarray}
The integral in the first line by construction is analytic about $s=0$ and its derivative at $s=0$ is trivially computed. The needed integrals in the second line are known
\cite{grad65b},
\begin{eqnarray}
\int\limits_m^\infty dk (k^2-m^2)^{-s} &=& \frac{ m^{1-2s} \Gamma (1-s) \Gamma \left( s - \frac 1 2 \right)} {2 \sqrt \pi} , \nonumber\\
\int\limits_m^\infty dk (k^2-m^2)^{-s} \,\,\frac 1 k &=& \frac{ m^{-2s} \pi} {2\sin \pi s} , \nonumber\end{eqnarray}
and
$$ \zeta_{\Delta_U} ' (0) = - \log \left| \frac{2 f_U (im)}{me^{mL} (\cos \alpha + \cos \beta)} \right| - Lm - \log m $$
is found. As $m\to 0$ we use
$$ \lim_{m\to 0} f_U (im) = L (\cos \alpha - \cos \beta ) - 2 (\sin \alpha + n_1 \sin \beta ) $$
to obtain
\begin{eqnarray}
\zeta_{\Delta_U} ' (0) = - \log \left|\frac{ 2L (\cos \alpha - \cos \beta ) - 4 (\sin \alpha + n_1 \sin \beta )}{\cos \alpha + \cos \beta } \right| . \label{det1}
\end{eqnarray}
The case treated in Section 3.2.1, for $\alpha \neq \pi$, follows along the same lines from
\begin{eqnarray}
\zeta_{\Delta_U} ^{(B)} (s) &=& \frac { \sin \pi s} \pi \int\limits_m^\infty dk (k^2-m^2)^{-s} \partial _k \log \left[ \frac{ f_U^{(B)} (ik) } { e^{kL} \sin \alpha } \right] \nonumber\\
& &+ \frac{\sin \pi s} \pi \int\limits_m^\infty dk (k^2-m^2)^{-s} \partial_k \log \left[ e^{kL} \sin \alpha \right].\nonumber\end{eqnarray}
In the limit as $m\to 0$ we obtain
$$ {\zeta_{\Delta_U} ^{(B)}}' (0) = - \log \left|\frac{ L (\cos \alpha - \cos \beta ) - 2 (\sin \alpha + n_1 \sin \beta )}{\sin \alpha } \right|.\label{detdet1}$$
For $\alpha = \pi$, $\beta =0$, instead we start with
\begin{eqnarray}
\zeta_{\Delta_U} ^{(B)} (s) \vert_{\alpha = \pi} &=& \frac{ \sin \pi s} \pi \int\limits_m^\infty dk (k^2-m^2)^{-s} \partial _k \log \left[ \frac{ f_U (ik) \vert_{\alpha = \pi} \,\, k}{e^{kL}} \right] \nonumber\\
& &+ \frac{ \sin \pi s} \pi \int\limits_m^\infty dk (k^2-m^2)^{-s} \partial _k \log \left[ \frac{e^{kL}} k \right]\nonumber\end{eqnarray}
to find
\begin{eqnarray}
{\zeta_{\Delta_U} ^{(B)}}' (0) \vert_{\alpha =\pi} = - \log (2L) .\label{detdir}\end{eqnarray}
We are left to treat the cases with a zero mode dealt with in Section 3.3. There, for $\alpha \neq \pi/2$, the starting point is
\begin{eqnarray}
\zeta_{\Delta_U} ^{(0)} (s) &=& \frac{\sin \pi s} \pi \int\limits_m^\infty dk (k^2-m^2)^{-s} \partial _k \log \left[ \frac{ f_U^{(0)} (ik) \,\, k} {e^{kL} \cos \alpha } \right] \nonumber\\
& &+ \frac{\sin \pi s } \pi \int\limits_m^\infty dk (k^2-m^2)^{-s} \partial _k \log \left[ \frac{ e^{kL} \cos \alpha } k\right] \nonumber\end{eqnarray}
leading to
$$ {\zeta_{\Delta_U} ^{(0)}}' (0) = - \log \left| \frac{L (2 \cos \alpha + L \sin \alpha )} {\cos \alpha } \right|.\label{detneu}$$
For $\alpha = \pi /2$ instead
\begin{eqnarray}
\zeta_{\Delta_U} ^{(0)} (s) \vert_{\alpha = \pi /2} &=& \frac{\sin \pi s} \pi \int\limits_m^\infty dk (k^2-m^2)^{-s} \partial _k \log \left[\frac{ f_U^{(0)} (ik)\vert_{\alpha = \pi/2} k^2} {e^{kL} } \right] \nonumber\\
& & + \frac{ \sin \pi s} \pi \int\limits_m^\infty dk (k^2-m^2)^{-s} \partial _k \log \left[ \frac{e^{kL}} {k^2} \right] ,\nonumber\end{eqnarray}
leading to
\begin{eqnarray}
{\zeta_{\Delta_U} ^{(0)} } ' (0) = - 2 \log L. \label{detper}
\end{eqnarray}
The expression obtained for ${\zeta_{\Delta_U} ^{(0)}}' (0)$ does not allow one to compute the derivative of the spectral zeta function at $s=0$ for the VNK extension by simply replacing $\{\alpha,\beta\}\mapsto\{\alpha_{VNK},\beta_{VNK}\}$, as this produces an undefined answer. The reason is that there are two zero modes and the formulas
have to be adapted; see eqs.~(\ref{fvnk}) and (\ref{fvnkinfty}). From the large-$k$ expansion (\ref{fvnkinfty}) and from
$ f_{VNK}(i m)\simeq-L^4/(6\sqrt{L^2+4})+O\left(m\right)$ the computation explained above leads to the following expression for ${\zeta_{VNK} ^{(0)}}' (0) $:
\begin{equation}
{\zeta_{VNK} ^{(0)}}' (0) =-\log\left\vert \frac{\left.f_{VNK}(im)\right\vert_{m\rightarrow 0}}{-L/\sqrt{L^2+4}} \right\vert=-\log\left( \frac{L^3}{6}\right).
\end{equation}
These results can be checked against the easily computed answers for periodic, Dirichlet and Neumann boundary conditions.
For Dirichlet boundary conditions the spectrum is $\lambda_n = (\pi n/L)^2$, $n\in\mathbb{N}$, with associated zeta function
$\zeta_{Dir} (s) = (\pi /L)^{-2s} \zeta_R (2s)$. This gives $\zeta_{Dir} ' (0) = - \log (2L)$ in agreement with (\ref{detdir}).
For Neumann boundary conditions the spectrum is as above but with zero included. For the determinant the answer therefore again reads $\zeta_{Neu} ' (0) = - \log (2L)$, which agrees with (\ref{detneu}), once
$\alpha =\beta =0$ has been put.
Finally, for periodic boundary conditions the spectrum is $\lambda_n = (2\pi n/L)^2$, $n\in\mathbb{Z}$, with associated zeta function (zero mode excluded)
$\zeta_{per} (s) = 2 (2\pi/L)^{-2s} \zeta _R (2s)$. This shows $\zeta_{per} ' (0) = - 2 \log L$, again in agreement with (\ref{detper}).
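These two special cases can also be verified numerically; a small mpmath sketch (with an arbitrarily chosen test value of $L$) differentiates the Dirichlet and periodic zeta functions at $s=0$:

```python
from mpmath import mp, mpf, pi, log, zeta, diff

mp.dps = 30
L = mpf('2.5')   # arbitrary test value of the interval length

# zeta_Dir(s) = (pi/L)^(-2s) zeta_R(2s),  zeta_per(s) = 2 (2 pi/L)^(-2s) zeta_R(2s)
zeta_dir = lambda s: (pi/L)**(-2*s) * zeta(2*s)
zeta_per = lambda s: 2*(2*pi/L)**(-2*s) * zeta(2*s)

d_dir = diff(zeta_dir, 0)
d_per = diff(zeta_per, 0)
assert abs(d_dir + log(2*L)) < mpf('1e-15')   # zeta_Dir'(0) = -log(2L)
assert abs(d_per + 2*log(L)) < mpf('1e-15')   # zeta_per'(0) = -2 log(L)
```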
As a specific new result, Robin boundary conditions as described above follow from (\ref{det1}) as
$$\zeta _{\Delta_{U_R}} ' (0) = - \log \left( 2 \tan \left( \frac \alpha 2 \right) \left( L \tan \left( \frac \alpha 2 \right) +2 \right) \right).$$
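This Robin expression is a direct specialization of (\ref{det1}) to $\beta=0$; the required half-angle reduction can be checked with sympy at a few sample points (the numerical values of $\alpha$ and $L$ used below are arbitrary):

```python
import sympy as sp

a, L = sp.symbols('alpha L', positive=True)
# argument of the logarithm in (det1) with beta = 0, i.e. sin(beta) = 0
general = (2*L*(sp.cos(a) - 1) - 4*sp.sin(a)) / (sp.cos(a) + 1)
# minus the argument appearing in the Robin determinant formula
robin = -2*sp.tan(a/2)*(L*sp.tan(a/2) + 2)

# agreement at arbitrary sample points alpha in (0, pi), L = 23/10
for val in (sp.Rational(3, 10), 1, sp.Rational(5, 2)):
    diff_val = (general - robin).subs({a: val, L: sp.Rational(23, 10)})
    assert abs(sp.N(diff_val)) < 1e-12
```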
\section{Conclusions}
In this article we have analyzed the spectral zeta function resulting from the Laplacian on the interval $[0,L]$ for the case when strongly consistent
selfadjoint extensions and the Von Neumann-Krein extension are applied. Contour integral representations for the zeta functions are obtained for this class of selfadjoint extensions. These are used
to compute leading heat kernel coefficients and the functional determinant in this context. Our results agree with known results for standard boundary conditions
like Dirichlet, Neumann and periodic. The generalisation of these results to a scalar quantum field theory in
$D+1$ spacetime confined between two $D-1$ dimensional plane parallel plates is straightforward for the heat kernel coefficients
due to the factorization properties of the heat kernel in the same way as it is done in ref. \cite{asor13-874-852}.
The current article represents the start of further investigations into the details of heat kernel coefficients. Heat kernel coefficients are usually represented
in terms of geometric invariants with universal multipliers depending on the boundary condition. The question arises how the multipliers depend on the chosen selfadjoint
extension. In order to get some nontrivial boundary geometry involved a similar computation should be done for balls along the lines of
\cite{bord96-37-895,bord96-179-215,bord96-182-371}, where choosing general selfadjoint extensions will lead to different combinations of Bessel functions. Furthermore,
following \cite{jeff12-45-345201}, surfaces of revolution are possible candidates to analyze how different selfadjoint extensions impact spectral functions.
Finally, the presented analysis could also be done for selfadjoint extensions that allow for finitely many negative eigenvalues by using a variation of the current procedure \cite{kirs04-37-4649}.
(Note that the results presented in this article actually remain valid beyond strongly consistent selfadjoint extensions as long as the
eigenvalues are positive.)
We believe that this kind of selfadjoint extension could provide a natural mechanism for inflation in cosmological models with compact extra dimensions with boundary (Kaluza-Klein cosmology or RS-type scenarios), where the dark energy is interpreted as the quantum vacuum of a fundamental scalar. In this scenario the inflationary phase is produced by the existence of negative energy modes, whose existence is strongly dependent on the size of the compact extra dimension with boundary.
\section*{Acknowledgement}
We acknowledge the support from DFG in project BO1112-18/1. We also thank M. Asorey for valuable discussions, and S. Dowker and the referee of Lett. Math. Phys. for fruitful comments.
\section{Introduction}
Reaction-diffusion systems are ubiquitous, and widely appear in nature,
from physics (fluid instability, reacting flow) to chemistry (autocatalytic reaction, flame propagation) to biology (population dynamics, excitable media).
These systems can generate, under certain conditions, different kinds of instabilities (\cite{Manneville2004}, \cite{Perthame2015}),
sometimes leading to the formation of structures known as Turing patterns (\cite{Turing1952}),
first observed experimentally in the so-called Chloride-Iodide-Malonic Acid (CIMA) reaction (\cite{Kepper1990}).
Linear stability analysis (LSA) (\cite{Kuznetsov1998}) is an efficient tool for predicting the critical values of parameters which can trigger instabilities in the system.
Reaction-diffusion systems augmented with self- and/or cross-diffusion terms have been an active research topic since the seminal work of \cite{SKT1979},
where a generalization of a predator-prey system was introduced, called the SKT system.
Stability analysis, as well as simulations, have revealed the existence of Turing instabilities and pattern formation in various systems containing self- and/or cross-diffusion
(e.g. in \cite{Yin2013}, \cite{Li2019}, \cite{Moussa2019}, \cite{Gambino2012}, \cite{Breden2021}, \cite{Zhang2021}).
Recently, Hopf instabilities were also reported in the framework of the SKT system (\cite{Soresina2022}).
A classical example of a reaction-diffusion system is the Gray-Scott (GS) system \cite{GS1983},
modeling an autocatalytic process between two chemical species in a stirred tank.
This system is a generalization of the differential system of Sel'kov, modeling the glycolysis process \cite{Selkov1968},
and is known to be able to generate a wide variety of patterns,
observed both in simulations (\cite{Pearson1993}), and in actual experiments on the Ferrocyanide-Iodate-Sulfite reaction (\cite{Lee1994}).
As a generic example, we study a modified version of the Gray-Scott system,
augmented with self- and cross-diffusion terms, introduced in \cite{Aymard2022}.
Original transient patterns, not observed with linear diffusion only, have been reported in this reference.
One notable finding was that, unlike in classical reaction-diffusion systems,
different patterns may be observed by varying only the nonlinear diffusion coefficients, for the same reaction term.
This article also introduced an energy law for this type of system,
and preliminary estimations of energy evolution showed a tendency for energy dissipation, towards a possible steady state.
In this article, we continue this work, studying the long-term behavior of the system,
by evaluating the equilibrium solutions, and their local stabilities.
Evaluation of the steady states is a challenging problem, as one needs to solve a nonlinear system of partial differential equations,
possibly within a complex geometry, which may dramatically modify the observed patterns.
Stability analysis is another challenging problem:
linear stability analysis, relying on a linearization around a steady state, may not be sufficient, as it is known that there exist cases where instabilities
appear before the threshold predicted by LSA.
This article is organized as follows.
In the first section, steady states are studied.
Starting with numerical simulations of the energy evolution, revealing the presence of stable equilibria,
a direct method to compute the steady states is introduced and validated against the long-term solutions of the temporal model.
The second section is devoted to the stability analysis of the steady states.
First, we use linear stability analysis, leading to a marginal stability surface and corresponding critical wavelengths.
Then, we study an example with a numerical approach, emphasizing the limitations of LSA on this problem.
In the last section we present a numerical exploration of the patterns, using numerical continuation in the parameter space,
and exploring various geometries.
\section{Steady states}
In this section we study the long-term behavior of reaction-diffusion systems augmented with self- and cross-diffusion, that reads:
\begin{align}
\frac{\partial \phi_1}{\partial t} &= \Delta \mu_1 + R_1,\notag\\
\frac{\partial \phi_2}{\partial t} &= \Delta \mu_2 + R_2,\label{dGS}
\end{align}
with $R_1, R_2$ the reaction terms and $\mu_1, \mu_2$ chemical potentials, defined by:
\begin{align}
\mu_1 &= (d_1 + d_{11}\phi_1^{2} + d_{12}\phi_2^{2})\phi_1,\notag\\
\mu_2 &= (d_2 + d_{22}\phi_2^{2} + d_{21}\phi_1^{2})\phi_2.\label{mGS}
\end{align}
The problem is closed by adding initial conditions:
\[
(\phi_1(t=0),\phi_2(t=0))= (\phi_1^0,\phi_2^0),
\]
and null-flux boundary conditions:
\[
\frac{\partial \phi_1}{\partial n} = \frac{\partial \phi_2}{\partial n} = 0.
\]
The coefficients $d_1, d_{11}, d_2, d_{22}, d_{12}, d_{21}$ are real non-negative parameters.
In the context of reaction-diffusion systems, $\phi_1$ is called an inhibitor, and $\phi_2$ an activator.
It has been shown that, under certain conditions, this model presents a Turing instability, when a slow diffusion of the activator is coupled to a rapid diffusion of the inhibitor.
As a generic example, we will consider throughout the article the reaction terms corresponding to the Gray-Scott model, defined by:
\begin{align}
R_1 &= - \phi_1\phi_2^2 + F(1-\phi_1),\notag\\
R_2 &= \phi_1\phi_2^2 - (F+k)\phi_2,\notag
\end{align}
with $F, k$ real non-negative parameters.
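For these reaction terms the spatially homogeneous steady states can be determined by hand: besides the trivial state $(\phi_1,\phi_2)=(1,0)$, any nontrivial state must satisfy $\phi_1\phi_2=F+k$, leading to the quadratic $F\phi_1^2-F\phi_1+(F+k)^2=0$. A quick numerical check (for the parameter values used in the simulations below) shows that this quadratic has no real root:

```python
F, k = 0.037, 0.06   # parameter values used in the simulations below

# Nontrivial homogeneous steady states would require a real root of
#   F*phi1**2 - F*phi1 + (F + k)**2 = 0.
disc = F**2 - 4*F*(F + k)**2
assert disc < 0   # no real root: (1, 0) is the only homogeneous steady state
```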
\subsection{Energy dissipation}
Systems of the form (\ref{dGS}),(\ref{mGS}) respect the following energy law \cite{Aymard2022}:
\begin{equation}
\frac{d}{dt}E(\phi_1,\phi_2) = - \| \nabla \mu_1 \|^2 - \| \nabla \mu_2 \|^2 + (R_1,\mu_1) + (R_2,\mu_2),
\label{Elaw}
\end{equation}
with $E$ defined by:
\begin{equation}
E(\phi_1,\phi_2) =
\int_{\Omega}
\left(d_1 \frac{\phi_1^2}{2} + d_{11}\frac{\phi_1^4}{4}
+ d_2 \frac{\phi_2^2}{2} + d_{22}\frac{\phi_2^4}{4}
+ d_{12}\frac{\phi_1^2 \phi_2^2}{2}
\right) d\omega .
\label{energy}
\end{equation}
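The structure of (\ref{Elaw}) reflects the fact that the chemical potentials (\ref{mGS}) are the variational derivatives of the energy density in (\ref{energy}); note that matching the $d_{21}$ term requires $d_{21}=d_{12}$, an assumption made in the following symbolic sketch:

```python
import sympy as sp

p1, p2 = sp.symbols('phi1 phi2')
d1, d2, d11, d22, d12 = sp.symbols('d1 d2 d11 d22 d12')

# integrand of the energy E(phi1, phi2)
e = (d1*p1**2/2 + d11*p1**4/4 + d2*p2**2/2 + d22*p2**4/4
     + d12*p1**2*p2**2/2)

mu1 = (d1 + d11*p1**2 + d12*p2**2)*p1
mu2 = (d2 + d22*p2**2 + d12*p1**2)*p2   # with d21 = d12 assumed

assert sp.simplify(sp.diff(e, p1) - mu1) == 0
assert sp.simplify(sp.diff(e, p2) - mu2) == 0
```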
In order to study its long-term behavior, model (\ref{dGS}),(\ref{mGS}) was solved numerically,
monitoring energy (\ref{energy}), looking for potential convergence towards a steady state.
The numerical method provided in the cited reference was used, and implemented on FreeFem++ software \cite{Hecht2012}.
Let us recall that the scheme is second-order accurate in time, and second-order accuracy in space is reached by using the P2 finite element method.
In this case, an adaptive mesh refinement strategy was used, with mesh size varying from $h_{\min} = 0.001$ to $h_{\max} = 0.07$,
according to the variations of both $\phi_1$ and $\phi_2$, and the time step was set to $\Delta t = 0.1$.
Model parameters were set to: $F = 0.037$, $k = 0.06$, within a square domain $\Omega = [-0.2,0.2]^2$, starting from the initial condition:
\begin{align}
\phi_1^0 &= (C + 0.05U)\mathbf{1}_S + \mathbf{1}_{\Omega \setminus S}, \label{CI}\\
\phi_2^0 &= (0.25 + 0.05V)\mathbf{1}_S. \notag
\end{align}
with $C = 0.3$, $S = [-0.05,0.05]^2$, and $U,V$ random numbers uniformly distributed on $[0,1]$.
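As an illustration of the setup, the initial condition (\ref{CI}) and the corresponding initial energy (\ref{energy}) can be assembled on a uniform grid (a sketch: the grid size and the Riemann-sum quadrature are choices made here, not taken from the reference):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
x = np.linspace(-0.2, 0.2, n)
X, Y = np.meshgrid(x, x)
S = (np.abs(X) <= 0.05) & (np.abs(Y) <= 0.05)   # the seeding square

C = 0.3
phi1 = np.where(S, C + 0.05*rng.random((n, n)), 1.0)
phi2 = np.where(S, 0.25 + 0.05*rng.random((n, n)), 0.0)

# energy density of (energy), with the coefficients of case (d) below
d1, d2, d11, d22, d12 = 2e-5, 1e-5, 2e-6, 0.0, 1e-6
dens = (d1*phi1**2/2 + d11*phi1**4/4 + d2*phi2**2/2
        + d22*phi2**4/4 + d12*phi1**2*phi2**2/2)
h = x[1] - x[0]
E0 = float(dens.sum()) * h*h   # Riemann-sum approximation of E at t = 0
```

This value of $E_0$ is what the energy curves of Figure \ref{Ecurve} are rescaled by.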
In Figure \ref{Ecurve}, we plot the energy (\ref{energy}) evolution in four key cases,
representing different combinations of linear and nonlinear diffusion:
a homogeneous case ($d_1 = 1e^{-5}, d_2 = 1e^{-5}, d_{11} = 0, d_{22} = 0, d_{12}=0$),
a linearly unstable case ($d_1 = 2e^{-5}, d_2 = 1e^{-5}, d_{11} = 0, d_{22} = 0, d_{12} = 0$),
a self-diffusion unstable case ($d_1 = 2e^{-5}, d_2 = 1e^{-5}, d_{11} = 2e^{-6}, d_{22} = 0, d_{12} = 1e^{-6}$),
and a case combining linear, self- and cross-diffusion ($d_1 = 2e^{-5}, d_2 = 1e^{-5},d_{11} = 2e^{-6}, d_{22} = 0, d_{12}=1e^{-6}$).
We observe that, for each case, a steady state is reached after a sufficiently long period of time.
Interestingly, energy decreases for each case leading to a pattern formation (dissipative structure),
but increases in the case where a homogeneous state is reached.
\begin{remark}
Energy (\ref{energy}) spontaneously decreases in two obvious cases:
first, when there is no reaction term ($R_i = 0$), and second, if $\forall i, R_i = - \mu_i$.
In general, energy does not necessarily decrease in the Gray-Scott case, notably due to the source term $F$,
which increases the $R_1$ term on the right-hand side of (\ref{Elaw}).
\end{remark}
\begin{figure}[!htbp]\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/Estable.png}
\caption{Homogeneous case
\\$d_1 = 1e^{-5}, d_2 = 1e^{-5},
\\d_{11} = 0, d_{22} = 0, d_{12}=0$.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/ELinearInstability.png}
\caption{Linear diffusion instability
\\$d_1 = 2e^{-5}, d_2 = 1e^{-5},
\\d_{11} = 0, d_{22} = 0, d_{12} = 0$.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/ESelfInstability.png}
\caption{Self-diffusion instability
\\$d_1 = 2e^{-5}, d_2 = 1e^{-5},
\\d_{11} = 2e^{-6}, d_{22} = 0, d_{12} = 1e^{-6}$.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/ElinearSelfCross.png}
\caption{Linear and self- and cross-diffusion instability
\\$d_1 = 2e^{-5}, d_2 = 1e^{-5},
\\d_{11} = 2e^{-6}, d_{22} = 0, d_{12}=1e^{-6}$.}
\end{subfigure}
\caption{Evolution of the energy (\ref{energy}) with respect to time for four key cases
(the ordinate axis is scaled by the initial energy, so that all curves start at $E=1$).
We observe in each case a convergence towards an equilibrium state.}
\label{Ecurve}
\end{figure}
\subsection{Finite Element Method}
In this subsection, we derive a finite element method, based on a Newton algorithm,
to directly compute the steady states of the problem.
We refer the reader to \cite{Ciarlet1989} for details about optimization and finite element approximation.
The steady states of system (\ref{dGS}),(\ref{mGS}), if they exist, verify:
\begin{equation}
R(W) = 0,
\label{NLsteady}
\end{equation}
with:
\[
R(W) =
\begin{pmatrix}
\Delta \mu_1 + R_1(\phi_1,\phi_2) \\
\Delta \mu_2 + R_2(\phi_1,\phi_2) \\
d_1 \phi_1 + d_{11} \phi_1^3 + d_{12}\phi_2^2 \phi_1 -\mu_1 \\
d_2 \phi_2 + d_{22} \phi_2^3 + d_{12}\phi_1^2 \phi_2 -\mu_2 \\
\end{pmatrix}
\]
and $W = (\phi_1,\phi_2,\mu_1,\mu_2)$.
Our strategy to numerically compute the steady states is to apply a Newton method:
\[
W^{(n+1)} = W^{(n)} - \delta W,
\]
with $\delta W$ solution of:
\begin{equation}
J_{R(W)} \delta W = R(W).
\label{deltaW}
\end{equation}
Iterations continue until the convergence criterion:
\[
\frac{\| \delta \phi_1\|_{L^\infty}}{\|\phi_1\|_{L^{\infty}}} < \epsilon
\]
is satisfied. This method requires the knowledge of the Jacobian matrix:
\[
J_{R(W)}\delta W
=
\begin{pmatrix}
\Delta \delta\mu_1 + \frac{\partial R_1}{\partial \phi_1}\delta\phi_1 + \frac{\partial R_1}{\partial \phi_2}\delta\phi_2\\
\Delta \delta\mu_2 + \frac{\partial R_2}{\partial \phi_1}\delta\phi_1 + \frac{\partial R_2}{\partial \phi_2}\delta\phi_2 \\
d_1 \delta \phi_1 + d_{11} 3\phi_1^2\delta \phi_1 + d_{12}(\phi_2^2 \delta \phi_1 + 2 \phi_1 \phi_2 \delta \phi_2) -\delta \mu_1 \\
d_2 \delta \phi_2 + d_{22} 3\phi_2^2\delta \phi_2 + d_{12}(\phi_1^2 \delta \phi_2 + 2 \phi_1 \phi_2 \delta \phi_1) -\delta \mu_2\\
\end{pmatrix}
\]
Perturbation $\delta W$ is found by solving the following linear system, based on the mixed variational formulation of (\ref{deltaW}):
\begin{align}
&\forall \psi_1,\psi_2, \nu_1, \nu_2 \in H^1(\Omega),\notag\\
\int_{\Omega}& - \left(\nabla \delta\mu_1 , \nabla \psi_1 \right)
+ \left(\frac{\partial R_1}{\partial \phi_1}\delta\phi_1 + \frac{\partial R_1}{\partial \phi_2}\delta\phi_2\right)\psi_1
- \left(\nabla \delta\mu_2 , \nabla \psi_2\right)
+ \left(\frac{\partial R_2}{\partial \phi_1}\delta\phi_1 + \frac{\partial R_2}{\partial \phi_2}\delta\phi_2\right)\psi_2 \notag\\
& +\left(d_1 \delta \phi_1 + d_{11} 3\phi_1^2\delta \phi_1 + d_{12}(\phi_2^2 \delta \phi_1 + 2 \phi_1 \phi_2 \delta \phi_2) -\delta \mu_1\right)\nu_1 \notag\\
& +\left(d_2 \delta \phi_2 + d_{22} 3\phi_2^2\delta \phi_2 + d_{12}(\phi_1^2 \delta \phi_2 + 2 \phi_1 \phi_2 \delta \phi_1) -\delta \mu_2\right)\nu_2 d\omega \notag\\
& = \notag\\
\int_{\Omega}&
- \left(\nabla \mu_1 , \nabla \psi_1 \right)
+ R_1(\phi_1,\phi_2) \psi_1
- \left(\nabla \mu_2 , \nabla \psi_2 \right)
+ R_2(\phi_1,\phi_2)\psi_2 \notag\\
&+(d_1 \phi_1 + d_{11} \phi_1^3 + d_{12} \phi_2^2 \phi_1 - \mu_1)\nu_1
+(d_2 \phi_2 + d_{22} \phi_2^3 + d_{12} \phi_1^2 \phi_2 - \mu_2)\nu_2 d\omega
\label{FEM}
\end{align}
Space discretization is achieved by defining a regular grid with $N$ elements per direction, and using finite elements on this grid.
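While the mixed formulation above requires a finite element library, the Newton loop and the relative $L^\infty$ stopping criterion can be illustrated on the diffusionless system $R_1 = R_2 = 0$. The sketch below (Python, written for this text) assumes the standard Gray-Scott reaction terms $R_1 = F(1-\phi_1) - \phi_1\phi_2^2$ and $R_2 = \phi_1\phi_2^2 - (F+k)\phi_2$, consistent with the matrix $\mathbb{A}$ given in the stability analysis section:

```python
import numpy as np

def reaction(w, F, k):
    """Gray-Scott reaction terms R1, R2 (assumed standard form)."""
    p1, p2 = w
    return np.array([F * (1.0 - p1) - p1 * p2**2,
                     p1 * p2**2 - (F + k) * p2])

def jacobian(w, F, k):
    """Analytic Jacobian of (R1, R2) with respect to (phi1, phi2)."""
    p1, p2 = w
    return np.array([[-F - p2**2, -2.0 * p1 * p2],
                     [p2**2, 2.0 * p1 * p2 - (F + k)]])

def newton(w0, F, k, eps=1e-11, max_iter=50):
    """W <- W - dW with J dW = R(W), stopped when |dW|_inf < eps |W|_inf."""
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        dw = np.linalg.solve(jacobian(w, F, k), reaction(w, F, k))
        w = w - dw
        if np.max(np.abs(dw)) < eps * np.max(np.abs(w)):
            return w
    raise RuntimeError("Newton iteration did not converge")
```

Seeded near a nontrivial branch, e.g. `newton([0.1, 0.7], F=0.05, k=0.01)`, the iteration converges quadratically; as stressed above, convergence depends on the choice of initial guess.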
In Figure \ref{steadyStates}, we compare the steady states obtained with two different methods.
On the left column, we have plotted the density solutions of (\ref{NLsteady}).
The initial guess of the Newton method was set to the initial condition of the temporal model (\ref{CI}),
and parameters were set to $N=420$ and $\epsilon = 1e^{-11}$.
On the right column, we have plotted the density solutions obtained from simulating the temporal model (\ref{dGS}),(\ref{mGS}) until convergence, for the same sets of model parameters.
The same four cases described in the first paragraph were considered.
We observe a very good agreement between the two methods, despite the different approaches, and especially the different meshing techniques.
Direct steady state computation presents the advantage of being incomparably faster;
however, as for any Newton method approach, convergence depends on a correct choice for the initial guess.
\begin{remark}
For the sake of clarity, only the case with two species has been presented.
Generalization to cases with $M \geq 2$ is straightforward, considering $W = (\phi_1,...,\phi_M,\mu_1,...,\mu_M)$.
\end{remark}
\begin{figure}[!htbp]\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=0.45\linewidth]{Images/linearInstability.png}
\includegraphics[width=0.45\linewidth]{Images/TlinearInstability.pdf}
\caption{Case of a linear diffusion instability:
\\$d_1 = 2e^{-5}, d_2 = 1e^{-5}$,
$d_{11} = 0, d_{22} = 0, d_{12}=0$.}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.45\linewidth]{Images/selfInstability.png}
\includegraphics[width=0.45\linewidth]{Images/TSelfInstability.pdf}
\caption{Case of a self-diffusion instability:
\\$d_1 = 2e^{-5}, d_2 = 1e^{-5}$,
$d_{11} = 2e^{-6},d_{22} = 0, d_{12} = 1e^{-6}$.}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.45\linewidth]{Images/linearSelfCross.png}
\includegraphics[width=0.45\linewidth]{Images/TlinearSelfCross.png}
\caption{Linear and self- and cross-diffusion instability
\\$d_1 = 2e^{-5}, d_2 = 1e^{-5}$,
$d_{11} = 2e^{-6}, d_{22} = 0, d_{12}=1e^{-6}$.}
\end{subfigure}
\caption{Comparison of steady states obtained with two different methods:\\
directly solving the steady state problem (nonlinear elliptic system (\ref{NLsteady})), on the left column,
and solving the temporal model (nonlinear parabolic system (\ref{dGS}),(\ref{mGS})) until equilibrium, on the right column.
A good agreement is observed.}
\label{steadyStates}
\end{figure}
\section{Stability analysis}
In this section, we study the local stability of the steady states of systems (\ref{dGS}),(\ref{mGS}).
First using classical linear stability analysis (LSA), and then using a numerical exploration, we emphasize the limitations of LSA in this case.
For more details about the techniques used here, we refer the reader to \cite{Manneville2004} and \cite{Kuznetsov1998}.
\subsection{Linear Stability Analysis}
The first step is to find the homogeneous steady state of the diffusionless system, solution(s) of:
\[
\begin{cases}
R_1 = 0, \\
R_2 = 0.
\end{cases}
\]
Then, we study the growth of a perturbation of the linearized system around a steady state using Fourier analysis.
Let us consider a perturbation $(\tilde{\phi_1},\tilde{\phi_2})$ around a steady state $(\bar{\phi_1},\bar{\phi_2})$ of the diffusionless system, and let us denote:
\begin{align*}
\phi_1(r,t) = \bar{\phi_1} + \tilde{\phi_1}(r,t),\\
\phi_2(r,t) = \bar{\phi_2} + \tilde{\phi_2}(r,t).
\end{align*}
We first linearize the reaction term as follows:
\[
\begin{pmatrix}
R_1(\phi_1,\phi_2) \\
R_2(\phi_1,\phi_2)
\end{pmatrix}
=
\begin{pmatrix}
R_1(\bar{\phi_1},\bar{\phi_2}) \\
R_2(\bar{\phi_1},\bar{\phi_2})
\end{pmatrix}
+
\mathbb{A}
\begin{pmatrix}
\tilde{\phi_1} \\
\tilde{\phi_2}
\end{pmatrix}
+
\mbox{(higher order terms)},
\]
with
\[
\mathbb{A}
=
\begin{pmatrix}
a_{11} & a_{12}\\
a_{21} & a_{22}\\
\end{pmatrix}
=
\begin{pmatrix}
\frac{\partial R_1}{\partial \phi_1} & \frac{\partial R_1}{\partial \phi_2}\\
\frac{\partial R_2}{\partial \phi_1} & \frac{\partial R_2}{\partial \phi_2}\\
\end{pmatrix}.
\]
Let us consider a perturbation of a self-diffusion term:
\[
\Delta [(p + \tilde{p})^3]
=
\Delta p^3 + 3p^2\Delta \tilde{p} + \mbox{ (higher order terms)},
\]
and of a cross-diffusion term:
\begin{align*}
\Delta [(u+\tilde{u})^2(v+\tilde{v})]
&=
\Delta (u^2v) + 2uv\Delta(\tilde{u}) + u^2\Delta(\tilde{v}) + \mbox{(higher order terms)}.
\end{align*}
Using these identities, we may write the dynamics of the perturbation around an equilibrium point:
\begin{equation}
\frac{\partial}{\partial t}
\begin{pmatrix}
\tilde{\phi}_1\\
\tilde{\phi}_2
\end{pmatrix}
=
\mathbb{A}
\begin{pmatrix}
\tilde{\phi}_1\\
\tilde{\phi}_2
\end{pmatrix}
+
\mathbb{D}
\begin{pmatrix}
\Delta (\tilde{\phi}_1)\\
\Delta (\tilde{\phi}_2)
\end{pmatrix}
.
\label{perturbation}
\end{equation}
with
\[
\mathbb{D}
=
\begin{pmatrix}
\delta_{11} & \delta_{12}\\
\delta_{21} & \delta_{22}\\
\end{pmatrix}
=
\begin{pmatrix}
d_1 + 3d_{11}\bar{\phi_1}^2 + d_{12}\bar{\phi_2}^2 & 2d_{12}\bar{\phi_1}\bar{\phi_2} \\
2d_{12}\bar{\phi_1}\bar{\phi_2} & d_2 + 3d_{22}\bar{\phi_2}^2 + d_{12}\bar{\phi_1}^2 \\
\end{pmatrix}.
\]
Let us remark that, if the equilibrium solution $(\bar{\phi_1},\bar{\phi_2})$ is positive, then the coefficients of $\mathbb{D}$ are positive.
The Fourier transform of the perturbation equation (\ref{perturbation}) reads:
\[
\frac{\partial}{\partial t}
\begin{pmatrix}
\widehat{\tilde{\phi}}_1\\
\widehat{\tilde{\phi}}_2
\end{pmatrix}
=
\mathbb{A}_{\xi}
\begin{pmatrix}
\widehat{\tilde{\phi}}_1\\
\widehat{\tilde{\phi}}_2
\end{pmatrix},
\]
with:
\[
\mathbb{A}_{\xi}
= \mathbb{A} - \|\xi\|^2\mathbb{D}
=
\begin{pmatrix}
a_{11} - \delta_{11}\|\xi\|^2 & a_{12} - \delta_{12}\|\xi\|^2\\
a_{21} - \delta_{21}\|\xi\|^2 & a_{22} - \delta_{22}\|\xi\|^2\\
\end{pmatrix}
.
\]
The perturbation analysis is reduced to the spectral analysis of matrix $\mathbb{A}_{\xi}$.
For any mode $\xi$, the trace is given by:
\[
\mbox{Tr}(\mathbb{A}_{\xi})
=
\mbox{Tr}(\mathbb{A}) - \|\xi\|^2\mbox{Tr}(\mathbb{D}),
\]
and the determinant by:
\[
\mbox{Det}(\mathbb{A}_{\xi})
=
\mbox{Det}(\mathbb{D}) \|\xi\|^4
+ \|\xi\|^2 (a_{21}\delta_{12} + \delta_{21}a_{12} - a_{11}\delta_{22} - \delta_{11}a_{22})
+ \mbox{Det}(\mathbb{A}).
\]
The eigenvalues of $\mathbb{A}_{\xi}$, roots of polynomial:
\[
\sigma^2 - \sigma \mbox{Tr}(\mathbb{A}_{\xi}) + \mbox{Det}(\mathbb{A}_{\xi}),
\]
are given by:
\[
\sigma_{\pm} = \frac{1}{2}\left(\mbox{Tr}(\mathbb{A_{\xi}}) \pm \sqrt{\Delta_{\xi}} \right),
\]
with:
\[
\Delta_{\xi} = \left(\mbox{Tr}(\mathbb{A_{\xi}})\right)^2 - 4 \mbox{Det}(\mathbb{A}_{\xi}).
\]
A mode $\xi$ is stable if and only if $\mbox{Tr}(\mathbb{A_{\xi}}) < 0$ and $\mbox{Det}(\mathbb{A_{\xi}}) > 0$.
Indeed, the second condition ensures that the real parts of the two eigenvalues have the same sign, and the first that this common sign is negative.
Using the definition of $\mathbb{A}$ and $\mathbb{D}$, and the assumptions on the model, we know that $\mbox{Tr}(\mathbb{A}_{\xi}) <0$.
Therefore, the stability analysis is reduced to the verification of the sign of $\mbox{Det}(\mathbb{A}_{\xi})$.
Let us rewrite:
\[
\mbox{Det}(\mathbb{A}_{\xi})
=
\mbox{Det}(\mathbb{D}) \|\xi\|^4
- C \|\xi\|^2
+ \mbox{Det}(\mathbb{A})
=
\mbox{Det}(\mathbb{D})(\|\xi\|^2 - \alpha)^2 + \beta,
\]
with:
\begin{align*}
C &= -a_{21}\delta_{12} - \delta_{21}a_{12} + a_{11}\delta_{22} + \delta_{11}a_{22},\\
\alpha &= \frac{C}{2 \mbox{Det}(\mathbb{D})},\\
\beta &= \mbox{Det}(\mathbb{A}) - \mbox{Det}(\mathbb{D})\alpha^2.
\end{align*}
The function $\|\xi\|^2 \mapsto \mbox{Det}(\mathbb{A_{\xi}})$ has a minimum at $\|\xi\|^2 = \alpha$, and we know that for $\|\xi\|^2=0$,
$\mbox{Det}(\mathbb{A_{\xi}}) = \mbox{Det}(\mathbb{A}) >0$.
Let us remark that if $\alpha < 0$, this minimum is not attained for any real mode; since $\mbox{Det}(\mathbb{A})>0$ and $\mbox{Det}(\mathbb{D})>0$, every mode is then stable.
If $\alpha >0$, the mode $\xi$ is unstable if the determinant is negative, that is if:
\[
\beta < 0
\iff
\mbox{Det}(\mathbb{A}) < \mbox{Det}(\mathbb{D})\alpha^2
\iff
2 \sqrt{\mbox{Det}(\mathbb{A}) \mbox{Det}(\mathbb{D})} < C.
\]
This last inequality is the dispersion condition.
At the threshold of instability, we have:
\begin{equation}
C - 2\sqrt{\mbox{Det}(\mathbb{A}) \mbox{Det}(\mathbb{D})} = 0
\label{dispersion}
\end{equation}
and a single mode $\xi$ becomes unstable, solution of:
\[
\beta = 0 \iff \alpha = \sqrt{\frac{\mbox{Det}(\mathbb{A})}{\mbox{Det}(\mathbb{D})}}.
\]
Therefore, the critical wave number is:
\[
\|\xi\| = \sqrt{\alpha} = \left( \frac{\mbox{Det}(\mathbb{A})}{\mbox{Det}(\mathbb{D})} \right)^{\frac{1}{4}},
\]
and the wavelength is given by:
\begin{equation}
\lambda = \frac{2\pi}{\xi}.
\label{wl}
\end{equation}
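These quantities are straightforward to evaluate numerically. The sketch below (Python; an illustration written for this text, with the $2\times 2$ matrices $\mathbb{A}$ and $\mathbb{D}$ supplied as plain arrays, and assuming $\mbox{Det}(\mathbb{A})>0$ and $\mbox{Tr}(\mathbb{A}_\xi)<0$ as in the text) computes $C$, $\alpha$ and $\beta$, tests the dispersion condition (\ref{dispersion}), and evaluates the critical wavelength (\ref{wl}):

```python
import numpy as np

def dispersion_data(A, D):
    """Det(A_xi) = Det(D)|xi|^4 - C|xi|^2 + Det(A)
                 = Det(D)(|xi|^2 - alpha)^2 + beta  (notation of the text)."""
    a, d = np.asarray(A, float), np.asarray(D, float)
    detA, detD = np.linalg.det(a), np.linalg.det(d)
    C = (a[0, 0] * d[1, 1] + d[0, 0] * a[1, 1]
         - a[1, 0] * d[0, 1] - d[1, 0] * a[0, 1])
    alpha = C / (2.0 * detD)
    beta = detA - detD * alpha**2
    return detA, detD, C, alpha, beta

def turing_unstable(A, D):
    """Dispersion condition: alpha > 0 and beta < 0, i.e. (with Det(A) > 0)
    2 sqrt(Det(A) Det(D)) < C."""
    detA, detD, C, alpha, beta = dispersion_data(A, D)
    return alpha > 0.0 and beta < 0.0

def critical_wavelength(A, D):
    """At threshold: lambda = 2 pi / (Det(A)/Det(D))^(1/4)."""
    detA, detD, _, _, _ = dispersion_data(A, D)
    return 2.0 * np.pi / (detA / detD) ** 0.25
```

For instance, at the state $(\bar{\phi}_1^-,\bar{\phi}_2^+)$ with $F=0.05$, $k=0.01$, purely linear diffusion with $d_1 = 2e^{-5}$, $d_2 = 1e^{-5}$ gives $C<0$ (no instability), while a much larger $d_1$ crosses the threshold.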
\subsection{Beyond Linear Stability Analysis}
In this subsection, we apply the results of the former subsection to the Gray-Scott system.
The reaction matrix reads:
\[
\mathbb{A}
=
\begin{pmatrix}
-F - \bar{\phi}_2^2 & -2\bar{\phi}_1\bar{\phi}_2 \\
\bar{\phi}_2^2 & 2\bar{\phi}_1\bar{\phi}_2 - (F+k)\\
\end{pmatrix}
\]
In particular, let us remark that $\mbox{Tr}(\mathbb{A}) < 0$. Steady states are given by:
\[
\begin{cases}
\phi_1\phi_2^2 = F(1-\phi_1), \\
\phi_1\phi_2^2 = (F+k)\phi_2.
\end{cases}
\]
For any value of $F$ and $k$, a trivial solution exists:
\[
\begin{cases}
\bar{\phi}^0_1 = 1, \\
\bar{\phi}^0_2 = 0.
\end{cases}
\]
This steady state is always stable. Assuming that $(\phi_1,\phi_2) \not= (1,0)$, one may find
\begin{align*}
\bar{\phi}_1^{\pm} &= \frac{1}{2} \left(1 \pm \sqrt{1 - 4\frac{(F+k)^2}{F}} \right),\\
\bar{\phi}_2^{\mp} &= \frac{F}{F+k}\left(1 - \bar{\phi}_1^{\pm}\right) = \frac{F}{F+k}\,\bar{\phi}_1^{\mp}.
\end{align*}
However, these two equilibrium points $(\bar{\phi}_1^+,\bar{\phi}_2^-)$ and $(\bar{\phi}_1^-,\bar{\phi}_2^+)$ only exist if the square root
$\sqrt{1 - 4\frac{(F+k)^2}{F}}$ is defined, that is when:
\[
k \leq -F + \frac{\sqrt{F}}{2} \mbox{ and } 0 \leq F \leq \frac{1}{4}.
\]
In other words, the diffusionless system undergoes a saddle-node bifurcation.
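These branch formulas and the existence condition can be checked numerically; a small Python sketch (written for this text):

```python
import math

def gs_branches(F, k):
    """Nontrivial steady states of the diffusionless Gray-Scott system.
    Returns None when 1 - 4 (F+k)^2 / F < 0, i.e. beyond the saddle-node
    locus k = -F + sqrt(F)/2."""
    disc = 1.0 - 4.0 * (F + k) ** 2 / F
    if disc < 0.0:
        return None
    s = math.sqrt(disc)
    p1_plus, p1_minus = 0.5 * (1.0 + s), 0.5 * (1.0 - s)
    p2 = lambda p1: F * (1.0 - p1) / (F + k)   # from the steady-state equations
    return (p1_plus, p2(p1_plus)), (p1_minus, p2(p1_minus))
```

For $F=0.037$, $k=0.06$ (the values used in the simulations below), the discriminant is negative and only the trivial state $(1,0)$ remains.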
Figure \ref{figLSA} shows the marginal stability surface, defined as the zero set of equation (\ref{dispersion}),
around the unstable state $(\phi_1^-,\phi_2^+)$ with $k = 0.01$, $F = 0.05$, together with the critical wavelengths (formula (\ref{wl})).
Instability occurs at a threshold value, which depends linearly on the different diffusion parameters ($d_1$, $d_{11}$, $d_{12}$).
We observe in particular that the presence of cross-diffusion significantly increases the critical wavelength, the other parameters being fixed.
We also observe, looking at the marginal stability surface around the origin, that the cross-diffusion term alone cannot trigger an instability,
whereas both linear diffusion and self-diffusion can.
\begin{figure}[ht]\centering
\begin{subfigure}{0.6\textwidth}
\includegraphics[width=\linewidth]{Images/LSA.png}
\caption{Marginal stability surface, defined as the zero set of equation (\ref{dispersion}),
\\for $d_2 = 1e^{-5}$ and $d_{22}=0$.}
\end{subfigure}\\
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/WLcross0.png}
\caption{Isolines of critical wavelength (formula (\ref{wl})):
\\without cross diffusion
\\($d_2 = 1e^{-5}$, $d_{22}=d_{12} = 0$.)}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/WLcross.png}
\caption{Isolines of critical wavelength (formula (\ref{wl})):
\\with cross-diffusion
\\($d_2 = 1e^{-5}$, $d_{22} = 0$, $d_{12} = 4e^{-4}$).}
\end{subfigure}
\caption{Linear Stability Analysis: a marginal stability surface,
indicating the stability threshold, and the critical wavelength for the instability.
We observe that cross-diffusion increases the critical wavelength significantly.}
\label{figLSA}
\end{figure}
In Figure \ref{figPT}, we study the case where $F=0.037$ and $k=0.06$, starting with initial condition (\ref{CI}).
In this case, only one steady state exists for the diffusionless system (the trivial state $(1,0)$),
and Linear Stability Analysis predicts that this point is always stable.
However, numerical simulations show that bifurcations occur when the diffusion parameters are slightly perturbed around those of the stable homogeneous case (panel (a)).
Different combinations triggering an instability, leading to different dissipative structures, have been identified.
As expected, applying a sufficiently high linear diffusion leads to a classical Turing pattern (panel (b)).
More strikingly, the application of self-diffusion leads to the formation of dramatically different dissipative structures,
depending on whether the self-diffusion is applied on the inhibitor $\phi_1$ (panel (c)), or the activator $\phi_2$ (panel (d)).
On panel (e), a linear diffusion is applied, sufficient to trigger an instability, supplemented with cross-diffusion.
In this case, a diagonally oriented structure is observed, as opposed to the regular structure observed without cross-diffusion.
As seen in Figure \ref{figLSA}, cross-diffusion increases the critical wavelength:
the transition between vertical and diagonal orientation might be explained by the increase of the critical wavelength,
forcing the modes to grow diagonally.
Let us remark that systems of the form (\ref{dGS}),(\ref{mGS}) are invariant under the point reflection $R(x,y) = (-x,-y)$.
We conclude that the same density, after such a reflection, is also a solution of the system, indicating the presence of at least three solutions in this case
(one homogeneous, and two diagonally oriented). This could be the sign of a pitchfork bifurcation.
In subfigure (f), a combination of linear, self- and cross-diffusion is applied.
As in the case of linear and cross-diffusion, a diagonal structure is observed; the same remarks therefore apply.
\begin{figure}[ht]\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/stability.png}
\caption{Reference: stable case
\\$d_1 = 1.4e^{-5}, d_2 = 1e^{-5}$,
\\$d_{11} = d_{22} = d_{12}=0$.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/LinearInstability.png}\\
\caption{Linear diffusion instability:
\\$d_1 = 2e^{-5}, d_2 = 1e^{-5}$,
\\$d_{11} = d_{22} = d_{12}=0$.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/SelfInstability.png}\\
\caption{Inhibitor self-diffusion instability:
\\$d_1 = 2e^{-5}, d_2 = 1e^{-5}$,
\\$d_{11} = 2e^{-6},d_{22} = 0, d_{12} = 0$.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/d22.png}\\
\caption{Activator self-diffusion $d22$ instability:
\\$d_1 = 1.4e^{-5}, d_2 = 1e^{-5}$,
\\$d_{11} = 0, d_{22} = 2e^{-5}, d_{12}=0$.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/LinearSelfInstability.png}\\
\caption{Linear and self-diffusion instability:
\\$d_1 = 2e^{-5}, d_2 = 1e^{-5}$,
\\$d_{11} = 5e^{-6},d_{22} = d_{12}=0$.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/LinearCrossInstability.png}\\
\caption{Linear and cross-diffusion instability:
\\$d_1 = 2e^{-5}, d_2 = 1e^{-5}$,
\\$d_{11} = d_{22} = 0,d_{12} = 1e^{-6}$.}
\end{subfigure}
\caption{Beyond Linear Stability Analysis:
four parameters can trigger a bifurcation towards different stable steady states,
when linear stability analysis only predicts a single, stable steady state.}
\label{figPT}
\end{figure}
\section{Pattern formation}
This section is a numerical exploration of pattern formation observed as steady states of system (\ref{dGS}),(\ref{mGS}).
Two phenomena have been observed, influencing the equilibrium dissipative structures:
a dependency on initial conditions, and a dependency on the geometry of the domain.
\subsection{Density effects}
In Figure \ref{figNC}, steady states of the system (\ref{dGS}),(\ref{mGS}) with a Gray-Scott reaction are computed,
starting from different values of $C$ in the initial condition (\ref{CI}),
with $F=0.037, k=0.06, d_1 = 2e^{-5}, d_2 = 0, d_{11} = 0, d_{12}=1e^{-6}$.
Three values of $C$ are tested: $0.3$, $0.42$ and $0.5$.
A numerical continuation \cite{Kuznetsov1998} is performed on the parameter $d_{22}$, starting from $9e^{-5}$ and decreasing with a step of $0.1$:
after each step, the steady state of the previous step is used as the initial condition for the following value of $d_{22}$.
We observe that the final structure is dependent on the initial guess of the Newton method,
leading to clearly distinct dissipative patterns, differing notably by their topologies.
The reader may also note that the reaction term is the same for all tests
(in classical Gray-Scott systems, one needs to vary the reaction term to obtain different patterns).
For a fixed value of $C$, decreasing the value of $d_{22}$ also modifies the final structure.
More generally, in the majority of the tests performed while preparing this article, decreasing the value of $d_{22}$,
and therefore increasing the ratio between cross- and self-diffusion,
led to the formation of diagonally aligned dissipative patterns beyond a threshold value.
As remarked in the previous section, due to the reflection symmetry of the system, the reflected solution is also a solution.
This phenomenon could be the signature of an underlying low-dimensional bifurcation, such as a pitchfork bifurcation, possibly an imperfect one.
It would be interesting to carry out further theoretical work on this problem.
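The continuation loop itself has a simple warm-start structure, sketched below in Python with a scalar toy solver standing in for the PDE Newton solve (the toy problem $w^2 = p$ is purely illustrative):

```python
def continuation(solve, w0, params):
    """Naive numerical continuation: the converged state at one parameter
    value seeds the solve at the next one (warm start)."""
    states, w = [], w0
    for p in params:
        w = solve(w, p)
        states.append(w)
    return states

def toy_solve(w, p):
    """Scalar Newton solve of w**2 = p, seeded at the previous state."""
    for _ in range(60):
        w = w - (w * w - p) / (2.0 * w)
    return w
```

For instance, `continuation(toy_solve, 1.0, [4.0, 9.0, 16.0])` tracks the branch $w = \sqrt{p}$, each solve being seeded with the previous converged state.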
\begin{figure}[ht]\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=0.45\linewidth]{Images/03CN1.png}
\includegraphics[width=0.45\linewidth]{Images/03CN2.png}
\caption{Initial condition: $C = 0.3$. Parameter $d_{22} = 9e^{-5}$ (left) and $d_{22} = 7.2e^{-5}$ (right)}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.45\linewidth]{Images/042CN1.png}
\includegraphics[width=0.45\linewidth]{Images/042CN2.png}
\caption{Initial condition: $C = 0.42$. Parameter $d_{22} = 9e^{-5}$ (left) and $d_{22} = 7.65e^{-5}$ (right)}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.45\linewidth]{Images/05CN1.png}
\includegraphics[width=0.45\linewidth]{Images/05CN2.png}
\caption{Initial condition: $C = 0.5$. Parameter $d_{22} = 9e^{-5}$ (left) and $d_{22} = 8.1e^{-5}$ (right)}
\end{subfigure}
\caption{Numerical continuation through parameter space.
Parameters are set to ($d_1 = 2e^{-5}, d_2 = 0, d_{11} = 0, d_{12}=1e^{-6}$).
The initial condition parameter $C$ and the self-diffusion $d_{22}$ are varied.
We observe a dependency of the pattern with respect to the initial value of the density in the domain,
inducing a change of topology in the patterns.}
\label{figNC}
\end{figure}
\subsection{Geometrical effects}
In this paragraph, we study the effect of geometry on steady state solutions.
Figure \ref{figFFT} analyzes the pattern obtained in subfigure (c) of Figure \ref{figPT}.
We consider a horizontal cut at half height of the image, giving a 1D signal, represented in blue in subfigure (a),
together with its low-pass filtered version, in orange, emphasizing the harmonics present in the signal.
The Fourier transform is represented in subfigure (b);
on the abscissa, the wavelength $\lambda$ is represented (with an expected cut-off at $0.4$, the length of the domain).
The spectrum presents its highest peak at $\lambda = 0.2$.
This value is the wavelength of the main harmonic present in the equilibrium steady state, seen as a standing wave
(in the spirit of Turing's original article).
We also observe that this value corresponds, in order of magnitude, to the theoretical unstable wavelength obtained in subfigure (b) of Figure \ref{figLSA},
even though Figure \ref{figLSA} was produced for a different (but close) set of parameters.
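The spectral measurement can be reproduced on a synthetic stand-in for the cut: the Python sketch below builds an assumed standing wave of wavelength $0.2$ on a domain of length $0.4$ and recovers that wavelength as the location of the main FFT peak:

```python
import numpy as np

L, n = 0.4, 512                       # domain length and sample count (assumed)
x = np.linspace(0.0, L, n, endpoint=False)
signal = 1.0 + 0.3 * np.cos(2.0 * np.pi * x / 0.2)   # main harmonic at lambda = 0.2

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(n, d=L / n)   # spatial frequencies, cycles per unit length
peak = 1 + np.argmax(spectrum[1:])    # skip the (removed) DC component
dominant_wavelength = 1.0 / freqs[peak]
```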
\begin{figure}[ht]\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/Signal1D.png}
\caption{Signal (horizontal cut at half-height), in blue, and low-pass filtered signal, in orange.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{Images/FFT1D.png}\\
\caption{Fourier transform of the signal.}
\end{subfigure}
\caption{Fourier analysis of the steady state patterns of Figure \ref{figPT}, case (c)
($d_1 = 2e^{-5}, d_2 = 1e^{-5}, d_{11} = 2e^{-6},d_{22} = 0, d_{12} = 0$).
We observe a peak around $0.2$, comparable in order of magnitude to the value obtained by linear stability analysis.}
\label{figFFT}
\end{figure}
Figure \ref{figGeom} displays an analysis of the dependency of steady state patterns with respect to the geometry of the domain in 2D.
The same set of parameters is used for all tests, that is
$d_1=2e^{-5}, d_{11}=2e^{-6}, d_{12}=1e^{-6}, d_2=1e^{-5}, d_{22}=0$, $\phi_0 = 0.3$;
only the geometry is varied. For the geometries, we consider two rectangles ($Lx = 0.4, Ly = 0.1$, and $Lx = 0.41, Ly = 0.15$),
a circle of radius $R = 0.18$,
an astroid: $(x(t),y(t)) = (R(3 \cos(t) + \cos(3t)), R(3 \sin(t) - \sin(3t)))$ with $R = 0.075$,
and finally an epicycloid: $(x(t),y(t)) = (5R\cos(t)-R\cos(5t),5R\sin(t)-R\sin(5t))$ with $R = 0.03$.
In the cases of the circle, the astroid and the epicycloid, an initial mesh adaptation is done in order to ensure an almost constant space discretization,
with the value used for the square as a reference: $\Delta x = \frac{L}{N} = 0.00095$.
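For reference, the boundary parametrizations above can be sampled directly (a Python sketch); this also recovers the outer radius $4R = 0.3$ of the astroid and the cusp radius $4R = 0.12$ of the epicycloid:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)

R = 0.075                                   # astroid parameter
xa = R * (3.0 * np.cos(t) + np.cos(3.0 * t))
ya = R * (3.0 * np.sin(t) - np.sin(3.0 * t))
astroid_outer = np.max(np.hypot(xa, ya))    # = 4R = 0.3

R = 0.03                                    # epicycloid parameter
xe = 5.0 * R * np.cos(t) - R * np.cos(5.0 * t)
ye = 5.0 * R * np.sin(t) - R * np.sin(5.0 * t)
cusp_radius = np.min(np.hypot(xe, ye))      # inward cusps at radius 4R = 0.12
```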
Let us consider the square geometry as the reference case (panel (a)):
note the numerous symmetries, the presence of spots, and an oscillating stripe circling the pattern.
Moving to the circular case (panel (b)), all the spot patterns disappear, leading to a single, regular ring.
Comparing with the 1D spectrum of Figure \ref{figFFT},
this means that only the main peak survives in the steady state, the others being cancelled by the geometry.
Decreasing the height of the domain, one gets a transition phase (panel (c)), with a compressed version of the square pattern, until a threshold value
below which a fully striped pattern is observed (panel (d)), favoring the only direction in which the instability can develop.
In the astroid case (panel (e)), two regimes may be identified:
sufficient space allows the formation of a ring pattern, as in panel (b), as if there were only a circular border,
while sharp edges trigger the formation of stripes. This effect of curvature is similar, in a sense, to a nucleation process,
favored around sharp edges.
Inverting the curvature around the boundary singularities leads to the epicycloid case (panel (f)):
the central circular pattern is still present, and the inward cusps trigger a spot pattern around them.
This numerical exploration emphasizes the link between the critical wavelength and the geometry of the domain.
Further theoretical work could be carried out to investigate it,
as a close link between critical values of the dynamical system
and the eigenvalues and eigenfunctions of the Laplace operator on $\Omega$ is suspected.
\begin{figure}[ht]\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{Images/geomSquare.png}
\caption{Square: $L_x=L_y=0.4$.}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{Images/geomHole.png}\\
\caption{Circle: radius $R=0.2$.}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{Images/rectangle2.png}\\
\caption{Rectangle: $L_x = 0.41,L_y=0.15$.}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{Images/rectangle1.png}\\
\caption{Rectangle: $L_x = 0.4,L_y=0.1$.}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{Images/geomAstroid.png}
\caption{Astroid: radius $R_{\max} = 0.3$.}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{Images/geomEpicycloid.png}\\
\caption{Epicycloid: $R_{\max} = 0.12$.}
\end{subfigure}
\caption{Geometrical effects on steady state solutions, for a fixed set of parameters: $d_1 = 2e^{-5}, d_2 = 1e^{-5}, d_{11} = 2e^{-6}, d_{22} = 0, d_{12}=1e^{-6}$.
We observe a dependency of the patterns on the geometry of the domain.}
\label{figGeom}
\end{figure}
\section{Conclusion}
In this article, we have studied the long-term behavior of systems of the form (\ref{dGS}),(\ref{mGS}).
First, a numerical investigation has been carried out, monitoring the energy evolution (\ref{Elaw}) in the temporal model (Figure \ref{Ecurve})
with a Gray-Scott reaction term. Energy dissipation and convergence to a steady state are almost always observed,
except in the case where the system converges to the homogeneous steady state.
An original method (\ref{FEM}) has been proposed to directly solve the steady state model, and validated against the temporal model (Figure \ref{steadyStates}).
This method, like Newton methods in general, is very efficient, provided that a good initial guess is supplied.
However, several cases of divergence have been observed (an increase in mesh refinement may help, but this approach is limited).
As a second step, the stability of the steady state solutions has been studied.
A classical linear stability analysis has been carried out, leading to the identification of the marginal stability surface (\ref{dispersion}) and of the critical wavelength (\ref{wl}) (Figure \ref{figLSA}).
In complement, numerical simulations have emphasized the limitations of this approach on a concrete case (Figure \ref{figPT}):
a case not predicted by LSA has shown rich dynamics, involving multi-parameter bifurcations, multistability and symmetry breaking (possibly hysteresis).
Finally, a numerical exploration of equilibrium pattern formation has been carried out.
On top of revealing original patterns (Figure \ref{figNC}), a link between the dissipative structures and the initial energy injected into the system has been observed.
A link between the domain geometry (Figures \ref{figFFT} and \ref{figGeom}) and the equilibrium wavelength of the steady states has also been reported.
As future work, several paths may be explored.
Improving the Newton method presented in the first section, and in particular its robustness with respect to the initial guess, could help reach a broader variety of steady state solutions.
A mesh adaptation strategy could be an alternative, complementary way to capture a broader range of solutions.
Further work could be carried out to explain the behavior numerically observed, but not predicted by standard linear stability analysis, in the second section.
The link between the steady state spectrum and the domain geometry may be investigated through the prism of spectral geometry, both theoretically and numerically.
\section{Introduction}
\label{Introduction}
In \cite{MR1013073}, Oguiso studied the Kummer surface $\mathcal{Y}=\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ obtained as the minimal resolution of the quotient of the product abelian surface $\mathcal{E}_1\times \mathcal{E}_2$ by the inversion automorphism, where the elliptic curves $\mathcal{E}_i$ for $i=1,2$ are not mutually isogenous. As is well known, such a Kummer surface $\mathcal{Y}$ is an algebraic $K3$ surface of Picard rank $18$ and can be equipped with Jacobian elliptic fibrations.
Oguiso classified them, and proved that on $\mathcal{Y}$ there are eleven distinct Jacobian elliptic fibrations, labeled $\mathcal{J}_1, \dots, \mathcal{J}_{11}$. Kuwata and Shioda furthered Oguiso's work in \cite{MR2409557} where they computed elliptic parameters and Weierstrass equations for all
eleven different fibrations, and analyzed the reducible fibers and Mordell-Weil lattices.
These Weierstrass equations
are in fact families of minimal Jacobian elliptic fibrations over a two-dimensional moduli space.
We denote by $\lambda_i \in \mathbb{P}^1 \backslash \lbrace 0, 1, \infty \rbrace$ for $i=1, 2$ the modular parameter for the
elliptic curve $\mathcal{E}_i$ defined by the Legendre form
\begin{equation}
y_i^2 = x_i \, (x_i-1) \, (x_i - \lambda_i) \;.
\end{equation}
The moduli space for the fibrations $\mathcal{J}_1, \dots, \mathcal{J}_{11}$ is then given by unordered pairs
\begin{equation}
\label{ModuliSpace}
(\tau_1, \tau_2) \in \mathcal{M} = \Big( \Gamma(2) \times \Gamma(2) \Big) \rtimes \mathbb{Z}_2 \backslash \mathbb{H} \times \mathbb{H} \;,
\end{equation}
such that $\lambda_i =\lambda(\tau_i)$ where $\lambda$ is the modular lambda function of level two
for the genus-zero, index-six congruence subgroup $\Gamma(2) \subset \operatorname{PSL}_2(\mathbb{Z})$, and
the generator of $\mathbb{Z}_2$ acts by exchanging the two parameters.
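The modular lambda function used here can be evaluated explicitly through the classical theta-quotient formula $\lambda(\tau)=\theta_2(0,q)^4/\theta_3(0,q)^4$ with $q=e^{i\pi\tau}$; the following sketch (not from the paper; the test point $\tau=i$, where classically $\lambda(i)=1/2$, is our choice) illustrates this numerically with mpmath.

```python
from mpmath import mp, jtheta, exp, pi, mpc

mp.dps = 30

def modular_lambda(tau):
    # classical theta-quotient formula: lambda(tau) = theta2(0,q)^4 / theta3(0,q)^4
    # with nome q = exp(i*pi*tau)
    q = exp(mpc(0, 1) * pi * tau)
    return (jtheta(2, 0, q) / jtheta(3, 0, q)) ** 4

lam = modular_lambda(mpc(0, 1))  # tau = i, where lambda(i) = 1/2 classically
print(lam)  # ~ 0.5
```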
Base changes and quadratic twists provide powerful methods to produce new elliptic surfaces from simpler ones.
Miranda and Persson provided in \cite{MR867347} a classification of all extremal rational elliptic surfaces.
Extremal rational Jacobian elliptic surfaces are among the simplest non-trivial elliptic surfaces.
The first goal of this article is to construct all eleven Jacobian elliptic fibrations on the Kummer surface
$\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ from a small number of extremal rational elliptic surfaces
by using only these two operations. As it will turn out, four extremal rational elliptic surfaces from the list in~\cite{MR867347} will suffice.
For each elliptic fibration $\mathcal{J}_i$ for $i=1,\dots,11$ there is a two-dimensional variety in algebraic correspondence with
$\mathcal{M}$ such that the elliptic fibration $\mathcal{J}_i$ is obtained from an extremal rational Jacobian elliptic fibration
by base change and quadratic twisting. In this way, the modular parameters $\lambda_1$ and $\lambda_2$ of the elliptic curves $\mathcal{E}_1$ and $\mathcal{E}_2$, respectively,
determine a rational base transformation and quadratic twist for an extremal rational elliptic surface (without moduli)
that yield the Jacobian elliptic fibration on $\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$. This will be proved in Section~\ref{SEC:EllFib}.
It is easy to show that the family $q: \mathcal{Y}_{\lambda_1,\lambda_2} \to \mathcal{M}$ is in fact
a projective family of smooth connected projective varieties over $\mathbb{C}$ and $q$ is a proper and smooth morphism.
Moreover, there is a unique holomorphic two-form $\omega$ (up to scaling) on each $K3$ surface $\mathcal{Y}_{\lambda_1,\lambda_2}$, and
differential equations can be used to express the variation in the cohomology $H^{2,0}(\mathcal{Y}_{\lambda_1,\lambda_2},\mathbb{C})$
as the moduli vary. One of the fundamental problems in Hodge theory is to determine the canonical flat connection, known as the Gauss-Manin connection.
The connection reduces to a system of differential equations satisfied by the periods of $\omega$ called the Picard-Fuchs system \cite[Sec. 4, 21]{MR0258824}.
Since the second homology of a $K3$ surface has rank 22 and the Picard rank of
$\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ is eighteen if the two elliptic curves are not mutually isogenous,
there must be four transcendental two-cycles which upon integration with the holomorphic two-form $\omega$ will give
four linearly independent periods. In turn, the Picard-Fuchs system for $\mathcal{Y}_{\lambda_1,\lambda_2}$ must be a system of linear partial differential equations in two variables which is holonomic of rank four.
In our situation, period integrals of the holomorphic two-form $\omega$ over transcendental two-cycles on $\mathcal{Y}_{\lambda_1,\lambda_2}$
can be evaluated using any of the eleven elliptic fibrations. Moreover, since we are able to relate every elliptic fibration to an extremal
rational elliptic surface by a rational base transformation and quadratic twist, period integrals reduce to simple iterated double integrals
representing so-called $\mathcal{A}$-hypergeometric functions. In Section~\ref{SEC:periods}, we will determine -- using the geometry of the
eleven fibrations -- several different descriptions for the Picard-Fuchs system. As the answer must be independent of the fibration used, we obtain identities relating
different GKZ systems, i.e., systems of linear partial differential equations satisfied by $\mathcal{A}$-hypergeometric functions.
All identities are then summarized in Theorem~\ref{thm:GM}; among them we recover
the linear transformation law for Appell's hypergeometric system, a famous quadratic identity due to Barnes and Bailey, and a new identity relating Appell's hypergeometric system
to a product of two Gauss' hypergeometric differential equations by a cubic transformation.
\tocless\section{Acknowledgments}\setcounter{section}{1}
The first author acknowledges support from the Undergraduate Research and Creative Opportunities Grant Program
by the Office of Research and Graduate Studies at Utah State University.
\bigskip
\section{Elliptic fibrations}
\label{SEC:EllFib}
A surface is called a Jacobian elliptic fibration if it is a (relatively) minimal elliptic surface $\pi: \mathcal{X} \to \mathbb{P}^1$ over $\mathbb{P}^1$
with a distinguished section $S_0$. The complete list of possible singular fibers has been given by Kodaira~\cite{MR0184257}.
It encompasses two infinite families $(I_n, I_n^*, n \ge0)$ and six exceptional cases $(II, III, IV, II^*, III^*, IV^*)$.
To each Jacobian elliptic fibration $\pi: \mathcal{X} \to \mathbb{P}^1$ there is an associated Weierstrass model $\bar{\pi}: \bar{\mathcal{X}\,}\to \mathbb{P}^1$
with a corresponding distinguished section $\bar{S}_0$ obtained by contracting
all components of fibers not meeting $S_0$. $\bar{\mathcal{X}\,}$ is always singular
with only rational double point singularities and irreducible fibers, and $\mathcal{X}$ is the minimal desingularization.
If we choose $t \in \mathbb{C}$ as a local affine coordinate on $\mathbb{P}^1$, we can write $\bar{\mathcal{X}\,}$ in the Weierstrass normal form
\begin{equation}
\label{Eq:Weierstrass}
y^2 = 4 \, x^3 - g_2(t) \, x - g_3(t) \;,
\end{equation}
where $g_2$ and $g_3$ are polynomials in $t$ of degree four and six, or, eight and twelve
if $\mathcal{X}$ is a rational surface or a $K3$ surface, respectively. In the following, we will use $t$ and $(x,y)$ as the affine base coordinate and coordinates of the elliptic fiber for a rational elliptic surface,
and $u$ and $(X,Y)$ for an elliptic $K3$ surface. It is of course well known
how the type of singular fibers is read off from the orders of vanishing of the functions $g_2$, $g_3$ and the discriminant $\Delta= g_2^3 - 27 \, g_3^2$
at the singular base values. Note that the vanishing degrees of $g_2$ and $g_3$ are always less than or equal to three and five, respectively,
as otherwise the singularity of $\bar{\mathcal{X}\,}$ is not a rational double point.
For a family of Jacobian elliptic surfaces $\pi: \mathcal{X} \to \mathbb{P}^1$, the two classes in the N\'eron-Severi lattice $\mathrm{NS}(\mathcal{X})$ associated
with the elliptic fiber and section span a sub-lattice $\mathcal{H}$ isometric to the standard hyperbolic lattice $H$ with the quadratic form $Q=x_1x_2$, and we have the following decomposition
as a direct orthogonal sum
\begin{equation*}
\mathrm{NS}(\mathcal{X}) = \mathcal{H} \oplus \mathcal{W} \;.
\end{equation*}
The orthogonal complement $T(\mathcal{X}) = \mathrm{NS}(\mathcal{X})^{\perp} \subset H^2(\mathcal{X},\mathbb{Z})$ is
called the transcendental lattice and carries the induced Hodge structure.
Moreover, an elliptic fibration $\pi$ is called extremal if and only if the rank of the Mordell-Weil group of sections, denoted by $\operatorname{MW}(\pi)$,
vanishes, i.e., $\operatorname{rank} \operatorname{MW}(\pi)=0$, and the associated elliptic surface has maximal Picard rank.
\subsection{Extremal rational elliptic surfaces}
\label{ERES}
We describe the subset of the extremal rational elliptic surfaces in \cite{MR867347} that will be needed in Section~\ref{Sec:K3}.
In Table \ref{tab:3ExtRatHg}, $g_2, g_3, \Delta, J= g_2^3 / \Delta$ are the Weierstrass coefficients, discriminant, and $J$-function;
the ramification points of $J$ and the Kodaira-types of the fibers over the ramification points are given, as well as the sections that
generate the Mordell-Weil group of sections.
For the rational families of Weierstrass models in Equation~(\ref{Eq:Weierstrass}) we will use $dx/y$ as the holomorphic
one-form on each regular fiber of $\bar{\mathcal{X}\,}\!$. It is well-known (cf.~\cite{MR927661}) that the Picard-Fuchs equation is given by the Fuchsian system
\begin{equation}
\label{FuchsianSystem}
\frac{d}{dt} \left( \begin{array}{c} \omega_1 \\ \eta_1 \end{array} \right) = \left( \begin{array}{ccc} - \frac{1}{12} \frac{d \ln\Delta}{dt} && \frac{3\,\delta}{2\,\Delta} \\
- \frac{g_2 \, \delta}{8 \, \Delta}& & \frac{1}{12} \frac{d \ln \Delta}{dt} \end{array} \right) \cdot \left( \begin{array}{c} \omega_1 \\ \eta_1 \end{array} \right) \;,
\end{equation}
where $\omega_1 = \oint_{\; \Sigma_1} \frac{dx}{y}$ and $\eta_1 = \oint_{\; \Sigma_1} \frac{x \, dx}{y}$ for each one-cycle $\Sigma_1$
and with $\delta=3 \, g_3 \, g_2' - 2\, g_2 \, g_3' $.
We have the following lemma:
\begin{lemma}
\label{Lem1}
For $t \not \in \lbrace 0,1, \infty \rbrace$ there is a smooth family of closed one-cycles $\Sigma_1=\Sigma_1(t)$ in the first homology of the elliptic curve given by Equation~(\ref{Eq:Weierstrass})
such that the period integral $\oint_{\; \Sigma_1} \frac{dx}{y}$ for the rational elliptic surfaces in Table~\ref{tab:3ExtRatHg} with $\mu \not = 0$
reduces to the following hypergeometric function holomorphic near $t=0$
\begin{equation}
\label{rk2hgf}
\omega_1= (2 \pi i) \;\hpg21{ \mu, 1-\mu}{1}{t}.
\end{equation}
The period is annihilated by the second-order, degree-one Picard-Fuchs operator
\begin{equation}
\label{L2mu}
\mathsf{L}_2 = \theta^2 - t \, \big(\theta + \mu\big) \, \big(\theta + 1 -\mu\big) \;.
\end{equation}
For $\mu=0$ in Table~\ref{tab:3ExtRatHg}, the period holomorphic near $t=0$ is given by
\begin{equation}
\label{rk2hgfb}
\omega_1 = (2 \pi i) \;\, \hpg21{ \frac{1}{2}, \frac{1}{2}}{1}{\lambda} \;\hpgo10\!\left.\left(\frac{1}{2} \right| t \right)
\end{equation}
and annihilated by the first-order, degree-one Picard-Fuchs operator
\begin{equation}
\label{L1}
\mathsf{L}_1 = \theta - t \, \left(\theta + \frac{1}{2}\right) \;.
\end{equation}
\end{lemma}
\begin{proof}
The proof was given in \cite{Doran:2015aa}.
\end{proof}
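The statement of Lemma~\ref{Lem1} can be checked numerically: writing $\theta = t\,d/dt$, so that $\theta f = t f'$ and $\theta^2 f = t f' + t^2 f''$, and using $(\theta+\mu)(\theta+1-\mu)=\theta^2+\theta+\mu(1-\mu)$, the residual of $\mathsf{L}_2$ applied to $\hpgo21(\mu,1-\mu;1;t)$ vanishes. The sketch below (the sample values $\mu=1/4$, $t_0=1/5$ are our choice; any $\mu\in(0,1)$ and $t_0\notin\{0,1\}$ should work) uses mpmath.

```python
from mpmath import mp, hyp2f1, diff, mpf

mp.dps = 30
mu = mpf(1) / 4  # sample value; in the paper mu is read off from the table

f = lambda t: hyp2f1(mu, 1 - mu, 1, t)

t0 = mpf(1) / 5
f1, f2 = diff(f, t0), diff(f, t0, 2)
th1 = t0 * f1                 # theta f = t f'
th2 = t0 * f1 + t0**2 * f2    # theta^2 f = t f' + t^2 f''
# L2 f = theta^2 f - t * (theta^2 + theta + mu*(1-mu)) f
residual = th2 - t0 * (th2 + th1 + mu * (1 - mu) * f(t0))
print(residual)  # ~ 0
```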
\begin{remark}
The names of the Jacobian elliptic surfaces in Table \ref{tab:3ExtRatHg} coincide with the ones used by Miranda and Persson \cite{MR867347} and
Herfurtner \cite{MR1129371}.
\end{remark}
\begin{remark}
The definition and basic properties of the hypergeometric functions $\hpgo21$ will be given in Section~\ref{EulerIntegrals}.
\end{remark}
\begin{remark}
\label{Rem:dual_period}
For the rational elliptic surfaces in Table~\ref{tab:3ExtRatHg} with $\mu \not =0$, there is a smooth family of closed dual one-cycles $\Sigma'_1=\Sigma'_1(t)$
such that the period integral reduces to the second, linearly independent solution annihilated by the operator~(\ref{L2mu}) that has a singular point at $t=0$ and is given by
\begin{equation}
\label{rk2hgf_dual}
\omega'_1 = \oint_{\; \Sigma_1'} \frac{dx}{y} = \dfrac{(2 \pi i)}{t^{\mu}} \; \hpg21{ \mu, \mu}{2\mu}{\frac{1}{t}} \;.
\end{equation}
\end{remark}
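The dual period of Remark~\ref{Rem:dual_period} is, up to the factor $2\pi i$, the standard solution of the hypergeometric equation at infinity, $t^{-\mu}\,\hpgo21(\mu,\mu;2\mu;1/t)$ with $(\alpha,\beta,\gamma)=(\mu,1-\mu,1)$. A numerical sketch (sample values $\mu=1/3$, $t_0=3$ are our choice) checks that it satisfies the classical form $t(1-t)f''+(1-2t)f'-\mu(1-\mu)f=0$ of the operator~(\ref{L2mu}).

```python
from mpmath import mp, hyp2f1, diff, mpf

mp.dps = 30
mu = mpf(1) / 3

def f(t):
    # dual-period solution at infinity: t^(-mu) * 2F1(mu, mu; 2mu; 1/t)
    return t ** (-mu) * hyp2f1(mu, mu, 2 * mu, 1 / t)

t0 = mpf(3)  # a point with |1/t| < 1, away from the singular set {0, 1, oo}
residual = (t0 * (1 - t0) * diff(f, t0, 2)
            + (1 - 2 * t0) * diff(f, t0)
            - mu * (1 - mu) * f(t0))
print(residual)  # ~ 0
```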
\subsection{$K3$ fibrations from base transformations and twists}
\label{Sec:K3}
Rational base changes provide a convenient method to produce Jacobian elliptic $K3$ surfaces from rational elliptic surfaces. The set-up is as follows: suppose we have a rational Jacobian elliptic surface
$\pi: \mathcal{X} \to C_\mathcal{X} = \mathbb{P}^1$ over the rational base curve $C_\mathcal{X} $. To apply a base change, we need a rational ramified cover
$\mathbb{P}^1 \to C_\mathcal{X}=\mathbb{P}^1$ of degree $d$ mapping surjectively to $C_\mathcal{X}$. To be precise, for each $[u:1] \in \mathbb{P}^1$ we set $t= p(u)/u^n$ for $n \in \mathbb{N}$ where
$p$ is a polynomial of degree $d > n \ge 0$ with the following three properties: (1) the points $t=0$ and $t=1$ have $d$ pre-images each with branch numbers zero; (2)
$t=\infty$ is a branching point with corresponding ramification points $u=\infty$ with branch number $d-n-1$ and $u=0$ with branch number $n-1$ if $n \ge 1$;
(3) there are $d$ additional ramification points not coincident with $\lbrace 0, 1, \infty\rbrace$ with branch number $1$.
The Riemann-Hurwitz formula $g-1=B/2+d \cdot (g'-1)$ is then satisfied for $g=g'=0$, $B=(d-n-1)+(n-1)+d$.
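The branch-number bookkeeping can be verified mechanically: $B=(d-n-1)+(n-1)+d=2d-2$, so $B/2+d\cdot(g'-1)=(d-1)-d=-1=g-1$ for every admissible pair $(d,n)$. A trivial check over the range $2\le d\le 4$, $0\le n<d$ used later:

```python
# Arithmetic check of the Riemann-Hurwitz count for the covers used here:
# with g = g' = 0 and total branch number B = (d-n-1) + (n-1) + d = 2d - 2,
# the formula g - 1 = B/2 + d*(g' - 1) holds for every admissible pair (d, n).
def riemann_hurwitz_holds(d, n):
    g = g_prime = 0
    B = (d - n - 1) + (n - 1) + d
    return g - 1 == B / 2 + d * (g_prime - 1)

checks = [riemann_hurwitz_holds(d, n) for d in range(2, 5) for n in range(0, d)]
print(all(checks))  # True
```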
The base change is defined as the following fiber product:
\begin{equation}
\begin{array}{ccc}
\mathcal{Y}:=\mathcal{X} \times_{C_\mathcal{X}} \mathbb{P}^1 & \longrightarrow & \mathbb{P}^1 \\
\downarrow && \downarrow\\
\mathcal{X} & \longrightarrow & C_\mathcal{X}
\end{array}
\end{equation}
Generically, one expects $d=2$ in order to turn a rational surface into a $K3$ surface by a rational base change. However, the extremal rational elliptic surfaces from
Section~\ref{ERES} have star-fibers at $t=\infty$ whence values with $2 \le d \le 4$ can all produce $K3$ surfaces as well. We then obtain Jacobian elliptic $K3$ surfaces
with $d$ singular fibers of the same Kodaira-type as the rational elliptic surface $\mathcal{X}$ has at $u=0$ and $u=1$, respectively.
The effect of a base change on the singular fiber at $t=\infty$ depends on the local ramification of the cover $\mathbb{P}^1 \to C_\mathcal{X}=\mathbb{P}^1$.
Two elliptic surfaces with the same $J$-map have the same singular fibers up to some quadratic twist.
The effect of a quadratic twist on the singular fibers is as follows:
\begin{equation}
I_n \leftrightarrow I_n^*, \qquad II \leftrightarrow IV^*, \qquad III \leftrightarrow III^*, \qquad IV \leftrightarrow II^* \;.
\end{equation}
It is well-known that any two elliptic surfaces that are quadratic twists of each other become isomorphic after a suitable finite base change~\cite{MR2732092}.
For us, quadratic twisting is understood by starting with the Weierstrass equation~(\ref{Eq:Weierstrass}) and replacing it by the following Weierstrass equation
for $\bar{\mathcal{Y}\,}$
\begin{equation}
\label{Eq:Weierstrass_b}
Y^2 = 4 \, X^3 - g_2\left(\frac{p(u)}{u^n}\right) \, T(u)^2 \; X - g_3\left(\frac{p(u)}{u^n}\right) \, T(u)^3 \;,
\end{equation}
where $T$ is a quadratic polynomial in $u$, and we have already combined the twisting with the aforementioned rational base transformation.
We will always require that Equation~(\ref{Eq:Weierstrass_b}) is a minimal Weierstrass fibration.
We then have the following result constructing each Jacobian elliptic fibration on $\mathcal{Y}=\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$
from extremal rational elliptic surfaces:
\begin{proposition}
\label{Prop1}
We have the following statements:
\begin{enumerate}
\item[(1)] The Jacobian elliptic fibrations $\mathcal{J}_1, \dots,$ $\mathcal{J}_7, \mathcal{J}_9$ given in \cite{MR2409557,MR1013073}
on the Kummer surface $\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ are obtained in Equation~(\ref{Eq:Weierstrass_b}) from the extremal Jacobian elliptic surfaces given
in Table~\ref{tab:3ExtRatHg} by using the rational base transformations $t=t_i(u)$ and quadratic twists $T=T_i(u)$ in Table~\ref{tab:KummerFibs} for $i=1, \dots, 9$.
\item[(2)] For $i \in \lbrace 1,2,3,7,9 \rbrace$ the formulas in Table~\ref{tab:KummerFibs} are given over the quadratic field extension $K[d_i]$ of the field $K=\mathbb{C}(\lambda_1,\lambda_2)$ of moduli
of $\mathcal{E}_1$ and $\mathcal{E}_2$. Table~\ref{tab:KummerFibs} presents $d_i^2$ as a polynomial in terms of their elliptic modular parameters $\lambda_1$ and $\lambda_2$.
\end{enumerate}
\end{proposition}
\begin{proof}
For each fibration we apply a transformation $(Y,X) \mapsto (Y/2,X+p(\lambda_1,\lambda_2; u))$ to the elliptic fibrations in \cite{MR2409557}
-- where $p(\lambda_1,\lambda_2; u)$ is a polynomial in the modular parameters and the affine coordinate $u$ -- to obtain a Jacobian elliptic fibration in Weierstrass normal form.
In addition, for $\mathcal{J}_5$ we apply the transformation $(Y,X,u) \mapsto (Y/u^6,X/u^4,1+1/u)$ to move the singular fibers into convenient positions.
The proof then follows by comparing the obtained Weierstrass normal forms with the ones obtained in Equation~(\ref{Eq:Weierstrass_b}) from the extremal Jacobian elliptic surfaces given
in Table~\ref{tab:3ExtRatHg} by using the rational base transformations $t=t_i(u)$ and quadratic twists $T=T_i(u)$ in Table~\ref{tab:KummerFibs} for $i=1, \dots, 9$.
\end{proof}
\begin{remark}
For $\mathcal{J}_4, \mathcal{J}_5, \mathcal{J}_6$ the base transformations and twists do not depend on a quadratic field extension.
In these cases, the decomposition into a rational base transformation and quadratic twist is well-defined over the function field $K$ itself.
\end{remark}
The remaining fibrations, i.e., $\mathcal{J}_8$, $\mathcal{J}_{10}$, and $\mathcal{J}_{11}$, are found to be related to other Jacobian elliptic
fibrations by rational transformations that leave the holomorphic two-form invariant.
We have the following proposition:
\begin{proposition}
\label{Prop2}
The Jacobian elliptic fibrations $\mathcal{J}_8, \mathcal{J}_{10}, \mathcal{J}_{11}$ given in \cite{MR2409557, MR1013073}
on the Kummer surface $\mathcal{Y}=\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ are obtained from the Jacobian elliptic fibrations $\mathcal{J}_7, \mathcal{J}_{9}$, and $\mathcal{J}_{7}$,
respectively, by the rational transformations given in Table~\ref{tab:KummerRels} that leave the holomorphic two-form invariant.
\end{proposition}
\begin{proof}
The proof follows by explicit computation. The transformation $(Y,X) \mapsto (Y/2,X+p(\lambda_1,\lambda_2; u))$
rescales the holomorphic two-form $\omega=du \wedge dx/y$ by a constant factor of two for \emph{each} fibration, and therefore does not affect the result.
\end{proof}
\section{Period integrals}
\label{SEC:periods}
In \cite{MR902936,MR948812} Gel'fand, Kapranov and Zelevinsky defined a general class of hypergeometric functions encompassing
the classical one-variable hypergeometric functions as well as the Appell and Lauricella functions. Today these are known as
GKZ hypergeometric functions and provide an elegant basis for a theory of hypergeometric functions in several variables.
Functions admitting integral representations that generalize Euler's classical integral transform for Gauss' hypergeometric function
are known as $\mathcal{A}$-hypergeometric functions and were studied in \cite{MR1080980}.
\subsection{Euler integrals}
\label{EulerIntegrals}
The classical Euler integral transform
for Gauss' hypergeometric function $\hpgo21$ for $\textnormal{Re}(\gamma)>\textnormal{Re}(\beta)>0$ is given by
\begin{equation}
\label{GaussIntegral}
\hpg21{\alpha,\,\beta}{\gamma}{z} = \frac{\Gamma(\gamma)}{\Gamma(\beta) \, \Gamma(\gamma-\beta)} \, \int_0^1 (1-x)^{\gamma-\beta-1} \, (1- z\, x)^{-\alpha} \; x^{\beta-1} \, dx\;.
\end{equation}
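Euler's integral~(\ref{GaussIntegral}) is easily confirmed by direct quadrature; the sketch below (the parameter choice $(\alpha,\beta,\gamma)=(1/2,1/2,1)$, $z=3/10$ is ours, matching the values that occur in the Picard-Fuchs operators of this paper) uses mpmath's tanh-sinh quadrature, which handles the integrable endpoint singularities.

```python
from mpmath import mp, mpf, quad, gamma, hyp2f1

mp.dps = 25
al, be, ga, z = mpf(1)/2, mpf(1)/2, mpf(1), mpf(3)/10

# Euler integral: Gamma-prefactor times int_0^1 (1-x)^(ga-be-1) (1-z x)^(-al) x^(be-1) dx
integral = quad(lambda x: (1 - x)**(ga - be - 1) * (1 - z*x)**(-al) * x**(be - 1),
                [0, 1])
rhs = gamma(ga) / (gamma(be) * gamma(ga - be)) * integral
print(rhs - hyp2f1(al, be, ga, z))  # ~ 0
```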
The differential equation satisfied by $\hpgo21$ is
\begin{equation} \label{eq:euler}
z(1-z)\;\frac{d^2f}{dz^2}+
\big(\gamma-(\alpha+\beta+1)\, z\big) \; \frac{df}{dz}-\alpha\,\beta\,f=0.
\end{equation}
Equation~(\ref{eq:euler}) is a Fuchsian\footnote{Fuchsian means linear homogeneous and with regular singularities.} equation with three regular singularities at $z=0$, $z=1$ and $z=\infty$ with local exponent differences equal to $1-\gamma$, $\gamma-\alpha-\beta$, and $\alpha-\beta$, respectively.
For $\alpha=1-\beta=\mu$ and $\gamma=1$, it coincides with the differential operator $\mathsf{L}_2$ in Equation~(\ref{L2mu}).
The linear differential equation satisfied by the hypergeometric function $\hpgo21$
when written as a first-order Pfaffian system will be denoted by $\hpgdo21$ with
\begin{equation}
\label{PfaffianSystem2F1}
\hpgd21{\alpha,\,\beta}{\gamma}{z}: \quad d \vec{f}_{z} = \Omega^{(\,_2F_1)}_{z} \cdot \vec{f}_{z}
\end{equation}
for the vector-valued function
$$
\vec{f}_{z} = \langle f(z), \, \theta_{z} f(z) \rangle^t
$$
with $\theta_{z}= z \, \partial_{z}$. The Pfaffian matrix associated with the differential equation~(\ref{eq:euler}) is given by
\begin{equation}
\label{connection2F1}
\Omega^{(\,_2F_1)}_{z} = \left(
\begin {array}{cc}
0& \frac {1}{z} \\
- \frac{\alpha \beta}{z-1}
& \left( \frac{1-\gamma}{z} + \frac{\gamma-\alpha-\beta-1}{z-1} \right)
\end {array}
\right) \; dz\;.
\end{equation}
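The Pfaffian matrix~(\ref{connection2F1}) can be checked numerically: for $f=\hpgo21(\alpha,\beta;\gamma;z)$ the vector $(f,\theta_z f)^t$ satisfies $d\vec{f}_z = M(z)\,\vec{f}_z\,dz$ with $M(z)$ the $dz$-coefficient above. A sketch (generic sample parameters and the evaluation point are our choice):

```python
from mpmath import mp, mpf, hyp2f1, diff

mp.dps = 30
al, be, ga = mpf(1)/2, mpf(1)/3, mpf(5)/4  # generic sample parameters

f = lambda z: hyp2f1(al, be, ga, z)
th = lambda z: z * diff(f, z)              # theta f = z f'
z0 = mpf(1)/5

# rows of the dz-coefficient of Omega^(2F1) at z0
row1 = [mpf(0), 1/z0]
row2 = [-al*be/(z0 - 1), (1 - ga)/z0 + (ga - al - be - 1)/(z0 - 1)]

r1 = diff(f, z0) - (row1[0]*f(z0) + row1[1]*th(z0))
r2 = diff(th, z0) - (row2[0]*f(z0) + row2[1]*th(z0))
print(abs(r1), abs(r2))  # both ~ 0
```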
The outer tensor product of two rank-two Pfaffian systems is constructed by introducing $\vec{H}_{z_1,z_2} = \vec{f}_{z_1} \boxtimes \vec{f}_{z_2} $, i.e.,
\begin{equation*}
\begin{split}
\vec{H}_{z_1,z_2} = \, \langle f(z_1) \, f(z_2), \; \theta_{z_1} f(z_1) \, f(z_2), \; f(z_1) \, \theta_{z_2} f(z_2), \; \theta_{z_1} f(z_1) \, \theta_{z_2}f(z_2)\rangle^t \;.
\end{split}
\end{equation*}
The associated Pfaffian system is the rank-four system
\begin{equation}
\label{PfaffianSystemSqr}
\hpgd21{\alpha_1,\,\beta_1}{\gamma_1}{z_1} \boxtimes \hpgd21{\alpha_2,\,\beta_2}{\gamma_2}{z_2}: \quad d\vec{H}_{z_1,z_2} = \Omega^{(\,_2F_1 \boxtimes \,_2F_1)}_{z_1,z_2}\cdot \vec{H}_{z_1,z_2}
\end{equation}
with the connection form
\begin{equation}
\Omega^{(\,_2F_1 \boxtimes \,_2F_1)}_{z_1,z_2} = \Omega^{(\,_2F_1)}_{z_1} \boxtimes \mathbb{I} + \mathbb{I} \boxtimes \Omega^{(\,_2F_1)}_{z_2} \;.
\end{equation}
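In coordinates, with the component ordering of $\vec{H}_{z_1,z_2}$ above, the $z_1$-derivative of $\vec{H}$ is governed by the Kronecker factor $\mathbb{I}\otimes M(z_1)$ of the Kronecker sum (and the $z_2$-derivative by $M(z_2)\otimes\mathbb{I}$). The following sketch checks the $z_1$-part numerically; the parameter and evaluation-point choices are ours.

```python
from mpmath import mp, mpf, hyp2f1, diff

mp.dps = 30
al, be, ga = mpf(1)/2, mpf(1)/2, mpf(1)

def M(z):
    # dz-coefficient of the 2F1 connection form
    return [[mpf(0), 1/z],
            [-al*be/(z - 1), (1 - ga)/z + (ga - al - be - 1)/(z - 1)]]

f = lambda z: hyp2f1(al, be, ga, z)
th = lambda z: z * diff(f, z)

def H(z1, z2):  # ordering <f1 f2, th f1 * f2, f1 * th f2, th f1 * th f2>
    return [f(z1)*f(z2), th(z1)*f(z2), f(z1)*th(z2), th(z1)*th(z2)]

def kron(A, B):  # Kronecker product of two 2x2 matrices
    return [[A[i//2][j//2]*B[i%2][j%2] for j in range(4)] for i in range(4)]

I2 = [[mpf(1), mpf(0)], [mpf(0), mpf(1)]]
z1, z2 = mpf(1)/5, mpf(1)/7

dH1 = [diff(lambda t: H(t, z2)[k], z1) for k in range(4)]
Hv = H(z1, z2)
A1 = kron(I2, M(z1))
res = max(abs(dH1[i] - sum(A1[i][j]*Hv[j] for j in range(4))) for i in range(4))
print(res)  # ~ 0
```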
Appell's hypergeometric function $\appo2$ of two variables has an integral representation for $\textnormal{Re}{(\gamma_1)} > \textnormal{Re}{(\beta_1)} > 0$ and $\textnormal{Re}{(\gamma_2)} > \textnormal{Re}{(\beta_2)} > 0$
given by
\begin{equation}
\label{IntegralFormula}
\begin{split}
\app2{\alpha;\;\beta_1,\beta_2}{\gamma_1,\gamma_2}{z_1, z_2} = \frac{\Gamma(\gamma_1) \, \Gamma(\gamma_2)}{\Gamma(\beta_1) \, \Gamma(\beta_2) \, \Gamma(\gamma_1 - \beta_1) \, \Gamma(\gamma_2-\beta_2)} \quad \qquad\\
\times \, \int_0^1 dt \int_0^1 dx \;
\frac{1}{t^{1-\beta_2} \, (1-t)^{1+\beta_2-\gamma_2} \, x^{1-\beta_1} \, (1-x)^{1+\beta_1-\gamma_1} \, (1-z_1 \, x - z_2 \, t)^{\alpha}} \;.
\end{split}
\end{equation}
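The double-integral representation~(\ref{IntegralFormula}) can be verified against mpmath's implementation of $\appo2$; the sketch below (parameters $(1/2;1/2,1/2;1,1)$ as in this paper, and a sample point $(z_1,z_2)$ of our choosing inside the convergence region $|z_1|+|z_2|<1$) uses nested quadrature.

```python
from mpmath import mp, mpf, quad, gamma, appellf2

mp.dps = 20
al = b1 = b2 = mpf(1)/2
g1 = g2 = mpf(1)
z1, z2 = mpf(1)/10, mpf(1)/5

pref = gamma(g1)*gamma(g2) / (gamma(b1)*gamma(b2)*gamma(g1 - b1)*gamma(g2 - b2))
# Euler-type double integral over the unit square
integral = quad(lambda t: quad(lambda x:
                x**(b1 - 1) * (1 - x)**(g1 - b1 - 1)
                * t**(b2 - 1) * (1 - t)**(g2 - b2 - 1)
                * (1 - z1*x - z2*t)**(-al), [0, 1]), [0, 1])
print(pref*integral - appellf2(al, b1, b2, g1, g2, z1, z2))  # ~ 0
```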
Appell's function $\appo2$ satisfies a Fuchsian system of partial differential equations analogous to the hypergeometric equation for the function $\hpgo21$.
The system of linear partial differential equations satisfied by $\appo2$ is given by
\begin{equation} \label{app2system}
\begin{split}
z_1(1-z_1)\frac{\partial^2F}{\partial z_1^2}-z_1z_2\frac{\partial^2F}{\partial z_1\partial z_2}
+\left(\gamma_1-(\alpha+\beta_1+1)\, z_1\right)\frac{\partial F}{\partial z_1}-\beta_1z_2\frac{\partial F}{\partial z_2}
-\alpha \beta_1F=0,\\
z_2(1-z_2)\frac{\partial^2F}{\partial z_2^2}-z_1z_2\frac{\partial^2F}{\partial z_1\partial z_2}
+\left(\gamma_2-(\alpha+\beta_2+1)\, z_2\right)\frac{\partial F}{\partial z_2}-\beta_2 z_1\frac{\partial F}{\partial z_1}
- \alpha \beta_2F=0.
\end{split}
\end{equation}
This is a holonomic system of rank four whose singular locus on $\mathbb{P}^1\times\mathbb{P}^1$ is the union of the following lines
\begin{equation} \label{app2sing}
z_1=0,\quad z_1=1,\quad z_1=\infty, \quad z_2=0,\quad z_2=1,\quad z_2=\infty, \quad z_1+z_2=1.
\end{equation}
The system (\ref{app2system}) of differential equations satisfied by the Appell hypergeometric function
when written as the Pfaffian system will be denoted by $\appdo2$ with
\begin{equation}
\label{PfaffianSystemF2}
\appd2{\alpha;\;\beta_1,\beta_2}{\gamma_1,\gamma_2}{z_1, z_2}: \quad d\vec{F}_{z_1,z_2} = \Omega^{(F_2)}_{z_1,z_2} \cdot \vec{F}_{z_1,z_2}
\end{equation}
for the vector-valued function
$$
\vec{F}_{z_1,z_2} = \langle F, \; \theta_{z_1} F, \; \theta_{z_2} F, \; \theta_{z_1}\theta_{z_2} F \rangle^t \;
$$
with $\theta_{z_i}= z_i \, \partial_{z_i}$ for $i=1, 2$. The Pfaffian matrix associated with~(\ref{app2system}) has rank four and its explicit form
is found in~\cite{MR1086776}.
The connection between the hypergeometric function $\hpgo21$ and Appell's hypergeometric function $\appo2$ is given by an integral transform
that was proved in~\cite{Clingher:2015aa}:
\begin{lemma}
\label{EulerIntegralTransform}
For $\textnormal{Re}{(\gamma_1)} > \textnormal{Re}{(\beta_1)} > 0$ and $\textnormal{Re}{(\gamma_2)} > \textnormal{Re}{(\beta_2)} > 0$, we have the following relation between
the hypergeometric function and Appell's hypergeometric function:
\begin{equation}
\label{IntegralTransform}
\begin{split}
\frac{1}{A^{\alpha}} \;
\app2{\alpha;\;\beta_1,\beta_2}{\gamma_1,\gamma_2}{\frac{1}{A}, 1 - \frac{B}{A}} = - \frac{\Gamma(\gamma_2) \, (A-B)^{1-\gamma_2}}{ \Gamma(\beta_2)\, \Gamma(\gamma_2-\beta_2)} \quad \\
\times \; \int_A^B \frac{dt}{ (A-t)^{1-\beta_2} \, (t-B)^{1+\beta_2-\gamma_2}} \, \frac{1}{t^{\alpha}} \; \hpg21{\alpha,\,\beta_1}{\gamma_1}{\frac{1}{t}} \;.
\end{split}
\end{equation}
\end{lemma}
\subsection{Differential systems from fibrations $\mathcal{J}_4$, $\mathcal{J}_6$, $\mathcal{J}_7$, $\mathcal{J}_9$}
As a reminder, $\lambda_1$ and $\lambda_2$ are the modular parameters of the elliptic curves $\mathcal{E}_1$ and $\mathcal{E}_2$, respectively.
For the fibration $\mathcal{J}_4$ we have the following lemma:
\begin{lemma}
\label{lem:J4}
The Picard-Fuchs system for the periods of the holomorphic two-form on the family
$\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ is given by
\begin{equation}
\label{PF_J4}
\hpgd21{\frac{1}{2},\,\frac{1}{2}}{1}{\lambda_1} \boxtimes \hpgd21{\frac{1}{2},\,\frac{1}{2}}{1}{\lambda_2} \;.
\end{equation}
\end{lemma}
\begin{proof}
For the Jacobian elliptic fibration $\mathcal{J}_4$ on $\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$
the holomorphic two-form is given by $\omega=du \wedge dX/Y$. There is a transcendental two-cycle $\Sigma_2$ such that the period integral
reduces to the iterated integral
\begin{equation}
\oiint_{ \Sigma_2} \omega = 2 \, \int_1^\infty \dfrac{dt_4}{\sqrt{t_4 \, (t_4-\lambda_1)}} \, \oint_{\Sigma_1} \frac{dx}{y} \;,
\end{equation}
where we used Proposition~\ref{Prop1} to relate the double integral to an integral for the holomorphic one-form $dx/y$ on the extremal
rational elliptic surface $\mathcal{X}_{11}(\lambda_2)$ and then reduced the outer integration to an integration along the branch cut for the function
$\sqrt{\, t_4 \, (t_4-\lambda_1)}$. Using Lemma~\ref{Lem1} and Equation~(\ref{GaussIntegral}), we evaluate the period integral further to obtain
\begin{equation}
\begin{split}
\oiint_{ \Sigma_2} \omega &= 4 \pi i \int_1^\infty \dfrac{dt_4}{\sqrt{\, t_4 \, (t_4-\lambda_1)}} \;\hpgo10\!\left.\left(\frac{1}{2} \right| t_4 \right) \; \hpg21{ \frac{1}{2}, \frac{1}{2}}{1}{\lambda_2} \\
& = 4 \pi^2 \;\, \hpg21{ \frac{1}{2}, \frac{1}{2}}{1}{\lambda_1} \; \hpg21{ \frac{1}{2}, \frac{1}{2}}{1}{\lambda_2} \;.
\end{split}
\end{equation}
We can change the two-cycle $\Sigma_2$ to obtain a second, linearly-independent solution for each of the factors $\hpgdo21(\lambda_1)$ and $\hpgdo21(\lambda_2)$, respectively.
This proves that there are at least four linearly independent period integrals of the holomorphic two-form $\omega$ that are annihilated by the differential system in~(\ref{PF_J4}).
As the Picard rank of $\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ is 18 if the two elliptic curves are not mutually isogenous, the rank of the Picard-Fuchs system equals four, and the lemma follows.
\end{proof}
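The branch-cut reduction used in the proof amounts to a classical identity: substituting $t\mapsto 1/s$ turns the integral over $[1,\infty)$ into Euler's integral, so that $\int_1^\infty dt/\sqrt{t(t-1)(t-\lambda)} = \pi\,\hpgo21(\tfrac12,\tfrac12;1;\lambda)$. A numerical sketch (the sample value $\lambda=3/10$ is our choice):

```python
from mpmath import mp, mpf, quad, sqrt, pi, hyp2f1

mp.dps = 25
lam = mpf(3)/10

# after the substitution t = 1/s the integral over [1, oo) becomes Euler's integral
lhs = quad(lambda s: 1/sqrt(s*(1 - s)*(1 - lam*s)), [0, 1])
rhs = pi * hyp2f1(mpf(1)/2, mpf(1)/2, 1, lam)
print(lhs - rhs)  # ~ 0
```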
Next, we look at the fibration $\mathcal{J}_7$. Here, we will need to consider a quadratic field extension of the field $K=\mathbb{C}(\lambda_1,\lambda_2)$ of moduli for the pair
$\mathcal{E}_1$ and $\mathcal{E}_2$. We have the following lemma:
\begin{lemma}
\label{lem:J7}
Over $K[d_7]$ with $d_7^2=\lambda_1\lambda_2$, the Picard-Fuchs system for the periods of the holomorphic two-form on the family
$\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ is given by
\begin{equation}
\label{PF_J7}
\dfrac{1}{\sqrt{\lambda_1+\lambda_2+2 \, d_7}} \; \appd2{\frac{1}{2};\;\frac{1}{2},\frac{1}{2}}{1,1}{v_7,w_7} \;,
\end{equation}
where we have set
\begin{equation}
\label{transfo:PF_J7}
\Big(v_7, w_7\Big) = \left( \dfrac{4 \, d_7}{\lambda_1+\lambda_2+2 \, d_7}, -\dfrac{(1-\lambda_1)(1-\lambda_2)}{\lambda_1+\lambda_2+2 \, d_7} \right) \;.
\end{equation}
Equivalently, the Picard-Fuchs system is given by
\begin{equation}
\label{PF_J7b}
\dfrac{1}{\sqrt{1+\lambda_1\lambda_2+2 \, d_7}} \; \appd2{\frac{1}{2};\;\frac{1}{2},\frac{1}{2}}{1,1}{\tilde{v}_7, \tilde{w}_7} \;,
\end{equation}
where we have set
\begin{equation}
\label{transfo:PF_J7b}
\Big(\tilde{v}_7, \tilde{w}_7\Big) = \left( \dfrac{4 \, d_7}{1+\lambda_1\lambda_2+2 \, d_7}, \dfrac{(1-\lambda_1)(1-\lambda_2)}{1+\lambda_1\lambda_2+2 \, d_7} \right) \;.
\end{equation}
\end{lemma}
\begin{proof}
Using the Jacobian elliptic fibration $\mathcal{J}_7$
and the holomorphic two-form $\omega=du \wedge dX/Y$, there is a transcendental two-cycle $\Sigma'_2$ such that the period integral
reduces to the iterated integral
\begin{equation}
\oiint_{ \Sigma'_2} \omega = 2 \int_0^\infty \dfrac{du}{\sqrt{\,T_7(u)}} \; \oint_{\Sigma'_1} \frac{dx}{y} \;,
\end{equation}
where we used Proposition~\ref{Prop1} to relate the double integral to an integral for the holomorphic one-form $dx/y$ on the extremal
rational elliptic surface $\mathcal{X}_{411}$ and then reduced the outer integration to an integration along a branch cut.
Using Remark~\ref{Rem:dual_period} and Equation~(\ref{IntegralTransform}), we evaluate the period integral further to obtain
\begin{equation}
\begin{split}
\oiint_{ \Sigma'_2} \omega &= - 2 \pi i \int_{A_7}^{B_7} \dfrac{dt_7}{\sqrt{\, d_7 \; (t_7-A_7) \, (t_7-B_7)}} \; \dfrac{1}{\sqrt{t_7}} \; \hpg21{ \frac{1}{2}, \frac{1}{2}}{1}{\frac{1}{t_7}} \\
& = \frac{4 \pi^2}{\sqrt{\, 4 \, d_7 A_7}} \; \app2{\frac{1}{2};\;\frac{1}{2},\frac{1}{2}}{1,1}{\frac{1}{A_7}, 1 - \frac{B_7}{A_7}} \;,
\end{split}
\end{equation}
where we have set
\begin{equation}
\label{params:J7}
\Big(A_7, B_7 \Big) = \left( \dfrac{\lambda_1+\lambda_2}{4 \, d_7}+\dfrac{1}{2}, \, \dfrac{1+\lambda_1 \lambda_2}{4 \, d_7}+\dfrac{1}{2} \right)\;.
\end{equation}
We can change the two-cycle $\Sigma'_2$ to obtain three more linearly independent solutions with different characteristic behavior at the lines in~(\ref{app2sing}).
The rest of the proof is analogous to the proof of Lemma~\ref{lem:J4}.
Equation~(\ref{PF_J7b}) and Equation~(\ref{transfo:PF_J7b}) follow from swapping the roles of $A_7$ and $B_7$ in Equation~(\ref{params:J7}).
\end{proof}
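The bookkeeping between $(A_7,B_7)$ in Equation~(\ref{params:J7}) and $(v_7,w_7)$ in Equation~(\ref{transfo:PF_J7}) can be checked symbolically: one has $1/A_7=v_7$ and $1-B_7/A_7=w_7$ identically (in fact without using $d_7^2=\lambda_1\lambda_2$). A sketch with sympy:

```python
import sympy as sp

l1, l2, d7 = sp.symbols('lambda1 lambda2 d7', positive=True)
A7 = (l1 + l2)/(4*d7) + sp.Rational(1, 2)
B7 = (1 + l1*l2)/(4*d7) + sp.Rational(1, 2)
v7 = 4*d7/(l1 + l2 + 2*d7)
w7 = -(1 - l1)*(1 - l2)/(l1 + l2 + 2*d7)

ok1 = sp.simplify(1/A7 - v7) == 0        # 1/A7 = v7
ok2 = sp.simplify(1 - B7/A7 - w7) == 0   # 1 - B7/A7 = w7
print(ok1, ok2)  # True True
```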
The comparison of Lemma~\ref{lem:J4} and Lemma~\ref{lem:J7} proves that
the Appell hypergeometric system can be decomposed as an outer tensor product of two rank-two Fuchsian systems.
We have the following corollary:
\begin{corollary}
\label{LemmaTensorSystem}
We have the following equivalence of systems of linear differential equations in two variables holonomic of rank four:
\begin{equation}
\label{id:J4J7}
\hpgd21{\frac{1}{2},\,\frac{1}{2}}{1}{\lambda_1} \boxtimes \hpgd21{\frac{1}{2},\,\frac{1}{2}}{1}{\lambda_2}
= \dfrac{1}{\sqrt{\lambda_1+\lambda_2+2 \, d_7}} \; \appd2{\frac{1}{2};\;\frac{1}{2},\frac{1}{2}}{1,1}{v_7,w_7} \;.
\end{equation}
In particular, there is a gauge transformation $G=(G_{ij})_{i,j=1}^4$ with
$$G_{11} = \sqrt{\lambda_1+\lambda_2+2 \, d_7}, \quad G_{1j}=0 \; \text{for $j=2,3,4$},$$
such that the connection forms satisfy
\begin{equation}
\label{RelationConnectionMatrices}
\Omega^{(\,_2F_1 \boxtimes \,_2F_1)}_{\lambda_1,\lambda_2} = G^{-1} \cdot \Omega^{(F_2)}_{v_7,w_7} \cdot G + G^{-1}\cdot dG \;.
\end{equation}
\end{corollary}
\begin{proof}
The second statement is a special case of a more general computation that was carried out in \cite{Clingher:2015aa} where the explicit form of the gauge transformation can be found as well.
\end{proof}
\begin{remark}
\label{RefCDM}
If a carefully crafted transcendental two-cycle is chosen for the period integral, one can relate not only the two differential systems,
but also explicit solutions to both systems. One obtains a special case of an identity by Barnes and Bailey relating
Appell's hypergeometric function to a product of Gauss' hypergeometric functions. This stronger identity was proved in \cite{Clingher:2015aa}
using period integrals on superelliptic curves and generalized Kummer varieties for general rational parameters $(\alpha,\beta_1, \beta_2, \gamma_1, \gamma_2)$.
\end{remark}
\begin{corollary}
We have the following equivalence of systems of linear differential equations in two variables holonomic of rank four:
\begin{equation}
\label{id:J6J7}
\appd2{\frac{1}{2};\;\frac{1}{2},\frac{1}{2}}{1,1}{w_7, v_7} = \dfrac{1}{\sqrt{1-w_7}} \; \appd2{\frac{1}{2};\;\frac{1}{2},\frac{1}{2}}{1,1}{\frac{w_7}{w_7-1}, \frac{v_7}{1-w_7}} \;.
\end{equation}
\end{corollary}
\begin{proof}
The proof follows from the identity
\begin{equation}
\appd2{\frac{1}{2};\;\frac{1}{2},\frac{1}{2}}{1,1}{w_7, v_7} = \sqrt{\dfrac{\lambda_1+\lambda_2+2 \, d_7}{1+\lambda_1\lambda_2+2 \, d_7}} \; \appd2{\frac{1}{2};\;\frac{1}{2},\frac{1}{2}}{1,1}{\tilde{w}_7, \, \tilde{v}_7}
\end{equation}
obtained by comparing Equation~(\ref{PF_J7}) and Equation~(\ref{PF_J7b}) after working out the linear relation between the variables on the left and right hand side.
\end{proof}
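The prefactor in Equation~(\ref{id:J6J7}) can be checked symbolically: with $(v_7, w_7)$ as in Equation~(\ref{form:J7}), the ratio under the square root in the identity above equals $1/(1-w_7)$. A minimal SymPy sketch (the variable names are ours):

```python
import sympy as sp

l1, l2, d7 = sp.symbols('lambda1 lambda2 d7', positive=True)
# the variables of the Appell system, as in Equation (form:J7)
v7 = 4*d7/(l1 + l2 + 2*d7)
w7 = -(1 - l1)*(1 - l2)/(l1 + l2 + 2*d7)
# the ratio under the square root in the proof equals 1/(1 - w7),
# matching the prefactor 1/sqrt(1 - w7) in the corollary
ratio = (l1 + l2 + 2*d7)/(1 + l1*l2 + 2*d7)
assert sp.simplify(ratio - 1/(1 - w7)) == 0
```

Note that this step does not even use the relation $d_7^2 = \lambda_1\lambda_2$; it is a purely rational identity in $\lambda_1, \lambda_2, d_7$.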
\begin{remark}
\label{RefCDM2}
Equation~(\ref{id:J6J7}) can be extended to a relation not only between differential systems, but between explicit solutions.
Equation~(\ref{id:J6J7}) is then the linear transformation for Appell's hypergeometric function $\appo2$.
This stronger identity was proved in \cite{Clingher:2015aa} for general rational parameters $(\alpha,\beta_1, \beta_2, \gamma_1, \gamma_2)$.
\end{remark}
Next, we look at the fibration $\mathcal{J}_6$. However, this fibration will not provide us with a new characterization of the Picard-Fuchs system. We have the following lemma:
\begin{lemma}
\label{lem:J6}
Over $K[d_6]$ with $d_6^2=\lambda_1\lambda_2$, the Picard-Fuchs system for the periods of the holomorphic two-form on the family
$\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ is given by
\begin{equation}
\label{PF_J6}
\dfrac{1}{\sqrt{\lambda_1+\lambda_2+ 2 \, d_6}} \; \appd2{\frac{1}{2};\;\frac{1}{2},\frac{1}{2}}{1,1}{v_6, w_6} \;,
\end{equation}
where we have set
\begin{equation}
\Big(v_6, w_6\Big) = \left( -\dfrac{(1-\lambda_1)(1-\lambda_2)}{\lambda_1+\lambda_2+ 2 \, d_6}, \dfrac{4 \,d_6}{\lambda_1+\lambda_2+ 2 \, d_6} \right) \;.
\end{equation}
In particular, Equation~(\ref{PF_J6}) coincides with Equation~(\ref{PF_J7}) up to swapping the order of variables.
\end{lemma}
\begin{proof}
Using the Jacobian elliptic fibration $\mathcal{J}_6$ on $\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$
and the holomorphic two-form $\omega=du \wedge dX/Y$, there is a transcendental two-cycle $\Sigma'_2$ such that the period integral
reduces to the iterated integral
\begin{equation}
\begin{split}
\oiint_{ \Sigma'_2} \omega &= - 4 \pi i \int_{A_6}^{B_6} \dfrac{\sqrt{(1-\lambda_1)(\lambda_2-1)} \, dt_6}{\sqrt{\, p_2(t)}} \; \dfrac{1}{\sqrt{t_6}} \; \hpg21{ \frac{1}{2}, \frac{1}{2}}{1}{\frac{1}{t_6}} \;,
\end{split}
\end{equation}
where the polynomial $p_2(t)$ is given by
$$
p_2(t) = (1-\lambda_1)^2 (\lambda_2-1)^2 \, t^2 - 2 \, (1-\lambda_1) (\lambda_2-1) (\lambda_1+\lambda_2) \, t + (\lambda_2-\lambda_1)^2 \;,
$$
and its two roots $A_6, B_6$ are
\begin{equation}
\label{params:J6}
\Big(A_6, B_6 \Big) = \left(\dfrac{\lambda_1+\lambda_2 \pm 2 \, d_6}{(1-\lambda_1)(\lambda_2-1)},\dfrac{\lambda_1+\lambda_2 \mp 2 \, d_6}{(1-\lambda_1)(\lambda_2-1)}\right) \;.
\end{equation}
As in the proof of Lemma~\ref{lem:J7} we obtain
\begin{equation}
\begin{split}
\oiint_{ \Sigma'_2} \omega &= \frac{4 \pi^2}{\sqrt{\, (1-\lambda_1)(\lambda_2-1) \, A_6}} \; \app2{\frac{1}{2};\;\frac{1}{2},\frac{1}{2}}{1,1}{\frac{1}{A_6}, 1 - \frac{B_6}{A_6}} \;.
\end{split}
\end{equation}
Notice that swapping the roles of $A_6$ and $B_6$ in the transformation~(\ref{params:J6}) interchanges $\pm d_6$.
The rest of the proof is analogous to the one of Lemma~\ref{lem:J7}.
\end{proof}
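The roots $A_6, B_6$ in Equation~(\ref{params:J6}) can be verified directly against the quadratic polynomial $p_2(t)$; a short SymPy check, taking $d_6=\sqrt{\lambda_1\lambda_2}$ on the principal branch:

```python
import sympy as sp

l1, l2, t = sp.symbols('lambda1 lambda2 t', positive=True)
d6 = sp.sqrt(l1*l2)  # the quadratic extension d6^2 = lambda1*lambda2
p2 = (1 - l1)**2*(l2 - 1)**2*t**2 \
     - 2*(1 - l1)*(l2 - 1)*(l1 + l2)*t + (l2 - l1)**2
A6 = (l1 + l2 + 2*d6)/((1 - l1)*(l2 - 1))
B6 = (l1 + l2 - 2*d6)/((1 - l1)*(l2 - 1))
# both claimed roots annihilate p2 ...
assert sp.simplify(p2.subs(t, A6)) == 0
assert sp.simplify(p2.subs(t, B6)) == 0
# ... and their product recovers the quotient of the extreme coefficients
assert sp.simplify(A6*B6 - (l2 - l1)**2/((1 - l1)**2*(l2 - 1)**2)) == 0
```

The key simplification is $(\lambda_1+\lambda_2)^2 - 4\lambda_1\lambda_2 = (\lambda_1-\lambda_2)^2$, which is exactly what the discriminant of $p_2(t)$ produces.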
\begin{remark}
There is a beautiful geometric reason why the Picard-Fuchs systems for the fibrations $\mathcal{J}_6$ and $\mathcal{J}_7$ coincide, which generalizes to lower Picard rank as well.
This will be the subject of a forthcoming article.
\end{remark}
Next, we look at the fibration $\mathcal{J}_9$. We have the following lemma:
\begin{lemma}
\label{lem:J9}
Over $K[d_9]$ with $d_9^2= (\lambda_1^2 - \lambda_1 +1) (\lambda_2^2 - \lambda_2 +1)$, the Picard-Fuchs system for the periods of the holomorphic two-form on the family
$\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ is given by
\begin{equation}
\label{PF_J9}
\dfrac{1}{\sqrt{R_9 \pm S_9 + 4 \, d_9}} \; \appd2{\frac{1}{2};\;\frac{1}{6},\frac{1}{2}}{\frac{1}{3},1}{v_9, w_9} \;,
\end{equation}
where we have set
\begin{equation}
\Big(v_9, w_9\Big) = \left( \dfrac{8 \,d_9}{R_9 \pm S_9 + 4 \, d_9}, \dfrac{\pm S_9}{R_9 \pm S_9 + 4 \, d_9} \right) \;,
\end{equation}
and
\begin{equation}
\begin{split}
R_9 & = 27 \, \lambda_1 \lambda_2 (\lambda_1-1) (\lambda_2-1), \\
S_9 & = (\lambda_1+1)(\lambda_1-2)(2\lambda_1-1) (\lambda_2+1)(\lambda_2-2)(2\lambda_2-1) \;.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
Using the Jacobian elliptic fibration $\mathcal{J}_9$ on $\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$
and the holomorphic two-form $\omega=du \wedge dX/Y$, there is a transcendental two-cycle $\Sigma'_2$
such that the period integral
reduces to the following integral:
\begin{equation}
\oiint_{ \Sigma'_2} \omega = 2 \, \int_0^\infty \dfrac{du}{\sqrt{\,T_9(u)}} \; \oint_{\Sigma'_1} \frac{dx}{y} \;,
\end{equation}
where we used Proposition~\ref{Prop1} to relate the double integral to an integral for the holomorphic one-form $dx/y$ on the extremal
rational elliptic surface $\mathcal{X}_{211}$ and then reduced the outer integration to an integration along a branch cut.
Using Remark~\ref{Rem:dual_period} we evaluate the period integral further to obtain
\begin{equation}
\begin{split}
\oiint_{ \Sigma'_2} \omega &= - \frac{4\sqrt{3}\pi i}{\sqrt{2}} \; \int_{A_9}^{B_9} \dfrac{dt_9}{\sqrt{\, d_9 \, (t_9-A_9) (t_9-B_9)}} \; \dfrac{1}{t_9^{1/6}} \; \hpg21{ \frac{1}{6}, \frac{1}{6}}{\frac{1}{3}}{\frac{1}{t_9}} \;,
\end{split}
\end{equation}
where $A_9, B_9$ are given by
\begin{equation}
\label{params:J9}
\begin{split}
A_9 & = \dfrac{(2 \lambda_1 \lambda_2 - \lambda_1 - \lambda_2 +2) (\lambda_1 \lambda_2 + \lambda_1 - 2 \lambda_2 +1) (\lambda_1 \lambda_2 - 2 \lambda_1 + \lambda_2 +1) }{4 \, d^3_9} - \dfrac{1}{2}\;, \\
B_9 & = \dfrac{(2 \lambda_1 \lambda_2 -\lambda_1 - \lambda_2 -1 )(\lambda_1 \lambda_2 + \lambda_1 + \lambda_2 -2) (\lambda_1 \lambda_2 - 2 \lambda_1 - 2 \lambda_2 +1) }{4 \, d^3_9} -\dfrac{1}{2} \;.
\end{split}
\end{equation}
As in the proof of Lemma~\ref{lem:J7} we obtain
\begin{equation}
\begin{split}
\oiint_{ \Sigma'_2} \omega &= \frac{4 \sqrt{3} \pi^2}{(8 \, d_9^3 \, A_9)^{1/6}} \; \app2{\frac{1}{6};\;\frac{1}{6},\frac{1}{2}}{\frac{1}{3},1}{\frac{1}{A_9}, 1 - \frac{B_9}{A_9}} \;.
\end{split}
\end{equation}
The rest of the proof is analogous to the one of Lemma~\ref{lem:J7}.
The two choices of sign in Equation~(\ref{PF_J9}) follow from swapping the roles of $A_9$ and $B_9$ in Equation~(\ref{params:J9}).
\end{proof}
\begin{remark}
All Appell hypergeometric systems considered in Lemmas~\ref{lem:J7},~\ref{lem:J6},~\ref{lem:J9} are systems of linear differential equations in two variables holonomic of rank four.
In addition they all satisfy
\begin{equation}
\label{QuadricProperty}
\alpha= \beta_1 + \beta_2 - \frac{1}{2}, \; \gamma_1 = 2\beta_1, \; \gamma_2 = 2\beta_2 \;,
\end{equation}
which implies the so-called \textbf{quadric property} as proved in~\cite{MR960834}. The quadric property for a holonomic differential system
states that linearly independent solutions are quadratically related.
It is obvious that the outer tensor product in Equation~(\ref{PF_J4}) satisfies this quadric property as well.
From a geometric point of view, the quadric property stems from the existence of a polarization for the variation of Hodge structure defined by the family of Kummer surfaces.
\end{remark}
\subsection{Differential systems from fibrations $\mathcal{J}_8$, $\mathcal{J}_{10}$, $\mathcal{J}_{11}$}
Proposition~\ref{Prop2} proves that the elliptic fibrations $\mathcal{J}_8, \mathcal{J}_{10}, \mathcal{J}_{11}$ will not give rise to additional identities relating
differential systems beyond the results obtained for fibrations $\mathcal{J}_7$, $\mathcal{J}_{9}$.
In fact, the systems derived from fibrations $\mathcal{J}_8, \mathcal{J}_{11}$ and $\mathcal{J}_{10}$ coincide with the ones found in Lemma~\ref{lem:J7}
and Lemma~\ref{lem:J9}, respectively.
\subsection{A particular GKZ system}
\label{sec:GKZ}
For the remaining fibrations a reduction of the Picard-Fuchs system to an Appell hypergeometric system is in general not possible.
Instead, we will give a description of the differential systems by restricting a particular family of GKZ systems.
We start with the two subsets $\mathcal{A}_1, \mathcal{A}_2 \subset \mathbb{Z}^2$ given by
\begin{equation}
\mathcal{A}_2 = \left\lbrace \left(\begin{array}{c} 0 \\ 1 \end{array} \right), \left(\begin{array}{c} 0 \\ 0 \end{array} \right) \right\rbrace \;,
\quad
\mathcal{A}_1 = \mathcal{A}_2 \cup \left\lbrace \left(\begin{array}{c} 3 \\ 0 \end{array} \right), \left(\begin{array}{c} 2 \\ 0 \end{array} \right),
\left(\begin{array}{c} 1 \\ 0 \end{array} \right), \left(\begin{array}{r} -1 \\ 0 \end{array} \right) \right\rbrace \;.
\end{equation}
To each element $\mathbf{n} = (n_1, n_2) \in \mathbb{Z}^2$ we associate the Laurent monomial
$x^{\mathbf{n}} = x_1^{n_1} \, x_2^{n_2}$ in the two complex variables $x_1$ and $x_2$. We identify the vector space $\mathbb{C}^{\mathcal{A}_i}$ for $i=1, 2$
with the space of Laurent polynomials of the following form
\begin{equation}
\begin{split}
P_1 & = v_{(1|3,0)} x^3_1 + v_{(1|2,0)} x^2_1 + v_{(1|1,0)} x_1 + v_{(1|0,0)} + v_{(1|-1,0)} x_1^{-1} + v_{(1|0,1)} x_2 \,,\\
P_2 & = v_{(2|0,1)} x_2 + v_{(2|0,0)} \,,
\end{split}
\end{equation}
where we have set
$$\mathbf{v} = \Big( v_{(1|3,0)}, v_{(1|2,0)}, v_{(1|1,0)}, v_{(1|0,0)}, v_{(1|-1,0)}, v_{(1|0,1)}, v_{(2|0,0)}, v_{(2|0,1)} \Big)\;,$$
and $\mathbf{P}=(P_1,P_2) \in \mathbb{C}^{\mathcal{A}_1} \times \mathbb{C}^{\mathcal{A}_2}$.
For $\vec{\alpha}=(\alpha_1,\alpha_2) \in \mathbb{Q}^2$ and $\vec{\beta}=(\beta_1,\beta_2) \in \mathbb{Q}^2$ we study the
$\mathcal{A}$-hypergeometric integrals of the form
\begin{equation}
\label{Ahypergeom}
\phi_{\Sigma_2}\Big(\vec{\alpha}, \vec{\beta} \; \big| \, \mathbf{v}\Big) = \oiint_{\Sigma_2}
P_1(x_1,x_2)^{\alpha_1} \, P_2(x_2)^{\alpha_2} \, x_1^{\beta_1} \, x_2^{\beta_2} \, dx_1 \wedge dx_2 \;.
\end{equation}
The domain of integration is contained in $\mathcal{U}(\mathbf{P}) := (\mathbb{C}^*)^2 \backslash \cup_i \lbrace P_i=0\rbrace$
where we assume that the hypersurfaces $P_i=0$ for $i=1,2$ are smooth and intersect each other transversely.
The one-dimensional local system on $\mathcal{U}(\mathbf{P})$ defined
by the monodromy exponents $\alpha_i$ around $\lbrace P_i=0\rbrace$ and $\beta_j$ around $\lbrace x_j=0\rbrace$ for $i,j =1,2$
will be denoted by $\mathcal{L}$. Since the integrand is multivalued and can have singularities, one has to carefully explain the meaning
of the integral in Equation~(\ref{Ahypergeom}). These technical points were all addressed in \cite[Sec.\!~2.2]{MR1080980}.
There, a suitable chain complex with homology $H_*(\mathcal{U}(\mathbf{P}), \mathcal{L})$ was defined such that $\phi_{\Sigma_2}$ depends only on the
homology class of $[\Sigma_2] \in H_2(\mathcal{U}(\mathbf{P}), \mathcal{L})$.
In this way, the $\mathcal{A}$-hypergeometric integral in Equation~(\ref{Ahypergeom}) becomes a multivalued function in the variables $\mathbf{v}$.
For $v_{(2|0,1)}, v_{(2|0,0)}, v_{(1|0,1)}, v_{(1|-1,0)} \not = 0$, we have
\begin{equation}
\label{ToriAction}
\begin{split}
& \quad \phi_{\Sigma_2}\Big(\vec{\alpha}, \vec{\beta} \; \big| \, \mathbf{v}\Big) = \left( \frac{ v_{(2|0,0)} }{ v_{(2|0,1)} } \right)^{\alpha_2+\beta_2-\beta_1} \left( \frac{ v_{(1|-1,0)} }{ v_{(1|0,1)} } \right)^{1+\beta_1}
v_{(2|0,0)}^{\alpha_1} v_{(1|0,1)}^{\alpha_2} \\
\times \; & \;\left. \phi_{\Sigma_2}\left( \vec{\alpha}, \vec{\beta} \; \right| \, \kappa^4 \frac{v_{(1|3,0)}}{v_{(1|-1,0)}}, \kappa^3 \frac{v_{(1|2,0)}}{v_{(1|-1,0)}}, \kappa^2 \frac{v_{(1|1,0)}}{v_{(1|-1,0)}},
\kappa \frac{v_{(1|0,0)}}{v_{(1|-1,0)}},
1, 1, 1, 1 \right) ,
\end{split}
\end{equation}
where we have set
\begin{equation}
\kappa= \frac{v_{(2|0,1)} v_{(1|-1,0)}}{v_{(2|0,0)} v_{(1|0,1)}} \;.
\end{equation}
We define an affine version of the $\mathcal{A}$-hypergeometric integral by setting
\begin{equation}
\label{Ahypergeom_aff}
\varphi_{\Sigma_2}\Big(\vec{\alpha}, \vec{\beta} \; \big| \, w_4, w_3, w_2, w_1\Big): =\phi_{\Sigma_2}\left(\vec{\alpha}, \vec{\beta} \; \big| \,w_4, w_3, w_2, w_1, 1, 1, 1, 1\right) .
\end{equation}
We now construct the differential system satisfied by the $\mathcal{A}$-hypergeometric integrals in Equation~(\ref{Ahypergeom}).
Using the Cayley trick we combine the sets $\mathcal{A}_1$ and $\mathcal{A}_2$ into the finite set $\mathcal{A} \subset \mathbb{Z}^4$ with
$$
\mathcal{A} = \left\lbrace
\left(\begin{array}{c} 1 \\ 0 \\ \hline 3 \\ 0 \end{array}\right), \left(\begin{array}{c} 1 \\ 0 \\ \hline 2 \\ 0 \end{array}\right), \left(\begin{array}{c} 1 \\ 0 \\ \hline 1 \\ 0 \end{array}\right),
\left(\begin{array}{c} 1 \\ 0 \\ \hline 0 \\ 0 \end{array}\right), \left(\begin{array}{r} 1 \\ 0 \\ \hline -1 \\ 0 \end{array}\right), \left(\begin{array}{c} 1 \\ 0 \\ \hline 0 \\ 1 \end{array}\right),
\left(\begin{array}{c} 0 \\ 1 \\ \hline 0 \\ 0 \end{array}\right), \left(\begin{array}{c} 0 \\ 1 \\ \hline 0 \\ 1 \end{array}\right) \right\rbrace.
$$
As the union of $\mathcal{A}_1$ and $\mathcal{A}_2$ generates $\mathbb{Z}^2$ as an Abelian group, and each $\mathcal{A}_i$ contains zero, the set $\mathcal{A}$ generates
$\mathbb{Z}^4$. There is a group homomorphism $h: \mathbb{Z}^4 \to \mathbb{Z}$ such that $h(\vec{\rho})=1$ for every $\vec{\rho} \in \mathcal{A}$. The homomorphism $h$ is obtained by
taking the sum of the first two components of each vector. This means that $\mathcal{A}$ lies in a three-dimensional affine hyperplane in $\mathbb{Z}^4$.
Denote by $L(\mathcal{A}) \subset \mathbb{Z}^\mathcal{A}$ the lattice of linear relations among the elements of $\mathcal{A}$, i.e., the set of integer row vectors $(a_{\vec{\rho}^{\, t}})_{\vec{\rho} \in \mathcal{A}}$
such that $\sum_{\vec{\rho} \in \mathcal{A}} a_{\vec{\rho}^{\, t}} \cdot \vec{\rho} = 0$. In our case, the lattice of relations satisfies $L(\mathcal{A})\cong \mathbb{Z}^4$ and is generated by
the following row vectors
$$
L(\mathcal{A})\cong\left\lbrack \begin{array}{rrrrrr|rr}
a_{(1|3,0)} & a_{(1|2,0)} & a_{(1|1,0)} & a_{(1|0,0)} & a_{(1|-1,0)} & a_{(1|0,1)} & a_{(2|0,0)} & a_{(2|0,1)} \\
\hline
0 & 0 & 0 & -1 & 0 & 1 & 1 & -1 \\
0 & 0 & 1 & -2 & 1 & 0 & 0 & 0 \\
0 & 1 & -2 & 1 & 0 & 0 & 0 & 0 \\
1 & -2 & 1 & 0 & 0 & 0 & 0 & 0 \\
\end{array}\right\rbrack \;.
$$
It follows from the above construction that the quotient $\mathbb{Z}^8/L(\mathcal{A})$ is isomorphic to $\mathbb{Z}^4$ and, in particular, torsion free.
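Both claims — that the displayed rows are indeed relations among the elements of $\mathcal{A}$, and that all invariant factors of $L(\mathcal{A})$ equal $1$, so that the quotient is torsion free of rank four — can be confirmed by a direct Smith-normal-form computation; a sketch in SymPy:

```python
import sympy as sp
from sympy import ZZ
from sympy.matrices.normalforms import smith_normal_form

# columns of A: the eight vectors in Z^4, in the order of the display above
A = sp.Matrix([[1, 0, 3, 0], [1, 0, 2, 0], [1, 0, 1, 0], [1, 0, 0, 0],
               [1, 0, -1, 0], [1, 0, 0, 1], [0, 1, 0, 0], [0, 1, 0, 1]]).T
# rows of L(A): the four generating relations, in the same column order
L = sp.Matrix([[0, 0, 0, -1, 0, 1, 1, -1],
               [0, 0, 1, -2, 1, 0, 0, 0],
               [0, 1, -2, 1, 0, 0, 0, 0],
               [1, -2, 1, 0, 0, 0, 0, 0]])
# every generator is a linear relation among the elements of A
assert A * L.T == sp.zeros(4, 4)
# all invariant factors equal 1, so Z^8 / L(A) is torsion free of rank 4
snf = smith_normal_form(L, domain=ZZ)
assert sorted(abs(snf[i, i]) for i in range(4)) == [1, 1, 1, 1]
```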
The complex torus $(\mathbb{C}^*)^{\mathcal{A}}$ within $\mathbb{C}^\mathcal{A}$, i.e., within the space of all vectors $\mathbf{v}=(v_{\vec{\rho}^{\, t}})_{\vec{\rho} \in \mathcal{A}}$ that
define pairs $\mathbf{P}=(P_1,P_2)$ of Laurent polynomials, contains a subtorus $\mathbb{T}^4$ such that
the quotient equals
$$
(\mathbb{C}^*)^{\mathcal{A} }/ \mathbb{T}^4 = \operatorname{Hom}\Big(L(\mathcal{A}), \mathbb{C}^*\Big) \;.
$$
In order to obtain the natural space on which the $\mathcal{A}$-hypergeometric integrals in Equation~(\ref{Ahypergeom}) are defined,
Gel'fand, Kapranov and Zelevinsky developed the theory of the \emph{secondary fan}, i.e., a complete fan of rational polyhedral
cones in the real vector space $\operatorname{Hom}(L(\mathcal{A}), \mathbb{R})$. The associated toric variety in turn determines the domains
of convergence for various series expansions of the solutions as discs about the special points coming from the maximal cones in the secondary fan.
Equation~(\ref{Ahypergeom_aff}) then gives an integral representation in an affine chart.
Using the components of the vectors in $\mathcal{A}$, we define the first-order linear differential operators
\begin{equation}
\begin{split}
\mathsf{Z}_1 = \sum_{k=-1}^3 v_{(1|k,0)} \frac{\partial}{\partial v_{(1|k,0)} } + v_{(1|0,1)} \frac{\partial}{\partial v_{(1|0,1)} }, &\quad
\mathsf{Z}_2 = v_{(2|0,0)} \frac{\partial}{\partial v_{(2|0,0)}} + v_{(2|0,1)} \frac{\partial}{\partial v_{(2|0,1)} }, \\
\mathsf{Z}_3 = \sum_{k=-1}^3 k \, v_{(1|k,0)} \frac{\partial}{\partial v_{(1|k,0)} }, & \quad
\mathsf{Z}_4 = v_{(1|0,1)} \frac{\partial}{\partial v_{(1|0,1)}} + v_{(2|0,1)} \frac{\partial}{\partial v_{(2|0,1)} } .
\end{split}
\end{equation}
Similarly, using $L(\mathcal{A})$ one defines the second-order linear differential operators
\begin{equation}
\begin{split}
\Box_1 = \frac{\partial^2}{\partial v_{(2|0,0)}\, \partial v_{(1|0,1)}} - \frac{\partial^2}{\partial v_{(2|0,1)} \, \partial v_{(1|0,0)}} , & \quad
\Box_2 = \frac{\partial^2}{\partial v_{(1|-1,0)}\, \partial v_{(1|1,0)}} - \frac{\partial^2}{\partial v_{(1|0,0)}^{\,2}} ,\\
\Box_3 = \frac{\partial^2}{\partial v_{(1|0,0)}\, \partial v_{(1|2,0)}} - \frac{\partial^2}{\partial v_{(1|1,0)}^{\,2}} , & \quad
\Box_4 = \frac{\partial^2}{\partial v_{(1|1,0)}\, \partial v_{(1|3,0)}} - \frac{\partial^2}{\partial v_{(1|2,0)}^{\,2}} .
\end{split}
\end{equation}
The following lemma follows by applying \cite[Thm.\!~2.7]{MR1080980}:
\begin{lemma}
\label{lem:GKZ}
Under the above assumptions the $\mathcal{A}$-hypergeometric integral in Equation~(\ref{Ahypergeom}) satisfies for every $\Sigma_2 \in H_2(\mathcal{U}(\mathbf{P}),\mathcal{L})$ the system $\Phi$
of linear partial differential equations with finite dimensional solution space given by
\begin{equation}
\Phi \Big(\vec{\alpha}, \vec{\beta} \; \big| \, \mathbf{v}\Big) : \quad \left\lbrace \quad
\begin{aligned}
\Box_i \; \phi_{\Sigma_2}\Big(\vec{\alpha}, \vec{\beta} \; \big| \, \mathbf{v}\Big) & = 0 \;, \\
\mathsf{Z}_j \;\phi_{\Sigma_2}\Big(\vec{\alpha}, \vec{\beta} \; \big| \, \mathbf{v}\Big) & = \gamma_j \; \phi_{\Sigma_2}\Big(\vec{\alpha}, \vec{\beta} \; \big| \, \mathbf{v}\Big) \;
\end{aligned}\right.
\end{equation}
for $i, j=1, \dots, 4$ and $\vec{\gamma} = \langle \alpha_1, \alpha_2, -\beta_1-1, -\beta_2-1 \rangle$.
\end{lemma}
\subsection{Differential systems from fibrations $\mathcal{J}_1$, $\mathcal{J}_2$, $\mathcal{J}_3$, $\mathcal{J}_5$}
For the remaining fibrations we will give a description of the Picard-Fuchs system by restricting the particular GKZ system described in Section~\ref{sec:GKZ}.
We have the following lemma:
\begin{lemma}
\label{lem:JJ}
For $i=1,2,3$ over $K[d_i]$ with $d_i^2$ given in Table~\ref{tab:KummerFibs},
the Picard-Fuchs system for the periods of the holomorphic two-form on the family
$\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ is given by
\begin{equation}
\label{PF_JJ}
\frac{1}{g_i} \; \Phi \left.\left(\vec{\alpha}_i, \vec{\beta}_i \; \right| \, \mathbf{v}_i \right)\;,
\end{equation}
where $g_i, \vec{\alpha}_i, \vec{\beta}_i, \mathbf{v}_i$ are given in Table~\ref{tab:KummerGKZ}.
In particular, the restrictions define systems of linear differential equations in two variables holonomic of rank four.
\end{lemma}
\begin{proof}
We first look at the fibration $\mathcal{J}_2$.
Using the Jacobian elliptic fibration $\mathcal{J}_2$ on $\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$
and the holomorphic two-form $\omega=du \wedge dX/Y$, there is a transcendental two-cycle $\Sigma'_2$ such that the period integral
reduces to the iterated integral
\begin{equation}
\oiint_{ \Sigma'_2} \omega = 2 \int_0^\infty \dfrac{du}{\sqrt{\,T_2(u)}} \; \oint_{\Sigma'_1} \frac{dx}{y} \;,
\end{equation}
where we used Proposition~\ref{Prop1} to relate the double integral to an integral for the holomorphic one-form $dx/y$ on the extremal
rational elliptic surface $\mathcal{X}_{411}$ and then reduced the outer integration to an integration along a branch cut.
Using Remark~\ref{Rem:dual_period} and Equation~(\ref{IntegralTransform}), we evaluate the period integral further to obtain
\begin{equation}
\begin{split}
\oiint_{ \Sigma'_2} \omega &= 2 \int_0^\infty \dfrac{du}{\sqrt{\,T_2(u)}}\; \dfrac{1}{\sqrt{t_2}} \; \hpg21{ \frac{1}{2}, \frac{1}{2}}{1}{\frac{1}{t_2}} \;.
\end{split}
\end{equation}
Using the integral representation of $\hpgo21$ in Equation~(\ref{GaussIntegral}) and Table~\ref{tab:KummerFibs}, it follows that
the period integrals are of $\mathcal{A}$-hypergeometric type and annihilated by the GKZ system
\begin{equation}
\dfrac{1}{\sqrt{d_2}} \; \Phi \left.\left(\vec{\alpha}=\left\langle - \frac{1}{2}, - \frac{1}{2} \right\rangle, \vec{\beta}=\left\langle - \frac{1}{2}, - \frac{1}{2} \right\rangle\; \right| \, \mathbf{v}\right) \;,
\end{equation}
where we have set
\begin{equation}
\mathbf{v} = \left( \frac{1}{16 \, d_2}, 0, \frac{(2\lambda_1\lambda_2-\lambda_1-\lambda_2+2)}{8 \, d_2}, \frac{1}{2}, \frac{(\lambda_1-\lambda_2)^2}{16 \, d_2}, 1, 1, 1\right).
\end{equation}
We use the torus action given in Equation~(\ref{ToriAction}) to normalize.
For $\mathcal{J}_5$ we applied the transformation $(Y,X,u) \mapsto (Y/u^6,X/u^4,1+1/u)$ in the proof of Proposition~\ref{Prop1}.
The transformation changes only the sign of the holomorphic two-form. The result for fibration $\mathcal{J}_5$ then follows from the computation for $\mathcal{J}_2$.
For fibration $\mathcal{J}_1$ the coordinate transformation $u \mapsto \sqrt{u}$ allows us to show that the period integrals are of $\mathcal{A}$-hypergeometric type
and annihilated by the GKZ system
\begin{equation}
\dfrac{1}{\sqrt{d_1}} \; \Phi \left.\left(\vec{\alpha}=\left\langle - \frac{1}{2}, - \frac{1}{2} \right\rangle, \vec{\beta}=\left\langle - 1, - \frac{1}{2} \right\rangle\; \right| \, \mathbf{v}\right) \;,
\end{equation}
where we have set
\begin{equation}
\mathbf{v} = \left( 0, 0, \frac{(1-\lambda_1)^2}{16 \, d_1}, \frac{1}{2} - \frac{(1+\lambda_1)(1+\lambda_2)}{8 \, d_1}, \frac{(1-\lambda_2)^2}{16 \, d_1}, 1, 1, 1 \right) \;.
\end{equation}
Again we use the torus action given in Equation~(\ref{ToriAction}) to normalize.
The result for fibration $\mathcal{J}_3$ follows closely the computation for $\mathcal{J}_1$.
\end{proof}
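The normalization step via the torus action can be illustrated for $\mathcal{J}_1$: applying Equation~(\ref{ToriAction}) with $\kappa=v_{(1|-1,0)}$ to the raw Laurent coefficients read off from $t_1$ reproduces the entries of Table~\ref{tab:KummerGKZ}. A SymPy sketch (the variable names are ours):

```python
import sympy as sp

l1, l2, d1 = sp.symbols('lambda1 lambda2 d1', positive=True)
# raw coefficients of the Laurent polynomial read off from t_1
w2  = (1 - l1)**2/(16*d1)                            # v_{(1|1,0)}
w1  = sp.Rational(1, 2) - (1 + l1)*(1 + l2)/(8*d1)   # v_{(1|0,0)}
vm1 = (1 - l2)**2/(16*d1)                            # v_{(1|-1,0)}
kappa = vm1  # since v_{(2|0,1)} = v_{(2|0,0)} = v_{(1|0,1)} = 1 here
# normalized entries produced by the torus action (ToriAction)
n2 = kappa**2*w2/vm1
n1 = kappa*w1/vm1
# these match the entries of Table tab:KummerGKZ for J_1
assert sp.simplify(n2 - (1 - l1)**2*(1 - l2)**2/(2**8*d1**2)) == 0
assert sp.simplify(n1 - w1) == 0
```

In this case $\kappa = v_{(1|-1,0)}$, so the coefficient $v_{(1|0,0)}$ is left unchanged by the normalization.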
In summary, we have considered various quadratic field extensions of the field $K=\mathbb{C}(\lambda_1,\lambda_2)$ of moduli for the pair of elliptic
curves $\mathcal{E}_1$ and $\mathcal{E}_2$, and we have obtained all representations of the Picard-Fuchs system satisfied by the periods of the holomorphic two-form
that can be derived from the eleven Jacobian elliptic fibrations on the Kummer surface $\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ of two non-isogenous elliptic curves:
\begin{theorem}
\label{thm:GM}
The Picard-Fuchs system for the periods of the holomorphic two-form on the family $\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$
of Kummer surfaces for two non-isogenous elliptic curves $\mathcal{E}_1$ and $\mathcal{E}_2$ with modular parameters $\lambda_1$ and $\lambda_2$, respectively,
has the following equivalent representations as linear differential systems in two variables holonomic of rank four:
\begin{equation}
\label{form:J4}
\hpgd21{\frac{1}{2},\,\frac{1}{2}}{1}{\lambda_1} \boxtimes \hpgd21{\frac{1}{2},\,\frac{1}{2}}{1}{\lambda_2} \;.
\end{equation}
Over $K[d_7]$ with $d_7^2=\lambda_1\lambda_2$, the system~(\ref{form:J4}) is equivalent to the Appell hypergeometric system
\begin{equation}
\label{form:J7}
\dfrac{1}{\sqrt{\lambda_1+\lambda_2+2 \, d_7}} \; \appd2{\frac{1}{2};\;\frac{1}{2},\frac{1}{2}}{1,1}{\dfrac{4 \, d_7}{\lambda_1+\lambda_2+2 \, d_7}, -\dfrac{(1-\lambda_1)(1-\lambda_2)}{\lambda_1+\lambda_2+2 \, d_7}} \;.
\end{equation}
Over $K[d_9]$ with $d_9^2= (\lambda_1^2 - \lambda_1 +1) (\lambda_2^2 - \lambda_2 +1)$, the system~(\ref{form:J4}) is equivalent to the Appell hypergeometric system
\begin{equation}
\label{form:J9}
\dfrac{1}{\sqrt{R_9 + S_9 + 4 \, d_9}} \; \appd2{\frac{1}{2};\;\frac{1}{6},\frac{1}{2}}{\frac{1}{3},1}{\dfrac{8 \,d_9}{R_9 + S_9 + 4 \, d_9}, \dfrac{S_9}{R_9 + S_9 + 4 \, d_9}} \;,
\end{equation}
with
\begin{equation}
\begin{split}
R_9 & = 27 \, \lambda_1 (\lambda_1-1) \lambda_2 (\lambda_2-1) \;, \\
S_9 & = (\lambda_1+1)(\lambda_1-2)(2\lambda_1-1) (\lambda_2+1)(\lambda_2-2)(2\lambda_2-1) \;.
\end{split}
\end{equation}
Over $K[d_i]$ for $i=1,2,3$ with $d_i^2$ given in Table~\ref{tab:KummerFibs}, the system~(\ref{form:J4}) is equivalent to the restrictions of the GKZ system
introduced in Section~\ref{sec:GKZ} given by
\begin{equation}
\label{form:JJ}
\frac{1}{g_i} \; \Phi \left.\left(\vec{\alpha}_i, \vec{\beta}_i \; \right| \, \mathbf{v}_i \right)
\end{equation}
where $\vec{\alpha}_i, \vec{\beta}_i, g_i, \mathbf{v}_i$ are given in Table~\ref{tab:KummerGKZ}.
\end{theorem}
\begin{proof}
The comparison of Lemma~\ref{lem:J4}, Lemma~\ref{lem:J7}, Lemma~\ref{lem:J9}, and Lemma~\ref{lem:JJ}
gives the desired result.
\end{proof}
In particular, the comparison of Equation~(\ref{form:J4}) and Equation~(\ref{form:J9}) proves that
the Appell hypergeometric system can be decomposed as an outer tensor product of two rank-two systems using a cubic transformation.
\newpage
\begin{table}[ht]
\scalebox{0.75}{
\begin{tabular}{|c|lcl|c|c|c|c|} \hline
\# & \multicolumn{3}{|c|}{$g_2, g_3, \Delta, J$} & \multicolumn{4}{|c|}{ramification of $J$ and singular fibers} \\[0.2em]
\hline
$\operatorname{MW}(\pi)$ & \multicolumn{3}{|c|}{sections} & $t$ & $J$ & $m(J)$ & fiber \\
\hline
\hline
$\mathcal{X}_{11}(\lambda)$ & $g_2$ & $=$ & $\frac{16}{3} (\lambda ^2 - \lambda +1)(t-1)^2$ & $1$ & $J(\lambda)$ & - & $I_0^* \; \; (D_4)$ \\[0.4em]
$\mu=0$ & $g_3$ & $=$ & $\frac{32}{27}(\lambda - 2)(\lambda +1)(2 \lambda -1)(t-1)^3$ & $\infty$ & $J(\lambda)$ & - & $I_0^* \; \; (D_4)$ \\[0.4em]
& $\Delta$& $=$ & $16 \, \lambda^2 (\lambda-1)^2 \, (t-1)^6$ & & & & \\[0em]
& $J=J(\lambda)$& $=$ & $\frac{4 \,(\lambda^2-\lambda+1)^3}{27 \lambda^2 (\lambda-1)^2}$ & & & & \\[0.3em]
\cline{1-4}
$(\mathbb{Z}/2\mathbb{Z})^2$ & $(X,Y)_{1}$ & $=$ & $(-\frac{2}{3}(\lambda+1)(t-1),0)$ & & & & \\[0.4em]
& $(X,Y)_{2}$ & $=$ & $(-\frac{2}{3}(\lambda-2)(t-1),0)$ & & & & \\[0.4em]
& $(X,Y)_{3}$ & $=$ & $(\frac{2}{3}(2\lambda-1)(t-1),0)$ & & & & \\[0.4em]
\hline
$\mathcal{X}_{411}$ & $g_2$ & $=$ & $ \frac{1}{3} (64t^2-64t+4)$ & $\frac{1}{4} \left( 2 \pm \sqrt{3}\right)$ & $0$ & $3$ & smooth \\[0.4em]
$\mu=\frac{1}{2}$ & $g_3$ & $=$ & $\frac{8}{27}(2t-1)(32t^2-32t-1)$ & $\frac{1}{8} \left( 4 \pm 3 \sqrt{2}\right), \frac{1}{2}$ & $1$ & $2$ & smooth \\[0.4em]
& $\Delta$& $=$ & $256 \, t \, (t-1)$ & $0$ & $\infty$ & $1$ & $I_1$ \\[0.0em]
& $J$ & $=$ & $\frac{(16t^2-16 t+1)^3}{108 t (t-1)}$ & $1$ & $\infty$ & $1$ & $I_1$ \\[0.3em]
\cline{1-4}
$\mathbb{Z}/2\mathbb{Z}$ & $(X,Y)_1$ & $=$ & $(-\frac{4}{3}t+\frac{2}{3},0)$ & $\infty$ & $\infty$ & $4$ & $I_4^* \; (D_8)$ \\[0.4em]
\hline
$\mathcal{X}_{222}$ & $g_2$ & $=$ & $\frac{16}{3} (t^2-t+1)$ & $\frac{1}{2} \left( 1 \pm i \sqrt{3}\right)$ & $0$ & $3$ & smooth \\[0.4em]
$\mu=\frac{1}{2}$ & $g_3$ & $=$ & $\frac{32}{27}(t-2)(t+1)(2t-1)$ & $-1, \frac{1}{2}, 2$ & $1$ & $2$ & smooth \\[0.4em]
& $\Delta$& $=$ & $1024 \, t^2 \, (t-1)^2$ & $0$ & $\infty$ & $2$ & $I_2$ \\[0.0em]
& $J$ & $=$ & $\frac{4 \,(t^2-t+1)^3}{27 t^2 (t-1)^2}$ & $1$ & $\infty$ & $2$ & $I_2$ \\[0.3em]
\cline{1-4}
$(\mathbb{Z}/2\mathbb{Z})^2$ & $(X,Y)_1$ & $=$ & $(-\frac{2}{3}(t+1),0)$ & $\infty$ & $\infty$ & $2$ & $I_2^* \; (D_6)$ \\[0.4em]
& $(X,Y)_2$ & $=$ & $(-\frac{2}{3}(t-2),0)$ & & & & \\[0.4em]
& $(X,Y)_3$ & $=$ & $(\frac{2}{3}(2t-1),0)$ & & & & \\[0.4em]
\hline
$\mathcal{X}_{211}$ & $g_2$ & $=$ & $3$ & $\infty$ & $0$ & $2$ & $II^*\; \; (E_8)$ \\[0.4em]
$\mu=\frac{1}{6}$ & $g_3$ & $=$ & $-1 + 2 \, t$ & $\frac{1}{2}$ & $1$ & $2$ & smooth \\[0.4em]
& $\Delta$& $=$ & $- 108 \, t \, (t-1)$ & $0$ & $\infty$ & $1$ & $I_1$ \\[0.0em]
\cline{1-1}
$\lbrace 0 \rbrace$ & $J$ & $=$ & $- \frac{1}{4 \, t \, (t-1)}$ & $1$ & $\infty$ & $1$ & $I_1$ \\[0.4em]
\hline
\end{tabular}
\caption{Extremal rational elliptic surfaces}\label{tab:3ExtRatHg}}
\end{table}
\begin{landscape}
\begin{table}[ht]
\scalebox{0.75}{
\begin{tabular}{|c|c|c|c|}
\hline
\# & singular fibers & rational & rational base transformation \\[-1pt]
\cline{2-2}\cline{4-4}
& $\operatorname{MW}(\pi)$ & surface & quadratic twist, $d^2$ \\
\hline
\hline
&&&\\[-1.7em]
$\mathcal{J}_{1}$ & $2 I_{8} + 8 I_{1}$
& $\mathcal X_{411}$
& $t_1=\dfrac{(1-\lambda_1)^2 \, u^4 - 2(1+\lambda_1)(1+\lambda_2) \, u^2 + (1-\lambda_2)^2}{16 \, d_1 \, u^2}+ \dfrac{1}{2}$ \\[-3pt]
& $\mathbb{Z}^2 \oplus \mathbb{Z}/2\mathbb{Z}$
& & $T_1=- \,4 \, d_1 \, u^2, \quad d_1^2 = \lambda_1 \lambda_2$ \\
\hline
&&&\\[-1.7em]
$\mathcal{J}_{2}$ & $I_{4} +I_{12} + 8I_{1} $
& $\mathcal X_{411}$
& $t_2= \dfrac{u^4 + 2 \, (2\lambda_1\lambda_2-\lambda_1-\lambda_2+2) \, u^2 + (\lambda_1-\lambda_2)^2}{16 \, d_2 \, u} + \dfrac{1}{2}$ \\[-3pt]
& $A_2^*[2] \oplus \mathbb{Z}/2\mathbb{Z}$
& & $T_2=- \, 4 \, d_2 \, u, \quad d_2^2=-\lambda_1 \lambda_2 (1-\lambda_1)(1-\lambda_2)$ \\
\hline
&&&\\[-1.7em]
$\mathcal{J}_{3}$ & $2 IV^{*} + 8 I_{1}$
& $\mathcal X_{211}$
& $t_3=\dfrac{27\lambda_1^2(\lambda_1-1)^2\,u^4 + 2 (\lambda_1+1)(\lambda_1-2)(2\lambda_1-1)(\lambda_2+1)(\lambda_2-2)(2\lambda_2-1) \, u^2 + 27\lambda_2^2(\lambda_2-1)^2}{16\, d_3^3\, u^2} + \dfrac{1}{2}$ \\
& $\big(A_2^*[2]\big)^2$
& & $T_3=-\frac{8}{3} \, d_3 \, u^2, \quad d_3^2=(\lambda_1^2-\lambda_1+1)(\lambda_2^2-\lambda_2+1)$\\
\hline
$\mathcal{J}_{4}$ & $4I_{0}^{*}$
& $\mathcal{X}_{11}(\lambda_2)$
& $t_4=u$ \\[-4pt]
& $\big(\mathbb{Z}/2\mathbb{Z}\big)^2$
& & $T_4=\frac{1}{2} \, u\,(u-\lambda_1)$\\
\hline
&&&\\[-1.7em]
$\mathcal{J}_{5}$ & $I_{6}^{*} + 6 I_{2}$
& $\mathcal X_{222}$
& $t_5= \dfrac{-\lambda_1^2 (\lambda_2-1)^2 u^3 +\lambda_1(\lambda_2-1)(1+\lambda_1+\lambda_2-2\lambda_1\lambda_2)\, u^2-(1-\lambda_1\lambda_2)(\lambda_1+\lambda_2-\lambda_1\lambda_2) \, u}{\lambda_2(\lambda_1-1)}+1$\\
& $\big(\mathbb{Z}/2\mathbb{Z}\big)^2$
& & $T_5=-\frac{1}{2} \, \lambda_1 \lambda_2 (\lambda_1-1) (\lambda_2-1)$\\
\hline
&&&\\[-1.7em]
$\mathcal{J}_{6}$ & $2 I_{2}^{*} + 4 I_{2}$
& $\mathcal X_{222}$
& $t_6 = \dfrac{\lambda_2\, u^2 + (\lambda_2- \lambda_1) \, u + \lambda_1}{(1-\lambda_1)(1-\lambda_2) \, u}$\\
& $\big(\mathbb{Z}/2\mathbb{Z}\big)^2$
& & $T_6=- \frac{1}{2} \, (\lambda_1-1) \, (\lambda_2-1) \, u^2$\\
\hline
&&&\\[-1.7em]
$\mathcal{J}_{7}$ & $I_{4}^{*} + 2 I_{0}^{*} + 2 I_{1}$
& $\mathcal X_{411}$
& $t_7 =\dfrac{(\lambda_1\lambda_2+1) \, u - \lambda_1 -\lambda_2}{4 \, d_7 \, (u-1)} + \dfrac{1}{2}$ \\
& $\mathbb{Z}/2\mathbb{Z}$
& & $T_7=d_7 \, u \, (u-1)^2, \quad d_7^2=\lambda_1\lambda_2$\\
\hline
&&&\\[-1.7em]
$\mathcal{J}_{9}$ & $II^{*} + 2 I_{0}^{*} + 2 I_{1}$
& $\mathcal X_{211}$
& $t_9 =\dfrac{B_9 \, u - A_9}{u-1}, \quad T_9 = - \frac{2}{3} d_9 \, u \, (u-1)^2,\quad d_9^2= (\lambda_1^2 - \lambda_1 +1) (\lambda_2^2 - \lambda_2 +1)$ \\[3pt]
& $\lbrace 0 \rbrace$
&& $A_9= \dfrac{(2 \lambda_1 \lambda_2 - \lambda_1 - \lambda_2 +2) (\lambda_1 \lambda_2 + \lambda_1 - 2 \lambda_2 +1) (\lambda_1 \lambda_2 - 2 \lambda_1 + \lambda_2 +1) }{4 \, d^3_9} - \dfrac{1}{2}$ \\[5pt]
&&& $B_9 = \dfrac{(2 \lambda_1 \lambda_2 -\lambda_1 - \lambda_2 -1 )(\lambda_1 \lambda_2 + \lambda_1 + \lambda_2 -2) (\lambda_1 \lambda_2 - 2 \lambda_1 - 2 \lambda_2 +1) }{4 \, d^3_9} -\dfrac{1}{2}$ \\[8pt]
\hline
\end{tabular}
\caption{Fibrations on $\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$ by rational base transformations and quadratic twists}\label{tab:KummerFibs}}
\end{table}
\end{landscape}
\begin{table}[ht]
\scalebox{0.75}{
\begin{tabular}{|c|c|c|c|}
\hline
\# & singular fibers & related & rational transformation \\[-1pt]
\cline{2-2}
& $\operatorname{MW}(\pi)$ & fibration & \\
\hline
\hline
&&&\\[-1.7em]
$\mathcal{J}_{8}$ & $III^* + I_2^* + 3 I_2 + I_1$ & $\mathcal{J}_{7}$
& $u_8 = \dfrac{u_7^2 \, (u_7-1)}{x_7}$ \\[5pt]
& & & $x_8=\dfrac{\big(x_7- (\lambda_1-1)(\lambda_2-1) u_7^2(u_7-1)\big)\, u_7^2}{x_7^2}$ \\[5pt]
\cline{2-2}
& $\mathbb{Z}/2\mathbb{Z}$ & & $y_8 = -\dfrac{\big(x_7 - (\lambda_1-1)(\lambda_2-1)u_7^2(u_7-1)\big) \, u_7^4 \, y_7}{x_7^4}$ \\[10pt]
\hline
&&&\\[-1.7em]
$\mathcal{J}_{10}$ & $I_8^*+I_0^*+4 I_1$ & $\mathcal{J}_{9}$
& $u_{10}=\dfrac{x_9}{(u_9-1)^2}$ \\[5pt]
& & & $x_{10}=-\dfrac{\lambda_1 \lambda_2 (\lambda_1-1)(\lambda_2-1)\, u_9}{u_9-1}$ \\[5pt]
\cline{2-2}
& $\lbrace 0 \rbrace$ & & $y_{10} = -\dfrac{\lambda_1 \lambda_2 ( \lambda_1-1)(\lambda_2 -1) y_9}{(u_9-1)^4}$ \\[10pt]
\hline
&&&\\[-1.7em]
$\mathcal{J}_{11}$ & $2 I_4^*+4 I_1$ & $\mathcal{J}_{7}$
& $u_{11} = \dfrac{\lambda_2 u_7^2 (u_7 -1)^2}{x_7}$ \\[5pt]
& & & $x_{11} = -\dfrac{\lambda_2^2 (\lambda_1-1)(\lambda_2-1) \, u_7^4 \, (u_7-1)^3}{x_7^2}$ \\[5pt]
\cline{2-2}
& $\lbrace 0 \rbrace$ & & $y_{11} = \dfrac{\lambda_2^3 (\lambda_1 - 1)(\lambda_2-1) \, u_7^6 \, (u_7 - 1)^4\, y_7}{x_7^4}$ \\[10pt]
\hline
\end{tabular}
\caption{Related elliptic fibrations on $\operatorname{Kum}(\mathcal{E}_1\times \mathcal{E}_2)$}\label{tab:KummerRels}}
\end{table}
\bigskip
\begin{table}[ht]
\scalebox{0.75}{
\begin{tabular}{|c|c|c|c|}
\hline
\# & $\vec{\alpha}, \vec{\beta}$ & $g$ & $\mathbf{v}$ \\[5pt]
\hline
\hline
$\mathcal{J}_{1}$ & $\vec{\alpha}=\left\langle - \frac{1}{2}, - \frac{1}{2} \right\rangle$ & $g_1 =\sqrt{d_1}$
& $\mathbf{v}_1=\left( 0, 0,v_{(1|1,0)}, v_{(1|0,0)}, 1, 1, 1, 1 \right)$ \\[5pt]
\cline{3-3}
& $\vec{\beta}=\left\langle - 1, - \frac{1}{2} \right\rangle$ &
\multicolumn{2}{|c|}{$ v_{(1|1,0)}=\frac{(1-\lambda_1)^2(1-\lambda_2)^2}{2^8 \, d_1^2}, \; v_{(1|0,0)}= \frac{1}{2} - \frac{(1+\lambda_1)(1+\lambda_2)}{8 \, d_1}$}\\[5pt]
\hline
$\mathcal{J}_{2}$ & $\vec{\alpha}=\left\langle - \frac{1}{2}, - \frac{1}{2} \right\rangle$ & $g_2 =\frac{d_2}{\lambda_1-\lambda_2}$
& $\mathbf{v}_2=\left( v_{(1|3,0)}, 0, v_{(1|1,0)}, \frac{1}{2}, 1, 1, 1, 1 \right)$ \\[5pt]
& $\vec{\beta}=\left\langle - \frac{1}{2}, - \frac{1}{2} \right\rangle$ &
\multicolumn{2}{|c|}{$v_{(1|3,0)} = \frac{(\lambda_1-\lambda_2)^6}{2^{16} \, d_2^4}, \; v_{(1|1,0)}= \frac{(2\lambda_1\lambda_2-\lambda_1-\lambda_2+2)(\lambda_1-\lambda_2)^2}{2^7 \, d_2^2}$}\\[5pt]
\hline
$\mathcal{J}_{3}$ & $\vec{\alpha}=\left\langle - \frac{1}{6}, - \frac{5}{6} \right\rangle$ & $g_3 =\sqrt{d_3}$
& $\mathbf{v}_3=\left( 0, 0, v_{(1|1,0)}, v_{(1|0,0)}, 1, 1, 1, 1 \right)$ \\[5pt]
\cline{3-3}
& $\vec{\beta}=\left\langle - 1, - \frac{5}{6} \right\rangle$ &
\multicolumn{2}{|c|}{$ v_{(1|1,0)} =\frac{3^6\lambda_1^2\lambda_2^2(1-\lambda_1)^2(1-\lambda_2)^2}{2^8 \, d_3^6}, \; v_{(1|0,0)}= \frac{1}{2} + \frac{(\lambda_1+1)(\lambda_1-2)(2\lambda_1-1)(\lambda_2+1)(\lambda_2-2)(2\lambda_2-1)}{8 \, d_3^3}$}\\[5pt]
\hline
$\mathcal{J}_{5}$ & $\vec{\alpha}=\left\langle -\frac{1}{2}, - \frac{1}{2} \right\rangle$ & $g_5 =\sqrt{\frac{\lambda_1(\lambda_2-1)}{\lambda_2(\lambda_1-1)}}(1-\lambda_1\lambda_2)(\lambda_1+\lambda_2-\lambda_1\lambda_2)$
& $\mathbf{v}_5=\left( v_{(1|3,0)}, v_{(1|2,0)}, 1, 0, 0, 1, 1, 1 \right)$ \\[5pt]
\cline{3-3}
& $\vec{\beta}=\left\langle 0, - \frac{1}{2} \right\rangle$ &
\multicolumn{2}{|c|}{$v_{(1|3,0)} = - \frac{\lambda_1^2\lambda_2^2(\lambda_1-1)^2(\lambda_2-1)^2}{(1-\lambda_1\lambda_2)^3(\lambda_1+\lambda_2-\lambda_1\lambda_2)^3}, \;
v_{(1|2,0)} = - \frac{\lambda_1\lambda_2(\lambda_1-1)(\lambda_2-1)(1+\lambda_1+\lambda_2-2\lambda_1\lambda_2)}{(1-\lambda_1\lambda_2)^2(\lambda_1+\lambda_2-\lambda_1\lambda_2)^2}$}\\[5pt]
\hline
\end{tabular}
\caption{Restrictions of the GKZ system from Section~\ref{sec:GKZ}}\label{tab:KummerGKZ}}
\end{table}
\newpage
\bibliographystyle{amsplain}
{\center\section*{Figure captions}}
\begin{figure*}[p]
\center
\resizebox{10cm}{!}{\includegraphics{fig1.pdf}}
\caption{Close-up view of the bipolar planetary nebula Henize~2--428. This 2 hour-deep image in H$\alpha$ 656.3~nm was observed with the INT/WFC. North is up and East is to the left.}
\label{Fp}
\end{figure*}
\begin{figure*}[p]
\center
\resizebox{14cm}{!}{\includegraphics{fig2.pdf}}
\caption{{\it (a)} Light curve measurements and model. {\it (a)} Light curves of Henize~2--428 in the Johnson B and Sloan i filters (0.44 and 0.78 $\mu$m, respectively) and model, along with {\it (b)} their respective residuals. The B-band data have been shifted up by 1 magnitude for displaying purposes. The data are shown here folded on the orbital period of the system, 0.1758 day, or 4.2 hours, along with the model (solid line). Error bars represent 1-$\sigma$ formal measurement errors.}
\label{F2}
\end{figure*}
\begin{figure*}[p]
\center
\resizebox{14cm}{!}{\includegraphics{fig3.pdf}}
\caption{Time evolution of the spectrum profile of Henize~2-428. The double He {\sc ii} 541.2~nm absorption lines show significant Doppler-shifts in the VLT spectra. The flux is normalized with respect to the continuum. Velocities with respect to the He {\sc ii} 541.2~nm rest wavelength are displayed in the $x$ axis. The top spectrum {\it (a)} corresponds to the night of June 19, 2010, while the three remaining, consecutive spectra were taken on July 8, 2012 and are chronologically ordered from top to bottom {\it (b, c, d)}.}
\label{F3}
\end{figure*}
\begin{figure*}[p]
\center
\resizebox{14cm}{!}{\includegraphics{fig4.pdf}}
\caption{Radial velocity measurements and orbit solution. {\it (a)} Radial velocity curves of the central stars of Henize~2--428 obtained with GTC/OSIRIS on August 11, 2013, and model, along with {\it (b)} their respective residuals. The data have been folded on the 4.2 hour period determined in the text. The primary star is depicted by black points and the secondary star by white ones, and the dashed horizontal line represents the systemic velocity. Error bars represent 1-$\sigma$ formal measurement errors.}
\label{F4}
\end{figure*}
\clearpage
\newpage
{\center\section*{Methods}}
\subsection{Information on the observational data}
Four short series of I-band time-resolved photometry of Henize~2--428 were obtained with MEROPE\citemet{davignon04} on the Mercator telescope\citemet{raskin04} on La Palma on August 28 and 30, 2009. They showed photometric variability as large as $\sim$0.36~mag between series, so the system was monitored for a single, 4-hour interval on the night of September 2, 2009, and an orbital period was determined. Another similar 4-hour time-series was obtained in the Johnson B-band with the SAAO 1m telescope/SHOC on July 11, 2013, and in the Sloan i-band with the INT/WFC on August 2, 2013.
A single 20~min spectrum with VLT/FORS was secured in the blue range on June 19, 2010, under program ID 085.D-0629(A), and three additional 15~min spectra were taken with the same configuration on the night of July 8, 2012, under program ID 089.D-0453(A). The 1200g grism with a slit width of 0.7 arcsec was used in all cases. The spectra covered the 409-556~nm range. The resulting effective resolution was 0.8 \AA. These 4 spectra taken at different times during an orbit clearly showed radial velocity variations from both stars and established the need for and feasibility of a systematic radial velocity study (see Fig.~3). This was carried out with GTC/OSIRIS on August 11, 2013. The object was monitored for 3.8 hours with a slit width of 0.6 arcsec at parallactic angle. The R2000B grating was used, resulting in a wavelength coverage from 396 to 569.5~nm, and the effective resolution was 1.9 \AA. The spectra were binned once (binning 2$\times$1) in the spatial direction.
\subsection{Data modelling}
The magnitude values from the INT/WFC Sloan i and the SAAO 1m telescope/SHOC Johnson B light curve data were corrected for an extinction of $A_\mathrm{v}$= 2.96$\pm$0.34, recomputed from an available value\citeart{rodriguez01} using a different extinction law\citemet{fitzpatrick04}. We performed a period analysis using Schwarzenberg-Czerny's\citeart{Scwarzenberg96} analysis-of-variance (AOV) method on the photometric data set with the greatest time coverage (from MERCATOR/MEROPE, covering 6.25 hours along three nights, not shown here). The AOV periodogram shows the strongest peak at $\sim$11.379 cycles d$^{-1}$, which would correspond to a period of 0.0879 day, or 2.1 hours. The orbital period of the system is, however, twice as long, 0.1758$\pm$0.0005 day (4.2 hours), as indicated by the ellipsoidal modulation of the light curve, with the two minima showing similar but different depths.
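The period-doubling argument above can be made concrete with a short numerical check (the peak frequency is taken from the text; this is a back-of-the-envelope sketch, not the AOV analysis itself):

```python
# Back-of-the-envelope check of the period doubling (peak frequency
# from the AOV periodogram quoted in the text)
f_peak = 11.379            # cycles per day
p_phot = 1.0 / f_peak      # photometric period, ~0.0879 d (~2.1 h)

# Ellipsoidal modulation produces two similar minima per orbit,
# so the true orbital period is twice the photometric one
p_orb = 2.0 * p_phot       # ~0.1758 d (~4.2 h)

assert abs(p_phot - 0.0879) < 1e-4
assert abs(p_orb - 0.1758) < 1e-3
```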
The ephemerides of the light curves indicated in Table 1 were computed from their respective light curve data. Due to the asymmetry in the radial velocity curves, the corresponding ephemeris of the zero phase was computed from the ephemeris of the Sloan i light curve data, taken 9 nights before, and therefore the associated, accumulated error is much larger.
The orbital and physical parameters of the nucleus of Henize~2--428 were determined by modelling the light curves, together with the radial-velocity curves, all folded on the orbital period, 0.1758 day. The mass ratio $q$ was fixed to 1, as suggested by the amplitude of the radial velocity curves. A systematic search of the parameter space was performed on the inclination, orbital separation, centre-of-mass velocity, temperatures and surface potential of both stars until $\chi^2$ was globally minimised. Given the high effective temperature of both stars, an albedo of 1.0 was used for both components. Gravity brightening and square-root limb darkening coefficients were computed according to the temperature and gravity of each component\citemet{claret11}$^,$\citemet{castelli04}.
We provide a rough estimate of the distance to Henize~2--428 from the comparison of the model's total luminosity to the dereddened apparent magnitudes, adopting a bolometric correction according to the stars' effective temperatures. The resulting distance is 1.4$\pm$0.4 kpc.
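The final conversion from magnitudes to distance follows the standard distance modulus $m - M = 5\log_{10}(d/10\,\mathrm{pc})$; the sketch below uses hypothetical placeholder magnitudes chosen only to land near the quoted $\sim$1.4 kpc, not the paper's actual values:

```python
def distance_pc(m_app, M_abs):
    """Distance in parsec from the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10 ** ((m_app - M_abs + 5) / 5)

# Sanity check: m = M means the star sits at exactly 10 pc
assert distance_pc(4.83, 4.83) == 10

# Hypothetical placeholder magnitudes (NOT the paper's values), chosen
# so that the answer lands near the quoted ~1.4 kpc
d_kpc = distance_pc(12.73, 2.0) / 1000
assert abs(d_kpc - 1.4) < 0.01
```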
\subsection{B-band data phase determination and alternate model}
Observations in the B-band were taken 22 nights before the data used to determine the orbital phase; error accumulation over this interval amounts to half an orbital period, preventing an accurate phase determination of the B-band data. Two possibilities were therefore considered and modelled independently: one in which the deepest minimum in both light curves occurs at orbital phase 0 (the model shown in the paper), and another in which the deepest minimum is offset by half an orbit.
The results of both models are very similar within uncertainties. These include the inclination, which is confined to a narrow range between 62.2$^\mathrm{o}$ and 63.8$^\mathrm{o}$, still a few degrees away from the $\sim$68$^\mathrm{o}$-inclined equatorial ring of the nebula. The surface potentials of the two stars are slightly different in this model, and the temperature of the primary is around a thousand K lower than that of the secondary. The total mass, 1.84 M$_\odot$, is slightly larger than but similar to that of the model shown in the paper, still above the Chandrasekhar limit even when considering uncertainties. The total luminosity of the system is somewhat larger, 1200 L$_\odot$.
There is not enough solid ground to adopt one model over the other.
\bibliographymet{msantander}
\end{document}
\section{Introduction}
Linear optical quantum computing (LOQC) \cite{kok_linear_2007} has gained tremendous momentum since its inception in the early 2000s, when Knill, Laflamme and Milburn (KLM) showed that it is possible to implement the two-photon interactions necessary for quantum computation by means of post-selection and ancilla photons, thus producing near-deterministic gates with a scalable polynomial overhead \cite{knill_scheme_2001}.
Although the gate-based approach to photonic quantum computing suffers from an unsustainable overhead that is polynomial in the asymptotic limit only \cite{hayes_utilizing_2004},
there exists an alternative universal quantum computing paradigm known as `one-way' \cite{yoran_deterministic_2003,raussendorf_one-way_2001,walther_experimental_2005} or `measurement-based' quantum computation (MBQC) \cite{raussendorf_measurement-based_2003}.
MBQC uses an entangled multiparticle state, the cluster or graph state, as an input \cite{nielsen_optical_2004}.
Optical cluster states can be efficiently created by the Browne-Rudolph fusion mechanism \cite{browne_resource-efficient_2005}, and it is even possible to renormalize an imperfect cluster state by applying ideas of percolation theory \cite{kieling_percolation_2007}.
This is now widely considered to be the most promising route to photonic quantum computing \cite{rudolph_why_2017}.
{In any case, LOQC using the circuit model, MBQC, constructing a quantum network \cite{kimble2008quantum}, and many other important quantum protocols, including quantum teleportation \cite{giacomini_active_2002,ursin_quantum_2004,valivarthi2020teleportation}, remote state preparation \cite{bennettRSP}, quantum error correction \cite{vijayan2020robust}, improving photon sources \cite{kanedaHighefficiencySinglephotonGeneration2019a,meyer-scottExponentialEnhancementMultiphoton2019}, and even quantum metrology \cite{higgins2007entanglement} and quantum data compression \cite{rozema2014quantum}, require one to conditionally apply a quantum operation based on the outcome of an earlier measurement \cite{pittman_demonstration_2002,prevedel_high-speed_2007,saggio_experimental_2019}.
Given the extremely wide range of applications, effective feed-forward techniques will be required in bulk optics, integrated optics and fiber optics.
Here we present a new technique for feed-forward and apply it in an experiment.
Measurement and feed-forward has historically been done in bulk optics by means of Pockels cells {\cite{giacomini_active_2002,walther_experimental_2005,sciarrinoRealizationMinimalDisturbance2006,bohi_implementation_2007,ma_experimental_2012}}, whose tunable retardance directly implements an operation on the polarization state of single photons.
However, each Pockels cell can only rotate about one axis on the Bloch sphere; therefore, realizing a general unitary transformation requires the use of 3 active elements, each of which needs to be precisely calibrated.
{When it comes to quantum information processing, this} results in large technical overheads.
As such, even experimental demonstrations, whose goal is to develop real-world deployable technologies, often use post-selection rather than real feed-forward \cite{valivarthi2020teleportation}.
Here, we {explore an alternative approach to measurement and feed-forward with bulk components: we route} photons through passive elements using simpler active elements, \textit{i.e.} ultrafast optical {(electro-optic)} switches (UFOS).
The critical component is the switch that receives a signal from the heralding detector, as this redirects the heralded photon towards the polarization correcting components.
Recent schemes have used integrated switches, such as fast opto-ceramic switches \cite{xiongBidirectionalMultiplexingHeralded2013,collinsIntegratedSpatialMultiplexing2013,hoggarthResourceefficientFibreintegratedTemporal2017}, electro-optic switches \cite{bridaExperimentalRealizationLownoise2011,bridaExtremelyLownoiseHeralded2012,meanyHybridPhotonicCircuit2014,mendozaActiveTemporalSpatial2016,francis-jonesFibreintegratedNoiseGating2017,massaro_improving_2019} or bulk electro-optic polarization rotating switches \cite{jeffreyPeriodicDeterministicSource2004,ma_experimental_2011,pittman_demonstration_2002,kiyoharaRealizationMultiplexingHeralded2016,kaneda_time-multiplexed_2015} or phase modulators \cite{mikovaIncreasingEfficiencyLinearoptical2012,mikovaOptimalEntanglementassistedDiscrimination2014,mikovaFaithfulConditionalQuantum2016} in interferometers.
In all cases, the heralded photons must be delayed to allow time to process the heralding signals and activate the switch.
The ensuing latency is largely dominated by electronic processing time, with typical latency values ranging from 20 to 1000 ns \cite{giacomini_active_2002,mikovaIncreasingEfficiencyLinearoptical2012, meyer-scottSinglephotonSourcesApproaching2020}.
We present a fiber-compatible feed-forward scheme and apply it to remote preparation \cite{bennettRSP} of single-qubit states encoded in the polarization of telecommunication-wavelength photons.
We experimentally implement our scheme using relatively inexpensive, off-the-shelf components compatible with standard telecommunication technologies and thus demonstrate a high-speed and high-fidelity photonic feed-forward protocol.
We verify this by remotely preparing the polarization state of a single photon without post-selection, achieving an average fidelity of (99.0 $\pm$ 1)\%, where the error bar includes finite photon statistics and waveplate-setting errors, after correcting for imperfections in generating the entangled state.
\begin{figure*}[t]
\centering\includegraphics[width=2.05\columnwidth]{figure1.pdf}
\caption{\textbf{(a)} \textit{Type-II SPDC source ---} A $775$ nm, CW beam pumps a PPKTP crystal in a Sagnac-loop configuration, yielding the singlet state $\ket{\psi^-}$. Here `dm' is a dichroic mirror reflecting $775$ nm light and transmitting $1550$ nm, while `m' are standard mirrors.
\textbf{(b)} \textit{Feed-Forward setup ---} Photons pairs are sent towards two different stations. A projective measurement is performed on the polarization of the idler photon which is used to deterministically route the sister photon into one of the two paths using classical feed-forward control of ultrafast optical switches (UFOS). Detecting the idler photon also heralds the presence of the sister photon in the other half of the experiment, thus setting a time reference for the UFOS. In the two different paths that the signal photon may take, either operation $U_A$ or $U_B$ (an arbitrary polarization unitary) is performed. Fiber paddles P1-P4 and the waveplates in the source are used to correct for polarization rotations in the optical fiber. The state of the signal photon is finally measured by quantum state tomography.}
\label{fig:setup}
\end{figure*}
\section{Fiber-compatible photonic feed-forward}\label{sec:fiberbased}
Our fiber-compatible feed-forward is built around a pair of 2x2 in-fiber ultrafast optical switches, the BATi 2x2 Nanona switch.
These optical switches can route light from two input modes into two output modes with a variable splitting ratio.
The response time of our UFOS is below 60 ns, with a maximal duty-cycle of 1 MHz, and a cross-channel isolation greater than 20 dB for any polarization.
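For intuition, the quoted isolation figure translates into a leaked-power fraction as follows (a one-line conversion, with the 20 dB value taken from the text):

```python
# Convert the quoted cross-channel isolation (>20 dB) to a leaked-power
# fraction: isolation in dB corresponds to 10^(-dB/10) of the input power
iso_db = 20.0
leak = 10 ** (-iso_db / 10)
assert abs(leak - 0.01) < 1e-12   # >20 dB isolation => <1% crosstalk
```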
Although a single UFOS controls the path of incident light, we use two UFOSs together with passive polarization optics to implement ultrafast feed-forward operations on the polarization state of single photons, as sketched in Fig.\ref{fig:setup}.
{Note that with two switches we are only able to switch between two operations. Switching between $N$ different operations could be done, but it would require $\approx \log N$ switches. For large $N$, using three Pockels cells to directly implement the feed-forward operation may be less resource intensive.}
To demonstrate our feed-forward protocol we first generate photon pairs using spontaneous parametric down-conversion (SPDC), as sketched in Fig.\ref{fig:setup} \textbf{(a)} and discussed below.
The feed-forward begins by performing a projective measurement on the polarization degree of freedom (DOF) of the idler photon by means of a quarter waveplate (QWP) and a half waveplate (HWP) followed by a polarizing beam splitter (PBS).
Photons at the output of the PBS are coupled to single-mode fibers and detected by two superconducting nanowire single-photon detectors (SNSPD) from PhotonSpot Inc. (deadtime $50$ ns, average system detection efficiency $90\%$).
Detection events are recorded and processed by a commercial time tagging module (UQDevices Logic16 TTM).
The TTM is programmed to generate an output TTL pulse ($5$ V high, and $700$ ns duration) when a photon is detected in the transmitted port of the PBS, and to do nothing otherwise.
This classical electronic feed-forward signal is used to control the UFOS, as indicated by the blue lines in Fig.\ref{fig:setup} \textbf{(b)}.
In the absence of a triggering pulse, the UFOS are set to a ``bar state'': UFOS1 routes the signal photon to unitary transform $U_A$, and UFOS2 sends it towards the quantum state tomography (QST) station.
The triggering pulse sets both UFOS in the ``cross state'' for 700 ns (the duration of the feed-forward pulse).
In this case, UFOS1 routes the signal photon through unitary transform $U_B$, and UFOS2 sends it towards the QST station.
Note that the signal photon exits in the same mode towards the QST stage in both cases.
We implement $U_A$ and $U_B$ by briefly outcoupling to free space in a ThorLabs u-bench system, and then use a HWP sandwiched between two QWPs to implement an arbitrary polarization unitary (see Appendix \ref{app:C}).
The coupling loss of our u-benches is less than $0.7$ dB, which is comparable to the $1.3$ dB insertion loss of the BATi UFOS.
{ Our use of these free-space passive optics is key to our high-fidelity feed-forward. Although the u-benches are free-space optical elements, they allow us to use well-calibrated passive bulk optics to implement $U_A$ and $U_B$ in a fiber-compatible form factor.
Another approach could be to use fiber-paddles, Faraday rotators or other in-fiber polarization rotators \cite{han2016wavelength} to directly implement $U_A$ and $U_B$, but this would likely come at the cost of a lower fidelity.
}
This is in contrast to the standard approach using Pockels cells, where the birefringence is actively modulated to implement the unitary operation{, which is less amenable to fiber-based applications such as communication networks}.
On the other hand, the active components in our work are standard UFOS, which can set the desired path with a crosstalk of less than 20 dB.
{Specifically, the UFOS we use are operated with a half-voltage of 5 V (compared with the $\sim$100 V drive voltages required to operate Pockels cells \cite{gillettExperimentalFeedbackControl2010,heTimeBinEncodedBosonSampling2017}) and a repetition rate of 1 MHz. Our method could be made faster by using faster, albeit more expensive, existing switches.}
Additionally, we can easily place different passive polarization optics in either path to enact different and arbitrary $U_A$ and $U_B$. Note that we must compensate for the fixed (but random) polarization rotation implemented by our single-mode optical fibers (see Appendix B).
{Fiber-based feed-forward may also be implemented by encoding the qubit in the path DOF instead of the polarization DOF, in which case the UFOS are replaced by other integrated electro-optic modulators that can be as fast as, and operate with a lower half-voltage than, the UFOS --- see \textit{eg} \cite{mikovaIncreasingEfficiencyLinearoptical2012} in which a fidelity of 97.6$\%$ was obtained.}
The delay between detecting the signal photon at the projective measurement (PM) stage and changing the state of the two UFOS is approximately $560$ ns. This consists of: $\approx 160$ ns for the electrical signal from the SNSPD to reach the TTM, $\approx 300$ ns for the TTM to generate the output pulse, and another $\approx 100$ ns for the signal to propagate to the UFOS and change their state.
{ Many of these latencies can easily be improved. For example, the $\approx 300$ ns processing time of our TTM can be pushed down to a few nanoseconds by triggering the switch directly with the detector signal.}
Hence we use a fiber link of $162$ m to transmit and delay the arrival of the signal photon at the UFOS1 (by $\approx 800$ ns) such that enough time has elapsed for the state of the idler photon to be fed-forward to UFOS1.
We further leave both UFOS in the cross state for $700$ ns to give ample time for the signal photon to traverse the feed-forward operations.
The combination of these various features allows us to implement high-fidelity feed-forward operations with a net loss of only $\approx 3$ dB. We will now discuss how we benchmark our feed-forward by performing deterministic RSP, but our path-based approach to feed-forward could easily be used in a host of photonic quantum information settings.
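The delay-line arithmetic above can be checked directly; the group index used below is an assumed typical value for SMF-28 fiber ($n_g \approx 1.468$), not a number quoted in the text:

```python
# Delay-line budget for the feed-forward (latencies from the text;
# n_g is an assumed typical group index for SMF-28 fiber)
c = 299_792_458.0          # speed of light in vacuum, m/s
n_g = 1.468                # assumed fiber group index
L_delay = 162.0            # fiber delay line, m

delay_ns = L_delay * n_g / c * 1e9
latency_ns = 160 + 300 + 100   # SNSPD -> TTM, TTM processing, TTM -> UFOS

assert 780 < delay_ns < 810    # ~793 ns, consistent with the quoted ~800 ns
assert delay_ns > latency_ns   # the photon arrives after the switches settle
```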
\section{Remote state preparation}\label{sec:remotestate}
We use the setup shown in Fig.\ref{fig:setup} to implement post-selection--free RSP.
In what follows, $\ket{H}$ ($\ket{V}$) represents a state of linear polarization along the horizontal (vertical) axis, and the idler and signal photons are identified with indices $i$ and $s$, respectively.
At the bottom station, a projective measurement on the polarization degree of freedom of the idler photon is performed: it is projected in either
\begin{align}
\ket{\Psi}_i=\alpha\ket{H}_i+\beta\ket{V}_i \label{eq:idlerstate}
\intertext{or}
\ket{\Psi^\bot}_i=\alpha\ket{V}_i-\beta\ket{H}_i, \label{eq:idlerstateperp}
\end{align}
with $\alpha=\cos{\theta/2}$, and $\beta=\sin{\theta/2}\exp{i\phi}$. These coefficients are controlled by setting the angle of the HWP and QWP before the PBS (see Appendix \ref{app:C}).
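As a sanity check, the parametrization above can be verified numerically; the sketch below (an illustration, not part of the experimental analysis) confirms that the two projector states are normalized and, in the meridian-plane case where $\beta$ is real, mutually orthogonal:

```python
import numpy as np

def psi(theta, phi):
    """alpha|H> + beta|V> with alpha = cos(theta/2), beta = sin(theta/2) e^{i phi}."""
    a = np.cos(theta / 2)
    b = np.sin(theta / 2) * np.exp(1j * phi)
    return np.array([a, b])        # (H, V) amplitudes

def psi_perp(theta, phi):
    """alpha|V> - beta|H>, the companion state of the projective measurement."""
    a = np.cos(theta / 2)
    b = np.sin(theta / 2) * np.exp(1j * phi)
    return np.array([-b, a])

# Meridian-plane case (phi = 0, beta real): both states are normalized
# and mutually orthogonal
v, w = psi(0.7, 0.0), psi_perp(0.7, 0.0)
assert abs(np.vdot(v, v) - 1) < 1e-12
assert abs(np.vdot(w, w) - 1) < 1e-12
assert abs(np.vdot(v, w)) < 1e-12
```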
The result of the measurement (\textit{i.e.}, whether the idler photon is in state $\ket{\Psi}_i$ or $\ket{\Psi^\bot}_i$) is sent to the upper station by a classical communication channel.
The signal photon passes through a fiber delay, arriving at the upper station \textit{after} the result of the projective measurement on the idler photon has reached it.
Upon arrival at the upper station, the signal photon has been collapsed into either of the two states
\begin{align}
\label{eq:signalinitialstate}
\ket{\Psi}_s=\bra{\Psi^\bot}_i\ket{\psi^-}=\alpha\ket{H}_s+\beta\ket{V}_s
\intertext{or}
\ket{\Psi^\bot}_s=\bra{\Psi}_i\ket{\psi^-}=\alpha\ket{V}_s-\beta\ket{H}_s,
\end{align}
depending on the outcome of the first projective measurement.
Hence, performing a projective measurement on the polarization DOF of the idler photon sets the polarization of the signal photon into one of two definite states, which we know without having to measure it.
Conditionally on the state of the idler photon, we can either do nothing (which corresponds to applying unitary operation $U_A=\mathbb{1}$) or \textit{correct} the state of the signal photon so that it is always in $\ket{\Psi}$:
\begin{equation}
\label{eq:signalcorr}
\begin{aligned}
\mathrm{if}
\end{aligned}
\left\{
\qquad
\begin{aligned}
\ket{\Psi}_i&=\ket{\Psi^\bot}_i \rightarrow \mathbb{1}\ket{\Psi}_s=\ket{\Psi}_s, \\
\ket{\Psi}_i&=\ket{\Psi}_i\rightarrow U_B\ket{\Psi^\bot}_s=\ket{\Psi}_s.
\end{aligned}
\right.
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{figure2.pdf}
\caption{Quantum state tomography of the input and output states. The color bar on the right refers to the amplitude of all the density matrices in panels \textbf{(a)}-\textbf{(h)}.
Left: \textbf{(a)} Real- and \textbf{(b)} imaginary-parts of the density matrix elements of the two-photon input state for a measurement on the equatorial plane.
Right: real part of the density matrix elements of the one-photon output states \textbf{(f)-(h)} with and \textbf{(c)-(e)} without feed-forward.
The imaginary components are relatively small, and not shown.
The H-tomography (V-tomography) results are post-selected on the idler photon being transmitted (reflected) in the projective measurement (PM). In the complete tomography results no post-selection is performed.}
\label{fig:instate}
\end{figure}
Note that in general $U_B$ depends on $\theta$ and $\phi$ \cite{pengExperimentalImplementationRemote2003}.
However, there are classes of states for which $U_B$ is independent of the state one wishes to prepare: this is the standard RSP result \cite{bennettRSP}.
We consider two specific cases in which $U_B=i\sigma_y$ and $U_B=\sigma_z$.
In the first case, $\phi=0$ and the signal's state is prepared on a meridian plane of the Bloch sphere (HDVA plane) --- $\ket{\Psi}_s=\cos{\theta/2}\ket{H}+\sin{\theta/2}\ket{V}$. To produce this family of states we remove the QWP from the PM station and vary the HWP from $0$ to $90\degree$.
In the second case, the signal's state is prepared as an arbitrary equatorial state (DRAL plane) --- $\ket{\Psi}_s=\frac{1}{\sqrt{2}}\left(\ket{H}+e^{i\phi}\ket{V}\right)$. This projection is made by setting the QWP to $45\degree$ and varying the HWP from $22.5$ to $112.5\degree$.
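The full correction logic can be verified numerically. The sketch below (an illustration, not the experimental code) builds the singlet state, projects the idler, and checks that $i\sigma_y$ (meridian plane) and $\sigma_z$ (equatorial plane) each restore the signal to the target state up to a global phase:

```python
import numpy as np

H, V = np.array([1, 0], complex), np.array([0, 1], complex)
singlet = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)   # |psi->

def collapse_signal(idler_state):
    """Normalized signal state after projecting the idler onto idler_state."""
    out = np.kron(idler_state.conj(), np.eye(2)) @ singlet   # <idler| (x) I
    return out / np.linalg.norm(out)

def fidelity(a, b):
    return abs(np.vdot(a, b)) ** 2       # insensitive to global phase

i_sy = np.array([[0, 1], [-1, 0]], complex)   # i * sigma_y
s_z = np.diag([1.0, -1.0]).astype(complex)    # sigma_z

# Case 1: meridian plane (phi = 0), corrected by U_B = i sigma_y
th = 0.9
target = np.cos(th / 2) * H + np.sin(th / 2) * V
assert fidelity(i_sy @ collapse_signal(target), target) > 1 - 1e-12

# Case 2: equatorial plane, corrected by U_B = sigma_z
ph = 1.1
target = (H + np.exp(1j * ph) * V) / np.sqrt(2)
assert fidelity(s_z @ collapse_signal(target), target) > 1 - 1e-12
```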
\section{Experiment}\label{sec:experiment}
The input state is generated by type-II spontaneous parametric down conversion, as shown in Fig.\ref{fig:setup} \textbf{(b)}.
The source of entangled photons with a degenerate wavelength is based on a Sagnac loop.
It consists of a PPKTP crystal, with a length of $30$ mm, a poling period of $46.2$ $\mu$m, and tunable temperature.
The crystal is pumped by an $80$ mW, CW laser of wavelength $775$ nm.
Ultra-narrow bandpass filters with a full width at half maximum bandwidth of $3.2$ nm and an almost top-hat transmission profile are used to filter the photons.
Directly after the source, we measure singles rates of $85$ kHz and a coincidence rate of $17$ kHz with our SNSPDs, corresponding to a pair coupling efficiency of $20\%$.
We use the same TTM to measure the count rates and generate the feed-forward signal.
The idler and signal photons are sent to their respective stations by fiber (SMF28) links.
After coupling through our experiment, we measure singles rates of $50$ kHz and $30$ kHz at the PM and QST stations, respectively, and a net coincidence rate of $\approx 3$ kHz.
\begin{figure}[t]
\centering
\includegraphics[width=0.97\columnwidth]{figure3.pdf}
\caption{Preparation of the signal photon on the meridian plane of the Bloch sphere. \textbf{(a)} Target signal states, assuming a perfect singlet state. \textbf{(b)} Predicted signal states theoretically calculated accounting for experimental imperfections in generating the singlet state. \textbf{(c)} Measured signal states. The different colors of the points in panels \textbf{(a)}-\textbf{(c)} correspond to the same experimental setting.
\textbf{(d)} Fidelity of the measured experimental states with the targets with (red) and without (yellow) feed-forward, plotted versus the Bloch angle of the target state (black arrow in \textbf{(a)}). This Bloch angle maps onto the half waveplate angle in the projective measurement station as $\theta=4\theta'$.
\textbf{(e)} Fidelity of the experimentally prepared states with the predicted states versus the Bloch angle. \label{fig:projreal}}
\end{figure}
In Fig.\ref{fig:instate} \textbf{(a)} and \textbf{(b)}, we show the real and imaginary parts of the reconstructed density matrix of the two-photon entangled state obtained by using quantum state tomography based on a least-squares optimization \cite{qi2013quantum} at the output of the setup (using the PM and QST stations) when the feed-forward is turned off; see Appendix A for details.
{For these measurements, we acquire $\approx 40$,$000$ two-photon events per measurement basis.
We measure typical fidelities of ($92\pm 1$)$\%$ to the Bell singlet state $\ket{\psi^-}$ and purities of ($89\pm2$)$\%$, where the error bars are estimated by performing a Monte Carlo simulation starting with the experimental data, and then adding the Poissonian fluctuations from $40$,$000$ counts, and uncertainties of $0.5$ degrees in setting the waveplate angles.}
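For reference, the fidelity and purity quoted above are computed from the reconstructed density matrix as $F=\bra{\psi^-}\rho\ket{\psi^-}$ and $\operatorname{Tr}(\rho^2)$; the sketch below evaluates both on a toy Werner-like state (the mixing parameter is arbitrary, not the reconstructed experimental matrix):

```python
import numpy as np

H, V = np.array([1, 0], complex), np.array([0, 1], complex)
psi_m = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)     # |psi->

def singlet_fidelity(rho):
    """F = <psi-|rho|psi->, valid because the target state is pure."""
    return np.real(psi_m.conj() @ rho @ psi_m)

def purity(rho):
    return np.real(np.trace(rho @ rho))

# Toy Werner-like state: singlet mixed with white noise; the mixing
# parameter p is arbitrary here, NOT the experimental matrix
p = 0.9
rho = p * np.outer(psi_m, psi_m.conj()) + (1 - p) * np.eye(4) / 4

assert abs(singlet_fidelity(rho) - 0.925) < 1e-6    # p + (1-p)/4
assert abs(purity(rho) - 0.8575) < 1e-6             # p^2 + p(1-p)/2 + (1-p)^2/4
```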
The decreased fidelity from the perfect Bell state is mostly due to distinguishability between the two pathways in the Sagnac source, and a limited amount of polarization rotation in the optical fibers.
This decreased fidelity will also result in a lower RSP fidelity. However, we stress that this decrease is distinct from any errors introduced by our feed-forward. Hence, we will use these two-qubit tomography results to treat these error sources separately (see Appendix \ref{app:A}).
We first measure the net RSP fidelity, by projecting the idler photon into a given basis and then estimating a series of single-qubit density matrices at the QST station.
The plots in the shaded area on the right of Fig.\ref{fig:instate} (showing the real part of the reconstructed density matrices with (\textbf{(f)-(h)}) and without (\textbf{(c)-(e)}) feed-forward) illustrate the effect of our protocol on the signal photon's state.
In these data, the idler photon is measured in the $(\ket{H}\pm\ket{V})/\sqrt{2}$ basis, and each row shows three different analyses of the results in the same configuration.
The `H-tomography' (panels \textbf{(c)} and \textbf{(f)}), corresponds to events where we post-select on the idler photon being transmitted through the PM beamsplitter (\textit{i.e.} it is found to be $(\ket{H}+\ket{V})/\sqrt{2}$). Without feed-forward, this collapses the signal photon into $(\ket{H}-\ket{V})/\sqrt{2}$ (since we began with a singlet state); this is evident in negative off-diagonal components of the density matrix of panel \textbf{(c)}.
However, when the feed-forward is enabled (panel \textbf{(f)}) the state is flipped to $(\ket{H}+\ket{V})/\sqrt{2}$, and the off-diagonal components become positive.
Similarly, the `V-tomography' results come from events where the idler photon is reflected, projecting the signal photon into $(\ket{H}-\ket{V})/\sqrt{2}$. In this case, the data with \textbf{(g)} and without \textbf{(d)} feed-forward agree, since the feed-forward operation is applied.
Finally, the complete tomography results use no post-selection. Hence, without feed-forward the signal photon is in a mixed state (the off-diagonal components in panel \textbf{(e)} are $\approx 0$), but using feed-forward allows us to deterministically prepare $(\ket{H}+\ket{V})/\sqrt{2}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.97\columnwidth]{figure4.pdf}
\caption{Preparation of the signal photon on the equatorial plane of the Bloch sphere. The data in all panels are plotted as in Fig. \ref{fig:projreal}, except that, here, the angle $\phi$ of the target states in the equatorial plane is related to the half waveplate angle $\theta'$ as $\phi=4(\theta'-22.5)$.
\label{fig:projcplx}}
\end{figure}
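The angle relations quoted in the captions of Figs. \ref{fig:projreal} and \ref{fig:projcplx} can be sketched as target-state generators. The Bloch-sphere parametrization below is a standard convention that we assume for illustration; it is not spelled out in the text.

```python
import cmath, math

def meridian_target(hwp_deg):
    """Target state cos(theta/2)|H> + sin(theta/2)|V> with theta = 4*theta'
    (the meridian-plane relation from the caption of Fig. 3)."""
    theta = math.radians(4 * hwp_deg)
    return (math.cos(theta / 2), math.sin(theta / 2))

def equatorial_target(hwp_deg):
    """Target state (|H> + e^{i phi}|V>)/sqrt(2) with phi = 4*(theta' - 22.5)
    (the equatorial-plane relation from the caption of Fig. 4)."""
    phi = math.radians(4 * (hwp_deg - 22.5))
    return (1 / math.sqrt(2), cmath.exp(1j * phi) / math.sqrt(2))

h, v = meridian_target(0)       # theta' = 0 should give |H>
d = equatorial_target(22.5)     # theta' = 22.5 deg should give (|H>+|V>)/sqrt(2)
```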
To further characterize the RSP fidelity, we prepare a range of states along the meridian and equatorial planes, and perform QST on the output{, obtaining approximately $35{,}000$ photon counts per measurement setting.}
The target states (\textit{i.e.} the states we attempt to remotely prepare) are plotted as points on the Bloch sphere in panels \textbf{(a)} of
Fig.\ref{fig:projreal} and Fig.\ref{fig:projcplx}, and the resulting experimental states are plotted in panels \textbf{(c)} of the same figures.
The fidelities between the experimental states and the target states with the feed-forward turned off are plotted in yellow in panels \textbf{(d)}; their average is (49 $\pm$ 2)\%. The uncertainty is obtained by propagating the individual uncertainties.
{The error bars presented in panels \textbf{(c)} and \textbf{(d)} of Figs.\ref{fig:projreal} and \ref{fig:projcplx} are calculated from a Monte Carlo simulation including a 0.5 degree error on setting the tomography waveplates, and the finite counting statistics.}
This low fidelity is to be expected, since, in the absence of feed-forward, the signal photon is a maximally mixed state and we have not post-selected our results.
When the feed-forward is enabled (in red in panels \textbf{(d)}), the fidelity increases dramatically.
For these data, we obtain average RSP fidelities of (91 $\pm$ 1)\% and (92 $\pm$ 1)\% to the target state when projecting on the equatorial and meridian planes, respectively.
However, these fidelities are severely limited by the reduced fidelity of the experimental entangled state with $\ket{\psi^-}$.
Again, we stress that the reduction of the fidelity of our entangled state comes primarily from two sources. The first is distinguishability between the HV and VH paths in the Sagnac interferometer, which reduces the fidelity of the entangled state (by entangling it to other degrees of freedom, hence reducing the purity).
The second error source is imperfect polarization compensation in our fiber network (see Appendix B).
{We model both of these imperfections, and the result is plotted as the red line in panels \textbf{(d)} of the two figures.
Our model also includes a $1\%$ polarization dependent loss, which is consistent with the specified $<0.2$ dB polarization dependent loss of our switches.
Although it is difficult to determine the exact form of the imperfect polarization compensation, we find that a residual birefringence of $0.5$ rad on one side of the experiment is sufficient to describe the oscillations observed in the data presented in Fig.\ref{fig:projreal}, while a birefringence of $0.25$ rad fits the data in Fig.\ref{fig:projcplx} \textbf{(d)} well. (Since the polarization compensation was redone between these two measurements, it is reasonable to have different values for this.)
The simulations presented in both figures assume a purity of $0.89$ for the shared Bell state.
}
To assess the feed-forward fidelity directly, we use the two-photon tomography results presented above to infer and remove the errors due to the state generation.
To do this, we predict the states we expect to remotely prepare given our actual experimental entangled state.
These predicted states are plotted in panels \textbf{(b)} of Fig.\ref{fig:projreal} and Fig.\ref{fig:projcplx}. The lower-than-unity fidelity between these states and the corresponding experimental states are caused by imperfections in our feed-forward.
{ These errors come from a combination of slightly miscalibrated waveplates used to implement $U_A$ and $U_B$, polarization dependent losses in the switch (which our correction does not account for), and imperfect polarization compensation.
The simulations presented in panels \textbf{(e)} of both figures assume a perfect singlet state, polarization dependent losses of $1\%$, and a residual birefringence of $0.2$ rad.
}
Leakage into the incorrect path could cause additional errors, but we estimate this to be negligible.
The resulting feed-forward fidelities are plotted in green in panels \textbf{(e)} of both figures. The average feed-forward fidelity of these data is (99.0 $\pm$ 1)\% and (99.0 $\pm$ 1)\%, for states in the meridian and equatorial planes, respectively.
This demonstrates that our feed-forward indeed operates with extremely high-fidelity,
{ among the highest reported in the state of the art, to the best of our knowledge. Our low error rate is largely due to our use of well-calibrated passive optics, which could in fact be combined with a variety of other switching technologies, in both bulk and integrated settings.
Furthermore, the switches and electronics used in our experiment were not state-of-the-art, and the timing performance of our techniques could easily be improved with either faster commercial or research-level switches.
}
\section{Discussion}
Our feed-forward scheme enables the remote state preparation of the polarization state of single photons without post-selection.
In our photonic implementation, we use a novel technique for fiber-compatible feed-forward, exploiting the precision of passive polarization optics in combination with high-quality ultrafast optical switches.
We find that the errors introduced into remote state preparation by our feed-forward operation are less than 1$\%$.
This figure of merit, combined with the use of standard, off-the-shelf photonic technology for telecommunication-wavelengths, makes our methods usable for quantum photonics tasks in which fast switching and routing of light are critical.
{The experimental platform could be improved by resorting to faster switches realized with \textit{e.g.} four-wave mixing in interferometers, which promises rates up to 500 MHz with losses below 1 dB \cite{leeLowLossHighSpeedFiberOptic2018}.}
{Applications of our methods include}: measurement based quantum computation \cite{saggio_experimental_2019}, experiments on the foundations of quantum physics \cite{maclean_quantum-coherent_2017,saunders_experimental_2017,vedovato_postselection-loophole-free_2018}{, neuromorphic computing}\cite{shen_deep_2017} or quantum information applications like photonic simulations \cite{schreiber_2d_2012}, photon counting \cite{tiedau_high_2019}, or establishing a quantum internet \cite{kimble2008quantum}.
\section*{Acknowledgements}
The authors are thankful to Valeria Saggio, Bob Peterson and Teo Str\"omberg for insightful discussions.
\section*{Funding}
Conselho Nacional de Desenvolvimento Científico e Tecnológico (204937/2018-3); Red Bull GmbH; Air Force Office of Scientific Research (QAT4SECOMP (FA2386-17-1-4011)); Austrian Science Fund (FWF): F 7113-N38 (BeyondC), FG 5-N (Research Group), P 30817-N36 (GRIPS) and P 30067\_N36 (NaMuG);
{\"O}sterreichische Forschungsf{\"o}rderungsgesellschaft (QuantERA ERA-NET Cofund project HiPhoP (No.731473));
Research Platform for Testing the Quantum and Gravity Interface (TURIS), the European Commission (ErBeSta (No.800942)), Christian Doppler Forschungsgesellschaft; {\"O}sterreichische Nationalstiftung f{\"u}r Forschung, Technologie und Entwicklung; Bundesministerium f{\"u}r Digitalisierung und Wirtschaftsstandort.
\section*{Disclosures}
The authors declare no conflicts of interest.
\section{Introduction}
\label{s:intro}
DNA computing has provided relatively new paradigms of computation \cite{Adleman,Paun} since the end of the last century. Automata theory, in contrast, is one of the foundations of computer science. Watson-Crick automata (abbreviated as WK automata), introduced in \cite{Freund} as a branch of DNA computing, connect these two fields: they have important relations to formal language and automata theory. More details can be found in \cite{Paun} and \cite{Czeizle}. WK automata work on double-stranded tapes called Watson-Crick tapes (i.e., DNA molecules), whose strands are scanned separately by read-only heads. The symbols in the corresponding cells of the double-stranded tape are related by the (Watson-Crick) complementarity relation. The relationships between the classes of Watson-Crick automata are investigated in \cite{Freund,Paun,Kuske}. The two strands of a DNA molecule have opposite $5'\rightarrow 3'$ orientations. The reverse and $5'\rightarrow 3'$ variants are therefore more realistic in the sense that both heads use the same biochemical direction (that is, opposite physical directions) \cite{Freund,DNA2008,Leupold}. Some variations of the reverse Watson-Crick automaton with a sensing power that tells whether the upper and lower heads are within a fixed distance (or meet at the same position) are discussed in \cite{DNA2008,Nagy2009,iConcept,Nagy2013}.
Since the heads of a WK automaton may read longer strings in a transition, in these models the sensing parameter took care of the proper meeting of the heads by sensing if the heads are close enough to meet in the next transition step.
The motivation of the new model is to eliminate the rather artificial notion of the sensing parameter from the model. With the sensing parameter one can `cheat' by allowing only special finishing transitions, and thus, in the old model, the all-final variants have the same accepting power as the variants without this condition. Here, the language classes accepted by the new model are analyzed. Variations such as
all-final, simple, 1-limited, and stateless $5'\rightarrow 3'$ Watson-Crick automata are also detailed.
\section{Preliminaries, Definitions}
\label{s:pre}
We assume that the reader is familiar with the basic concepts of formal languages and automata; otherwise, she or he is referred to \cite{Handb}. We denote the empty word by $\lambda$.
The two strands of the DNA molecule have opposite $5'\rightarrow3'$ orientations. For this reason, it is worthwhile to consider a variant of Watson-Crick finite automata that parses the two strands of the Watson-Crick tape in opposite directions. Figure \ref{sd} indicates the initial configuration of such an automaton.
\begin{figure}[h]
\centering
\includegraphics[scale=0.29]{strands.png}
\caption{A sensing $5'\rightarrow 3'$ WK automaton in the initial configuration and in an accepting configuration (with a final state $q$).}
\label{sd}
\end{figure}
A $5'\rightarrow3'$ WK automaton is sensing if the heads sense that they are meeting. \\
Formally, a Watson-Crick automaton is a 6-tuple $M=(V,\rho,Q,q_0,F,\delta)$, where:
\begin{itemize}
\item $V$ is the (input) alphabet,
\item $\rho\subseteq V\times V$ denotes a complementarity relation,
\item $Q$ represents a finite set of states,
\item $q_0\in Q$ is the initial state,
\item $F\subseteq Q$ is the set of final (accepting) states and
\item $\delta$ is called the transition mapping; it is of the form $\delta: Q \times \left(\begin{array}{c}V^{*}\\ V^{*}\end{array}\right)\rightarrow 2^Q$, such that it is nonempty only for finitely many triplets $(q,u,v)$, $q \in Q$, $u,v\in V^*$.
\end{itemize}
In sensing $5'\rightarrow3'$ WK automata, every pair of positions of the Watson-Crick tape is read by exactly one of the heads in an accepting computation; therefore the complementarity relation plays no role, and instead we assume that it is the identity relation. Thus, it is more convenient to consider the input as a normal word instead of in its double-stranded form. Note here that complementarity can be excluded from the traditional models as well; see \cite{Kuske} for details.\\
The radius of an automaton, denoted by $r$, is the maximum length of a substring of the input that can be read by the automaton in a single transition.
A configuration of a Watson-Crick automaton is a pair $(q,w)$ where $q$ is the current state of the automaton and $w$ is the part of the input word which has not been processed (read) yet. For $w',x,y\in V^*,q,q'\in Q$, we write a transition between two configurations as:\\
$(q,xw'y)\Rightarrow(q',w' )$ if and only if $q'\in \delta(q,x,y)$. We denote the reflexive and transitive closure of the relation $\Rightarrow$ by $\Rightarrow^*$. Therefore, for a given $w\in V^*$, an accepting computation is a sequence of transitions $(q_0,w) \Rightarrow^* (q_F,\lambda)$, starting from the initial state and ending in a final state.
The language accepted by a WK automaton $M$ is:\\
$L(M)=\{ w\in V^* \mid (q_0,w) \Rightarrow^* (q_F,\lambda),\ q_F\in F\}$.
The shortest nonempty word accepted by $M$ is denoted by $w_s$; if it is not uniquely determined, any one of the shortest nonempty accepted words may be chosen.\\
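The acceptance relation just defined can be prototyped directly. The sketch below uses our own conventions (a dictionary encoding of $\delta$, words as Python strings); it is a minimal illustration, not an implementation from the paper.

```python
def accepts(word, delta, start, finals):
    """Acceptance by a sensing 5'->3' WK automaton: in a step from state q,
    the left head consumes a prefix x and the right head a suffix y of the
    unread part; the word is accepted when the unread part becomes empty in
    a final state.  delta maps each state to (x, y, next_state) triples;
    lambda-moves are skipped so that the search always terminates."""
    stack, seen = [(start, word)], set()
    while stack:
        state, rest = stack.pop()
        if (state, rest) in seen:
            continue
        seen.add((state, rest))
        if rest == "" and state in finals:
            return True
        for x, y, nxt in delta.get(state, ()):
            if x == "" and y == "":
                continue  # skip lambda-moves (cf. the remark after Theorem 1)
            if len(x) + len(y) <= len(rest) and rest.startswith(x) and rest.endswith(y):
                stack.append((nxt, rest[len(x):len(rest) - len(y)]))
    return False

# A one-state (N1) automaton for {a^n b^m | n, m >= 0}: the left head reads
# the a's and the right head reads the b's, one letter per step.
delta = {"q0": [("a", "", "q0"), ("", "b", "q0")]}
```

For example, `accepts("aaabbb", delta, "q0", {"q0"})` holds, while `"ba"` is rejected because neither head can make a move on it.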
There are some restricted versions of WK automata which can be defined as follows:
\begin{itemize}
\item $\textbf{N}$: stateless, i.e., with only one state: if $Q=F=\{q_0\}$;
\item $\textbf{F}$: all-final, i.e., with only final states: if $Q=F$;
\item $\textbf{S}$: simple, i.e., at most one head moves in a step: $\delta:(Q\times ((\lambda,V^* )\cup(V^*,\lambda)))\rightarrow 2^Q$;
\item $\textbf{1}$: 1-limited, i.e., exactly one letter is read in each step: $\delta:(Q\times ((\lambda,V)\cup (V,\lambda)))\rightarrow2^Q$.
\end{itemize}
Additional versions can be determined using multiple constraints, such as \textbf{F1}, \textbf{N1}, \textbf{FS}, \textbf{NS} WK automata.
Now, as an example, we show that the language $L=\{a^nb^m\mid n,m\geq 0 \}$ can be accepted by an $\textbf{N1}$ sensing $5'\rightarrow3'$ WK automaton (Figure \ref{tm:2a}).
\begin{figure}[h]
\centering
\includegraphics[scale=0.15]{N1.png}
\caption{A sensing $5' \rightarrow 3'$ WK automaton of type $\textbf{N1}$ accepting the language $\{a^nb^m\mid n,m\geq 0 \}$.}
\label{tm:2a}
\end{figure}
\section{Hierarchy by sensing $5'\rightarrow 3'$ WK automata}
\label{s:WK}
\begin{theorem}\label{thm:1} The following classes of languages coincide:
\begin{itemize}
\item the class of linear context-free languages defined by linear context-free grammars,
\item the language class accepted by sensing $5'\rightarrow 3'$ WK finite automata,
\item the class of languages accepted by $\textnormal{\textbf{S}}$ sensing $5'\rightarrow 3'$ WK automata,
\item the class of languages accepted by $\textnormal{\textbf{1}}$ sensing $5'\rightarrow 3'$ WK automata.
\end{itemize}
\end{theorem}
\begin{proof} For DNA computing reasons (and for simplicity) we work with $\lambda$-free languages.
The proof is constructive, first we show that the first class is included in the last one. Let $G=(N,T,S,P)$ be a linear context-free grammar having productions only in the forms $A\to aB, A\to Ba, A\to a$ with $A,B\in N,\ a\in T$. Then the $\textbf{1}$ sensing $5'\rightarrow 3'$ WK automaton $M=(T,id,N\cup\{q_f\},S,\{q_f\},\delta)$ is defined with $B\in \delta(A,u,v)$ if $A\to uBv \in P$ and
$q_f \in \delta(A,u,\lambda)$ if $A\to u \in P$ ($u,v\in T\cup\{\lambda\}$). Clearly, each (terminated) derivation in $G$ corresponds to a(n accepting) computation of $M$, and vice versa. Thus the first class is included in the last one.
The inclusions between the fourth, third and second classes are obvious by definition. To close the circle, we need to show that the second class is in the first one. Let the sensing $5'\to 3'$ WK automaton $M=(V,id,Q,q_0,F,\delta)$ be given. Let us construct the linear context-free grammar
$G=(Q,V,q_0,P)$ with productions: $p\to u q v \in P$ if $q\in\delta(p,u,v)$
and $p\to u v \in P$ if $q\in\delta(p,u,v)$ and $q\in F$
($p,q\in Q,\ u,v\in V^*$). Again, the (accepting) computations of $M$ are in bijective correspondence with the (terminated) derivations in $G$. Thus, the proof is finished.
\end{proof}
Based on the previous theorem we may assume that the considered sensing $5'\rightarrow 3'$ WK automata have no $\lambda$-movements, i.e., at least one of the heads is moving in each transition.
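The grammar-to-automaton construction in the first part of the proof is easy to mechanize. The sketch below uses our own encoding (productions as pairs of strings, nonterminals uppercase, a fresh final state named `qf`) and a small recursive acceptor for checking the correspondence.

```python
def grammar_to_wk(productions):
    """Theorem 1, first inclusion: a lambda-free linear grammar with rules
    A -> aB | Ba | a yields a 1-limited sensing 5'->3' WK automaton whose
    states are the nonterminals plus a fresh final state 'qf'."""
    delta = {}
    for lhs, rhs in productions:
        if len(rhs) == 1:                # A -> a   gives  qf in delta(A, a, lambda)
            delta.setdefault(lhs, []).append((rhs, "", "qf"))
        elif rhs[0].islower():           # A -> aB  gives  B in delta(A, a, lambda)
            delta.setdefault(lhs, []).append((rhs[0], "", rhs[1]))
        else:                            # A -> Ba  gives  B in delta(A, lambda, a)
            delta.setdefault(lhs, []).append(("", rhs[1], rhs[0]))
    return delta

def accepts(word, delta, state, finals={"qf"}):
    """Naive search over the constructed transitions."""
    if word == "" and state in finals:
        return True
    return any(len(x) + len(y) <= len(word)
               and word.startswith(x) and word.endswith(y)
               and accepts(word[len(x):len(word) - len(y)], delta, nxt, finals)
               for x, y, nxt in delta.get(state, ()))

# {a^n b^n | n >= 1} via the linear grammar S -> aA, A -> Sb, A -> b.
delta = grammar_to_wk([("S", "aA"), ("A", "Sb"), ("A", "b")])
```

The derivation $S \Rightarrow aA \Rightarrow aSb \Rightarrow aaAb \Rightarrow aabb$ then corresponds step by step to an accepting computation on `"aabb"`.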
\begin{lemma}\label{l1}
Let $M$ be an $\textnormal{\textbf{F1}}$ sensing $5'\rightarrow 3'$ WK automaton and let $w \in V^+$ be a word in $L(M)$ with $\left|w\right|=k$. Then for each $l$ with $0\leq l \leq k$, there is at least one word $w_l\in L(M)$ such that $\left|w_l\right|=l$.
\end{lemma}
\begin{proof}
According to the definition of $\textbf{F1}$ sensing $5'\rightarrow 3'$ WK automata, $w$ is accepted in $k$ steps such that in each step the automaton reads exactly one letter. Moreover, each state is final; therefore, by considering the first $l$ of the $k$ steps, the word $w_l=w'_lw''_l$ is accepted by $M$, where $w'_l$ and $w''_l$ are the parts read by the left and the right head, respectively, during these $l$ steps.
\end{proof}
\begin{remark}\label{R1}
Since, by definition, every $\textnormal{\textbf{N1}}$ sensing $5'\rightarrow 3'$ WK automaton is at the same time an $\textnormal{\textbf{F1}}$ WK automaton, Lemma \ref{l1} also applies to all $\textnormal{\textbf{N1}}$ sensing $5' \rightarrow 3'$ WK automata.
\end{remark}
\begin{theorem}\label{thm:2}
The class of languages that can be accepted by $\textnormal{\textbf{N1}}$ sensing $5'\rightarrow 3'$ WK automata is properly included in the language class accepted by $\textnormal{\textbf{NS}}$ sensing $5'\rightarrow 3'$ WK automata.
\end{theorem}
\begin{proof}
Obviously, these automata have exactly one state. In $\textbf{NS}$ machines the reading head may read several letters in a transition, while $\textbf{N1}$ machines must read the input letter by letter. The language $L=\{a^{3n} b^{2m}\mid n,m\geq 0\}$ proves the proper inclusion. In this language $w_s$ is $bb$, and in an $\textbf{NS}$ automaton it can be accepted by either of the transitions $(bb,\lambda)$ and $(\lambda,bb)$. However, by Lemma \ref{l1}, $w_s$ cannot be the shortest nonempty word of a language accepted by an $\textbf{N1}$ sensing $5'\rightarrow3'$ WK automaton. Figure \ref{tm:2} shows that the language $L$ can be accepted by an $\textbf{NS}$ sensing $5'\rightarrow3'$ WK automaton. Therefore, the proper inclusion stated in the theorem is proven.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=0.15]{Theorem2.png}
\caption{A sensing $5' \rightarrow 3'$ WK automaton of type $\textbf{NS}$ accepting the language $\{a^{3n} b^{2m}\mid n,m\geq 0\}$.}
\label{tm:2}
\end{figure}
\begin{theorem}\label{thm:3}
The class of languages that can be accepted by $\textnormal{\textbf{NS}}$ sensing $5'\rightarrow 3'$ WK automata is properly included in the language class accepted by $\textnormal{\textbf{N}}$ sensing $5'\rightarrow 3'$ WK automata.
\end{theorem}
\begin{proof}
The language $L=\{a^{2n+m} b^{2m+n}\mid n,m\geq0\}$ proves the proper inclusion. Suppose that there is an $\textbf{NS}$ sensing $5'\rightarrow 3'$ WK automaton that accepts $L$; it has exactly one state, and only one of its heads can move at a time. The $w_s$ of $L$ is $aab$ (or $abb$). It could only be accepted by one of the following loop transitions: $(aab,\lambda)$, $(\lambda,aab)$, $(abb,\lambda)$ or $(\lambda,abb)$. Each of these transitions leads to accepting a language different from $L$. For instance, using the transition $(aab,\lambda)$ several times, the language $\{(aab)^n\mid n\geq0\}$ is accepted, which is not a subset of $L$.
Therefore, the language $\{a^{2n+m} b^{2m+n}\mid n,m\geq0\}$ cannot be accepted by $\textbf{NS}$ sensing $5'\rightarrow 3'$ WK automata. Figure \ref{tm:3} shows that this language can be accepted by an $\textbf{N}$ sensing $5'\rightarrow 3'$ WK automaton. Hence, the theorem holds.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=0.15]{Theorem3N.png}
\caption{A sensing $5' \rightarrow 3'$ WK automaton of type $\textbf{N}$ accepting the language $\{a^{2n+m} b^{2m+n}\mid n,m\geq 0\}$.}
\label{tm:3}
\end{figure}
Now the concept of sensing $5' \rightarrow 3'$ WK automata with sensing parameter is recalled \cite{Nagy2009,Nagy2013}. Formally, a 6-tuple $M=(V,\rho,Q,q_0,F,\delta')$ is a sensing $5' \rightarrow 3'$ WK automaton with sensing parameter,
where, $V$, $\rho$, $Q$, $q_0$ and $F$ are the same as in our model and $\delta'$ is the transition mapping
defined by the sensing condition in the following way: \\
$\delta': \left(Q \times \left(\begin{array}{c}V^{*}\\ V^{*}\end{array}\right) \times D\right)\rightarrow 2^Q$, where the sensing distance set is indicated by $D=\{0,1,\dots,r,+\infty\}$ where $r$ is the radius of the automaton.
In $\delta'$, the distance between the two heads is taken from the set $D$: the actual distance is used if it is between $0$ and $r$, and $+\infty$ is used when the distance of the two heads is more than $r$. In this way, the set $D$ is an efficient tool to control the appropriate meeting of the heads:
when the heads are close to each other, only special transitions are allowed.
\\
The next three theorems highlight the difference between the new model and the model with sensing parameter.
\begin{theorem}\label{thm:4}
The class of languages that can be accepted by $\textnormal{\textbf{F1}}$ sensing $5' \rightarrow 3'$ WK automata is properly included in the language class of $\textnormal{\textbf{FS}}$ sensing $5' \rightarrow 3'$ WK automata.
\end{theorem}
\begin{proof}
Obviously, all states of these automata are final, and an $\textbf{F1}$ sensing $5' \rightarrow 3'$ WK automaton must read the input letter by letter, while an $\textbf{FS}$ sensing $5' \rightarrow 3'$ WK automaton may read several letters in a transition. To show the proper inclusion, consider the language $L=\{(aa)^n (bb)^m\mid m \leq n\leq m+1,m\geq 0\}$. Here $w_s$ is $aa$, and by Lemma \ref{l1}, it cannot be the shortest nonempty word accepted by an $\textbf{F1}$ sensing $5' \rightarrow 3'$ WK automaton. However, $L$ can be accepted by an $\textbf{FS}$ sensing $5' \rightarrow 3'$ WK automaton, as shown in Figure \ref{tm:4}. The theorem is proven.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=0.15]{Theorem4.png}
\caption{A sensing $5' \rightarrow 3'$ WK automaton of type $\textbf{FS}$ accepting the language $\{(aa)^n (bb)^m\mid m \leq n\leq m+1,m\geq 0\}$.}
\label{tm:4}
\end{figure}
\begin{theorem}\label{thm:5}
The language class accepted by $\textnormal{\textbf{FS}}$ sensing $5' \rightarrow 3'$ WK automata is properly included in the language class of $\textnormal{\textbf{F}}$ sensing $5' \rightarrow 3'$ WK automata.
\end{theorem}
\begin{proof}
The language $L=\{a^{2n+q} c^{4m}b^{2q+n}\mid n,q\geq 0,m\in \{0,1\} \}$ proves the proper inclusion. Let us assume, to the contrary, that $L$ is accepted by an $\textbf{FS}$ sensing $5' \rightarrow 3'$ WK automaton.
Let the radius of this automaton be $r$. Let $w=a^{2n+q}b^{2q+n}\in L$ with $ n,q\geq r$ such that $|w|=3n+3q>r$. Then the word $w$ cannot be accepted by using only one of the transitions (from the initial state $q_0$), i.e., $\delta(q_0,a^{2n+q}b^{2q+n},\lambda)$ or $\delta(q_0,\lambda,a^{2n+q}b^{2q+n})$ is not possible.
Therefore, by considering the position of the heads after using any of the transitions from the initial state $q_0$ in $\textbf{FS}$ sensing $5' \rightarrow 3'$ WK automaton (all states are final and one of the heads can move),
it is clear that either a prefix or a suffix of $w$ with length at most $r$ is accepted by the automaton.
But neither a word from $a^+$, nor from $b^+$ is in $L$.
This contradicts our assumption,
hence $L$ cannot be accepted by any $\textbf{FS}$ sensing $5' \rightarrow 3'$ WK automaton. However, $L$ can be accepted by an $\textbf{F}$ sensing $5' \rightarrow 3'$ WK automaton, since the two heads can move at the same time and read the blocks of $a$'s and $b$'s simultaneously. In Figure \ref{tm:5}, an all-final sensing $5' \rightarrow 3'$ WK automaton which accepts $L$ can be seen.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=0.15]{Theorem5.png}
\caption{A sensing $5' \rightarrow 3'$ WK automaton of type $\textbf{F}$ accepting the language $\{a^{2n+q} c^{4m}b^{2q+n}\mid n,q\geq 0, m\in \{0,1\} \}$.}
\label{tm:5}
\end{figure}
The following result also shows that the new model differs from the one
that is using the sensing parameter in its transitions.
\begin{theorem}\label{thm:6}
The language class accepted by $\textnormal{\textbf{F}}$ sensing $5' \rightarrow 3'$ WK automata is properly included in the language class of sensing $5' \rightarrow 3'$ WK automata.
\end{theorem}
\begin{proof}
The language $L=\{a^n cb^n c\mid n\geq 1\}$ can be accepted by a sensing $5' \rightarrow 3'$ WK automaton without restrictions (see Figure \ref{tm:6}).
Now we show that there is no $\textbf{F}$ sensing $5'\rightarrow3'$ WK automaton which accepts $L$. Assume, to the contrary, that the language $L$ is accepted by an $\textbf{F}$ sensing $5' \rightarrow 3'$ WK automaton. Let the radius of the automaton be $r$.
Let $w=a^mcb^mc \in L$ with $m\geq r$.
Thus the word $w$ cannot be accepted by applying exactly one transition from the initial state $q_0$.
Now, suppose that there exists $q\in \delta(q_0,w_1,w_2)$ such that $w$ can be accepted by using transition(s) from $q$. Since in an $\textbf{F}$ sensing $5' \rightarrow 3'$ WK automaton all states are final, the concatenation of $w_1$ and $w_2$ is also accepted, and thus it must be in $L$ (i.e., $w_1w_2\in L$). Therefore $w_1w_2=a^{m'}cb^{m'}c$ where $2m'+2\leq r\leq m$. To expand both blocks $a^+$ and $b^+$ in order to continue the accepting computation of $w$, the left head would have to be before, in, or right after the subword $a^{m'}$, and the right head would have to be right before, in, or right after the subword $b^{m'}$. However, this contradicts the fact that the two heads together have already read $a^{m'}cb^{m'}c$.
Hence, it is not possible to accept $w$ by an $\textbf{F}$ sensing $5' \rightarrow 3'$ WK automaton and the language $L$ cannot be accepted by an $\textbf{F}$ sensing $5' \rightarrow 3'$ WK automaton.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=0.15]{Theorem6.png}
\caption{A sensing $5' \rightarrow 3'$ WK automaton accepting the language $\{a^n cb^n c\mid n\geq 1\}$.}
\label{tm:6}
\end{figure}
\begin{proposition} \label{c3}
The language $L=\{a^n b^m\mid n=m$ or $n=m+1\}$ can be accepted by $\textnormal{\textbf{F1}}$ sensing $5' \rightarrow 3'$ WK automata, but cannot be accepted by $\textnormal{\textbf{N1}}$, $\textnormal{\textbf{NS}}$ and $\textnormal{\textbf{N}}$ sensing $5' \rightarrow 3'$ WK automata.
\end{proposition}
\begin{proof}
As it is shown in Figure \ref{c:3}, $L$ can be accepted by an $\textbf{F1}$ sensing $5' \rightarrow 3'$ WK automaton.
Suppose that $L$ can be accepted by an $\textbf{N}$ sensing $5' \rightarrow 3'$ WK automaton. The $w_s$ of $L$ is $a$; therefore at least one of the loop transitions $(a,\lambda)$ and $(\lambda,a)$ is possible from the only state.
Since this automaton has only one state, using any of these transitions leads to accepting $a^n$ for every $n\geq2$, although these words are not in $L$. Thus this language cannot be accepted by any $\textbf{N}$, $\textbf{N1}$ or $\textbf{NS}$ sensing $5' \rightarrow 3'$ WK automaton.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=0.15]{ex3.png}
\caption{An $\textbf{F1}$ sensing $5' \rightarrow 3'$ WK automaton accepting the language $L=\{a^n b^m\mid n=m$ or $n=m+1\}$.}
\label{c:3}
\end{figure}
\begin{remark}\label{R2}The following statements follow from Proposition \ref{c3}:
\begin{enumerate}[label=(\alph*)]
\item The class of languages that can be accepted by $\textnormal{\textbf{N1}}$ sensing $5'\rightarrow 3'$ WK automata is properly included in the language class accepted by $\textnormal{\textbf{F1}}$ sensing $5'\rightarrow 3'$ WK automata.
\item The class of languages that can be accepted by $\textnormal{\textbf{NS}}$ sensing $5'\rightarrow 3'$ WK automata is properly included in the language class accepted by $\textnormal{\textbf{FS}}$ sensing $5'\rightarrow 3'$ WK automata.
\item The class of languages that can be accepted by $\textnormal{\textbf{N}}$ sensing $5'\rightarrow 3'$ WK automata is properly included in the language class accepted by $\textnormal{\textbf{F}}$ sensing $5'\rightarrow 3'$ WK automata.
\end{enumerate}
\end{remark}
\subsection{Incomparability results}
\begin{theorem}\label{thm:18}
The class of languages that can be accepted by $\textnormal{\textbf{N}}$ sensing $5' \rightarrow 3'$ WK automata is incomparable with the classes of languages that can be accepted by $\textnormal{\textbf{FS}}$ and $\textnormal{\textbf{F1}}$ sensing $5' \rightarrow 3'$ WK automata under set theoretic inclusion.
\end{theorem}
\begin{proof}
The language $L=\{ww^R\mid w\in\{a,b\}^*\}$ can be accepted by an $\textbf{N}$ sensing $5' \rightarrow 3'$ WK automaton (Figure \ref{tm:11A}). Suppose that an $\textbf{FS}$ sensing $5' \rightarrow 3'$ WK automaton accepts $L$. Let the radius of this automaton be $r$. Let $w_2=w_1w_1^R\in L$ with $w_1=(bbbaaa)^m$ and $m>r$. The word $w_2$ cannot be accepted by using only one of the transitions from the initial state $q_0$, i.e., neither $\delta(q_0,w_1w_1^R,\lambda)$ nor $\delta(q_0,\lambda,w_1w_1^R)$ is possible (because the length of $w_2$ is greater than $r$). Therefore there exists either $q\in\delta(q_0, w_3w_3^R,\lambda)$, $w_3\in V^*$, or $q\in\delta(q_0, \lambda,w_3w_3^R)$, $w_3\in V^*$, such that $w_2$ can be accepted by using transition(s) from $q$.
Since the word $w_3w_3^R$ should be in the language $L$ (i.e., it is an even palindrome) and the length of $bbb$ and $aaa$ patterns in $w_2$ is odd, the only even palindrome proper prefix (suffix) of $w_2$ is $bb$. Thus $w_3w_3^R=bb$ must hold.
Without loss of generality, assume that there exists $q\in\delta(q_0,bb, \lambda)$ in the automaton.
By continuing the process,
we must have at least one of $q'\in \delta(q,w_4,\lambda)$ or
$q'\in \delta(q,\lambda,w_4)$ such that $bb w_4 \in L$ and $w_4$ is either the prefix or the suffix of the remaining unread part of word $w_2$, i.e., $ba^3(b^3a^3)^{m-1}(a^3b^3)^m$, with length less than $m$. Clearly, $w_4$ cannot be a prefix, and it can be only the suffix $bb$.
Thus, in $q'$ the unprocessed part of the input is $ba^3(b^3a^3)^{m-1}(a^3b^3)^{m-1}a^3b$.
Now the automaton must read a prefix or a suffix of this word, let us say $w_5$ such that $bbw_5bb \in L$, that is $w_5$ itself is an even palindrome, and its length is at most $r< m$.
But such a word does not exist: since the lengths of the $bbb$ and $aaa$ blocks in the unread part are odd, this part has no nonempty even palindrome prefix or suffix of length at most $r<m$.
We have arrived at a contradiction, thus $L$ cannot be accepted by any
$\textbf{FS}$ sensing $5' \rightarrow 3'$ WK automaton.
To prove the other direction, let us consider the language $L=\{a^n b^m\mid n=m \text{ or } n=m+1\}$. This language can be accepted by an $\textbf{F1}$ sensing $5' \rightarrow 3'$ WK automaton, as shown in Figure \ref{c:3}. Moreover, by Proposition \ref{c3}, this language cannot be accepted by any $\textbf{N}$ sensing $5' \rightarrow 3'$ WK automaton.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=0.15]{Theorem11A.png}
\caption{A sensing $5' \rightarrow 3'$ WK automaton of type $\textbf{N}$ accepting the language of even palindromes $\{ww^R\mid w\in\{a,b\}^* \}$.
}
\label{tm:11A}
\end{figure}
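The stateless ($\textbf{N}$) automaton of Figure \ref{tm:11A} accepts by repeatedly reading the same letter with the left ($5'$) head and the right ($3'$) head until the two heads meet. The following Python sketch is only an illustration of this acceptance condition (the function name is ours, not part of the formal model):

```python
def accepts_even_palindrome(word):
    """Simulate the stateless sensing WK automaton for {ww^R | w in {a,b}*}:
    each step consumes one identical letter with the left and right head;
    the word is accepted iff the heads meet with the whole input read."""
    left, right = 0, len(word)          # head positions on the same strand
    while right - left >= 2:
        if word[left] != word[right - 1]:
            return False                # no transition applies: reject
        left += 1                       # left head reads one letter
        right -= 1                      # right head reads the matching letter
    return left == right                # sensing: heads met, nothing unread

# even palindromes are accepted, other words are not
print(accepts_even_palindrome("abba"))   # True
print(accepts_even_palindrome("aba"))    # False (odd length)
print(accepts_even_palindrome("abab"))   # False
```

Odd-length words are rejected because the heads sense each other with one unread letter between them.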
\begin{theorem}\label{thm:19}
The language class accepted by $\textnormal{\textbf{NS}}$ sensing $5' \rightarrow 3'$ WK automata is incomparable with the language class accepted by $\textnormal{\textbf{F1}}$ sensing $5' \rightarrow 3'$ WK automata.
\end{theorem}
\begin{table}[t]
\centering
\captionof{table}{Some specific languages belonging to language classes accepted by various classes of WK automata. References to figures indicate a specific automaton that accepts the given language. \xmark \ indicates that the language cannot be accepted by the automaton type of the specific column. Trivial inclusions are also shown, e.g., in the first line $\textbf{N1}$ in, e.g., column $\textbf{F}$ means that every $\textbf{N1}$ automaton is, in fact, also an $\textbf{F}$ automaton.} \label{table:1}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{P{5.61cm} | c c c c c c c }
Language & $\textbf{N1}$ & $\textbf{NS}$ & $\textbf{N}$ & $\textbf{F1}$ & $\textbf{FS}$ &$\textbf{F}$ & WK \\
\hline
$\{a^nb^m\mid n,m\geq 0 \}$ &Fig. \ref{tm:2a}& $\textbf{N1}$ &$\textbf{N1}$ & $\textbf{N1}$ & $\textbf{N1}$ & $\textbf{N1}$ &$\textbf{N1}$ \\
$\{a^{3n}b^{2m}\mid n,m\geq 0 \}$ &\xmark& Fig. \ref{tm:2} & $\textbf{NS}$ & \xmark & $\textbf{NS}$ & $\textbf{NS}$ &$\textbf{NS}$ \\
$\{ww^R\mid w\in\{a,b\}^* \}$ &\xmark& \xmark &Fig. \ref{tm:11A} & \xmark &\xmark & $\textbf{N}$ & $\textbf{N}$ \\
$\{a^nb^m\mid n=m$ or $n=m+1 \}$ &\xmark& \xmark &\xmark & Fig. \ref{c:3} & $\textbf{F1}$ & $\textbf{F1}$ &$\textbf{F1}$ \\
$\{(aa)^n(bb)^m\mid m\leq n \leq m+1,m\geq 0 \}$ &\xmark& \xmark &\xmark & \xmark& Fig. \ref {tm:4} & $\textbf{FS}$ & $\textbf{FS}$ \\
$\{a^{2n+q}c^{4m}b^{2q+n}\mid n,q\geq 0, m\in \{0,1\} \}$ &\xmark& \xmark &\xmark & \xmark & \xmark & Fig. \ref{tm:5} &$\textbf{F}$ \\
$\{a^n cb^n c\mid n\geq 1\}$ &\xmark& \xmark &\xmark & \xmark & \xmark & \xmark &Fig. \ref{tm:6} \\
\end{tabular} \bigskip
\end{table}
\begin{proof}
Consider the language $L=\{a^{3n}b^{2m}\mid n,m\geq0\}$. An $\textbf{NS}$ sensing $5' \rightarrow 3'$ WK automaton can move only one of its heads at a time; therefore it can read three $a$'s by the left head or two $b$'s by the right head (see Figure \ref{tm:2}). However, the shortest nonempty word of $L$ is $w_s=bb$, and by Lemma \ref{l1} it cannot be the shortest nonempty word accepted by an $\textbf{F1}$ sensing $5' \rightarrow 3'$ WK automaton. Therefore, this language cannot be accepted by an $\textbf{F1}$ sensing $5' \rightarrow 3'$ WK automaton.
Now let us consider the language $L=\{a^n b^m\mid n=m \text{ or } n=m+1\}$. This language can be accepted by an $\textbf{F1}$ sensing $5' \rightarrow 3'$ WK automaton, as shown in Figure \ref{c:3}. By Proposition \ref{c3}, it was already shown that $L$ cannot be accepted by any $\textbf{N}$ sensing $5' \rightarrow 3'$ WK automaton, and obviously it cannot be accepted by any $\textbf{NS}$ sensing $5' \rightarrow 3'$ WK automaton either.
\end{proof}
\section{Conclusion}
\begin{figure}[t]
\centering
\includegraphics[scale=0.41]{hasse.png}
\caption{Hierarchy of sensing $5' \rightarrow 3'$ WK finite automata languages in a Hasse diagram (the language classes accepted by various types of sensing $5' \rightarrow 3'$ WK finite automata, the types of the automata are displayed in the figure with the abbreviations: $\textbf{N}$: stateless, $\textbf{F}$: all-final, $\textbf{S}$: simple, $\textbf{1}$: 1-limited; Lin stands for the class of linear context-free languages).
Labels on the arrows indicate where the proof of the proper containment was presented (Th stands for Theorems, Re stands for Remark). The language classes for which the containment is not shown are incomparable.}
\label{hasse}
\end{figure}
Comparing the new model to the old models, we should mention that the general model (the automata without any restrictions) has the same accepting power, i.e., it accepts the linear context-free languages, as the old sensing $5' \rightarrow 3'$ WK automata model with sensing parameter. However, by our proofs, the new model gives a finer hierarchy, as displayed in Figure \ref{hasse}. Table \ref{table:1} gives some specific languages that separate some of the language classes.
Further comparisons of related language classes and properties of the language classes defined by the new model are left to the future: in a forthcoming paper the deterministic variants are addressed.
It is also an interesting idea to see the connections of other formal models and our WK automata, e.g.,
similarities with some variants of Marcus contextual grammars \cite{MarcusCG} can be established.
\section*{Acknowledgements} The authors are very grateful to the anonymous reviewers for their comments and remarks.
\nocite{*}
\bibliographystyle{eptcs}
\section{Introduction}
Accurate prediction of electronic conductivity in dense plasmas has proved
to be a challenging problem. One approach is to use density functional theory molecular
dynamics (DFT-MD) coupled with the
Kubo-Greenwood formalism \cite{desjarlais02, desjarlais17, sjostrom15, holst11, french14, hu14}.
This method is thought to be accurate but is
limited in the problems it can be applied to due to its significant computational
expense. Moreover, fundamental questions like its treatment of electron-electron
collisional effects are still active areas of research \cite{desjarlais17}.
Experimentally, the subject is an active area of research \cite{sperling15} and
has a long history \cite{milchberg88, clerouin12, clerouin08, desilva98, benage99},
in part due to the difficulty in obtaining model independent measurements.
Another class of methods starts from the Boltzmann equation and introduces
a relaxation-time approximation \cite{lee84} in which electrons are scattered
from ions and other electrons. The question then becomes one of calculating
the electron-ion and electron-electron cross sections. In the degenerate electron limit
this approach becomes identical to the famous Ziman method \cite{ziman61,burrill16}.
To calculate the electron-ion cross section in dense plasmas
one needs to be able to model partial ionization induced by density and
temperature, and also the corresponding changes in the ionic structure.
Average atom models have long been used for this purpose
\cite{rinker88, scaalp, sterne07, ovechkin16, burrill16, perrot87, starrett12a}.
They are DFT based models that attempt to calculate the properties of
one averaged atom in the plasma. They are computationally efficient and
are well suited to making wide ranging tables of data \cite{rinker88}.
While on the face of it, it seems natural to couple these average atoms
to the relaxation time approximation and
thus calculate conductivities, it turns out that the results are sensitive
to exactly how this coupling is done. Specifically, one needs to
define an electron-ion scattering potential, and how this definition
is made strongly affects the resulting conductivities
\cite{ovechkin16, burrill16, starrett16b, perrot99}.
In this paper we explore a new method of coupling average atom models to the
relaxation-time approximation that extends the potential
of mean force from classical fluid theory \cite{baalrud13}
to the quantum domain. Thus, a new quantum potential of mean force is defined.
It includes correlations with electrons and ions
surrounding the central scatterer through the quantum fluid
equations known as the quantum Ornstein-Zernike equations \cite{chihara91}.
We find that the new potential leads to generally more accurate conductivity predictions when
compared to DFT-MD simulations and experiments for aluminum over
a wide range of conditions.
We also explore the influence of electron-electron collisions by
including a correction factor due to Reinholz {\it et al} \cite{reinholz15}, and find
that it plays a significant and important role in certain cases.
\section{Theoretical model\label{sec_mod}}
\subsection{Conductivity in terms of the electron relaxation time}
In the relaxation-time approximation the conductivity is given by
\footnote{Unless otherwise stated, atomic units are used throughout in
which $\hbar = m_e = e = k_B = a_B = 1$, and the symbols have their usual
meanings.}
\begin{equation}
\sigma_{DC} = \int_0^\infty \left( -\frac{df}{d\epsilon} \right) N_e(\epsilon) \tau_\epsilon d\epsilon
\end{equation}
where $\tau_\epsilon$ is the energy dependent relaxation time, $f$ is the Fermi-Dirac occupation factor,
\begin{equation}
N_e(\epsilon) = n_I^0 \int_0^\epsilon d\epsilon^\prime \chi(\epsilon^\prime)
\label{nee}
\end{equation}
and $\chi(\epsilon)$ is the density of states such that the number of valence electrons per atom
$\bar{Z}$ is
\begin{equation}
\bar{Z} = \frac{\bar{n}_e^0}{n_I^0} = \int_0^\infty d\epsilon \chi(\epsilon) f(\epsilon, \mu_e)
\label{zbar}
\end{equation}
Here $\mu_e$ is the chemical potential and $\bar{n}_e^0$ ($n_I^0$) is the density of
valence electrons (ions).
If we take the density of states to be its free electron form
\begin{equation}
\chi^{free}(\epsilon) = \frac{\sqrt{2 \epsilon}}{n_I^0 \pi^2 }
\end{equation}
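Given $\bar{n}_e^0$ and $T$, equation (\ref{zbar}) fixes the chemical potential $\mu_e$. The following sketch inverts that relation for the free-electron density of states by bisection (in atomic units; the grid sizes and the test density are illustrative choices, not values from this work):

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule (avoids version-dependent NumPy names)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def fermi(e, mu, T):
    # Fermi-Dirac occupation, exponent clipped to avoid overflow
    return 1.0 / (np.exp(np.clip((e - mu) / T, -700.0, 700.0)) + 1.0)

def electron_density(mu, T, emax=20.0, n=200001):
    """n_e = (1/pi^2) Int_0^inf f(e, mu) sqrt(2 e) de  (free-electron DOS)."""
    e = np.linspace(0.0, emax, n)
    return _trapz(fermi(e, mu, T) * np.sqrt(2.0 * e), e) / np.pi**2

def chemical_potential(n_e, T, lo=-10.0, hi=10.0):
    """Bisect mu so the free-electron density matches the target n_e
    (electron_density is monotonically increasing in mu)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if electron_density(mid, T) < n_e:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu = chemical_potential(n_e=0.1, T=0.01)
```

In the degenerate limit this reproduces the Fermi energy $(3\pi^2 \bar{n}_e^0)^{2/3}/2$, while at high temperature $\mu_e$ becomes large and negative.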
then
\begin{equation}
N_e^{free}(\epsilon) = \frac{ v^3 }{3 \pi^2 }
\end{equation}
where $\epsilon=v^2/2$. The resulting equation that we will use here is
\begin{equation}
\sigma_{DC} = \frac{1}{3\pi^2} \int_0^\infty \left( -\frac{df}{d\epsilon} \right) v^3 \tau_\epsilon d\epsilon
\end{equation}
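As a sanity check on this expression, note that for an energy-independent relaxation time an integration by parts gives $\sigma_{DC}=\bar{n}_e^0\,\tau$ in atomic units. A short numerical sketch (the values of $\mu_e$, $T$ and $\tau$ are arbitrary illustrative inputs):

```python
import numpy as np

def _trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sigma_dc(mu, T, tau, emax=20.0, n=200001):
    """sigma_DC = (1/3 pi^2) Int (-df/de) v^3 tau de, with v = sqrt(2 e)."""
    e = np.linspace(1e-9, emax, n)
    f = 1.0 / (np.exp(np.clip((e - mu) / T, -700.0, 700.0)) + 1.0)
    minus_dfde = f * (1.0 - f) / T      # -df/de for the Fermi-Dirac function
    return _trapz(minus_dfde * (2.0 * e)**1.5 * tau, e) / (3.0 * np.pi**2)

def n_e(mu, T, emax=20.0, n=200001):
    """free-electron density (1/pi^2) Int f sqrt(2 e) de."""
    e = np.linspace(0.0, emax, n)
    f = 1.0 / (np.exp(np.clip((e - mu) / T, -700.0, 700.0)) + 1.0)
    return _trapz(f * np.sqrt(2.0 * e), e) / np.pi**2

mu, T, tau = 0.5, 0.05, 10.0
# for constant tau, sigma_DC should equal n_e * tau
print(sigma_dc(mu, T, tau), n_e(mu, T) * tau)
```

The agreement of the two printed numbers confirms the normalization of the formula; an energy-dependent $\tau_\epsilon$ simply replaces the constant in the integrand.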
\subsection{The electron relaxation time's relation to the momentum transport cross section}
The relaxation time is related to the mean free path $\lambda_\epsilon$
by
\begin{equation}
\tau_\epsilon = \frac{\lambda_\epsilon} {v}
\end{equation}
which is in turn related to the momentum transport cross section $\sigma_{TR}(\epsilon)$
\begin{equation}
\lambda_\epsilon = \frac{1}{n_I^0 \sigma_{TR}(\epsilon) }
\end{equation}
hence
\begin{equation}
\tau_\epsilon = \frac{1}{n_I^0\,v\, \sigma_{TR}(\epsilon) }
\end{equation}
\subsection{The momentum transport cross section in terms of scattering phase shifts}
The momentum transport cross section is calculated from
\begin{equation}
\sigma_{TR}(p) = 2\pi \int_0^\pi d\theta
\frac{d\sigma}{d\theta}(\epsilon,\theta)
(1-\cos\theta)\sin\theta
\label{tr}
\end{equation}
where $p=m_e v$ and $\frac{d\sigma}{d\theta}(\epsilon,\theta)$ is the differential cross section for one
scatterer in the plasma. This can be calculated from the scattering phase shifts $\eta_l(\epsilon)$
once the single center scattering potential $V^{scatt}(r)$ is known.
\begin{equation}
\frac{d\sigma}{d\theta}(\epsilon,\theta) = \left| \pazocal{F}(\epsilon,\theta)\right|^2
\label{dsdt}
\end{equation}
where the scattering amplitude $\pazocal{F}$ is
\begin{equation}
\pazocal{F}(\epsilon,\theta)
= \frac{1}{p}
\sum\limits_{l=0}^{\infty} (2l+1) \sin \eta_l e^{\imath \eta_l} P_l(\cos\theta)
\label{scattamp}
\end{equation}
The angular integral in equation (\ref{tr}) can be carried out analytically, yielding
\begin{equation}
\sigma_{TR}(\epsilon) = \frac{4 \pi}{p^2} \sum\limits_{l=0}^{\infty} (l+1) \left( \sin\left( \eta_{l+1} - \eta_{l} \right)\right)^2
\label{tr2}
\end{equation}
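The equivalence of equations (\ref{tr})--(\ref{scattamp}) and the closed form (\ref{tr2}) can be checked numerically for a made-up set of phase shifts (the values of $p$ and $\eta_l$ below are arbitrary illustrative inputs):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

p = 1.3                                      # electron momentum (illustrative)
eta = np.array([0.8, 0.4, 0.15, 0.05, 0.0])  # phase shifts eta_l, zero beyond

# closed form: sigma_TR = (4 pi / p^2) sum_l (l+1) sin^2(eta_{l+1} - eta_l)
l = np.arange(len(eta) - 1)
sigma_closed = 4.0 * np.pi / p**2 * np.sum((l + 1) * np.sin(eta[1:] - eta[:-1])**2)

# direct angular integral of |F|^2 (1 - cos t) sin t, using x = cos t
x, w = leggauss(64)                          # Gauss-Legendre nodes/weights on [-1, 1]
coef = (2 * np.arange(len(eta)) + 1) * np.sin(eta) * np.exp(1j * eta)
F = legval(x, coef) / p                      # scattering amplitude F(e, theta)
sigma_num = 2.0 * np.pi * np.sum(w * np.abs(F)**2 * (1.0 - x))
```

Since $|\pazocal{F}|^2(1-x)$ is a polynomial in $x=\cos\theta$ of low degree, the Gauss-Legendre quadrature is exact and the two results agree to machine precision.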
\subsection{Choice of scattering potential}
To calculate the scattering amplitude $\pazocal{F}$ or momentum transfer cross section
$\sigma_{TR}(\epsilon)$ we need a scattering potential $V^{scatt}(r)$. The physical
picture inherent to our application of the relaxation time approximation is that electrons
scatter off one center at a time, and that the plasma
is made up of identical scattering centers. The question then becomes: what defines a scattering
center?
\subsubsection{Pseudoatom (PA) potential}
Let us assume that the plasma is made up of an ensemble of identical pseudoatoms. Each pseudoatom
is a nucleus and a spherically symmetric screening cloud of electrons with density $n_e^{PA}(r)$.
The total potential for the plasma is then
\begin{equation}
V({\bm r}) = \sum_{i=1}^N V^{PA}(|{\bm r} -{\bm R}_i|)
\label{v}
\end{equation}
where the sum is over the $N$ nuclei in the plasma and
\begin{equation}
V^{PA}(r) = -\frac{Z}{r} + \int d^3r^\prime \frac{n_e^{PA}(r^\prime)}{\left| {\bm r} - {\bm r}^\prime \right|}
+ V^{xc}[n_e^{PA}(r)]
\end{equation}
with $V^{xc}[n]$ being the contribution from electron exchange and correlation.
In the high energy limit the Born cross section becomes accurate and the scattering cross
section for the whole plasma is
\begin{equation}
\frac{d\sigma^{plasma}}{d\theta}(\epsilon,\theta) = \left| \frac{V({\bm q})}{2\pi} \right|^2
\end{equation}
where
\begin{equation}
q^2 = 2p^2[1-\cos\theta]
\end{equation}
and the Fourier transform of the potential is
\begin{equation}
V({\bm q}) = \int d^3r e^{-\imath {\bm r}\cdot{\bm q}} V({\bm r})
\end{equation}
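For the spherically symmetric potentials used throughout, this 3D Fourier transform reduces to a one-dimensional sine transform, $V(q) = (4\pi/q)\int_0^\infty r\sin(qr)\,V(r)\,dr$. A sketch of that reduction, checked against the known transform of a screened Coulomb (Yukawa) form $e^{-\kappa r}/r \to 4\pi/(q^2+\kappa^2)$ (the grid and $\kappa$ are illustrative choices):

```python
import numpy as np

def radial_ft(r, f, q):
    """3D Fourier transform of a spherically symmetric f(r):
    F(q) = (4 pi / q) Int_0^inf r sin(q r) f(r) dr (trapezoidal rule)."""
    integrand = r * np.sin(q * r) * f
    return 4.0 * np.pi / q * float(
        np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

kappa = 1.0
r = np.linspace(1e-6, 60.0, 600001)
f = np.exp(-kappa * r) / r                   # Yukawa (screened Coulomb) form

q = 0.7
analytic = 4.0 * np.pi / (q**2 + kappa**2)   # known closed form
numeric = radial_ft(r, f, q)
```

The exponential decay makes the truncation at finite $r_{\max}$ harmless; long-ranged (bare Coulomb) pieces would need to be handled analytically.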
Taking the Fourier Transform of equation (\ref{v}) and using the definition of the ionic structure factor
\begin{equation}
S_{\rm ii}(q) = \frac{1}{N}\left< \rho_{{\bm q}} \rho_{-{\bm q}} \right>
\label{sii_micro}
\end{equation}
where
\begin{equation}
\rho_{\bm q} = \sum_i e^{-\imath {\bm q} \cdot {\bm R}_i }
\label{micro}
\end{equation}
and the angular brackets indicate that the configurational average
has been taken, the differential cross section per scattering center becomes
\begin{equation}
\frac{d\sigma}{d\theta}(\epsilon,\theta) = S_{ii}(q) \left| \frac{V^{PA}({q})}{2\pi} \right|^2
\label{dsdt_pa}
\end{equation}
This is valid when the Born approximation is accurate (i.e. for weak scattering, typically high energy scattering electrons).
To return to the strong scatterer picture for which the Born approximation is invalid, one
approach \cite{burrill16} is to replace the Born cross section $|V^{PA}(q)/2\pi|^2$ with its t-matrix equivalent, equations
(\ref{dsdt}) and (\ref{scattamp}), i.e.
\begin{equation}
\frac{d\sigma}{d\theta}(\epsilon,\theta) = S_{ii}(q) \left| \pazocal{F}^{PA}(\epsilon,\theta)\right|^2
\label{dsdt_tpa}
\end{equation}
where $\pazocal{F}^{PA}$ is the scattering amplitude for the $V^{PA}(r)$ potential.
However, as the differential cross section now depends on
$S_{ii}(q)$, the angular integral in equation (\ref{tr}) can no longer be done analytically and must be done numerically
\footnote{From a numerical point of view the angular integral in equation (\ref{tr}) is not particularly difficult.}.
Using this approach the Born limit of the scattering cross section is recovered, and so the method
should be accurate at high temperatures or high densities where the scattering
electrons have high energies. However, at relatively lower densities and temperatures (like room temperature
and pressure), the Born cross section will be significantly in error, and therefore equation (\ref{dsdt_tpa}) may also be significantly
in error.
\subsubsection{Average atom (AA) potential}
An alternative and routinely used \cite{ovechkin16,faussurier15,pain10, rinker88, sterne07, rozsnyai08, starrett12a} definition of the scattering potential is to use an average atom potential $V^{AA}(r)$. There
are a number of realistic variations on how this should be defined but such variations are
relatively unimportant for present purposes. We define
\begin{equation}
V^{AA}(r) = -\frac{Z}{r} + \int_{r<R_{WS}} d^3r^\prime \frac{n_e^{AA}(r^\prime)}{\left| {\bm r} - {\bm r}^\prime \right|}
+ V^{xc}[n_e^{AA}(r)]
\end{equation}
where the integral and the potential are confined to the ion sphere with (Wigner-Seitz) radius $R_{WS}$.
The ion sphere is required to be charge neutral and typically $V^{AA}(r) = 0$ for $r>R_{WS}$.
In this approach
\begin{equation}
\frac{d\sigma}{d\theta}(\epsilon,\theta) = S_{ii}(q) \left| \pazocal{F}^{AA}(\epsilon,\theta)\right|^2
\label{dsdt_aa}
\end{equation}
In deriving this result \cite{evans73} one assumes a muffin-tin potential, and a crude estimate
of multiple scattering effects gives rise to the ionic structure factor $S_{ii}(q)$.
The average atom potential $V^{AA}(r)$ cannot recover the Born limit and equation (\ref{dsdt_aa})
will be incorrect in the high temperature or high density limit.
\begin{figure}
\begin{center}
\includegraphics[scale=0.40]{potentials.pdf}
\end{center}
\caption{(Color online) Examples of scattering potentials for two different
aluminum cases. In the top panel, for aluminum at 2.7 g/cm$^3$ and 2 eV,
the mean force potential is close to the
average atom potential (which only extends to the ion sphere radius).
In the bottom panel, for aluminum at 0.1 g/cm$^3$ and 2 eV, the mean force
potential is closer to the pseudoatom potential.
}
\label{potentials}
\end{figure}
\subsubsection{Potential of mean force (MF)}
We desire the potential felt by one electron as it scatters from one center. This scattering
does not happen in isolation as in the $V^{PA}(r)$ approximation (equation (\ref{dsdt_tpa})).
For classical (``$cl$'') particles the Ornstein-Zernike
equations can be used to define a potential of mean force
that includes the potential created by the central scatterer as well as its correlations with surrounding
scattering centers. In the hyper-netted chain closure approximation (HNC) this reads
\begin{eqnarray}
V^{MF,cl}(r) & = & V(r) - \frac{1}{\beta} \left( h(r) - C(r)\right)\nonumber\\
& = & V(r) + n^0 \int d^3 r^\prime \frac{ C(|{\bm r} -{\bm r}^\prime|) }{-\beta} h(r^\prime)
\end{eqnarray}
where $h(r)$ is the pair correlation function\footnote{$h(r)$ is simply related to the structure factor $S(q) = 1+n^0 h(q)$.},
$C(r)$ the direct correlation function, $n^0$ is the particle
density, $V(r)$ the direct interaction potential between two particles (e.g. the coulomb potential), and $\beta$ is the inverse
temperature. Such a
mean field potential has been used successfully for calculation of ionic transport quantities \cite{daligault16, baalrud13}. The
analogous electron-ion potential when considering quantal electrons is given by the quantum Ornstein-Zernike
equations as \cite{chihara91, starrett14b}
\begin{eqnarray}
V^{MF}(r) & = & V_{ie}(r) + n_i^0 \int d^3 r^\prime \frac{ C_{ie}(|{\bm r} -{\bm r}^\prime|) }{-\beta} h_{ii}(r^\prime) \nonumber\\
& & + \bar{n}_e^0 \int d^3 r^\prime \frac{ C_{ee}(|{\bm r} -{\bm r}^\prime|) }{-\beta} h_{ie}(r^\prime) \nonumber\\
\label{vmf1}
\end{eqnarray}
where $C_{ie}$ ($C_{ee}$) is the electron-ion (-electron) direct correlation function.
The quantum Ornstein-Zernike equations are valid when the electrons respond linearly to the ions, i.e. in the linear
response regime. Thus it is necessary to artificially separate the electrons into two groups: those that are
bound to the nucleus ($n_e^{ion}(r)$) and therefore respond very non-linearly, and those that are not bound ($n_e^{scr}(r)$), for which
linear response is reasonably accurate. This was the approach taken in \cite{starrett13,starrett14} where
the quantum Ornstein-Zernike equations were solved and the resulting pair distribution functions were
shown to be accurate. Adopting that model, $V_{ie}(r)$ becomes
\begin{equation}
V_{ie}(r) = -\frac{Z}{r} + \int d^3r^\prime \frac{n_e^{ion}(r^\prime)}{\left| {\bm r} - {\bm r}^\prime \right|}
+ V^{xc}[{n_e^{ion}}(r)]
\end{equation}
Using the relation \cite{starrett13}
\begin{equation}
C_{ij}(k) = -\beta V_{ij}^C(k) + \widetilde{C}_{ij}(k)\label{cij_tilde}
\end{equation}
where $V_{ij}^C(k) = Z_i Z_j 4\pi/k^2$ is the Coulomb potential in Fourier Space between two charges $Z_i$ and $Z_j$,
and collecting terms gives
\begin{eqnarray}
V^{MF}(r) & = & V^{PA}(r) + n_i^0 \int d^3 r^\prime \frac{ -\bar{Z} h_{ii}(r^\prime) + n_e^{ext}(r^\prime)}{|{\bm r} -{\bm r}^\prime|} \nonumber\\
& & + V^{xc}[n_e^{ext}(r)] \nonumber\\
& & + n_i^0 \int d^3 r^\prime \frac{ \widetilde{C}_{ie}(|{\bm r} -{\bm r}^\prime|) }{-\beta} h_{ii}(r^\prime)
\end{eqnarray}
with
\begin{eqnarray}
n^{ext}_e(r) & = & \int d^3 r^\prime n_e^{scr}(r^\prime) h_{ii}(|{\bm r} -{\bm r}^\prime|)
\end{eqnarray}
and $\bar{Z} = \bar{n}_e^0 / n_i^0 = \int d^3r\, n_e^{scr}(r)$.
$V^{MF}(r)$ is the scattering potential for one scattering center, and implicitly includes the ionic
structure ($S_{ii}(q) = 1 + n_i^0 h_{ii}(q)$), i.e., $S_{ii}(q)$ does not explicitly appear as in equations (\ref{dsdt_tpa}) and (\ref{dsdt_aa}) for
the pseudoatom and average atom differential cross sections. Hence, the angular integral in equation (\ref{tr}) can be carried out
analytically and we directly solve equation (\ref{tr2}).
\begin{figure}
\begin{center}
\includegraphics[scale=0.30]{qlb.pdf}
\end{center}
\caption{(Color online) The effect of electron-electron collisions on the electrical
conductivity of hydrogen at 40 g/cm$^3$. Compare with figure 3a of reference \cite{desjarlais17}.
Our calculations using the mean force potential (MF) agree well with the quantum Lenard-Balescu calculations
with only electron-ion collisions included (QLB ei). Including a correction factor that accounts for
electron-electron collisions due to Reinholz {\it et al} \cite{reinholz15}, our calculations
(MF+ee) fall into close agreement with the QLB calculation that include electron-electron collisions,
as well as DFT simulation results.
}
\label{ee}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.40]{al_10_30_kK.pdf}
\end{center}
\caption{(Color online)
Electrical conductivity of aluminum at 10 kK (top panel) and 30 kK (bottom panel) as calculated in the
present approach for the three potentials $V^{AA}(r)$, $V^{PA}(r)$ and $V^{MF}(r)$. Comparisons
are made to QMD simulations of Desjarlais et al \cite{desjarlais02} and to the experiments
of DeSilva {\it et al} \cite{desilva98} and Cl\'erouin {\it et al} \cite{clerouin08}. MF+ee
refers to calculation using $V^{MF}(r)$ and explicitly accounting for electron-electron
collisions using the fit formula of reference \cite{reinholz15}.
}
\label{fig_des}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{al_clerouin.pdf}
\end{center}
\caption{(Color online) Electrical resistivity of aluminum at 0.1 g/cm$^3$ (top panel) and
0.3 g/cm$^3$ (bottom panel). Experiments are from Cl\'erouin {\it el at }\cite{clerouin12} and
DeSilva {\it et al} \cite{desilva98}. Also shown are DFT-MD (QMD) results of Desjarlais {\it et al} \cite{desjarlais02}.
Present results use the three potentials discussed in the text: $V^{AA}(r)$, $V^{PA}(r)$ and $V^{MF}(r)$,
as well as the effect of electron-electron collisions for $V^{MF}(r)$ (MF+ee).
Also shown is Sesame 29371 which is based on the models of references \cite{desjarlais01,lee84}.
}
\label{fig_cl}
\end{figure}
In the high temperature regime $h_{ii}(r) \to 0$ $\forall r$, and $\widetilde{C}_{ie}(r) \to 0$ $\forall r$,
hence $V^{MF}(r) \to V^{PA}(r)$, and the Born limit will be recovered.
It is interesting to note that the potential of mean force obtained above (equation (\ref{vmf1})) is the same
as the potential used in Chihara's QHNC model \cite{chihara91}; there, however, it has not been used for conductivity calculations.
In figure \ref{potentials} examples of these three potentials $V^{AA}(r)$, $V^{MF}(r)$ and $V^{PA}(r)$
are shown for aluminum at 2 eV and 2.7 g/cm$^3$ (top panel) and 0.1 g/cm$^3$ (bottom panel). For the higher density
case correlations are important and $V^{MF}(r)$ is close to $V^{AA}(r)$, whereas for the lower density case
correlations are less important and $V^{MF}(r)$ is closer to $V^{PA}(r)$.
\section{Numerical results}
To generate the potentials needed for calculation of the conductivities ($V^{AA}(r)$, $V^{PA}(r)$ and $V^{MF}(r)$) we have used the model of references
\cite{starrett13, starrett14}. In this model DFT is used to determine the electronic structure of one average
atom in the plasma. This provides a closure relation for the quantum Ornstein-Zernike equations,
which are thus solved self-consistently for all quantities of interest (e.g., $S_{ii}(k)$, $C_{ie}(k)$, $\bar{n}_e^0$, etc.).
This model has been shown to be realistic for equation of state \cite{starrett16} and
ionic structure \cite{starrett15b}. It is a plasma model, and as such is most accurate at elevated temperatures. For example,
for aluminum at solid density realistic structure factors were predicted for temperatures greater than $\sim$1 eV \cite{starrett13}.
A numerical issue for the solution of equation (\ref{tr2}) at very high temperatures is discussed in the appendix.
The conductivity model discussed in section \ref{sec_mod} includes electron collisions with ions in the plasma. There is no explicit account
of electron-electron collisions. This difficult problem has been discussed by a number
of authors \cite{desjarlais17, reinholz15, potekhin96, kuhlbrodt01, wardana06}. Here we
test the effect of including electron-electron collisions by using the fit formula
due to Reinholz {\it et al} \cite{reinholz15}. This formula takes as input
the ion density, temperature and the average ionization (which is provided by
the model of references \cite{starrett13, starrett14}). Very recently, the
effect of electron-electron collisions on electrical conductivity was considered in
detail \cite{desjarlais17} by comparing quantum Lenard-Balescu calculations
that explicitly account for both electron-electron and electron-ion collisions, to
DFT-MD simulations that use the Kubo-Greenwood formalism. In figure \ref{ee}
we compare to the results of reference \cite{desjarlais17} for hot, dense hydrogen.
From the figure it is clear that including only electron-ion collisions in the
present method leads to good agreement with the QLB results that also only include
electron-ion collisions. Adding the electron-electron collision factor to the mean force
results, we now see agreement with the QLB calculations that also explicitly account for these,
as well as good agreement with the DFT results. This comparison strongly indicates
that it is necessary to explicitly account for electron-electron collisions in our
relaxation time approach.
In figure \ref{fig_des} we compare to the experiments of references \cite{desilva98} and
\cite{clerouin08} for aluminum at two temperatures (10 kK and 30 kK) as a function
of density. We also show DFT-MD (also known as QMD) results from reference \cite{desjarlais02},
which is a less approximate method than the present, but is much more computationally expensive.
Results using $V^{AA}(r)$ are in reasonable agreement with the QMD calculations at high densities, but
tend to be too large at lower densities, compared to the experimental data. In contrast,
$V^{PA}(r)$ gives reasonable results at lower densities, but underestimates the QMD results
at high densities. $V^{MF}(r)$ gives reasonable, but not perfect, agreement at both low and
high densities, for both temperatures. Including the electron-electron collision factor, agreement
with the experiment and the DFT-MD simulations for the mean force potential is markedly improved.
This further strengthens the case that electron-electron collisions should be explicitly accounted for
when using the relaxation time approach. We note that the correction factor is the same
for all three potentials, but its effect is only shown for $V^{MF}(r)$ for clarity.
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{al_2p7gpcc.pdf}
\end{center}
\caption{(Color online)
Electrical conductivity of aluminum at 2.7 g/cm$^3$ as calculated in the
present approach for the three potentials $V^{AA}(r)$, $V^{PA}(r)$ and $V^{MF}(r)$, compared
to the experiment of Milchberg {\it et al} \cite{milchberg88}. Also shown are DFT-MD
results due to Sjostrom {\it et al} \cite{sjostrom15} and Sesame 29371.
}
\label{fig_mil}
\end{figure}
It should also be noted that the range of conditions plotted in figure \ref{fig_des} is a particularly
challenging regime to model due to the relocalization of electrons as density is lowered.
This relocalization explains the structures in the present results, which are a consequence
of the spherical symmetry in the underlying model \cite{starrett13,starrett14}, and
are probably too pronounced.
Also shown in the figure are Sesame 29371 curves, which were designed to fit the experiments \cite{desilva98} and as a consequence
agree very well with those experiments. At the highest densities shown, the Sesame
curve underestimates the QMD calculations, particularly at 30 kK.
In figure \ref{fig_cl} calculations of the resistivity
of warm dense aluminum at 0.1 g/cm$^3$ and 0.3 g/cm$^3$ are compared
to the experimental results of Cl\'erouin {\it et al} \cite{clerouin12, clerouin08}
and DeSilva {\it et al} \cite{desilva98} as well as DFT-MD results from reference \cite{desjarlais02}.
Results using all three types of potential are shown.
All the theory curves are reasonably close to each other for temperatures
greater than 2 eV. For temperatures lower than this the variance significantly increases.
Compared to the Cl\'erouin {\it et al} data the mean-force potential gives reasonable agreement
for both densities, whereas neither $V^{AA}(r)$ nor $V^{PA}(r)$
gives a consistent level of agreement with the data at both densities. $V^{AA}(r)$
is closer to the data at the higher density, where correlations with surrounding ions are relatively important,
while $V^{PA}(r)$ is closer at the lower density, where the influence of correlations
is reduced. While the agreement of the $V^{MF}$ calculations is not perfect, it should
be remembered that neither is the underlying model \cite{starrett13,starrett14} or
the experiments, which themselves do not agree with each other.
The effect of electron-electron collisions is significant for the lower temperatures.
We note that the temperatures of the experiments of Cl\'erouin {\it et al} are obtained through comparison
with DFT-MD simulations \cite{clerouin12} and are not measured directly.
For completeness we have also compared
to Sesame calculations that are based on the model of \cite{desjarlais01}. The
Sesame results tend to overestimate the Cl\'erouin {\it et al} data. They are fit
to the DeSilva {\it et al} data \cite{desjarlais01} and are therefore in good agreement with those
experiments.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.3]{compare_tf.pdf}
\end{center}
\caption{(Color online)
Electrical conductivity of aluminum using the $V^{MF}(r)$ potential (no electron-electron factor).
Results using a potential created by a Kohn-Sham calculation are compared to those created using a Thomas-Fermi
calculation for three densities. For the cases shown the two calculations agree
well for temperatures greater than roughly 25 eV.
}
\label{fig_tf}
\end{figure}
In figure \ref{fig_mil} we compare to the experiment of reference \cite{milchberg88}. This
experiment reported results for solid density aluminum up to high temperatures. We also
compare to DFT-MD results from reference \cite{sjostrom15}. At the lower temperatures (20 eV and below)
the results using $V^{MF}(r)$ agree well with the experiment. They underestimate the
DFT-MD results of \cite{sjostrom15} by $\sim 20\%$ at 1 eV. Note that the increase in conductivity
in going from 5 to 10 eV reported by Sjostrom {\it et al} \cite{sjostrom15} may be an artifact
of the pseudopotential used \cite{sjostrom_p}. The $V^{PA}(r)$ results
significantly underestimate the conductivity in the low temperature regime compared
to both the DFT-MD and experimental results. The $V^{AA}(r)$ results over predict the
conductivity in this regime. At higher temperatures none of the results from the potentials match the
experiment. However, it has been pointed out \cite{dwp92} that the temperatures
reported in \cite{milchberg88} are dependent on a model for ionization, and that
different models yield significantly different temperatures (e.g. the 40 eV point
becomes 23.5 eV). Moreover, while dc conductivity was reported, it was an ac conductivity
at 4.026 eV that was measured. Finally, we also show the Sesame result. This curve
has a much sharper drop at low temperatures, and a different slope as temperature is
increased. Over the temperature range shown the maximum difference between Sesame and the
$V^{MF}(r)$ curve is a factor of $\sim 2$, but is generally closer. The effect of
electron-electron collisions is small due to the relatively high degeneracy of the plasma
and relatively large electron-ion collision cross section.
While generation of the potential $V^{MF}(r)$ using the model of references \cite{starrett13,starrett14}
is computationally inexpensive relative to DFT-MD simulations, it still takes 30 minutes to 1 hour per
density-temperature point. With a view to generating tables of conductivity data it is desirable to find
an even more computationally efficient model. One option is to use a Thomas-Fermi based DFT model, in place
of the Kohn-Sham description used above \cite{starrett13,starrett14}. With such a model it takes seconds to generate
$V^{MF}(r)$. In figure \ref{fig_tf} we compare conductivities using the Kohn-Sham and Thomas-Fermi methods of generating
$V^{MF}(r)$. We find that for the densities shown the Thomas-Fermi and Kohn-Sham results agree very closely for
temperatures greater than 25 eV. The Thomas-Fermi model therefore offers a rapid shortcut to realistic potentials
for sufficiently high temperatures.
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{compare_sandia.pdf}
\end{center}
\caption{(Color online)
Electrical conductivity of aluminum using the $V^{MF}(r)$ potential
compared to Sesame 29371 \cite{desjarlais01,lee84}. The mean force
calculations suggest that conductivity is significantly more sensitive to density than
the Sesame model indicates.
}
\label{fig_san}
\end{figure}
In figure \ref{fig_san} we compare calculations based on the present $V^{MF}(r)$ model to the Sesame table 29371, based on the
model of \cite{desjarlais01,lee84}, up to 1000 eV. We find that the present model shows much more sensitivity to density
than Sesame 29371 does and that the effect of electron-electron collisions remains relatively small even at these
high temperatures. At 1000 eV the present model is $\sim$ 13 \% larger for 2.7 g/cm$^3$, while at 0.027 g/cm$^3$,
the present model gives conductivities $\sim$ 25 \% lower.
\section{Conclusions}
We have presented a potential of mean force that, coupled with the relaxation time approximation, is used to calculate
electrical conductivity of dense plasmas. Compared to previously used potentials we found improved agreement
with experiments and benchmark calculations across a range of temperatures and densities for aluminum.
We note that the new potential takes into account the self-consistent ionic structure factor implicitly, in contrast
to other methods where the structure factor appears explicitly. This is important because to generate the potential
of mean force, one needs a model that self-consistently solves the quantum Ornstein Zernike equations, whereas previously
the scattering potential and structure factor have often been generated by distinct models.
In applying the new model the influence of electron-electron collisions was included using the fit formula of reference \cite{reinholz15}.
It was found that for dense hydrogen this was essential for agreement with other methods that are thought to be
accurate, indicating the importance and correctness of their inclusion. In comparison with experiments on aluminum
by DeSilva {\it et al} \cite{desilva98}, it was found that inclusion of electron-electron collisions significantly improved
agreement with both the experiments and benchmark calculations.
We also tested the effect of generating the potential of mean force with a Thomas-Fermi DFT based model as opposed to the more
accurate Kohn-Sham DFT calculation. It was found that for aluminum from 0.027 to 2.7 g/cm$^3$ the TF based model was accurate
for temperatures greater than $\sim$25 eV, a fact that greatly speeds up the calculations. It is important to realise
that while the potential can be generated using the TF based model, the conductivity itself is based on a calculation of
the phase shifts in the usual way.
The new method was compared to Sesame table 29371 for aluminum and was found to have a significantly greater dispersion with respect
to density at high temperature than Sesame. At low temperatures significant differences (up to a factor of $\sim$ 2) were also observed.
While we have demonstrated that the new potential results in generally improved agreement with benchmark calculations and experiments,
the model as a whole is not perfect and could be further improved in a number of ways. Perhaps
the most apparent error is that the potential assumes an average ionic configuration, instead of a weighted range of configurations. This leads
to structures in the conductivity as a function of density and temperature that are likely to be too pronounced. This predominantly affects
the lower densities and temperatures considered here. Other potential sources of error or uncertainty include the influence
of a more realistic density of states factor \cite{starrett16}, or the choice of exchange and correlation potential in the underlying
DFT calculation \cite{ovechkin16}.
We have focused mainly on aluminum, for which the most experimental data are available, but there is no fundamental
restriction and the method should be applicable to other materials. Also, it is straightforward to extend to the calculation
of thermal conductivity. However, recent investigations \cite{desjarlais17} point to the need for care with such calculations with
respect to the inclusion of electron-electron collisions. Finally, we point out that this new potential is also
applicable to opacity calculations \cite{shaffer17} where it could aid in the assessment of the influence of
ionic structure on opacity \cite{krief16}.
\section*{Acknowledgments}
The author acknowledges useful conversations with S. Baalrud and thanks T. Sjostrom for providing his DFT-MD data and
M. Desjarlais for useful comments on the manuscript.
This work was performed under the auspices of the United States Department of Energy under contract DE-AC52-06NA25396.
\section{Introduction}
Pedestrian emergency evacuation is a movement of people from a place of danger to a safer place in case of life-threatening incidents such as fire and terrorist attacks. Numerical simulation has been a popular approach for pedestrian emergency evacuation studies, for instance, predicting the total evacuation time in a classroom~\cite{Guo_TrB2012} and preparing an optimal evacuation plan for a large-scale pedestrian facility~\cite{Abdelghany_EJOR2014}.
Based on numerical simulations, it has been identified that evacuees often come into conflict with others when two or more evacuees try to move to the same position~\cite{Yanagisawa_PRE2007}. Game theory has been used to model strategic interactions among evacuees in such conflicts. Under game-theoretic assumptions, each evacuee has his own strategies and selects a strategy so as to maximize his own payoff. Various emergency evacuation simulations have been performed based on different game theory models, including the evolutionary game~\cite{Hao_PRE2011}, the snowdrift game~\cite{Shi_PRE2013}, and the spatial game~\cite{Heliovaara_PRE2013,vonSchantz_PRE2015}.
Although these game theory models successfully modeled evacuees' egress, especially from a room, other aspects of evacuee behavior such as helping behavior have not been sufficiently studied. In the context of emergency evacuations, it has been reported that evacuees help injured evacuees to escape from the place of danger, for instance in The Who concert disaster, which occurred on December 3, 1979 in Cincinnati, Ohio, United States~\cite{Johnson_1987}, and the 2005 London bombings in the United Kingdom~\cite{Drury_2009}.
A few studies have investigated helping behavior in emergency evacuation by means of pedestrian simulation. Von Sivers~\textit{et al.}~\cite{vonSivers_PED2014,vonSivers_SafetySci2016} applied social identity and self-categorization theories to pedestrian simulation in order to simulate the helping behavior observed in the 2005 London bombings. In their studies, they assumed that all the evacuees share the same social identity, which makes them willing to help others rather than be selfish. Lin and Wong~\cite{Lin_PRE2018} applied the volunteer's dilemma game~\cite{Diekmann_JCR1985,Diekmann_SF2016} to model the behavior of volunteers who removed obstacles from the exit. Their work can be considered a helping behavior modeling study in that some evacuees voluntarily removed the obstacles, thereby helping others in the same room to evacuate faster.
One can observe that such helping behavior provides a collective good in emergency evacuations. This is especially true when there are not enough rescuers: more injured persons can be rescued with the help of other evacuees than by the rescuers alone. In order to study the collective effects of helping behavior in emergency evacuations, we have developed an agent-based model simulating such helping behaviors among evacuees. With the agent-based model, we represent individual behaviors with a set of behavioral rules and then systematically study the collective dynamics of interacting individuals. In our agent-based model, we assume that helping an injured person can be costly, because the volunteer spends extra time and takes a risk to assist the injured person in the evacuation. If individuals feel that helping is too costly for them, they might not turn into volunteers. We therefore implemented the volunteer's dilemma game model~\cite{Diekmann_JCR1985,Diekmann_SF2016} to reflect the cost of helping behavior. Pedestrian movement is simulated based on the social force model~\cite{Helbing_PRE1995}.
The remainder of this paper is organized as follows. The simulation model and its setup are explained in Sec.~\ref{section:method}. We then present its numerical simulation results with a phase diagram in Sec.~\ref{section:results}. Finally, we discuss the findings of this study in Sec.~\ref{section:conclusion}.
\section{Method}
\label{section:method}
\subsection{Volunteer's Dilemma Game}
\label{section:VDG}
We employ the volunteer's dilemma game model to study the helping behavior of passersby in a room evacuation~\cite{Diekmann_JCR1985,Diekmann_SF2016}. A passerby is an evacuee who is not injured and can play the volunteer's dilemma game. According to the volunteer's dilemma game, two types of players are considered: passerby $i$ can be either a volunteer (C) who helps an injured person to evacuate or a bystander (D) who does not help the injured person. Once a passerby decides to be a volunteer, he approaches and then rescues the injured person. We can express the payoff of player $i$ in terms of the collective good $U$ and the volunteering cost $K < U$, see Table~\ref{table:payoff_matrix}. The payoff of a bystander (D) is $U$ if there is at least one volunteer, and 0 otherwise. That is, bystanders benefit from the volunteer. However, if nobody volunteers, the collective good $U$ cannot be produced because all the players are bystanders. The collective good $U$ is produced by volunteers when they rescue injured persons. For simplicity, we assume that the value of $U$ is constant if there is at least one volunteer. The payoff of a volunteer (C) is always $U-K$, indicating that his payoff is constant regardless of the other players' choices.
\begin{table}[]
\normalsize
\setlength{\tabcolsep}{6pt}
\centering
\caption{Payoff of a volunteer (C) and a bystander (D) for the different number of other players choosing C (based on Refs.~\cite{Diekmann_JCR1985,Diekmann_SF2016}). Here, $U$ is the collective good, $K < U$ is the volunteering cost, and $N \geq 2$ is the number of players.}
\label{table:payoff_matrix}
\resizebox{12cm}{!}{
\begin{tabular}{c*{6}{c}}
\hline\hline
\multirow{2}{*}{Player $i$'s choice}& \multicolumn{5}{c}{The number of other players choosing C}\\
& 0 & 1 & 2 & ... & $N-1$\\
\hline
Volunteer (C) & $U-K$ & $U-K$ & $U-K$ & $U-K$ & $U-K$ \\
Bystander (D) & 0 & $U$ & $U$ & $U$ & $U$ \\
\hline\hline
\end{tabular}
}
\end{table}
Player $i$'s expected payoff $E_i$ is given by:
\begin{eqnarray}\label{eq:payoff_expected}
E_i = q_i \left( 1-\prod_{j \neq i}^{N} q_j \right)U + (1-q_i)(U-K).
\end{eqnarray}
\noindent
Here, $q_i$ is the probability that player $i$ chooses D and $1-q_i$ the probability that he chooses C. The number of players is denoted by $N$. The probability that all players $j \neq i$ choose D is $\prod_{j \neq i} q_j$, and $1-\prod_{j \neq i} q_j$ is the probability that at least one player $j \neq i$ chooses C. The first term on the right-hand side reflects the payoff of player $i$ when he selects D but benefits because there is at least one volunteer. The second term on the right-hand side gives the payoff of player $i$ if he selects C.
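As a concrete check of Eq.~(\ref{eq:payoff_expected}), the expected payoff can be evaluated numerically. The following is an illustrative Python sketch (not the authors' C++ implementation), assuming every other player defects with the same probability:

```python
# Illustrative sketch of the expected payoff
#   E_i = q_i * (1 - prod_{j != i} q_j) * U + (1 - q_i) * (U - K),
# assuming all other players defect with the same probability q_other.
def expected_payoff(q_i, q_other, n_players, U, K):
    prod_q = q_other ** (n_players - 1)  # probability that all j != i choose D
    return q_i * (1.0 - prod_q) * U + (1.0 - q_i) * (U - K)
```

When $q_j$ for the other players equals $(K/U)^{1/(N-1)}$, the payoff becomes independent of $q_i$: both pure choices yield $U-K$, which is the indifference property underlying the mixed-strategy equilibrium.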
We assume that player $i$ adopts the mixed strategy that is best for him. In a mixed-strategy equilibrium, every action played with positive probability must be a best response to the other players' mixed strategies. This implies that player $i$ is indifferent between choosing C and D, so the derivative of the payoff $E_i$ with respect to $q_i$ (i.e., the probability of choosing D) is zero:
\begin{eqnarray}\label{eq:mixed-strategy-equilibrium}
\frac{dE_i}{dq_i} = -U \prod_{j \neq i}^{N} q_j + K = 0.
\end{eqnarray}
\noindent
After assuming $q_i = q_j$, we obtain the probability that player $i$ chooses D:
\begin{eqnarray}\label{eq:q_i}
q_i = \left[ \frac{K}{U} \right]^{\frac{1}{N-1}} = \beta^{\frac{1}{N-1}},
\end{eqnarray}
\noindent
where $\beta = K/U$ is the cost ratio, which can be interpreted as the risk of volunteering. Accordingly, the probability that player $i$ chooses C is given by
\begin{eqnarray}\label{eq:p_i}
p_i = 1-q_i = 1-\beta^{\frac{1}{N-1}}.
\end{eqnarray}
\noindent
The probability that at least one player selects C is denoted by $p^{*}$, i.e.,
\begin{eqnarray}\label{eq:p_star}
p^{*}= 1-q_{i}^{N} = 1-\beta^{\frac{N}{N-1}}.
\end{eqnarray}
Equations~\ref{eq:p_i} and~\ref{eq:p_star} reproduce the bystander effect, see Fig.~\ref{fig:probability}. Figure~\ref{fig:probability}(a) shows a decreasing trend of $p_i$ as the number of players $N$ increases, suggesting that players are less likely to volunteer, seemingly because they believe other players will volunteer. Note that social pressure from other players is not considered here, so the existence of volunteers does not affect players' behavior. Figure~\ref{fig:probability}(b) presents the trend of $p^{*}$, which reflects the chance that an injured person is rescued. As the number of players $N$ increases, the value of $p^{*}$ approaches $1-\beta$.
\begin{figure}[!t]
\centering
\begin{tabular}{cc}
\includegraphics[width=.49\columnwidth]{probability_p_i.pdf}&
\includegraphics[width=.49\columnwidth]{probability_p_star.pdf}\\
\end{tabular}
\caption{Bystander effect on helping behavior: (a) $p_i$, the probability that player $i$ volunteers to rescue an injured person and (b) $p^{*}$, the probability that an injured person is rescued.}
\label{fig:probability}
\end{figure}
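A minimal numerical sketch of Eqs.~(\ref{eq:q_i})--(\ref{eq:p_star}) makes the bystander effect easy to reproduce (illustrative Python, not the C++ simulation code):

```python
def volunteer_probabilities(beta, n_players):
    """Mixed-strategy probabilities of the volunteer's dilemma game.

    beta      : cost ratio K/U, with 0 < beta < 1
    n_players : number of players N >= 2
    Returns (p_i, p_star): the probability that a given player volunteers,
    and the probability that at least one of the N players volunteers.
    """
    q_i = beta ** (1.0 / (n_players - 1))                    # P(player i chooses D)
    p_i = 1.0 - q_i                                          # P(player i chooses C)
    p_star = 1.0 - beta ** (n_players / (n_players - 1.0))   # P(at least one C)
    return p_i, p_star
```

For example, with $\beta = 0.1$, $p_i$ drops from 0.9 at $N=2$ toward zero as $N$ grows, while $p^{*}$ tends to $1-\beta = 0.9$.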
\subsection{Social Force Model}
\label{section:SFM}
According to the social force model~\cite{Helbing_PRE1995}, the position and velocity of each pedestrian $i$ at time $t$, denoted by $\vec{x}_i(t)$ and $\vec{v}_i(t)$, evolve according to the following equations:
\begin{eqnarray}\label{eq:SFM_v}
\frac{\mathrm{d} \vec{x}_i(t)}{\mathrm{d} t} = \vec{v}_i(t)
\end{eqnarray}
\noindent
and
\begin{eqnarray}\label{eq:SFM_acc}
\frac{\mathrm{d} \vec{v}_i(t)}{\mathrm{d} t} = \vec{f}_{i, d} + \sum_{j\neq i}^{ }{\vec{f}_{ij}} + \sum_{B}^{ }{\vec{f}_{iB}}.
\end{eqnarray}
\noindent
In Eq.~(\ref{eq:SFM_acc}), the driving force term $\vec{f}_{i, d} = (v_d\vec{e}_i - \vec{v}_i)/\tau$ describes the tendency of pedestrian $i$ to move toward his destination. Here, $v_d$ is the desired speed and $\vec{e}_i$ is a unit vector indicating the desired walking direction of pedestrian $i$. The relaxation time $\tau$ controls how quickly the pedestrian adapts his velocity to the desired velocity. The repulsive force terms $\vec{f}_{ij}$ and $\vec{f}_{iB}$ reflect his tendency to keep a certain distance from other pedestrians $j$ and from the boundary $B$, e.g., walls and obstacles. A more detailed description of the social force model can be found in previous studies~\cite{Helbing_PRE1995,Johansson_2007,Kwak_PRE2013,Viswanathan_EPJB2013}.
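The driving force term can be sketched as follows (an illustrative Python snippet, not the authors' C++ code; only the driving force is shown, since the repulsive terms $\vec{f}_{ij}$ and $\vec{f}_{iB}$ depend on the specific force parametrization):

```python
import numpy as np

def driving_force(v_i, e_i, v_d=1.2, tau=0.5):
    """Driving force f_d = (v_d * e_i - v_i) / tau of the social force model.

    v_i : current velocity vector of pedestrian i (m/s)
    e_i : unit vector of the desired walking direction
    v_d : desired speed (m/s); tau : relaxation time (s)
    """
    return (v_d * np.asarray(e_i, dtype=float)
            - np.asarray(v_i, dtype=float)) / tau
```

A pedestrian at rest accelerates toward his desired velocity, while one already walking at the desired velocity feels no driving force.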
\subsection{Numerical Simulation Setup}
\label{section:setup}
\begin{figure}[!htp]
\centering
\includegraphics[width=9cm]{room_layout.pdf}
\caption{Schematic depiction of the numerical simulation setup. 100 pedestrians are placed in a 10m$\times$10m room indicated by a yellow shaded area. Pedestrians leave the room through an exit corridor which is 5~m long and 2~m wide. The place of safety is set on the right, outside of the exit corridor.}
\label{fig:room_layout}
\end{figure}
Our agent-based model consists of a helping behavior model and a movement model. The helping behavior model computes the probability that a passerby would help an injured person based on the volunteer's dilemma game. The movement model calculates the sequence of pedestrian positions at each simulation time step. Our agent-based model was implemented from scratch in C++.
Each pedestrian is modeled by a circle with radius $r_i = 0.2$~m. $N_0 = 100$ pedestrians are placed in a 10m$\times$10m room indicated by a yellow shaded area in Fig.~\ref{fig:room_layout}. Pedestrians leave the room through an exit corridor which is 5~m long and 2~m wide. The place of safety is set on the right, outside of the exit corridor. There are $N_i$ injured persons who need help escaping the room and $N = N_0 - N_i$ passersby who are ambulant. Some passersby might turn into volunteers who approach and then rescue the injured persons. The number of volunteers is determined based on the volunteer's dilemma game presented in Sec.~\ref{section:VDG}.
The volunteer's dilemma game is updated every second. We assume that playing the volunteer's dilemma game is a macroscopic behavior like goal selection and path navigation~\cite{Zhong_AAMAS2016}. In line with Heli\"{o}vaara \textit{et al.}~\cite{Heliovaara_PRE2013}, each passerby can play the volunteer's dilemma game a few times during the whole simulation period. With an update frequency of once per second, most passersby play the volunteer's dilemma game up to ten times before they leave the room. A passerby can decide whether he will volunteer to rescue an injured person within a range of 3~m. Once a volunteer decides to rescue an injured person, he shifts his desired walking direction vector $\vec e_i$ toward the position of the injured person. Once the volunteer reaches the injured person, he flees to the place of safety with the injured person after a preparation time of 5~s.
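The per-second decision rule described above can be sketched as follows. This is a hypothetical Python rendering of the logic (the actual model is in C++, and the function and variable names here are illustrative):

```python
import math
import random

HELP_RANGE = 3.0  # m, maximum distance at which a passerby considers helping

def decide_to_volunteer(passerby_pos, injured_pos, beta, n_passersby_nearby):
    """One decision of the once-per-second volunteer's dilemma game update.

    A passerby within HELP_RANGE of an injured person volunteers with the
    mixed-strategy probability p_i = 1 - beta**(1/(N-1)).
    """
    dist = math.hypot(passerby_pos[0] - injured_pos[0],
                      passerby_pos[1] - injured_pos[1])
    if dist > HELP_RANGE or n_passersby_nearby < 2:
        return False
    p_i = 1.0 - beta ** (1.0 / (n_passersby_nearby - 1))
    return random.random() < p_i
```

Each passerby evaluates this rule once per second; a passerby who returns True becomes a volunteer and redirects his desired walking direction toward the injured person.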
The pedestrian movement is updated with the social force model in Eq.~(\ref{eq:SFM_acc}). The passersby move with the initial desired speed $v_d = v_{d, 0}=1.2$~m/s and relaxation time $\tau = 0.5$~s, and their speed cannot exceed $v_{\rm max} = 2.0$~m/s. To date, the speed of volunteers rescuing injured persons has often been assumed by modelers, as in the work of Von Sivers~\textit{et al.}~\cite{vonSivers_PED2014,vonSivers_SafetySci2016}. We applied a speed reduction factor $\alpha = 0.5$ to the volunteers rescuing the injured persons, so they move with a reduced desired speed $v_{d} = \alpha v_{d, 0} = 0.6$~m/s. Following previous studies~\cite{Zanlungo_EPL2011,Zanlungo_PRE2014,Kwak_PRE2017}, we discretized the numerical integration of Eq.~(\ref{eq:SFM_acc}) using the first-order Euler method:
\begin{eqnarray}\label{eq:Euler_method}
\vec{v}_i(t + \Delta t) &=& \vec{v}_i(t) + \vec{a}_i(t)\Delta t,\\
\vec{x}_i(t + \Delta t) &=& \vec{x}_i(t) + \vec{v}_i(t + \Delta t)\Delta t.
\end{eqnarray}
\noindent
Here, $\vec{a}_i(t)$ is the acceleration of pedestrian $i$ at time $t$, which can be obtained from Eq.~(\ref{eq:SFM_acc}). The velocity and position of pedestrian $i$ are denoted by $\vec{v}_i(t)$ and $\vec{x}_i(t)$, respectively. The time step $\Delta t$ is set to 0.05~s.
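A single integration step, including the $v_{\rm max}$ speed cap from the setup above, can be sketched as (illustrative Python, not the C++ implementation):

```python
import numpy as np

def euler_step(x, v, a, dt=0.05, v_max=2.0):
    """First-order Euler update of velocity and position,
    with the speed capped at v_max as in the simulation setup."""
    v_new = np.asarray(v, dtype=float) + np.asarray(a, dtype=float) * dt
    speed = np.linalg.norm(v_new)
    if speed > v_max:
        v_new *= v_max / speed   # rescale to the maximum speed
    x_new = np.asarray(x, dtype=float) + v_new * dt
    return x_new, v_new
```

Note that the position is advanced with the updated velocity $\vec{v}_i(t+\Delta t)$, matching Eqs.~(\ref{eq:Euler_method}).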
\section{Results and Discussion}
\label{section:results}
\begin{figure}[!htp]
\centering
\includegraphics[width=\textwidth]{snapshots.pdf}
\caption{Snapshots of helping behavior in a room evacuation scenario: (a) all the injured persons are rescued in case of $N_i = 15$ and $\beta = 0.1$, and (b) some injured persons are not rescued (in the red dotted circle) in case of $N_i = 15$ and $\beta = 0.2$. Open black circles indicate injured persons and full dark circles show volunteers helping the injured persons. Gray circles represent the passersby.}
\label{fig:snapshots}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=8cm]{schematic_phase_diagram.pdf}
\caption{Schematic phase diagram of collective helping behavior in the room evacuation scenario.} \label{fig:phase_diagram}
\end{figure}
Figure~\ref{fig:snapshots} shows snapshots of our agent-based model simulations. Open black circles indicate injured persons and full dark circles show volunteers helping the injured persons. Gray circles represent the passersby. If the helping cost is low, all the injured persons are likely to be rescued, as shown in Fig.~\ref{fig:snapshots}(a). However, if the helping cost is high (i.e., high $\beta$), then some injured persons might not be rescued, see the red dotted circle in Fig.~\ref{fig:snapshots}(b). By systematically changing the value of $N_i$ and $\beta$, we observed different patterns of collective helping behaviors summarized in the schematic phase diagram (see Fig.~\ref{fig:phase_diagram}). For each parameter combination ($N_i$, $\beta$), we performed 30 independent simulation runs.
\begin{figure}[!htp]
\centering
\includegraphics[width=8cm]{total_evac_time_beta.pdf}
\caption{Average total evacuation time $T_{\rm avg}$ as a function of the number of injured persons $N_i$ and $\beta$.}
\label{fig:total_evac_time_beta}
\end{figure}
\begin{figure}[!t]
\centering
\begin{tabular}{cc}
\includegraphics[width=.49\columnwidth]{total_evac_time_injured_avg.pdf}&
\includegraphics[width=.49\columnwidth]{total_evac_time_injured_sd.pdf}\\
\end{tabular}
\caption{Total evacuation time in case of $\beta = 0.1$ against the number of injured persons $N_i$: (a) average $T_{\rm avg}$ and (b) standard deviation $T_{\rm sd}$.}
\label{fig:total_evac_time_injured}
\end{figure}
We also looked into the impact of the cost ratio $\beta$ and the number of injured persons $N_i$ on the total evacuation time $T$. The total evacuation time $T$ is defined as the length of the period from the start of the evacuation to the moment when the last evacuee leaves the exit corridor. We measured the average and standard deviation of the total evacuation time, i.e., $T_{\rm avg}$ and $T_{\rm sd}$, based on the values of the total evacuation time $T$ obtained over 30 independent simulation runs for each parameter combination ($N_i$, $\beta$). Figure~\ref{fig:total_evac_time_beta} indicates that changing the value of $\beta$ does not make a noticeable difference to the average total evacuation time $T_{\rm avg}$. This is seemingly because $\beta$ only affects the probability that a passerby turns into a volunteer. As indicated in Fig.~\ref{fig:total_evac_time_injured}(a), the average total evacuation time $T_{\rm avg}$ increases as the number of injured persons $N_i$ grows. Having more injured persons means that there are more volunteers moving at the reduced desired speed, so the total evacuation time increases due to the volunteers rescuing the injured persons. In addition, the standard deviation of the total evacuation time $T_{\rm sd}$ increases as the number of injured persons $N_i$ grows, in that the difference in evacuation time among evacuees gets larger.
\section{Conclusion}
\label{section:conclusion}
We have numerically investigated helping behavior among evacuees in a room evacuation scenario. Our simulation model is based on the volunteer's dilemma game, which reflects the volunteering cost, and the social force model, which simulates pedestrian movement. We characterized collective helping behavior patterns by systematically controlling the values of the cost ratio $\beta$ and the number of injured pedestrians $N_i$. For low cost ratio values, one can expect that all the injured pedestrians are rescued by volunteers. For high cost ratio values, on the other hand, it was observed that not all the injured persons can be rescued. When the number of injured persons is large, a low value of the cost ratio yields a result in which all the injured pedestrians are rescued. A schematic phase diagram summarizing the collective helping behavior patterns is presented.
A very simple room evacuation scenario has been used in order to study the fundamental role of helping behavior in the evacuation, especially the number of evacuated pedestrians. In this study, the severity of injury is assumed to be the same for all the injured persons, so each injured person can be rescued by a single volunteer. According to the patient triage scale in Singapore~\cite{Parker_2019,SGH_AcuityScale}, patients can be categorized based on the severity of their injuries, and the required number of volunteers differs for different types of injuries. Future work can reflect the impact of patient injury levels on collective helping behavior by assuming a different number of required volunteers for each patient. This study can also be extended from the perspective of game theory. As stated in Diekmann's study~\cite{Diekmann_SF2016}, it can be interesting to assign different values of the collective good $U$ and the volunteering cost $K$ to each passerby. By doing so, we can reflect personal differences in the willingness to volunteer in emergency evacuations. In addition, one can imagine that the value of $U$ changes depending on the numbers of injured persons and volunteers; for instance, the value of $U$ may differ between the case of one injured person and one volunteer and the case of two injured persons and three volunteers. An evolutionary game~\cite{Hao_PRE2011} can also be introduced in order to reflect behavioral changes of passersby influenced by the existence of volunteers, which might be observable in emergency evacuations.
\section*{Acknowledgements}
This research is supported by National Research Foundation (NRF) Singapore, GOVTECH under its Virtual Singapore program Grant No. NRF2017VSG-AT3DCM001-031.
\section{Convergence Testing}
\label{sec:convergence}
We test the convergence of the two scaling relations plotted in Figs.~\ref{fig:sfms} and \ref{fig:mz} in Fig.~\ref{fig:convergence}. We show the weighted median relations for the simulation volumes used in this study (Ref100 and RecHi25), as well as the RefL025N0376 (Ref25) and RefL025N0752 (RefHi25) volumes for additional convergence testing. The Ref25 simulation uses the reference model and standard resolution, but differs from Ref100 by sampling a smaller volume. The RefHi25 volume has the same volume and resolution as RecHi25, but uses the reference subgrid physics parameter values.
We see that for the rSFMS (left panel) the effects of volume sampling and resolution have little influence on the relation over the $\Sigma_\star$ range covered by all the simulations ($\Sigma_\star \lesssim 10^{2.5} \; {\rm M_\odot} \; {\rm pc}^{-2}$), suggesting that this relation converges well.
Comparing the rMZR of Ref100 and Ref25 (right panel), we see that the shape of the relation is preserved reasonably well for the same model and resolution sampling a smaller volume. The turnover is exhibited at the same value of $\Sigma_\star \sim 10^{2.5} \; {\rm M_\odot} \; {\rm pc}^{-2}$, but plateaus at lower abundances in the smaller volume. The close rMZR agreement between Ref100 and Ref25 over the sampled $\Sigma_\star$ range suggests that the influence of volume effects is relatively small. The higher resolution runs display systematically lower abundances (by $\lesssim 0.1$~dex), pointing to a level of numerical influence on the resolved ${\rm O/H}$ values, though we do not consider normalisation of abundances in this work (see \S\ref{sec:zcal} for details). Instead, we consider only the relative abundances, expressed through the slope of the rMZR. We find that at $\Sigma_\star < 10^{2} \; {\rm M_\odot} \; {\rm pc}^{-2}$ the rMZR slopes do not show a systematic difference with resolution, with RefHi25 and RecHi25 respectively exhibiting subtly shallower and steeper slopes than that of Ref100. At $\Sigma_\star > 10^{2} \; {\rm M_\odot} \; {\rm pc}^{-2}$, the comparison becomes less appropriate, as $\Sigma_\star$ values are increasingly biased towards high $M_\star$ galaxies missing from the 25$^3$~cMpc$^3$ volumes. As a result, we can infer little about the convergence properties of the rMZR peak.
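For reference, a weighted median of the kind underlying these relations can be computed generically as follows. This is an illustrative Python sketch; the specific weighting applied to the spaxels is described in the main text, and this function is not drawn from the paper's analysis code:

```python
import numpy as np

def weighted_median(values, weights):
    """Generic weighted median: the smallest value at which the
    cumulative weight reaches half of the total weight."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    cumulative = np.cumsum(weights[order])
    idx = np.searchsorted(cumulative, 0.5 * cumulative[-1])
    return values[order][idx]
```

With uniform weights this reduces to the ordinary median; non-uniform weights pull the median toward the more heavily weighted values.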
\begin{figure*}
\includegraphics[width=0.48\textwidth]{plots/SFMS_convergence.pdf}
\includegraphics[width=0.48\textwidth]{plots/MZ_convergence.pdf}
\caption{The rSFMS (left panel) and rMZR (right panel) for a variety of different simulations, in order to test the influence of numerical convergence and volume selection effects. The Ref100-Ref25 comparison shows the influence of volume effects, whereas the Ref25-RefHi25 and Ref25-RecHi25 show the \textit{`strong'} and \textit{`weak'} convergence properties, respectively \citep[see][]{Schaye15}. We find that the rSFMS converges reasonably well, but the rMZR is slightly steeper at higher resolution, in better agreement with the data.}
\label{fig:convergence}
\end{figure*}
\section{The influence of intrinsic smoothing}
\label{sec:pointlike}
Material in SPH simulations, such as EAGLE, is discretised by mass into \textit{particles}. The spatial extent of a particle is not rigidly defined. For gas particles, an intuitive size to use is that of the SPH kernel used by the simulation to compute hydrodynamical forces, which is computed using the distance to the nearest neighbours. Star particles do not have a similar associated size. As a result, stellar smoothing is chosen in a somewhat \textit{ad-hoc} way, as described in \citet{Trayford17}.
In order to test the influence of this smoothing on our kpc-scale resolved scaling relations, we compare to the extreme case of \textit{no smoothing} (i.e. treating particles as point-like). In this case, particles contribute only to the spaxels they are projected onto. By also producing sets of property maps for galaxies where particles are treated as point-like, we can get an idea of how intrinsic smoothing influences our results.
In Fig.~\ref{fig:pointlike}, we plot the rSFMS and rMZR relations of Figs.~\ref{fig:sfms} and \ref{fig:mz}, respectively. Encouragingly, we find that for both scaling relations the intrinsic smoothing has only a small effect. The reason is that, for the selection criteria we employ (sections~\ref{sec:galsel} and \ref{sec:spaxsel}), our choice of nearest-neighbour particle smoothing is small enough not to affect the spaxel values on $\sim$1~kpc scales. In this sense, the scaling relations are well resolved.
\begin{figure*}
\includegraphics[width=0.48\textwidth]{plots/SFMS_nosmooth_point_randMaNGA.pdf}
\includegraphics[width=0.48\textwidth]{plots/MZgas_nosmooth_point_randMaNGA.pdf}
\caption{The rSFMS as in Fig.~\ref{fig:sfms} (left panel) and the rMZR relation as in Fig.~\ref{fig:mz} (right panel), but now using property maps that treat the EAGLE star and gas particles as `point-like'. \textit{Shaded regions} denote the 16th-84th percentile range on the relations produced for a \textit{point-like} particle representation. We find that the intrinsic smoothing has little influence on both relations.}
\label{fig:pointlike}
\end{figure*}
\section{Spaxel passive fractions}
\label{sec:pfrac}
\begin{figure}
\includegraphics[width=\columnwidth]{plots/pfrac_nosmooth_randMaNGA.pdf}
\caption{The fractions of spaxels that are passive as a function of $\Sigma_\star$. We plot the weighted (see \S\ref{sec:spaxsel}) fraction for Ref100 (blue) and RecHi25 (green) galaxies. The vertical dashed and dotted lines represent the 10-particle resolution limit for star-forming gas in Ref100 and RecHi25 respectively. We see that the fractions are minimal at their respective resolution limits. The increasing trend in $f_{\rm pass}$ with $\Sigma_\star$ above the resolution limit is taken to be a physical effect, whereas the decreasing trend below the threshold is spurious.}
\label{fig:pfrac}
\end{figure}
\setcounter{figure}{0}
\makeatletter
\renewcommand{\thefigure}{D\@arabic\c@figure}
\makeatother
\begin{figure*} \includegraphics[width=\columnwidth]{plots/pfrac_gallery_perc99.pdf} \includegraphics[width=\columnwidth]{plots/pfrac_gallery_perc50.pdf}
\caption{Gallery illustrating the distributions of star-forming gas and stars in Ref100 galaxies. \textit{Orange colour scale} shows \textit{spaxels} shaded from light to dark with increasing $\log_{10} \Sigma_\star$ above 1~${\rm M_\odot \; pc^{-2}}$. \textit{Blue colour scale} shows spaxels shaded from dark to light with increasing $\log_{10}({\rm sSFR}/{\rm yr^{-1}})$ above 10$^{-11}$~${\rm yr}^{-1}$. Pairs of images show the resolved properties made using our standard smoothed maps (left) and from maps where particles are treated as point-like (right). The \textit{left hand panel} shows pairs of images for the 21 most massive EAGLE galaxies, while the \textit{right hand panel} shows 21 galaxies just below the median stellar mass. Comparing the star-forming gas morphologies of the massive and median-mass galaxies gives some insight into the $M_\star$ dependence of the rSFMS seen in Fig.~\ref{fig:splitmass}, revealing a much clumpier gas distribution at high $M_\star$.}
\label{fig:sfmorph}
\end{figure*}
While the rSFMS and rMZR of star-forming spaxels can elucidate the nature of regions of star formation in galaxies, considering the fraction of passive spaxels provides further insight into how widespread these regions are, and their distribution within the galaxies. The difficulty with computing spaxel \textit{passive fractions}, $f_{\rm pass}$, is that they can be biased by the numerical effects of sampling gas particles in a simulation with finite mass resolution. At low surface densities, this originates from shot noise in the gas particle counts. By considering only the regime where the threshold for passivity is resolved, i.e. corresponding to a sufficient number of gas particles, these numerical effects can be mitigated.
The relationship between gas particle counts and surface density can be expressed as
\begin{equation}
\Sigma_{\rm gas} = \frac{N_{\rm parts} m_{\rm gas}}{l_{\rm pix}^2},
\end{equation}
where $N_{\rm parts}$ is the number of particles, $m_{\rm gas}$ is the typical gas particle mass and $l_{\rm pix}$ is the physical spaxel width.
To classify spaxels as active and passive, we must define some threshold level of star formation. We define a spaxel to be active if it is above a local specific SFR of $10^{-11} \, {\rm yr^{-1}}$, matching the $z=0.1$ threshold value used by \citet{Furlong15}. For a given SFR surface density, this corresponds to a stellar mass surface density threshold of:
\begin{equation}
\Sigma_{\rm \star, t} = \frac{\Sigma_{\rm SFR}}{10^{-11} \, {\rm yr^{-1}}}.
\end{equation}
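As a concrete illustration (with round numbers chosen for clarity rather than drawn from the simulations), a spaxel with $\Sigma_{\rm SFR} = 10^{-2} \, {\rm M_\odot \, yr^{-1} \, kpc^{-2}}$ has a threshold of
\[
\Sigma_{\rm \star, t} = \frac{10^{-2} \, {\rm M_\odot \, yr^{-1} \, kpc^{-2}}}{10^{-11} \, {\rm yr^{-1}}} = 10^{9} \, {\rm M_\odot \, kpc^{-2}} = 10^{3} \, {\rm M_\odot \, pc^{-2}},
\]
such that spaxels at this $\Sigma_{\rm SFR}$ are classified as active only if their stellar surface density lies below this value.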
Furthermore, the SFR surface density is closely related to the gas surface density, which is expected due to the pressure-dependent formulation of the Kennicutt-Schmidt star formation law in EAGLE. We interpolate the median relation, $\tilde{\Sigma}_{\rm SFR}(\Sigma_{\rm gas})$, to obtain the $\Sigma_{\rm \star, t}$ value corresponding to a given gas particle count:
\begin{equation}
\Sigma_{\rm \star, t} =
\tilde{\Sigma}_{\rm SFR}\left(\frac{N_{\rm parts} m_{\rm gas}}{l_{\rm pix}^2}\right) \, 10^{11} \, {\rm yr}.
\end{equation}
For a minimum tolerance of 10 particles at standard resolution, this corresponds to a threshold of $\log_{10} \Sigma_{\rm \star} / ({\rm M_\odot \, pc^{-2}}) > 2.86$. At high resolution this limit is a factor of 8 lower ($\log_{10} \Sigma_{\rm \star} / ({\rm M_\odot \, pc^{-2}}) > 1.96$).
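The mapping from particle counts to surface densities can be checked numerically. The sketch below is a minimal illustration, not part of the analysis pipeline: the $\approx$1~kpc spaxel width is an assumption matching the maps used here, and the particle masses are those quoted for Ref100 and RecHi25 in the Simulations section. It shows that the 10-particle limit corresponds to $\Sigma_{\rm gas} \approx 18$ and $\approx 2.3 \; {\rm M_\odot \, pc^{-2}}$ at standard and high resolution, a factor of $\approx$8 apart, consistent with the $\approx$0.9~dex offset between the quoted $\Sigma_\star$ thresholds if $\tilde{\Sigma}_{\rm SFR}(\Sigma_{\rm gas})$ is roughly linear near the limit.

```python
# Sketch: evaluate Sigma_gas = N * m_gas / l_pix^2 at the 10-particle limit.
# Particle masses from the Simulations section; the ~1 kpc spaxel width is an
# assumption matching the property maps used in this work.
M_GAS_REF100 = 1.81e6    # M_sun, Ref100 initial gas particle mass
M_GAS_RECHI25 = 2.25e5   # M_sun, RecHi25 initial gas particle mass
L_PIX_PC = 1.0e3         # spaxel width in pc (~1 kpc, assumed)

def sigma_gas(n_parts, m_gas, l_pix_pc=L_PIX_PC):
    """Gas surface density [M_sun pc^-2] sampled by n_parts particles."""
    return n_parts * m_gas / l_pix_pc**2

sig_ref = sigma_gas(10, M_GAS_REF100)     # ~18.1 M_sun pc^-2
sig_hi = sigma_gas(10, M_GAS_RECHI25)     # ~2.25 M_sun pc^-2
print(sig_ref, sig_hi, sig_ref / sig_hi)  # the mass ratio gives a factor ~8
```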
In Fig.~\ref{fig:pfrac} we plot the passive fraction, $f_{\rm pass}$, as a function of $\Sigma_\star$. We plot the threshold $\Sigma_\star$ values as vertical dotted and dashed lines, corresponding to high and standard resolution, respectively. We see that despite the monotonic increase with $\Sigma_\star$ observed in the rSFMS (Fig.~\ref{fig:sfms}), the passive fraction shows a minimum at the $\Sigma_\star$ that corresponds approximately to the 10 particle resolution limit. The trend of increasing $f_{\rm pass}$ with $\Sigma_\star$ above the threshold is thus deemed to be a physical effect, demonstrating the increasing passivity in the dense central regions of galaxies. Conversely, the trend of decreasing $f_{\rm pass}$ with $\Sigma_\star$ below the threshold is spurious, and driven by the poor numerical sampling of star-forming gas particles.
Combining the portions of the Ref100 and RecHi25 trends that are above the resolution limit, we see that the passive fraction increases with $\Sigma_\star$ for $\Sigma_\star \gtrsim 10^{2} \; {\rm M_\odot} {\rm pc}^{-2}$ (and possibly also at lower $\Sigma_\star$). Taken together, the simulations indicate that the overall passive fraction increases by a factor of $\approx$1.3 over the $2 \lesssim \log_{10}\Sigma_\star/ ({\rm M_\odot} \; {\rm pc}^{-2}) \lesssim 3.5$ range.%
Unlike the rSFMS and rMZR relations, which average over star-forming spaxels only, the calculation of $f_{\rm pass}$ and of the specific SFR includes both active and passive spaxels, and is therefore subject to this resolution effect. When considering average radial profiles affected by the passive threshold resolution criterion, such as specific SFR and $f_{\rm pass}$ profiles, we can indicate the radii where the average stellar mass surface density lies above the threshold value. This approach is employed and discussed in \S\ref{sec:sfmsev}.
\section{Star-forming gas morphologies}
\label{sec:gasmorph}
In Fig.~\ref{fig:sfmorph} we show the morphology of the star-forming gas component in the 21 most massive galaxies in the Ref100 simulation, and the 21 galaxies immediately below the median mass of the $M_\star > 10^{10} {\rm M_\odot}$ sample. Two images are shown for each galaxy, where particles are either smoothed or treated as point-like. We see that the highest-mass galaxies tend to exhibit a much more disrupted and clumpy star-forming gas distribution than the median mass galaxies.
\section{Simulations}
\label{sec:eagle}
In this study we utilise the \textit{Evolution and Assembly of GaLaxies and their Environments} (EAGLE) suite of cosmological, hydrodynamical simulations \citep[][]{Schaye15,Crain15}. We focus on two volumes in particular: the fiducial 100$^3$~Mpc$^3$ box (Ref100) and the higher-resolution recalibrated 25$^3$~Mpc$^3$ box (RecHi25). The Ref100 (RecHi25) simulation resolves baryonic matter with an initial particle mass of $m_g = 1.81\times10^6 {\rm M_\odot}$ ($m_g = 2.25\times10^5 {\rm M_\odot}$). The RecHi25 simulation has a factor 2 higher spatial resolution than Ref100 (see Table~\ref{tab:specs}). A \textit{Planck-1} cosmology is assumed by the simulations and throughout this work \citep{Planck}.
EAGLE follows the co-evolution of baryons and dark matter in each volume using a modified version of the {\sc Gadget-3} TreeSPH code (an update to {\sc Gadget-2}, \citealt{Springel05b}), with updated hydrodynamics and newly parametrised subgrid models for star formation and feedback. Changes to the hydrodynamics calculation include a pressure-entropy formulation \citep{Hopkins13}, artificial viscosity and conduction switches \citep[][]{Cullen10, Price08}, a \citet{Wendland95} C$_2$ smoothing kernel, and the timestep limiter of \citet{Durier12}. These enhancements are described in \citet{Schaye15} and \citet{Schaller15}.
Additional physics models are included for a number of key processes. Star formation is sampled stochastically in gas above a metallicity-dependent density threshold \citep{Schaye04}, using a pressure-dependent adaptation of the observed Kennicutt-Schmidt law \citep{Schaye08} to compute star formation rates. Gas particles are converted wholesale into simple stellar populations, inheriting the initial mass and chemical composition of their progenitor gas.
Stellar mass loss and enrichment follow the implementation of \citet{Wiersma09b} for each star particle, distributing ejected material over the SPH neighbours. Nine elements are tracked explicitly: H, He, C, N, O, Ne, Mg, Si and Fe. Nucleosynthetic yields from winds, core-collapse supernovae and SNIa follow \citet{Portinari98}, \citet{Marigo01} and \citet{Thielemann03}. Photoheating and cooling rates are computed for each of these elements individually \citep{Wiersma09a}. Thermal feedback associated with both star formation \citep{DallaVecchia12} and AGN is implemented stochastically. Feedback parameters were calibrated to reproduce the $z=0.1$ galaxy stellar mass function, mass-size relation and black hole mass--stellar mass relation.
Dark matter halos are identified using a Friends-of-Friends (FoF) algorithm, and their constituent self-bound substructures (subhalos) are identified using the SUBFIND code \citep{Springel01,Dolag09}. For this work, galaxies are taken to be exclusive to individual subhalos, and within a 30~proper~kpc (pkpc) spherical aperture about the galaxy centre to mimic a Petrosian aperture \citep{Furlong15}. Our galaxy centring method uses a shrinking spheres approach, following \citet{Trayford18}.
\section{Evolution of scaling relations}
\label{sec:evo}
\begin{figure}
\includegraphics[width=\columnwidth]{plots/SFMS_evolution.pdf}
\caption{Evolution of the rSFMS for Ref100. \textit{Coloured points} indicate the median values for each redshift, with \textit{thin solid lines} indicating power-law fits (the power-law index is noted in the legend). The relations are constructed for an unweighted, volume-limited sample of galaxies with $M_\star \geq 10^{10} {\rm M_\odot}$, distinct from the weighting scheme used for Fig.~\ref{fig:sfms} (see text for details). The higher-redshift data of \citetalias{Wuyts13} and \citetalias{Abdurrouf18} are included for $z \approx 1-2$, coloured to reflect the comparable redshift range of EAGLE. The galaxy selections of \citetalias{Wuyts13} and \citetalias{Abdurrouf18} are roughly imitated to produce alternative $z=2$ EAGLE relations, and their respective power-law fits are plotted using \textit{dash-dotted} and \textit{dotted} lines (see text for details). The shaded pink region indicates the 1$\sigma$ scatter on the \citetalias{Abdurrouf18} data. Both the slope and normalisation of the resolved SFMS increase with redshift.}
\label{fig:sfmsevo}
\end{figure}
\begin{figure*}
\includegraphics[width=0.95\textwidth]{plots/sSFR_profiles.pdf}
\caption{The evolving overall passive fraction ($f_{\rm pass}$, the fraction of spaxels with $\Sigma_{\rm SFR}/\Sigma_\star < 10^{-11} \, {\rm yr^{-1}}$) profiles for galaxies, using the same redshift values as Fig.~\ref{fig:sfmsevo}. \textit{Solid lines} denote Ref100, while \textit{dashed lines} represent the NoAGN50 simulation. The radii of spaxels are computed for each galaxy in terms of the projected half-mass radius, $R_e$. We only plot the profiles where more than 5 distinct galaxies contribute, and the median $\Sigma_\star$ profile exceeds $\Sigma_{\star, {\rm T}} = 10^{2.86} \, {\rm M_\odot \; pc^{-2} }$, where gas is deemed resolved at the $\Sigma_{\rm SFR}$ threshold for passivity (see Appendix~\ref{sec:pfrac} for details).}%
\label{fig:ssfrprof}
\end{figure*}
The evolution of spatially resolved scaling relations can provide further clues as to how they manifest the integrated relations, and whether resolved relations are more fundamental. For example, evolution of the integrated relations could coincide with evolution of the resolved relations, or simply reflect that galaxies at different redshifts sample different parts of an unevolving resolved relation (e.g. higher $\Sigma_\star$ at higher redshift for a fixed galaxy $M_\star$).
While observed scaling relations will probe different scales and suffer stronger selection effects at higher redshifts, we aim to isolate the physical evolution in these relations. To this end, we keep a consistent imaging methodology: we dispense with the weighting schemes used in previous sections (see section~\ref{sec:galsel}), while retaining the same $\approx1$~kpc spaxel scale, face-on orientation and spaxel selection described in \S~\ref{sec:spaxsel}. However, we also restrict our sample to galaxies of $M_\star \geq 10^{10} {\rm M_\odot}$, in order to highlight physical evolution in an $M_\star$ regime that is captured by the high-redshift data we compare to. This scheme provides a volume-limited galaxy selection down to $M_\star \geq 10^{10} {\rm M_\odot}$, which we analyse for the four simulation outputs at $z=0.1$, 0.5, 1 and 2. In addition, we make use of the standard-resolution 50$^3$~Mpc$^3$ EAGLE simulation where AGN feedback is switched off (NoAGN50) to probe the evolving influence of AGN feedback in some of the following plots, employing the same galaxy and spaxel selection.
\subsection{Evolution of the rSFMS}
\label{sec:sfmsev}
We first consider the rSFMS in the Ref100 simulation. Fig.~\ref{fig:sfmsevo} shows the $z=0.1$ median as in Fig.~\ref{fig:sfms}, but now for an unweighted and volume-limited sample of galaxies with $M_\star \geq 10^{10} {\rm M_\odot}$, and including the $z=0.5$, 1 and 2 trends. A power law is fit to the median points. We also include the high-redshift data of \citet[][hereafter \citetalias{Wuyts13}]{Wuyts13} and \citet[][hereafter \citetalias{Abdurrouf18}]{Abdurrouf18}. These studies rely on high-resolution HST imaging, and broad-band SED fitting on a pixel-by-pixel basis to derive resolved properties of galaxies at $z\approx1-2$ ($0.7 < z < 1.5$ for \citetalias{Wuyts13}, $0.8 < z < 1.8$ for \citetalias{Abdurrouf18}) on kpc scales, assuming a \citet{Chabrier03} IMF. There are differences in the target selection of these studies; \citetalias{Wuyts13} selects galaxies of $M_\star > 10^{10} {\rm M_\odot}$ and with a ${\rm sSFR} > 1/{t_{\rm Hubble}(z)}$, while \citetalias{Abdurrouf18} selects $M_\star > 10^{10.5} {\rm M_\odot}$ face-on spiral galaxies, without an explicit sSFR cut\footnote{Note, however, that a requirement that the galaxy is detected at rest-frame NUV and FUV wavelengths is imposed in \citetalias{Abdurrouf18}.}. %
Comparing to the $z \approx 1$-2 data of \citetalias{Wuyts13}, we find a significant discrepancy between the observationally inferred median trends and both those predicted by EAGLE and those observed by \citetalias{Abdurrouf18}. The \citetalias{Wuyts13} observations show $\approx 0.5\,$(0.7)~dex higher star formation rates than the EAGLE relation at $z=2\,$(1). However, the shape of the relation agrees well, exhibiting a slope intermediate between $z=1$ and 2, consistent with the redshift range spanned by the data. In addition, \citetalias{Wuyts13} note a slight break and shallower slope at high $\Sigma_\star$, which is also seen in the EAGLE data points. Again, the strength of this break in the data is intermediate between the $z=1$ and 2 cases and occurs at a similar value of $\log_{10}\Sigma_\star /({\rm M_\odot \, pc^{-2}}) \approx 2.7$.
The relation of \citetalias{Abdurrouf18} is normalised significantly lower than that of \citetalias{Wuyts13}. This is attributed by the authors to the different galaxy selection; \citetalias{Wuyts13} select preferentially star-forming galaxies, whereas \citetalias{Abdurrouf18} select both star-forming and passive galaxies. Both studies select preferentially massive galaxies, with $M_\star \geq 10^{10} {\rm M_\odot}$ and $M_\star \geq 10^{10.5} {\rm M_\odot}$ for \citetalias{Wuyts13} and \citetalias{Abdurrouf18}, respectively. To assess these selection effects in EAGLE, we plot alternative $z=2$ power-law fits employing selection criteria that roughly mimic \citetalias{Wuyts13} (i.e.~$M_\star > 10^{10} { \rm M_\odot}$, ${\rm sSFR} > 2.08\times 10^{-10} {\rm yr^{-1}}$) and \citetalias{Abdurrouf18} (i.e.~$M_\star > 10^{10.5} {\rm M_\odot}$). We find that the effect of this selection is relatively small in EAGLE. The \citetalias{Abdurrouf18} relation agrees better with the EAGLE relations in terms of normalisation, but shows a stronger turnover, taking place at lower $\Sigma_\star$ values. This leads to a significantly shallower relation for galaxies of $\log_{10}\Sigma_\star / ({\rm M_\odot \, pc^{-2}}) \gtrsim 2.5$. We also show the 1$\sigma$ scatter on the observed \citetalias{Abdurrouf18} relation (pink shaded region), demonstrating that the variance in $\Sigma_{\rm SFR}(\Sigma_\star)$ is comparable to the level of difference between the EAGLE and \citetalias{Wuyts13} relations.
While differences in the normalisation, slope and shape of the resolved SFMS exist between EAGLE and the data at $z \approx 1-2$, these are at a similar level to those between the two available data sets. Determining resolved properties at these redshifts is very challenging, with uncertainties in pixel-by-pixel SED fitting at these redshifts and galaxy selection effects potentially contributing significant systematic effects. It is difficult to disentangle the influence of observational systematics from true inadequacies in the EAGLE simulation. However, a well known issue with galaxy formation models, including EAGLE, is the underprediction of the observed $\dot{M}_\star(M_\star)$ relation at $z \approx 2$ \citep[e.g.][]{Weinmann12, Genel14, Henriques15, Furlong15}. A number of plausible explanations have been suggested for this discrepancy, such as shortcomings in the implementation of feedback and subsequent reincorporation times of galaxies \citep[e.g.][]{Mitchell14}, or the effect of a top-heavy IMF in highly star-forming galaxies in changing measured SFRs \citep[e.g.][]{Hayward11, Zhang18, Cowley18}. %
The EAGLE rSFMS evolves significantly with redshift, both in terms of its normalisation and slope. The typical $\Sigma_{\rm SFR}$ at fixed $\Sigma_{\star}$ increases with redshift over the entire sampled range. This increase is more pronounced at high $\Sigma_{\star}$, yielding a power-law slope ($n$, inset in Fig.~\ref{fig:sfmsevo}) that increases with redshift. We can compare this result with the observed evolution of the integrated main sequence using the equation of \citet{Speagle14}, derived from a compilation of observational results. Qualitatively, the observed integrated main sequence evolution is similar, with increasing normalisation and slope as a function of redshift. However, while the observed integrated main sequence slope decreases by a factor of $\approx 1.5$ (0.76 to 0.52) from $z=2$ to $0.1$ \citep{Speagle14}, the slope of the EAGLE rSFMS decreases by a factor of $\approx 1.8$ (0.89 to 0.49) over the same interval.
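The quoted evolution of the observed integrated main sequence slope follows directly from the \citet{Speagle14} parametrisation, in which the logarithmic slope is approximately $0.84 - 0.026\,t$, with $t$ the age of the Universe in Gyr. The sketch below is a simple check of this arithmetic; the ages of $\approx$3.3 and $\approx$12.5~Gyr at $z=2$ and 0.1 are approximate values for the assumed cosmology.

```python
# Sketch: slope of the integrated SFMS from the Speagle et al. (2014)
# compilation, log SFR = (0.84 - 0.026 t) log M* - (6.51 - 0.11 t),
# with t the age of the Universe in Gyr. The ages below are approximate
# (assumed) values for z = 2 and z = 0.1.
def speagle_slope(t_gyr):
    """Logarithmic SFMS slope at cosmic age t [Gyr]."""
    return 0.84 - 0.026 * t_gyr

T_Z2, T_Z01 = 3.3, 12.5  # Gyr, approximate ages at z = 2 and z = 0.1
s2, s01 = speagle_slope(T_Z2), speagle_slope(T_Z01)
print(s2, s01, s2 / s01)  # ~0.75, ~0.52, declining by a factor ~1.5
```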
The more rapid evolution in the slope for the resolved main sequence could indicate a change in how star formation is distributed within galaxies as a function of redshift. In particular, this may be compatible with the concept of \textit{`inside-out'} formation: where the inner parts of galaxies evolve more rapidly than the outer parts. However, it could also be attributable to the evolving $M_\star$ distribution of galaxies, and the strong $M_\star$ dependence of the rSFMS found in EAGLE (seen in Fig.~\ref{fig:splitmass}).
In order to test this more directly, in Fig.~\ref{fig:ssfrprof} we plot the overall passive fraction ($f_{\rm pass}$) of spaxels as a function of their radius in units of $R_e$, at each redshift and in bins of $M_\star$. These profiles are computed using both active and passive spaxels, and the projected half-mass radii, $R_e$, of \citet{Furlong17}. A spaxel is deemed passive if the specific SFR ($\Sigma_{\rm SFR}/\Sigma_\star$) is below a constant threshold of $10^{-11}$~yr$^{-1}$, equal to the integrated passivity threshold for $z=0.1$ galaxies used by \citet{Furlong15}. A subtlety of computing passive fractions is that, for a spaxel of a given $\Sigma_\star$, the passive threshold should correspond to a sufficiently resolved amount of gas to prevent $f_{\rm pass}$ from being influenced by sampling effects and shot noise. The computation of passive fractions is expounded in Appendix~\ref{sec:pfrac} where it is found that $f_{\rm pass}$ is sufficiently resolved above $\Sigma_{\star,{\rm T}} = 10^{2.86} {\rm M_\odot \; pc^{-2}}$. In Fig.~\ref{fig:ssfrprof}, we only plot profiles where the median $\Sigma_\star$ exceeds $\Sigma_{\star,{\rm T}}$, and more than 5 galaxies contribute.
Given the stringent $\Sigma_\star$ criterion, we only show radii within $0.5 R_e$, where some $f_{\rm pass}$ values are reliable. We first focus on the $10.5 \leq \log_{10}(M_\star/{\rm M_\odot}) < 11$ range in the middle panel, where profiles can be found for each redshift. We find that galaxies are typically more passive towards their centres. In addition, we see that for galaxies in this same $M_\star$ range, the passive fractions generally increase with redshift, with spaxels at smaller radii generally becoming passive earlier.
While this hints towards an \textit{inside-out} picture of galaxy formation, it is not particularly compelling, due to the limited regime in which we can determine passivity on kpc scales in EAGLE. There are also exceptions: for example, in the $10.5 \leq \log_{10}(M_\star/{\rm M_\odot}) < 11$ panel, the $f_{\rm pass}$ in the very centres (leftmost point) of $z=0.5$ galaxies exceeds that of $z=0.1$ galaxies (only 16 and 18 galaxies contribute to this bin, respectively). We note that in the high-$\Sigma_\star$ regime probed here, the role of AGN feedback is key. This is demonstrated by the NoAGN50 profiles showing that, without AGN feedback, the spaxels in the central regions we probe are almost exclusively star-forming ($f_{\rm pass} \approx 0$). These galaxies are also more compact, such that we cannot measure the profiles down to the lowest $R/R_{e}$ values at the kpc-scale resolution of our spaxels.
It is interesting to consider the evolution of resolved star formation in light of the recent results of \citet{Starkenburg18}. They find that, contrary to what is reported observationally, $z=0$ EAGLE (and Illustris) galaxies appear to \textit{quench} in an \textit{outside-in} fashion, in the sense that the simulated sub-SFMS galaxies exhibit more centrally peaked sSFRs. This result is not necessarily in tension with an inside-out \textit{formation} of galaxies overall. Indeed, the general increase in galaxy sizes with cosmic time \citep{Furlong17} implies that galaxies must build up the majority of their stellar mass \textit{inside-out}. In fact, \citet{Clauwens17} explicitly demonstrated that EAGLE galaxies grow inside out, and may even do so somewhat more prominently than observed. Our finding that the star formation efficiency of higher $\Sigma_\star$ regions drops more rapidly with cosmic time than lower $\Sigma_\star$ regions also supports this picture.
The $z=0.1$ $f_{\rm pass}$ profiles of Fig.~\ref{fig:ssfrprof} provide a more direct probe of radial \textit{quenching} in EAGLE galaxies at $z=0.1$. These do not exhibit an outside-in trend but, as noted previously, the stringent criteria to resolve the passive threshold mean that this result cannot be considered representative of the overall population.
\subsection{Evolution of the rMZR}
\label{sec:mzrev}
\begin{figure}
\includegraphics[width=\columnwidth]{plots/MZ_evolution.pdf}
\caption{The median $\Sigma_\star-{\rm O/H}$ (rMZR) relation of Fig.~\ref{fig:mz}, but now plotted at multiple redshifts and constructed using the unweighted, volume-limited sample of $M_\star > 10^{10} \, {\rm M_\odot}$ galaxies used in Fig.~\ref{fig:sfmsevo}. We apply a constant recalibration factor to the EAGLE data (see section~\ref{sec:zcal}). Median points are shown for the Ref100 run, while \textit{solid lines} show the best fit quadratic polynomial between $\log_{10}({\rm O/H})$ and $\log_{10}(\Sigma_\star)$ for each redshift. \textit{Dashed lines} also show quadratic fits to the NoAGN50 rMZR for comparison. There is significant evolution in the shape of the Ref100 relation as well as in the normalisation, particularly for low-$\Sigma_\star$ spaxels.}
\label{fig:mzevol}
\end{figure}
We now explore the evolution of the $\Sigma_\star-{\rm O/H}$, or rMZR, relation. Fig.~\ref{fig:mzevol} shows the Ref100 $z=0.1$ rMZR of Fig.~\ref{fig:mz}, except now using the unweighted, volume-limited sample limited to $M_\star > 10^{10} \, {\rm M_\odot}$. The rMZR at $z=0.5$, 1 and 2 are constructed in the same way. To capture the high-$\Sigma_\star$ turnover identified at $z=0.1$, we fit a second-order polynomial in $\log_{10}({\rm O/H})$ and $\log_{10}(\Sigma_\star)$ to the EAGLE data. Again, we concentrate on relative differences between redshifts and the shape of the relation, and not the absolute normalisation of abundance values, which have been shifted as discussed in \S~\ref{sec:zcal}.
First considering the Ref100 $\Sigma_\star-{\rm O/H}$ relation, we see remarkable evolution in the shape of the relation. At $z=2$, the relation is convex, with metallicity increasing by $0.5$~dex from $\Sigma_\star \sim 10 \, {\rm M_\odot \, pc^{-2}}$ to $\Sigma_\star \sim 10^4 \, {\rm M_\odot \, pc^{-2}}$. At $z=1$ the relation becomes shallower and close to linear, with metallicity increasing by 0.3~dex over the same stellar surface density range. For $z=0.5$ and $z=0.1$ the relation has become concave, and spans $\lesssim 0.2$~dex in abundance. While the gas-phase abundances in $\Sigma_\star \lesssim 10^{3} \, {\rm M_\odot \, pc^{-2}}$ spaxels tend to decrease with increasing redshift, at the highest $\Sigma_\star$ the metallicity remains nearly constant.%
It is notable that the gas-phase abundances evolve at fixed $\Sigma_\star$ in EAGLE. This demonstrates that the redshift dependence of the integrated MZR cannot be fully explained by an evolving galaxy population sampling an unevolving local relation. For $\Sigma_\star \lesssim 10^{3} {\rm \; M_\odot pc^{-2}}$, the lower abundances at higher redshift are ascribed to increased gas fractions, higher outflow rates, and fewer prior stellar generations in galaxies. This has been demonstrated in EAGLE for integrated gas-phase abundances, which evolve roughly as observed even though a remarkably static relation between gas fraction and metallicity exists for most of cosmic time \citep{DeRossi17}.
The evolution at $\Sigma_\star > 10^{3} {\rm \; M_\odot pc^{-2}}$ is perhaps more challenging to understand. It seems likely that the inversion of the $\Sigma_\star > 10^{3} {\rm M_\odot \; pc^{-2}}$ trend at high redshift is related to AGN feedback, as such $\Sigma_\star$ values are only found in the central parts of massive galaxies. To test the influence of AGN directly, we also show the evolution of the NoAGN50 rMZR for comparison (dashed lines). We see that the NoAGN50 rMZR is similarly convex, if slightly steeper, at $z=2$. While the NoAGN50 rMZR becomes shallower between $z=2$ and 0.1, the relation remains convex, with no evidence of a flattening or turnover. This shows explicitly that the Ref100 rMZR inversion can be attributed to AGN. %
In lieu of measurements of the rMZR in high-redshift galaxies, the evolutionary picture provided by EAGLE remains to be tested observationally.
\subsection{The resolved gas fraction-metallicity relation}
\label{sec:gfrac}
Relations that are independent of redshift may point towards fundamental aspects of galaxy evolution. A fundamental three-dimensional relation between the integrated properties of $M_\star$, $Z_{\rm gas}$ and SFR has been identified observationally \citep[e.g.][]{LaraLopez10}, though this appears to break down at high redshift
\citep[$z \gtrsim 2$, e.g.][]{Mannucci10, Salim15}. In EAGLE, \citet{Lagos16} identify a more persistent plane relation by replacing $Z_{\rm gas}$ with the neutral gas fraction, motivated by observational trends. \citet{Matthee18} find that EAGLE predicts smaller scatter when ${\rm \upalpha}$-enhancement is used instead of metallicity. Furthermore, \citet{DeRossi17} show that in EAGLE $Z_{\rm gas}$ and gas-fraction exhibit a strong redshift-independent anti-correlation. Testing the redshift independence of the resolved $Z_{\rm gas}$-$f_{\rm gas}$ relation provides insight into whether or not the integrated relation is borne of more fundamental local relations.
We plot $Z_{\rm gas}$ as a function of the star-forming gas fraction (the ratio of star-forming gas mass to total baryonic mass), $f_{\rm gas}$, at different redshifts in Fig.~\ref{fig:zfgas} for both resolved and integrated values, such that individual kpc-scale spaxels and individual galaxies contribute, respectively. We construct these relations using the RecHi25 simulation, which better reproduces observations of the integrated MZR and for which the redshift-independent integrated trend is recovered \citep{DeRossi17}.
Interestingly, the near redshift-independence exhibited by the integrated relation does not extend to the resolved relation. For spaxels with $f_{\rm gas} \approx 0.4$, for example, the median resolved metallicity increases by $\approx0.3$~dex between $z=2$ and 0.1, while the integrated metallicities differ by less than 0.1~dex. The resolved relations show a negative trend, but one that is shallower than that of their integrated counterparts at each redshift.
\begin{figure}
\includegraphics[width=\columnwidth]{plots/gasfrac_smooth.pdf}
\caption{The spatially resolved median gas-phase ${\rm O/H}$ as a function of $f_{\rm gas}$ for individual spaxels from RecHi25 galaxies, plotted for $z=0.1$, 0.5, 1 and 2 (different colours). \textit{Solid} and \textit{dashed lines} denote the resolved and integrated relations respectively. The resolved relation does not show the redshift independence of its integrated counterpart.}
\label{fig:zfgas}
\end{figure}
The anti-correlation between $Z_{\rm gas}$ and $f_{\rm gas}$ is intuitive: as the gas mass increases relative to the stellar mass, the stellar ejecta that enrich the ISM become more dilute. This scenario is true of a simple \textit{`closed-box'} \citep[e.g.][]{Schmidt63, Tinsley80} model for metal enrichment, but this is clearly not a representative model of a galaxy experiencing continuous inflows and outflows. Indeed, \citet{DeRossi17} show that the overall effective yields measured in the ISM of EAGLE galaxies are below the intrinsic stellar yields, particularly in the AGN dominated high-$M_\star$ regime, indicative of the role of gas flows. Instead, a \textit{dynamic equilibrium} model of metal enrichment \citep[e.g.][]{Erb06, Dave11}, where inflows and outflows balance due to self-regulating feedback processes, likely provides a more suitable description for EAGLE galaxies. Such a model reproduces the tight, unevolving trend between the global gas fractions and gas-phase O/H in EAGLE galaxies remarkably well \citep{DeRossi17}. %
Neither model provides a fitting description of the evolution of the rMZR. This points to the importance of \textit{non-local} aspects of feedback and enrichment processes within galaxies. With no physical barriers separating projected 1~kpc$^2$ regions of a galaxy, stellar feedback causes enriched gas to be redistributed within a galaxy. While the galaxy as a whole may be in a state of near dynamic equilibrium, the non-local nature of enrichment and feedback tends to weaken the anti-correlation between $f_{\rm gas}$ and O/H on these scales. %
\subsection{Interpreting the evolution of resolved relations}
\label{sec:interp}
An interesting question from the analysis of Figs.~\ref{fig:sfmsevo}-\ref{fig:zfgas} is whether the mode of star formation evolves through cosmic time. The relative importance of star formation in the inner and outer parts of galaxies is one aspect of this. The shallowing of the rSFMS with decreasing redshift (Fig.~\ref{fig:sfmsevo}) indicates that galaxy centres were more star forming in the past relative to outer regions, building up their bulges. Tentative evidence of this is seen in Fig.~\ref{fig:ssfrprof}, where the central regions of galaxies are found to become passive earlier on in cosmic history. We note that this is not necessarily in contradiction with the recent findings of outside-in \textit{quenching} in $z=0$ EAGLE galaxies by \citet{Starkenburg18}. It may be enlightening in future work to test if imitating optical observations \citep[as in][]{Trayford17} and their selection effects, including pollution by ionisation sources not associated with star formation, allows the same qualitative results to be recovered.
The evolution of the rMZR (Fig.~\ref{fig:mzevol}) provides further insight, where the steep slope in the highest density regions ($\Sigma_\star > 10^{3} {\rm M_\odot \; pc^{-2}}$) at $z=2$ inverts to become a distinct downturn by $z=0.1$. These high-density spaxels preferentially probe the inner regions of massive galaxies, a regime where the behaviour of AGN feedback is particularly important. While the turnover in the $z=0.1$ trend is indicative of AGN efficiently removing gas from galaxy centres (see section~\ref{sec:mzr}), the convex trend at $z=2$ suggests ongoing star formation in the centres of massive galaxies, despite the presence of AGN. This can be ascribed to a smaller fraction of super-massive black holes having grown to the \textit{AGN regulated} phase by this time, where AGN feedback can efficiently disrupt star formation \citep{McAlpine18}.
\section{Introduction}
\label{sec:intro}
\vspace{1.5ex}
The mere existence of \textit{`scaling relations'} (i.e. trends between observables and/or inferred physical properties) amongst widely separated galaxies suggests a commonality in their formation processes. Scaling relations encode information about key physical processes in galaxies, and provide touchstones for theoretical models of galaxy formation and evolution.
One such scaling relation is the so-called \textit{star-forming main sequence} (SFMS, \citealt{Noeske07}); a relatively tight\footnote{With a spread of $\approx$0.2-0.35~dex.}, evolving relationship between the integrated stellar masses ($M_\star$) and star formation rates (SFR) of actively star-forming galaxies, observed from $z=0-6$ \citep[e.g.][]{Brinchmann04, Rodighiero11, Elbaz11, Speagle14}. This relation is typically well fit by a power law, ${\rm SFR} \propto M_\star^{n}$. The normalisation has been found to evolve dramatically, with typical SFRs a factor $\approx 20$ higher at $z=2$ relative to the present day \citep[e.g.][]{Whitaker12}. The evolution of the index (or `slope') is less clear, but typically values of $n=0.6-1$ are recovered \citep[for a compilation of observations, see e.g.][]{Speagle14}.
The SFMS tells us about the ongoing growth of galaxies in the Universe, and its existence is suggestive of self-regulation, where the inflow and outflow of gas are balanced by the influence of feedback processes \citep[e.g.][]{Schaye10, Bouche10, Dave11, Lilly13, Tachella16}. A possible flattening of the high-mass end has been explained as a physical effect due to the influence of quenching processes \citep[e.g.][]{Keres05, Dekel06, Croton08}, or simply as a manifestation of higher bulge-to-disc ratios in massive galaxies \citep[e.g.][]{Abramson14, Schreiber15}. However, \citet{Whitaker15} suggest the flattening may be spurious, due to a poor separation of the active and passive populations.
The relationship between $M_\star$ and gas-phase metallicity (MZR) complements the SFMS, and has also been widely studied using observations of local galaxies \citep[e.g.][]{Tremonti04, Kewley08} and those at high redshift \citep[e.g.][]{Erb06, Henry13, Maier14, Zahid14}. Metallicity is generally found to increase with $M_\star$ at low masses, but to plateau or turn over for galaxies with $\log_{10}(M_\star /{\rm M_\odot}) \gtrsim 10.5$. Metals are considered to play a key physical role in the star formation process, as more enriched gas is more efficient at cooling, and metals deposited in dust provide sites for molecule formation in the ISM. The ISM is enriched by previous stellar generations and can be ejected by galactic winds, and thus gas-phase metallicity ($Z_{ \rm gas}$) also encodes information about the star formation and outflow history of a galaxy.
The multivariate relationship between $M_\star$, SFR and $Z_{\rm gas}$ provides insight into key aspects of galaxy formation. \citet{Mannucci10} and \citet{LaraLopez10} identify a \textit{`fundamental'} relationship between these properties, showing an anti-correlation between metallicity and the residuals of the $M_\star$-SFR relation. This relation has been found to exhibit little evolution \citep[e.g.][]{Stott13, Hunt16}, though some studies have observed this to break down at high redshift \citep[$z \gtrsim 2$,][]{Mannucci10,Salim15}. The strong dependence of metallicity on \textit{instantaneous} SFR (as opposed to the total integrated star formation) lends support to the model of galaxies as existing in a self-regulated dynamic equilibrium of inflowing and outflowing gas \citep[e.g.][]{Finlator08, Dave12, DeRossi17, Torrey17}.
While scaling relations such as the SFMS and MZR are typically devised in terms of integrated galaxy properties, they are regulated by the star formation and feedback processes taking place on sub-galactic scales. As such, spatially resolving the properties of \textit{regions} of galaxies may provide further insight into how the integrated relations arise. \citet{Wuyts13} use resolved HST imaging to find a relation between the surface density of stars ($\Sigma_\star$) and star formation ($\Sigma_{\rm SFR}$) in $0.7 \leq z < 1.5$ galaxies analogous to the SFMS, suggesting that the relationship between stellar mass and star formation rate holds down to 1~kpc scales: a resolved star-forming main sequence (rSFMS). %
However, the profiles of galaxies are found to be inherently clumpy for these scales and redshifts \citep{ForsterSchreiber11, Genzel11, Wisnioski15}, so it was not obvious that such a relation would hold for the Hubble-type galaxies dominant locally.
Integral Field Unit (IFU) instruments can be used to measure resolved scaling relations in local galaxies. IFU surveys have now yielded spatially resolved spectroscopy for considerable galaxy samples in the local Universe \citep[e.g.][]{Sanchez12,Bryant15,Bundy15}. IFUs can be used to spatially map star formation via optical proxies such as H$\alpha$ luminosity \citep[e.g.][]{Kennicutt98}, gas phase metallicity through emission line ratios \citep[e.g.][]{Sanchez17}, and stellar mass through SED fitting techniques \citep{Sanchez16, Goddard17}. %
Multiple IFU studies of local ($z < 0.1$) galaxies analysing the rSFMS relation \citep{CanoDiaz16,Hsieh17,Medling18} and the resolved MZR \citep[rMZR, e.g.][]{RosalesOrtega12, BarreraBallesteros17} are now available.
While integrated scaling relations have been widely used to test and calibrate cosmological galaxy formation models with statistically significant galaxy populations, resolved scaling relations have not yet been used for these purposes. Resolved relations could be particularly constraining for large-volume hydrodynamical simulations, as galaxy properties arise from the local hydrodynamical calculation\footnote{As opposed to semi-analytic models where galaxy profiles are typically imposed.}, and subgrid models for unresolved physics. It is possible for a simulated galaxy population to reproduce integrated relations, while failing to yield galaxies with realistic internal structures \citep[e.g.][]{Crain15}. Spatially resolved scaling relations are thus useful diagnostics of both the structure and demographics of simulated galaxy populations, complementing population comparison studies of galaxy morphology \citep[e.g.][]{Croft09, Sales10, Snyder15, Lagos17a, Dickinson18, Trayford18}, property gradients \citep[e.g.][]{Cook16, Taylor17, Tissera18} and sizes \citep[e.g.][]{Mccarthy12, Bottrell17, Furlong17, Genel18}.
If simulations can broadly match both integrated and resolved observations simultaneously, then they may provide insight into their physical origin. Simulations afford the opportunity to follow virtual galaxies through cosmic time and assess how star formation, feedback, environmental effects and angular momentum evolution build their structural properties. While these relationships are likely complex in detail, they may yield effects that are conceptually simple. For example, simulations can show whether `inside-out' star formation is consistent with the evolution of the resolved main sequence, as is hinted at by empirical models \citep[e.g.][]{Abdurrouf18, Ellison18}.
In this study, we explore spatially resolved scaling relations using the EAGLE simulation suite \citep{Schaye15,Crain15,McAlpine16}. By mapping gas and stellar properties of simulated galaxies, we assess how well the simulation reproduces the local relations and how they become established. We focus in particular on the rSFMS and rMZR. In section~\ref{sec:eagle} we briefly describe the aspects of the simulation most relevant for this study. Section~\ref{sec:obs} then describes how we construct resolved property maps and attempt to emulate the selection effects of contemporary IFU studies. We present results in section~\ref{sec:res}, comparing the relations resolved on 1~kpc scales to observations at $z=0.1$, and showing how galaxies of different mass contribute to the overall relation. Section~\ref{sec:evo} then focuses on evolution, showing how the kpc-scale relations vary with redshift, as well as how this relates to the integrated relation and galaxy profiles. Finally, we summarise our conclusions in section~\ref{sec:summary}. Unless stated otherwise, all distances are in proper (as opposed to comoving) coordinates.
\subsection{Metallicity calibration}
\label{sec:zcal}
Estimating metallicities observationally is highly challenging, and metallicity calibrations are subject to considerable uncertainty. For our study, we separate two broad classes of systematic uncertainty. One is absolute calibration, i.e. how well the overall abundance of heavy elements in stars and gas can be inferred. Another pertains to the relative calibration, i.e. how well observable metallicity indicators trace the underlying metallicity variation between, or within, galaxies. While a detailed discussion of metallicity calibration is beyond the scope of this study \citep[see instead e.g.][]{Kewley08}, we discuss some of the most pertinent aspects here.
The EAGLE simulations use the nucleosynthetic yields of \citet{Portinari98} and \citet{Marigo01} for stellar evolution and core-collapse supernovae, as well as the SNIa yields of \citet{Thielemann03} \citep[with some modification, see][]{Wiersma09b}, which dictate the absolute abundances of chemical elements in gas and stars. As discussed in \citet{Wiersma09b}, even for a fixed IMF, the yields are uncertain at a 0.3~dex level. The simulations yield good agreement with integrated mass-metallicity relations for stars and gas \citep{Tremonti04,Zahid14, Gallazzi09} for high stellar masses ($M_\star > 10^{10} {\rm M_\odot}$ for Ref-100 and $M_\star > 10^{9} {\rm M_\odot}$ for Recal-25, see \citealt{Schaye15}). Assuming $12 + \log_{10}({\rm O/H})_\odot = 8.69$ \citep{Allende01}, typical metallicities become super-solar for $\log_{10}(M_\star/{\rm M_\odot}) \gtrsim 9.5$, saturating at around three times the solar value.
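For concreteness, the conversion between the $12 + \log_{10}({\rm O/H})$ abundance scale and the linear solar-relative abundance implied by this normalisation can be sketched as follows (a minimal illustration; the function name and interface are ours, not part of the EAGLE pipeline):

```python
import numpy as np

def oh_relative_to_solar(twelve_log_oh, solar=8.69):
    """Return (O/H) / (O/H)_sun given an abundance 12 + log10(O/H).

    The solar zero-point 12 + log10(O/H)_sun = 8.69 is the value
    quoted in the text (Allende Prieto et al. 2001).
    """
    return 10.0 ** (np.asarray(twelve_log_oh) - solar)
```

Under this convention, the $\approx$3$\times$ solar saturation quoted above corresponds to $12 + \log_{10}({\rm O/H}) \approx 9.17$.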
\citet{Sanchez16} present a spatially resolved gas-phase mass-metallicity relation and explore a number of metallicity calibrations using CALIFA, with the same pipeline also used to derive the relation in MaNGA data \citep{BarreraBallesteros17}. Fig.~3 of \citet{Sanchez16} demonstrates the integrated mass-metallicity relation for a multitude of calibrators, but finds values systematically lower than those of both EAGLE and other observational studies by $> 0.2$~dex for all calibrators. Part of this discrepancy may be attributable to anchoring the metallicity values to empirical ionisation parameter measurements as opposed to photoionisation models \citep{Sanchez16}.
In order to compare to IFU data, we therefore apply a constant shift to the EAGLE $\log_{10}({\rm O/H})$ values of $-0.6$~dex, which represents the difference between the high-$M_\star$ plateau for the $R_{23}$ calibration of \citet{Sanchez16} and that of \citet{Zahid14}. The motivation for this recalibration is that it brings the absolute calibration of the EAGLE data into agreement with IFU studies for the same metallicity indicator. However, we follow recent theoretical work on metallicity evolution in galaxies and put no emphasis on absolute metallicities, given the large uncertainties \citep[e.g.][]{Torrey17}. Instead, we only compare the shape of the relation, and evolution in metallicity in a relative sense. \citet{Sanchez16} show that despite the shift in absolute calibration, the majority of indicators yield integrated mass-metallicity relations with very similar shapes. This gives us some confidence in the validity of the relative calibration for a comparison between the shape of the resolved mass-metallicity relation in EAGLE and the data.
\section{Comparing to observations}
\label{sec:obs}
\begin{table}
\caption{Specifications of the two primary EAGLE simulations and the three contemporary IFU surveys considered in this work. EAGLE provides two discrete snapshots at comparable redshifts. As these are predominantly the same galaxies $\approx 1$~Gyr apart, we quote only the number of coeval galaxies at $z=0$. The chosen surveys all resolve physical scales of $\approx$1~kpc, well matched to the resolution of EAGLE.}
\label{tab:specs}
\begin{center}
\begin{tabular}{lccc}
\hline
& Min. scale$^{\rm a}$ & Redshifts & Selection$^{\rm b}$ \\
& (kpc) & ($z$) & (criterion $\vert$ number)\\
\hline\hline
Ref100 & $\approx 0.7$ & $[0,0.1]$& $M_\star > 10^{9}{\rm M_\odot}$ $\vert$ $10^{4.1}$\\
RecHi25 & $\approx 0.35$ & $[0,0.1]$& $M_\star > 10^{8.1}{\rm M_\odot}$ $\vert$ 620\\
\hline
CALIFA & $0.8-1.0$ & $0.005-0.03$ & $45^{\prime\prime} < {D_{25}}^{\rm c}<80^{\prime\prime}$ $\vert$ 600\\
MaNGA & $1.3-4.5$ & $0.01-0.15$ & $M_\star > 10^{9}{\rm M_\odot}$ $\vert$ $10^{4}$\\
SAMI & $1.1-2.3$ & $0.004-0.095$ & $M_\star > 10^{8.2}{\rm M_\odot}$ $\vert$ 3400\\
\hline
\end{tabular}
\end{center}
{\footnotesize $^{\rm a}$ For EAGLE, this is the Plummer-equivalent maximum gravitational softening.\\
$^{\rm b}$ %
Observed selection functions are not complete in mass. More detail on galaxy selection can be found in \S~\ref{sec:select}.\\
$^{\rm c}$ $D_{25}$ is the $r$-band 25 mag arcsec$^{-2}$ isophotal diameter.
}
\end{table}
In this study we focus on comparing with the results from two contemporary IFU surveys in particular: CALIFA \citep{Sanchez12} and MaNGA \citep{Bundy15}. Results from these campaigns are particularly well-suited for comparison with EAGLE\footnote{Other IFU surveys, such as SAMI \citep{Bryant15}, are similarly compatible. While we compare with certain published resolved scaling relations in this study, future comparison with such surveys would be informative.}, as detailed below. Some specifications of the surveys and simulated data are listed in Table \ref{tab:specs}.
These surveys sample galaxies on spatial scales of $\approx $1~kpc. This scale is well-matched to the standard resolution limit in EAGLE, where structure formation is suppressed on scales $\lesssim 0.7$~kpc due to gravitational smoothing.
MaNGA provides a sample of $\sim 10^4$ galaxies with masses $M_\star > 10^9 {\rm M_\odot}$, comparable to the $\approx 13,000$ galaxies above this mass limit in the Ref100 simulation volume at both $z = 0$ and $z=0.1$. SAMI and CALIFA offer smaller samples, selecting more local sets of galaxies using a $M_\star > 10^{8.2} {\rm M_\odot}$ mass cut and an apparent size selection respectively. As $10^{8.2} {\rm M_\odot}$ is equivalent to $\sim 100$ star particles at standard EAGLE resolution, such galaxies are likely insufficiently resolved in the simulation. Resolution effects and convergence are explored directly in Appendix~\ref{sec:convergence}. We discuss emulating galaxy and \textit{`spaxel'} selection effects in section~\ref{sec:select}.
\subsection{Property maps}
\label{sec:maps}
This study requires spatially resolved maps of physical properties of EAGLE galaxies. We employ the publicly available {\tt py-sphviewer} code \citep{BenitezLambay15}, which uses adaptive kernel smoothing to create smooth two-dimensional imaging from sets of discrete three-dimensional particles. Galaxies are mapped individually, extracting material for a given subhalo and applying a 30~pkpc spherical aperture about the galaxy centre. The maps are made at an intrinsic $256\times256$ \textit{`spaxel'} resolution for a $60\times60$~kpc field and in three projections: simulation $xy$-coordinates, face-on and edge-on. The face-on and edge-on orientations are defined via the primary baryonic rotation axis. Galaxy centering and orientation follow the procedures detailed in \citet{Trayford17}.
When considering spatially resolved properties, the manner in which the particles are smoothed may influence our results. In the adaptive smoothing case, a smoothing length is applied to each particle. For the gas, the obvious choice of smoothing length is the SPH kernel used in the simulation's hydrodynamical calculations. Stars, however, are not subject to hydrodynamic forces. As no physical scale is precomputed for adaptive smoothing between stars, smoothing lengths are calculated for each star particle to enclose a fixed number of neighbours. The choice of this number is a compromise between mitigating both granularity in the stellar profiles and washing out spatial trends. We compute stellar smoothing lengths based on the 64th nearest neighbour, matching the radiative transfer imaging described in \citet{Trayford17}.
In order to test the influence of smoothing, we also make sets of images where the smoothing of stars and gas are set to zero, such that a particle contributes solely to the spatial bin in which it resides. By comparing results with and without smoothing, we can test whether the choice of smoothing scale is important. Suffice to say, the influence of intrinsic smoothing of the gas and stellar material on our results is small, but explored in more detail in Appendix~\ref{sec:pointlike}.
The property maps made for the stars and star-forming gas in each galaxy are listed in Table~\ref{tab:prop}. These properties are either weighted or mapped directly (in the case of masses themselves and star formation rates), and are stored for all spaxels with non-zero mass. For this study, all maps are re-binned onto a factor 4 coarser grid such that the spaxels sample kpc scales (0.9375~kpc). This mapping scheme is applied to a selection of EAGLE galaxies, as detailed below.
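The factor-4 coarsening described above can be sketched as a block operation on the intrinsic maps, with additive quantities (mass, SFR) summed over each $4\times4$ block and weighted quantities combined as weighted means (a simplified illustration; the array layout and function names are our assumptions, not the pipeline's API):

```python
import numpy as np

def rebin_sum(m, f=4):
    """Coarsen a square map by factor f, summing each f x f block.

    E.g. a 256x256 map over a 60 kpc field becomes 64x64 spaxels of
    60/64 = 0.9375 kpc, as described in the text.
    """
    n = m.shape[0] // f
    return m.reshape(n, f, n, f).sum(axis=(1, 3))

def rebin_weighted(values, weights, f=4):
    """Coarsen a weighted map (e.g. metallicity weighted by mass or SFR)."""
    wsum = rebin_sum(weights, f)
    vw = rebin_sum(values * weights, f)
    # avoid division by zero in empty spaxels
    return np.where(wsum > 0, vw / np.maximum(wsum, 1e-30), 0.0)
```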
\begin{table}
\begin{center}
\caption{Summary of the property maps produced for EAGLE galaxies. Maps are produced for each galaxy in three orientations with 256x256 \textit{`spaxels'} over a 60x60~kpc field of view. Unless stated otherwise, the maps are re-binned to a spatial resolution of $\approx$1~kpc (0.9375~kpc) for comparison to data. The properties used in this work are stellar mass, SFR and the star-forming (SF) gas-phase elemental abundances of O and H. The last column indicates how properties are aggregated within a spaxel, with \textit{`weighted average'} indicating the stellar mass and SFR weighted mean values respectively for stars and gas.}
\label{tab:prop}
\begin{tabular}{lccc}
\hline
& Stars & SF gas & Spaxel value\\
\hline
Mass & \checkmark & \checkmark & Sum \\
SFR & - & \checkmark & Sum \\
Density & - & \checkmark & Mean\\
Metallicity & \checkmark & \checkmark & Weighted average\\
Abundances$^{\rm a}$ & \checkmark & \checkmark & Weighted average\\
LoS velocity & \checkmark & \checkmark & Weighted average\\
Age & \checkmark & - & Weighted average\\
\end{tabular}
\end{center}
{\footnotesize $^{\rm a}$ Mapped elements are H, C, N, O and Fe }
\end{table}
\subsection{Selection effects}
\label{sec:select}
Selection effects are a key consideration for a robust comparison to data. For spatially resolved surveys, these effects are induced by the selection of both the galaxies and the spaxels that sample them. We detail our attempts to imitate observational selection effects below.
\subsubsection{Galaxy selection}
\label{sec:galsel}
The target selection modelling described in this section is used for comparison to local IFU surveys in \S\ref{sec:res}. Galaxy selection of IFU surveys is generally more complex than that of imaging surveys, which are typically complete down to some stellar mass or flux limit.
While MaNGA employs a lower mass limit, the $M_\star$ selection functions of both surveys differ from a simple mass cut. In MaNGA the galaxy selection is designed to be uniform in $\log_{10}(M_\star)$ for $\log_{10}(M_\star/M_\odot) > 9$, in order to sample galaxies across a range in stellar mass \citep{Bundy15}. The MaNGA distribution stays approximately uniform in $\log_{10}(M_\star)$ up to $\log_{10}(M_\star/M_\odot) \approx 11.3$, above which the number density drops rapidly \citep{Wake17}. A uniform selection reduces the number of spaxels contributed by lower-mass objects and boosts that of higher-mass objects relative to a $M_\star$-complete survey.
An intuitive way to reproduce a uniform selection with EAGLE would be to select galaxies with a probability inversely proportional to the galaxy stellar mass function, i.e. $P(M_\star) \propto \phi(M_\star)^{-1}$. However, the low number counts of high-$M_\star$ galaxies in EAGLE, an inherently volume-limited sample, means that a very small fraction of galaxies would tend to be selected.
To enable better utilisation of the available EAGLE galaxy population, we instead emulate a flat galaxy selection in the $z=0.1$ Ref100 population using a hybrid method; we stochastically select galaxies to be uniform in $\log_{10}(M_\star)$ between $9 \leq \log_{10}(M_\star/{\rm M_\odot}) < 10$, while all galaxies of $\log_{10}(M_\star/{\rm M_\odot}) \geq 10$ are selected, and their spaxel contributions are weighted appropriately to mimic a uniform selection at high $\log_{10}(M_\star)$. %
Together, this provides a sample of 9284 galaxies, comparable to that of MaNGA. %
The fiducial weighting scheme, $w$, applied for each galaxy is then
\begin{equation}
\label{eq:w100}
w_{\rm 100}(M_\star) =
\begin{cases}
\frac{\phi_{100}(10^{10} \, {\rm M_\odot})}{\phi_{100}(M_\star)} & \text{if $10^{10} \leq M_\star/{\rm M_\odot} < 10^{11.3}$}\\
\frac{\phi_{100}(10^{10}\, {\rm M_\odot})}{\phi_{100}(10^{11.3}\, {\rm M_\odot})} & \text{if $M_\star/{\rm M_\odot} \geq 10^{11.3}$},
\end{cases}
\end{equation}
where $\phi_{100}(M_\star)$ is the Ref100 galaxy stellar mass function, binned in $\log_{10}(M_\star/{\rm M_\odot})$. For the 35 galaxies above the upper mass limit of $\log_{10}(M_\star/{\rm M_\odot}) = 11.3$ the weighting value saturates at $w_{100} \approx 25$, and the galaxy contribution falls away with the galaxy stellar mass function. For a 25~Mpc simulation box like RecHi25, the volume is 64 times smaller, so all galaxies with $\log_{10}(M_\star/{\rm M_\odot}) \geq 9$ are selected (261 systems in RecHi25), with weighting %
\begin{equation}
\label{eq:w25}
w_{\rm 25}(M_\star) = \frac{\phi_{25}(10^{10})}{\phi_{25}(M_\star)},
\end{equation}
where $\phi_{25}(M_\star)$ is the RecHi25 galaxy stellar mass function.
As no galaxies are found with $\log_{10}(M_\star/{\rm M_\odot}) > 11.3$ in the 25~Mpc volumes, no saturation value for $w_{25}$ is enforced. Throughout, the Ref100 and RecHi25 samples are treated separately.
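The weighting of Equations~\ref{eq:w100} and \ref{eq:w25} can be sketched as follows, assuming a binned galaxy stellar mass function is available as arrays (a schematic illustration; the interpolation of $\phi$ and all names are our choices, not necessarily the procedure used to produce the published weights):

```python
import numpy as np

def w100(log_mstar, log_m_bins, phi, log_m_sat=11.3):
    """Per-galaxy weight mimicking a selection uniform in log10(M*).

    log_mstar  : log10(M*/Msun) of a galaxy with log10(M*) >= 10
                 (below 10 the uniform selection is done
                 stochastically, so the weight is unity there).
    log_m_bins : bin centres of the mass function, log10(M*/Msun).
    phi        : galaxy number density per bin (normalisation cancels).
    log_m_sat  : mass above which the weight saturates (11.3 in eq. 1).
    """
    phi_of = lambda lm: np.interp(lm, log_m_bins, phi)
    lm = min(log_mstar, log_m_sat)  # saturate the weight above 11.3
    return phi_of(10.0) / phi_of(lm)
```

With a mass function falling by a factor $\sim$25 between the $10^{10}$ and $10^{11.3}\,{\rm M_\odot}$ pivots, this reproduces the saturation value $w_{100} \approx 25$ quoted in the text.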
In addition to this fiducial weighting scheme, we alternatively weight the same sample of EAGLE galaxies to represent the CALIFA mass distribution \citep[e.g.][]{GarciaBenito15}. This mass distribution is peaked at $\log_{10}(M_\star/{\rm M_\odot}) \approx 10.8$, such that pixels from these galaxies contribute more weight than higher- and lower-$M_\star$ galaxies, relative to the MaNGA weighting scheme. We show the relations arising from this alternative weighting scheme in a number of the plots of \S\ref{sec:res} and Appendix~\ref{sec:pfrac}. We note that we do not replicate the projected size selection used to produce the CALIFA galaxy sample.
Alongside the $M_\star$ target selection, an effective cut in specific SFR is also typically employed by IFU studies \citep[e.g.][]{CanoDiaz16, Hsieh17}. This allows the same galaxies to contribute to the rSFMS as contribute to the integrated SFMS, but also has the practical motivation of isolating galaxies with primary ionisation mechanisms attributable to star formation (rather than shocks or AGN). Given the known $-0.2$~dex offset of the EAGLE integrated SFMS from typical observational studies \citep{Furlong15}, employing the stringent observed cut bisects the integrated SFMS. We instead use the $\log_{10}({\rm sSFR/yr^{-1}}) > -11$ cut of \citet{Furlong15} to isolate \textit{`star-forming'} EAGLE galaxies\footnote{We note that the sSFR criterion may differ for rMZR studies \citep[e.g.][]{Sanchez17}, but as we find the influence is minimal on the EAGLE rMZR, we use the same cut for consistency.}.
\subsubsection{Spaxel selection}
\label{sec:spaxsel}
With the galaxy selection and weighting scheme in place, we now consider which spaxels contribute to the scaling relations. For the rSFMS and rMZR studied here, we require spaxels to sample both star-forming gas and stellar populations. To roughly mimic the observed relations, we employ $\dot{\Sigma}_\star$ and $\Sigma_\star$ limits.
Observationally, star formation rates are typically inferred from (dust-corrected) H$\alpha$ luminosities, $L_{\rm H\alpha}$, via a scaling of
\begin{equation}
\dot{M}_\star = 7.49\times 10^{-42} \; {\rm M_\odot} \; {\rm yr^{-1}}\, f_{\rm IMF} \frac{L_{\rm H\alpha}}{\rm erg \; s^{-1}} ,
\end{equation}
where $f_{\rm IMF}$ is a factor accounting for differences between IMF assumptions. For the \citet{Chabrier03} IMF assumed by EAGLE, we use $f_{\rm IMF} = 1.57$ \citep[e.g.][]{Lacey16}.
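As a usage illustration of this scaling (coefficient and $f_{\rm IMF}$ taken directly from the text; the helper name is ours):

```python
def sfr_from_halpha(L_halpha_erg_s, f_imf=1.57):
    """SFR in Msun/yr from a (dust-corrected) Halpha luminosity in erg/s.

    Mirrors the scaling in the text; f_imf = 1.57 is the factor quoted
    for the Chabrier (2003) IMF assumed by EAGLE.
    """
    return 7.49e-42 * f_imf * L_halpha_erg_s
```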
Star formation is discernible in CALIFA for spaxels with star formation rate surface densities of $\dot{\Sigma}_\star \gtrsim 10^{-9} \; {\rm M_\odot \; yr^{-1} \; pc^{-2}} $ \citep[Fig. 2 of][]{CanoDiaz16}. In MaNGA, star formation rate surface densities of $\dot{\Sigma}_\star \gtrsim 10^{-10} \; {\rm M_\odot \; yr^{-1} \; pc^{-2}}$ are detected \citep{Hsieh17}. To represent this selection effect, we employ the MaNGA-like criterion of $\dot{\Sigma}_\star > 10^{-10} \; {\rm M_\odot \; yr^{-1} \; pc^{-2}}$ when selecting spaxels that contribute to the EAGLE relations.
The independent variable in the scaling relations considered here is the stellar mass surface density, $\Sigma_\star$, so we can simply compare EAGLE to observations over a range where this is reliable for both samples. We compare plots in the range $\Sigma_\star > \, 10\ {\rm M_\odot \; pc^{-2}}$, equivalent to a kpc-scale spaxel sampling about 5 star particles at Ref100 resolution\footnote{Typically, more than 5 particles contribute to those spaxels, due to the particle smoothing.}. While the IFU surveys all probe down to lower $\Sigma_\star$ values, this limit mitigates stochastic effects due to poor particle sampling in EAGLE.
Another important factor is the radial coverage of spaxels. IFU instruments have limited angular size, and sample the inner regions of galaxies. Typically, the upper radial limit out to which gradients are measurable is $\lesssim 3$ times the effective radius \citep{Bundy15}. Observationally the effective radius, $R_e$, represents the half-light radius, but here we take $R_e$ to be the projected half-mass radius \citep[see][]{Furlong17}, selecting only spaxels at radii $< 3 R_e$. In practice, the radial cut has no perceptible effect on our result for the stellar surface density regime of $\Sigma_\star > 10\ {\rm M_\odot \; pc^{-2}}$ considered in our plots. %
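The combined spaxel cuts described in this section (the MaNGA-like $\dot{\Sigma}_\star$ floor, the $\Sigma_\star$ reliability limit and the $3R_e$ radial cut) amount to a simple boolean mask, sketched here with illustrative array and function names of our own:

```python
import numpy as np

def select_spaxels(sigma_star, sigma_sfr, r, r_e):
    """Boolean mask of spaxels entering the resolved relations.

    sigma_star : stellar surface density  [Msun / pc^2]
    sigma_sfr  : SFR surface density      [Msun / yr / pc^2]
    r          : projected spaxel radius
    r_e        : projected half-mass radius of the host galaxy
    """
    return (sigma_star > 10.0) \
        & (sigma_sfr > 1e-10) \
        & (r < 3.0 * r_e)
```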
\section{Scaling relations at low redshift}
\label{sec:res}
\begin{figure}
\includegraphics[width=\columnwidth]{plots/SFMS_nosmooth_randMaNGA.pdf}
\caption{The spatially resolved star-forming main sequence relation (rSFMS) at $z=0.1$,
plotted using spaxels sampling 1~kpc scales. We compare to the observed relations
of \citet{CanoDiaz16} (CALIFA) and \citet{Hsieh17} (MaNGA), which sample approximately the same spatial scales at $z<0.1$. The observed rSFMS fits are
plotted for $\Sigma_\star/({\rm {M_\odot \; pc^{-2}}}) > 10^{1.15}$ (the centre of the lowest $\Sigma_\star$ bin), extending to the edge of the contour enclosing $80\%$ of the contributing spaxels in the $\Sigma_\star$-$\Sigma_{\rm SFR}$ plane \citepalias[see][]{CanoDiaz16, Hsieh17}. The weighted
median values are plotted for each bin in which no single galaxy contributes $> 5 \%$ of the
total weighting. %
Shaded regions enclose the 16th-84th percentiles of the
weighted spaxel distribution in each $\log_{10}(\Sigma_\star)$ bin. We find that EAGLE reproduces the observed rSFMS slope well, with a $\approx-0.2$~dex offset in normalisation.}
\label{fig:sfms}
\end{figure}
Having developed a procedure to measure resolved properties of EAGLE galaxies in a manner suitable for a first order comparison with low-redshift IFU surveys, we now present our results and compare directly to the observationally inferred relations. We first consider the $z=0.1$ rSFMS and rMZR in sections \ref{sec:orsr} and \ref{sec:mzr} respectively. We then explore how galaxies of different mass ranges contribute to them in section \ref{sec:mdep}. Finally, we investigate the relationship between the rMZR and rSFMS by comparing their residuals in section \ref{sec:residuals}. Where appropriate, the observed $\Sigma_\star$ and $\Sigma_{\rm SFR}$ are corrected for consistency with the \citet{Chabrier03} IMF assumed by EAGLE. Convergence properties of the relations are investigated further in Appendix~\ref{sec:convergence}.
\subsection{The resolved star forming main sequence}
\label{sec:orsr}
In Fig.~\ref{fig:sfms} we plot the resolved star-forming main sequence (rSFMS) for the Ref100 and RecHi25 simulations, colour-coded blue and orange respectively. Here, solid lines represent the \textit{`weighted median'} relations. These are calculated by finding the 50th percentile of the weighted (via Equations~\ref{eq:w100} and \ref{eq:w25}) distribution of $\Sigma_{\rm SFR}$ for galaxies in uniform, contiguous bins of $\log_{10}(\Sigma_\star)$. The default weighting scheme is intended to replicate a galaxy selection uniform in $\log_{10}(M_\star)$, approximating that of the MaNGA survey \citep[e.g.][]{Wake17}. We only plot bins to which $\geq 10$ galaxies contribute, and where each individual galaxy contributes $\leq 5\%$ of the total weight. The 16th-84th percentile range of this distribution is indicated by the shaded region. The dashed coloured lines denote an alternative spaxel weighting, intended to represent the CALIFA mass distribution sample (see \S\ref{sec:galsel}). We compare to the relations derived for MaNGA \citepalias{Hsieh17} and CALIFA \citepalias{CanoDiaz16}.%
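The \textit{`weighted median'} construction can be sketched as a cumulative-weight percentile in each $\log_{10}(\Sigma_\star)$ bin (a schematic implementation under our own interpolation convention, not necessarily that used for the published curves):

```python
import numpy as np

def weighted_percentile(values, weights, q=50.0):
    """q-th percentile of a weighted distribution (mid-point convention)."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cw = np.cumsum(w) - 0.5 * w          # mid-point cumulative weights
    return np.interp(q / 100.0 * w.sum(), cw, v)

def binned_weighted_median(x, y, w, bins):
    """Weighted median of y in contiguous bins of x (NaN for empty bins)."""
    idx = np.digitize(x, bins) - 1
    return np.array([weighted_percentile(y[idx == i], w[idx == i])
                     if np.any(idx == i) else np.nan
                     for i in range(len(bins) - 1)])
```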
Generally, we see that the fiducial (MaNGA-like) Ref100 rSFMS measured for EAGLE follows the observed slope well across the $1 \lessapprox \log_{10}\Sigma_\star/ ({\rm M_\odot \; pc^{-2}}) \lessapprox 3.2$ range, but with a normalisation $\approx0.15$~dex below the MaNGA relation \citepalias{Hsieh17}. This offset is consistent with the finding that the integrated SFRs of EAGLE galaxies display a $-0.2$~dex offset relative to the majority of observational studies \citep{Furlong15}. It is worth noting that there are observational studies which claim a lower normalisation of the integrated SFMS via SED fitting techniques \citep[e.g.][]{Chang15}, with which EAGLE agrees better, though there is still considerable uncertainty in the absolute normalisation of star formation rates.
Applying the alternative (CALIFA-like) weighting scheme to EAGLE (dashed lines) yields only marginal differences in the shape and normalisation of the relations. Comparing the observational studies with each other, the CALIFA relation \citep{CanoDiaz16} is normalised $\approx 0.15$~dex higher than MaNGA over the plotted range, but is found to be consistent given the uncertainties \citep{Hsieh17}. The $\approx 0.3$~dex total offset of the Ref100 rSFMS below the CALIFA relation is thus compatible with the $\approx0.15$~dex offset found for the EAGLE-MaNGA comparison, along with systematic uncertainties in the data. The consistency of the EAGLE relation recovered for both weighting schemes is somewhat reassuring. However, this does not necessarily indicate that the rSFMS is independent of $M_\star$, as is explored further in \S\ref{sec:mdep}.
The RecHi25 relation exhibits marginally closer agreement with the observations for the fiducial weighting scheme, reproducing the observed slope and exhibiting a $\approx0.15$~dex offset below the relation derived for MaNGA \citepalias{Hsieh17} over the probed $\Sigma_\star$ range. The fiducial RecHi25 relation is slightly higher than that of Ref100 (by $\lessapprox0.1$~dex), echoing the higher main sequence normalisation found for RecHi25 \citep[e.g.][]{Schaye15}. However, this difference is erased when a CALIFA-like weighting scheme is used. This may reflect that the CALIFA mass distribution peaks at $\log_{10}(M_\star/{\rm M_\odot}) \sim 10$, where differences between the Ref100 and RecHi25 sSFRs are minimal \citep{Furlong15}. The origin of these differences is explored further below; the influence of massive galaxies, which are not captured within the 25$^3$~Mpc$^3$ volume of RecHi25, is shown in \S\ref{sec:mdep}, and the minor influence of resolution is demonstrated in Appendix~\ref{sec:convergence}.
A power law ($\Sigma_{\rm SFR} \propto \Sigma_{\rm \star}^n$) is fit to the plotted Ref100 and RecHi25 weighted median values, and their slopes, $n$, are noted in the legend. We find that Ref100 (RecHi25) exhibits a slope of $\approx 0.71$ (0.75), close to the value of $\approx 0.72$ derived for MaNGA and CALIFA \citepalias{CanoDiaz16,Hsieh17}. The rSFMS slope has been found to be comparable to the integrated SFMS slope observationally. The literature compilation of \citet{Speagle14} predicts a SFMS slope that is shallower than the rSFMS, with a value of 0.53 at $z=0.1$, though some individual studies give slopes varying from significantly steeper to shallower values. Considering active galaxies (${\rm SFR}/M_\star > 10^{-11} \, {\rm yr}^{-1}$) in the Ref100 run, we find a SFMS slope that is slightly shallower than the rSFMS, with a value of 0.6 for the mass range $9 < \log_{10}(M_\star/{\rm M_\odot}) \leq 11.3$, assuming the MaNGA-like galaxy weighting of equation~\ref{eq:w100}.
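The power-law index quoted above can be obtained from a straight-line fit to the binned medians in log-log space. The sketch below assumes an unweighted least-squares fit, since the exact fitting procedure is not detailed here; function and array names are illustrative.

```python
# Fit Sigma_SFR ∝ Sigma_star^n by fitting a straight line,
# log10(Sigma_SFR) = n * log10(Sigma_star) + c, to the binned medians.
import numpy as np

def power_law_slope(log_sstar, log_ssfr_med):
    """Best-fit power-law index n, ignoring empty (NaN) bins."""
    log_ssfr_med = np.asarray(log_ssfr_med, dtype=float)
    good = np.isfinite(log_ssfr_med)
    n, _ = np.polyfit(np.asarray(log_sstar, dtype=float)[good],
                      log_ssfr_med[good], 1)
    return n
```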
The relationship between the integrated and resolved star-forming main sequence is complex, because it builds in the mass dependence of sizes, ISM distributions and star formation efficiency. Hence, interpreting the difference between the slopes of the SFMS and rSFMS is difficult. However, the different slopes of the rSFMS constructed from galaxies at different epochs, or from galaxies with differing levels of SFR for their stellar mass, have been used as an indicator of the \textit{inside-out} evolution of galaxies \citep[e.g.][]{Abdurrouf17, Medling18, Liu18, Ellison18}. Such evolution is explored further in \S\ref{sec:evo}.
\subsection{The resolved mass-metallicity relation}
\label{sec:mzr}
\begin{figure} \includegraphics[width=\columnwidth]{plots/MZgas_nosmooth_randMaNGA.pdf}
\caption{The weighted median spatially resolved relation between gas-phase metallicity and stellar mass (rMZR) for $z=0.1$ EAGLE galaxies. Because of uncertainties in the absolute metallicity calibration, we consider only the shape of the relation; the EAGLE data have been shifted by $-0.6$~dex to account for the calibration of absolute abundances (see \S\ref{sec:zcal} for details). \textit{Lines} represent EAGLE, with colours and line-styles indicating simulation volume and spaxel weighting scheme respectively, as in Fig.~\ref{fig:sfms}. We compare to the local IFU studies of \citet{BarreraBallesteros16} (MaNGA) and \citet{Sanchez13} (CALIFA), grey circles and dashes respectively, where bars indicate the inferred 1$\sigma$ scatter. We see that the predicted relations are qualitatively similar to the observations.}
\label{fig:mz}
\end{figure}
We now consider the resolved relation between gas-phase metallicity and stellar mass (rMZR) for the Ref100 and RecHi25 EAGLE simulations, plotted in Fig.~\ref{fig:mz}. This is constructed using the $\Sigma_\star$ and \textit{SF-weighted} gas-phase oxygen abundance, $\log_{10}({\rm O/H})$, of individual 1~kpc scale virtual spaxels. As for Fig.~\ref{fig:sfms}, the galaxy and spaxel selection are as described in sections~\ref{sec:galsel} and \ref{sec:spaxsel}, respectively. We do not concern ourselves with how well EAGLE captures the absolute calibration (i.e. normalisation) of this relation, given the uncertainties discussed in \S\ref{sec:zcal}. For ease of comparison, we thus apply a shift of $-0.6$~dex to the EAGLE results (see \S\ref{sec:zcal} for details).
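A minimal sketch of how the SF-weighted abundance of a single spaxel could be formed from the gas elements it contains. This is our illustration only; whether the averaging is performed on $\log_{10}({\rm O/H})$ or on the linear abundance, and the array names, are assumptions.

```python
# SF-weighted gas-phase oxygen abundance of one spaxel: average the
# log10(O/H) of its gas elements, weighted by their star formation rates.
import numpy as np

def sf_weighted_oxygen_abundance(log_oh_gas, sfr_gas):
    """Return the SFR-weighted mean log10(O/H), or NaN if no SF gas."""
    w = np.asarray(sfr_gas, dtype=float)
    if w.sum() == 0.0:
        return np.nan  # spaxel contains no star-forming gas
    return np.average(np.asarray(log_oh_gas, dtype=float), weights=w)
```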
Comparing first the Ref100 simulation to observations, we find that the shapes of the relations are similar. For $\Sigma_\star \lesssim 10^{2} {\rm M_\odot \; pc^{-2}}$, we find a positive trend where $\log_{10}({\rm O/H})$ increases by $\approx 0.15$~dex over 1~dex in stellar surface density. Intuitively, this comes about due to local gas being generally subject to a higher level of enrichment in regions of higher stellar density. Ref100 shows a shallower relation between gas-phase metallicity and $\Sigma_\star$ than is inferred from the data. This is likely related to the integrated MZR in EAGLE being flatter than is inferred observationally \citep{Schaye15}, explored further in \S\ref{sec:mdep}.
By $\Sigma_\star \approx 10^{2.5} {\rm M_\odot \; pc^{-2}}$, the observed relations flatten significantly, such that the gas metallicity becomes independent of, or even anti-correlates with, $\Sigma_\star$. A turn-over is measured in the Ref100 relation, but at higher $\Sigma_\star $ than is fully captured in the data. In observational studies, the presence of a plateau in the rMZR has been attributed to a flattening or drop in the gas-phase metallicity towards the innermost parts of massive galaxies \citep[e.g.][]{RosalesOrtega11, Sanchez12, RosalesOrtega12}. Saturation of O/H in gas near the old stellar centres of galaxies, or saturation in the emission lines themselves in highly enriched gas, have been posited as alternative explanations for a flattening in the rMZR \citep[e.g.][]{RosalesOrtega12}, but do not imply a turn-over. %
The turn-over in the rMZR may be related to that observed in the integrated MZR, both observationally \citep[e.g.][]{Yates12} and in EAGLE \citep[e.g.][]{Segers16, DeRossi17}. By exploring different feedback prescriptions, \citet{DeRossi17} show that in EAGLE the turn-over in the integrated MZR is driven by AGN feedback. Unlike stellar feedback, AGN feedback is not directly associated with enrichment of the local gas. As such, AGN may drive large scale galaxy winds that remove metal-rich gas without enriching local gas further. As the high-$\Sigma_\star$ spaxels preferentially probe the central regions of massive galaxies, where AGN are most influential, AGN feedback seems a likely cause of the rMZR turnover in EAGLE. In addition, \citet{DeRossi17} find that AGN induce an inversion in the correlation between integrated sSFR and metallicity at high mass, which we investigate for the resolved properties in \S\ref{sec:residuals}. We explore the evolution of the turn-over in \S\ref{sec:evo}.
We now turn to the rMZR constructed for RecHi25 galaxies, which \citet{Schaye15} showed reproduces the integrated relation better for $M_\star < 10^{10} {\rm M_\odot}$. We find that for the $\Sigma_\star \lesssim 10^{2.5} {\rm M_\odot \; pc^{-2}}$ range, the RecHi25 rMZR exhibits a steeper slope, in better agreement with the data. However, due to the lack of high-mass galaxies captured within the RecHi25 volume, the $\Sigma_\star$ values at or above the Ref100 turn-over are not well sampled. %
We note that the application of the fiducial (MaNGA-like) and CALIFA-like weighting schemes yield only marginal differences for the EAGLE relations. For Ref100, this may also be ascribed to the flatter than observed integrated MZR, such that placing a different emphasis on galaxies at the high-mass end makes little difference to the results. For the RecHi25 relation, this may again be attributed to a limited sampling of massive galaxies.
Another interesting point of comparison is the scatter about the trend. The shaded regions around the EAGLE relations represent the 16th to 84th percentile range, which should be comparable to the $\pm1\,\sigma$ range provided for the observational data, assuming near gaussianity. Comparing to the observations, it seems that the spread about the EAGLE trend is typically larger than observed by a factor $\approx$2-4, as is the case for the integrated relation \citep{Schaye15}. A potential cause of this is the stochastic enrichment and lack of \textit{metal diffusion} in EAGLE. Gas particles are directly enriched by individual star particles in the simulation, and metals are not exchanged between them. SPH smoothing, as is employed here, goes some way towards reducing the variance in metallicity between spaxels, but has no relation to the physical process of metal diffusion. The influence of smoothing is demonstrated by comparing the rMZR scatter of Fig.~\ref{fig:mz} to that of a point-like particle treatment in Appendix~\ref{sec:pointlike}, where the scatter is larger still.
\subsection{Dependence on galaxy stellar mass}
\label{sec:mdep}
\begin{figure*}
\includegraphics[width=\textwidth]{plots/Ref100_stellarmasssplit_nosmooth_randMaNGA.pdf}
\caption{The spatially resolved star-forming main sequence (left, as Fig.~\ref{fig:sfms}) and stellar mass gas-phase metallicity relation (right, as Fig.~\ref{fig:mz}) for the Ref100 simulation at $z=0.1$, split into contiguous bins of integrated stellar mass. Coloured contours enclose 80\% of the weighted spaxel mass in each $M_\star$ bin, with the median relation indicated by the coloured lines. Solid lines and contours indicate the fiducial MaNGA-like spaxel weighting scheme, with dashes indicating the CALIFA weighting for the highest $M_\star$ bin. Unsurprisingly, we see that the higher mass galaxies account for the higher stellar surface density spaxels. We see that the rSFMS relation in particular varies significantly with the integrated stellar mass of the contributing galaxies.}
\label{fig:splitmass}
\end{figure*}
A clear indication that a resolved relation is more fundamental than its integrated counterpart would be if the resolved relation does not change with the integrated properties of galaxies. Fig.~\ref{fig:splitmass} shows how the Ref100 relations of Figs.~\ref{fig:sfms} and \ref{fig:mz} break down by the stellar mass of the contributing galaxies. Splitting Ref100 galaxies into contiguous bins of $\log_{10}(M_\star / {\rm M_\odot})$, we plot the weighted median trends (thick coloured lines) for each mass bin, as well as the contour enclosing 80\%{} of the weighted total stellar mass (thin coloured lines). The overall weighted median trends of Figs.~\ref{fig:sfms} and \ref{fig:mz} are plotted for comparison (black lines).%
For the rSFMS (left panel) we first consider the weighted median lines for each mass range. We see that rather than sampling different $\Sigma_\star$ regimes of a common trend, distinct trends emerge for differing $M_\star$ bins. For $\log_{10}(M_\star/{\rm M_\odot}) < 11$, we find the trend becomes shallower with increasing $M_\star$, with the logarithmic slope varying from $n \approx 1$ at $9 < \log_{10}(M_\star/{\rm M_\odot}) < 10$ to $n\approx 0.6$ at $10 \leq \log_{10}(M_\star/{\rm M_\odot}) < 11$. At fixed $\Sigma_\star$, $\Sigma_{\rm SFR}$ decreases with $M_\star$.
The highest $M_\star$ bin deviates somewhat from the trends observed at lower $M_\star$. At $\Sigma_\star < 10^{2} {\rm M_\odot pc^{-2}}$, the relation is significantly flatter, and normalised 0.2-0.3~dex lower than for other mass bins. At $\Sigma_\star > 10^{2} {\rm M_\odot pc^{-2}}$, $\Sigma_{\rm SFR}$ rises steeply, with a slope similar to that of $9 < \log_{10}(M_\star/{\rm M_\odot}) < 10$ galaxies over this $\Sigma_\star$ range.
The significant $M_\star$ dependence of the rSFMS will naturally lead to differing median trends for different mass selections. However, in Fig.~\ref{fig:sfms} it was demonstrated that the two weighting schemes yield only marginal differences. Applying a CALIFA-like weighting scheme to galaxies within each $M_\star$ bin below $\log_{10}(M_\star/{\rm M_\odot}) = 11$ yields small differences in the rSFMS with respect to our fiducial (MaNGA-like) weighting scheme. For the highest mass bin, the difference is more significant, and we show the CALIFA-like weighting using a dashed line and contour. We see that this scheme yields a trend that deviates less from the relations at lower $M_\star$, with a steeper slope at $\Sigma_\star < 10^2\ {\rm M_\odot\ pc^{-2}}$, and a shallower slope at higher $\Sigma_\star$. These differences can be attributed to the higher weighting of the most massive galaxies in the MaNGA-like scheme relative to the CALIFA-like scheme.
It is interesting to consider why the most massive galaxies might diverge from the median relation. The star forming gas morphologies of the most massive galaxies are distinctly clumpy, shown in appendix Fig.~\ref{fig:sfmorph}, which may be indicative of a different mode of star formation in these systems. Differing star formation efficiencies may be influenced by the reservoir of gas having to build up between powerful AGN events. The steep increase in galaxy sizes for the most massive galaxies, for both the observed and EAGLE populations \citep{Furlong17}, could also play a role, with more high $\Sigma_\star$ spaxels located in the outer parts of galaxies. %
The finding of some $M_\star$ dependence in the EAGLE rSFMS is notable. \citet{CanoDiaz16} find no clear $M_\star$ dependence in the $\Sigma_\star-\Sigma_{\rm SFR}$ relation for CALIFA galaxies. \citet{Pan18} find some trends between $M_\star$ and the $\Sigma_\star-\Sigma_{\rm H\alpha}$ relation, exhibiting a similar flattening in the relation at higher $M_\star$, but attribute this to the influence of ionisation mechanisms other than star formation. However, other studies \citep[e.g.][]{GonzalezDelgado16, Medling18} identify strong trends between the $\Sigma_\star-\Sigma_{\rm SFR}$ relation and galaxy morphology, a property which itself exhibits a strong trend with $M_\star$ \citep[e.g.][]{Kelvin14}. Nevertheless, it is unclear whether the $M_\star$ trend found in EAGLE would be recoverable in the data, given the uncertainties and contamination by ionisation not associated with star formation. We see from the 80\% contours that these trends may also be difficult to detect, considering that different $M_\star$ bins contribute in different ranges of $\Sigma_\star$.
Turning now to the $\Sigma_\star$-${\rm O/H}$ relation (right panel), we again see systematic differences between $M_\star$ bins, but these are much more subtle. In the $1 \lesssim \log_{10}\Sigma_\star/ ({\rm M_\odot \; pc^{-2}}) \lesssim 2$ range, star-forming gas in spaxels from galaxies of $9 < \log_{10}(M_\star/{\rm M_\odot}) < 10$ is slightly ($\lessapprox 0.1$~dex) less enriched than that of $10 < \log_{10}(M_\star/{\rm M_\odot}) < 11$ galaxies. This indicates a certain level of \textit{non-local} chemical evolution, i.e. the metals produced by stellar populations may have enriched gas outside of the spaxels in which they reside. As spaxels in more massive galaxies are biased to higher $\Sigma_\star$, this can lead to higher-metallicity gas at fixed $\Sigma_\star$, with additional metals contributed by regions of greater stellar density. Again, the most massive galaxies ($\log_{10}(M_\star/{\rm M_\odot}) \geq 11$) show a distinct behaviour, with systematically lower O/H values similar to that of the $9 < \log_{10}(M_\star/{\rm M_\odot}) < 9.5$ bin at low $\Sigma_\star$. However, we see a much less dramatic divergence for the most massive galaxies in the rMZR compared to the rSFMS, reflected in the similarity between trends where the MaNGA-like and CALIFA-like weighting schemes are applied.
The contours clearly display how different $M_\star$ bins sample different $\Sigma_\star$ regimes of a consistent (within 0.1~dex) global relation in EAGLE. This suggests that the local relation is more fundamental, where the integrated metallicities largely emerge from trends in enrichment on local scales. \citet{BarreraBallesteros16} find a similar result for the CALIFA sample, though they find a significantly steeper rMZR overall, as was seen in Fig.~\ref{fig:mz}. The sampling of a common rMZR also suggests why different weighting schemes make little difference to the global rMZR trend of Fig.~\ref{fig:mz}. %
\subsection{Comparing the rSFMS and rMZR residuals}
\label{sec:residuals}
To further investigate the connection between the spatially resolved $\Sigma_\star$, $\Sigma_{\rm SFR}$ and ${\rm O/H}$ values, we now investigate how the residuals of the rSFMS relate to the residuals of the rMZR.
In Fig.~\ref{fig:residuals}, we show the rMZR as a thick black line, as in Fig.~\ref{fig:mz}. In the left panel, the colour scale indicates the median offset of spaxels from the rSFMS relation (see Fig.~\ref{fig:sfms}). We compute the residuals from the interpolated median relation shown in Fig.~\ref{fig:sfms}. The clear vertical colour gradients in Fig.~\ref{fig:residuals} imply that the residuals in the rSFMS and rMZR are strongly related. For spaxels with $\Sigma_\star \lesssim 10^{2} \; {\rm M_\odot} \; {\rm pc}^{-2}$ the residuals of the rSFMS and rMZR \textit{anti-correlate}, i.e. spaxels with higher $\Sigma_{\rm SFR}$ for their $\Sigma_\star$ tend to exhibit lower ${\rm O/H}$ and vice-versa. This is also seen in the integrated relations observationally \citep[e.g.][]{Yates12}.
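Computing residuals from an interpolated median relation, as described above, can be sketched as follows (function and array names are hypothetical; linear interpolation between bin centres is our assumption):

```python
# Offset of each spaxel from the binned median relation:
# Delta(x) = y - median_relation(x), with the relation interpolated in x.
import numpy as np

def relation_residuals(x, y, x_med, y_med):
    """Residuals about a binned median relation, skipping empty bins."""
    y_med = np.asarray(y_med, dtype=float)
    good = np.isfinite(y_med)
    return np.asarray(y) - np.interp(x, np.asarray(x_med)[good], y_med[good])
```

The same function applied to $\log_{10}\Sigma_{\rm SFR}$ and to $\log_{10}({\rm O/H})$ against their respective median relations yields the two sets of residuals whose correlation is examined here.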
At $\Sigma_\star \gtrsim 10^{3} \; {\rm M_\odot} \; {\rm pc}^{-2}$ (i.e. above the turn-over in the rMZR) the relationship inverts, such that spaxels with lower than average star formation rates typically also have lower than average metallicities for a given $\Sigma_\star$. Interestingly, in the $10^{2} < \Sigma_\star/({\rm M_\odot} \; {\rm pc}^{-2}) < 10^{3}$ range the relationship remains strong, but is non-monotonic, such that the most and least enriched spaxels both have relatively low $\Sigma_{\rm SFR}$.
An analogous inversion of the relationship between \textit{integrated} metallicities and SFRs of massive galaxies has been inferred observationally \citep[e.g.][]{Yates12}. Using feedback variations in EAGLE, \citet{DeRossi17} find that this effect is driven by AGN feedback. As described in \S\ref{sec:mzr}, stellar feedback processes disrupting star formation are coincident with the enrichment of local gas by stellar winds and SN ejecta, whereas AGN feedback events do not enhance gas metallicity. Thus, the sub-rSFMS spaxels at $\Sigma_\star \sim 10^{2.5} \; {\rm M_\odot} \; {\rm pc}^{-2}$ with high and low O/H likely correspond to regions where star formation is suppressed by stellar and AGN feedback respectively.
The right hand panel of Fig.~\ref{fig:residuals} shows that the trends with the residuals of $\Sigma_{\rm SFR}$ are closely mirrored by the trends with the residuals of the $f_{\rm gas}(\Sigma_\star)$ relation (where $f_{\rm gas}$ is the baryon fraction in star-forming gas), suggesting that it is the gas fraction that largely drives the scatter in the rSFMS. This mirrors the findings of \citet{Lagos16} and \citet{DeRossi17} for the integrated EAGLE relations.
\begin{figure*}
\includegraphics[width=0.48\textwidth]{plots/MZrel_sfmsresid_nosmooth_randMaNGA.pdf}
\includegraphics[width=0.48\textwidth]{plots/MZrel_gfracresid_nosmooth_randMaNGA.pdf}
\caption{Relation between the residuals of the Ref100 resolved MZR relation of Fig.~\ref{fig:mz} and the median residual of the rSFMS of Fig.~\ref{fig:sfms} (colour scale in left panel) and the median resolved $f_{\rm gas}$-$\Sigma_\star$ relation (colour scale in right panel). Coloured bins are shown where more than 100 distinct galaxies contribute spaxels. The Ref100 fiducially weighted median relation (black lines) and MaNGA data of \citet{BarreraBallesteros16} (grey points) are plotted as in Fig.~\ref{fig:mz}. The residuals anti-correlate below the turnover in the rMZR, but this relationship inverts above the turnover, and displays a non-monotonic relationship around the turnover at $\Sigma_\star \sim 10^{2.5} \; {\rm M_\odot} \; {\rm pc}^{-2}$.}
\label{fig:residuals}
\end{figure*}
\section{Summary \& Conclusions}
\label{sec:summary}
We have produced maps of spatially resolved physical properties for the virtual galaxies formed in the EAGLE simulation, and presented resolved scaling relations between various properties. In particular, we focused on the resolved `\textit{star-forming main sequence}' (rSFMS) and resolved gas-phase mass-metallicity relations (rMZR) on 1~kpc scales. We assessed the simulated scaling relations by comparing to data inferred from local IFU surveys \citep{Sanchez12,Bundy15}, and high redshift imaging \citep{Wuyts13,Abdurrouf18}. For the SFMS we considered both the shape and normalisation of the relation, but for the resolved MZR we ignored the normalisation, due to the large systematic uncertainty in the absolute normalisation of metallicity values. \\
We first concentrated on the local ($z=0.1$) relations, which were constructed from an appropriately weighted galaxy sample for comparison to the local MaNGA and CALIFA surveys. Our general conclusions are as follows:
\begin{itemize}
\item The rSFMS slope agrees remarkably well with observations for both the standard resolution fiducial 100$^3$~cMpc$^3$ volume (Ref100) and the high resolution 25$^3$~cMpc$^3$ volume with recalibrated subgrid parameters (RecHi25).%
\item The normalisation of the EAGLE rSFMS relations show a $\approx-0.15$~dex offset from the observed relation for the fiducial MaNGA weighting scheme. This is consistent with the $\approx-0.2$~dex offset from observations found for the integrated SFMS \citep{Furlong15}, given the observational uncertainties. This suggests that the $z=0.1$ integrated and resolved main sequence relations can be brought into simultaneous agreement with the observations by changing the absolute normalisation of SFRs alone. We note that there is uncertainty in the absolute normalisation of SFRs observationally, with some studies finding close agreement with EAGLE \citep[e.g.][]{Chang15}.
\item The shape of the rMZR relation also generally follows that of the data (see Fig.~\ref{fig:mz}), with the Ref100 simulation in particular exhibiting a peak at a similar characteristic stellar surface density as seen in the data, $\Sigma_\star \sim 10^{2.5} {\rm M_\odot \; pc^{-2}}$. Again, imitating the selection function of CALIFA instead of MaNGA induces only marginal differences in the rMZR. %
\item The rSFMS and rMZR exhibit qualitatively different dependences on the $M_\star$ of their host galaxies. In the mass range $9 \leq \log_{10}(M_\star/{\rm M_\odot}) < 11$, the rSFMS slope depends on the $M_\star$ of galaxies from which it is constructed, with a best-fit power law index decreasing from $n\approx1$ to $n\approx0.7$ as $M_\star$ increases by 2~dex. Higher-mass galaxies deviate from this trend. The rMZR shows a much weaker $M_\star$ dependence, varying by only $0.1$~dex between $M_\star$ bins (Fig.~\ref{fig:splitmass}).
\item The residuals of the rSFMS and rMZR are strongly related (Fig.~\ref{fig:residuals}); the correlation is negative at low $\Sigma_\star$ ($\Sigma_\star < 10^2 {\rm M_\odot \; pc^{-2}}$) and positive at high $\Sigma_\star$ ($\Sigma_\star > 10^3 {\rm M_\odot \; pc^{-2}}$). In the intermediate surface density range, the relation is strong but \textit{non-monotonic}, with the spaxels lying above and below the rMZR both having preferentially low $\Sigma_{\rm SFR}$. The inversion of this relation is ascribed to a transition between the regimes where feedback is predominantly driven by star formation and by AGN.
\end{itemize}
We then considered how EAGLE predicts these relations to evolve, using an unweighted, volume-limited sample of galaxies with $M_\star > 10^{10} {\rm M_\odot}$. We find that:
\begin{itemize}
\item The rSFMS evolves, shifting to higher $\Sigma_{\rm SFR}$ values and becoming steeper with redshift. The observed rSFMS is normalised significantly higher at $z\approx2$, as was also found for the integrated SFMS \citep{Furlong15}, but appears consistent within the large error range (Fig.~\ref{fig:sfmsevo}).
\item The rMZR also shows strong evolution, both in terms of its shape and normalisation. The resolved MZR evolves from a steep, convex relation at $z=2$, to the shallower concave relation found at $z=0.1$. While the rMZR exhibits a positive trend at all redshifts for stellar surface densities of $\Sigma_\star < 10^{2.5}$~M$_\odot$~pc$^{-2}$, at higher $\Sigma_\star$, the rMZR evolves from a positive to negative trend between $z=2$ and $z=0.1$ (Fig.~\ref{fig:mzevol}). We demonstrated that the turn-over in the low-redshift rMZR is induced by AGN feedback, showing that the relation found for the NoAGN50 simulation volume, where AGN feedback is not included, remains convex throughout cosmic time. %
\item The redshift independence of the EAGLE integrated $\log_{10}({\rm O/H})$-$f_{\rm gas}$ relation found by \citet{DeRossi17} is not exhibited by its resolved counterpart (Fig.~\ref{fig:zfgas}). This is attributed to the \textit{non-local} processes that emerge within EAGLE galaxies. The enrichment of the gas probed by a spaxel cannot be attributed solely to the stars within that spaxel, because gas and metals move between individual kpc-scale regions.%
\end{itemize}
Altogether, we find that while the existence of spatially resolved scaling relations indicates that physical processes such as star formation, chemical enrichment and feedback also depend and operate on small scales, these resolved relations are not more `\textit{fundamental}'. The fact that the resolved scaling relations evolve implies that the evolution of their integrated counterparts is not merely the result of higher-redshift galaxies having different surface density profiles. Indeed, the evolution of galaxies depends on both local and galaxy-wide processes.
While mapping physical properties of EAGLE galaxies appears to show resolved scaling relations in reasonable agreement with observations, we emphasise that observationally these scaling relations must be derived from observable properties. A more direct comparison would be to use virtual observations of EAGLE galaxies \citep{Camps16,Trayford17} to derive resolved properties using the same approach, potentially building in the same assumptions and systematic effects present in the data. %
\section*{Acknowledgements}
We thank Sebastian S\'{a}nchez for useful insight into the calibration of metallicity measurements from observations. This study made use of the publicly available {\tt py-sphviewer} code \citep{BenitezLambay15}. Our analysis was carried out using the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility ({\tt www.dirac.ac.uk}). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grant ST/H008519/1, and STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure.
1602.02449
\section{Introduction}
The effect of a magnetic field on electrons in crystals is fascinating and one of the
basic problems of solid state physics.
Although the fundamental principles are simple, our understanding of this
problem has been far from complete
due to the complicated matrix elements between different Bloch bands,
known as the interband effects of a magnetic field.\cite{Kubo}
One of the typical problems of this interband effect is the orbital magnetism.
After the pioneering work on orbital magnetism
by Landau for free electrons,\cite{Landau}
the effect of a periodic potential
was considered by Peierls,\cite{Peierls}
who obtained the
Landau-Peierls formula for the orbital susceptibility,
\begin{equation}
\chi_{\rm LP} = \frac{e^2}{6\hbar^2 c^2} \sum_{\ell,{\bm k}} \left\{
\frac{\partial^2 \varepsilon_\ell}{\partial k_x^2}
\frac{\partial^2 \varepsilon_\ell}{\partial k_y^2} - \left(
\frac{\partial^2 \varepsilon_\ell}{\partial k_x \partial k_y}\right)^2
\right\}
\frac{\partial f(\varepsilon_\ell)}{\partial \varepsilon},
\label{LandauPeierls}
\end{equation}
where $f(\varepsilon)$ is the Fermi distribution function and
$\varepsilon_\ell ({\bm k})$ is the Bloch band energy.
This Landau-Peierls formula is obtained by considering the effect
of a magnetic field by a phase factor
\begin{equation}
{\rm exp}\left( \frac{ie}{\hbar c}\int_{{\bm r}_i}^{{\bm r}_j} {\bm A}\cdot d{\bm \ell}\right),
\label{PeierlsPh}
\end{equation}
(the so-called Peierls phase) in the hopping integral of the single-band
tight-binding model.\cite{Peierls}
Here ${\bm A}$ is a vector potential and $e<0$ is the electron charge.
Attaching the Peierls phase to the hopping integral
corresponds to the modification of the energy dispersion,
$\Eell({\bm k}) \rightarrow \Eell({\bm k}-e{\bm A}/c\hbar)$,
in the presence of a magnetic field.
Apparently $\chi_{\rm LP}$ does not include the deformation of
the wave function resulting from the interband matrix elements of the
magnetic field.
Therefore it is believed that $\chi_{\rm LP}$ includes only the intraband
effects. (The situation is not so simple, as we show later in the present paper.)
Furthermore, $\chi_{\rm LP}$ vanishes for insulators since it is
proportional to $\partial f(\varepsilon_\ell)/\partial \varepsilon$.
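As a concrete illustration (our own numerical sketch, not part of the original argument), the Landau-Peierls formula (\ref{LandauPeierls}) can be evaluated for a single square-lattice tight-binding band $\varepsilon({\bm k}) = -2t(\cos k_x + \cos k_y)$, working in dimensionless units where the prefactor $e^2/6\hbar^2c^2$ and the lattice constant are set to unity:

```python
# Illustrative evaluation of the Landau-Peierls formula for a 2D
# tight-binding band; dimensionless units (e^2/6hbar^2c^2 = a = 1).
import numpy as np

def chi_LP_2d(mu, t=1.0, kT=0.05, nk=200):
    """Landau-Peierls susceptibility for eps(k) = -2t(cos kx + cos ky),
    averaged over the first Brillouin zone."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    kx, ky = np.meshgrid(k, k)
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
    d2x = 2.0 * t * np.cos(kx)   # d^2 eps / d kx^2
    d2y = 2.0 * t * np.cos(ky)   # d^2 eps / d ky^2
    dxy = 0.0                    # mixed derivative vanishes for this band
    x = (eps - mu) / kT
    dfde = -0.25 / kT / np.cosh(0.5 * x) ** 2  # df/d eps (Fermi function)
    return float(np.sum((d2x * d2y - dxy ** 2) * dfde) / nk ** 2)
```

Because the integrand is weighted by $\partial f/\partial\varepsilon$, the result is diamagnetic near the band edges and negligible once the chemical potential lies well outside the band, consistent with the statement that $\chi_{\rm LP}$ vanishes for insulators.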
On the other hand, in bismuth and its alloys, it has been known experimentally
that the diamagnetism takes its
maximum when the chemical potential is located in the
band gap,\cite{BiExp1,BiExp2,BiExp3,BiExp4}
i.e., in the insulating state.
Apparently the Landau-Peierls formula fails to explain the
large diamagnetism in bismuth and its alloys,
which had been a mystery for a long time.
Stimulated by this experimental fact, there were various efforts to clarify the
interband effects of a magnetic field on the orbital susceptibility.\cite{Wilson,Adams,Kjeldaas,Roth,Blount,Wannier,Ichimaru,YamajiKubo,HFKubo,HS1,HS2,HS3}
For example, several theoretical studies showed that the difference between
the total susceptibility and $\chi_{\rm LP}$ is of the same order as the difference
between $\chi_{\rm LP}$ and Landau susceptibility,\cite{Adams,HFKubo,YamajiKubo}
even in the nearly-free electron cases.
The large diamagnetism of bismuth was finally understood by Fukuyama and Kubo\cite{HFKubo2}
who calculated the magnetic susceptibility based on the Wigner representation.
It was clarified that the interband effect of a magnetic field and the strong spin-orbit
interaction are essential.
After these theoretical efforts, one of the present authors\cite{Fukuyama}
(hereafter referred to as I) discovered an exact but very simple formula for the orbital susceptibility,
\begin{equation}
\chi = \frac{e^2}{\hbar^2c^2} k_{\rm B} T \sum_{{\bm k},n} {\rm Tr} \
\gamma_x {\cal G} \gamma_y {\cal G} \gamma_x {\cal G} \gamma_y {\cal G},
\label{FukuyamaF}
\end{equation}
where $\cal G$ is the thermal Green's function ${\cal G}({\bm k}, \varepsilon_n)$
in a matrix form whose $(ij)$ component is the matrix element between the $i$- and
$j$-th band.
$\varepsilon_n$ is the Matsubara frequency
and $\gamma_\mu$ represents the current operator in the
$\mu$-direction divided by $e/\hbar$.
The spin multiplicity of 2 has been taken into account and Tr
is to take trace over the band indices.
Originally, this formula was derived based on the
Luttinger-Kohn representation.\cite{LuttingerKohn}
However, as discussed in I, the formula is also valid
in the usual Bloch representation, because the two representations are
related by a unitary transformation and the trace is invariant
under such a transformation.
This exact one-line formula (\ref{FukuyamaF}) has been applied
to practical models, such as
the Weyl equation realized in graphene and in the organic conductor
$\alpha$-(BEDT-TTF)$_2$I$_3$,\cite{FukuGra,Kobayashi,FukuRev}
and the Dirac equation in bismuth,\cite{FOF1,FOF2,FOF3,BiRev}
expressed in Luttinger-Kohn-type Hamiltonians.
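Since the trace in (\ref{FukuyamaF}) is invariant under unitary transformations, the formula can also be evaluated numerically in the orbital basis of a model Hamiltonian. The following sketch (our own illustration, not part of the original derivations; the gapped two-dimensional Dirac model, the units $e=\hbar=c=k_{\rm B}=1$, and all grid parameters are arbitrary choices) evaluates the one-line formula by brute force with the chemical potential in the gap:

```python
import numpy as np

# Pauli matrices: orbital basis of the two-band model
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def chi_one_line(Delta, v=1.0, T=0.1, n_max=80, nk=41, kmax=4.0):
    """Brute-force chi = k_B T sum_{k,n} Tr[gx G gy G gx G gy G]
    for H(k) = v(kx sx + ky sy) + Delta sz with mu = 0 (in the gap).
    Units e = hbar = c = k_B = 1; grid and cutoff values are arbitrary."""
    eps = (2 * np.arange(-n_max, n_max) + 1) * np.pi * T   # Matsubara freqs
    gx, gy = v * sx, v * sy                                # gamma_mu = dH/dk_mu
    ks = np.linspace(-kmax, kmax, nk)
    dk = ks[1] - ks[0]
    I2 = np.eye(2)
    total = 0.0
    for kx in ks:
        for ky in ks:
            H = v * (kx * sx + ky * sy) + Delta * sz
            # stack of Green's functions G(k, i eps_n) over all eps_n
            G = np.linalg.inv(1j * eps[:, None, None] * I2 - H)
            M = gx @ G @ gy @ G @ gx @ G @ gy @ G
            total += np.trace(M, axis1=1, axis2=2).sum().real
    # k-sum converted to a momentum integral per unit area
    return T * total * dk ** 2 / (2 * np.pi) ** 2
```

Consistent with the anomalous diamagnetism discussed above, the result is negative for the chemical potential in the gap, and its magnitude decreases with increasing gap $\Delta$.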
For the Bloch representation, an exact formula
written in terms of Bloch wave functions had been
derived by Hebborn {\it et al.}\cite{HS1,HS2,HS3}
(especially in Ref.~\cite{HS3}, which will be called HLSS in the following)
before the exact one-line formula (\ref{FukuyamaF}) was derived.
It was proved in I, with the help of the formulation by Ichimaru,\cite{Ichimaru}
that (\ref{FukuyamaF}) is equivalent to the result of HLSS.
However, the result of HLSS is too complicated for practical use.
Because of this difficulty, the orbital susceptibility of Bloch
electrons in general has not been explored in detail.
In particular, quantitative estimates of the various
contributions and their physical meanings have not been clarified.
In the single-band tight-binding model, there is a fundamental problem:
when one restricts the band indices of the Green's functions in (\ref{FukuyamaF})
to a single band, one obtains a susceptibility, denoted $\chi_1$ below,
which is different from $\chi_{\rm LP}$.\cite{Fukuyama,KoshinoAndo,Gomez,Piechon}
On the other hand, for the two-dimensional honeycomb lattice
(or graphene),\cite{Safran,Safran2,Saito,KoshinoAndo,Gomez,Piechon}
which is a typical two-band tight-binding model,
it was shown that the orbital susceptibility based on the Peierls phase
(eq.~(\ref{PeierlsPh})) is equal to neither
$\chi_1$ nor $\chi_{\rm LP}$.\cite{KoshinoAndo,Gomez,Piechon}
From these results, it was claimed on the one hand that the formula (\ref{FukuyamaF}) cannot be applied
to tight-binding models,\cite{Piechon} and
on the other hand that there are some \lq\lq correction terms'' to the exact formula
(\ref{FukuyamaF}),\cite{KoshinoAndo,Gomez} both of which are of course unjustified.
As shown in the present paper, this confusion originates from misuse
of the exact formula (\ref{FukuyamaF}).
In this paper, starting from the exact one-line formula (\ref{FukuyamaF})
and rewriting it in terms of Bloch wave functions,
we derive a new and exact formula for the orbital susceptibility
in a way different from that of HLSS.
As shown later explicitly, the new formula is equivalent to the previous
results.\cite{HS1,HS2,HS3}
However, it is simpler and contains only four contributions:
(1) Landau-Peierls susceptibility, $\chi_{\rm LP}$, (2) interband contribution, $\chi_{\rm inter}$,
(3) Fermi surface contribution, $\chi_{\rm FS}$, and (4) contribution from occupied states, $\chi_{\rm occ}$.
Except for $\chi_{\rm LP}$, the other three contributions involve
the crystal-momentum derivatives of Bloch wave functions.
The physical meaning of each term is discussed.
In the atomic limit, $\chi_{\rm inter}$ is equal to the Van Vleck susceptibility
and $\chi_{\rm occ}$ is equal to the atomic diamagnetism (or contributions from core-level electrons).
$\chi_{\rm FS}$ is a newly found contribution proportional to $f'(\Eell)$.
Then, we apply the present formula to a model of linear combinations of
atomic orbitals.
We show that the orbital susceptibility can be calculated systematically by treating the
overlap integrals between atomic orbitals as a perturbation from the atomic limit.
In this way, the itinerant features of Bloch electrons in solids are clarified for the first time.
In most previous studies, the atomic diamagnetism and the Van Vleck contribution are
treated separately from $\chi_{\rm LP}$.
In contrast, the present exact formula contains all the contributions on an equal footing.
Furthermore, we find that $\chi_{\rm occ}$ contains contributions not only from the
core-level electrons (known as atomic diamagnetism),
but also from the occupied states in the partially-filled band.
This contribution has not been recognized before.
As mentioned above, when we restrict the band indices of the
Green's functions in (\ref{FukuyamaF}) to a single band, we do not obtain $\chi_{\rm LP}$.
In this paper, we show that the sum of several contributions in (\ref{FukuyamaF})
gives $\chi_{\rm LP}$.
In these contributions, the $f$-sum rule, which involves a summation over the other bands,
plays an important role.
This means that the band indices of the Green's functions in (\ref{FukuyamaF})
should not be restricted to a single band even when one considers a
single-band tight-binding model.
While preparing this paper, we noticed that Gao {\it et al.}\cite{Gao}
studied orbital magnetism in terms of the Berry phase using the
wave-packet approximation.
Their main interest is in the case with broken time-reversal
symmetry,\cite{Niu,Thon1,Niu2,Thon2} which is not the subject of the
present paper.
However, we can compare our results with theirs in the case where
time-reversal symmetry is not broken.
We find that their results are almost equivalent to ours, except for one
term with a different prefactor,
possibly due to the wave-packet approximation they used.
In section 2, we derive a new formula for the orbital susceptibility in the Bloch
representation.
Our main results are summarized in eqs.~(\ref{FinalChi})-(\ref{ChiC}),
where the four contributions are identified.
In section 3, we apply the obtained formula to a model of linear combinations
of atomic orbitals.
Section 4 is devoted to discussions and future problems.
\section{Orbital susceptibility in terms of Bloch wave functions}
\subsection{Bloch wave functions and current operator}
In order to explore the implications of eq.~(\ref{FukuyamaF}) in the
Bloch representation, we first introduce some essential ingredients.
The thermal Green's function for the $\ell$-th band in (\ref{FukuyamaF})
is simply given by
\begin{equation}
{\cal G}_\ell = \frac{1}{i\varepsilon_n - \Eell({\bm k})},
\end{equation}
where $\varepsilon_n = (2n+1)\pi k_{\rm B}T$ is the fermionic Matsubara frequency and $\Eell({\bm k})$ is the Bloch band energy;
$\ell$ denotes the band index, and the wave vector ${\bm k}$ lies within the first
Brillouin zone.
In order to obtain the explicit form of the current operator $\gamma_\mu$ in (\ref{FukuyamaF}),
we need the Bloch wave functions in the periodic potential $V({\bm r})$.
From Bloch's theorem, the eigenfunctions of the Hamiltonian are given by
\begin{equation}
e^{i{\bm k}\cdot {\bm r}} \Uell({\bm r}),
\label{Bloch}
\end{equation}
where $\Uell({\bm r})$ is a periodic function with the same
period as $V({\bm r})$ and satisfies the equation
\begin{equation}
H_{\bm k} \Uell({\bm r}) = \Eell({\bm k}) \Uell({\bm r}),
\label{UellEq}
\end{equation}
with
\begin{equation}
H_{\bm k} = \frac{\hbar^2 k^2}{2m} - \frac{i\hbar^2}{m} {\bm k}\cdot {\bm \nabla}
- \frac{\hbar^2}{2m} {\bm \nabla}^2 + V({\bm r}).
\label{HamiltonianK}
\end{equation}
Here ${\bm k}\cdot {\bm \nabla}$ indicates the inner product between
${\bm k}$ and ${\bm \nabla}$.
For simplicity, we assume a centrosymmetric potential, $V(-{\bm r})=V({\bm r})$.
In this case we can choose $u^\dagger_{\ell {\bm k}}({\bm r}) = u_{\ell {\bm k}}(-{\bm r})$\cite{HS3}
where $u^\dagger_{\ell {\bm k}}({\bm r})$ is the complex conjugate of
$u_{\ell {\bm k}}({\bm r})$.
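As a concrete illustration of eqs.~(\ref{UellEq}) and (\ref{HamiltonianK}) (a minimal numerical sketch under our own assumptions, with $\hbar=m=1$ and an illustrative potential strength), $H_{\bm k}$ can be diagonalized in a plane-wave basis: the kinetic part is diagonal with entries $(k+G)^2/2$, and the centrosymmetric potential $V(x)=2V_0\cos(2\pi x/a)$ couples plane waves whose wave numbers differ by $\pm 2\pi/a$:

```python
import numpy as np

def bloch_bands(k, V0=0.3, a=1.0, n_g=7):
    """Bloch band energies E_l(k) for V(x) = 2*V0*cos(2*pi*x/a),
    obtained by diagonalizing H_k in a plane-wave basis (hbar = m = 1)."""
    G = 2 * np.pi / a * np.arange(-n_g, n_g + 1)   # reciprocal lattice vectors
    H = np.diag(0.5 * (k + G) ** 2)                # kinetic part of H_k
    # V couples plane waves differing by +-2*pi/a: V_{G-G'} = V0
    H = H + V0 * (np.eye(len(G), k=1) + np.eye(len(G), k=-1))
    return np.linalg.eigvalsh(H)                   # ascending eigenvalues
```

At the zone boundary $k=\pi/a$, the lowest gap is close to the nearly-free-electron value $2V_0$, and the centrosymmetric potential guarantees $\Eell(-k)=\Eell(k)$.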
For $\gamma_\mu$, we calculate the current operator
\begin{equation}
{\bm j}({\bm r}) = -\frac{ie\hbar }{2m} \sum_\alpha \left[ \psi_\alpha^\dagger ({\bm r})
{\bm \nabla} \psi_\alpha ({\bm r})
- {\bm \nabla}\psi_\alpha^\dagger ({\bm r}) \ \psi_\alpha ({\bm r}) \right].
\label{current_def}
\end{equation}
Substituting the expansion
\begin{equation}
\psi_\alpha ({\bm r}) = \sum_{\ell, {\bm k}} {\hat c}_{\ell {\bm k}\alpha}
e^{i{\bm k}\cdot {\bm r}} u_{\ell{\bm k}} ({\bm r}),
\label{PsiExpand}
\end{equation}
into eq.~(\ref{current_def}) and making the Fourier transform, we obtain
\begin{equation}
\begin{split}
{\bm j}_{{\bm q}=0}
&= \frac{e\hbar}{m} \sum_{\ell \ell' {\bm k} \alpha}
\left[ \int \Uell^\dagger \left( {\bm k} - i{\bm \nabla} \right) \Uellp d{\bm r} \right]
{\hat c}_{\ell {\bm k}\alpha}^\dagger {\hat c}_{\ell' {\bm k}\alpha} \cr
&= \frac{e}{\hbar} \sum_{\ell \ell' {\bm k} \alpha}
\left[ \int \Uell^\dagger \frac{\partial \Hk}{\partial {\bm k}} \Uellp d{\bm r} \right]
{\hat c}_{\ell {\bm k}\alpha}^\dagger {\hat c}_{\ell' {\bm k}\alpha},
\label{JFourier}
\end{split}
\end{equation}
where the definition of $\Hk$ in (\ref{HamiltonianK}) has been used.
Since $\gamma_\mu$ in (\ref{FukuyamaF}) is defined as a current operator
divided by $e/\hbar$,
the matrix element of $\gamma_\mu$ is given by (see Appendix A)
\begin{equation}
\left[ \gamma_\mu \right]_{\ell\ell'}
= \int \Uell^\dagger \frac{\partial \Hk}{\partial k_\mu } \Uellp d{\bm r}
= \frac{\partial \Eell({\bm k})}{\partial k_\mu } \delta_{\ell \ell'} + p_{\ell \ell' \mu},
\label{Jmatrixelement}
\end{equation}
with $p_{\ell \ell' \mu }$ being the off-diagonal matrix elements\cite{WilsonText,Fukuyama}
\begin{equation}
p_{\ell \ell' \mu } = (\Eellp ({\bm k}) - \Eell ({\bm k}) )
\int \Uell^\dagger \frac{\partial \Uellp}{\partial k_\mu } d{\bm r}.
\label{Joffdiagonal}
\end{equation}
Although the integral in (\ref{Joffdiagonal}) is sometimes called the
(interband) \lq\lq Berry connection'', terms of this kind have long been
familiar in the literature.\cite{Blount2,WilsonText,HS2}
[Note that the intraband \lq\lq Berry connection'' vanishes
for the present Hamiltonian with $V({\bm r}) = V(-{\bm r})$,
as shown in (\ref{AppBn2}).]
Here we have used the Fourier integral theorem\cite{LuttingerKohn}
for functions with the lattice periodicity.
Originally, the range of the real-space integrals on the right-hand sides of
eqs.~(\ref{JFourier}) and (\ref{Joffdiagonal}) is a single unit cell,\cite{LuttingerKohn}
i.e., $\frac{V}{\Omega}\int_\Omega \cdots d{\bm r}$, where
$V$ and $\Omega$ are the volumes of the whole system and of the unit cell, respectively.
However, the range of integral can be extended to the whole system size $V$
by using the periodicity of $\Uell ({\bm r})$,
i.e., $\frac{V}{\Omega}\int_\Omega \cdots d{\bm r}=\int_V \cdots d{\bm r}$.
In the following, the real-space integrals are defined in this way.
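The diagonal part of eq.~(\ref{Jmatrixelement}) is the Hellmann-Feynman relation, $\int \Uell^\dagger (\partial H_{\bm k}/\partial k_\mu) \Uell \, d{\bm r} = \partial\Eell/\partial k_\mu$. A quick numerical check (the two-band matrix model standing in for $H_{\bm k}$ and its parameters are our own illustrative choices, not from the original):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def Hk(k, Delta=0.4):
    """Toy 1D two-band Bloch Hamiltonian standing in for H_k (v = 1)."""
    return k * sx + Delta * sz

def velocity_diagonal(k, ell):
    """Expectation value <u_l| dH_k/dk |u_l>; dH_k/dk = sx for this model."""
    E, U = np.linalg.eigh(Hk(k))
    u = U[:, ell]
    return (u.conj() @ sx @ u).real

def dE_dk(k, ell, h=1e-5):
    """Finite-difference band velocity dE_l/dk."""
    return (np.linalg.eigvalsh(Hk(k + h))[ell]
            - np.linalg.eigvalsh(Hk(k - h))[ell]) / (2 * h)
```

Both quantities agree to finite-difference accuracy for each band, which is the content of the diagonal term in (\ref{Jmatrixelement}).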
\subsection{New formula for orbital susceptibility}
Using the above matrix elements of $\gamma_\mu$ and the thermal Green's functions,
we evaluate the formula (\ref{FukuyamaF}) in the Bloch representation.
Since each $\left[ \gamma_\mu \right]_{\ell\ell'}$ contains two terms,
sixteen terms appear in total.
Classifying by the band indices of four Green's functions in (\ref{FukuyamaF}), we obtain
\begin{equation}
\chi = \sum_{n=1}^7 \chi_n,
\end{equation}
with
\begin{equation}
\chi_1 = \frac{e^2}{\hbar^2 c^2} \MSum \sum_{\ell} \left( \Eellx \right)^2 \left( \Eelly \right)^2 {\cal G}_\ell^4,
\end{equation}
\begin{equation}
\chi_2 = \frac{2e^2}{\hbar^2 c^2} \MSum \sum_{\ell \ne \ell'} \Eellx \Eelly
p_{\ell\ell' x}p_{\ell' \ell y} \ {\cal G}_\ell^3 {\cal G}_{\ell '} + (x \leftrightarrow y),
\end{equation}
\begin{equation}
\chi_3 = \frac{e^2}{\hbar^2 c^2} \MSum \sum_{\ell \ne \ell'} \Eellx \Eellpx
p_{\ell\ell' y}p_{\ell' \ell y} \ {\cal G}_\ell^2 {\cal G}_{\ell '}^2 + (x \leftrightarrow y),
\end{equation}
\begin{equation}
\chi_4 = \frac{2e^2}{\hbar^2 c^2} \MSum \sum_{\ell \ell' \ell'' }^{'} \Eellx
p_{\ell\ell' y} p_{\ell' \ell'' x} p_{\ell'' \ell y} \ {\cal G}_\ell^2
{\cal G}_{\ell '} {\cal G}_{\ell ''} + (x \leftrightarrow y),
\end{equation}
\begin{equation}
\chi_5 = \frac{e^2}{\hbar^2 c^2} \MSum \sum_{\ell \ne \ell' }
p_{\ell\ell' x} p_{\ell' \ell y} p_{\ell \ell' x} p_{\ell' \ell y} \ {\cal G}_\ell^2 {\cal G}_{\ell '}^2,
\end{equation}
\begin{equation}
\chi_6 = \frac{e^2}{\hbar^2 c^2} \MSum \sum_{\ell \ell' \ell''}^{'}
p_{\ell\ell' x} p_{\ell' \ell y} p_{\ell \ell'' x} p_{\ell'' \ell y}
\ {\cal G}_\ell^2 {\cal G}_{\ell '} {\cal G}_{\ell ''} + (x \leftrightarrow y),
\end{equation}
\begin{equation}
\chi_7 = \frac{e^2}{\hbar^2 c^2} \MSum \sum_{\ell \ell' \ell'' \ell'''}^{'}
p_{\ell\ell' x} p_{\ell' \ell'' y} p_{\ell'' \ell''' x} p_{\ell''' \ell y}
\ {\cal G}_\ell {\cal G}_{\ell '} {\cal G}_{\ell ''} {\cal G}_{\ell '''},
\end{equation}
where $+(x \leftrightarrow y)$ denotes the term obtained by exchanging $x$ and $y$.
A schematic representation of the contributions
$\chi_1$-$\chi_7$ is shown in Fig.~\ref{Fig:01}.
The primed summation $\sum^{'}$ means that all the band indices
($\ell, \ell', \ell''$ or $\ell, \ell', \ell'', \ell'''$) are different from each other.
In the following we write $\Eell$ for $\Eell({\bm k})$ as long as no confusion arises.
\def\MC#1{\textcolor{blue}{#1}}
\def\MCC#1{\textcolor{red}{#1}}
\begin{figure}
\setlength{\unitlength}{1mm}
\centering\begin{picture}(80,140)
\thicklines
\put(7,130){\Large $\chi_1$} \put(0,130){(a)}
\put(20,130){\line(1,0){7}} \put(22,125){$\ell$} \put(30,130){\line(1,0){7}} \put(32,125){$\ell$}
\put(40,130){\line(1,0){7}} \put(42,125){$\ell$} \put(50,130){\line(1,0){7}} \put(52,125){$\ell$}
\put(27.25,129){\textcolor{blue}{$\otimes$}} \put(37.24,129){\textcolor{blue}{$\otimes$}}
\put(47.25,129){\textcolor{blue}{$\otimes$}} \put(57.24,129){\textcolor{blue}{$\otimes$}}
\put(26,125){\textcolor{blue}{$\frac{\partial \Eell}{\partial k_x}$}}
\put(7,110){\Large $\chi_2$} \put(0,115){(b)}
\put(20,110){\line(1,0){7}} \put(22,105){$\ell$} \put(30,116){\line(1,0){7}} \put(32,111){$\ell'$}
\put(40,110){\line(1,0){7}} \put(44,105){$\ell$} \put(50,110){\line(1,0){7}} \put(52,105){$\ell$}
\MC{\multiput(27,110)(1,2){4}{\circle{1}} \multiput(40,110)(-1,2){4}{\circle{1}}
\put(21.5,115){$p_{\ell \ell' x}$} \put(40.5,115){$p_{\ell' \ell y}$} }
\put(47.25,109){\textcolor{blue}{$\otimes$}} \put(57.24,109){\textcolor{blue}{$\otimes$}}
\put(7,90){\Large $\chi_3$} \put(0,95){(c)}
\put(20,90){\line(1,0){7}} \put(22,85){$\ell$} \put(30,96){\line(1,0){7}} \put(32,91){$\ell'$}
\put(40,96){\line(1,0){7}} \put(42,91){$\ell'$} \put(50,90){\line(1,0){7}} \put(52,85){$\ell$}
\MC{\multiput(27,90)(1,2){4}{\circle{1}} \multiput(50,90)(-1,2){4}{\circle{1}} \put(21.5,95){$p_{\ell \ell' y}$} \put(49.5,95){$p_{\ell' \ell y}$} }
\put(37.24,95){\textcolor{blue}{$\otimes$}} \put(57.24,89){\textcolor{blue}{$\otimes$}}
\put(7,70){\Large $\chi_4$} \put(0,75){(d)}
\put(20,70){\line(1,0){7}} \put(22,65){$\ell$} \put(30,76){\line(1,0){7}} \put(32,71){$\ell'$}
\put(40,80){\line(1,0){7}} \put(42,75){$\ell''$} \put(50,70){\line(1,0){7}} \put(52,65){$\ell$}
\MC{\multiput(27,70)(1,2){4}{\circle{1}} \multiput(50,70)(-0.6,2){6}{\circle{1}} \multiput(37,76)(1.8,2){3}{\circle{1}}
\put(21.5,75){$p_{\ell \ell' y}$} \put(49.5,77){$p_{\ell'' \ell y}$} \put(32,80){$p_{\ell' \ell'' x}$} }
\put(57.24,69){\textcolor{blue}{$\otimes$}}
\put(7,50){\Large $\chi_5$} \put(0,55){(e)}
\put(20,50){\line(1,0){7}} \put(22,46){$\ell$} \put(30,56){\line(1,0){7}} \put(32,51){$\ell'$}
\put(40,50){\line(1,0){7}} \put(45,46){$\ell$} \put(50,56){\line(1,0){7}} \put(52,51){$\ell'$}
\MC{\multiput(27,50)(1,2){4}{\circle{1}} \multiput(40,50)(-1,2){4}{\circle{1}} \put(21.5,55){$p_{\ell \ell' x}$} \put(39.5,55){$p_{\ell' \ell y}$}
\multiput(47,50)(1,2){4}{\circle{1}} \multiput(60,50)(-1,2){4}{\circle{1}} \put(44,58){$p_{\ell \ell' x}$} \put(59.5,55){$p_{\ell' \ell y}$} }
\put(7,25){\Large $\chi_6$} \put(0,33){(f)}
\put(20,25){\line(1,0){7}} \put(22,21){$\ell$} \put(30,31){\line(1,0){7}} \put(32,26){$\ell'$}
\put(40,25){\line(1,0){7}} \put(45,21){$\ell$} \put(50,35){\line(1,0){7}} \put(52,30){$\ell''$}
\MC{\multiput(27,25)(1,2){4}{\circle{1}} \multiput(40,25)(-1,2){4}{\circle{1}} \put(21.5,30){$p_{\ell \ell' x}$} \put(39.5,30){$p_{\ell' \ell y}$}
\multiput(47,25)(0.6,2){6}{\circle{1}} \multiput(60,25)(-0.6,2){6}{\circle{1}} \put(42,35){$p_{\ell \ell'' x}$} \put(59,35){$p_{\ell'' \ell y}$} }
\put(7,2){\Large $\chi_7$} \put(0,10){(g)}
\put(20,2){\line(1,0){7}} \put(22,-2){$\ell$} \put(30,8){\line(1,0){7}} \put(32,3){$\ell'$}
\put(50,5){\line(1,0){7}} \put(52,0){$\ell'''$} \put(40,12){\line(1,0){7}} \put(42,7){$\ell''$}
\MC{\multiput(27,2)(1,2){4}{\circle{1}} \multiput(37,8)(1.8,2){3}{\circle{1}} \put(21.5,7){$p_{\ell \ell' x}$} \put(32,12){$p_{\ell' \ell'' y}$}
\multiput(50,5)(-1,2){4}{\circle{1}} \multiput(60,2)(-1.5,1.5){3}{\circle{1}} \put(50,10){$p_{\ell'' \ell''' x}$} \put(59,5){$p_{\ell''' \ell y}$} }
\MCC{\put(24,105){\line(1,0){18}} \put(24,120){\line(1,0){18}} \put(24,105){\line(0,1){15}} \put(42,105){\line(0,1){15}}
\put(24,45){\line(1,0){18}} \put(24,60){\line(1,0){18}} \put(24,45){\line(0,1){15}} \put(42,45){\line(0,1){15}}
\put(43,45){\line(1,0){18}} \put(43,60){\line(1,0){18}} \put(43,45){\line(0,1){15}} \put(61,45){\line(0,1){15}}
\put(24,20){\line(1,0){18}} \put(24,38){\line(1,0){18}} \put(24,20){\line(0,1){18}} \put(42,20){\line(0,1){18}}
\put(43,20){\line(1,0){18}} \put(43,38){\line(1,0){18}} \put(43,20){\line(0,1){18}} \put(61,20){\line(0,1){18}}
\put(25,101.5){$f$-sum rule} \put(25,41.5){$f$-sum rule} \put(44,41.5){$f$-sum rule}
\put(25,16.5){$f$-sum rule} \put(44,16.5){$f$-sum rule}
}
\end{picture}
\caption{(Color online) Schematic representation of the contributions to $\chi_1$-$\chi_7$:
The solid lines with band indices $\ell, \ell'$, etc., represent
the Green's functions; the height of each line represents the energy level $\Eell$.
The array of blue circles connecting two lines represents an off-diagonal matrix element of
$\gamma_\mu$, i.e., $p_{\ell\ell' \mu}$.
The symbol $\otimes$ between two solid lines represents the diagonal component of
$\gamma_\mu$, i.e., $\partial \Eell/\partial k_\mu$.
The right-hand side of each diagram is connected to its left-hand side because of the trace in (\ref{FukuyamaF}).
The red squares indicate the part of the diagrams which can be expressed by
the $f$-sum rule in eq.~(\ref{fSumRule}).
}
\label{Fig:01}
\end{figure}
The first contribution, $\chi_1$, is purely intraband, since only the intraband matrix
elements of $\gamma_\mu$ are involved.
After taking the summation over Matsubara frequency $n$ and making
integrations by parts, we obtain\cite{Fukuyama}
\begin{equation}
\begin{split}
\chi_1 &= \frac{e^2}{6\hbar^2 c^2} \sum_{\ell, {\bm k}}
\left( \Eellx \right)^2 \left( \Eelly \right)^2 f'''(\Eell) \cr
&= \frac{e^2}{6\hbar^2 c^2} \sum_{\ell, {\bm k}} \biggl[
\frac{\partial^2 \varepsilon_\ell}{\partial k_x^2}
\frac{\partial^2 \varepsilon_\ell}{\partial k_y^2} + 2 \left(
\frac{\partial^2 \varepsilon_\ell}{\partial k_x \partial k_y} \right)^2 \cr
&\qquad\ + \frac{3}{2} \left(
\Eellx \frac{\partial^3 \varepsilon_\ell}{\partial k_x \partial k_y^2} +
\Eelly \frac{\partial^3 \varepsilon_\ell}{\partial k_x^2 \partial k_y}
\right) \biggr] f'(\varepsilon_\ell),
\label{Chi1}
\end{split}
\end{equation}
where $f(\Eell)$ is the Fermi distribution function.
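The Matsubara summation behind eq.~(\ref{Chi1}) is $k_{\rm B}T\sum_n {\cal G}_\ell^4 = f'''(\Eell)/6$, which follows from $k_{\rm B}T\sum_n {\cal G}_\ell = f(\Eell)-1/2$ by differentiating three times with respect to $\Eell$. This identity is easily checked numerically (an illustrative sketch with $k_{\rm B}=1$ and arbitrary test values of $E$ and $T$):

```python
import numpy as np

def matsubara_G4_sum(E, T, n_max=5000):
    """T * sum_n G^4 with G = 1/(i*eps_n - E), eps_n = (2n+1)*pi*T."""
    eps = (2 * np.arange(-n_max, n_max) + 1) * np.pi * T
    return (T * np.sum(1.0 / (1j * eps - E) ** 4)).real

def fermi_d3(E, T):
    """Third derivative f'''(E) of the Fermi function, via the chain
    f' = -f(1-f)/T, f'' = -f'(1-2f)/T, f''' = -(f''(1-2f) - 2 f'^2)/T."""
    f = 1.0 / (np.exp(E / T) + 1.0)
    f1 = -f * (1.0 - f) / T
    f2 = -f1 * (1.0 - 2.0 * f) / T
    return -(f2 * (1.0 - 2.0 * f) - 2.0 * f1 ** 2) / T
```

The symmetric sum over $\pm\varepsilon_n$ converges as $\varepsilon_n^{-4}$, so a modest cutoff already reproduces $f'''(\Eell)/6$ to high accuracy.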
This $\chi_1$ is similar to the Landau-Peierls susceptibility $\chi_{\rm LP}$ in (\ref{LandauPeierls}),
but there are two differences:
the numerical prefactor of the second term of $\chi_1$ differs from that in $\chi_{\rm LP}$, and
the last term of $\chi_1$ does not appear in $\chi_{\rm LP}$ at all.
We will show shortly that $\chi_{\rm LP}$ is recovered by adding further
contributions from $\chi_2$, $\chi_5$ and $\chi_6$.
As discussed in Section 1, this means that one should not keep only $\chi_1$ when discussing the
orbital susceptibility in a single-band model.
Next let us consider $\chi_2$. The summation over $n$ in $\chi_2$ gives
\begin{equation}
\begin{split}
\chi_2 &= \frac{2e^2}{\hbar^2 c^2} \sum_{\ell \ne \ell', {\bm k}}
\Eellx \Eelly p_{\ell\ell' x}p_{\ell' \ell y} \cr
&\times \left\{ \frac{1}{2} \frac{ f''(\Eell)}{\Eell - \Eellp} - \frac{ f'(\Eell)}{(\Eell - \Eellp)^2}
+ \frac{ f(\Eell) - f(\Eellp) }{(\Eell - \Eellp)^3} \right\} + (x \leftrightarrow y) \cr
&\equiv \MyChi{2}{1} + \MyChi{2}{2} + \MyChi{2}{3},
\label{chi2}
\end{split}
\end{equation}
where the $j$-th term in $\chi_n$ is denoted as $\MyChi{n}{j}$.
For $\MyChi{2}{1}$,
the summation over $\ell'$ can be carried out and we obtain
\begin{equation}
\begin{split}
\MyChi{2}{1} = - \frac{e^2}{2\hbar^2 c^2} &\sum_{\ell, {\bm k}}
f'(\Eell) \biggl\{
\left( \frac{\partial^2 \Eell}{\partial k_x \partial k_y} \right)^2
+ \Eellx \frac{\partial^3 \Eell}{\partial k_x \partial k_y^2} \biggr\} \cr
&\qquad \qquad + (x \leftrightarrow y),
\label{Chi2-1}
\end{split}
\end{equation}
where we have used the $f$-sum rule\cite{Wilson,Fukuyama}
\begin{equation}
\sum_{\ell' \ne \ell} \frac{p_{\ell\ell' \mu} p_{\ell' \ell \nu}}{\Eell-\Eellp}
= \frac{1}{2}\left( \frac{\partial^2 \Eell}{\partial k_\mu \partial k_\nu} -
\frac{\hbar^2}{m} \delta_{\mu\nu} \right),
\label{fSumRule}
\end{equation}
with $\mu=x$, $\nu=y$, together with integration by parts.
This $\ell'$-summation is schematically shown in Fig.~\ref{Fig:01}(b),
in which the red square indicates the part of the diagram representing the $f$-sum rule.
The $f$-sum rule in eq.~(\ref{fSumRule}) results from the completeness
property of $\Uellp$ [see eq.~(\ref{fSumRuleInA}) in Appendix A].
(Various formulas used in the present paper are listed in Appendix A.)
One may call the left-hand side of (\ref{fSumRule}) \lq\lq interband'', since it
contains the off-diagonal matrix elements of the current operator, $p_{\ell\ell' \mu}$.
On the other hand, the right-hand side of (\ref{fSumRule}) is expressed in terms of a
single-band property, $\Eell$,
and as a result $\MyChi{2}{1}$ looks like an \lq\lq intraband'' contribution.
This indicates that the naive classification into \lq\lq intraband'' and \lq\lq interband''
contributions does not apply.
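After the $(x\leftrightarrow y)$ symmetrization, the $f$-sum rule (\ref{fSumRule}) is the second-order perturbation-theory statement
$\partial^2\Eell/\partial k_x\partial k_y = [\partial^2 H_{\bm k}/\partial k_x\partial k_y]_{\ell\ell} + 2\,{\rm Re}\sum_{\ell'\neq\ell} p_{\ell\ell' x}p_{\ell'\ell y}/(\Eell-\Eellp)$,
with the matrix element of $\partial^2 H_{\bm k}$ playing the role of $\hbar^2\delta_{\mu\nu}/m$. The sketch below checks this on a gapped Dirac toy model of our own choosing (an illustrative assumption, for which $\partial^2 H/\partial k_x\partial k_y = 0$), comparing the interband sum against a finite-difference mixed band curvature:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, Delta=0.4):
    """Gapped 2D Dirac toy model (v = 1); d^2H/dkx dky = 0."""
    return kx * sx + ky * sy + Delta * sz

def interband_sum(kx, ky, ell):
    """2 Re sum_{l'!=l} p_x[l,l'] p_y[l',l] / (E_l - E_l')."""
    E, U = np.linalg.eigh(H(kx, ky))
    vx = U.conj().T @ sx @ U      # dH/dkx in the band basis
    vy = U.conj().T @ sy @ U      # dH/dky in the band basis
    lp = 1 - ell                  # the only other band of the 2x2 model
    return 2.0 * (vx[ell, lp] * vy[lp, ell]).real / (E[ell] - E[lp])

def d2E_mixed(kx, ky, ell, h=1e-3):
    """Finite-difference mixed derivative d^2 E_l / dkx dky."""
    Ef = lambda a, b: np.linalg.eigvalsh(H(a, b))[ell]
    return (Ef(kx + h, ky + h) - Ef(kx + h, ky - h)
            - Ef(kx - h, ky + h) + Ef(kx - h, ky - h)) / (4.0 * h * h)
```

Since $\partial^2 H/\partial k_x\partial k_y$ vanishes for this linear-in-${\bm k}$ model, the interband sum alone must reproduce the mixed band curvature, for both bands.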
For $\MyChi{2}{2}$, the summation
over $\ell'$ can also be carried out, and we obtain
\begin{equation}
\begin{split}
\MyChi{2}{2} = - \frac{2e^2}{\hbar^2 c^2} &\sum_{\ell, {\bm k}}
f'(\Eell) \Eellx \Eelly
\int \frac{\partial \Uell^\dagger}{\partial k_x}
\frac{\partial \Uell}{\partial k_y} d{\bm r}
+ (x \leftrightarrow y),
\label{Chi2-2}
\end{split}
\end{equation}
where we have used
\begin{equation}
\sum_{\ell' \ne \ell} \frac{p_{\ell\ell' x} p_{\ell' \ell y}}{(\Eell-\Eellp)^2}
= \int \frac{\partial \Uell^\dagger}{\partial k_x} \frac{\partial \Uell}{\partial k_y} d{\bm r}.
\label{fSumRule2}
\end{equation}
(See (\ref{fSumRule2InA}).)
The last term of $\chi_2$ is given by
\begin{equation}
\begin{split}
\MyChi{2}{3} &= -\frac{2e^2}{\hbar^2 c^2} \sum_{\ell \ne \ell', {\bm k}} \Eellx \Eelly
\frac{ f(\Eell) - f(\Eellp)}{\Eell - \Eellp} \cr
&\times \int \Uell^\dagger \Uellpx d{\bm r} \int \Uellp^\dagger \Uelly d{\bm r}
+ (x \leftrightarrow y).
\label{Chi2-3}
\end{split}
\end{equation}
Here the $\ell'$-summation cannot be carried out because of the energy
denominator $1/(\Eell - \Eellp)$,
which is the same as that appearing in second-order perturbation theory
for interband processes.
Features similar to those of $\chi_2$ are present in $\chi_5$ and $\chi_6$, as seen in
Fig.~\ref{Fig:01}: the terms with $\ell'=\ell''$ excluded from $\chi_6$ are
supplied by $\chi_5$, leading to independent summations over
$\ell'$ and $\ell''$.
As a result, using the $f$-sum rule, we obtain
\begin{equation}
\begin{split}
&\MyChi{5}{1} + \MyChi{6}{1} =
\frac{e^2}{4\hbar^2 c^2} \sum_{\ell, {\bm k}} f'(\Eell)
\left( \frac{\partial^2 \Eell}{\partial k_x \partial k_y} \right)^2 + (x \leftrightarrow y).
\label{Chi56X}
\end{split}
\end{equation}
Detailed calculations are shown in Appendix B.
[For the definitions of $\MyChi{5}{1}$ and $\MyChi{6}{1}$, see (\ref{AppD1})
and (\ref{AppD2}).]
Now we can see that the sum of $\chi_1$, $\MyChi{2}{1}$ and
$\MyChi{5}{1} + \MyChi{6}{1}$ becomes
\begin{equation}
\begin{split}
&\frac{e^2}{6\hbar^2 c^2} \sum_{\ell, {\bm k}} f'(\Eell)
\biggl\{ \frac{\partial^2 \Eell}{\partial k_x^2} \frac{\partial^2 \Eell}{\partial k_y^2}
-\left( \frac{\partial^2 \Eell}{\partial k_x \partial k_y} \right)^2 \cr
&\qquad \qquad \quad -
\frac{3}{2} \left(
\Eellx \frac{\partial^3 \varepsilon_\ell}{\partial k_x \partial k_y^2} +
\Eelly \frac{\partial^3 \varepsilon_\ell}{\partial k_x^2 \partial k_y}
\right) \biggr\}.
\end{split}
\end{equation}
It is seen that the first two terms give $\chi_{\rm LP}$, while the last term
can be combined with other contributions after the transformation
\begin{equation}
\begin{split}
\frac{\partial^3 \Eell}{\partial k_x \partial k_y^2}
&= 2\int \frac{\partial \Uell^\dagger}{\partial k_y} \left(
\frac{\partial H_{\bm k}}{\partial k_x} - \frac{\partial \Eell}{\partial k_x} \right)
\frac{\partial \Uell}{\partial k_y} d{\bm r} \cr
&+ 4\int \frac{\partial \Uell^\dagger}{\partial k_x} \left(
\frac{\partial H_{\bm k}}{\partial k_y} - \frac{\partial \Eell}{\partial k_y} \right)
\frac{\partial \Uell}{\partial k_y} d{\bm r},
\label{E3Formula}
\end{split}
\end{equation}
which is obtained by putting $\mu\nu\tau$ as $xyy$ in eq.~(\ref{AppBe3_2}) in Appendix A.
The other terms in $\chi_3$-$\chi_7$ are calculated similarly; the details are
given in Appendix B.
We obtain the total susceptibility $\chi$ as follows, which is as exact
as eq.~(\ref{FukuyamaF}):
\begin{equation}
\chi = \chi_{\rm LP} + \chi_{\rm inter} + \chi_{\rm FS} + \chi_{\rm occ},
\label{FinalChi}
\end{equation}
with
\begin{equation}
\chi_{\rm LP} = \frac{e^2}{6 \hbar^2 c^2}
\sum_{\ell, {\bm k}} f'(\Eell)
\left\{ \frac{\partial^2 \Eell}{\partial k_x^2} \frac{\partial^2 \Eell}{\partial k_y^2}
-\left( \frac{\partial^2 \Eell}{\partial k_x \partial k_y} \right)^2 \right\},
\label{ChiLP}
\end{equation}
\begin{equation}
\begin{split}
\chi_{\rm inter} &= -\frac{e^2}{\hbar^2 c^2} \sum_{\ell \ne \ell', {\bm k}} \frac{f(\Eell)}{\Eell - \Eellp}
\biggl| \int \frac{\partial \Uell^\dagger}{\partial k_x}
\left( \frac{\partial H_{\bm k}}{\partial k_y} + \frac{\partial \Eell}{\partial k_y} \right) \Uellp d{\bm r} \cr
&\qquad\qquad - \int \frac{\partial \Uell^\dagger}{\partial k_y}
\left( \frac{\partial H_{\bm k}}{\partial k_x} + \frac{\partial \Eell}{\partial k_x} \right) \Uellp
d{\bm r} \biggr|^2,
\label{ChiInter}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\chi_{\rm FS} &= \frac{e^2}{\hbar^2 c^2} \sum_{\ell, {\bm k}} f'(\Eell) \biggl\{
\Eellx\int \frac{\partial \Uell^\dagger}{\partial k_y}
\left( \frac{\partial H_{\bm k}}{\partial k_x} + \frac{\partial \Eell}{\partial k_x} \right)
\Uelly d{\bm r} \cr
&\qquad
-\Eellx\int \frac{\partial \Uell^\dagger}{\partial k_x}
\left( \frac{\partial H_{\bm k}}{\partial k_y} + \frac{\partial \Eell}{\partial k_y} \right)
\Uelly d{\bm r}\biggr\} + (x\leftrightarrow y),
\label{ChiFS}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\chi_{\rm occ} &= -\frac{e^2}{2\hbar^2 c^2}
\sum_{\ell, {\bm k}} f(\Eell) \biggl\{
\frac{\partial^2 \Eell}{\partial k_x \partial k_y}
\int \frac{\partial \Uell^\dagger}{\partial k_x} \frac{\partial \Uell}{\partial k_y} d{\bm r} \cr
&\qquad +\left( \frac{\hbar^2}{m} - \frac{\partial^2 \Eell}{\partial k_x^2} \right)
\int \frac{\partial \Uell^\dagger}{\partial k_y} \frac{\partial \Uell}{\partial k_y} d{\bm r}
\biggr\} + (x\leftrightarrow y).
\label{ChiC}
\end{split}
\end{equation}
A schematic representation of these contributions is shown in Fig.~\ref{Fig:02}.
It is worth noting that the separation into $\chi_{\rm FS}$ and $\chi_{\rm occ}$
is not unique.
For example, part of $\chi_{\rm occ}$ can be rewritten into a form with
$f'(\Eell)$ by integration by parts.
In the above expression, we have chosen the simplest separation.
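As a numerical illustration of eq.~(\ref{ChiLP}) (a sketch under our own assumptions: a square-lattice tight-binding band $\varepsilon({\bm k})=-2t(\cos k_x+\cos k_y)$, units $e=\hbar=c=k_{\rm B}=1$, and the $k$-sum normalized per unit cell; none of these choices are from the original), $\chi_{\rm LP}$ can be evaluated directly:

```python
import numpy as np

def chi_LP(mu, t=1.0, T=0.1, nk=200):
    """Landau-Peierls susceptibility for eps(k) = -2t(cos kx + cos ky),
    with the k-sum normalized per unit cell; units e = hbar = c = k_B = 1."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    kx, ky = np.meshgrid(k, k)
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
    exx = 2.0 * t * np.cos(kx)      # d^2 eps / dkx^2
    eyy = 2.0 * t * np.cos(ky)      # d^2 eps / dky^2
    exy = 0.0                       # mixed derivative vanishes for this band
    fp = -1.0 / (4.0 * T * np.cosh((eps - mu) / (2.0 * T)) ** 2)  # f'(eps)
    return np.mean(fp * (exx * eyy - exy ** 2)) / 6.0
```

Near the band bottom both curvatures are positive and $\chi_{\rm LP}<0$ (Landau-type diamagnetism), while near half filling the saddle-point regions of the band make $\chi_{\rm LP}>0$, illustrating that the sign of $\chi_{\rm LP}$ is controlled by the band curvatures at the Fermi level.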
\subsection{Interpretation of each term}
The first contribution, $\chi_{\rm LP}$, is the Landau-Peierls formula.
As shown above, $\chi_{\rm LP}$ arises from
$\chi_1$, $\MyChi{2}{1}$ and $\MyChi{5}{1}+\MyChi{6}{1}$,
in which the $f$-sum rule plays an important role.
The second contribution, $\chi_{\rm inter}$, is a purely interband contribution;
among the four contributions, only this term involves two bands, $\Eell$ and $\Eellp$.
In the next section, we show that $\chi_{\rm inter}$ reduces to the Van Vleck
susceptibility in the atomic limit.
Note that the energy denominator of $\chi_{\rm inter}$ is the same as
that of the $f$-sum rule (\ref{fSumRule}).
However, there is a clear difference.
Let us consider a single-band case in which the energy differences between the
bands are quite large, i.e.,
$\Delta E = {\rm min} |\Eell ({\bm k}) - \Eellp({\bm k})|$ satisfies
$\Delta E \gg W$, with $W$ being the width of the $\ell$-th band.
Then, the absolute value of $\chi_{\rm inter}$ is less than
\begin{equation}
\begin{split}
&\sum_{\ell \ne \ell', {\bm k} } \frac{f(\Eell)}{\Delta E}
\biggl| \int \frac{\partial \Uell^\dagger}{\partial k_x}
\left( \frac{\partial H_{\bm k}}{\partial k_y} + \frac{\partial \Eell}{\partial k_y} \right)
\Uellp d{\bm r} \cr
&\qquad\qquad\qquad
- \int \frac{\partial \Uell^\dagger}{\partial k_y}
\left( \frac{\partial H_{\bm k}}{\partial k_x} + \frac{\partial \Eell}{\partial k_x}
\right) \Uellp d{\bm r} \biggr|^2.
\end{split}
\end{equation}
Then the $\ell'$ summation can be carried out using the completeness property of $\Uellp$.
For example, a typical term can be written as
\begin{equation}
\begin{split}
&\sum_{\ell, {\bm k} } \frac{f(\Eell)}{\Delta E}
\int \frac{\partial \Uell^\dagger}{\partial k_x}
\left( \frac{\partial H_{\bm k}}{\partial k_y} + \frac{\partial \Eell}{\partial k_y} \right)
\left( \frac{\partial H_{\bm k}}{\partial k_y} + \frac{\partial \Eell}{\partial k_y} \right)
\Uellx d{\bm r},
\end{split}
\end{equation}
which becomes arbitrarily small when $\Delta E$ is large.
This is in sharp contrast to the $f$-sum rule (\ref{fSumRule}), whose
right-hand side remains finite even when $\Delta E$ is large.
We call the third contribution in (\ref{FinalChi}) $\chi_{\rm FS}$
(FS stands for \lq\lq Fermi surface''), since it is proportional to $f'(\Eell)$.
This is a newly found contribution, whose physical meaning is not yet
clear. However, the factor
\begin{equation}
\frac{\partial H_{\bm k}}{\partial k_\mu} + \frac{\partial \Eell}{\partial k_\mu},
\label{CommonFac}
\end{equation}
in the integral in $\chi_{\rm FS}$ is common with $\chi_{\rm inter}$,
indicating some close relationship between $\chi_{\rm FS}$ and
$\chi_{\rm inter}$.
The fourth contribution, $\chi_{\rm occ}$, comes from
the occupied states (\lq\lq occ'' stands for occupied states).
As shown in the next section, $\chi_{\rm occ}$ reduces to the atomic diamagnetism
in the atomic limit.
Furthermore, we show that $\chi_{\rm occ}$ contains contributions not only from the
core-level electrons, but also from the occupied states in the partially-filled band
(see the right-hand side of Fig.~\ref{Fig:02}).
This contribution has not been recognized before.
\begin{figure}
\setlength{\unitlength}{1mm}
\centering
\begin{picture}(80,25)
\thicklines
\put(7,30){\ }
\end{picture}
\setlength{\fboxrule}{1pt}
\colorbox{white}{\begin{picture}(27,10)(0,0) \end{picture}}
\colorbox{yellow}{\begin{picture}(27,3)(0,0) \end{picture}}
\colorbox{white}{\begin{picture}(27,13)(0,0) \end{picture}}
\colorbox{yellow}{\begin{picture}(27,5)(0,0) \end{picture}}
\begin{picture}(80,1)
\thicklines
\put(71,23){\large $\chi_{\rm LP}, \chi_{\rm FS}$} \put(39,35){\large $\chi_{\rm inter}$}
\put(69.5,11.5){\textcolor{red}{$\left.
\begin{tabular}{@{}l@{}}
\ \\ \ \\ \ \\ \ \\ \
\end{tabular} \right\}$}}
\put(72,13){\large $\chi_{\rm occ}$}
\put(5,35){\large $\chi^{({\rm Van\ Vleck})}$}
\put(4.5,12){\textcolor{red}{$\left\{
\begin{tabular}{@{}l@{}}
\ \\ \ \\ \ \\ \
\end{tabular} \right.$
}}
\put(6,12){\large $\chi^{({\rm atomic\ dia.})}$}
\put(40.5,10){\textcolor{blue}{\line(1,0){28.8}}} \put(40.5,3){\textcolor{blue}{\line(1,0){28.8}}}
\put(40.5,3){\textcolor{blue}{\line(0,1){7}}} \put(69.3,3){\textcolor{blue}{\line(0,1){7}}}
\put(40.5,41){\textcolor{blue}{\line(0,1){7}}} \put(69.3,41){\textcolor{blue}{\line(0,1){7}}}
\put(40.5,41){\textcolor{blue}{\line(1,0){28.8}}} \put(40.5,48){\textcolor{blue}{\line(1,0){28.8}}}
\put(40.5,23){\textcolor{green}{\line(1,0){28.8}}} \put(40.5,23.9){\textcolor{green}{\line(1,0){28.8}}}
\put(40.5,23.3){\textcolor{green}{\line(1,0){28.8}}} \put(40.5,23.6){\textcolor{green}{\line(1,0){28.8}}}
\put(40.5,24.2){\textcolor{green}{\line(1,0){28.8}}}
\put(40.5,30){\textcolor{blue}{\line(1,0){28.8}}} \put(40.5,18.4){\textcolor{blue}{\line(1,0){28.8}}}
\put(40.5,18.4){\textcolor{blue}{\line(0,1){11.6}}} \put(69.3,18.4){\textcolor{blue}{\line(0,1){11.6}}}
\put(7,6){\textcolor{blue}{\line(1,0){25}}} \put(7,20){\textcolor{blue}{\line(1,0){25}}}
\put(7,44){\textcolor{blue}{\line(1,0){25}}}
\multiput(-1.4,23.5)(4,0){11} {\line(1,0){2}} \put(-2,21){$\mu$}
\textcolor{red}{\qbezier(5,21)(1.5,32)(6,43)
\qbezier(5,7)(-1,25)(6,43)
\qbezier(39.5,21)(35.5,32)(39.5,43)
\qbezier(39.5,7)(33.5,25)(39.5,43)
} \put(52,26){\large $\Eell$} \put(52,43.5){\large $\Eellp$} \put(10,-1){atomic limit} \put(46,-1){Bloch bands}
\end{picture}
\caption{(Color online) Schematic representation of energy levels and contributions in the orbital susceptibility
in the atomic limit (left-hand side) and in the Bloch bands (right-hand side).
The dashed line represents the position of the chemical potential $\mu$
and the colored parts of the squares indicate the occupied states.
Note that $\chi_{\rm occ}$ has a contribution not only from the core electrons but also from the
occupied states in the partially-filled band. }
\label{Fig:02}
\end{figure}
\subsection{Comparison with the result by HLSS}
For the orbital susceptibility in (\ref{FinalChi}), we have only four contributions,
which are simpler than those obtained previously by HLSS, i.e., eqs.~(4.3)-(4.6) in Ref.~\cite{HS3}.
In Appendix C, we prove the equivalence between the present result and HLSS.
Here we summarize the differences between the two.
(1) The result by HLSS is not symmetric with respect to the
exchange of $x$ and $y$.
This is because they used the Landau gauge, ${\bm A}=(-Hy, 0, 0)$.
On the other hand, the present formula is symmetric with respect to $x$ and $y$
because we have used the gauge-invariant formalism (\ref{FukuyamaF}).
In order to prove the equivalence between our result and that by HLSS,
we have to symmetrize the HLSS result. (See Appendix C for details.)
(2) Among the four contributions in the present formula,
$\chi_{\rm LP}$ is determined solely from the energy dispersion $\Eell({\bm k})$.
The other three contributions involve the ${\bm k}$-derivatives of wave functions.
In contrast, HLSS's result contains a term
\begin{equation}
\frac{e^2}{6\hbar^2 c^2}
\sum_{\ell, {\bm k}} f'(\Eell)
\frac{3}{2} \left( \Eellx \frac{\partial^3 \Eell}{\partial k_x \partial k_y^2}
+ \Eelly \frac{\partial^3 \Eell}{\partial k_x^2 \partial k_y} \right).
\label{HSyobun}
\end{equation}
(See eq.~(\ref{HSChi}).)
As shown above in eq.~(\ref{E3Formula}), this term can be rewritten
in terms of $\Uell$ and has been included in $\chi_{\rm FS}$ in our formalism.
It is important to use (\ref{E3Formula}) in order to simplify the final expression.
Note that, in contrast to (\ref{HSyobun}), $\chi_{\rm LP}$ cannot be
rewritten in terms of the $\Uell$'s.
(3) The result by HLSS contains several terms which share a
common factor of $1/(\Eell-\Eellp)$.
In our formula, these contributions are summed up into a single
term as $\chi_{\rm inter}$.
How the several terms in HLSS are summed up
into a single term is explained in Appendix C.
(4) As explained above, each contribution in the present formula has a rather
clear meaning compared with the previous ones.
For example, $\chi_{\rm inter}$ and $\chi_{\rm occ}$ are contributions naturally
connected to the Van Vleck susceptibility and atomic diamagnetism, respectively.
Note that, in the HLSS formula,
the contribution of the atomic diamagnetism is expressed as the first term of
$\chi_4^{({\rm HLSS})}$ in eq.~(\ref{HSChi}), i.e.,
\begin{equation}
- \frac{2e^2}{\hbar^2 c^2} \sum_{\ell, {\bm k}} \frac{\hbar^2}{m}
f(\Eell) \int \frac{\partial \Uell^\dagger}{\partial k_y} \Uelly d{\bm r}.
\label{HScoreterm}
\end{equation}
However, the numerical prefactor is different from that of the present result.
(See the term proportional to $\hbar^2/m$ in $\chi_{\rm occ}$).
As shown in the next section, this term reproduces the orbital susceptibility
from the core-electrons in the atomic limit.
$\chi_{\rm occ}$ in the present formula gives a correct prefactor,
while eq.~(\ref{HScoreterm}) by HLSS does not.
As shown in Appendix C, we find that the correct term
is obtained in the HLSS formula when we rewrite the interband contributions into a single
term as $\chi_{\rm inter}$.
[For details, see eq.~(\ref{AppE4new}).]
\subsection{Comparison with the results by Gao {\it et al}.}
Let us discuss here the recent work by Gao {\it et al.},\cite{Gao}
who studied orbital magnetism in terms of the Berry phase.
They are interested in the case with broken time-reversal
symmetry in which spontaneous orbital magnetization appears.\cite{Niu,Thon1,Niu2,Thon2}
In this case, there are several terms involving the Berry curvature
denoted as ${\bm \Omega}$.
In the present notation, its $z$-component is given by
\begin{equation}
\Omega_z = i\int \left(
\frac{\partial \Uell^\dagger}{\partial k_x} \frac{\partial \Uell}{\partial k_y}-
\frac{\partial \Uell^\dagger}{\partial k_y} \frac{\partial \Uell}{\partial k_x}
\right) d{\bm r}.
\end{equation}
However, in our case with a centrosymmetric potential,
$\Omega_z$ vanishes since there is a relation
$\Uell^\dagger(-{\bm r}) = \Uell({\bm r})$.
As a result, we do not have the contributions coming from the Berry curvature.
Nevertheless, we can compare our results with theirs in the case where
the time-reversal symmetry is not broken.
Details of calculations are shown in Appendix D.
By using the completeness property of $\Uell$, we can show that their results
are almost equivalent to ours except for the coefficient of $\chi_{\rm FS}$.
We think that this difference is due to the wave-packet approximation\cite{Niu2}
used in their formalism.
Nevertheless, their formula for the orbital susceptibility
based on the wave-packet approximation is fairly accurate.
\section{Band effect from atomic limit}
The formula obtained in eqs.~(\ref{ChiLP})-(\ref{ChiC}) is exact.
However, in order to calculate each contribution explicitly, it is necessary to specify
the functional form of $\Uell({\bm r})$.
For example, $\Uell({\bm r})$ can be obtained in general from the first-principle band calculation.
In this paper, however, we will study each contribution from the atomic limit
to see possible band effects based on the linear combination of atomic orbitals (LCAO).
In the atomic limit, it is found that
the diamagnetic susceptibility from the core electrons and the Van Vleck susceptibility are
the only contributions to $\chi$.
Then, $\chi$ is estimated
by treating the overlap integrals between atomic orbitals as a perturbation.
We will show that there appear several contributions to $\chi$
in addition to $\chi_{\rm LP}$.
In this perturbative method, the itinerant features of Bloch electrons
in solids are clarified systematically.
A schematic picture is shown in Fig.~\ref{Fig:02}.
\subsection{Atomic limit}
In order to study the atomic limit in the present formula, it is appropriate to
use LCAO.
Let us consider a situation in which the periodic potential $V({\bm r})$ is
written as
\begin{equation}
V({\bm r}) = \sum_{{\bm R}_i} V_0({\bm r}-{\bm R}_i),
\label{PotSum}
\end{equation}
where ${\bm R}_i$ represent lattice sites and $V_0({\bm r})$ is a potential
of a single atom.
We use atomic orbitals $\phi_n({\bm r})$ which satisfy
\begin{equation}
\left( -\frac{\hbar^2}{2m} \nabla^2 + V_0({\bm r}) \right) \phi_n ({\bm r}) =
E_n \phi_n ({\bm r}).
\label{phiEq}
\end{equation}
Using these atomic orbitals, we consider the LCAO wave function
\begin{equation}
\varphi_{n{\bm k}} ({\bm r}) = \frac{1}{\sqrt{N}}
\sum_{{\bm R}_i} e^{-i{\bm k}({\bm r}-{\bm R}_i)} \phi_n ({\bm r}-{\bm R}_i),
\label{LCAO}
\end{equation}
which is used as a basis set for $\Uell({\bm r})$.
Here $N$ is the total number of unit cells.
It is easily shown that $\varphi_{n{\bm k}}({\bm r})$ are periodic functions with
the same period as $V({\bm r})$.
In the atomic limit,
$V_0({\bm r}-{\bm R}_i)$ and $\phi_n ({\bm r}-{\bm R}_i)$
are confined in a unit cell and there is no overlap between nearest-neighbor
$V_0({\bm r}-{\bm R}_i)$ or nearest-neighbor $\phi_n ({\bm r}-{\bm R}_i)$.
In this case, it is easily shown that the LCAO wave function,
$\varphi_{\ell{\bm k}} ({\bm r})$, in (\ref{LCAO}) satisfies eq.~(\ref{UellEq})
with energy $\Eell=E_\ell$, which is independent of $\bm k$.
Therefore, $\Uell$ is just given by
\begin{equation}
\Uell ({\bm r}) = \varphi_{\ell{\bm k}} ({\bm r}) = \frac{1}{\sqrt{N}}
\sum_{{\bm R}_i} e^{-i{\bm k}({\bm r}-{\bm R}_i)} \phi_\ell ({\bm r}-{\bm R}_i).
\label{AtomicLim}
\end{equation}
By substituting eq.~(\ref{AtomicLim}) and $\Eell = E_\ell$
into eqs.~(\ref{ChiLP})-(\ref{ChiC}),
we obtain $\chi_{\rm LP}, \chi_{\rm inter}, \chi_{\rm FS}$ and $\chi_{\rm occ}$.
Since $E_\ell$ is $\bm k$-independent, $\chi_{\rm LP}=\chi_{\rm FS} = 0$.
For $\chi_{\rm inter}$, we obtain
\begin{equation}
\begin{split}
&\chi_{\rm inter} = -\frac{e^2}{\hbar^2 c^2} \sum_{\ell \ne \ell', {\bm k}}
\frac{f(E_\ell)}{E_\ell - E_{\ell'}} \biggl| \frac{1}{N} \frac{\hbar^2}{m}\sum_{{\bm R}_i, {\bm R}_j} \cr
& \int (x-R_{jx}) e^{i{\bm k}({\bm r}-{\bm R}_j)} \phi_\ell^* ({\bm r}-{\bm R}_j)
e^{-i{\bm k}({\bm r}-{\bm R}_i)} \nabla_y \phi_{\ell'} ({\bm r}-{\bm R}_i) d{\bm r} \cr
& -\int (y-R_{jy}) e^{i{\bm k}({\bm r}-{\bm R}_j)} \phi_\ell^* ({\bm r}-{\bm R}_j)
e^{-i{\bm k}({\bm r}-{\bm R}_i)} \nabla_x\phi_{\ell'} ({\bm r}-{\bm R}_i) d{\bm r} \biggr|^2.
\end{split}
\end{equation}
Since there is no overlap between atomic orbitals, only the terms with
${\bm R}_i={\bm R}_j$ survive. Thus $\chi_{\rm inter}$ is simplified as
\begin{equation}
\begin{split}
\chi_{\rm inter} &= -\frac{e^2}{\hbar^2 c^2} \sum_{\ell \ne \ell', {\bm k}}
\frac{f(E_\ell)}{E_\ell - E_{\ell'}} \cr
&\times \biggl|
\frac{\hbar^2}{m} \int \phi_\ell^* ({\bm r}) (x\nabla_y - y\nabla_x) \phi_{\ell'} ({\bm r})
d{\bm r} \biggr|^2,
\label{vanVleckProper}
\end{split}
\end{equation}
where the ${\bm r}$-integral is shifted to the center of the atomic orbital, ${\bm R}_i$.
The right-hand side of (\ref{vanVleckProper}) is nothing but the Van Vleck susceptibility
which we denote as $\chi^{\rm ({\rm Van\ Vleck})}$.
Similarly, by substituting (\ref{AtomicLim}) and $\Eell=E_\ell$ into (\ref{ChiC}), we obtain
\begin{equation}
\begin{split}
\chi_{\rm occ} &= -\frac{e^2}{2\hbar^2 c^2}
\sum_{\ell, {\bm k}} f(E_\ell) \frac{\hbar^2}{m}
\int \frac{\partial \varphi_{\ell{\bm k}}^\dagger}{\partial k_x} \frac{\partial \varphi_{\ell{\bm k}}}{\partial k_x} d{\bm r}
+ (x \leftrightarrow y) \cr
&= -\frac{e^2}{2\hbar^2 c^2} \sum_{\ell, {\bm k}} f(E_\ell) \frac{\hbar^2}{m}
\int (x^2+y^2) |\phi_{\ell} ({\bm r})|^2 d{\bm r}.
\label{coreProper}
\end{split}
\end{equation}
This is just the atomic diamagnetism coming from the core electrons
which we denote as $\chi^{\rm ({\rm atomic\ dia.})}$.
Therefore, in the atomic limit, we have
$\chi=\chi^{\rm ({\rm Van\ Vleck})}+\chi^{\rm ({\rm atomic\ dia.})}$.
(See the left-hand side of Fig.~\ref{Fig:02}.)
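As an illustration (assuming spherically symmetric atomic orbitals, for which
$\int (x^2+y^2) |\phi_\ell|^2 d{\bm r} = \frac{2}{3}\langle r^2 \rangle_\ell$),
eq.~(\ref{coreProper}) reduces to
\begin{equation}
\chi_{\rm occ} = -\frac{e^2}{3mc^2} \sum_{\ell, {\bm k}} f(E_\ell) \langle r^2 \rangle_\ell,
\end{equation}
which has the form of the familiar Langevin diamagnetism,
$-e^2 \langle r^2 \rangle / 6mc^2$ per electron once the spin degeneracy is taken into account.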
\subsection{Perturbation with respect to the overlap integrals}
Next we consider the case in which there are overlap integrals
between the nearest-neighbor atomic orbitals.
In this case, using $\varphi_{n{\bm k}}({\bm r})$ in eq.~(\ref{LCAO}),
we expand $\Uell$ as
\begin{equation}
\begin{split}
\Uell ({\bm r}) &= \sum_n c_{\ell, n}({\bm k}) \varphi_{n{\bm k}} ({\bm r}) \cr
&=\frac{1}{\sqrt{N}} \sum_n \sum_{{\bm R}_i} c_{\ell, n}({\bm k})
e^{-i{\bm k}({\bm r}-{\bm R}_i)} \phi_n ({\bm r}-{\bm R}_i).
\label{UellTBA}
\end{split}
\end{equation}
(Note that $c_{\ell, n}({\bm k})=\delta_{\ell, n} $ in the atomic limit.)
The coefficients $c_{\ell, n}({\bm k})$ should be determined so that $\Uell$
satisfies eq.~(\ref{UellEq}).
This can be achieved by solving the eigenvalue problem
\begin{equation}
\sum_m h_{nm}({\bm k}) c_{\ell, m}({\bm k}) = \Eell({\bm k}) \sum_m s_{nm}({\bm k}) c_{\ell, m}({\bm k}),
\label{EigenEquation}
\end{equation}
where the Hamiltonian matrix elements are
\begin{equation}
h_{nm}({\bm k}) = \int \varphi_{n{\bm k}}^*({\bm r}) H_{\bm k} \varphi_{m{\bm k}}({\bm r}) d{\bm r},
\end{equation}
and $s_{nm}({\bm k})$ represents the integral
\begin{equation}
s_{nm}({\bm k}) = \int \varphi_{n{\bm k}}^*({\bm r}) \varphi_{m{\bm k}}({\bm r}) d{\bm r}.
\end{equation}
$h_{nm}({\bm k})$ and $s_{nm}({\bm k})$ can be calculated perturbatively with respect
to the overlap integral
\begin{equation}
\int \phi_n^* ({\bm r}-{\bm R}_j) {\cal O} \phi_m ({\bm r}-{\bm R}_i) d{\bm r},
\end{equation}
with $\cal O$ being an operator and ${\bm R}_j \ne {\bm R}_i$.
For example, the first-order term of $h_{nm}({\bm k})$ contains
the hopping integral used in the tight-binding model.
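Schematically (in the two-center approximation, and up to the phase convention in eq.~(\ref{LCAO})), the first-order form reads
\begin{equation}
h_{nm}({\bm k}) \simeq E_n s_{nm}({\bm k})
+ \sum_{{\bm R} \ne 0} e^{i{\bm k}\cdot{\bm R}}
\int \phi_n^* ({\bm r}) \left[ V({\bm r}) - V_0({\bm r}) \right] \phi_m ({\bm r}-{\bm R}) d{\bm r},
\end{equation}
where the integral in the second term is the hopping integral between orbitals on sites separated by the lattice vector ${\bm R}$.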
By substituting eq.~(\ref{UellTBA}) and $\Eell({\bm k})$ into
eqs.~(\ref{ChiLP})-(\ref{ChiC}),
we can show that each of four contributions,
$\chi_{\rm LP}, \chi_{\rm inter}, \chi_{\rm FS}$ and $\chi_{\rm occ}$,
is calculated perturbatively with respect to the overlap integrals.
In contrast to the atomic limit, there are two new features:
(1) $\Eell({\bm k})$ has band dispersion due to the hopping integrals, and
(2) $\Uell({\bm r})$ has an additional $\bm k$-dependence through
$c_{\ell,n}({\bm k})$ in eq.~(\ref{UellTBA}).
The latter gives several contributions to the orbital susceptibility
originating from the $\bm k$-derivatives of $\Uell({\bm r})$.
One may expect that $\chi_{\rm LP}$ is dominant in the first-order perturbation.
However, we find that the situation is not so simple even in the single-band case.
Each of $\chi_{\rm LP}, \chi_{\rm inter}, \chi_{\rm FS}$
and $\chi_{\rm occ}$ depends on the location of the chemical potential as well as
on the details of the model.
In the forthcoming paper,
we will discuss several explicit models such as single-band and two-band tight-binding
models.
\section{Discussions}
Based on the exact formula,
we have shown rigorously that the orbital susceptibility for Bloch electrons can be
described in terms of four contributions,
$\chi=\chi_{\rm LP}+\chi_{\rm inter}+\chi_{\rm FS}+\chi_{\rm occ}$.
Except for the Landau-Peierls susceptibility, $\chi_{\rm LP}$,
the other three contributions involve
the crystal-momentum derivatives of $\Uell$'s.
These contributions represent the effects of the deformation of the wave function
due to the magnetic field.
We find that $\chi_{\rm occ}$ contains the contributions from the occupied states
in the partially-filled band, which has not been recognized before.
We applied the present formula to the model of LCAO.
In the atomic limit where there are no overlap integrals,
$\chi_{\rm inter}$ becomes $\chi^{\rm ({\rm Van\ Vleck})}$ and
$\chi_{\rm occ}$ becomes $\chi^{\rm ({\rm atomic\ dia.})}$.
These two are the only contributions to $\chi$ in the atomic limit.
When the overlap integrals are finite, we have argued that $\chi$ can be
calculated by treating them as a perturbation.
In this method, itinerant features of Bloch electrons in solids can be
clarified systematically for the first time.
The present formalism can be used as a starting point for various extensions.
Several future problems are as follows:
(1) It is very interesting to apply the present formula to
the multi-band tight-binding models.
A typical example is the honeycomb lattice which is a model for
graphene.\cite{Safran,Safran2,Saito,KoshinoAndo,Gomez,Piechon}
In this case, we have A- and B-sublattices in a unit cell, and as a result,
we have massless Dirac electrons (or more precisely, Weyl electrons), which
constitute a typical two-band model.
The orbital susceptibility has been calculated by several
groups\cite{KoshinoAndo,Gomez,Piechon} based on the Peierls phase.
In contrast, in the present formula, all the contributions from Bloch bands
are included rigorously.
Application of the present formula to graphene will be discussed
in the forthcoming paper.
(2) In the present Hamiltonian, the spin-orbit interaction is not included.
It is also a very interesting problem to study the orbital susceptibility
in the presence of spin-orbit interaction.
As discussed recently by Gao {\it et al.},\cite{Gao}
the orbital susceptibility in the Hamiltonian with broken time-reversal
symmetry\cite{Niu,Thon1,Niu2,Thon2} is another interesting problem.
This will also be studied in the forthcoming paper.
As suggested in Ref.~\cite{Gao}, there appear several terms which are written in terms of
Berry curvatures.
(3) We have confined ourselves to the orbital susceptibility in this paper.
The transport coefficients are of course interesting quantities.\cite{FukuyamaHall}
Hall conductivity in the Weyl equation realized in graphene and an organic conductor
$\alpha$-(BEDT-TTF)$_2$I$_3$\cite{FukuGra,Kobayashi,FukuRev} as well as in bismuth\cite{FOF1,BiRev}
has been discussed.
The method used in this paper can be similarly applied to the Hall conductivity
in the Bloch representation.
\bigskip\noindent
{\bf Acknowledgment}
We thank F.\ Pi\'echon, I.\ Proskurin, Y.\ Fuseya,
H.\ Matsuura, T.\ Mizoguchi and N.\ Okuma for very fruitful discussions.
This paper is dedicated to Professor Ryogo Kubo (1920-1995), who guided
the authors into the never-fashionable but very deep and fundamentally
important problem of orbital magnetism in solids.
This work was supported by a Grant-in-Aid for Scientific Research on
\lq\lq Dirac Electrons in Solids'' (No.\ 24244053) and
\lq\lq Multiferroics in Dirac electron materials'' (No.\ 15H02108).
\onecolumn
\section*{Abstract}
{\bf
Holographic states that have a well-defined geometric dual in AdS/CFT are not faithfully represented by Haar-typical states in finite-dimensional models. As such, trying to apply principles and lessons from Haar-random ensembles of states to holographic states can lead to apparent puzzles and contradictions. We point out a handful of these pitfalls.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\clearpage
\section{Introduction}
Typical states and the technique of averaging over ensembles of states are powerful tools in quantum information theory.
In high-energy physics, these tools have been a useful component of quantum gravity studies over the last several years.
In particular, the notion of Haar-typical states and Page's theorem are two workhorses of quantum information in quantum gravity.
Roughly speaking, a Haar-typical state is a fully-random quantum state.
Given a finite-dimensional Hilbert space $\mathcal{H}$ and any reference state $\ket{\psi_0} \in \mathcal{H}$, a Haar-typical state may be thought of as a realization of the random variable $\ket{\psi(U)} = U \ket{\psi_0}$, where $U$ is a random unitary matrix drawn with uniform probability from the set of all unitary matrices.
The uniform probability distribution over the unitaries is the normalized Haar measure over the set of unitary matrices.
For convenience, we will just refer to this as ``the Haar measure'' for the rest of this note.
The Haar measure is a key ingredient in both the statement and proof of Page's theorem \cite{Page:1993df}.
Suppose now that the Hilbert space splits into two subfactors, $\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_B$.
If $\ket{\psi}$ is a Haar-typical state on $\mathcal{H}$, then Page's theorem essentially says that, with high probability, the reduced state of $\ket{\psi}$ in the $\mathcal{H}_A$ subfactor is very nearly maximally entangled with the part of the state in the $\mathcal{H}_B$ subfactor if the dimension of $\mathcal{H}_A$ is much smaller than that of $\mathcal{H}_B$.
Taken together, Page's theorem and Haar typicality are a precise statement of the notion that a small subsystem generically tends to be maximally entangled with its larger complement in a fixed Hilbert space.
Page's theorem has been fruitfully applied to the physics of semiclassical black holes \cite{Page:1993wv, Page:2013dx, Hayden:2007cs}.
Here, one usually considers a collection of matter that is initially in a pure state and that collapses into a black hole, which over time dissipates via the process of Hawking evaporation \cite{HawkingRad}.
From a quantum-mechanical standpoint, the whole process is modelled as taking place in a single Hilbert space, $\mathcal{H}$, in which a finite number of degrees of freedom are divided up between the black hole and radiation.
Of course, the factorization of $\mathcal{H}$ varies from Cauchy slice to Cauchy slice as the black hole evaporates, but if black holes do not destroy information, then the evolution of the total state is unitary and the size of the total Hilbert space must be constant in this model.
While the quantum-gravitational dynamics of black hole degrees of freedom are unknown, for information-theoretic purposes it seems reasonable to model the dynamics in the black hole's Hilbert space by Haar-random unitary evolution (or a 2-design approximation thereof \cite{Hayden:2007cs}), at least on timescales shorter than the interval between Hawking emissions, so that the size of the black hole Hilbert space is constant.
Therefore, shortly after the black hole forms,\footnote{Or more precisely, one scrambling time after the black hole forms \cite{Hayden:2007cs}.} the total state in $\mathcal{H}$ is Haar-typical in this model, and so Page's theorem may be used to study the entanglement properties of the state across the changing factorization of $\mathcal{H}$ during the subsequent evolution.
For example, this sort of analysis revealed that if you toss a small quantum system into a black hole, then information about its state is rapidly returned via Hawking radiation once the black hole has given up more than half of its degrees of freedom through evaporation \cite{Hayden:2007cs}.
Similar considerations are at the heart of the ongoing debates that surround complementarity \cite{Susskind:1993if}, the black hole information problem \cite{Polchinski:2016hrw}, and firewalls \cite{Almheiri:2012rt,Almheiri:2013hfa}.
However, an area in which Page's theorem and Haar-typicality have significantly less applicability is AdS/CFT \cite{Maldacena:1997re, Witten:1998qj}.
For starters, the Haar measure is not even well-defined for a conformal field theory (CFT), whose Hilbert space is infinite-dimensional.
One must therefore work with a finite-dimensional approximation of the CFT in order to define the Haar measure.
But, even assuming that such a regularization can be made in a satisfactory way, the Haar-typical states of this approximation will be of limited use in modelling holographic states with well-defined geometric gravitational duals because they are too entangled on short scales.
If $A$ is a small spacelike region in the boundary, the reduced state on $A$ of a Haar-typical state of the approximate theory will have extensive entanglement entropy that scales like the volume of $A$, per Page's theorem.
For a holographic state, when $A$ is small enough, the minimal surface anchored to $\partial A$ only probes the region of the bulk gravitational dual near its asymptotically anti-de Sitter (AdS) boundary.
Therefore, the area of the minimal surface, and hence the entropy of the reduced holographic state on $A$, must be sub-extensive in the volume of $A$.
Haar-typical states in the CFT approximation are therefore not good models for holographic states.\footnote{Nevertheless, careful use of random state statistics and suitable generalizations of Haar-typicality, such as typicality with respect to the microcanonical ensemble in a fixed energy window, can serve as useful tools for holography. We will return to this point in \Sec{sec:otherdistros}.}
That measures of holographic states have distinctive structure is well-known in the community.
Nevertheless, it is quite easy to momentarily overlook this fact, which can lead to apparent puzzles in holography.
Our main goal in writing this short note is to highlight this observation and to point out a handful of potential pitfalls that arise from overextending Page's theorem when it enters into discussions about holography in the literature.
In particular, our goal is not to assess the validity of various cutoffs or finite-dimensional models of CFTs, which is an important topic in and of itself.
Rather, such techniques being commonplace in both the literature and the verbal lore (including sometimes the haphazard use of Page's theorem without reference to any sort of finite-dimensional regularization), we only aim to identify limitations on typicality-based arguments which must accompany these techniques.
We hope that both veterans and novices will find this note to be digestible and pedagogical.
In Section \ref{RandomProperties}, we review the precise definition of Haar-typicality and the precise statement of Page's theorem, and we reiterate the argument for why Haar-typicality is of limited utility for holography in commensurate language.
Then, in Section \ref{Puzzles}, we point out several puzzles and potential pitfalls in the literature.
We offer some concluding remarks in Section \ref{Conclusions}.
\section{Properties of random states} \label{RandomProperties}
Let $\mathcal{H}$ be a finite-dimensional Hilbert space with dimension $n$ and consider the group of unitary transformations, $\mathrm{U}(n)$, acting on $\mathcal{H}$.
\begin{definition}
The normalized \emph{Haar measure} on $\mathrm{U}(n)$ is the unique measure $\mu$ measuring subsets of $\mathrm{U}(n)$ such that:
\begin{itemize}
\item $\mu(U\mathcal{S}) = \mu(\mathcal{S}U) = \mu(\mathcal{S})$ for all $\mathcal{S} \subset \mathrm{U}(n)$ and $U \in \mathrm{U}(n)$, where $U\mathcal{S} = \left\{UV \; | \; V \in \mathcal{S}\right\}$ and $\mathcal{S}U$ is similarly defined.
\item $\mu(\mathrm{U}(n)) = 1$.
\end{itemize}
\end{definition}
\noindent From the properties above, it also follows that $\mu$ is non-negative, so the normalized Haar measure on $\mathrm{U}(n)$ defines a uniform probability distribution on the group of unitary transformations.
To state Page's theorem, we also need the trace norm, which is defined as $\Vert T \Vert_1 = \mathrm{tr}~\sqrt{T^\dagger T}$ for any linear operator $T$ on $\mathcal{H}$.
The trace norm gives a good notion of distinctness of states because if $\Vert \rho - \sigma \Vert_1 < \epsilon$ for any two density operators (i.e., states) $\rho$ and $\sigma$ on $\mathcal{H}$, then $\Vert P(\rho - \sigma) \Vert_1 < \epsilon$ for any projector $P$ so that the probabilities for measurement outcomes in the states $\rho$ and $\sigma$ are close \cite{Harlow:2014yka}.
Now suppose that $\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_B$, where $\mathrm{dim}~\mathcal{H}_A = d_A$ and $\mathrm{dim}~\mathcal{H}_B = d_B$ with $n = d_A d_B$.
Without loss of generality, suppose that $d_A \leq d_B$.
A precise statement of Page's theorem is as follows\footnote{As a historical note, the theorem originally proved by Page \cite{Page:1993df} was formulated in terms of entanglement entropies and built upon earlier works by Lubkin \cite{Lubkin:1978} and Lloyd \& Pagels \cite{Lloyd:1988cn}.
The version of Page's theorem given here is based on the statement appearing in \cite{Harlow:2014yka}.
A detailed yet accessible proof of Page's theorem can be found in Sec. 10.9 of \cite{Preskill:2016htv} under the name of ``the decoupling inequality.''}:
\begin{theorem}[Page]
Let $\ket{\psi_0} \in \mathcal{H}$ be a fixed reference state and let $\ket{\psi(U)} = U \ket{\psi_0}$ for any $U \in \mathrm{U}(n)$.
Let $\rho_A(U) = \mathrm{tr}_B \, \ketbra{\psi(U)}{\psi(U)}$ denote the reduced state of $\ket{\psi(U)}$ on $\mathcal{H}_A$.
Then it follows that
\begin{equation}
\int d\mu(U)~ \left\Vert \, \rho_A(U) - \frac{I_A}{d_A} \, \right\Vert_1 \leq \sqrt{\frac{d_A}{d_B}},
\end{equation}
where $I_A$ is the identity operator on $\mathcal{H}_A$.
\end{theorem}
Page's theorem therefore says that the \emph{average} distance between a random reduced state $\rho_A(U)$ and the maximally mixed state on $\mathcal{H}_A$, $I_A/d_A$, is bounded by $\sqrt{d_A/d_B}$.
The bite of Page's theorem comes when $d_A \ll d_B$, in which case the average distance is very small.
Note that this can be the case for qubit systems even when the number of qubits associated with $A$ is only a little smaller than those associated with $B$, as the dimensionality scales exponentially in the number of qubits.
In this case, the interpretation of Page's theorem is that the reduced state on a small subfactor of a randomly-chosen pure state is, with high probability, close to the maximally mixed state.
Or, in other words, for a Haar-typical state, the reduced state on a small subfactor of Hilbert space is very nearly maximally entangled with its complementary degrees of freedom.
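This statement is easy to check numerically. The following sketch (not part of the original analysis) samples Haar-random pure states as normalized complex Gaussian vectors, which are uniformly distributed on the unit sphere of $\mathcal{H}$, and compares the average trace distance to the bound $\sqrt{d_A/d_B}$:

```python
import numpy as np

def haar_state(n, rng):
    """A normalized complex Gaussian vector is Haar-distributed on the unit sphere."""
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return v / np.linalg.norm(v)

def dist_to_maximally_mixed(psi, d_A, d_B):
    """Trace distance between rho_A = Tr_B |psi><psi| and I_A / d_A."""
    M = psi.reshape(d_A, d_B)
    rho_A = M @ M.conj().T                       # reduced state on A
    T = rho_A - np.eye(d_A) / d_A
    return np.abs(np.linalg.eigvalsh(T)).sum()   # trace norm of Hermitian T

rng = np.random.default_rng(0)
d_A, d_B = 4, 256                                # 2 qubits in A, 8 qubits in B
samples = [dist_to_maximally_mixed(haar_state(d_A * d_B, rng), d_A, d_B)
           for _ in range(200)]
avg, bound = np.mean(samples), np.sqrt(d_A / d_B)
print(f"average trace distance = {avg:.4f} <= bound = {bound:.4f}")
```

The empirical average falls below the Page bound, and shrinks further as $d_B$ grows at fixed $d_A$.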
Now consider a CFT in $D$ spacetime dimensions.
As mentioned in the introduction, the Haar measure is ill-defined because the Hilbert space of the CFT is infinite-dimensional.
Therefore, here we will assume that one also has a regularization of the CFT with the following properties:
\begin{itemize}
\item The dimension of its Hilbert space, $\mathcal{H}^\prime$, is finite.
\item For a spacelike region $A$ in the CFT, there exists a corresponding subspace $\mathcal{H}_A^\prime \subseteq \mathcal{H}^\prime$, which we can think of as a spatially-local factor of Hilbert space.
\item $\log \dim \mathcal{H}_A^\prime$ scales extensively with the volume of $A$.
In other words, if the region $A$ in the CFT has a characteristic linear dimension $l$ in some fixed system of coordinates, then
\begin{equation} \label{eq:logdim}
\log \dim \mathcal{H}_A^\prime \propto \left( \frac{l}{\epsilon} \, \right)^{D-1} .
\end{equation}
Written as such, $\epsilon$ may be interpreted as the (uniform) density of degrees of freedom of $\mathcal{H}^\prime$ in the chosen coordinate system.
\end{itemize}
For example, $\mathcal{H}^\prime$ could describe a critical spin chain on a periodic lattice of spacing $\epsilon$, where a CFT is recovered in the continuum limit $\epsilon \rightarrow 0$ \cite{Vidal:2002rm, Latorre:2003kg, Calabrese:2004eu, Calabrese:2005in, Keating:2005ri}.
In a similar vein, any tensor network model for holography has a finite-dimensional boundary space by construction \cite{Swingle:2009bg, Qi:2013caa, Pastawski:2015qua, Hayden:2016cfa, Evenbly:2017hyg}.
Another example is the sort of qubitization of $\mathcal{N}=4$ super Yang-Mills theory in $D=4$ dimensions proposed in section~5.3 of \Ref{Almheiri:2014lwa}.
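As a concrete check of the scaling (\ref{eq:logdim}), for a spin-$\frac{1}{2}$ chain ($D=2$) a region $A$ of length $l$ contains $l/\epsilon$ sites, so
\begin{equation}
\dim \mathcal{H}_A^\prime = 2^{l/\epsilon}, \qquad
\log \dim \mathcal{H}_A^\prime = \frac{l}{\epsilon} \log 2 \propto \left( \frac{l}{\epsilon} \right)^{D-1},
\end{equation}
in agreement with eq.~(\ref{eq:logdim}).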
Now that we are armed with a finite-dimensional Hilbert space, let $\ket{\psi^\prime}$ be some Haar-typical state on $\mathcal{H}^\prime$ and consider the reduced state on $\mathcal{H}_A^\prime$.
According to Page's theorem, the reduced state on $\mathcal{H}_A^\prime$, call it $\rho_A^\prime$, is very close to being maximally mixed provided that $A$ is a small subregion of the entire CFT.
In particular, this means that its von Neumann entropy scales extensively with the volume of $A$, having the same scaling as in \Eq{eq:logdim}:
\begin{equation} \label{eq:S_Haar}
S(\rho_A^\prime) \propto \left(\frac{l}{\epsilon}\right)^{D-1}
\end{equation}
The way this von Neumann entropy scales is in tension with the Ryu-Takayanagi formula \cite{Ryu:2006bv}, and so a Haar-typical state in $\mathcal{H}^\prime$ like $\ket{\psi^\prime}$ cannot be a good model for a holographic CFT state.
Suppose now that the CFT in question has a state $\ket{\phi}$ with a well-defined, asymptotically AdS geometric dual in $D+1$ dimensions where we can think of the CFT as living on the bulk geometry's boundary.\footnote{Here we only consider the case where the bulk geometry is stationary, but a generalization to time-dependent geometries using the covariant Hubeny-Rangamani-Takayanagi (HRT) formula \cite{Hubeny:2007xt} is likely possible.}
Per the Ryu-Takayanagi formula, the von Neumann entropy of the reduced state on $A$, call it $\sigma_A$, is given by the area in Planck units of a bulk minimal surface $\tilde A$ that is homologous to $A$ and such that $\partial A = \partial \tilde A$:
\begin{equation}
S(\sigma_A) = \frac{\mathrm{area}(\tilde A)}{4G}
\end{equation}
The tension comes when $A$ is small enough such that the minimal surface $\tilde A$ only probes the near-boundary region, which has asymptotic AdS geometry.
Since such $\tilde A$ only sees AdS geometry, its area will be subextensive compared to the volume of $A$ in the boundary.
For example, when $D=3$ and when $A$ is a small disk of radius $l$, the area of $\tilde A$ scales like $l/{\tilde \epsilon}$, where $\tilde \epsilon$ is a UV cutoff in the bulk.\footnote{The precise relationship between $\epsilon$ and $\tilde \epsilon$ depends on the finite-dimensional theory in $\tilde \mathcal{H}$. For example, \Ref{McGough:2016lol} argues that a $T\bar{T}$ deformation in a $D=2$ holographic CFT results in a rigid bulk UV cutoff with specific boundary conditions. Provided that $\epsilon$ and $\tilde \epsilon$ do not depend parametrically on the size of $A$, however, the scalings of $S(\rho_A^\prime)$ and $S(\sigma_A)$ may be compared.}
In general, the scaling is
\begin{equation}
\mathrm{area}(\tilde A) \propto \int_{\tilde \epsilon/l}^1 d\zeta~ \frac{(1-\zeta^2)^{(D-3)/2}}{\zeta^{D-1}} \leq \int_{\tilde \epsilon/l}^1 \frac{d\zeta}{\zeta^{D-1}}
\end{equation}
which is subextensive compared to $(l/{\tilde \epsilon})^{D-1}$.
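Making the scaling explicit, for $D>2$ the bounding integral evaluates to
\[
\int_{\tilde \epsilon/l}^1 \frac{d\zeta}{\zeta^{D-1}}
= \frac{1}{D-2}\left[\left(\frac{l}{\tilde \epsilon}\right)^{D-2}-1\right]
\propto \left(\frac{l}{\tilde \epsilon}\right)^{D-2},
\]
one power of $l/\tilde \epsilon$ below the extensive scaling in \Eq{eq:S_Haar}; for $D=2$ the integral instead grows like $\log(l/\tilde \epsilon)$.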
Therefore, Haar-typical states in $\mathcal{H}^\prime$ cannot be good models for typical holographic states, and arguments which depend critically on Haar-typicality will generally not apply to states with classical holographic bulk duals.
\section{Puzzles resolved and pitfalls espied} \label{Puzzles}
Having reviewed Haar-typicality and Page's theorem, we now identify a handful of situations in the literature on holography where intuition from and use of Haar-typical states can be misleading.
We also discuss two situations in which the use of Haar-typicality is appropriate.
\subsection{Measures of holographic states}
A current area of research is what fraction of quantum states are allowed to be holographic states.
In particular, the measure computed using the entropy cone \cite{Bao:2015bfa} seems to conflict with a recent numerical study by Rangamani and Rota \cite{Rangamani:2015qwa} for reasons that we now clarify.
In their work, Rangamani and Rota study how measures of entanglement and the structure of entanglement among different partitions of a pure state characterize the structure of the state.
Given a randomly-chosen pure state of $N$ qubits, they first trace out $k<N$ qubits and then compute various measures of entanglement among partitions of the remaining $N-k$ qubits.
For example, one entanglement measure that they check is whether states generated in this way obey monogamy of mutual information (MMI):
\begin{equation}
S_{AB}+S_{BC}+S_{AC}\geq S_A+S_B+S_C+S_{ABC}.
\end{equation}
In the above, $A$, $B$, and $C$ denote subsets of the remaining $N-k$ qubits, and all permutations such that $3 \leq |ABC| \leq N-k$ are checked.
($|A|$, $|AB|$, etc.\ denote the numbers of qubits in $A$, $AB$, etc.)
MMI is an inequality which holographic states must obey but generic quantum states need not \cite{Hayden:2011ag}.
In their work, it was found that when the method described above is Monte Carlo iterated, almost all states generated in this way satisfy MMI.
It is further believed that the higher party-number holographic inequalities \cite{Bao:2015bfa} would be generically obeyed, as well.
The entropy cone measure, by contrast, does not randomly generate states, but rather generates the entanglement entropies $S_A, S_B, S_{AB},\ldots$ directly.
Once the requisite entanglement entropies are generated, it is then determined whether they (a) are entanglement entropies that are valid for a quantum state and (b) satisfy the further holographic entanglement entropy inequalities, such as MMI.
The ratio of the number of sets of randomly generated entanglement entropies that are consistent with both holography and quantum mechanics as a fraction of the number of such entropies consistent with quantum mechanics is then computed.
When using this measure, it is found that just over half of all sets of entropies consistent with quantum constraints do not obey the holographic inequalities for three parties, and that this fraction appears to fall off rapidly as a function of party number \cite{Bao:2017oms}.
We stress that Rangamani and Rota's goal was not to compute a measure of holographic states; however, the entropy cone measure is somewhat puzzling in light of how weakly-constraining MMI is in their numerical assays.
At this point, it is useful to note that Rangamani and Rota's construction is generic with respect to the Haar measure; the way that the states are constructed there could have been equivalently done via the application of a Haar-random unitary.
From the perspective of this measure, if $|A|, |B|, |C| \ll N-k$, then generically we would expect MMI to be not only satisfied, but saturated---not because of any holographic consideration, but because MMI is a balanced inequality and all of the entropies are approximately equal to the logarithms of the dimensions of the corresponding Hilbert spaces by Page's theorem.
In general, MMI tends to be satisfied for arbitrary partitions of Haar-typical states, since $S_K = S_{K^c}$, with Page's theorem applied to the complement $K^c$ for any collection of qubits such that $|K|>N/2$.
If holographic states cannot be modelled by Haar-typical states in a finite-dimensional Hilbert space, then attempting to conclude which fraction of states can be holographic using a measure that is typical with respect to the Haar measure would yield false positives.
The entropy cone measure, by contrast, is unbiased with respect to subregion entropies, as it does not directly generate the states.
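The near-saturation of MMI for Haar-typical states is easy to check numerically. The following minimal sketch is our own illustration (not the numerics of \cite{Rangamani:2015qwa}): it samples a Haar-random pure state of $N=8$ qubits and evaluates the MMI combination for single-qubit $A$, $B$, $C$.

```python
import numpy as np

def haar_state(n_qubits, rng):
    # A Haar-random pure state: a normalized complex Gaussian vector
    d = 2 ** n_qubits
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def entropy(psi, keep, n_qubits):
    # Von Neumann entropy (in nats) of the reduced state on qubits `keep`,
    # computed from the Schmidt (singular value) decomposition across the cut
    psi = psi.reshape([2] * n_qubits)
    rest = [i for i in range(n_qubits) if i not in keep]
    m = np.transpose(psi, list(keep) + rest).reshape(2 ** len(keep), -1)
    p = np.linalg.svd(m, compute_uv=False) ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
N = 8
psi = haar_state(N, rng)
A, B, C = [0], [1], [2]
S = lambda qs: entropy(psi, qs, N)
mmi = S(A + B) + S(B + C) + S(A + C) - S(A) - S(B) - S(C) - S(A + B + C)
print(S(A), np.log(2))  # single-qubit entropy is near maximal (Page)
print(mmi)              # MMI combination is near zero (near-saturated)
```

Each single-qubit entropy comes out within the small Page correction of $\log 2$, and the MMI combination is close to zero, i.e.\ near-saturated rather than merely satisfied.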
\subsection{Error correction}
The assumption that holographic states are well-governed by Page's theorem also appears within the original paper positing the connection between quantum error correction and AdS/CFT \cite{Almheiri:2014lwa}.
Here, typicality with respect to the Haar measure is used to argue that deletion of arbitrary sets of $l$ qubits in the (qubitized) boundary can be totally corrected given that more than half of the boundary theory is retained.
The reason given for this sharp transition is that this is the point at which the deleted portion of the boundary changes between being just less than and just more than half of the number of qubits in the boundary.
Then, once the deleted region becomes less than half of the boundary, it becomes exponentially close to maximally mixed.
In particular, this is how the relationship between the error correction picture and the entanglement wedge was originally argued.
For a more detailed and complete picture of this argument, see \cite{Almheiri:2014lwa}.
In reality, the constraint provided by Ryu-Takayanagi prevents the deleted region of the boundary theory from being close to maximally mixed, and thus a key portion of the argument above is no longer supported.
Indeed, while it is true that a generic state with respect to the Haar measure can be corrected by a typical code with a random $k$ qubit code subspace of $n$ qubits so long as $n-2l-k \gg 1$, because holographic states are not faithfully modelled by Haar-typical states, this statement has little traction in holography.
Thus, this portion of the error correcting picture of holography is not actually suggestive of the entanglement wedge, unless another measure of holographic typicality can be shown to yield similar results.
Further works \cite{Dong:2016eik, Cotler:2017erl} eventually established the relationship between the entanglement wedge and quantum error correction.
Nevertheless, we stress that these Page-type arguments about the relationship between quantum error correction and holography are specious, and must be taken with a large grain of salt, if at all, particularly in the construction of new arguments in this field.
\subsection{Butterfly effects and shockwaves}
Haar-randomness has also been used in the context of \Ref{Shenker:2013pqa} to model the behavior of a shockwave acting on the left half of a thermofield double state.
To the past of the shockwave, the thermofield double state has a classical bulk black hole geometry.
The state is conjectured to have a classical bulk geometry after the shockwave, whose sole effect should be the displacement of the event horizon.
This would be difficult to realize by approximating the shockwave as a Haar-random unitary.
Based on the previous discussion, acting with a Haar random unitary on (a finite-dimensional regularization of) a CFT state with a well-defined geometric dual would typically create a generic state whose corresponding dual (at least on the left half) would be totally non-classical (in order for boundary subregions to obey Page's theorem).
This is undesirable for approximating shockwave geometries, as one loses the ability to reproduce holographic entanglement entropies on spacelike slices of the left region.
\subsection{Random tensor networks}
We note, however, that holographic models can exploit Haar-randomness, as long as the objects that are Haar-random are only Haar-random below the curvature scale.
For example, the random tensor model of holography \cite{Hayden:2016cfa} has Haar-random unitaries involved in the creation of each individual tensor site.
This should not, however, create a global Haar-typical state, as the spatial arrangement of the tensors in the tensor network would provide a structure to the entanglement that is consistent with that of holography.
In this way the overall global state avoids being Haar-typical.
It is possible that such models may have issues with reproducing physics below the curvature scale, but that is outside of the scope of this work.
\subsection{Other probability distributions}
\label{sec:otherdistros}
Similarly, states that are typical with respect to other random state distributions can also serve as good models for holographic states.
For example, let $\mathcal{H}_{[E,E + \delta E]}$ denote the subspace spanned by the eigenstates of the Hamiltonian whose energies lie within the range $[E, E + \delta E]$ for some $\delta E \ll E$.
A random state drawn with uniform probability from this energy shell (or in other words, sampled from the microcanonical ensemble) typically has a pure-state black hole dual, where the mass of the black hole is set by the energy $E$ \cite{Banks:1998dd,Peet:1998cr,Avery:2015hia}.
In this case, a uniform measure on $\mathcal{H}_{[E,E+\delta E]}$ is admissible because the resulting measure on $\mathcal{H}$ is not itself uniform.
That typical microcanonical states have black hole duals also gives us a nice way to understand, at a heuristic but intuitive level, why random states in the whole CFT should not have good geometric duals (at least for the following notions of randomness and goodness).
Consider collating a collection of energy shells with increasing energies.
In other words, letting $\mathcal{H}_i \equiv \mathcal{H}_{[E_i,E_i+\delta E]}$, consider the set
\begin{equation}
\mathcal{H}_1 \cup \mathcal{H}_2 \cup \cdots \cup \mathcal{H}_N,
\end{equation}
with $E_1 < E_2 < \cdots < E_N$.
Suppose that we choose a state at random from this set.
Heuristically, because the density of states scales exponentially with energy, the random choice of a state will be dominated by the states of the highest energies.
In other words, we should expect a randomly chosen state to come from the highest energy shells and, as such, to correspond to a large black hole.
However, in the limit where we collate shells that cover the whole Hilbert space, we should expect a random state to correspondingly describe a black hole that takes up the whole spacetime.
There is no asymptotically-AdS geometry, in the sense that boundary-anchored geodesics just skirt along the horizon of the black hole, or equivalently, the boundary of the spacetime, since the black hole is all there is.
We are disinclined to think of such a state as being geometric; at the very least, we would not call it a ``good'' geometric state.
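Heuristically, the dominance of the highest shells can be made quantitative: a state drawn uniformly from this union lands in shell $i$ with probability
\[
\Pr(i) = \frac{\dim \mathcal{H}_i}{\sum_{j=1}^N \dim \mathcal{H}_j}
\sim \frac{e^{S(E_i)}}{\sum_{j=1}^N e^{S(E_j)}},
\]
which is exponentially concentrated on the top shell $E_N$ whenever the entropy $S(E)$ grows with $E$.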
\section{Conclusion} \label{Conclusions}
A randomly-chosen state is probably not holographic.
This seems to be generally acknowledged in the holography community, but it can be easy to overlook when tempted with results which stem from Haar-typicality.
From this perspective, we clarified some potentially confusing points in the literature.
We hope that this will help newcomers to the field to avoid being misled by Haar-random intuition.
\section*{Acknowledgements}
We would like to thank Hirosi Ooguri, Daniel Harlow, Massimiliano Rota and Mukund Rangamani for discussions.
We would especially like to thank Bogdan Stoica, Sam Blitz, and Veronika Hubeny for collaboration in the early part of this work.
\paragraph{Funding information}
N.B. is supported in part by the DuBridge Postdoctoral Fellowship, by the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center (NSF Grant PHY-1125565) with support of the Gordon and Betty Moore Foundation (GBMF-12500028).
A.C.-D. is supported by the Gordon and Betty Moore Foundation through Grant 776 to the Caltech Moore Center for Theoretical Cosmology and Physics.
This work is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632.
\section{Introduction}
This note is motivated by the desire to understand,
from a categorical viewpoint,
the recent work of Morton and Samuelson \cite{MS}
establishing an algebra isomorphism between
the torus skein algebra and the elliptic Hall algebra.
In the late 1980s, Turaev \cite{Turaev:ASMP, Turaev}
introduced the skein algebra $\Sk(\Sigma)$
as a $q$-deformation of
the Goldman Lie algebra \cite{Goldman}
on an oriented surface $\Sigma$.
The Goldman Lie algebra is the Lie algebra
encoding the symplectic structures of
character varieties on $\Sigma$.
Recently, Morton and Samuelson \cite{MS}
discovered a remarkable
relationship between the skein algebra of a torus $T$
and the Ringel-Hall algebra of an elliptic curve $E$.
Let us begin this introduction by explaining
Turaev's skein algebra.
\subsection{Skein algebra}
\label{subsec:skein}
Let $R:=\bbZ[s^{\pm1},v^{\pm1}]$.
The skein module of an oriented 3-manifold $M$ is
the quotient of the $R$-module spanned by
the isotopy classes of framed oriented links in $M$
by the skein relation $(\text{sk})$
and the framing relation $(\text{fr})$.
\begin{align*}
S(M) :=
\left<
\begin{array}{l}
\text{isotopy classes of} \\
\text{framed oriented links in $M$}
\end{array}
\right>_{R\text{-lin}}
\Big/ (\text{sk}),(\text{fr})
\end{align*}
The skein relation $(\text{sk})$
is defined by the following local diagram.
\begin{center}
[Diagram (sk): an oriented crossing minus the opposite crossing equals
$(s-s^{-1})$ times the diagram with the crossing replaced by
two parallel oriented strands.]
\end{center}
\noindent
We omit the definition of the framing relation.
It concerns twists of framed links,
and the variable $v$ counts the number of twists.
Let $I$ denote the unit interval.
The skein algebra of an oriented surface $\Sigma$ is
$\Sk(\Sigma) := S(\Sigma \times I)$ as an $R$-module
with the multiplication given by
placing one copy of $\Sigma \times I$
on top of another $\Sigma \times I$.
For example, if $\Sigma$ is a torus,
we set $L_{0,1}$, $L_{1,1}$ as in the diagrams below,
where the square represents the torus with opposite sides identified.
\begin{center}
[Diagrams: $L_{0,1}$, the vertical $(0,1)$ loop;
$L_{1,1}$, the diagonal $(1,1)$ loop;
and $L_{1,1} \cdot L_{0,1}$, both loops drawn together
with $L_{1,1}$ crossing over $L_{0,1}$.]
\end{center}
\noindent
Then the skein relation implies
the following equation,
whose terms correspond to the diagrams below
(orientations on links omitted):
\[
L_{1,1} \cdot L_{0,1} \ = \
L_{0,1} \cdot L_{1,1}+(s-s^{-1})L_{1,2}.
\]
\begin{center}
[Diagrams: the crossing $L_{1,1} \cdot L_{0,1}$,
the opposite crossing $L_{0,1} \cdot L_{1,1}$,
and the smoothing, the $(1,2)$ curve $L_{1,2}$.]
\end{center}
\begin{fct*}[{Turaev, \cite{Turaev}}]
$\Sk(\Sigma)$ is an $s$-deformation of
the Goldman Lie algebra of $\Sigma$.
\end{fct*}
Recall that the Goldman Lie algebra \cite{Goldman}
is the Lie algebra whose underlying vector space
is given by the space of free homotopy classes of
oriented loops on $\Sigma$
and whose Lie bracket is given by
\[
[\langle \alpha\rangle, \langle \beta\rangle]_{\text{Goldman}}
=\sum_{p \in \alpha \cap \beta} \pm \langle \alpha_p \beta \rangle.
\]
Here $\alpha:S^1 \to \Sigma$ is a loop on $\Sigma$,
and $\langle \alpha \rangle$ denotes its free homotopy class,
i.e.\ its conjugacy class in the fundamental group $\pi_1(\Sigma)$.
$\alpha\cap \beta$ is the set of intersection points,
and $\alpha_p\beta$ means a loop obtained by
$\alpha$ and $\beta$ joined at $p$.
The sign $\pm$ is defined by the orientation of $\Sigma$
and the intersection behavior of $\alpha$ and $\beta$ at $p$.
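For instance (an illustration we add here, not taken from \cite{Goldman}), let $\Sigma$ be a torus and let $\alpha$, $\beta$ be simple loops in the classes $(1,0)$ and $(0,1)$, realized with a single transverse intersection point $p$; then
\[
[\langle \alpha\rangle, \langle \beta\rangle]_{\text{Goldman}}
= \pm \langle \alpha_p \beta \rangle,
\]
with $\alpha_p \beta$ in the class $(1,1)$.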
The result of Morton and Samuelson \cite{MS} is that
the skein algebra of a torus is isomorphic to
a specialization of the elliptic Hall algebra.
So let us turn to explaining Hall algebras.
\subsection{Hall algebra}
\label{subsec:intro:Hall}
Let us briefly recall the theory of Hall algebras.
We refer to \cite{Sc:lect} for a good review of the subject.
Let $\bbF_p$ be a finite field.
A finitary category $\catC$
is an $\bbF_p$-linear abelian category
whose hom-spaces are finite dimensional.
Denote by $\Iso(\catC)$ the set of isomorphism classes
of objects in $\catC$.
The Ringel-Hall algebra \cite{Ringel} for $\catC$ is
the $\bbQ$-vector space
\[
\Hall(\catC) :=
\{ f: \Iso(\catC) \to \bbQ \mid \#\supp(f) < \infty \}
\]
with the multiplication
\[
f*g([M]) := \sum_{N \subset M}f([M/N])g([N]).
\]
Here we denoted by $[M]$
the isomorphism class of the object $M$.
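As a toy illustration of this formula (ours, not from \cite{Ringel}), take $\catC$ to be the category of finite-dimensional $\bbF_2$-vector spaces, so that $[M]$ is determined by $\dim M$. Then $(\delta_{[\bbF_2]} * \delta_{[\bbF_2]})([\bbF_2^2])$ counts the one-dimensional subspaces $N \subset \bbF_2^2$, each of which has a one-dimensional quotient:

```python
from itertools import product

def count_lines(n):
    # One-dimensional subspaces of F_2^n: each is spanned by a unique
    # nonzero vector, since the only nonzero scalar in F_2 is 1
    return sum(1 for v in product([0, 1], repeat=n) if any(v))

# (delta_[F_2] * delta_[F_2])([F_2^2]) = number of 1-dim N in F_2^2
# whose quotient is also 1-dimensional
print(count_lines(2))  # → 3, i.e. q + 1 with q = 2
```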
If $\catC$ is hereditary, then by Green \cite{Green},
$\Hall(\catC)$ is a bialgebra with the comultiplication
\[
\Delta(f)([M],[N]) := f([M \oplus N])
\]
together with the Hopf pairing
\[
\langle \delta_{[M]},\delta_{[N]} \rangle
:= \delta_{M,N}/\#\Aut_{\catC}(M).
\]
Here we denoted by $\delta_{[M]}$
the characteristic function of $[M] \in \Iso(\catC)$.
Since we have a bialgebra with a Hopf pairing,
we can consider its Drinfeld double
(see \cite{Sc:lect} and \cite{J} for good accounts).
For a hereditary and finitary abelian category $\catC$,
the Drinfeld double $\DHall(\catC)$
of $\Hall(\catC)$ is defined to be
\[
\DHall(\catC) :=
\Hall(\catC) \otimes_{\bbQ} \Hall(\catC)
\]
as a vector space,
and the multiplication is given by
\[
(m\otimes 1)\cdot(1 \otimes n) = m\otimes n,
\quad
\tsum \langle m_{(2)},n_{(1)}\rangle m_{(1)} \otimes n_{(2)}
=\tsum \langle m_{(1)},n_{(2)}\rangle
(1\otimes n_{(1)})\cdot(m_{(2)} \otimes 1).
\]
Here we used Sweedler's notation
$\Delta(m)=\sum m_{(1)} \otimes m_{(2)}$.
Now we can state the result of Burban and Schiffmann \cite{BS}
on the Hall algebra for an elliptic curve.
\begin{fct}[{Burban-Schiffmann, \cite{BS}}]
\label{fct:BS}
Let $E$ be an elliptic curve over $\bbF_p$ and
set $\catC := \Coh_{E}$, the category of coherent sheaves
over $E$.
Then
$\DHall(\catC)$ is generated by elements
$\{u_{x} \mid x \in \bbZ^2\setminus\{(0,0)\}\}$
with relations
\\
\quad
(1)
$[u_x,u_y]=0$ if $x,y$ are parallel,
\\
\quad
(2)
$[u_x,u_y]=\pm\theta_{x+y}/\alpha_1$
if the triangle $\Delta(0,x,x+y)$ contains
no interior integral points.
\\
Here $\theta_x$ is defined by
\[
1+\sum_{k\ge1} \theta_{k z} w^k
=\exp\Bigl(\sum_{n\ge1}\alpha_n u_{n z} w^n\Bigr)
\]
with
$\alpha_n:=(1-q^n)(1-t^{-n})(1-q^n t^{-n})/n$.
The symbols $q,t^{-1}$ denote the Weil numbers of $E$.
\end{fct}
Recall that the Weil numbers $q,t^{-1}$ enjoy the properties
\begin{equation}\label{eq:qt-Weil}
q/t=p,\quad
\zeta_E(z)
:=\exp\Bigl(\sum_{n\ge1} \# E(\bbF_{p^n})\frac{z^n}{n}\Bigr)
=\frac{(1-q z)(1-z/t)}{(1-z)(1-p z)}.
\end{equation}
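Expanding the logarithms of both sides of \eqref{eq:qt-Weil} and comparing the coefficients of $z^n/n$ recovers the point counts
\[
\# E(\bbF_{p^n}) = 1 + p^n - q^n - t^{-n}.
\]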
Let us call the algebra $\DHall(\Coh_E)$
the \emph{elliptic Hall algebra}.
Burban and Schiffmann also showed in \cite{BS}
that $\DHall(\Coh_E)$
has an integral basis $\{w_x\}$,
where each $w_x$ is proportional to $u_x$,
in the following sense.
\begin{dfn*}
Let $q,t$ be indeterminates.
Define $\EH_{q,t}$ to be
the algebra over $\bbZ[q^{\pm1/2},t^{\pm1/2}]$ generated by
$w_x$'s with relations (1), (2) in Fact \ref{fct:BS}.
\end{dfn*}
We omit the precise proportionality factor $w_x/u_x$.
The meaning of the integral basis is that,
if we choose $q$ and $t$ to be the Weil numbers of $E$,
then we have
\[
\DHall(\Coh_E) \otimes_{\bbQ} \bbC \simeq
\EH_{q,t} \otimes_{\bbZ[q^{\pm1/2},t^{\pm1/2}]} \bbC.
\]
\subsection{The Morton-Samuelson isomorphism}
Now we can explain
the Morton-Samuelson isomorphism mentioned in the beginning.
Denote by $L_{r,d} \in \Sk(\torus)$
the class of a loop winding $r$ times rightwards
and $d$ times upwards,
as in $L_{0,1}$ and $L_{1,1}$ in \S\ref{subsec:skein}.
\begin{fct*}[{Morton-Samuelson, \cite{MS}}]
\begin{align*}
\Sk(\torus) &\longsimto
\EH_{s^2,s^2} \otimes_{\bbZ[s^{\pm1}]}
\bbZ[s^{\pm1},v^{\pm1}],
\\
L_{r,d} &\longmapsto \
w_{(r,d)} \text{ if } \gcd(r,d)=1.
\end{align*}
\end{fct*}
Let us remark that
$\EH_{q,t=q}$ is still well-defined since
$\theta_x/\alpha_1 = ([\gcd(x)]_{q^{1/2}})^2 u_x$.
For $t=q$, the proportionality factor between $w_x$ and $u_x$
is given by
\[
w_x=(q^{d/2}-q^{-d/2})u_x
\]
with $d:=\gcd(x)$.
\subsection{The result of this note}
\label{subsec:intro:outline}
Morton and Samuelson proved the isomorphism
between the skein algebra and the elliptic Hall algebra
``by hand".
However, as they mentioned
in the introduction of their paper \cite{MS},
it is natural to invoke
homological mirror symmetry for torus/elliptic curve
\cite{K,PZ}:
\[
\DFuk(\torus) \simeq \DCoh(E), \quad
L_{1,d} \longleftrightarrow \calL_d.
\]
Here $L_{1,d}$ is the Lagrangian submanifold
corresponding to the loop $L_{1,d}$,
and $\calL_{d}$ is the line bundle over
the elliptic curve $E$ defined over $\bbC$.
One may guess that the Morton-Samuelson isomorphism
comes from this equivalence of categories.
However, there is one difficulty:
the algebra $\EH_{s^2,s^2}$ would correspond to
the elliptic curve $E$ with Weil numbers $q=t=s^2$.
Then by \eqref{eq:qt-Weil}
$E$ is defined over $\bbF_p$ with $p=q/t=1$.
So in the present situation $E$
should be ``defined over $\bbF_1$".
Thus some theory of schemes over $\bbF_1$ will be needed.
The purpose of this paper is to give
a construction of $\EH_{q,q}$ as the Hall algebra of an
``elliptic curve over $\bbF_1$".
More precisely, we proceed as follows.
\begin{description}
\item[B1]
Build a category $\catB$ using
the \emph{monoidal Tate curve} $\wh{E}$.
$\wh{E}$ is a monoidal scheme over the formal monoidal scheme
$\Spf\mnd{q}$,
which is seen as the space of the parameter $q$.
The category $\catB$ will be
considered as the category of
``coherent sheaves over $E/\bbF_1$".
\item[B2]
The category $\catB$ is not abelian, and it is not
even additive.
However it is a belian and quasi-exact category
in the sense of Deitmar \cite{D:bel,D:qext}.
Then Szczesny's construction of Hall algebra \cite{Sz}
can be applied, and we have an algebra $\DHall(\catB)$.
\item[B3]
Check $\DHall(\catB) \simeq \EH_{q,q}$.
\end{description}
The step \textbf{B3}, or Theorem \ref{thm:main},
is the main result of this note.
The steps \textbf{B1}--\textbf{B3} form the ``B-side"
of our strategy of the proof of Morton-Samuelson isomorphism
in terms of homological mirror symmetry.
For completeness,
let us also explain the remaining ``A-side".
\begin{description}
\item[A1]
Build another category $\catA$ associated to a torus,
which may be seen as the ``Fukaya category of tori over $\bbF_1$".
$\catA$ should have a parameter $s$ parametrizing
the symplectic form, or the area form, on the torus.
\item[A2]
Consider the Hall algebra $\Hall(\catA)$ and its Drinfeld double $\DHall(\catA)$.
\item[A3]
Check $\DHall(\catA)\simeq \Sk(\torus)$.
\end{description}
The Morton-Samuelson isomorphism will then be a consequence of
\begin{description}
\item[HMS]
Show $\catA \simeq \catB$.
\end{description}
One may prove the step \textbf{HMS} by
taking an appropriate $\bbF_1$-analogue of
the mirror symmetry for torus/elliptic curve
over $\bbZ$, shown by \cite{Gross,LP}.
\section{Hall algebra in the monoidal setting}
\label{sect:Hall}
In this section we explain Szczesny's definition of
the Hall algebra of monoid representations
and restate it in the setting of
quasi-exact and belian categories
in the sense of Deitmar.
\subsection{Category of modules over commutative monoid}
\label{subsec:A-mod}
Let $A$ be a (multiplicative) commutative monoid with
the absorbing element $0$, i.e.,
$a 0 = 0 a = 0$ for any $a \in A$.
An $A$-module is a pointed set $(M,*)$ with
$A$-action $\cdot: A \times M \to M$
such that $0 \cdot m = *$ for any $m \in M$.
We will write $M$ for $(M,*)$ for simplicity.
Morphisms, submodules and images
are defined as in the case of modules over a ring.
Let $M$ be an $A$-module and $N \subset M$ be a sub $A$-module.
The quotient $M/N$ is defined to be the module
with the underlying pointed set
$(M\setminus N)\sqcup \{*\}$.
We denote by $A$-$\modc$ the category of $A$-modules.
It is not abelian since
$\coim(f) \to \im(f)$
is not an isomorphism in general.
It is not even additive.
$A$-$\modc$ has the zero object $\{*\}$
in the sense that for every $A$-module $M$
there exists a unique $A$-morphism $M \to \{*\}$
and a unique $\{*\} \to M$.
We will denote $\{*\}$ by $0$ hereafter.
Now we have the notion of kernels and cokernels of morphisms.
For $A$-modules $M$ and $N$,
the direct sum $M \oplus N$ is defined to be
$M \sqcup N/\sim$, with $\sim$
identifying the base-points.
Here $\sqcup$ denotes the (set-theoretic) disjoint sum.
Then $M \oplus N$ is the coproduct in the categorical sense.
The product
$M \times N$ in the categorical sense
is defined to be the set-theoretic product with diagonal action.
One can also check that
$A$-$\modc$ has finite limits and colimits.
Let us state a property of $A$-$\modc$
which is necessary to prove the associativity
of the Hall algebra.
\begin{lem}\label{lem:noether}
For an $A$-module $M$ and its sub $A$-module $N \subset M$,
there is an inclusion-preserving correspondence between
sub $A$-modules $L$ with $N \subset L \subset M$
and sub $A$-modules of $M/N$, given by $L \mapsto L/N$.
\end{lem}
\subsection{Hall algebra of monoid representations}
Let $A$ be a commutative monoid
as in the previous subsection.
Since $A$-$\modc$ is not abelian,
one cannot directly apply the construction of the Hall algebra.
The idea of Szczesny \cite{Sz} is to restrict
the class of morphisms
so that we have a good family of exact sequences.
\begin{dfn}\label{dfn:normal}
A morphism $f$ in $A$-$\modc$ is called
\emph{normal}
if $\coim(f) \simto \im(f)$.
Denote by $A$-$\modc^n$
the subcategory of $A$-$\modc$
with the same objects
and only normal morphisms.
\end{dfn}
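A simple example of a non-normal morphism (our illustration): for the monoid $A=\{0,1\}$, let $f\colon \{*,a,b\} \to \{*,c\}$ be the map of $A$-modules sending both $a$ and $b$ to $c$. Then $\ker(f)=\{*\}$, so $\coim(f)=\{*,a,b\}$, while $\im(f)=\{*,c\}$; the canonical map $\coim(f)\to\im(f)$ is not injective, hence $f$ is not normal.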
Now in the category $A$-$\modc$
we have the notion of exact sequences consisting
of normal morphisms,
and can construct the Hall algebra as in the abelian case
(recall \S\ref{subsec:intro:Hall}).
\begin{fct}[{Szczesny, \cite[\S3]{Sz}}]
\label{fct:Hall:Sz}
Denote by $\Iso(A\text{-}\modc^n)$
the set of isomorphism classes of objects in $A$-$\modc^n$.
Then the $\bbQ$-vector space
\begin{equation}\label{eq:Hall:Amod}
\{f:\Iso(A\text{-}\modc^n) \to \bbQ \mid
\#\supp(f)<\infty\}
\end{equation}
is a bialgebra with the multiplication $*$
and the comultiplication $\Delta$ defined as
\[
f*g([M]):=\sum_{N \subset M}f([M/N])g([N]), \quad
\Delta(f)([M],[N]) := f([M \oplus N]).
\]
Here we denoted by $[M]$ the class of the object $M$.
\end{fct}
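To make the multiplication concrete, consider the simplest monoid $A=\{0,1\}$ (the ``field with one element"), whose modules are just finite pointed sets: an isomorphism class is the number $n$ of non-base points, and a submodule with $k$ non-base points is just a $k$-element subset, so there are $\binom{n}{k}$ of them. The following sketch is our own illustration of Fact \ref{fct:Hall:Sz} in this case.

```python
from math import comb

def hall_mult(f, g):
    # (f * g)([M]) = sum over submodules N of M of f([M/N]) g([N]);
    # for modules over {0,1} the class of M is its number n of non-base
    # points, and a submodule with k non-base points is a k-subset of them
    def h(n):
        return sum(comb(n, k) * f(n - k) * g(k) for k in range(n + 1))
    return h

def delta(m):
    # characteristic function of the class with m non-base points
    return lambda n: 1 if n == m else 0

d1 = delta(1)
print(hall_mult(d1, d1)(2))  # → 2: both one-point submodules contribute
print(hall_mult(hall_mult(d1, d1), d1)(3)
      == hall_mult(d1, hall_mult(d1, d1))(3))  # → True: associativity
```

The last line spot-checks associativity, which in general rests on Lemma \ref{lem:noether}.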
As mentioned before, the associativity is proved using
Lemma \ref{lem:noether}.
\begin{dfn}\label{dfn:Hall:Sz}
We denote this bialgebra by $\Hall(A\text{-}\modc^n)$.
\end{dfn}
\subsection{Hall algebra for quasi-exact category}
According to \cite{Sz},
the category $A$-$\modc$ is an example of
a quasi-exact and belian category in the sense of Deitmar
\cite{D:qext, D:bel},
or of a proto-exact category in the sense of
Dyckerhoff-Kapranov.
Let us restate the result of the previous subsection
in this categorical setting, as we will need it in later arguments.
\begin{dfn}
Let $\catC$ be a small category.
\\
(1)
$\catC$ is called \emph{balanced}
if every morphism
which is both monomorphism and epimorphism
is an isomorphism.
\\
(2)
$\catC$ is called \emph{pointed} if it has an object $0$
such that for every object $X$ the sets $\Hom_{\catC}(X,0)$
and $\Hom_{\catC}(0,X)$
of morphisms have exactly one element respectively.
Such $0$ is unique up to unique isomorphism,
and called the \emph{zero object}.
The unique element in $\Hom_{\catC}(X,0)$
and $\Hom_{\catC}(0,X)$ are called the \emph{zero morphism}.
\end{dfn}
For a pointed category,
one can introduce the notion of kernels and cokernels
using the zero morphism.
\begin{dfn}[{Deitmar \cite{D:bel}}]
A \emph{belian} category is a balanced pointed category
which has finite products, kernels and cokernels,
and has the property that every morphism with zero cokernel
is an epimorphism.
\end{dfn}
For a commutative monoid $A$,
the category $A$-$\modc$ is an example of a belian category.
In fact, one can check easily that
$A$-$\modc$ is balanced, and the other axioms are already
discussed in \S\ref{subsec:A-mod}.
Next we turn to quasi-exact categories.
Let $\catC$ be a balanced pointed category.
As usual, we have the notion of short exact sequence
\[
0 \to X \to Y \to Z \to 0
\]
of morphisms in $\catC$.
We will call $X \to Y$ the kernel
and $Y \to Z$ the cokernel of the short exact sequence.
\begin{rmk*}
In \cite{D:qext,D:bel}
Deitmar called our exact sequences
strong exact sequences.
\end{rmk*}
\begin{dfn}[{Deitmar \cite{D:qext}}]
A quasi-exact category is a balanced pointed category $\catC$
together with a class $\calE$ of short exact sequences
such that
\begin{itemize}
\item
for any two objects $X,Y$, the natural sequence
$0 \to X \to X \oplus Y \to Y \to 0$
belongs to $\calE$,
\item
kernels in $\calE$ are closed under composition
and base-change by cokernels in $\calE$,
\item
cokernels in $\calE$ are closed under composition
and base-change by kernels in $\calE$.
\end{itemize}
\end{dfn}
In the construction of Szczesny's Hall algebra,
we considered the category $A$-$\modc^n$ of $A$-modules
with normal morphisms.
One can check that $A$-$\modc^n$ together with
short exact sequences consisting of normal morphisms
is a quasi-exact category.
Now we can restate Szczesny's result (Fact \ref{fct:Hall:Sz}).
\begin{dfn}
A category $\catC$ is called finitary if
for any two objects $X$ and $Y$,
the set $\Hom_{\catC}(X,Y)$ is finite.
\end{dfn}
Let us denote a kernel $N \inj M$ in a quasi-exact category
by $N \subset M$.
\begin{prop}
Let $\catC$ be a finitary, belian and quasi-exact category
with the class $\calE$ of short exact sequences.
Then the $\bbQ$-vector space
\begin{equation}\label{eq:Hall:space}
\{f:\Iso(\catC) \to \bbQ \mid \#\supp(f)<\infty\}
\end{equation}
is a unital associative algebra with the multiplication
\begin{equation}\label{eq:Hall:mul}
f*g([M]):=\sum_{N \subset M}f([M/N])g([N]),
\end{equation}
with $[M]$ the class of the object $M$.
\end{prop}
The only non-trivial point is
the associativity of the multiplication,
which is proved by using Lemma \ref{lem:noether}.
Let us restate it in the present context.
For a quasi-exact category $\catC$ with the class $\calE$
of short exact sequences, we denote a kernel $X \to Y$
appearing in a sequence of $\calE$ by $X \inj Y$,
and denote a cokernel $Y \to Z$
appearing in a sequence of $\calE$ by $Y \surj Z$.
\begin{lem}
Let $\catC$ be a belian and quasi-exact category.
Suppose that we are given a commutative exact diagram
with solid arrows as displayed below.
\[
\xymatrix{
L \ar@{^{(}->}[r] \ar@{=}[d] &
M \ar@{->>}[r] \ar@{^{(}->}[d] &
X \ar@{^{(}.>}[d] \\
L \ar@{^{(}->}[r] &
N \ar@{->>}[r] \ar@{->>}[d] &
Y \ar@{.>>}[d] \\
& Z \ar@{=}[r] & Z
}
\]
Then there are dotted morphisms making the whole diagram
commutative and exact.
\end{lem}
The proof of the lemma is a standard diagram chase.
Next we turn to coalgebra and bialgebra structures.
The coalgebra structure is defined in a usual way.
\begin{lem}
Let $\catC$ be a finitary belian quasi-exact category.
The vector space \eqref{eq:Hall:space} has a structure of
counital coassociative coalgebra
with the comultiplication
\begin{equation}\label{eq:Hall:comul}
\Delta(f)([M],[N]):=f([M \oplus N]).
\end{equation}
\end{lem}
Now to construct a bialgebra, we need the hereditary condition
on our category $\catC$.
\begin{dfn}
A belian quasi-exact category $\catC$ is called hereditary
if $\Ext_{\catC}^n(X,Y)=0$ for any objects $X,Y$
and any $n \in \bbZ_{\ge2}$.
\end{dfn}
Here the higher extension $\Ext^n$ is defined
by the derived functor of $\Hom$.
The theory of derived functors for a belian category
is given in \cite[\S1.6]{D:bel}, and we will not repeat it.
Now the argument of \cite{Green}
(see also \cite[\S1.5]{Sc:lect})
on the Hall bialgebra for abelian categories
can be applied to our setting, and we have
\begin{thm}
Let $\catC$ be a belian quasi-exact category
which is finitary and hereditary.
The vector space \eqref{eq:Hall:space}
has a structure of bialgebra
with the multiplication \eqref{eq:Hall:mul}
and the comultiplication \eqref{eq:Hall:comul}.
\end{thm}
\section{Monoidal schemes, monoidal Tate curve and Hall algebra}
In this section we introduce the monoidal Tate curve $\wh{E}$.
The desired category $\catB$
mentioned in \S\ref{subsec:intro:outline}
will be constructed as a category of sheaves over $\wh{E}$.
Our monoidal Tate curve is a counterpart of
the standard Tate curve in the $\bbF_1$-scheme setting.
We will use Deitmar's theory \cite{D} of monoid schemes
as $\bbF_1$-scheme theory.
\subsection{Monoidal schemes}
We follow Deitmar \cite{D} and use \emph{monoidal schemes}
as the theory of schemes over $\bbF_1$.
For a commutative monoid $A$,
an ideal $\frka$ is a subset such that $\frka A \subset \frka$.
One can define prime ideals and localizations
as in the commutative ring case.
Now an affine monoidal scheme
$\Spec^{\mon}(A)$ is the set of prime
ideals of $A$ with the Zariski topology.
A monoidal scheme is a topological space $X$
with a sheaf
$\shO^{\mon}_X$ of monoids,
locally isomorphic to some $\Spec^{\mon}(A)$.
Given a monoidal scheme $X$,
we can define
$\shO^{\mon}_X$-modules,
(quasi-)coherent $\shO^{\mon}_X$-modules
and locally free $\shO^{\mon}_X$-modules
in a way quite similar to the usual scheme case.
We also have the notion of
a perfect complex of $\shO^{\mon}_X$-modules,
i.e., a cohomologically bounded complex
of coherent $\shO^{\mon}_X$-modules
which is locally quasi-isomorphic to a bounded complex
of locally free $\shO^{\mon}_X$-modules of finite ranks.
Relative notions can also be introduced for monoidal schemes.
In particular,
we have the notion of flat family of monoidal schemes,
i.e., a flat morphism $X \to S$ of monoidal schemes.
Let us explain the definition of flatness:
A commutative monoid $B$ over another $A$
is flat if the tensor product functor
$B \otimes_A (-): A\text{-}\modc^n \to B\text{-}\modc^n$
is exact.
We also have a functor
\begin{equation}\label{eq:XtoX_Z}
X \longmapsto X_{\bbZ}
\end{equation}
mapping a monoidal scheme $X$ to
a scheme $X_{\bbZ}$ over $\bbZ$.
It is induced by the functor $A \mapsto \bbZ[A]$ from
a commutative monoid $A$ to the monoid ring $\bbZ[A]$ of $A$.
\subsection{Monoidal Tate curve}
Now let us recall the Tate curve $\wh{E}_{\text{Tate}}$
in the sense of usual scheme,
following \cite[\S8.4.1]{Gross} and \cite[\S9.1]{LP}.
Consider the following toric data.
\begin{equation}\label{eq:tate:toric}
\begin{split}
&\rho_i := \bbQ_{\ge0}(i,1) \subset \bbQ^2 \ (i \in \bbZ),
\quad
\rho_i^\vee
:= \{x \in \bbQ^2 \mid \langle x,\rho_i\rangle
\subset \bbQ_{\ge 0}\}
= \{(x_1,x_2) \mid i x_1+x_2 \ge 0 \},
\\
&\sigma_{i+1/2}^\vee := \rho_i^\vee \cap \rho_{i+1}^\vee.
\end{split}
\end{equation}
For a commutative ring $R$,
set
$U_{i+1/2}:=\Spec R[\sigma_{i+1/2}^\vee \cap \bbZ^2]$.
These affine schemes glue to give a scheme $E_{\text{Tate}}$ over $R$
(see \cite[\S9.1.1]{LP} for the explicit form of this gluing map).
The map
$\sigma_{i+1/2}^{\vee} \to \bbQ$, $(x,y) \mapsto y$
gives a morphism $E_{\text{Tate}} \to \Spec R[q]$.
The Tate curve
$\wh{E}_{\text{Tate}} \to \Spf R[[q]]$
is a formal thickening of this morphism.
The Tate curve $\wh{E}_{\Tate}$ can be considered
as a family of elliptic curves over $R$ in the following sense.
$\bbZ$ acts on $E_{\Tate}$ in the way
that $1 \in \bbZ$ corresponds to
the matrix $\begin{pmatrix}1&1 \\ 0&1\end{pmatrix}$
acting on $(\bbQ^2)^{\vee}$.
Taking $R=\bbC$, we can describe this action on the big
torus $(\bbC^*)^2 \subset E_{\text{Tate}}$ as
$(z,q)\mapsto (z q, q)$ with $z,q$ corresponding to
the basis $(1,0),(0,1)$ of $\bbQ^2$.
If we fix $q \in \bbC \setminus\{0\}$ and consider the fiber
of the map $E_{\text{Tate}} \to \Spec \bbC[q]$,
then the $\bbZ$-action on this fiber is
generated by $z \mapsto z q$,
and the quotient is identified with $\bbC^*/q^{\bbZ}$.
Now we construct the \emph{monoidal Tate curve} $\wh{E}$
defined by replacing rings in the
construction of $\wh{E}_{\text{Tate}}$ by monoids.
So we consider the toric data \eqref{eq:tate:toric}
and set
$U_{i+1/2}^{\mon}:=\Spec^{\mon}
\langle \sigma_{i+1/2}^\vee \cap \bbZ^2\rangle$.
Gluing and formal thickening give
$\wh{E} \to \Spf^{\mon}\mnd{q}$,
where
$\Spf^{\mon}\mnd{q}$ is the
(formal) monoidal scheme of $\mnd{q} := q^{\bbN}$.
As mentioned above in the case of $\wh{E}_{\text{Tate}}$,
the monoidal Tate curve $\wh{E}$ has a $\bbZ$-action.
Recall the functor \eqref{eq:XtoX_Z} mapping
each monoidal scheme to a scheme over $\bbZ$.
By construction we have
\begin{equation}\label{eq:EtoEtate}
\wh{E}_{\bbZ} \simeq \wh{E}_{\Tate}.
\end{equation}
For later use we consider the zeta function of $\wh{E}$
using Deitmar's result \cite{D:qext}.
The (local) zeta function of a monoidal scheme $X$
is defined to be
$\zeta_{X}(z):=\exp(\sum_{n\ge1}\#X_{\bbZ}(\bbF_{p^n})z^n/n)$
with $p$ a prime number.
Then by \eqref{eq:EtoEtate} and by the fact
that the Tate curve is an elliptic curve over $\bbZ$,
we have
\begin{equation}\label{eq:zeta}
\zeta_{\wh{E}/\mnd{q}}(z)=\frac{(1-q z)(1-z/q)}{(1-z)^2}.
\end{equation}
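As a formal sanity check of \eqref{eq:zeta}, one can expand the logarithm of the right-hand side and read off the ``point counts'' $N_n = 2 - q^n - q^{-n}$, i.e.\ those of an elliptic curve whose Frobenius eigenvalues are $q$ and $1/q$; interpreting the parameter $q$ this way is our reading of the formula, used only for this symbolic verification.

```python
import sympy as sp

# Sanity check of eq:zeta: writing zeta(z) = exp(sum_{n>=1} N_n z^n / n),
# the rational expression (1-qz)(1-z/q)/(1-z)^2 should encode the
# "point counts" N_n = 2 - q^n - q^{-n}.  (Reading q as a Frobenius
# eigenvalue here is our illustrative assumption.)
z, q = sp.symbols('z q')
zeta = (1 - q*z) * (1 - z/q) / (1 - z)**2
log_series = sp.series(sp.log(zeta), z, 0, 5).removeO()
for n in range(1, 5):
    coeff = sp.simplify(log_series.coeff(z, n))
    assert sp.simplify(coeff - (2 - q**n - q**(-n))/n) == 0
```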
\section{Hall algebra of monoidal Tate curve}
We now introduce the category $\catB$
outlined in \S\ref{subsec:intro:outline},
and define the Hall algebra associated to it
using the general theory in \S\ref{sect:Hall}.
\subsection{Category of sheaves over monoidal scheme}
We want to consider the category of
$\shO^{\mon}_{\wh{E}}$-modules, i.e.
sheaves over the monoidal Tate curve.
An $\shO^{\mon}_{\wh{E}}$-module
consists of the modules $M_{i+1/2}$
over the commutative monoids
$\langle \sigma^{\vee}_{i+1/2} \cap \bbZ^2\rangle$
and of the gluing data.
As in the case of the category $A$-$\modc$
of monoid representations,
the category of $\shO^{\mon}_{\wh{E}}$-modules
with normal morphisms
is a belian quasi-exact category.
We can restrict the consideration to
coherent $\shO^{\mon}_{\wh{E}}$-modules,
and denote by $\Coh_{\wh{E}}$
the belian quasi-exact category of
coherent $\shO^{\mon}_{\wh{E}}$-modules with normal morphisms,
which is obviously finitary and hereditary.
\subsection{Fourier-Mukai transform}
Before introducing some $\shO^{\mon}_{\wh{E}}$-modules,
let us recall the standard facts on the sheaves over
elliptic curves in the usual scheme theory.
Hereafter the word ``a sheaf over a scheme $X$'' means
an $\shO_X$-module,
and the word ``a vector bundle'' means a locally free sheaf.
Recall that for a smooth projective curve $C$
over a field any coherent $\shO_C$-module
has the Harder-Narasimhan filtration,
and the associated factors are semi-stable sheaves.
Each semi-stable sheaf has
a Jordan-H\"{o}lder filtration
and the associated factors are stable sheaves.
Let us denote by $\Coh^{ss}_C(r,d)$
the category of coherent semi-stable $\shO_C$-modules
with rank $r$ and degree $d$.
Denote also by $\Sky_C(d)$
the category of skyscraper sheaves on $C$ of length $d$.
We have $\Sky_C(d)=\Coh^{ss}_C(0,d)$.
In the case of an elliptic curve $C=E$,
the Fourier-Mukai transform gives the equivalence
of categories
\begin{equation}\label{eq:FMT:S}
\Coh^{ss}_E(r,d) \simto \Coh^{ss}_E(d,-r)
\text{ if } d>0,
\quad
\Coh^{ss}_E(r,0) \simto \Sky_E(r)
\text{ if } r>0.
\end{equation}
The tensor product functor $- \otimes_{\shO_E} L$
with a degree one line bundle $L$
gives the equivalence
\begin{equation}\label{eq:FMT:T}
\Coh^{ss}_E(r,d) \simto \Coh^{ss}_E(r,d+r).
\end{equation}
These equivalences give a complete picture of
the category $\Coh_E$ of coherent sheaves over $E$.
We refer to \cite[Chap.\ 3]{BBH} for a nice account.
Let us recall a part of the description of $\Coh_E$.
All stable vector bundles are
simple semi-homogeneous vector bundles.
Here a semi-homogeneous vector bundle is a sheaf
of the form $\pi_* L$ with some isogeny $\pi: E' \to E$
and a line bundle $L$ on $E'$.
\subsection{Fourier-Mukai transform in the monoidal setting}
Now let us turn to the monoidal Tate curve $\wh{E}$.
The notion of relative locally free sheaves
is well-defined in the monoidal setting.
We also have the notion of relative divisors,
so that the rank and degree of relative sheaves are well-defined.
As in the case of the usual elliptic curve,
we have the relative Jacobian monoidal scheme
$\Jac_{\wh{E}/\mnd{q}}$
parametrizing the degree 0 line bundles over $\wh{E}/\mnd{q}$
with the universal family $\calP$ over
$\wh{E} \times \Jac_{\wh{E}/\mnd{q}}$.
The sheaf $\calP$ is nothing but the Poincar\'{e} bundle.
As in the usual case,
we have a natural isomorphism
$\Jac_{\wh{E}/\mnd{q}} \simeq \wh{E}$.
Thus we have the Fourier-Mukai transform
$\frkS(-):=(\bfR\pi_1)_*(\pi_2^*(-)\otimes \calP)$
with $\pi_i$ the projection from
$\wh{E} \times \Jac_{\wh{E}/\mnd{q}}$ to the $i$-th factor.
Here we used the derived functors in the monoidal scheme setting
\cite{D:bel}.
We also denote by $\frkT(-):= L \otimes (-)$
the tensor product functor with
the relative line bundle of degree $1$ on $\wh{E}/\mnd{q}$.
The functors $\frkS$ and $\frkT$ are derived equivalences,
and their actions
are similar to those in the usual scheme case \eqref{eq:FMT:S}
and \eqref{eq:FMT:T} if we appropriately change the definition
of the subcategories $\Coh_{\wh{E}}^{ss}$.
In particular,
the rank and degree of sheaves, denoted by $(r,d)$,
are changed to $(d,-r)$ and $(r,d+r)$ respectively.
In other words,
the group generated by $\frkS$ and $\frkT$ acts on
the lattice $\bbZ^2$ of ranks and degrees by
the natural $\SL(2,\bbZ)$ action.
\subsection{The category $\catB$ and Hall algebra}
Let us recall the Fourier-Mukai transform
$\frkS$ introduced in the previous subsection.
We denote by $\Sky_{\wh{E}/\mnd{q}}(d)$
the category of relative skyscraper sheaves with length $d$
and normal morphisms.
We also denote by $\Sky_{\wh{E}/\mnd{q}}$
the full subcategory of $\Coh_{\wh{E}/\mnd{q}}$
consisting of $\Sky_{\wh{E}/\mnd{q}}(d)$'s
for all $d \in \bbZ_{\ge0}$ and normal morphisms.
\begin{dfn}
Define $\catB$ to be the subcategory of
$\shO^{\mon}_{\wh{E}}$-modules
consisting of the image of $\Sky_{\wh{E}/\mnd{q}}$
under repeated applications of
$\frkS^{\pm1}$ and $\frkT^{\pm1}$
with normal morphisms.
\end{dfn}
One can apply Szczesny's construction of the Hall algebra
(Definition \ref{dfn:Hall:Sz}) to $\catB$.
Denote by $\Hall(\catB)$ the resulting algebra.
It is a bialgebra with Hopf pairing,
so we can consider its Drinfeld double,
denoted by $\DHall(\catB)$.
\begin{thm}\label{thm:main}
As associative algebras over $\bbZ[q^{\pm1}]$,
we have the isomorphism
\[
\DHall(\catB) \simeq \EH_{q,q}.
\]
\end{thm}
One remark is in order.
The algebra $\DHall(\catB)$ is defined over $\bbQ$.
But in the following argument
we show that it has an integral basis and
can be seen to be defined over $\bbZ[q^{\pm1}]$.
Our argument is the same as \cite{BS},
except the point that
we use $\frkS$ and $\frkT$ instead of
the usual Fourier-Mukai transform
and the tensor product functor with line bundle.
We will not repeat the detailed discussion,
but explain some key steps.
First we consider the grading
and commutative subalgebras of $\DHall(\catB)$
arising naturally from the geometric setting.
Consider the subalgebra
generated by the classes of objects in
$\Sky_{\wh{E}/\mnd{q}}$.
We denote by $S(0,d)$ the class of
the relative skyscraper sheaf
with length $d$.
Recall also that $\delta_{L}$
is the characteristic function of $L$.
\begin{prop}\label{prop:csa}
The subalgebra $\Hall(\Sky_{\wh{E}/\mnd{q}})$
of $\DHall(\catB)$
is isomorphic to the polynomial algebra in
infinitely many variables $\delta_{S(0,d)}$, $d\in\bbZ_{\ge1}$.
\end{prop}
\begin{proof}
The exact sequences in $\Sky_{\wh{E}/\mnd{q}}$
are all split,
so that the algebra
$\Hall(\Sky_{\wh{E}/\mnd{q}})$ is commutative.
It is also clear that $S(0,d)$'s generate
$\Hall(\Sky_{\wh{E}/\mnd{q}})$,
so the statement holds.
\end{proof}
The argument of \cite[\S4.1]{BS} or
\cite[Chap.\ III]{Macd}
gives an explicit description of
the commutative subalgebra $\Hall(\Sky_{\wh{E}/\mnd{q}})$.
In fact, it is a bialgebra
under the comultiplication \eqref{eq:Hall:comul},
and it is isomorphic to the classical Hall bialgebra.
In the present context, we have
\begin{prop}
The subalgebra $\Hall(\Sky_{\wh{E}/\mnd{q}})$
is isomorphic to the subalgebra of $\EH_{q,q}$
generated by $w_{(0,d)}$, $d \in \bbZ_{\ge1}$.
The isomorphism is given by
$(q^{d/2}-q^{-d/2})\delta_{S(0,d)} \mapsto w_{(0,d)}$.
\end{prop}
By the description of $\EH_{q,q}$, this isomorphism
enables us to consider the subalgebra
$\Hall(\Sky_{\wh{E}/\mnd{q}})$ to be defined
over $\bbZ[q^{\pm1}]$ with basis
$(q^{d/2}-q^{-d/2})\delta_{S(0,d)}$.
Next, we note the following properties of our Hall algebra.
\begin{prop}
(1)
$\DHall(\catB)$ has a $\bbZ^2$-grading.
We will denote the associated decomposition as
$\DHall(\catB)=\oplus_{(r,d) \in \bbZ^2}H^{r,d}$.
\\
(2)
The group $\SL(2,\bbZ)$ acts on
$\DHall(\catB)$ as algebra automorphisms,
which induces the natural $\SL(2,\bbZ)$-action on
the grading $\bbZ^2$.
\\
(3)
For a coprime pair $(r_0,d_0) \in \bbZ^2$,
the submodule $\oplus_{n \in \bbZ} H^{n r_0, n d_0}$
is a commutative algebra isomorphic to
the Drinfeld double of $\Hall(\Sky_{\wh{E}/\mnd{q}})$.
\end{prop}
\begin{proof}
(1)
The rank and degree of a sheaf give the grading.
\\
(2)
This is a consequence of the derived equivalences
$\frkS$ and $\frkT$.
\\
(3)
The functors $\frkS$ and $\frkT$
give the standard
$\SL(2,\bbZ)$-action on the grading $\bbZ^2$.
Note also that
the Drinfeld double $\DHall(\Sky_{\wh{E}/\mnd{q}})$
is the same as $\oplus_{d \in \bbZ}H^{0,d}$.
Using the actions of $\frkS$ and $\frkT$,
one can map $\oplus_{d \in \bbZ}H^{0,d}$ to a given
$\oplus_{n \in \bbZ} H^{n r_0, n d_0}$.
Since $\frkS$ and $\frkT$ preserve exact triangles,
we find that $\oplus_{n \in \bbZ} H^{n r_0, n d_0}$
is isomorphic to $\oplus_{d \in \bbZ}H^{0,d}$.
\end{proof}
The remaining part of the proof of Theorem \ref{thm:main}
is the construction of the algebra homomorphism
$\varphi:\EH_{q,q} \to \DHall(\catB)$
sending generator $w_{(r,d)}$
with coprime $(r,d) \in \bbZ^2$ to $\delta_{S(r,d)}$,
where $S(r,d)$ is the class of a stable sheaf
with rank $r$ and degree $d$.
In fact, the correspondence
$w_{(r,d)} \mapsto \delta_{S(r,d)}$
determines the vector space homomorphism
$\varphi$ uniquely,
since each object of $\catB$ has
the Harder-Narasimhan filtration and
the Jordan-H\"{o}lder filtration.
Then the argument in \cite[\S5]{BS} can be applied to
our situation, showing that $\varphi$ is an algebra homomorphism.
Thus we have the isomorphism $\EH_{q,q} \to \DHall(\catB)$.
\subsection*{Acknowledgements.}
The author is supported by the Grant-in-Aid for
Scientific Research (No.\ 16K17570), JSPS.
This work is also supported by the
JSPS Program for Advancing Strategic International Networks to
Accelerate the Circulation of Talented Researchers
``Mathematical Science of Symmetry, Topology and Moduli,
Evolution of International Research Network based on OCAMI''
and by the JSPS Bilateral Program
``Topological Field Theory and String Theory --
from topological recursion to quantum toroidal algebras''.
The author would like to thank Y.~Yonezawa for
the stimulating discussion on skein algebras.
2212.09575
\section{Introduction}
The charged and neutral $B \to K \mu^+ \mu^-$ decays are powerful probes to test the Standard Model of particle physics (SM) and to search for New Physics (NP). For several years, these decays and similar processes originating from the same $b\to s\mu^+\mu^-$ quark-level transition have exhibited interesting tensions between the SM predictions and experimental data \cite{LHCb:2014cxe,LHCb:2014vgu,LHCb:2021trn,BELLE:2019xld,BaBar:2012mrf}.
A multitude of global effective field theory (EFT) analyses have been performed to determine the short-distance physics that could cause these anomalies (see e.g.~\cite{Bobeth:2017vxj,Altmannshofer:2021qrr,Alguero:2022wkd,Gubernari:2022hxn,Geng:2021nhg,Carvunis:2021jga,Mahmoudi:2022hzx,SinghChundawat:2022zdf}). The EFT framework allows for model-independent analyses in terms of operators and their short-distance Wilson coefficients which -- in most analyses -- are considered to be real. However, in general, NP could also provide new sources of CP violation, which are encoded in complex phases of Wilson coefficients.
While the usual key players in the search for new CP violation are non-leptonic $B$ decays \cite{Fleischer:2002ys,Fleischer:2022axm}, CP violation might also enter leptonic \cite{Fleischer:2017ltw} and semileptonic rare $B$ decays \cite{Bobeth:2011gi,Alok:2011gv,Becirevic:2020ssj,Bordone:2021olx,Descotes-Genon:2020tnz,Descotes-Genon:2015hea}.
For the charged $B^\pm \to K^\pm \mu^+\mu^-$ modes, direct CP violation could arise from the interference of two amplitudes with different CP-conserving and CP-violating phases. In the decays at hand, the CP-conserving phases arise through $\bar{c}c$ resonances. In the SM, the CP-violating phases come from elements of the unitary Cabibbo--Kobayashi--Maskawa (CKM) matrix.
For the decays we study here, the CKM matrix element $|V_{cb}|$ is particularly relevant. Unfortunately, we are facing discrepancies between inclusive and exclusive determinations of this quantity, which has implications for SM calculations of the decay rate. In our analysis, we pay special attention to these determinations and address them separately, in line with \cite{DeBruyn:2022zhw}. Neglecting tiny doubly Cabibbo-suppressed terms, the direct CP asymmetry vanishes in the SM. Consequently, a nonzero value would be an unambiguous sign of new sources of CP violation.
A qualitative NP analysis of direct CP violation thus crucially relies on the description of resonance effects \cite{Becirevic:2020ssj}.
These non-perturbative, i.e.\ long-distance effects are challenging to compute (see e.g.\ \cite{Khodjamirian:2012rm, Lyon:2013gba, Ciuchini:2015qxb, Capdevila:2017ert, Gubernari:2022hxn}).
Here we use a long-distance model implemented by the LHCb Collaboration based on the Kr\"uger--Sehgal approach \cite{Kruger:1996cv, Kruger:1996dt}. From a fit to their experimental data, four sets of fit parameters (branches) were obtained \cite{LHCb:2016due}.
In order to obtain the full picture, we also consider the neutral $B_d^0\to K_S\mu^+\mu^-$ decay. In this channel, in addition to direct CP violation, there is also mixing-induced CP violation arising from interference between $\bar{B}^0_d$--$B^0_d$ mixing and $B_d^0, \bar{B}_d^0$ decays into the $K_S\mu^+\mu^-$ final state. In contrast to direct CP violation, mixing-induced CP violation does not require CP-conserving phases. Therefore it is much more robust with respect to long-distance effects \cite{Descotes-Genon:2020tnz}.
In this paper, we present a new strategy that exploits the complementary information provided by the direct and mixing-induced CP asymmetries to determine the complex phases of the Wilson coefficients $C_{9\mu}$ and $C_{10\mu}$. We demonstrate that different sources of new CP-violating physics leave distinct ``fingerprints'' on the observable space, allowing us to transparently determine the Wilson coefficients. This is possible even without making specific assumptions, such as having NP only in $C_{9\mu}$ or assuming the relation $C_{9\mu}^{\rm NP}=-C_{10\mu}^{\rm NP}$ as is frequently done in the literature. In order for the strategy to work, we need to discriminate between the four long-distance branches. We show how we can do this by considering the CP asymmetries in specific $q^2$ bins.
A first constraint on the direct CP asymmetry has been obtained by the LHCb Collaboration \cite{LHCb:2014mit}. However, the mixing-induced CP asymmetry in $B_d^0\to K_S\ell^+\ell^-$ has not yet been measured. Consequently, we consider benchmark scenarios to illustrate the procedure.
CP-violating NP may also have different strengths for muons versus electrons, which can be probed through the lepton-flavour universality ratio $R_K$ \cite{Bobeth:2007dw,Hiller:2014yaa,Robinson:2021cws}. When allowing for CP-violating effects, special care has to be taken by measuring these ratios for the $B^- (\bar{B}^0_d)$ and $B^+ ({B}^0_d)$ modes separately. In this paper, we define several new observables and find a new way to measure the direct CP asymmetry in $B\to K e^+ e^-$.
This paper is organized as follows: In Section \ref{ch:B_to_Kellell_theoretical_framework}, we introduce the theoretical framework and observables, as well as the model for the long-distance effects. We calculate the SM prediction for the branching ratio of $B^\pm \to K^\pm \mu^+\mu^-$ and, while doing so, address the uncertainty arising from discrepancies between different determinations of the CKM element $|V_{cb}|$. Then, in Section \ref{ch:correlations}, we discuss how we can fingerprint NP with direct and mixing-induced CP asymmetries in $B\to K \mu^+\mu^-$. We discuss the interplay of these observables and demonstrate how we can use them to extract the complex values of $C_{9\mu}$ and $C_{10\mu}$.
In Section \ref{ch:RK}, we discuss lepton flavour universality. Finally, we conclude in Section \ref{ch:conclusions}.
\section{Theoretical framework}
\label{ch:B_to_Kellell_theoretical_framework}
\subsection{Effective Hamiltonian}
The low-energy effective Hamiltonian for $b \to s \ell^+\ell^-$ transitions is \cite{Descotes-Genon:2020tnz, Altmannshofer:2008dz, Gratrex:2015hna,Buchalla:1995vs}
\begin{equation}\label{eq:ham}
\mathcal{H}_{\rm eff} = - \frac{4 G_F}{\sqrt{2}} \left[\lambda_u \Big\{C_1 (\mathcal{O}_1^u - \mathcal{O}_1^c) + C_2 (\mathcal{O}_2^u - \mathcal{O}_2^c)\Big\} + \lambda_t \sum\limits_{i \in I} C_i \mathcal{O}_i \right] \ ,
\end{equation}
where $\lambda_q = V_{qb} V_{qs}^*$ and $I = \{1c, 2c, 3, 4, 5, 6, 8, 7^{(\prime)}, 9^{(\prime)}\ell, 10^{(\prime)}\ell, S^{(\prime)}\ell, P^{(\prime)}\ell, T^{(\prime)}\ell\}$. We neglect the terms proportional to $\lambda_u$ which are doubly Cabibbo suppressed and contribute at the $\mathcal{O}(\lambda^2) \sim 5\%$ level, with $\lambda\equiv |V_{us}|$ in the Wolfenstein expansion of the CKM matrix \cite{Wolfenstein:1983yz,Buras:1994ec}.
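As a quick numerical illustration of the quoted suppression (the ballpark input $\lambda \approx 0.225$ is our assumption):

```python
# The doubly Cabibbo-suppressed terms neglected above enter at
# O(lambda^2); with the Wolfenstein parameter lambda = |V_us| ~ 0.225
# (a ballpark input assumed here), this is indeed at the ~5% level.
lam = 0.225
suppression = lam**2
assert abs(suppression - 0.05) < 0.005
```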
We consider the following operators\footnote{In comparison with \cite{Descotes-Genon:2020tnz}, we have absorbed a factor $m_b$ into the definition of $\mathcal{O}_{S\ell}$ and $\mathcal{O}_{P\ell}$. }:
\begin{equation}
\begin{aligned}
\mathcal{O}_{7^{(\prime)}} &= \frac{e}{(4\pi)^2} m_b [\bar s \sigma^{\mu\nu} P_{R(L)} b] F_{\mu\nu}, & \mathcal{O}_{S^{(\prime)}\ell} &= \frac{e^2}{(4\pi)^2} m_b [\bar s P_{R(L)} b] (\bar \ell \ell),\\
\mathcal{O}_{9^{(\prime)}\ell} &= \frac{e^2}{(4\pi)^2} [\bar s \gamma^\mu P_{L(R)} b] (\bar \ell \gamma_\mu \ell), & \mathcal{O}_{P^{(\prime)}\ell} &= \frac{e^2}{(4\pi)^2} m_b [\bar s P_{R(L)} b] (\bar \ell \gamma_5 \ell),\\
\mathcal{O}_{10^{(\prime)}\ell} &= \frac{e^2}{(4\pi)^2} [\bar s \gamma^\mu P_{L(R)} b] (\bar \ell \gamma_\mu \gamma_5 \ell), & \mathcal{O}_{T^{(\prime)}\ell} &= \frac{e^2}{(4\pi)^2} [\bar s \sigma^{\mu\nu} P_{R(L)} b] (\bar \ell \sigma_{\mu\nu} \ell) \ ,
\end{aligned}
\end{equation}
with $P_{R(L)} = \frac{1}{2} (1 \pm \gamma_5)$ and $\sigma_{\mu\nu} = \frac{i}{2}[\gamma_\mu, \gamma_\nu]$. The index $\ell$ denotes the flavour of the lepton, which we will omit for simplicity whenever it is clear from the context whether we consider a generic lepton or a specific flavour. In this paper, we will consider only light leptons, i.e.\ $\ell = \mu, e$.
We use the following standard parameterization for the local form factors,
\begin{align}\label{eq:localff}
\mel{K^-(p_K)}{\bar s \gamma_\mu b}{B^- (p_B)} &= (p_B + p_K)_\mu f_+(q^2) + \frac{m_B^2 - m_K^2}{q^2} q_\mu (f_0(q^2) - f_+(q^2)), \nonumber\\
\mel{K^-(p_K)}{\bar s \sigma_{\mu\nu} b}{B^- (p_B)} &= i\bigg{[} (p_B + p_K)_\mu q_\nu - (p_B - p_K)_\nu q_\mu \bigg{]} \frac{f_T(q^2)}{m_B + m_K}, \nonumber \\
\mel{K^-(p_K)}{\bar s b}{B^- (p_B)} &= \frac{m_B^2 - m_K^2}{m_b - m_s} f_0(q^2),
\end{align}
where $q = p_B-p_K$ is the momentum transfer to the final-state lepton-antilepton pair.
The angular distribution for $B^- \to K^- \ell^+\ell^-$ takes the form (see e.g. \cite{Descotes-Genon:2020tnz, Gratrex:2015hna,Bobeth:2012vn})
\begin{equation}
\begin{aligned}
\frac{d \Gamma(B^- \to K^-\ell^+\ell^-)}{dq^2 d\cos\theta_\ell} &= \bar{G}_0(q^2) + \bar{G}_1(q^2) \cos\theta_\ell + \bar{G}_2(q^2)\frac{1}{2} \left(3\cos^2\theta_\ell -1\right) \ ,
\end{aligned}
\label{eq:BtoKellell_diff_decay_ratecos}
\end{equation}
where $\theta_\ell$ is defined as the angle between the $\ell^-$ three-momentum and the reverse of the $B^-$ three-momentum in the dilepton rest frame. Integrating \eqref{eq:BtoKellell_diff_decay_ratecos} over the full range $\theta_\ell \in [0,\pi]$ gives
\begin{equation}
\begin{aligned}
\frac{d \Gamma(B^- \to K^-\ell^+\ell^-)}{dq^2} &= 2\bar{G}_0(q^2)
\end{aligned}
\label{eq:BtoKellell_diff_decay_rate}
\end{equation}
with
\begin{equation}
\begin{aligned}
\bar{G}_0 &= \frac{4}{3}(1 + 2 \hat m_\ell^2) \abs{\bar h_V}^2 + \frac{4}{3} \beta_\ell^2 \abs{\bar h_A}^2 + 2 \beta_\ell^2 \abs{\bar h_S}^2 + 2 \abs{\bar h_P}^2\\
&\quad+ \frac{8}{3}(1 + 8 \hat m_\ell^2) \abs{\bar h_{T_t}}^2 + \frac{4}{3}\beta_\ell^2 \abs{\bar h_T}^2 + 16 \hat m_\ell \Im[\bar h_V \bar h_{T_t}^*] \ ,
\end{aligned}
\label{eq:G0bar}
\end{equation}
where $\hat m_\ell \equiv m_\ell/\sqrt{q^2}$ and $\beta_\ell \equiv \sqrt{1 - 4 \hat m_\ell^2}$ \cite{Descotes-Genon:2020tnz}. The coefficients $\bar{G}_1$ and $\bar{G}_2$ are given in \cite{Descotes-Genon:2020tnz}. The decay rate is a function of the amplitudes
\begin{align}
\bar h_V &= \mathcal{N} \frac{\sqrt{\lambda_B}}{2 \sqrt{q^2}} \bigg[ \frac{2 m_b}{m_B + m_K} (C_7 + C_{7'}) f_T(q^2) + (C_9 + C_{9'}) f_+(q^2) \bigg], \label{eq:h_amplitudes_V}\\
\bar h_A &= \mathcal{N} \frac{\sqrt{\lambda_B}}{2 \sqrt{q^2}} (C_{10}+C_{10'}) f_+(q^2),\\
\bar h_S &= \mathcal{N} \frac{m_B^2 - m_K^2}{2} \frac{m_b}{m_b - m_s} (C_S + C_{S'}) f_0(q^2),\\
\bar h_P &= \mathcal{N} \frac{m_B^2 - m_K^2}{2} \bigg[ \frac{m_b (C_P + C_{P'})}{m_b - m_s} + \frac{2 m_\ell}{q^2} (C_{10} + C_{10'}) \bigg] f_0(q^2),\\
\bar h_T &= -i \mathcal{N} \frac{\sqrt{\lambda_B}}{\sqrt{2} (m_B + m_K)} (C_T - C_{T'}) f_T(q^2),\\
\bar h_{T_t} &= -i \mathcal{N} \frac{\sqrt{\lambda_B}}{2 (m_B + m_K)} (C_T + C_{T'}) f_T(q^2),
\label{eq:h_amplitudes_Tt}
\end{align}
where $\lambda_B = \lambda(m_B^2, m_K^2, q^2)$ is the Källén function and the normalization factor is
\begin{equation}\label{eq:curlyN}
\mathcal{N} = - \frac{\alpha G_F}{\pi} V_{ts}^* V_{tb} \sqrt{\frac{q^2 \beta_\ell \sqrt{\lambda_B}}{2^{10} \pi^3 m_B^3}} \ .
\end{equation}
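For reference, the kinematic ingredients $\lambda_B$ and $\beta_\ell$ entering the amplitudes and \eqref{eq:curlyN} can be sketched as follows; the numerical mass inputs are ballpark values assumed purely for illustration.

```python
from math import sqrt

# Kinematic helpers: the Kallen function lambda(a,b,c) and the
# lepton velocity beta_ell.  Masses below (GeV) are assumed
# ballpark inputs for illustration, not the paper's fitted values.
def kallen(a, b, c):
    """lambda(a,b,c) = a^2 + b^2 + c^2 - 2(ab + bc + ca)."""
    return a*a + b*b + c*c - 2*(a*b + b*c + c*a)

def beta_ell(m_ell, q2):
    """beta_ell = sqrt(1 - 4 m_ell^2 / q^2)."""
    return sqrt(1.0 - 4.0 * m_ell**2 / q2)

m_B, m_K, m_mu = 5.279, 0.4937, 0.1057  # GeV, assumed inputs
q2 = 6.0  # GeV^2, a point in the low-q^2 region
lam_B = kallen(m_B**2, m_K**2, q2)
assert lam_B > 0                      # physical phase space
assert 0 < beta_ell(m_mu, q2) < 1     # massive lepton: beta < 1
```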
The differential $B^+ \to K^+ \ell^+\ell^-$ decay rate follows from~\eqref{eq:BtoKellell_diff_decay_rate} by replacing $\bar h$ with $h$. The latter are obtained from $\bar h$ by taking the complex conjugate of all CP-violating phases, present in the CKM matrix elements in $\mathcal{N}$ and (possibly) in NP contributions to the Wilson coefficients.
In the remainder of this paper, we neglect possible NP contributions to chirality-flipped (primed) Wilson coefficients, so that $h_T= \sqrt{2} h_{T_t}$. Primed Wilson coefficients can easily be included by replacing $C_i \to C_i + C_{i^\prime}$, except in the case of $C_{T}$.
\subsection{Hadronic long-distance effects}
\label{ch:long_distance_effects}
The decay $B^- \to K^- \ell^+ \ell^-$ can also proceed through an intermediate vector meson $V$, which subsequently decays into the lepton pair. This process originates from the $\mathcal{O}_1^c$ and $\mathcal{O}_2^c$ current-current operators in the low-energy effective Hamiltonian \eqref{eq:ham} through loops and corresponding rescattering effects, as illustrated in Fig.~\ref{fig:hadronic_LD_effects}. We note that the hadronic long-distance contributions have been discussed intensively (see e.g.\ \cite{Khodjamirian:2012rm, Lyon:2013gba, Ciuchini:2015qxb, Capdevila:2017ert,Blake:2017fyh,Chrzaszcz:2018yza, Gubernari:2022hxn}). We account for these effects by replacing $C_9$ by an effective coefficient
\begin{equation}
C_9^{\rm eff} = C_9 + Y(q^2) \ ,
\end{equation}
where $Y \equiv |Y| e^{i \delta_Y}$ is a $q^2$-dependent function with a CP-conserving strong phase $\delta_Y$.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/LD_EFT.png}
\caption{Hadronic long-distance effects in $B^- \to K^-\ell^+\ell^-$.}
\label{fig:hadronic_LD_effects}
\end{figure}
The long-distance contributions are especially large in $q^2$-regions close to a $c \bar c$ resonance peak, where the intermediate meson can go on shell. The large size of non-local effects close to vector resonances is both a blessing and a curse. On the one hand, a larger amplitude means that any direct CP asymmetry will be enhanced near the peaks \cite{Becirevic:2020ssj}. On the other, a large amplitude means that measurements near the resonances are contaminated by large backgrounds. For this reason, experimental studies are usually restricted to regions far away from the peaks \cite{LHCb:2021trn,LHCb:2021lvy}. In this work, we consider $q^2$ bins near the resonances, laying the foundation for future analyses once the challenge of the peaks can be overcome. Since $Y$ contains non-perturbative physics, we will use experimental data to determine it. We note that the perturbative contribution to $C_9^{\rm eff}$ has been calculated and discussed in \cite{Asatrian:2001de,Asatryan:2001zw,Beneke:2001at,Greub:2008cy,Bobeth:2011gi, Du:2015tda}. Here we do not add this contribution to our description in order to avoid double counting, since we fit to the experimental data (see e.g.\ the discussion in \cite{Huber:2019iqf} for inclusive $B\to X_{s,d}\ell^+\ell^-$ decays).
We adopt a parameterization where the $c\bar{c}$ vector resonances in $B \to K\mu^+\mu^-$ are modelled as a sum of Breit--Wigner distributions following \cite{Kruger:1996cv,Kruger:1996dt}:
\begin{equation}
Y(q^2) = \sum\limits_j \eta_j e^{i \delta_j} A_j^{\rm res}(q^2) \ ,
\label{eq:long-distance}
\end{equation}
where $\eta_j$ is the magnitude of a resonance amplitude, $\delta_j$ its CP-conserving phase, and
\begin{equation}
A_j^{\rm res}(q^2) = \frac{m_{0j} \Gamma_{0j}}{(m_{0j}^2 - q^2) - i m_{0j} \Gamma_j(q^2)} \ ,
\end{equation}
where $m_{0j}$ and $\Gamma_{0j}$ are the pole mass and width of the $j$th resonance, and $\Gamma_j(q^2)$ is its running width. We use the values of \cite{LHCb:2016due}. The sum runs over the six $c \bar c$ resonances $J/\psi$, $\psi(2S)$, $\psi(3770)$, $\psi(4040)$, $\psi(4160)$, and $\psi(4415)$. In this model, contributions from other broad resonances and continuum contributions are ignored. There also exist resonances involving lighter quarks, but their contribution to the branching ratios is only around $1\%$ \cite{LHCb:2016due}. We neglect these contributions for consistency, as we also neglect the doubly Cabibbo-suppressed contributions to the decay amplitude, as noted after \eqref{eq:ham}.
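A minimal numerical sketch of this resonance model is given below. For simplicity it assumes a constant width, $\Gamma_j(q^2) = \Gamma_{0j}$, and uses placeholder values for $\eta_j$ and $\delta_j$; the physical parameters are those fitted by LHCb \cite{LHCb:2016due}:

```python
import cmath

def bw_amplitude(q2: float, m0: float, gamma0: float) -> complex:
    """Breit-Wigner A_j^res(q^2), approximating the running width by Gamma_0j."""
    return (m0 * gamma0) / complex(m0**2 - q2, -m0 * gamma0)

def Y(q2: float, resonances) -> complex:
    """Sum of resonance contributions eta_j * exp(i delta_j) * A_j^res(q^2)."""
    return sum(eta * cmath.exp(1j * delta) * bw_amplitude(q2, m0, g0)
               for (eta, delta, m0, g0) in resonances)

# (eta_j, delta_j, pole mass [GeV], width [GeV]); masses and widths are
# PDG-like, while eta_j and delta_j are placeholders, not the fitted values.
resonances = [
    (1.0, 0.0, 3.097, 9.3e-5),   # J/psi
    (1.0, 0.0, 3.686, 2.9e-4),   # psi(2S)
]

# On a pole, |A_j^res| = 1 by construction, so |Y| peaks near the resonances.
a_pole = bw_amplitude(3.097**2, 3.097, 9.3e-5)
```

The check below confirms that the amplitude has unit modulus on the pole and that $|Y|$ is strongly peaked there relative to an off-resonance point.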
The parameters $\eta$ and the phases $\delta$ can then be determined from experimental data. The LHCb collaboration has measured them by fitting the model to the full invariant mass spectrum \cite{LHCb:2016due}. In the fit, four ``branches'' were found, corresponding to different signs for the phases of the $J/\psi$ and $\psi(2S)$ resonances. We denote them by $Y_{--}$, $Y_{-+}$, $Y_{+-}$, and $Y_{++}$, where the first (second) sign matches the sign of the CP-conserving phase of the $J/\psi$ ($\psi(2S)$). Fig.~\ref{fig:Y_abs_arg} shows $|Y(q^2)|$ and $\delta_Y$ for each of the four branches. We can see that the absolute value of $Y(q^2)$ grows largest near the $c \bar c$ resonances, and that it behaves similarly in all four branches. The overall phase $\delta_Y(q^2)$ differs more: up to $m_{J/\psi}^2 \approx 10 \;\si{\giga\eV^2}$, it is determined almost completely by the phase of the $J/\psi$, so the $Y_{--}, Y_{-+}$ branches are similar, as are $Y_{+-}, Y_{++}$. Near the $\psi(2S)$ peak, $\delta_Y(q^2)$ is determined by the phase of the $\psi(2S)$. At higher $q^2$, past the two major $c \bar c$ resonances, $\delta_Y(q^2)$ follows a pattern that is similar for all branches. Because the phase $\delta_Y(q^2)$ is distinct for each branch, we can use CP-violating observables that are sensitive to $\delta_Y(q^2)$ to distinguish between the different branches. We will return to this in Section \ref{ch:disentangling_branches}. We note that this description of the hadronic resonances is a model and as such could be significantly improved in the future using either more data or a different parametrization \cite{Blake:2017fyh,Chrzaszcz:2018yza}, see for example the recent discussion in \cite{Isidori:2021tzd}. Nevertheless, it is instructive to consider this model to determine the distinct patterns given by the CP asymmetries.
\begin{figure}
\centering
\subfloat[$\abs{Y(q^2)}$]{\includegraphics[height=0.25\textwidth]{figures/LDabsplot.png}}
\hfill
\subfloat[$\delta_Y(q^2)$]{\includegraphics[height=0.25\textwidth]{figures/LDargplot.png}}
\caption{Absolute value and phase of the hadronic long-distance function $Y(q^2)$. The four colors show the four branches from Ref. \cite{LHCb:2016due} (see text).}
\label{fig:Y_abs_arg}
\end{figure}
\subsection{Direct CP violation}
\label{ch:direct_CPV}
In order to generate a direct CP asymmetry, at least two interfering amplitudes are required, carrying both different CP-violating phases and different CP-conserving phases. In the case of $B \to K\mu^+\mu^-$, a CP-violating phase can come from the tiny Cabibbo-suppressed interference terms that we neglect, or from NP, manifest as a complex phase in a Wilson coefficient. In our approach, the only source of a CP-conserving phase in this decay is the function $Y(q^2)$ that describes hadronic long-distance effects.
We denote by $\bar \Gamma \equiv \Gamma(B^- \to K^- \ell^+\ell^-)$, $\Gamma \equiv \Gamma(B^+ \to K^+ \ell^+\ell^-)$, and define the differential direct CP asymmetry as
\begin{equation}
\begin{aligned}
\mathcal{A}_{\rm CP}^{\rm dir}(q^2) \equiv \frac{d\bar \Gamma/dq^2
- d \Gamma/dq^2}{d\bar \Gamma/dq^2
+ d \Gamma/dq^2}
\end{aligned}
\label{eq:ACP_dir}
\end{equation}
with
\begin{equation*}
\begin{aligned}
\frac{d\bar \Gamma/dq^2 - d\Gamma/dq^2}{2}
&= \frac{4}{3}(1 + 2 \hat m_\ell^2) (\abs{\bar h_V}^2 - \abs{h_V}^2) + 16 \hat m_\ell (\Im[\bar h_V \bar h_{T_t}^*] - \Im[h_V h_{T_t}^*])\\
\end{aligned}
\end{equation*}
\begin{equation}
\begin{aligned}
= \abs{\mathcal{N}}^2 \frac{\lambda_B}{\sqrt{q^2}} f_+(q^2) \abs{Y(q^2)} \sin \delta_Y(q^2) &\bigg[ \frac{4 (1 + 2 \hat m_\ell^2)}{3 \sqrt{q^2}} f_+(q^2) \abs{C_{9\ell}^{\rm NP}} \sin \phi_{9\ell}^{\rm NP}\\
&+ \frac{8 \hat m_\ell}{m_B + m_K} f_T(q^2) \abs{C_{T\ell}^{\rm NP}} \sin \phi_{T\ell}^{\rm NP}\bigg] \ ,
\label{eq:Adir_numerator}
\end{aligned}
\end{equation}
where the $\bar{h}_i$ were given in \eqref{eq:h_amplitudes_V} and \eqref{eq:h_amplitudes_Tt}. In addition, we have defined the CP-violating phase $\phi_{i\ell}^{\rm NP}$ of a Wilson coefficient through
\begin{equation}
C_{i\ell}^{\rm NP} \equiv \abs{C_{i\ell}^{\rm NP}} e^{i \phi_{i\ell}^{\rm NP}} \ ,
\end{equation}
where we explicitly denote by $\ell$ the lepton flavour. We note that, in order to get direct CP violation in $B \to K \ell^+ \ell^-$, interference between the hadronic long-distance phase $\delta_Y(q^2)$ and a NP phase $\phi_{9\ell}^{\rm NP}$ or $\phi_{T\ell}^{\rm NP}$ is required. Because the direct CP asymmetry is proportional to $\sin\delta_Y(q^2)$, it inherits the $q^2$-dependence of the CP-conserving phase shown in Fig.~\ref{fig:Y_abs_arg}b. Consequently, as has been observed in \cite{Becirevic:2020ssj}, the direct CP asymmetry will vary across the $q^2$ spectrum, growing largest close to the first two $c \bar c$ resonance peaks.
In the SM, tiny amounts of direct CP violation can be generated through interference between the terms proportional to $\lambda_t$ and the doubly Cabibbo-suppressed $\lambda_u$ term which we neglect. For this reason, in the SM $\mathcal{A}_{\rm CP}^{\rm dir}$ is suppressed by a factor $\lambda^2$ relative to \eqref{eq:Adir_numerator}.
Therefore, an observed sizeable direct CP asymmetry in $B \to K\ell^+\ell^-$ would be a clear sign of new CP-violating physics.
The denominator of the direct CP asymmetry is
\begin{equation}
\frac{d\bar \Gamma/dq^2 + d \Gamma/dq^2}{2} = \bar{G}_0 +
G_0
\label{eq:Adir_denominator}
\end{equation}
with $\bar{G}_0$ given in \eqref{eq:G0bar}.
To connect with experiment, we consider the $q^2$-integrated direct CP asymmetry, defined as
\begin{equation}
\mathcal{A}_{\rm CP}^{\rm dir}[q^2_{\rm min}, q^2_{\rm max}] = \frac{\bar \Gamma[q^2_{\rm min}, q^2_{\rm max}] - \Gamma[q^2_{\rm min}, q^2_{\rm max}]}{\bar \Gamma[q^2_{\rm min}, q^2_{\rm max}] + \Gamma[q^2_{\rm min}, q^2_{\rm max}]} \ ,
\label{eq:q2_binned_CP_asymm}
\end{equation}
where
\begin{equation}\label{eq:brdef}
\Gamma[q^2_{\rm min}, q^2_{\rm max}] = \int_{q^2_{\rm min}}^{q^2_{\rm max}} \frac{d\Gamma}{dq^2} dq^2 \ .
\end{equation}
Note that in \eqref{eq:q2_binned_CP_asymm}, all the differential decay rates have been integrated individually -- this is not the same as integrating \eqref{eq:ACP_dir} over a $q^2$ bin.
The numerator of the $q^2$-binned direct CP asymmetry can be written as \begin{equation}
\begin{aligned}
\frac{ \bar \Gamma[q^2_{\rm min}, q^2_{\rm max}] - \Gamma[q^2_{\rm min}, q^2_{\rm max}]}{2} &= \rho_{9\rm Im}^\ell \abs{C_{9\ell}^{\rm NP}} \sin \phi_{9\ell}^{\rm NP}
+ \rho_{T\rm Im}^\ell \abs{C_{T\ell}^{\rm NP}} \sin \phi_{T\ell}^{\rm NP} \ ,
\label{eq:CP_asymm_numerator_numerical}
\end{aligned}
\end{equation}
and the denominator as
\begin{align}
&\frac{ \bar \Gamma[q^2_{\rm min}, q^2_{\rm max}] + \Gamma[q^2_{\rm min}, q^2_{\rm max}] }{2}= \Gamma_{\rm SM}^\ell
+ \rho_9^\ell \abs{C_{9\ell}^{\rm NP}}^2
+ \rho_{10}^\ell \abs{C_{10\ell}^{\rm NP}}^2
+ \rho_P^\ell \abs{C_{P\ell}^{\rm NP}}^2
+ \rho_S^\ell \abs{C_{S\ell}^{\rm NP}}^2
+ \rho_T^\ell \abs{C_{T\ell}^{\rm NP}}^2 \nonumber \\
& \quad\quad\quad\quad + \rho_{9 \rm Re}^\ell \abs{C_{9\ell}^{\rm NP}} \cos \phi_{9\ell}^{\rm NP}
+ \rho_{10\rm Re}^\ell \abs{C_{10\ell}^{\rm NP}} \cos \phi_{10\ell}^{\rm NP}
+ \rho_{P\rm Re}^\ell \abs{C_{P\ell}^{\rm NP}} \cos \phi_{P\ell}^{\rm NP}+ \rho_{T\rm Re}^\ell \abs{C_{T\ell}^{\rm NP}} \cos \phi_{T\ell}^{\rm NP}\nonumber \\
& \quad\quad\quad\quad + \rho_{10P\rm Re}^\ell \abs{C_{10\ell}^{\rm NP}} \abs{C_{P\ell}^{\rm NP}} \cos \phi_{10\ell}^{\rm NP} \cos \phi_{P\ell}^{\rm NP}
+ \rho_{10P\rm Im}^\ell \abs{C_{10\ell}^{\rm NP}} \abs{C_{P\ell}^{\rm NP}} \sin \phi_{10\ell}^{\rm NP} \sin \phi_{P\ell}^{\rm NP}\nonumber \\
& \quad\quad\quad\quad + \rho_{9T\rm Re}^\ell \abs{C_{9\ell}^{\rm NP}} \abs{C_{T\ell}^{\rm NP}} \cos \phi_{9\ell}^{\rm NP} \cos \phi_{T\ell}^{\rm NP}
+ \rho_{9T\rm Im}^\ell \abs{C_{9\ell}^{\rm NP}} \abs{C_{T\ell}^{\rm NP}} \sin \phi_{9\ell}^{\rm NP} \sin \phi_{T\ell}^{\rm NP} \ .
\label{eq:CP_asymm_denominator_numerical}
\end{align}
The coefficients $\Gamma_{\rm SM}$ and $\rho$ depend on the choice of long-distance model. Numerical values for these coefficients are given in Appendix \ref{ch:coefficients}, with long-distance branches separated where appropriate.
\subsection{Mixing-induced CP violation}
Charged $B$-meson decays exhibit only direct CP violation. Neutral $B^0_q$ mesons show $B^0_q-\bar{B}^0_q$ oscillations, which have profound implications for CP violation. Due to $B^0_q-\bar{B}^0_q$ mixing, interference effects arise between $B^0_q$ and $\bar{B}^0_q$ decaying into the same final state $f$, thereby leading to mixing-induced CP violation. This phenomenon has received a lot of attention in non-leptonic $B$ decays but is also very interesting for the rare modes that we study here.
The $B^0_d \to K^0 \ell^+ \ell^-$ and $\bar{B}^0_d \to \bar{K}^0 \ell^+ \ell^-$ modes, followed by the flavour-tagging kaon decays $K^0 \to \pi^- \ell^+ \nu_\ell$ and $\bar{K}^0 \to \pi^+ \ell^- \bar\nu_\ell$, respectively, are flavour-specific decays, which can be distinguished through the charges of the subsequent kaon decay products. However, if we observe the neutral kaons as $K_S$, both $B^0$ and $\bar{B}^0$ can decay into the same final state. We note that we neglect CP violation in the neutral kaon system, which appears at the $10^{-3}$ level, and thereby treat the $K_S$ as a CP eigenstate. Consequently, for these decays $B^0_q-\bar{B}^0_q$ mixing can generate interference between the decay amplitudes, giving rise to mixing-induced CP violation, which is therefore present in the $B^0_d \to K_S\mu^+\mu^-$ channel. The amplitudes of $B^0_d \to K_S \ell^+ \ell^-$ and $\bar{B}^0_d \to K_S \ell^+ \ell^-$ are related, in the isospin limit, to the charged $B^+ \to K^+ \ell^+ \ell^-$ and $B^- \to K^- \ell^+ \ell^-$ amplitudes through the spectator quarks, taking into account the normalisation factor $1/\sqrt{2}$ at the amplitude level coming from the $K_S$ state.
In the presence of $B^0_q-\bar{B}^0_q$ mixing, the time evolution can be expressed as:
\begin{align}\label{eq:decs}
\Gamma(B^0_q \to f) &= \Big[ |g_-(t)|^2 + |\xi_f|^2 |g_+(t)|^2 - 2 \Re \{ \xi_f \ g_+(t) \ g_-(t)^* \} \Big] \Gamma_f, \nonumber \\
\Gamma(\bar{B}^0_q \to f) &= \Big[ |g_+(t)|^2 + |\xi_f|^2 |g_-(t)|^2 - 2 \Re \{ \xi_f \ g_-(t) \ g_+(t)^* \}\Big]{\Gamma}_f,
\end{align}
where the normalisation $\Gamma_f$ denotes the ``unevolved'' $B_q \to f$ rate and the time dependence enters through:
\begin{align}
\label{eq:gfun}
g_+(t) \ g_-(t)^* &= \frac{1}{4} \left[ e^{- \Gamma_{L} t} - e^{- \Gamma_{H} t} - 2 i e^{-\Gamma t} \sin{\Delta M t} \right], \nonumber \\
|g_{\mp}(t)|^2 &= \frac{1}{4} \left[ e^{- \Gamma_{L} t} + e^{- \Gamma_{H} t} \mp 2 e^{-\Gamma t} \cos{\Delta M t} \right].
\end{align}
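These time-evolution functions can be transcribed directly into code; a quick numerical check with arbitrary illustrative values of $\Gamma_L$, $\Gamma_H$ and $\Delta M$ (and $\Gamma$ taken as the average width, an assumption of this sketch) confirms the boundary conditions $|g_+(0)|^2 = 1$, $|g_-(0)|^2 = 0$ and $g_+(0)\,g_-(0)^* = 0$:

```python
import math

def g_plus_g_minus_star(t, gamma_L, gamma_H, delta_M):
    """g_+(t) g_-(t)^*, with Gamma taken as the average width."""
    gamma = 0.5 * (gamma_L + gamma_H)
    return 0.25 * (math.exp(-gamma_L * t) - math.exp(-gamma_H * t)
                   - 2j * math.exp(-gamma * t) * math.sin(delta_M * t))

def g_abs2(t, gamma_L, gamma_H, delta_M, sign):
    """|g_{-+}(t)|^2: pass sign=-1 for |g_-|^2 and sign=+1 for |g_+|^2."""
    gamma = 0.5 * (gamma_L + gamma_H)
    return 0.25 * (math.exp(-gamma_L * t) + math.exp(-gamma_H * t)
                   + sign * 2.0 * math.exp(-gamma * t) * math.cos(delta_M * t))

# Illustrative (dimensionless) values for the check.
gL, gH, dM = 1.1, 0.9, 0.5

# At any t, |g_+|^2 + |g_-|^2 = (exp(-Gamma_L t) + exp(-Gamma_H t)) / 2,
# i.e. the oscillatory pieces cancel in the sum.
total = g_abs2(1.0, gL, gH, dM, +1) + g_abs2(1.0, gL, gH, dM, -1)
expected = 0.5 * (math.exp(-gL) + math.exp(-gH))
```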
Here $\Gamma_H$ and $\Gamma_L$ are the widths of the ``heavy'' and ``light'' mass eigenstates, and $\Delta M = M_H - M_L$ is the mass difference between these eigenstates. The parameter $\xi_f$ is a process-specific physical observable that measures the strength of the interference effects:
\begin{equation}
\xi_f= e^{-i\phi_{q}}\left[e^{i\phi_{CP}}\frac{A(\bar{B}^0_q \to f) }{A({B}^0_q \to f)}\right] \ .
\end{equation}
Here $\phi_{CP}$ is a convention-dependent phase, which cancels in the ratio of the amplitudes, while $\phi_q$ is the $B^0_q-\bar{B}^0_q$ mixing phase. This phase is sizeable for the $B_d$ system and takes the value \cite{Barel:2020jvf,Barel:2022wfr}
\begin{equation}
\phi_d = (44.4^{+1.6}_{-1.5})^\circ \ ,
\label{phid}
\end{equation}
which is extracted from $B_d^0\to J/\psi K_S$ and which also takes penguin contributions into account. We note here that the mixing phase $\phi_s$ in the $B_s$ system, e.g. in the $B_s^0 \to f_0 \ell^+ \ell^-$ decay \cite{Descotes-Genon:2020tnz}, is much smaller, for which reason $B_s^0$ decays are more sensitive to NP effects.
We can then introduce the time-dependent decay rate asymmetry (following the notation in \cite{Fleischer:2017yox}):
\begin{equation}
\begin{aligned}
\frac{\Gamma(B^0_q(t) \to f)
- \Gamma(\bar B^0_q(t) \to f)}{\Gamma(B^0_q(t) \to f)
+ \Gamma(\bar B^0_q(t) \to f)}
&=\frac{\mathcal{C}\cos(\Delta M t) +\mathcal{S} \sin (\Delta M t) }{\cosh (\frac{\Delta \Gamma_q}{2} t) + \mathcal{A}_{\Delta \Gamma} \sinh (\frac{\Delta \Gamma_q}{2} t)} \ ,
\end{aligned}
\label{eq:time_dependent_CP_asymmetry}
\end{equation}
where $\Delta \Gamma_q \equiv \Gamma_L-\Gamma_H$ denotes the decay width difference in the $B_q$ system. We note that $\Delta \Gamma_q$ is sizeable in the $B_s$ system but negligible in the $B_d$ system, so that the observable $\mathcal{A}_{\Delta \Gamma}$ is not accessible in $B^0_d \to K_S \ell^+\ell^-$. A full angular analysis would give rise to six angular observables, as discussed in \cite{Descotes-Genon:2020tnz}. As the time-dependent measurement of these observables appears to be challenging, our starting point is to consider rates integrated over the full range of $\theta$ (as in the charged case above), such that only the $S$-wave component remains. At decay time $t=0$, where mixing effects are switched off, the untagged differential decay rate then yields:
\begin{align}
\frac{d\Gamma(B_d\to K_S\ell^+\ell^-)\pm d\Gamma(\bar{B}_d\to K_S\ell^+\ell^-)}
{dq^2}
&=2 [G_0 \pm \bar{G}_0] \ ,
\end{align}
with $\bar{G}_0$ given in \eqref{eq:G0bar} but with the replacements
\begin{equation}\label{eq:hneutral}
\bar{h}_X(\bar{B}_d\to K_S \ell^+\ell^-) = \frac{1}{\sqrt{2}} \bar{h}_X(B^-\to K^-\ell^+\ell^-) \ , \quad {h}_X(\bar{B}_d\to K_S \ell^+\ell^-) = \frac{1}{\sqrt{2}} {h}_X(B^-\to K^-\ell^+\ell^-) \ ,
\end{equation}
to take the normalization of the $K_S$ into account.
From \eqref{eq:decs} and \eqref{eq:gfun}, we find
\begin{align}\label{eq:J+Jt}
G_0(t)-\bar G_0(t) &=e^{-\Gamma t}\Big[(G_0 - \bar G_0)\cos(x\Gamma t) - s_0 \sin(x\Gamma t)\Big]\ , \\
G_0(t)+\bar G_0(t) &=e^{-\Gamma t}\Big[(G_0 + \bar G_0)\cosh(y\Gamma t) - h_0 \sinh(y\Gamma t)\Big]\ ,
\label{eq:Gs}
\end{align}
where $y = \Delta \Gamma/(2 \Gamma)$ and $x= \Delta M/ \Gamma$. In addition, we have \cite{Descotes-Genon:2020tnz}
\begin{equation}
\begin{aligned}
s_0 = 2 \Im \bigg[ e^{-i \phi_d} &\bigg( \frac{4}{3} (1 + 2 \hat m_\ell^2) \Tilde h_V h_V^* + \frac{4}{3} \beta_\ell^2 \Tilde h_A h_A^* + 2 \beta_\ell^2 \Tilde h_S h_S^* + 2 \Tilde h_P h_P^*\\
&+ \frac{8}{3} (1 + 8 \hat m_\ell^2) \Tilde h_{T_t} h_{T_t}^* + \frac{4}{3} \beta_\ell^2 \Tilde h_T h_T^* \bigg) \bigg] - 16 \hat m_\ell \Re \bigg[ e^{- i \phi_d} \Tilde h_V h_{T_t}^* - e^{i \phi_d} h_V \Tilde h_{T_t}^* \bigg] \ .
\label{eq:s0}
\end{aligned}
\end{equation}
The normalization is as in \eqref{eq:hneutral} and $\tilde{h}_X= \eta_X \bar{h}_X$, where for the $K_S$ final state $\eta_V=\eta_A=\eta_P=\eta_{T_t}=-1$ and $\eta_S=\eta_T=1$ (see \cite{Descotes-Genon:2020tnz, Descotes-Genon:2015hea} for a detailed discussion). The coefficient $h_0$ is also given in \cite{Descotes-Genon:2020tnz}.
Finally, we can write the CP asymmetries as\footnote{We note that we have $\mathcal{A}_{\rm CP}^{\rm mix} = 2\sigma_0$, where $\sigma_0$ was used in \cite{Descotes-Genon:2020tnz}.}
\begin{align}
\mathcal{C} & \equiv \frac{G_0 - \bar{G}_0}{G_0+\bar{G}_0} = - \mathcal{A}_{\rm CP}^{\rm dir} (B^-\to K^- \ell^+ \ell^-) \ ,\\
\mathcal{S}&\equiv \frac{-s_0}{G_0+\bar{G}_0} \equiv - \mathcal{A}_{\rm CP}^{\rm mix} \ , \label{eq:Sdef}
\end{align}
where the charged direct CP asymmetry was already introduced in \eqref{eq:ACP_dir}.
Unlike the direct CP asymmetry, which is sensitive only to a CP-violating phase in $C_9$ or $C_T$, the mixing-induced CP asymmetry is affected by complex phases in any of the Wilson coefficients. The mixing phase $\phi_d$, which is experimentally found to be sizeable, plays a key role, leading to a large mixing-induced CP asymmetry. Therefore, new complex Wilson coefficients can shift this CP asymmetry away from its SM value. We provide the SM prediction in the next section.
\subsection{Standard Model predictions and input parameters}
For the SM Wilson coefficients, we use \cite{Descotes-Genon:2013vna}
\begin{equation}
C_7^{\rm SM} = -0.292, \quad C_9^{\rm SM} = 4.07, \quad C_{10}^{\rm SM} = -4.31 \ ,
\end{equation}
which are flavour universal (i.e.\ equal for $\ell=\mu=e$) and determined at $\mu=m_b$. In addition, we use the ${\overline{\rm MS}}$ quark masses at the scale $m_b$: $\bar{m}_b(m_b) = (4.18\pm 0.03)$ GeV and $\bar{m}_s(m_b)= (0.078\pm 0.007)$ GeV \cite{ParticleDataGroup:2020ssz}. For the local form factors in \eqref{eq:localff}, we use the recent lattice QCD determination of \cite{Parrott:2022rgu}.
To determine the CKM elements $V_{ts}V_{tb}^*$, we exploit CKM unitarity to write (see \cite{DeBruyn:2022zhw}):
\begin{equation}\label{eq:vtsvtb}
V_{ts}V_{tb}^* = -V_{cb} \bigg[1 - \frac{\lambda^2}{2}(1-2 \bar{\rho} + 2i \bar{\eta}) \bigg] + \mathcal{O}(\lambda^6) \ ,
\end{equation}
which introduces a dependence on the apex of the Unitarity Triangle (UT) through $\bar{\rho}$ and $\bar{\eta}$ at $\mathcal{O}(\lambda^2)$ relative to the leading $|V_{cb}|$ contribution. For completeness, we do include these $\lambda^2$-suppressed terms, such that an imaginary part arises from the CKM matrix elements.
The discrepancies between the inclusive and exclusive $|V_{ub}|$ and $|V_{cb}|$ determinations lead to different pictures for the allowed parameter spaces for NP in $B_q$-meson mixing. We employ the recent results from \cite{DeBruyn:2022zhw} where a CKM fit was performed using both the inclusive and exclusive case. On top of these, a hybrid approach with the exclusive $|V_{ub}|$ and inclusive $|V_{cb}|$ values was studied \cite{DeBruyn:2022zhw}.
We note that for determining the CKM factor in \eqref{eq:vtsvtb}, the difference for $|V_{ub}|$ is marginal and only enters via higher order corrections, which we neglect. In this limit, the hybrid scenario coincides with the inclusive one. We provide our results separately, first for the inclusive/hybrid and then for the exclusive case.
For $|V_{cb}|$, we take the result from inclusive $b\to c\ell\nu$ decays \cite{Bordone:2021oof}:
\begin{equation}
|V_{cb}|_{\rm incl} = (42.16 \pm 0.50)\times 10^{-3} \ ,
\end{equation}
which agrees with the recent determination in \cite{Bernlochner:2022ucr}. Finally, \eqref{eq:vtsvtb} then gives
\begin{equation}\label{eq:hybrid}
V_{ts}V_{tb}^*|_{\rm incl/hybrid} = (-41.4\pm 0.5 + 0.7 i)\times 10^{-3}, \quad |V_{ts}V_{tb}^*|_{\rm incl/hybrid} = (41.4 \pm 0.5)\times 10^{-3} \ ,
\end{equation}
where here and in the following we take into account the tiny imaginary part from the CKM elements. Taking the exclusive $|V_{cb}|$ given in \cite{DeBruyn:2022zhw} from \cite{HFLAV:2019otj}, we find
\begin{equation}
|V_{ts}V_{tb}^*|_{\rm excl} = (38.4 \pm 0.5)\times 10^{-3},
\end{equation}
which differs from \eqref{eq:hybrid} at the $4\sigma$ level due to the different $|V_{cb}|$ values.
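As a numerical cross-check of \eqref{eq:vtsvtb}, the inclusive/hybrid value can be reproduced from $|V_{cb}|$ and the Wolfenstein parameters. The values of $\lambda$, $\bar\rho$ and $\bar\eta$ below are illustrative inputs of the size found in typical CKM fits, not the exact numbers of \cite{DeBruyn:2022zhw}:

```python
# Evaluate V_ts V_tb* = -V_cb [1 - (lambda^2 / 2)(1 - 2 rho_bar + 2 i eta_bar)].
# The Wolfenstein inputs below are illustrative assumptions, not fit outputs.
lam, rho_bar, eta_bar = 0.225, 0.15, 0.33
V_cb_incl = 42.16e-3  # inclusive |V_cb| quoted in the text

VtsVtb = -V_cb_incl * (1.0 - 0.5 * lam**2 * (1.0 - 2.0 * rho_bar + 2j * eta_bar))
```

With these inputs one indeed recovers $(-41.4 + 0.7i)\times 10^{-3}$ to the quoted precision.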
Finally, we find for the branching ratio in the $1.1 \;{\rm GeV}^2 <q^2 < 6.0 \; {\rm GeV}^2$ range:
\begin{equation}
\mathcal{B}(B^- \to K^- \mu^+\mu^-)^{\rm SM}[1.1,6.0]_{\rm incl/hybrid} = (1.83 \pm 0.14) \times 10^{-7} \ ,
\label{eq:SM_BR_lowq2}
\end{equation}
where we have added the variation of the different long-distance branches as an additional uncertainty. Even when including those, the uncertainty is still dominated by the form factor uncertainty. We note that we do not include a systematic uncertainty from our long-distance assumptions. Comparing this with the experimental measurement of the branching ratio in this $q^2$-bin \cite{LHCb:2014cxe}\footnote{We have converted the differential rate given in \cite{LHCb:2014cxe} by multiplying with the appropriate bin-width.}:
\begin{equation}
\mathcal{B}(B^\pm \to K^\pm \mu^+\mu^-) [1.1, 6.0] = (1.19 \pm 0.07) \times 10^{-7} \ ,
\end{equation}
we find
\begin{equation}
\frac{ \mathcal{B}(B^- \to K^- \mu^+\mu^-)^{\rm SM}[1.1,6.0]_{\rm {{incl/hybrid}}}}{\mathcal{B}(B^\pm \to K^\pm \mu^+\mu^-) [1.1, 6.0]} -1 = 0.54 \pm 0.15 \ ,
\end{equation}
which is a $3.5\sigma$ deviation from zero. Taking the exclusive $|V_{cb}|$ determination, we find
\begin{equation}
\mathcal{B}(B^- \to K^- \mu^+\mu^-)^{\rm SM}[1.1,6.0]_{\rm excl} = (1.57 \pm 0.13) \times 10^{-7} \ ,
\label{eq:SM_BR_lowq2_excl}
\end{equation}
and
\begin{equation}
\frac{ \mathcal{B}(B^- \to K^- \mu^+\mu^-)^{\rm SM}[1.1,6.0]_{\rm excl}}{\mathcal{B}(B^\pm \to K^\pm \mu^+\mu^-) [1.1, 6.0]} -1 = 0.33 \pm 0.13 \ ,
\end{equation}
which lies $2.4\sigma$ from the SM. This study shows that the unsatisfactory determination of the CKM factors has a profound impact on the comparison between the data and the SM prediction for the corresponding branching ratio, and therefore has to be resolved.
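The ratios and significances quoted above follow from Gaussian error propagation on the ratio of the SM prediction to the measurement; a sketch, assuming uncorrelated uncertainties:

```python
import math

def ratio_pull(br_th, s_th, br_exp, s_exp):
    """Return (R - 1, its propagated uncertainty, pull in sigma) for
    R = BR_theory / BR_experiment, assuming uncorrelated Gaussian errors."""
    r = br_th / br_exp
    s_r = r * math.hypot(s_th / br_th, s_exp / br_exp)
    return r - 1.0, s_r, abs(r - 1.0) / s_r

# Branching ratios in the [1.1, 6.0] GeV^2 bin, in units of 1e-7.
dev_in, sig_in, pull_in = ratio_pull(1.83, 0.14, 1.19, 0.07)  # incl/hybrid
dev_ex, sig_ex, pull_ex = ratio_pull(1.57, 0.13, 1.19, 0.07)  # exclusive
```

This reproduces $0.54 \pm 0.15$ ($3.5\sigma$) for the inclusive/hybrid case and $0.33 \pm 0.13$ ($2.4\sigma$) for the exclusive case.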
Finally, we note that our determination is based on our assumptions for the hadronic long-distance effects. As such, our prediction differs slightly from \cite{Parrott_SM_predictions} which found:
\begin{equation}
\mathcal{B}(B^- \to K^- \mu^+\mu^-)^{\rm SM}[1.1,6.0] = (2.07 \pm 0.17) \times 10^{-7} \ ,
\end{equation}
which shows a $4.7\sigma$ deviation of the data from the SM. In this case, the authors used a different model for $Y(q^2)$, including the perturbative contribution discussed previously. Their variation of the charm mass results in a larger uncertainty than ours. The SM predictions for these modes, including long-distance effects, have been discussed in a long series of papers, most recently in \cite{Gubernari:2022hxn}, where a significant tension with the SM is also found.
In addition, we give the prediction for the branching ratio in the $7 \;{\rm GeV}^2 <q^2 < 8\; {\rm GeV}^2$ range:
\begin{equation}
\mathcal{B}(B^+ \to K^+ \mu^+\mu^-)^{\rm SM}[7,8] = (0.367 \pm 0.031) \times 10^{-7} \ .
\label{eq:theoBR_7to8}
\end{equation}
Finally, we note that for electrons, the branching ratios are almost identical, differing only at the third digit through phase-space effects.
Using the value of $\phi_d$ in \eqref{phid}, we determine the values of the CP asymmetries. Since we neglect the $\lambda^2$ suppressed $\lambda_u$ contributions, we have
\begin{equation}
\mathcal{A}_{\rm CP}^{\rm dir}|_{\rm SM} = 0,
\end{equation}
for all $q^2$ regions.
For the mixing-induced CP asymmetry in the $1.1 \;{\rm GeV}^2 <q^2 < 6\; {\rm GeV}^2$ range, we find:
\begin{equation}
\mathcal{A}_{\rm CP}^{\rm mix}|_{\rm SM} = 0.72 \pm 0.02 ,
\end{equation}
which is exactly equal to $\sin\phi_d$ if the tiny CKM phase is neglected \cite{Descotes-Genon:2020tnz}. The small uncertainty thus stems mainly from that of $\phi_d$. In addition, the CKM factors drop out, so that, unlike for the branching ratio, we are not affected by the difference in $|V_{cb}|$.
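Numerically, $\sin\phi_d \approx 0.70$ for the central value in \eqref{phid}; the quoted $0.72$ includes, in addition, the small CKM phase of \eqref{eq:hybrid}. A sketch of the bookkeeping, under the assumption that this phase enters the interference term twice through the amplitude ratio:

```python
import cmath
import math

phi_d = math.radians(44.4)      # B0_d mixing phase, central value
sin_phi_d = math.sin(phi_d)     # ~0.70 with the CKM phase neglected

# Small CKM phase from V_ts V_tb* (inclusive/hybrid value quoted above).
# Assumption for this sketch: it enters twice via the amplitude ratio,
# shifting the effective angle by roughly 2 degrees.
VtsVtb = complex(-41.4e-3, 0.7e-3)
shift = 2.0 * (math.pi - cmath.phase(VtsVtb))
A_mix_shifted = math.sin(phi_d + shift)  # ~0.72
```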
\section{\boldmath Fingerprinting New Physics with direct and mixing-induced CP asymmetries in $B \to K\mu^+\mu^-$}
\label{ch:correlations}
\subsection{Experimental constraints}
\label{ch:exp_bounds}
We consider data on the branching ratio \cite{LHCb:2014cxe} and the direct CP asymmetry \cite{LHCb:2014mit} of the $B^+ \to K^+ \mu^+\mu^-$ channel in the bin of $q^2 \in [7,8] \:\si{\giga\eV^2}$, where the LHCb collaboration finds \cite{LHCb:2014cxe} \begin{equation}
\mathcal{B}(B^+ \to K^+ \mu^+\mu^-)[7,8] = (23.1 \pm 1.8) \times 10^{-9}
\label{eq:expBR_7to8}
\end{equation}
and \cite{LHCb:2014mit}
\begin{equation}\label{eq:adirmeas}
\mathcal{A}_{\rm CP}^{\rm dir}[7,8] = 0.041 \pm 0.059 \ .
\end{equation}
Here we have added the statistical and systematic uncertainties in quadrature. In this paper, we focus on this $q^2$-bin because it lies close to the $J/\psi$ resonance at $m_{J/\psi}^2 = \SI{9.6}{\giga\eV^2}$, where the direct CP asymmetry could be enhanced \cite{Becirevic:2020ssj}. In Fig.~\ref{fig:ACP_bounds}, we depict the experimental data in the different $q^2$-bins, as presented in \cite{LHCb:2014mit}. The average over all bins is \cite{LHCb:2014mit}
\begin{equation}
\mathcal{A}_{\rm CP}^{\rm dir} = 0.012 \pm 0.017 \ .
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{figures/Adirplot.png}
\caption{Illustration of the experimental measurements of the direct CP asymmetry of $B^- \to K^- \mu^+ \mu^-$ by the LHCb collaboration \cite{LHCb:2014mit}. The blue line shows the prediction for benchmark point 1 given in~\eqref{eq:staranddiamond} (see also \cite{Becirevic:2020ssj}).}
\label{fig:ACP_bounds}
\end{figure}
We consider three scenarios:
\begin{equation}
\begin{aligned}
&\text{Scenario 1:} \quad C_{9\mu}^{\rm NP} \neq 0 \ ,\\
&\text{Scenario 2:} \quad C_{9\mu}^{\rm NP} = -C_{10\mu}^{\rm NP} \neq 0 \ ,\\
&\text{Scenario 3:} \quad C_{10\mu}^{\rm NP} \neq 0 \ .
\end{aligned}
\label{eq:NP_ranges}
\end{equation}
All three scenarios fit the data better than the SM with pulls of $(4$-$7)\sigma$ \cite{Altmannshofer:2021qrr,Gubernari:2022hxn,Geng:2021nhg,Carvunis:2021jga,Mahmoudi:2022hzx,SinghChundawat:2022zdf}.
In Fig.~\ref{fig:C9mu_bounds_scenario_1}, we illustrate the $1\sigma$ experimental constraints on the complex $C_{9\mu}$ and $C_{10\mu}$ coefficients for the three NP scenarios in \eqref{eq:NP_ranges}. We note that for Scenario 3, shown in Fig.~\ref{fig:C9mu_bounds_scenario_1}c, there is no bound from the direct CP asymmetry, as can also be seen from \eqref{eq:Adir_numerator}. For simplicity, we show results using the $Y_{--}(q^2)$ hadronic long-distance branch. In addition, we use the hybrid scenario for the CKM factors. For the other three branches, the constraints from the branching ratio remain similar, while the allowed region from the direct CP asymmetry changes. This dependence of the direct CP asymmetry on the choice of branch is an important feature that we will exploit in the remainder of this paper.
In Fig.~\ref{fig:C9mu_bounds_scenario_1}, we have indicated two benchmark points with large CP-violating phases, allowed by the current data:
\begin{equation}
\begin{aligned}
\text{Benchmark Point 1:}& \quad \abs{C_{9\mu}^{\rm NP}}/\abs{C_9^{\rm SM}} = 0.75 \ , \quad\quad \phi_{9\mu}^{\rm NP} = 195^\circ \ ,\\
\text{Benchmark Point 2:}& \quad \abs{C_{9\mu}^{\rm NP}}/\abs{C_9^{\rm SM}} = \abs{C_{10\mu}^{\rm NP}}/\abs{C_9^{\rm SM}}= 0.30 \ , \quad \phi_{9\mu}^{\rm NP} = \phi_{10\mu}^{\rm NP} - \pi = 220^\circ \ .
\end{aligned}
\label{eq:staranddiamond}
\end{equation}
We will use these parameter sets for illustrative purposes in Sec.~\ref{ch:extracting_WCs}. In Fig.~\ref{fig:ACP_bounds}, we show Benchmark Point 1 to demonstrate the enhancement of the direct CP asymmetry in the resonance region. A similar scenario was considered in \cite{Becirevic:2020ssj}.
In \eqref{eq:NP_ranges}, we only consider scenarios with $C_{9\mu}$ and/or $C_{10\mu}$. We do not discuss NP entering through $C_{S\mu}$ and $C_{P\mu}$, because any significant contribution of these coefficients to $B\to K\ell^+\ell^-$ would have a much larger, clearly visible influence on $B_s^0 \to \mu^+\mu^-$ \cite{Fleischer:2017ltw} as they would lift the helicity suppression. In addition, we do not explicitly consider tensor couplings \cite{Beaujean:2015gba}. If only $C_{T\mu}$ were present, the $B \to K\mu^+\mu^-$ rate in \eqref{eq:CP_asymm_denominator_numerical} reduces to
\begin{equation}
\Gamma[1.1,6.0] = \Gamma_{\rm SM} + \rho_T^\ell \abs{C_{T\ell}^{\rm NP}}^2 + \rho_{T\rm Re}^\ell \abs{C_{T\ell}^{\rm NP}} \cos \phi_{T\ell}^{\rm NP}\ ,
\end{equation}
where the $\rho$ are given in Appendix~\ref{ch:coefficients} from which we find $\rho_{T\rm Re}<\rho_T$. Therefore, the branching ratio will always be pushed upwards, independent of the phase difference $\phi_T$. This situation changes if also NP in other Wilson coefficients is present, in which case the lower value of the branching ratio can be accounted for (see \eqref{eq:CP_asymm_denominator_numerical}).
For the direct CP asymmetry, we explicitly find
\begin{equation}
\mathcal{A}_{\rm CP}^{\rm dir} = \frac{\rho_{T\rm Im}^\ell |C_{T\ell}^{\rm NP}| \sin\phi_{T\ell}^{\rm NP}}{\Gamma_{\rm SM}^\ell+ {\rho_T^\ell} |C_{T\ell}^{\rm NP}|^2} \ .
\end{equation}
We note that in this case $ \mathcal{A}_{\rm CP}^{\rm dir} < 0.03$, because $\rho_{T}^\ell \gg \rho_{T\rm Im}^\ell$. Therefore, we focus on NP in $C_9$ and $C_{10}$.
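The size of this bound can be understood by maximizing the expression above over $|C_{T\ell}^{\rm NP}|$ at fixed $\sin\phi_{T\ell}^{\rm NP} = 1$: the maximum sits at $|C_{T\ell}^{\rm NP}| = \sqrt{\Gamma_{\rm SM}^\ell/\rho_T^\ell}$ and equals $\rho_{T\rm Im}^\ell/(2\sqrt{\Gamma_{\rm SM}^\ell \rho_T^\ell})$. A sketch with hypothetical placeholder coefficients (the actual values are collected in Appendix \ref{ch:coefficients}):

```python
import math

def acp_tensor(cT, gamma_sm, rho_T, rho_T_im, sin_phi=1.0):
    """Tensor-only direct CP asymmetry as a function of |C_T^NP|."""
    return rho_T_im * cT * sin_phi / (gamma_sm + rho_T * cT**2)

# Hypothetical placeholder coefficients, for illustration only.
gamma_sm, rho_T, rho_T_im = 1.0, 4.0, 0.1

# Analytic maximum over |C_T^NP| at sin(phi) = 1.
cT_opt = math.sqrt(gamma_sm / rho_T)
acp_max_analytic = rho_T_im / (2.0 * math.sqrt(gamma_sm * rho_T))

# A coarse numerical scan confirms the analytic maximum.
acp_max_scan = max(acp_tensor(0.001 * i, gamma_sm, rho_T, rho_T_im)
                   for i in range(1, 5001))
```

Since the maximum scales as $\rho_{T\rm Im}/\sqrt{\rho_T}$, a small $\rho_{T\rm Im}/\rho_T$ ratio caps the achievable asymmetry regardless of $|C_{T\ell}^{\rm NP}|$.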
\begin{figure}[t]
\centering
\subfloat[$C_{9\mu}^{\rm NP}$ only]{\includegraphics[width=0.3\textwidth]{figures/boundsC9only7to8.png}}
\hfill
\subfloat[$C_{9\mu}^{\rm NP} = -C_{10\mu}^{\rm NP}$]{\includegraphics[width=0.3\textwidth]{figures/boundsC9isminusC107to8.png}}
\hfill
\subfloat[$C_{10\mu}^{\rm NP}$ only]{\includegraphics[width=0.3\textwidth]{figures/boundsplotC10only7to8.png}}
\caption{Experimental $1\sigma$ constraints on the three NP scenarios specified in~\eqref{eq:NP_ranges}. The star and diamond indicate the benchmark points 1 and 2 given in \eqref{eq:staranddiamond}, respectively.
}
\label{fig:C9mu_bounds_scenario_1}
\end{figure}
\subsection{Correlations between CP-violating observables}
\label{ch:correlations_one_branch}
We will now demonstrate how to distinguish between the three NP scenarios in \eqref{eq:NP_ranges} using the correlations between the direct and mixing-induced CP asymmetries of $B \to K\mu^+\mu^-$. To illustrate the method, we first consider only the $Y_{--}$ branch. Considering NP Wilson coefficients that lie within $1\sigma$ of the branching ratio measurement in \eqref{eq:expBR_7to8}, while adding experimental and theoretical uncertainties in quadrature, gives the correlations in Fig.~\ref{fig:Adir_Amix_correlation_toy_scenario}. In addition, we show the experimental constraint on the direct CP asymmetry for the $B^+\to K^+$ decay given in \eqref{eq:adirmeas}. We observe that each scenario leaves a distinct ``fingerprint'' in the $\mathcal{A}_{\rm CP}^{\rm dir}$-$\mathcal{A}_{\rm CP}^{\rm mix}$ plane. For Scenario 1, with only $C_{9\mu}^{\rm NP}$, the direct CP asymmetry can range over $[-0.2, 0.2]$, while the mixing-induced CP asymmetry remains close to the SM prediction. On the other hand, Scenario 2 results in mixing-induced CP asymmetries as large as $\pm 0.4$, although parts of the allowed region are already in tension with the measurements of the direct CP asymmetry. With a complex phase in $C_{10\mu}$ only (Scenario 3), the direct CP asymmetry remains zero. Nevertheless, this scenario allows for large mixing-induced CP asymmetries.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figures/correlation_within_bounds_big.png}
\caption{Correlations between the CP asymmetries $\mathcal{A}_{\rm CP}^{\rm dir}$ and $\mathcal{A}_{\rm CP}^{\rm mix}$ for the three NP scenarios in \eqref{eq:NP_ranges}. The SM point is marked by a square, while the current experimental bound on the direct CP asymmetry is illustrated by a green vertical band.
}
\label{fig:Adir_Amix_correlation_toy_scenario}
\end{figure}
These studies illustrate the power of the correlations between the CP-violating observables to distinguish different NP scenarios.
\subsection{Distinguishing hadronic long-distance branches}
\label{ch:disentangling_branches}
In Fig.~\ref{fig:correlation_all_branches}a, we show the correlations between the CP asymmetries for the $Y_{--}, Y_{-+}, Y_{+-}$ and $Y_{++}$ branches for Scenarios 1 and 2, again in the range $q^2 \in [7,8]$ GeV$^2$. We note that Scenario 3, with only $C_{10\mu}^{\rm NP}$, is not included as it is not influenced by the different branches. The branch multiplicity makes it more challenging to disentangle the different NP scenarios.
In order to distinguish the branches, we exploit the high sensitivity of the direct CP asymmetry to the long-distance model and its strong dependence on the $q^2$-bin. These features allow us to separate the branches by comparing the CP asymmetry correlations in different parts of the $q^2$-spectrum. In Fig.~\ref{fig:correlation_all_branches}b and \ref{fig:correlation_all_branches}c, we show the correlations for two bins with $q^2>11$ GeV$^2$. These correlations are obtained by varying the Wilson coefficients as follows:
\begin{itemize}
\item Scenario 1: $\abs{C_{9\mu}^{\rm NP}}/\abs{C_9^{\rm SM}} \in [0, 0.75] $, $\phi_{9\mu}^{\rm NP} \in [90,270]^\circ$.
\item Scenario 2: $\abs{C_{9\mu}^{\rm NP}}/\abs{C_9^{\rm SM}} \in [0, 0.50]$, $\phi_{9\mu}^{\rm NP} \in [90,270]^\circ$.
\end{itemize}
From Fig.~\ref{fig:correlation_all_branches}, we observe that the regions that overlap at lower $q^2$ drift apart. To understand this, we consider the sign of the direct CP asymmetry:
\begin{equation}
{\rm sign}[\mathcal{A}_{\rm CP}^{\rm dir}] = {\rm sign}\left[\sin\phi_{9\mu}^{\rm NP} \, \sin\delta_Y(q^2)\right] \ .
\end{equation}
In the bin $q^2 \in [1.1,6.0] \;\si{\giga\eV^2}$, the $J/\psi$ resonance provides the dominant contribution to $\delta_Y(q^2)$. For $q^2 \in [12.5,13.5] \; \si{\giga \eV^2}$, the $\psi(2S)$-threshold opens and drives the sign of $\delta_Y(q^2)$ (see Fig. \ref{fig:Y_abs_arg}b). At even higher $q^2$, the higher $c \bar c$ resonances dominate and we are no longer sensitive to the phases of the first two resonances. Due to all the different contributions to $\delta_Y$ at different $q^2$, the sign of the direct CP asymmetry varies across the $q^2$ spectrum, thereby giving distinct fingerprints. We have specifically picked the $q^2$-bins of Fig. \ref{fig:correlation_all_branches} to illustrate how branches that are close together in some $q^2$-bins separate in other bins.
By measuring the CP asymmetries in different parts of the $q^2$ spectrum, we can thus differentiate between the hadronic long-distance branches.
\begin{figure}
\centering
\subfloat[${q^2 \in [7,8] \; \si{\giga \eV^2}}$]{\includegraphics[width=0.9\textwidth]{figures/correlation_7to8_all_branches_within_exp_bounds_big.png}}\\
\hfill
\subfloat[${q^2 \in [11,11.8] \; \si{\giga \eV^2}}$]{\includegraphics[width=0.49\textwidth]{figures/correlation_11to12.png}}
\hfill
\subfloat[${q^2 \in [14,15] \; \si{\giga \eV^2}}$]{ \includegraphics[width=0.49\textwidth]{figures/correlation_14to15.png}}
\caption{Correlations between $\mathcal{A}_{\rm CP}^{\rm dir}$ and $\mathcal{A}_{\rm CP}^{\rm mix}$ in different $q^2$ bins including all four hadronic long-distance branches.}
\label{fig:correlation_all_branches}
\end{figure}
\subsection{Extracting Wilson coefficients from the CP asymmetries}
\label{ch:extracting_WCs}
If the CP asymmetries in $B \to K\mu^+\mu^-$ deviate from their SM values, the next goal is to extract the Wilson coefficients. In the general scenario of complex and independent NP contributions to both $C_{9\mu}$ and $C_{10\mu}$, four parameters enter the observables, and we therefore need at least four observables to determine them from the measured data. Here we demonstrate a minimal scenario with four observables, but stress that additional information from other bins could also be used. To this end, we use the direct and mixing-induced CP asymmetries and the CP-averaged integrated branching ratio of $B \to K \mu^+\mu^-$ in two different $q^2$ bins. Utilizing the strength of each observable, we consider:
\begin{itemize}
\item the direct CP asymmetry \eqref{eq:q2_binned_CP_asymm} integrated over $q^2 \in [8,9]$,
\item the mixing-induced CP asymmetry \eqref{eq:Sdef} in $q^2 \in [1.1,6.0]$,
\item the branching ratio in $q^2 \in [1.1,6.0]$ and $q^2 \in [15,22]$.
\end{itemize}
The branching ratios and the mixing-induced CP asymmetry are considered outside the resonance region and are thus less affected by long-distance effects. On the other hand, for the direct CP asymmetry we pick the bin close to the $J/\psi$ peak at $\SI{9}{\giga \eV^2}$, where it may be enhanced.
\begin{figure}[t]
\centering
\includegraphics[width=0.46\textwidth]{figures/chisquaredplot.png}
\caption{Constraints on $C_{9\mu}$ and $C_{10\mu}$ in the complex plane, assuming the input measurements and uncertainties given in~\eqref{eq:input}. The lines show the $68\%$ and $90\%$ C.L. contours.}
\label{fig:fitresults}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{figures/chisquaredplotwithassumptions.png}
\caption{Constraints in the complex plane on $C_{9\mu}$, assuming NP in $C_{9\mu}$ only or $C_{9\mu} = -C_{10\mu}$, with the uncertainties specified in \eqref{eq:input_scenario1} and
\eqref{eq:input}, respectively.}
\label{fig:fitresultscons}
\end{figure}
In order to illustrate the extraction of Wilson coefficients from these four observables, we consider benchmark point 2 specified in~\eqref{eq:staranddiamond}. This NP scenario corresponds to
\begin{align}\label{eq:input}
\mathcal{A}_{\rm CP}^{\rm dir}[8,9] &= 0.16 \pm 0.02\ , & \mathcal{A}_{\rm CP}^{\rm mix}[1.1,6] &= 0.94 \pm 0.04 \ , \\
\mathcal{B}[1.1,6.0] &= (1.15\pm 0.02)\times 10^{-7} \ , & \mathcal{B}[15,22] &= (0.908\pm 0.018)\times 10^{-7} \nonumber \ ,
\end{align}
where we consider a possible future scenario for the experimental uncertainties. The uncertainties on our inputs in \eqref{eq:input} allow us to explore how the precision on the CP asymmetries translates into allowed regions for the Wilson coefficients. Due to the complexity of the system of equations, we perform a $\chi^2$ fit\footnote{If NP only enters through $C_{9\mu}$ or $C_{9\mu} = -C_{10\mu}$, we can solve the system analytically using only the direct and mixing-induced CP asymmetries, as demonstrated in Appendix \ref{ch:analytical_solution}.} while setting theory uncertainties to zero. Figure~\ref{fig:fitresults} shows the $68\%$ and $90\%$ C.L. regions for the extracted Wilson coefficients in the complex plane. We find a good determination of the imaginary part of $C_{9\mu}$ but a less constrained situation for $C_{10\mu}$. This can be improved by considering additional $q^2$ bins and thus over-constraining the system.
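To make the procedure concrete, the following minimal sketch performs such a $\chi^2$ fit numerically. The function \texttt{observables} is a hypothetical stand-in: the true mapping from the complex Wilson coefficients to the four observables requires the form factors and the long-distance model described above, so the linearized coefficients below are purely illustrative. Since this toy map is linear, the $\chi^2$ minimum can be obtained by solving the system directly; a realistic analysis would use a numerical minimizer.

```python
import numpy as np

# Hypothetical linearized map from the NP parameters
# (Re C9, Im C9, Re C10, Im C10) to the four observables; the true
# dependence requires the form factors and the long-distance model.
def observables(p):
    re9, im9, re10, im10 = p
    adir = 0.9 * im9                          # A_CP^dir[8,9]
    amix = 0.7 + 0.5 * im9 + 0.4 * im10       # A_CP^mix[1.1,6]
    br_low = 1.2 + 0.3 * re9 - 0.2 * re10     # B[1.1,6]  in units of 1e-7
    br_high = 0.9 + 0.25 * re9 - 0.18 * re10  # B[15,22] in units of 1e-7
    return np.array([adir, amix, br_low, br_high])

truth = np.array([0.30, 0.20, -0.10, 0.05])   # benchmark-like input point
meas = observables(truth)                     # pseudo-data generated at truth
sigma = np.array([0.02, 0.04, 0.02, 0.018])   # future-scenario uncertainties

# Four observables fix the four real parameters: build the (linear) response
# matrix column by column and solve for the best-fit point.
base = observables(np.zeros(4))
J = np.column_stack([observables(e) - base for e in np.eye(4)])
fit_params = np.linalg.solve(J, meas - base)
chi2_min = float(np.sum(((observables(fit_params) - meas) / sigma) ** 2))
```

With four observables and four parameters the system is exactly solvable, so the minimum $\chi^2$ vanishes; over-constraining the fit with additional $q^2$ bins would instead yield a non-trivial goodness-of-fit.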
It is interesting to consider the $C_{9\mu}$-only and $C_{9\mu}=-C_{10\mu}$ scenarios to show the possible precision of such an over-constrained fit. For the $C_{9\mu}$-only scenario, we use benchmark point 1 of~\eqref{eq:staranddiamond}, which gives
\begin{align}\label{eq:input_scenario1}
\mathcal{A}_{\rm CP}^{\rm dir}[8,9] &= 0.15 \pm 0.02\ , & \mathcal{A}_{\rm CP}^{\rm mix}[1.1,6] &= 0.66 \pm0.04 \ , \\
\mathcal{B}[1.1,6.0] &= (1.16\pm 0.02)\times 10^{-7} \ , & \mathcal{B}[15,22] &= (0.806\pm 0.016)\times 10^{-7} \nonumber \ ,
\end{align}
where the uncertainties again indicate a possible future scenario. Using the inputs in \eqref{eq:input_scenario1} for $C_{9\mu}$ only and \eqref{eq:input} for $C_{9\mu}=-C_{10\mu}$, we find the $68\%$ and $90\%$ C.L. regions in Fig.~\ref{fig:fitresultscons}.
Finally, we illustrate our strategy in Fig. \ref{fig:flowchart}. By combining the direct and mixing-induced CP asymmetries with the branching ratios in specific bins, we can optimally exploit the complementarity of these observables. In this way, both the hadronic long-distance effects as well as the short-distance Wilson coefficients can be determined from the data.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{figures/flowchart.png}
\caption{Illustration of the strategy to determine complex Wilson coefficients $C_{9\mu}$ and $C_{10\mu}$.}
\label{fig:flowchart}
\end{figure}
\section{Testing lepton flavour universality}
\label{ch:RK}
\subsection{Setting the stage}
Previously, we considered new sources of CP violation specifically for the muonic channel. In the SM, the only difference between the muon and electron channels is caused by tiny phase-space effects, while $C_9$ and $C_{10}$ are strictly lepton-flavour universal. It is possible that NP enters in a lepton-flavour-universal way, as in the SM. On the other hand, NP may also affect the different generations distinctly (see e.g. \cite{Buttazzo:2017ixm,Bordone:2019uzc,Bordone:2017bld,Bordone:2018nbg, Cornella:2021sby}). When testing for lepton-flavour universality, special care is needed because new sources of CP violation may play a role. To demonstrate this, we define the integrated rates as
\begin{equation}
\bar{\Gamma}_\ell[q^2_{\rm min}, q^2_{\rm max}] \equiv \int_{q^2_{\rm min}}^{q^2_{\rm max}} dq^2 \frac{ d \Gamma(B^- \to K^- \ell^+\ell^-)}{dq^2}
\end{equation}
and equivalently $\Gamma_\ell$ for the $B^+\to K^+\ell^+\ell^-$ mode. For simplicity, we omit the explicit $q^2$-bin in the following. Deviations from lepton flavour universality between muons and electrons are then probed through the ratios
\begin{equation}
R_K \equiv \frac{\Gamma_\mu}{\Gamma_e} \ , \quad\quad\quad \bar{R}_K \equiv \frac{\bar{\Gamma}_\mu}{\bar{\Gamma}_e} \ ,
\label{eq:RK}
\end{equation}
for the $B^+$ and its conjugate $B^-$ mode, respectively.
We note that, usually, the CP-averaged ratio is quoted as $R_K$, although this is often not explicitly written. To make this difference explicit, we define
\begin{equation}
\langle R_K \rangle = \frac{ \Gamma_\mu +\bar{\Gamma}_\mu}{\Gamma_e + \bar{\Gamma}_e} \ .
\label{eq:RKav}
\end{equation}
We note that this averaged quantity is not simply the average of $R_K$ and $\bar{R}_K$ as defined in \eqref{eq:RK}, but rather
\begin{equation}
\langle R_K \rangle = \frac{1}{2} \left[R_K + \bar{R}_K + (\bar{R}_K - R_K) \mathcal{A}_{\rm CP,e}^{\rm dir}\right] \,
\end{equation}
where the direct CP asymmetry for the electron mode is defined analogously to the muon mode in \eqref{eq:ACP_dir}. In the SM, $R_K$ and $\bar{R}_K$ are equal to $1$ to excellent precision, even when including tiny QED effects \cite{Bordone:2016gaq, Isidori:2022bzw}.
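For completeness, this relation follows directly from the definitions in \eqref{eq:RK} and \eqref{eq:RKav}: writing $S_e \equiv \Gamma_e + \bar{\Gamma}_e$ and using the sign convention $\mathcal{A}_{\rm CP,e}^{\rm dir} = (\bar{\Gamma}_e - \Gamma_e)/(\bar{\Gamma}_e + \Gamma_e)$, we have $\Gamma_e = S_e(1 - \mathcal{A}_{\rm CP,e}^{\rm dir})/2$ and $\bar{\Gamma}_e = S_e(1 + \mathcal{A}_{\rm CP,e}^{\rm dir})/2$, so that
\begin{equation*}
\langle R_K \rangle = \frac{R_K \Gamma_e + \bar{R}_K \bar{\Gamma}_e}{\Gamma_e + \bar{\Gamma}_e}
= \frac{1}{2}\left[R_K \left(1 - \mathcal{A}_{\rm CP,e}^{\rm dir}\right) + \bar{R}_K \left(1 + \mathcal{A}_{\rm CP,e}^{\rm dir}\right)\right]
= \frac{1}{2}\left[R_K + \bar{R}_K + \left(\bar{R}_K - R_K\right) \mathcal{A}_{\rm CP,e}^{\rm dir}\right] \, .
\end{equation*}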
The latest LHCb measurement in the $q^2 \in [1.1,6.0] \: \si{\giga\eV^2}$ bin reads \cite{LHCb:2021trn}
\begin{equation}\label{eq:rkmeas}
\langle R_K \rangle[1.1,6.0] = 0.846^{+0.044 }_{-0.041} \ .
\end{equation}
This measurement deviates from the SM prediction with a significance of $3.1 \sigma$ \cite{LHCb:2021trn}. Measurements of this quantity have also been given by the BaBar \cite{BaBar:2012mrf} and Belle \cite{BELLE:2019xld} collaborations.
Combining the LHCb measurement of the muon channel in \eqref{eq:adirmeas} with \eqref{eq:rkmeas} gives \cite{LHCb:2021trn}:
\begin{equation}
\mathcal{B}(B\to K e^+e^-)[1.1,6.0] = (1.40 \pm 0.10) \times 10^{-7} \ .
\end{equation}
As said, the SM prediction only differs from the muonic channel in \eqref{eq:SM_BR_lowq2} through tiny phase-space effects. Taking \eqref{eq:SM_BR_lowq2}, we find
\begin{equation}
\frac{ \mathcal{B}(B^- \to K^- e^+ e^-)^{\rm SM}[1.1,6.0]_{\rm {{incl/hybrid}}}}{\mathcal{B}(B^\pm \to K^\pm e^+ e^-) [1.1, 6.0]} -1 = 0.31 \pm 0.14 \ ,
\end{equation}
differing by $2.2\sigma$. A somewhat larger difference was found in \cite{Parrott_SM_predictions}. Using the exclusive CKM factors, we find
\begin{equation}
\frac{ \mathcal{B}(B^- \to K^- e^+ e^-)^{\rm SM}[1.1,6.0]_{\rm excl.}}{\mathcal{B}(B^\pm \to K^\pm e^+ e^-) [1.1, 6.0]} -1 = 0.13 \pm 0.12 \ ,
\end{equation}
which differs by $1.1 \sigma$. We note that even though these SM predictions suffer from uncertainties due to the modelling of the long-distance effects, they consistently seem to indicate that there may be NP present in the muon and/or electron channels.
\subsection{New CP-violating couplings}
If $C_{9\mu}$ differs from $C_{9e}$, this would cause a difference between $R_K$ and $\bar{R}_K$. For completeness, we note that NP in $C_T$, which enters proportionally to the lepton mass, would also give an effect even if $C_{T\mu} = C_{Te}$. However, precisely because they are proportional to the lepton mass, these effects are also suppressed (see previous discussions). Therefore, we do not discuss such terms further.
We stress that it is important to measure these two ratios separately. The amount of CP violation in $R_K$ can be quantified as
\begin{equation}
\mathcal{A}_{\rm CP}^{R_K} \equiv \frac{\bar R_K - R_K}{\bar R_K + R_K} \ .
\label{eq:CPV_in_RK}
\end{equation}
This new observable in \eqref{eq:CPV_in_RK} provides a measure of whether lepton-flavour non-universal NP in $B \to K\ell^+\ell^-$ is also CP violating. We can rewrite \eqref{eq:CPV_in_RK} in terms of the direct CP asymmetries of the individual muonic and electronic decay channels:
\begin{equation}
\begin{aligned}
\mathcal{A}_{\rm CP}^{R_K} &= \left[ \frac{\mathcal{A}_{\rm CP}^{\rm dir,\mu} - \mathcal{A}_{\rm CP}^{\rm dir,e}}{1 - \mathcal{A}_{\rm CP}^{\rm dir,\mu}\ \mathcal{A}_{\rm CP}^{\rm dir,e}} \right] \ ,
\label{eq:CP_separated_RK_direct_CP_asymmetries}
\end{aligned}
\end{equation}
where $\mathcal{A}_{\rm CP}^{\rm dir, \mu}$ and $\mathcal{A}_{\rm CP}^{\rm dir,e}$ denote the direct CP asymmetries of $B \to K\mu^+\mu^-$ and $B \to Ke^+e^-$, respectively, as defined in \eqref{eq:ACP_dir}. From~\eqref{eq:CP_separated_RK_direct_CP_asymmetries}, we observe that $\mathcal{A}_{\rm CP}^{R_K}$ will be enhanced if $\mathcal{A}_{\rm CP}^{\rm dir, \mu}$ has the opposite sign to $\mathcal{A}_{\rm CP}^{\rm dir, e}$. In addition, if the muonic and electronic direct CP asymmetries are identical, the observable vanishes. Therefore, any measurement of a non-zero value of this observable is a clear sign of CP-violating NP with different magnitudes and phases for the electron and muon channels.
Interestingly, it is also possible to access the electronic direct CP asymmetry directly through separate measurements of the ratios for the $B^-$ and $B^+$ modes. Specifically, we find
\begin{equation}
\mathcal{A}_{\rm CP}^{\rm dir,e} = \frac{2 \langle R_K \rangle - R_K - \bar R_K}{\bar R_K - R_K} = \frac{2 \langle R_K \rangle}{\bar{R}_K - R_K} - \frac{1}{\mathcal{A}_{\rm CP}^{R_K}} \ .
\end{equation}
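As a quick sanity check of these relations, one can verify them numerically; the rates below are arbitrary illustrative numbers, not measured values.

```python
# Cross-check of the relations between <R_K>, R_K, Rbar_K and the direct CP
# asymmetries, using arbitrary illustrative rates (not measured values).
G_mu, Gbar_mu = 1.10, 0.95   # Gamma(B+ -> K+ mu mu), Gamma(B- -> K- mu mu)
G_e, Gbar_e = 1.40, 1.25     # same for the electron mode

RK = G_mu / G_e                              # Eq. (RK)
RKbar = Gbar_mu / Gbar_e
RKav = (G_mu + Gbar_mu) / (G_e + Gbar_e)     # Eq. (RKav)

A_mu = (Gbar_mu - G_mu) / (Gbar_mu + G_mu)   # direct CP asymmetries
A_e = (Gbar_e - G_e) / (Gbar_e + G_e)
A_RK = (RKbar - RK) / (RKbar + RK)           # Eq. (CPV_in_RK)

# A_CP^{R_K} expressed through the individual asymmetries
A_RK_check = (A_mu - A_e) / (1 - A_mu * A_e)
# A_CP^{dir,e} extracted from the three R_K-like ratios alone
A_e_check = (2 * RKav - RK - RKbar) / (RKbar - RK)
```

Both reconstructions agree with the directly computed asymmetries, confirming that $\mathcal{A}_{\rm CP}^{\rm dir,e}$ is fully determined by $R_K$, $\bar{R}_K$ and $\langle R_K \rangle$.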
At the moment, the only limit on $\mathcal{A}_{\rm CP}^{\rm dir, e}$ comes from the Belle Collaboration\footnote{They use a weighted average over different $q^2$ bins, both below and above the $c \bar c$ resonances.} \cite{Belle:2009zue}:
\begin{equation}
\mathcal{A}_{\rm CP}^{\rm dir, e} = 0.14 \pm 0.14 \ ,
\end{equation}
so it would be interesting to see whether the new observable can provide a more precise determination.
\section{Conclusions}
\label{ch:conclusions}
The charged and neutral $B\to K \mu^+\mu^-$ decays offer exciting probes of New Physics. In this paper, our main focus has been on imprints of possible new sources of CP violation. We have performed a comprehensive analysis of these channels, using the most recent state-of-the-art lattice QCD calculations for the required non-perturbative form factors.
For the $q^2$ regions with a large impact of hadronic $c\bar c$ resonances, we have implemented a model by the LHCb collaboration using experimental data. The corresponding fit results in different branches of the hadronic parameters describing the resonances. Interestingly, these effects generate CP-conserving strong phases, which are necessary ingredients for direct CP violation. In the SM, such CP asymmetries are negligibly small. However, new CP-violating phases arising in the short-distance Wilson coefficients may lead to sizeable CP violation, thereby signalling the presence of New Physics.
In our analysis, we have complemented these direct CP asymmetries with mixing-induced CP violation in the $B^0_d\to K_{S}\mu^+\mu^-$ decay. We have pointed out that the interplay of the CP asymmetries, utilising also information from the differential decay rates, allows us to distinguish between the different hadronic parameter sets describing the resonance effects, as well as between different NP scenarios. In these studies, we have considered the information for the time-dependent differential decay rates integrated over the angle describing the kinematics of the muon pair, thereby simplifying the analysis. Measuring angular distributions would provide even more information.
We highlight that, in agreement with other studies, we have found that the differential decay rates calculated in the SM are significantly smaller than the experimental values, thereby indicating the presence of NP. We have pointed out that the discrepancies between inclusive and exclusive determinations of $|V_{cb}|$ -- and to a smaller extent of $ |V_{ub}|$ -- have a large impact on the branching ratios. While the exclusive values result in a difference of the charged $B\to K \mu^+\mu^-$ branching ratio in the $q^2$ region within [1.1,6.0]\,$\si{\giga \eV^2}$ at the $2.5\,\sigma$ level, a hybrid scenario pairing the inclusive value of $|V_{cb}|$ with the exclusive value of $|V_{ub}|$ results in a SM branching ratio lying $3.5\,\sigma$ below the measured result. The latter CKM combination gives the most consistent picture of constraints on neutral $B$ mixing with the Standard Model. It will be important to finally resolve the issues in the determination of these CKM matrix elements.
We have presented a new strategy to determine the complex Wilson coefficients $C_{9\mu}$ and $C_{10\mu}$ from measurements of the branching ratios as well as the direct and mixing-induced CP asymmetries in appropriate $q^2$ regions. The method makes use of our finding that different NP sources lead to distinct ``fingerprints'', allowing a transparent determination of these coefficients without making specific assumptions, such as having NP only in $C_{9\mu}$ or assuming the relation $C_{9\mu}^{\rm NP}=-C_{10\mu}^{\rm NP}$, as is frequently done in the literature. Since not all required measurements are yet available, we have illustrated this method through specific examples.
In the presence of NP, we may have a violation of lepton flavour universality which is probed through the $R_K$ ratio. Using the different determinations for the CKM factors, we found that the integrated SM branching ratio of the $B^-\to K^- e^+e^-$ channel for $q^2$ in [1.1,6.0]\,$\si{\giga \eV^2}$ is about $2 \sigma$ below the corresponding experimental result for the hybrid/inclusive case and about $1\sigma$ for the exclusive. We have pointed out that new sources of CP violation require special care in studies to distinguish between $B\to K \ell^+\ell^-$ decays and their CP conjugates for the final states with muons and electrons. We have presented a new method to measure direct CP violation in the $B\to K e^+ e^-$ modes using only $R_K$-like ratios for decays and their CP conjugates.
It will be very interesting to monitor how the data will evolve in the future. The methods to ``fingerprint'' CP-violating NP in $B\to K \mu^+\mu^-$ decays offer an exciting playground for the future high-precision era of $B$ physics. It will also be exciting to see whether new sources of CP violation can be revealed in these semileptonic rare $B$ decays, and whether they will complement current puzzles in CP violation in non-leptonic $B$ decays.
\section*{Acknowledgements}
This research has been supported by the Netherlands Organisation for Scientific Research (NWO).
\section{Introduction}
\label{Introduction}
It is a mystery how different brain regions come to be optimized jointly.
In this article, we propose a brain-like heterogeneous network (BHN) that simulates the multi-module structure of the brain.
We rely on three hypotheses in this article:
\begin{enumerate}
\item The brain is a machine that maximizes the information of its inner representations. This hypothesis is known as Efficient Coding\cite{barlow1961possible} or Efficient Information Representation\cite{linsker1990perceptual,atick1992could}.
\item The brain learns by optimizing certain objective functions, and different brain regions optimize different objective functions\cite{lake2017building}.
\item The brain works by fusing top-down predictions with bottom-up perceptions. This hypothesis actually enables the brain to process information recursively.
\end{enumerate}
We view hypothesis 1 as the first principle for understanding the brain, and we obtain the desired objective functions by formalizing it. The overall objective is a sum of many objective functions, each applied to an individual module, and all the modules together make up BHN.
We also seek to understand the brain's information-processing scheme, which we call Recursive Modeling in this article.
The remainder of this article is organized as follows: Section \ref{Info} derives the objective functions from the first hypothesis; Sections \ref{BHN} and \ref{RM} elaborate on BHN and Recursive Modeling, respectively; and Sections \ref{A} and \ref{B} provide demonstration experiments for these two sections.
\section{Efficient Information Representation}
\label{Info}
The brain collects information from the environment ($x$) and then generates internal representations ($z$).
It is inferred that an important function of the brain is to maximize the information entropy of its representations.
It is generally believed that these representations are distributed over the cerebral cortex, so it is essential to ensure the independence of the information they represent.
Previous solutions include sparse-coding\cite{olshausen1996emergence}, independent component analysis\cite{hyvarinen2000independent}, and end-to-end deep learning.
In this article we propose our solution as follows.
We use $\{z^{1}, z^{2}, \cdots, z^{n}\}$ to denote the representations distributed over the cerebral cortex, and $H(z^{1} z^{2} \cdots z^{n})$ to denote their joint information entropy.
We then formalize the objective function as $\max H(z^{1} z^{2} \cdots z^{n})$.
Considering
\begin{equation}
H(z^{1} z^{2} \cdots z^{n}) =\sum_iH(z^{i}) +[H(z^{1} z^{2} \cdots z^{n})-\sum_iH(z^{i})]
\end{equation}
the objective function can be roughly decomposed into two sub-objectives\cite{atick1992could}, as
\begin{equation}
\begin{cases}
\max\limits_{z}\ H(z^{i}) \\
\min\limits_{z}\ I(z^{i};z^{j}), & \mbox{if } i \ne j
\end{cases}
\label{eq:minmax}
\end{equation}
Since the second sub-objective is intractable because of its $\Omega (n^2)$ computational complexity, we introduce a \textbf{global} attention\cite{graves2014neural,vaswani2017attention} representation ($a$) into (\ref{eq:minmax}) by reformulating the expression in a minimax fashion, as
\begin{equation}
\begin{cases}
&\max\limits_{z}\ \sum_iH(z^{i})
\\
&\min\limits_{z}\max\limits_{a}\ \sum_i I(z^{i};a)
\end{cases}
\label{eq:minmax_2}
\end{equation}
Then, by re-composing the two expressions above into a single one, we obtain the objective function:
\begin{equation}
\label{eq:minz}
\min\limits_{z}\max\limits_{a} \sum_i [-H(z^{i}) + I(a;z^{i})]
\end{equation}
We use contrastive losses\cite{hadsell2006dimensionality} to formalize $H(z^i)$. Contrastive losses measure the similarities of sample pairs in a representation space. One form of contrastive loss function, called InfoNCE\cite{oord2018representation}, is considered in this article:
\begin{equation}
\label{eq:hzi}
H(z^{i}) \propto \log
\frac{\exp(f(z^{i}, z^{i}_{+}))}{\sum_{z^{i}_{-} \in Z^{i}} \exp(f(z^{i}, z^{i}_{-}))}
\end{equation}
where $f$ is a density ratio, which preserves the mutual information between a positive or negative pair of samples.
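As an illustration, this InfoNCE term can be sketched in a few lines of NumPy. This is a simplified sketch, not the training code used in our experiments: the pairwise density ratio $f$ is passed in as a callable, and the diagonal of the score matrix holds the positive pairs.

```python
import numpy as np

def info_nce(z, z_pos, f):
    """Per-sample InfoNCE estimate of H(z^i), up to a constant.

    z, z_pos: arrays of shape (N, d); row j of z_pos is the positive
    partner of row j of z, and all rows serve as the negative pool Z^i.
    f: density ratio, mapping two d-vectors to a scalar score.
    """
    N = len(z)
    # score matrix over all pairs; diagonal entries are the positives
    scores = np.array([[f(z[i], z_pos[j]) for j in range(N)] for i in range(N)])
    pos = np.diag(scores)                       # f(z^i, z^i_+)
    lse = np.log(np.exp(scores).sum(axis=1))    # log of the denominator sum
    return pos - lse
```

Each per-sample value is bounded above by $0$ and approaches it when the positive dominates the pool, i.e. when the representation separates its positive partner from the negatives.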
The next step is to formalize $I(a;z^{i})$.
To stabilize the minimax optimization of $I(a;z^{i})$, we do not formalize it directly. Instead, we use $a$ to produce a probability distribution, i.e. $P(z^i)$, as the prediction of $z^{i}$. We call $a$ an attention representation because it is used to generate shared Query/Key vectors $a^i$, each of which is paired with a representation $z^i$, and these Q/K vectors are used to calculate each sample's probability/weight.
The details are as follows:
We provide a memory pool having $N$ paired samples, as
\begin{equation}
X^{i}=(A^{i},Z^{i})=\{(a^{i}_1,z^{i}_1),(a^{i}_2,z^{i}_2),\cdots,\cdots, (a^{i}_N,z^{i}_N)\}
\end{equation}
where $Z^{i}$ is the sample space of $P(z^i)$. The probability $P\big|_{z^i = z_j^i}$ is equal to the attention weight $w_j^i$ calculated by
\begin{equation}
P\big|_{z^i = z_j^i}
=w_j^i
=
\text{softmax}\left(
{similarity}( a^i ,a_j^i)
\right)
\bigg|_{a^{i}_{j} \in A^{i}}
\end{equation}
Now we can formalize $I(a;z^{i})$ as
\begin{equation}
\label{eq:iaz}
I(a;z^{i}) \propto \log
\frac{\sum_{z^{i}_{j} \in Z^{i}} w_j^i \exp(f(z^{i}, z^{i}_{j}))}{\sum_{z^{i}_{-} \in Z^{i}} \exp(f(z^{i}, z^{i}_{-}))}
\end{equation}
Eventually, by bringing (\ref{eq:hzi}) and (\ref{eq:iaz}) into (\ref{eq:minz}), we formalize the objective function as
\begin{equation}
\label{eq:obj}
\min\limits_{z}\max\limits_{a}
\sum_i[-\log \frac{\exp(f(z^{i}, z^{i}_{+}))}
{\sum_{z^{i}_{j} \in Z^{i}} w_j^i \exp(f(z^{i}, z^{i}_{j}))}]
\end{equation}
Notably, this objective function suggests a probabilistic inference machine\cite{von2013treatise}, and it is a corollary of our hypotheses. This is biologically plausible, and we can say that \textbf{the attention representation makes predictions by activating selective replays of representations in the cerebral cortex}.
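The resulting per-unit loss of (\ref{eq:obj}) can be sketched as follows. This is a minimal NumPy illustration, not our training code; it assumes, by convention, that the positive sample sits at index 0 of the memory pool.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def unit_objective(z, pool_z, a_query, pool_keys, f, similarity):
    """One summand of the minimax objective: -log of the attention-weighted
    InfoNCE ratio. pool_z[j] is paired with key pool_keys[j]; by convention
    here, pool_z[0] is the positive sample z^i_+."""
    # attention weights w_j^i from the Query/Key similarities
    w = softmax(np.array([similarity(a_query, ak) for ak in pool_keys]))
    # density-ratio scores f(z^i, z^i_j) over the pool
    scores = np.array([f(z, zj) for zj in pool_z])
    return -(scores[0] - np.log(np.sum(w * np.exp(scores))))
```

Note the direction of the game: the inner maximization over $a$ pushes the weights $w_j$ onto pool samples that score highly under $f$, so a well-predicting attention makes the denominator, and hence the loss seen by $z$, larger.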
\section{Brain-like Heterogeneous Network}
\label{BHN}
In this section we propose the architecture of BHN, which applies the objective function in Equation \ref{eq:obj}. It has a cortex-network composed of basic units, with unit $i$ generating the cortical representation $z^i$, and an attention-network generating the global attention representation $a$.
As the name suggests, we use artificial neural networks (ANNs) to implement these two components.
Different from the popular approach of end-to-end back-propagation with a global loss function, our model has gradient isolation between the units and between the two networks.
\paragraph{Cortex-network} In each unit $i$, there is an encoder $g_{enc}^i$ that encodes the input $x$ into a latent representation $z^i$. In the image task below, the unit contains only $g_{enc}^i$. In the video tasks, each unit has another network, called the aggregator $g_{ar}^i$, which outputs a unit context $c^i$ that acts as the positive partner of $z^i$. In effect, we are applying Contrastive Predictive Coding\cite{oord2018representation} within each unit, as shown in Figure \ref{fig:cortex}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{cortex.png}
\caption{\label{fig:cortex} Architecture of the cortex-network in video tasks}
\end{figure}
\paragraph{Attention-network}
The attention-network generates the global attention representation $a$, like the medial temporal lobe in the mammalian brain.
Its architecture is like a traditional encoder-decoder network, where the encoder generates $a$ and the decoder generates $a^i$, as shown in Figure \ref{fig:h1}.
\begin{figure}[hb]
\centering
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=0.7\linewidth]{hippocampus.png}
\caption{}
\label{fig:h1}
\end{subfigure}
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=0.7\linewidth, right]{hippo_control.png}
\caption{control group 2}
\label{fig:h2}
\end{subfigure}
\caption{\label{fig:hippocampus} Architecture of the attention-network in video tasks}
\end{figure}
The input of the attention-network is from the output of the cortex-network. In our image task, it is natural to take all $z^i$ as input because they are the only outputs of the cortex-network. In our video tasks, the attention-network takes all $c^i$ as input, because we want to get $a$ in advance of $z^i$.
The attention representation $a$ should not retain all the information fed into it; it only needs to capture the information shared by multiple units.
To achieve this, one option is to make $a$ act as an information bottleneck, i.e. give $a$ a lower dimension than the input vector.
The other option is to arbitrarily drop out some units' outputs. We adopt the second option in our image task and the first one in our video tasks.
\paragraph{Neural Interface} The neural interface is not an essential component of BHN, but we mention it here in advance because it is important for Recursive Modeling. Unlike the cortex-network and the attention-network, neural interfaces have no biological counterparts.
The name comes from Brain-Computer Interfaces (BCIs) \cite{wolpaw2000brain}.
By processing information from the cortex-network, neural interfaces perform various functions, such as controlling attention, controlling actions, and whatever else is needed.
\section{Experiment(1)}
\label{A}
\subsection{Image Task}
We download ten landscape pictures from the internet and crop them into 8000 patches of $16 \times 16$ pixels; each patch is then gray-scaled and normalized.
We then design a BHN model to learn on this dataset.
The model has 64 units in its cortex-network.
The encoder in each unit contains 128 hidden units with leaky-relu activation, and the attention-network contains 256 hidden units, which is also the dimension of $a$, with leaky-relu activation.
The dimensions of $z^i$ and $a^i$ are both set to 1.
The batch size, which is also the size of $X^{i}$, is 512.
The density ratio $f$ is formalized as
\begin{equation}
f(z, \mathfrak{z}) = - \text{clamp\_max\_5}(|z- \mathfrak{z}|)
\end{equation}
The similarity between $a^i$ is formalized as
\begin{equation}
{similarity}(a^i ,a_j^i) = -|a^i-a_j^i|/\tau
\end{equation}
where $\tau$ is the temperature optimized together with the attention-network.
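In code, these two ingredients are one-liners. This is a sketch only: $z$ and $a^i$ are one-dimensional here, matching the dimensions above, and $\tau$ is a learned scalar.

```python
import numpy as np

def density_ratio(z, z_other, cap=5.0):
    # f(z, z') = -clamp_max_5(|z - z'|): similar pairs score near 0,
    # and the clamp bounds the penalty for very distant pairs
    return -np.minimum(np.abs(z - z_other), cap)

def key_similarity(a, a_j, tau):
    # similarity(a^i, a^i_j) = -|a^i - a^i_j| / tau, with learned temperature tau
    return -np.abs(a - a_j) / tau
```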
Gaussian white noise with $mean=0$ and $std=0.1$ is added to the inputs for data augmentation, and the same mechanism is used to produce positive sample pairs for the contrastive loss function.
The dropout ratio in the attention-network is $0.2$.
We use the SGD optimizer with $lr=0.1$, $momentum=0.9$, and $weight\_decay=0.001$. The model is light, and training runs fast even on a laptop without GPU acceleration.
In addition to the normal experiment, we also run a control experiment in which the objective function is $\max \sum_i H(z^i)$ only.
After 40 epochs of training, we visualize all 64 units by maximizing their outputs.
The results are shown in Figure \ref{fig:vision}.
\begin{figure}[hb]
\centering
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=1\linewidth]{vision0.png}
\caption{untrained}
\label{fig:v0}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=1\linewidth]{vision1.png}
\caption{normal}
\label{fig:v2}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=1\linewidth]{vision2.png}
\caption{control}
\label{fig:v1}
\end{subfigure}
\caption{\label{fig:vision} Visualized features of units}
\end{figure}
In order to show clearly, we use red and green to indicate light and shade.
As can be seen from the figure, the visualized features are noisy if the model is not trained.
In both normal and control experiments, the units have intensified responses to certain image modes after training.
The result images in the control experiment are fuzzy.
In contrast, the visualized features in the normal experiment are sharper and more diverse.
\subsection{Video Task}
\label{A:v}
We build a video set containing 64 episodes recording gameplay of the CarRacing environment in OpenAI Gym. Each episode lasts 512 frames, and each frame has a size of $(96, 96)$ pixels. The frames are converted to grayscale and rescaled to $(-1, 1)$.
At each time step, 4 consecutive frames with additional noises are fed to the input.
A linear layer, shared by all $g^{i}_{enc}$ to reduce the number of parameters, first reduces the input dimension from $4 \times (96\times 96)$ to $512$.
The encoder architecture $g^{i}_{enc}$ contains 32 hidden units with leaky-relu activations.
We then use a GRU-RNN \cite{cho2014learning} for the autoregressive part of the unit, $g^{i}_{ar}$, with 32 dimensional hidden state.
The cortex-network has 16 units, and the attention-network is a simple unbiased linear network with a hidden layer. Dimensions of $z^{i}_t$, $c^{i}_t$ , $a^{i}_t$ and $a_t$ are all set to 2.
The batch size, which is also the size of $X^{i}$, is 256.
In our experiment, $z_{t+4}^i$ and $c_{t}^i$ are used as the positive pair for the contrastive loss function. The delay of $4$ frames is somewhat arbitrary; it serves to capture the directional information between $z$ and $c$.
The density ratio $f$ is formalized as
\begin{equation}
f(z^i_{t+4}, c^i_t) = -\cos\langle z^i_{t+4},c^i_t\rangle/T
\end{equation}
where $T$ is the temperature optimized together with the cortex-network.
The similarity between $a^i$ is formalized as
\begin{equation}
{similarity}(a^i ,a_j^i) = -\cos\langle a^i, a_j^i\rangle/\tau
\end{equation}
where $\tau$ is the temperature optimized together with the attention-network.
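For the video task both scoring terms are temperature-scaled cosine similarities; a minimal NumPy sketch, implementing the expression literally as written ($-\cos\langle\cdot,\cdot\rangle/\tau$, with the learnable temperature fixed here for illustration):

```python
import numpy as np

def cosine_sim(u, v, tau):
    """Temperature-scaled negative cosine similarity, as written in the
    text: -cos<u, v> / tau.  u and v are 1-D vectors; tau is a scalar
    that the paper optimizes jointly with the network."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return -c / tau
```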
We choose the Adam optimizer with $lr=10^{-4}$. For data augmentation, each episode is folded into 16 segments of 256 frames. We train each model for 20 epochs; in our experience, much longer training does not lead to over-fitting.
We use deconvolutional networks\cite{zeiler2011adaptive} to reconstruct images from representations $z_t$, $c_t$ and $a_t$ respectively.
The mean squared error ($mse$) of the reconstructed images is used to evaluate the quality of the source representations.
Given that a trivial solution achieves a loss of $0.0225$ when no information is provided, in the following we use the score, calculated as $(0.0225-{mse}) \times 255$, to indicate quality.
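The score is a simple affine rescaling of the reconstruction error; a minimal sketch (function name is ours):

```python
def score(mse, baseline=0.0225):
    """Reconstruction-quality score from the text:
    (baseline mse of the trivial no-information solution - mse) * 255.
    Positive scores mean the representation carries usable information."""
    return (baseline - mse) * 255.0
```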
We also establish two control groups to demonstrate the performance of adversarial training.
\paragraph{Control group 1} We abandon the attention-network and optimize $H(z^i)$ only, in the same way as the control group established in section \ref{A:v}.
\paragraph{Control group 2} We design a restricted attention-network architecture by cutting off the links via $a$ between units, as shown in Figure \ref{fig:h2}.
Table \ref{table:sample-table} gives the scores of $z_t$, $c_t$ and $a_t$ before and after training.
The scores of the experimental group surpass those of its competitors.
\begin{table}[ht]
\caption{Scores of representations}
\label{table:sample-table}
\centering
\begin{tabular}{llll}
\toprule
& $z_t$ & $c_t$ & $a _t$ \\
\midrule
Before Training & $2.04\pm 0.06$& $2.17\pm0.04$ &$0.29\pm 0.23$\\
Experimental Group& \textcolor{red}{$3.13\pm 0.04$}& \textcolor{red}{$2.33\pm0.07$}&\textcolor{red}{$0.80\pm0.23$} \\
Control Group 1 & $2.93\pm 0.07 $&$1.81\pm0.23 $ & \\
Control Group 2 & $2.93\pm 0.07$&$1.92\pm0.21 $& \\
\bottomrule
\end{tabular}
\end{table}
\section{Recursive Modeling}
\label{RM}
Model building is arguably the path to general intelligence \cite{lake2017building}.
Additionally, we think recursion is essential in the design of strong artificial intelligence, just as it is for many Turing-complete machines \cite{turing1936computable}.
We therefore propose the approach of Recursive Modeling: the agent should not only build causal models of the environment, but also recursively build causal models on top of the earlier-built ones.
The environment is where negentropy\cite{schrodinger1944life} flows in.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{recursive_model.png}
\caption{\label{fig:recursive_model} Schematic diagram of Recursive Modeling}
\end{figure}
As shown in the schematic diagram (Figure \ref{fig:recursive_model}), the Recursive Modeling approach has two requirements.
The first requirement is to build a mental space where models run.
If we think of a model as a collection of regularities (or schemas\cite{piaget1929child,bartlett1932remembering}), then the mental space is the collection of all regularities.
Regularities are usually obtained from information bottlenecks, like the linguistic regularities found in the word vector space\cite{mnih2013learning}, and the disentangled representations generated by generative models\cite{bengio2013representation,larsen2015autoencoding}.
Existing low-level representations should be recursively distilled by the information bottleneck.
The second requirement of Recursive Modeling is to allow the agent to perceive and intervene in the mental space, just as it does with the environment in the physical world. Perception and intervention are both necessary to build causal models at any time.
Among the models that have been built, the early-built ones simulate the relations between real entities in the environment, while the later ones are responsible for abstract thinking tasks, such as calculus in a symbolic system.
We do not mean that there is a clear hierarchy between models.
In fact, the notion of ``model'' is only a fictitious concept describing a set of closely related regularities; many of those regularities are actually intertwined, shared, and reappear at different levels.
Units in the cortex-network can also cluster into function regions, and regions can be organized in a hierarchy-like pattern.
Different models can correspond to different regions in the cortex.
However, this is left for future work and is not pursued further in this article.
\subsection{BHN and Recursive Modeling}
\label{RM:bhn}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.55\textwidth]{BHN.png}
\caption{\label{fig:BHN} BHN adapted for Recursive Modeling}
\end{figure}
Figure \ref{fig:BHN} gives a schematic diagram of BHN adapted for Recursive Modeling, in which the three Loops marked in Figure \ref{fig:recursive_model} are also marked roughly at the corresponding positions.
BHN meets the two requirements of Recursive Modeling.
Firstly, the attention-network can serve as an information/attention bottleneck \cite{felleman1991distributed}, and the global attention ($a$) can be regarded as representations in the mental space.
Secondly, it is possible for the agent to perceive the mental space by fusing bottom-up perceptions with top-down predictions, which will be detailed in Section \ref{RM:mpp}.
We think that much of the intelligence of the human brain resides in its sophisticated architecture; our current BHN model is oversimplified and lacks many essential functions, such as dopaminergic neurons for reward and prediction-error learning \cite{hollerman1998dopamine}, a realization of the attention-control interface, and a hippocampus forming mental maps and episodic memories.
There is no doubt that we need more inspiration from the human brain to proceed with this work\cite{lake2017building}.
\subsection{Working Memory}
\label{RM:mpp}
We think that the human brain works by continuously mixing real perceptions with imaginary predictions; in extreme cases it is like ``hearing one's thoughts spoken out aloud''~\cite{schneider1939psychischer}.
If $z_{t}^{i}$ represents what is heard, then the expectation $e^{i}_t = \sum_{z^{i}_j \in Z^{i}} w^{i}_{tj} z^{i}_j$ can represent what the brain predicts to hear.
By replacing $z_{t}^{i}$ with $e_{t}^{i}$ at some times, the agent can somewhat perceive the mental space just as it perceives the external environment.
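The expectation $e^{i}_t$ is just an attention-weighted mixture of stored representations; a minimal NumPy sketch (names are ours):

```python
import numpy as np

def expectation(weights, z_values):
    """Top-down prediction e_t = sum_j w_tj * z_j: the attention-weighted
    mixture of representations that can stand in for the perceived z_t."""
    weights = np.asarray(weights, dtype=float)
    z_values = np.asarray(z_values, dtype=float)
    return float(np.dot(weights, z_values))
```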
$z_{t}$ and $e_{t}$ are homologous, and both can serve as the output of a unit, so the information flow within the net is actually a mixture of perceptions ($z_{t}$) and predictions ($e_{t}$).
$z_{t}$ is involuntary and volatile, whereas $e_{t}$ is processed recurrently and remains somewhat locked inside Loop {(\textcolor[RGB]{162,31,37}{3})} (marked in both Figure \ref{fig:recursive_model} and Figure \ref{fig:BHN}); in this way, $e_{t}$ can, in a sense, provide gain for $z_{t}$.
We speculate that this mechanism corresponds to the brain's working memory, and its gain level determines whether the representations in the cortex will be suppressed or enhanced\cite{miller1991neural}.
\section{Experiment(2)}
\label{B}
\begin{figure}[b]
\centering
\includegraphics[width=0.7\textwidth]{casual.png}
\caption{\label{fig:casual} The score of $e$ over time in the testing phase}
\end{figure}
We follow the same basic setup of the simple model in section \ref{A:v} to test the hypothesis of working memory by mixing $z_{t}^{i}$ with $e_{t-4}^{i}$.
First, in the training phase, we feed $z_t$ to $c^{i}_t = g^{i}_{ar}(*)$ for even time steps and feed $e_{t-4}$ for odd time steps.
A deconvolutional network that reconstructs images from $e$ to give a score is also trained in this phase.
Next, in the testing phase, taking a certain time step as the boundary, $z_t$ is fed before it and $e_{t-4}$ after it.
We judge the performance by how long the score of $e$ stays positive in the testing phase.
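The input schedule for this experiment can be sketched as a small helper (hypothetical, ours; `z` and `e` stand for per-step lookups of $z_t$ and $e_{t-4}$):

```python
def unit_input(t, z, e, phase="train", boundary=None):
    """Select which signal feeds g_ar at step t.
    Training: z(t) on even steps, e(t-4) on odd steps, as in the text.
    Testing: z(t) before the boundary step, e(t-4) from the boundary on."""
    if phase == "train":
        return z(t) if t % 2 == 0 else e(t - 4)
    return z(t) if t < boundary else e(t - 4)
```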
Figure \ref{fig:casual} gives the result: the working memory effectively lasts for about 30 frames, much longer than the one frame that the system was adapted to in the earlier training phase.
\section{Conclusions}
In this article, we propose three hypotheses on the learning and working mechanisms of the human brain. By formalizing these hypotheses, we obtain a computable objective, which is a sum of many objective functions. We then build and test a model (BHN), which couples several artificial neural networks together, to optimize the objective functions obtained. Finally, we propose the approach of Recursive Modeling and test a hypothesis on working memory.
\section*{Broader Impact}
Our work has no direct ethical or societal implications.
\bibliographystyle{apalike}
\small
1803.06171
\section{Introduction}
One of the many quantum bit implementations is based on the spin of an electron or hole trapped in a semiconductor nanodevice \cite{ref1,ref2,ref3}. Such a device must be built in a way that allows performing several fundamental operations, namely initialization, manipulation and readout \cite{ref4,ref5}. Most of them can be easily carried out in electrostatic quantum dots (QDs), for which confinement potentials are generated in quantum wells \cite{ref6,ref7,ref8,ref9,ref10} or wires \cite{ref11,ref12} by voltages applied to local gates. They are also realised in self-assembled QDs \cite{ref13,ref14,ref15,ref16,ref17}, for which confinement is obtained solely due to the presence of heterojunctions of different semiconductors. These operations have to be performed sufficiently fast, as a sequence of calculations has to be completed before spin decoherence takes place \cite{ref9}.
The most difficult operation to accomplish turns out to be spin initialization, that is, orienting the spin in a chosen direction before any computations are executed. In self-assembled QDs the spin of a single electron \cite{ref13,ref14,ref15} or hole \cite{ref16,ref17,ref18} can be set by using optical transitions to excitonic or trionic (charged-exciton) states. We can proceed in a similar way in nanowire QDs based on InAsP/InP heterojunctions \cite{ref19}. In electrostatic QDs, on the contrary, optical initialization through trionic states is not possible, since an attractive potential for electrons is repulsive for holes and thus a stable excitonic state cannot be formed. In such systems the Pauli blockade is used \cite{ref9,ref10}. This method allows setting the spin of an electron parallel to the spin of another adjacent electron previously trapped in the QD; however, the spin of the former electron is random. It can be set deterministically by using a strong magnetic field and waiting until the electron relaxes to the ground state \cite{ref10}. Initialization obtained this way is not accurate and takes at least a couple of nanoseconds.
However, to initialize a qubit to a known state for further operations, a high fidelity initialization procedure is necessary \cite{noiri}. Additionally, to perform quantum error correction certain ancillary qubits must be continuously reinitialized in ultra-short (relatively to decoherence) timescales \cite{preskill}.
The main source of electron spin decoherence is the interaction with the nuclear spin bath. Our nanodevice structure is similar to the one described in the experimental paper \cite{ref12}, which reports a coherence time of about 34 ns.
Recently, in \cite{ref23}, we designed a nanodevice based on a planar semiconductor nanostructure which allows for spin initialization with fidelity over 99\%, lasting about $400\,\mathrm{ps}$. This task can be achieved using the electrostatically controlled Rashba spin-orbit interaction (SOI). In this paper we propose a device capable of achieving similar fidelity an order of magnitude faster in a quantum wire, a nanostructure that is well within current experimental capabilities and much easier to fabricate than a planar nanostructure. This makes quantum wires an ideal starting point for experimental research on spin initialization in solid-state systems.
\section{Nanodevice structure}
\begin{figure}[b]
\includegraphics[width=0.4\textwidth]{FIG1.png}
\caption{\label{fig:nanodevice}Schematic view of the proposed nanodevice containing a gated InSb nanowire. Top gate is shown only partially.}
\end{figure}
For spin initialization we propose a nanodevice similar to those used in \cite{ref24,ref12}, shown in Fig. \ref{fig:nanodevice}. On a strongly doped silicon substrate we place a $100\,\mathrm{nm}$ thick layer of $\mathrm{SiO_2}$. Next, we lay down seven $200\,\mathrm{nm}$ wide metallic gates $\mathrm{U_i}$ separated by gaps of $50\,\mathrm{nm}$ each. They serve to shape the confinement potential along the wire. The gates are then covered with a $260\,\mathrm{nm}$ thick layer of $\mathrm{Si_3N_4}$ insulator. On top of the insulator we put a catalytically grown InSb nanowire, $80\,\mathrm{nm}$ in diameter and of length $L$ exceeding $1.5\,\mathrm{\mu m}$. On both sides of the wire, in parallel, we put two lateral gates $\mathrm{U_{left}}$ and $\mathrm{U_{right}}$ at a distance of $50\,\mathrm{nm}$ from the wire axis. They are used to generate an electric field along the $y$-axis. Everything is then covered with $\mathrm{Si_3N_4}$ up to $400\,\mathrm{nm}$ measured from the substrate. Finally, the top surface of $\mathrm{Si_3N_4}$ is covered with top gate $\mathrm{U_{top}}$, which, along with back gate (formed by the strongly doped substrate) is used to create an electric field parallel to the $z$-direction.
\section{Model}
The operations on the electron are performed within a range of very low energies near the conduction-band minimum. The initial voltages applied to the gates, and a parabolic approximation of the resulting potential, give rise to an excitation energy of about $\hbar\omega=0.2\,\mathrm{meV}$, significantly lower than the InSb band gap of $230\,\mathrm{meV}$. This makes the single-band effective mass approximation a reasonable choice, and we use it in all subsequent calculations.
Let us now discuss the theoretical model of the nanodevice.
First, we assume that the quantum wire confines a single electron. For the InSb effective mass $m=0.014\,m_e$, the energy difference between the ground state and the first excited state of the quantized electron motion in the directions perpendicular to the wire axis ($80\,\mathrm{nm}$ in diameter) equals $40\,\mathrm{meV}$, which is two orders of magnitude greater than the motional energies encountered in our nanodevice. Thus, we can use a one-dimensional approximation, assuming that the electron always occupies the ground state of lateral motion.
The corresponding Hamiltonian takes the following form
\begin{equation}\label{eq:hamiltonian}
\mathbf{H}=\left[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+V(x)\right]\mathbf{I_2}+\mathbf{H_\mathrm{so}},
\end{equation}
where $V(x)$ is the confinement potential energy and the last term $\mathbf{H_\mathrm{so}}$ expresses the SOI. The wavefunction takes the spinor form $\mathbf{\Psi}(x,t)=\left(\psi_\uparrow(x,t), \psi_\downarrow(x,t)\right)^\mathrm{T}$, and $\mathbf{I_2}$ denotes the $2\times 2$ identity matrix. We assume that the wire is grown along the crystallographic direction $[111]$; hence the Dresselhaus interaction vanishes \cite{ref24, ref25} and we can take into account only the Rashba SOI contribution
\begin{equation}\label{eq:rashba}
\mathbf{H_\mathrm{so}}=\frac{\alpha_\mathrm{so}|e|}{\hbar}\left(E_z\sigma_y-E_y\sigma_z\right)\hat{p}_x,
\end{equation}
with the Pauli matrices $\sigma_y$, $\sigma_z$ and the Rashba coefficient for InSb $\alpha_\mathrm{so}=523\,\mathrm{\AA{}^2}$ \cite{ref25}. $E_y$, $E_z$ are the electric field components within the wire.
If the confinement potential energy has the parabolic form
\begin{equation}\label{eq:potential}
V(x)=\frac{1}{2}m\omega^2x^2,
\end{equation}
we can solve for the Hamiltonian eigenfunctions analytically in the momentum representation and then transform them to the position representation. Let us assume that only the $E_y$ component of $\mathbf{E}$ is nonzero. The ground-state energy is now doubly degenerate with respect to the spin $z$-projection. The wavefunctions in the position representation are Gaussians multiplied by plane waves due to the presence of the SOI. Depending on the spin $z$-projection, the wavenumber is either positive ($q$) or negative ($-q$). The corresponding eigenfunctions take the following form
\begin{equation}\label{eq:eigenstates}
\begin{split}
\mathbf{\Psi}_\uparrow(x)=\left(\frac{2\beta}{\pi}\right)^{1/4}\begin{pmatrix}1\\0\end{pmatrix}e^{-\beta x^2}e^{iqx},\\
\mathbf{\Psi}_\downarrow(x)=\left(\frac{2\beta}{\pi}\right)^{1/4}\begin{pmatrix}0\\1\end{pmatrix}e^{-\beta x^2}e^{-iqx},
\end{split}
\end{equation}
with $\beta=\frac{m\omega}{2\hbar}$ and $q=\frac{m\alpha_\mathrm{so}|e|E_y}{\hbar^2}$. Although each state's wavefunction contains a plane wave factor, the electron remains still as its motion is blocked by the SOI \cite{ref27}. If we now turn off the electric field abruptly by putting $E_y=0$, the SOI vanishes and the electron starts moving according to its momentum to the left ($\langle p_x\rangle=-\hbar q$) or to the right ($\langle p_x\rangle=\hbar q$) depending on its spin. The SOI introduces an energy correction $\Delta E=\frac{\hbar^2q^2}{2m}$, so the maximal electron displacement $\Delta x$ from the equilibrium position of the confinement potential (Eq. \ref{eq:potential}) approximately obeys the relation $\Delta E = V(\Delta x)$, or $\frac{\hbar^2q^2}{2m}=\frac{m\omega^2}{2}\Delta x^2$. It follows that
\begin{equation}\label{eq:displacement}
\Delta x = \frac{\alpha_\mathrm{so}|e|E_y}{\hbar\omega}.
\end{equation}
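To give a feel for the magnitudes, the displacement formula can be evaluated numerically. The field amplitude below is our illustrative choice (the paper specifies gate voltages rather than $E_y$ directly), picked so that $2\Delta x$ lands near the $\sim 590\,\mathrm{nm}$ analytical spacing quoted later; the sketch also checks the energy balance $\Delta E = V(\Delta x)$ used in the derivation:

```python
# SI constants and InSb parameters from the text
E_CHARGE = 1.602176634e-19      # |e| (C)
HBAR = 1.054571817e-34          # hbar (J s)
M_E = 9.1093837015e-31          # free-electron mass (kg)
M_EFF = 0.014 * M_E             # InSb effective mass
ALPHA_SO = 523e-20              # Rashba coefficient, 523 A^2 in m^2
HBAR_OMEGA = 0.2e-3 * E_CHARGE  # confinement quantum, 0.2 meV (J)
OMEGA = HBAR_OMEGA / HBAR

def wavenumber_q(E_y):
    """q = m alpha_so |e| E_y / hbar^2, in 1/m."""
    return M_EFF * ALPHA_SO * E_CHARGE * E_y / HBAR**2

def delta_x(E_y):
    """Maximal displacement Delta x = alpha_so |e| E_y / (hbar omega),
    in meters, for a field E_y in V/m."""
    return ALPHA_SO * E_CHARGE * E_y / HBAR_OMEGA

# Illustrative field amplitude (assumed, not given in the paper).
E_Y = 1.13e7  # V/m; gives 2*delta_x of roughly 590 nm
```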
Conversely, if the electron relaxes to the ground state with $E_y=0$, abruptly turning on the electric field will set it in motion in the opposite direction.
The effect of spin-dependent motion when the electric field is turned on can be used for spin readout. The spin orientation of the electron determines its direction of motion (to the left or to the right). Thus, by measuring the electron's presence in the left or the right half of the wire after the movement has taken place, we can infer its spin orientation.
If the initial electron state is not an eigenstate of $\sigma_z$ (spin $z$-projection is not definite), its wavefunction is a linear combination of both basis states: $\mathbf{\Psi}(x)=c_\uparrow\mathbf{\Psi}_\uparrow(x)+c_\downarrow\mathbf{\Psi}_\downarrow(x)$. After the electric field $E_y$ is changed, both spinor parts start moving along the $x$-axis in opposite directions and split. If the electric field pulse is sufficiently strong, it is possible to separate them entirely.
\section{Principles of spin initialization}
Before we delve deeper into the details of spin initialization let us first explain the basic principles of this process.
\begin{figure}[t]
\includegraphics[width=0.4\textwidth]{FIG2.pdf}
\caption{\label{fig:stage1principle}First stage of the spin initialization: (a) An electron of arbitrary spin orientation is confined within a quantum wire and occupies the ground state; (b) The rising slope of the electric field pulse $E_y$ along the $y$-axis, due to the Rashba SOI, triggers motion of the electron wavefunction spin components in opposite directions resulting in spin separation; (c) When the spin components are separated the most, the electric field is maximum. The components, however, still overlap and require stronger separation; (d) The falling slope of the electric field pulse $E_y$, employed when the spin components start turning back, accelerates them even more, making them cross each other and reach positions farther apart; (e) Introduction of a potential barrier between the spin components isolates them from their mutual influence and locks their new positions. White arrows along with red or blue colors indicate the spin orientation of each component.}
\end{figure}
Fig. \ref{fig:stage1principle} shows a simplified model of the nanowire (gray) made of InSb, a material with strong Rashba spin-orbit coupling. Confinement along the wire is created by external voltage-driven gates removed from the picture for clarity. We control voltages applied to these gates to shape the confinement potential, while yet another few gates are used to create electric fields necessary for inducing the Rashba SOI, which is essential for the operation of the device. The initialization scheme consists of two stages.
In the first stage we insert an electron into the quantum wire and trap it inside the confinement potential (Fig. \ref{fig:stage1principle}a). This electron can be of arbitrary spin as the described procedure makes no assumptions on its initial orientation. Next we apply a rising slope of an electric field pulse $E_y$ along the $y$-axis (Fig. \ref{fig:stage1principle}b). This field induces the Rashba SOI and makes the electron wavefunction split into two components of opposite spins which start travelling in opposite directions. As the components travel further they slow down and finally halt for an instant (Fig. \ref{fig:stage1principle}c). At this moment the electric field $E_y$ is maximum and the spin components are somewhat separated but they still overlap. Now, the components start turning back towards the center of the wire due to the repelling influence of the confinement potential. At this very moment, we apply a falling slope of the electric field $E_y$ (Fig. \ref{fig:stage1principle}d). This accelerates the components towards each other and makes them cross unaffected.
This acceleration occurs because any change of the electric field $E_y$ affects the components' motion. If the electrons were still moving away from each other, the falling slope of $E_y$ would decelerate them, but since they have already turned back, the falling slope actually accelerates them further. This peculiar behavior has been described in detail in \cite{ref27}. After the components have crossed each other, they start slowing down and halt in new positions separated by a distance considerably larger than before. We need only set a potential barrier between them to lock them in their new positions (Fig. \ref{fig:stage1principle}e). As the components no longer overlap, this indicates full spin separation. This concludes the first stage of initialization.
\begin{figure}[t]
\includegraphics[clip,trim=0 5em 0 2em,width=0.4\textwidth]{FIG3.pdf}
\caption{\label{fig:stage2principle}Second stage of the spin initialization: (a) By applying the electric field $E_z$ along $z$-axis and modifying positions of the confinement potential minima the spin components' motion along the wire and the spin rotation about $y$-axis are induced; (b) By reverting the positions of minima and reversing the electric field the spins are further rotated and the wavefunction components are brought back to their previous positions; (c) Spins of the both former spin components are now oriented in the same direction effectively ending the initialization procedure; (d) Two wavefunction parts can now be brought back together to create a single wavepacket.}
\end{figure}
Now, the second stage of initialization proceeds, as shown in Fig. \ref{fig:stage2principle}. We start from the point where the previous stage finished. By modifying the voltages applied to the gates forming the electron confinement potential, we slightly move the potential minima apart, thus setting both spin components in motion in opposite directions (Fig.~\ref{fig:stage2principle}a). At the same time we apply an electric field $E_z$ along the $z$-axis which induces spin rotations about the $y$-axis. We must not use the term ``spin components'' anymore, as they no longer represent spin up and down with respect to the $z$-axis; from now on, we merely call them wavefunction parts. Because the parts move in opposite directions their spins rotate about the same axis yet by opposite angles. After they travel over some distance, they are brought back to their previous locations by appropriately shaping the confinement potential (Fig.~\ref{fig:stage2principle}b). At this point we also have to reverse the $E_z$ direction, because otherwise not only the positions but also the spins of both parts would revert to their original orientations, jeopardizing our efforts. Dividing the spin rotation into two small steps, movement forwards and movement backwards, is advantageous, as it prevents the wavefunction parts from travelling long distances and allows for simpler control-gate layouts. After the spin rotation, the spins of the wavefunction parts are directed along the $x$-axis, as shown in Fig. \ref{fig:stage2principle}c. Finally, after we turn off the $E_z$ field, both parts can be brought back and merged to form a single wavepacket (Fig. \ref{fig:stage2principle}d). This concludes the second and last stage of spin initialization.
\section{Simulations}
We have performed time-dependent simulations of nanodevice operation. We use generalized Poisson's equation to solve for the potential $\phi(\mathbf{r},t)$ at every time step in a computational box encompassing the entire nanodevice. The obtained potential is used to calculate the potential energy profile within the quantum wire and along its axis $V(x,t)=-|e|\phi(x,y_0,z_0,t)$ (where $y_0$, $z_0$ are coordinates of the wire), as well as the electric field $\mathbf{E}(\mathbf{r},t)=-\nabla\phi(\mathbf{r},t)$. The time evolution of the electron is obtained by solving the time-dependent Schr\"odinger's equation starting from the electron ground state for the initial potential. A detailed description of the method can be found in \cite{ref26}.
Initially the voltages of top and lateral gates are set to zero $U_\mathrm{top}=U_\mathrm{left}=U_\mathrm{right}=0$. To the remaining seven lower gates we apply $U_{1,\dots,7}=-40\,\mathrm{mV}$, $-10\,\mathrm{mV}$, $-2.5\,\mathrm{mV}$, $0\,\mathrm{mV}$, $-2.5\,\mathrm{mV}$, $-10\,\mathrm{mV}$, $-40\,\mathrm{mV}$. These voltages create a confinement potential energy with nearly parabolic center and high barriers at the borders.
\begin{figure}[t]
\includegraphics[width=0.4\textwidth]{FIG4.pdf}
\caption{\label{fig:wireplot}The electron (red) and spin (blue) densities, together with the electron potential energy (green) along the quantum wire for three selected moments of time: (a) at the beginning, with the black line being a parabola fitted near the potential energy center; (b) after spin separation of an initial state being an equally weighted linear combination of $\mathbf{\Psi}_\uparrow$ and $\mathbf{\Psi}_\downarrow$; (c) just like (b) but with an initial state being an exemplary \emph{non-equally} weighted linear combination of $\mathbf{\Psi}_\uparrow$ and $\mathbf{\Psi}_\downarrow$; (d) after setting up a potential energy barrier between wavepacket parts with opposite spin.}
\end{figure}
The potential energy profile is shown in Fig. \ref{fig:wireplot}(a) as a green line along with a parabolic fit (black line). The red line marks the charge density (i.e., square of the modulus of the wavefunction $\mathbf{\Psi}^\dagger(x)\mathbf{\Psi}(x)$). We assume that initially the wavefunction corresponds to the electron ground state.
Because $U_\mathrm{top}=U_\mathrm{left}=U_\mathrm{right}=0$ and the voltages $U_i$ are relatively small, the electric field components $E_y$ and $E_z$ are nearly zero. This effectively causes vanishing of the SOI. Let us now assume that the spin $z$-projection is indefinite and the electron wavefunction is a linear combination of both spin basis states. To the gates $\mathrm{U_{left}}$ and $\mathrm{U_{right}}$ we apply a single voltage pulse lasting half a period, given by the formulae $U_\mathrm{left}(t)=-U_\mathrm{sep}\sin(\omega_\mathrm{sep}t)$ and $U_\mathrm{right}(t)=U_\mathrm{sep}\sin(\omega_\mathrm{sep}t)$ where $t\in[0,\pi/\omega_\mathrm{sep}]$, the amplitude $U_\mathrm{sep}=1.3\,\mathrm{V}$ and $\hbar\omega_\mathrm{sep}=0.15\,\mathrm{meV}$. This pulse generates an electric field parallel to the $y$-axis, and equivalently the Rashba SOI, which causes spatial separation of the wavefunction into two parts with opposite spin directions.
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{FIG5.pdf}
\caption{\label{fig:separation}Time courses in the spin separation stage. The black curve shows the voltage pulse applied to gate $\mathrm{U_{left}}$. The red curves (solid and dashed) show the expectation values of position of both spin parts of the wavefunction ($\langle x\rangle_\uparrow$ and $\langle x\rangle_\downarrow$), while the blue curves show the expectation values of the Pauli $z$-matrix calculated in the right ($\langle\sigma_z\rangle_\mathrm{right}$) and left ($\langle\sigma_z\rangle_\mathrm{left}$) halves of the nanodevice.}
\end{figure}
Fig.~\ref{fig:separation} shows results of the electron state time evolution. The black curve shows the time course of $U_\mathrm{left}(t)$, the solid red curve the expectation value of position calculated for the upper spinor part as $\langle x\rangle_\uparrow=\langle\psi_\uparrow|\hat{x}|\psi_\uparrow\rangle/\langle\psi_\uparrow|\psi_\uparrow\rangle$, and the dashed red curve $\langle x\rangle_\downarrow$, calculated in a similar way using the lower spinor part $\psi_\downarrow$. Initially, at $t=0$, both $\langle x\rangle_\uparrow$ and $\langle x\rangle_\downarrow$ are identical and correspond to the middle of the wire, where the potential energy minimum is located. An increasing voltage between $\mathrm{U_{left}}$ and $\mathrm{U_{right}}$ induces separation of both wavepacket parts. At $t=7\,\mathrm{ps}$, when the voltage pulse reaches its maximum, the wavepacket parts cease to separate further and start moving backwards. At this moment, the spacing between both parts calculated from Eq. (\ref{eq:displacement}) equals $2\Delta x = 590\,\mathrm{nm}$. This agrees only approximately with the more accurate value of $505\,\mathrm{nm}$ obtained from the simulation shown in Fig. \ref{fig:separation} (marked with a cyan arrow). This is because Poisson's equation solved in the simulations additionally takes into account the interaction of the electron with the charge induced on the local gates. From now on, the spin-orbit coupling decreases but still accelerates both wavepacket parts, since they have bounced off the potential energy barriers and move in the opposite directions \cite{ref27}. As a result, the wavepacket energy continues to grow until the voltage pulse returns to zero. For an exactly parabolic potential energy, the spacing between the wavepacket parts would be twice as large (and equal to $4\Delta x$), but the actual potential energy does not satisfy this condition and changes its shape during the wavepacket separation.
Note that the entire process is not resonant and $\omega_\mathrm{sep}$ does not have to be equal to $\omega$ from Eq. \ref{eq:potential}. Because the assumed value of $\omega_\mathrm{sep}$ is approximately $20\%$ lower than $\omega$, in Fig. \ref{fig:separation} the opposite-spin wavepackets return to their initial position earlier (at $t=11\,\mathrm{ps}$) than the voltages fall to zero (at $t=14\,\mathrm{ps}$).
The solid blue curve in Fig.~\ref{fig:separation} depicts the expectation value of the Pauli $z$-matrix, calculated in the right half of the quantum wire as (note the limits):
\begin{equation}\label{eq:rightspin}
\langle\sigma_z\rangle_\mathrm{right}=\int_0^{L/2}\mathbf{\Psi^\dagger}(x,t)\sigma_z\mathbf{\Psi}(x,t)dx.
\end{equation}
The value of $\langle\sigma_z\rangle_\mathrm{left}$, calculated in a similar way, is shown in Fig. \ref{fig:separation} as a dashed blue line. In the presented simulation the initial electron state was an equally weighted linear combination of spin states (i.e., $c_\uparrow=c_\downarrow=1/\sqrt{2}$); thus at $t=15\,\mathrm{ps}$ we get $\langle\sigma_z\rangle_\mathrm{right}=0.5$ and $\langle\sigma_z\rangle_\mathrm{left}=-0.5$. This indicates a full spatial separation of the spin parts, as shown in Fig. \ref{fig:wireplot}(b). If the initial linear combination of spin states were not equally weighted, the final values of $|\langle\sigma_z\rangle_\mathrm{right}|$ and $|\langle\sigma_z\rangle_\mathrm{left}|$ would not be equal. This situation is shown in Fig.~\ref{fig:wireplot}(c).
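The half-wire expectation values of Eq. (\ref{eq:rightspin}) are easily evaluated numerically. The following is a sketch in our own notation (not the simulation code of the paper), using a toy Gaussian state with illustrative packet positions rather than the actual simulated wavefunction:

```python
import numpy as np

# Sketch (our own notation, not the simulation code): half-wire expectation
# values of the Pauli z-matrix for a discretized two-component spinor on a
# uniform grid over x in [-L/2, L/2]; the "right" half is x >= 0.
def sigma_z_half(psi_up, psi_down, x, half="right"):
    dx = x[1] - x[0]
    spin_density = np.abs(psi_up)**2 - np.abs(psi_down)**2
    mask = (x >= 0) if half == "right" else (x < 0)
    return float(np.sum(spin_density[mask]) * dx)

# Toy state mimicking the fully separated wavepacket: an equally weighted
# superposition whose spin-up part sits at +150 nm and spin-down part at
# -150 nm (illustrative numbers, not fitted to the device).
L = 1000.0                                # nm
x = np.linspace(-L / 2, L / 2, 4001)
dx = x[1] - x[0]

def packet(x0, width=30.0):
    g = np.exp(-(x - x0)**2 / (2 * width**2))
    return g / np.sqrt(np.sum(np.abs(g)**2) * dx)

psi_up = packet(+150.0) / np.sqrt(2)      # c_up = 1/sqrt(2)
psi_down = packet(-150.0) / np.sqrt(2)    # c_down = 1/sqrt(2)

print(sigma_z_half(psi_up, psi_down, x, "right"))  # close to +0.5
print(sigma_z_half(psi_up, psi_down, x, "left"))   # close to -0.5
```

For the equally weighted superposition the two half-wire values approach $\pm 0.5$, reproducing the plateau reached by the blue curves at the end of the separation stage.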
In the most extreme case, when the spin is oriented upwards, i.e. $c_\uparrow=1$ (or downwards, $c_\downarrow=1$), the electron occupies the right (or left) half of the quantum wire with probability 1. Let us note that this stage of operation can also be used for spin readout. This operation takes $T_{\mathrm{READOUT}}=15\,\mathrm{ps}$.
Fig.~\ref{fig:wireplot}(b) shows the electron density along the wire (red curve) calculated as $\rho(x,t)=\mathbf{\Psi^\dagger}(x,t)\mathbf{\Psi}(x,t)=|\psi_\uparrow(x,t)|^2+|\psi_\downarrow(x,t)|^2$ and the spin density (blue curve) as $\rho_\sigma(x,t)=\mathbf{\Psi^\dagger}(x,t)\sigma_z\mathbf{\Psi}(x,t)=|\psi_\uparrow(x,t)|^2-|\psi_\downarrow(x,t)|^2$. According to these definitions, the curves overlap where the spin points upwards, which occurs on the right side of the nanodevice, while they have opposite signs where the spin points downwards, which occurs on the left. At the moment when the distance between the wavepackets is maximal, we change the gate voltages appropriately, creating a potential barrier between them and confining them inside two separate potential valleys. The barrier has to be sufficiently high, and the minima sufficiently deep, to allow independent operations on the spin in both valleys. To achieve this, at $t_1=16\,\mathrm{ps}$ we rapidly change the gate voltages to $U_{1,\dots,7}(t_1)=-60\,\mathrm{mV}$, $10\,\mathrm{mV}$, $-10\,\mathrm{mV}$, $-100\,\mathrm{mV}$, $-10\,\mathrm{mV}$, $10\,\mathrm{mV}$, $-60\,\mathrm{mV}$. The resulting potential energy profile, together with the electron and spin densities, is plotted in Fig. \ref{fig:wireplot}(d). The potential energy has two minima with a barrier between them, separating the wavefunction spatially into two parts of opposite spin directions.
Now we proceed to the second stage of operation, which boils down to a spin rotation about the $y$-axis. If we turn the spin in the right valley clockwise by $\pi/2$ and in the left one counterclockwise by the same angle, the spins of both parts become parallel to each other and directed along the $x$-axis. From the form of the Hamiltonian (Eq. \ref{eq:hamiltonian}) it follows that motion along the $x$-axis induces spin rotation about the $y$-axis if an electric field $E_z$ (along the $z$-axis) is present. We generate this field by applying a voltage to the top gate. Spin rotation is then achieved by setting the electron in an oscillatory motion along the $x$-direction.
To achieve this, from the time $t_1$ onwards, the voltages applied to gates $\mathrm{U_2}$ and $\mathrm{U_3}$ are modified according to the formulae $U_2(t)=U_2(t_1)-U_\mathrm{osc}\left(1-\cos\left(\omega_\mathrm{rot}(t-t_1)\right)\right)$ and $U_3(t)=U_3(t_1)+U_\mathrm{osc}\left(1-\cos\left(\omega_\mathrm{rot}(t-t_1)\right)\right)$. Because we want to rotate the spins of the two wavepacket parts in opposite directions, to gates $\mathrm{U_5}$ and $\mathrm{U_6}$ we must apply voltages stimulating motion in the opposite direction: $U_5(t)=U_3(t)$ and $U_6(t)=U_2(t)$. At the same time $t=t_1$ we turn on the Rashba SOI, and then reverse its sign when the wavepacket parts stop and start moving backwards. This is achieved by applying a voltage, phase-shifted by $\pi/2$, to the top gate: $U_\mathrm{top}(t)=-U_\mathrm{rot}\sin(\omega_\mathrm{rot}(t-t_1))$ \cite{ref28}. In this simulation stage, the pulse duration equals one full period of the sine. Using a pulse lasting half a period did not produce a satisfactory fidelity of spin initialization. The value of $\omega_\mathrm{rot}$ does not have to be carefully selected, but it should not be too small, as this lengthens the operation, nor too large, because the fidelity drops as it increases. In the simulations we assumed $\hbar\omega_\mathrm{rot}=0.11\,\mathrm{meV}$. The amplitude $U_\mathrm{osc}$ of the voltages stimulating the wavepacket oscillations does not have to be precisely chosen, and we assumed $U_\mathrm{osc}=100\,\mathrm{mV}$. The exact shape of the pulses is not critical either, and deviations from the presented sine-like shapes are acceptable. Only the $U_\mathrm{rot}$ amplitude must be tuned to the pulse duration. The performed simulations indicate that, to maintain a fidelity of 99\% or greater, $U_\mathrm{rot}$ should be selected with an accuracy better than $\pm 40\,\mathrm{mV}$.
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{FIG6.pdf}
\caption{\label{fig:FIG4}Time courses of the expectation value of the Pauli $x$-matrix, $\langle\sigma_x\rangle$, for various initial spin orientations (green). The red dashed curve shows the separating voltage pulse, while the blue dashed curve shows the pulse responsible for the SOI, used for spin rotations in both parts of the device.}
\end{figure}
Fig. \ref{fig:FIG4} shows the time evolution of the expectation value of the Pauli $x$-matrix, $\langle\sigma_x\rangle$, for several initial spin orientations. The courses differ considerably only in the first stage of nanodevice operation, lasting about $7\,\mathrm{ps}$. At $t=t_1=16\,\mathrm{ps}$ the wavepacket is split into two parts, one of which has spin parallel to the $z$-axis, while the other has spin antiparallel to it. At this moment $\langle\sigma_x\rangle$ vanishes. In the second stage of the simulation, in which the spin parts are rotated, the courses overlap regardless of the initial spin orientation and all reach a value close to unity at the same time, $t=60\,\mathrm{ps}$. The final fidelity of spin initialization is of the order of $99.3\%$.
After the entire procedure, the spin is oriented along the $x$-direction. If we want to further change its orientation, this can be done with another voltage pulse: voltages applied to the lateral gates generate the SOI that induces spin rotation about the $z$-axis, while a voltage applied to the top gate allows for rotations about the $y$-axis.
The second pulse, visible in Fig. \ref{fig:FIG4} and lasting about $45\,\mathrm{ps}$, resulted in a spin rotation by $90^\circ$. Such an operation is performed by the Hadamard gate, and in our case it takes $T_\mathrm{HADAMARD}=45\,\mathrm{ps}$. The NOT gate requires a rotation by $180^\circ$ and thus takes $T_\mathrm{NOT}=90\,\mathrm{ps}$. During the second stage of the presented initialization scheme we rotate the spin in the right potential valley clockwise and in the left one counterclockwise. If the spin were rotated by $180^\circ$ only in the left valley, leaving the right one unchanged, which can be achieved by modulating the voltages $\mathrm{U_2}$ and $\mathrm{U_3}$ while fixing $\mathrm{U_5}$ and $\mathrm{U_6}$, the spin of the electron would be reversed if it occupied the left valley or remain untouched if it occupied the right one. This outcome is equivalent to the controlled negation (CNOT) two-qubit gate if we assume that the first qubit is a spin qubit while the second is a charge qubit, defined by the presence of the electron in the left or right potential valley. The operation time of this gate is $T_\mathrm{CNOT}=90\,\mathrm{ps}$.
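Written out explicitly (our notation, with the charge qubit acting as the control and phase factors of the physical rotation omitted), in the basis $\{|{\uparrow},R\rangle, |{\uparrow},L\rangle, |{\downarrow},R\rangle, |{\downarrow},L\rangle\}$ this operation reads
\begin{equation*}
U_\mathrm{CNOT} =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0
\end{pmatrix},
\end{equation*}
i.e. the spin is flipped exactly when the electron occupies the left valley and is left untouched otherwise.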
\section{Summary}
We proposed a nanodevice designed to set the spin of a single electron in a desired direction. After two voltage pulses, lasting less than $60\,\mathrm{ps}$ in total, the spin is set parallel to the $x$-axis. The outcome does not depend on the initial spin orientation and is obtained without using any external fields, microwaves or photons. The goal is achieved all-electrically, with voltages applied to the local gates.
The proposed nanodevice can also be used to perform other necessary quantum operations: readout, the Hadamard gate, the NOT gate and the CNOT gate, in $15\,\mathrm{ps}$, $45\,\mathrm{ps}$, $90\,\mathrm{ps}$ and $90\,\mathrm{ps}$, respectively. These estimated operation times, compared to the coherence time of about $34\,\mathrm{ns}$, look very promising.
In the performed simulations the nanodevice is modeled on similar nanostructures described in experimental papers, which supports its experimental feasibility. We used realistic material parameters, such as the InSb electron effective mass and the distinct dielectric constants of the nanowire and the surrounding insulator. The assumed electric pulse durations are short yet experimentally achievable, while their amplitudes are low enough to ensure the adiabaticity of the entire process. The estimated operation times are extracted directly from the time courses in Fig. \ref{fig:FIG4}. They are achievable in practice, but shortening them further might prove difficult.
\section{Acknowledgements}
This work has been supported by National Science Centre (NSC), Poland, under UMO-2014/13/B/ST3/04526. JP has been supported by National Science Centre, under Grant No. 2016/20/S/ST3/00141. This work was also partially supported by the Faculty of Physics and Applied Computer Science AGH UST dean grants No. 15.11.220.717/24 and 15.11.220.717/30 for PhD students and young researchers within subsidy of Ministry of Science and Higher Education.
\section{Introduction}
Extracting information from historical handwritten text documents in an optimal and efficient way is still an open challenge, since the text in such documents is not as simple to read as printed characters or modern handwritten calligraphy \cite{VeronicaRomero2016}, \cite{Toselli:2016:HWG:3043320.3051195}. Historical manuscripts contain information that offers an interpretation of the past of societies. Systems designed to search and retrieve information from historical documents must go beyond the literal transcription of sources. Indeed, it is necessary to bridge the semantic gap and extract semantic meaning from the contents; thus, the extraction of the relevant information carried by named entities (e.g. names of persons, organizations, locations, dates, quantities, monetary values, etc.) is a key component of such systems. Semantic annotation of documents, and in particular automatic named entity recognition, is not a perfectly solved problem either \cite{DBLP:journals/corr/LampleBSKD16}.
Many existing solutions use Artificial Neural Networks (ANNs) to transcribe handwritten text lines and then parse the transcribed text with a named entity recognition model, but the precision of these solutions still leaves room for improvement \cite{VeronicaRomero2016}, \cite{Toselli:2016:HWG:3043320.3051195}, \cite{competition}. One possible approach is to start from already segmented words, obtained by an automatic or manual process, and predict the semantic category using visual descriptors (cf. \cite{Toledo2016}). This has the benefit that, when the named entity prediction is correct, the transcription becomes much easier to predict correctly, since the language model can be restricted to the corresponding category. The downside is that we rarely have large amounts of data segmented at word level, which is key to the proper performance of most ANNs. If automatic word segmentation is needed, the whole information extraction process involves three steps, each of which is likely to accumulate errors.
Another, and the most common, option is to perform handwritten text recognition (HTR) first and then named entity recognition (NER). An advantage of this approach is that it has one step fewer than the previously explained approach, but it has the drawback that a wrong transcription directly affects the NER part.
Recent work on ANNs suggests that models solving tasks that are as general as possible may give similar or better performance than concatenating subprocesses, due to the error propagation in the different steps, as shown in \cite{DBLP:journals/corr/LiuFQJY16}, \cite{DBLP:journals/corr/BojarskiTDFFGJM16}. This is the main motivation of this work, and consequently we propose a single convolutional-sequential model to jointly perform transcription and semantic annotation. By adding a language model, the transcription can be restricted to each semantic category and therefore improved. The contribution of this work is to show the improvement obtained when merging a sequence of processes into a single one, thus avoiding the accumulation of errors and achieving better generalization.
Examples of historical handwritten text documents include birth, marriage and death records, which provide very meaningful information to reconstruct genealogical trees and track the locations of family ancestors, as well as interesting macro-indicators for scholars in the social sciences and humanities. The interpretation of such documents unavoidably requires the identification of named entities. As an experimental scenario, we illustrate the performance of the proposed method on a collection of handwritten marriage records.
The rest of the paper is organized as follows: the next section explains the task being considered. In Section \ref{stateofart} we review the state of the art in HTR and NER. In Section \ref{methodology} we explain our model architecture, ground truth setup and training details. In Section \ref{results} we analyze the results for the different configurations and, finally, in Section \ref{conclusion} we draw conclusions.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{competition.png}
\caption{\label{fig:frog}An example of a document line annotation from \cite{competition}.}
\label{competit_annotation}
\end{figure*}
\section{The Task: Information Extraction in Marriage Records}
\label{task}
The approach presented in this paper is general enough to be applied to many information extraction tasks, but due to time constraints and our access to a particular dataset, it is evaluated on the task of information extraction in a system for the analysis of population records, in particular handwritten marriage records. The task consists of transcribing the text and assigning to each word a semantic and a person category, i.e. determining which kind of word has been transcribed (name, surname, location, etc.) and to which person it refers. The dataset and evaluation protocol are exactly the same as those proposed in the ICDAR 2017 Information Extraction from Historical Handwritten Records (IEHHR) competition \cite{competition}.
The semantic and person categories to identify in the IEHHR competition are listed in Table \ref{categories}.
\begin{table}
\centering
\caption{Semantic and person categories in the IEHHR competition}
\label{categories}
\begin{tabular}{ll}
\textbf{Semantic} & \textbf{Person} \\ \hline
Name & Wife \\
Surname & Husband \\
Occupation & Wife's father \\
Location & Wife's Mother \\
Civil State & Husband's father \\
Other & Husband's mother \\
& Other person \\
& None
\end{tabular}
\end{table}
Two tracks were proposed. In the basic track the goal is to assign the semantic class to each word, whereas in the complete track it is also necessary to identify the person. An example of both tracks is shown in Figure \ref{competit_annotation}.
The dataset for this competition contains 125 pages with 1221 marriage records (paragraphs), where each record consists of several text lines giving information on the wife, the husband and their parents' names, occupations, locations and civil states. The text images are provided at word and line level, with line images naturally carrying the additional difficulty of word segmentation. More details of the dataset can be found in Table \ref{dataset_details}.
\begin{table}
\centering
\caption{Marriage Records dataset distribution}
\label{dataset_details}
\begin{tabular}{llll}
& Train & Validation & Test \\ \hline
Pages & 90 & 10 & 25 \\
Records & 872 & 96 & 253 \\
Lines & 2759 & 311 & 757 \\
Words & 28346 & 3155 & 8026 \\ \hline
\multicolumn{4}{l}{Out of vocabulary words: 5.57 \%}
\end{tabular}
\end{table}
\section{State of the art}
\label{stateofart}
Recent work shows that neural models allow the generalization of problems that were earlier solved separately \cite{DBLP:journals/corr/BojarskiTDFFGJM16}. This idea can also be applied to information extraction from handwritten text documents, which consists of HTR followed by NER. On the HTR side there is still a long way to go before human-level transcription is achieved \cite{NIPS2016_6257}. Attention models have helped to understand the inner behavior of neural networks when reading document images, but they still have lower accuracy than Recurrent Neural Network with Connectionist Temporal Classification (RNN+CTC) approaches \cite{DBLP:journals/corr/BlucheLM16}.
Named entity recognition is the problem of detecting and assigning a category to each word in a text, either at the part-of-speech level or in pre-defined categories such as the names of persons, organizations, locations, expressions of time, quantities, monetary values, percentages, etc. The goal is to select and parse relevant information from the text and the relationships within it. One could think it sufficient to keep lists of locations, common names and organizations, but such lists are rarely complete, and a single name can refer to different kinds of entities. It is also not easy to detect the properties of a named entity and how different named entities relate to each other. The most widely used models for this task have been \textit{conditional random fields} (CRFs), which were the state-of-the-art technique for some time \cite{Lafferty:2001:CRF:645530.655813}, \cite{Finkel:2005:INI:1219840.1219885}.
In the area of Natural Language Processing, Lample et al. \cite{DBLP:journals/corr/LampleBSKD16} proposed a combination of \textit{Long Short-Term Memory networks} (LSTMs) and CRFs, obtaining good results on the CoNLL2003 task. That problem is similar to the one we face, except that it starts from raw text. In this work, the input to the system consists of images of handwritten text lines, for which it is not even known how many characters or words are present. This undoubtedly introduces a higher difficulty.
In Adak's work \cite{7490147} a similar end-to-end approach from image to semantically annotated text is proposed, but in that case the key lies in identifying capital letters to detect possible named entities. The problem is that in many cases, such as in the IEHHR competition dataset \cite{competition}, named entities do not always have capital letters; moreover, it is a task-specific approach that could not be used in many other cases.
Finally, another concept that can help improve the quality of our models' predictions is curriculum learning \cite{Bengio:2009:CL:1553374.1553380}: by letting the model look at the data in a meaningful order, such that the difficulty of prediction goes from easy to hard, training can reach a much better performance.
\section{Methodology}
\label{methodology}
The main goal of this work is to explore several possibilities for a single end-to-end trainable ANN model that receives text images as input and outputs transcripts already labeled with their corresponding semantic information. One possibility would be an ANN with two sequence outputs, one for the transcript and the other for the semantic labels. However, keeping an alignment between these two independent outputs complicates the solution. An alternative, which is the approach taken here, is to have a single sequence output that combines the transcript and the semantic information. There are several ways in which this information can be encoded such that a model learns to predict it. The next subsection describes the different encodings tested in this work. The following subsections describe the architecture chosen for the neural network, the image input and the characteristics of the learning.
\subsection{Semantic encoding}
The first variable we explored is the way in which the ground truth transcript and semantic labels are encoded so that the model learns to predict them. To allow the model to recognize words not observed during training (out-of-vocabulary words), the symbols that the model learns are the individual characters plus a space symbol that marks the separation between words. For the semantic labels, special tags are added to the list of symbols of the recognizer. The different possibilities are explained below.
\subsubsection{Open \& close separate tags}
In the first approach, the words are enclosed between \textbf{opening and closing tags} that encode the semantic information. The category and the person have independent tags. Thus, each word is encoded by starting with opening category and person symbols, followed by a symbol for each character, and ends with closing person and category symbols. The ``other'' and ``none'' semantics are not encoded. For example, the ground truth of the image shown in Figure \ref{fig:frog} would be encoded as:
\begin{center}
\texttt{\item h a b i t a t \{space\} e n \{space\} <location> <husband> B a r a </husband> </location> \{space\} a b \{space\} <name> <wife> E l i s a b e t h </wife> </name> ... }
\end{center}
This kind of encoding is not expected to perform well in the IEHHR task, since tags are assigned to only one word at a time, so it is redundant to have two tags per word. However, in other tasks it could make sense to have opening and closing tags, which is why it has been considered in this work.
\subsubsection{Single separate tags}
Similar to the previous approach, in this case both the category and person tags are independent symbols, but only one of each is added, before the word. Thus, the ground truth of the previous example would be encoded as:
\begin{center}
\texttt{\item h a b i t a t \{space\} e n \{space\} <location/> <husband/> B a r a \{space\} a b \{space\} <name/> <wife/> E l i s a b e t h \{space\} J u a n a \{space\} <state/> <wife/> \{space\} d o n s e l l a ... }
\end{center}
\subsubsection{Change of person tag}
In this variation of the semantic encoding, the person label is only given when there is a \textbf{change of person}, i.e. the person label indicates that all the upcoming words refer to that person until another person label appears, in contrast to the previous approaches, where the person label is given for each word. This approach is possible due to the structured form of the sentences in the dataset. As we can see in Figure \ref{fig:iehhr_record}, the marriage records give the information of all the family members without mixing them.
\begin{center}
\texttt{<wife/> <name/> E l i s a b e t h \{space\} <name/> J u a n a \{space\} <state/> d o n s e l l a ... }
\end{center}
\subsubsection{Single combined tags}
The final possibility tested for encoding the named entity information is to \textbf{combine the category and person} labels into a single tag. The example would then be encoded as:
\begin{center}
\texttt{ h a b i t a t \{space\} e n \{space\} <location\_husband/> B a r a \{space\} a b \{space\} <name\_wife/> E l i s a b e t h \{space\} <name\_wife/> J u a n a \{space\} <state\_wife/> d o n s e l l a ... }
\end{center}
\subsection{Level of input images: lines or records}
The IEHHR competition dataset includes manually segmented images at word level. However, to lower the ground-truthing cost and avoid the need for a word segmenter, we assume that only images at line level are available. Given text line images, the obvious approach is to feed the system individual line images for recognition. However, some semantic labels would be very difficult to predict from a single line image due to the lack of context. For example, it might be hard to know whether a name corresponds to the husband or to the father of the wife if the full record is not given. Because of this, in the experiments we have explored both text line images and full marriage record images as input, concatenating all the lines of a record one after the other.
\begin{figure}
\includegraphics[width=0.5\textwidth]{iehhr_record.png}
\caption{Reading the whole record makes it easier to transcribe as well as to identify the semantic categories based on context information.}
\label{fig:iehhr_record}
\end{figure}
\subsection{Transfer learning}
The next variable we examined was the effect of \textbf{transfer learning} from a previously trained HTR model. Transfer learning consists of training for the same or a similar task (HTR) using other datasets, and then fine-tuning for our purpose, in our case HTR+NER. To perform transfer learning from a generic HTR model, the softmax layer is removed and replaced with a softmax whose outputs are the possible classes of the fine-tuning step; in our case, these are all the characters of the alphabet plus the semantic labels. In the transfer learning experiments we tested a single HTR model, trained with the following datasets: IAM~\cite{Marti99afull}, Bentham~\cite{6981116}, Bozen~\cite{Sanchez2016a}, and some datasets used by us internally: IntoThePast, Wiensanktulrich, Wienvotivkirche and ITS.
\subsection{Curriculum Learning}
The last variation we propose is curriculum learning, i.e. starting with easier demands on the model and then increasing the difficulty. Here, this method can be interpreted as first learning to transcribe single text lines and, once that training has finished, continuing with learning to transcribe images of whole marriage records.
\subsection{Model architecture and training}
In this work we use a CNN+BLSTM+CTC model, one of the most common models for HTR, although other HTR models could be used as well. In particular, the architecture consists of 4 convolutional layers with max pooling, followed by 3 stacked BLSTM layers. The detailed model architecture is shown in Figure \ref{architecture}.
\begin{figure*}
\centering
\includegraphics[scale=0.45]{htr_model_2.png}
\caption{Model architecture used.}
\label{architecture}
\end{figure*}
To train the model we use the Laia HTR toolkit \cite{laia2016}, which relies on Baidu's parallel implementation of CTC \cite{Graves:2006:CTC:1143844.1143891}. Training consists of minimizing the loss or ``objective'' function
\begin{equation}
O^{ML}(S,\mathcal{N}_w)=-\sum_{(x,z)\in S} \ln\left(p(z|x)\right)
\end{equation}
where $S$ is the training set, $x$ is the input sequence (visual features), $z$ is the sequence labeling (transcription) for $x$ and
\begin{equation}
\mathcal{N}_w:(\mathbb{R}^m)^T\mapsto(\mathbb{R}^n)^T
\end{equation}
is a recurrent neural network with $m$ inputs, $n$ outputs and weight vector $w$. The probabilities of a labeling of an input sequence are calculated with a dynamic programming algorithm called ``forward-backward''.
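For intuition, the forward part of this recursion can be written in a few lines. The following is a didactic Python sketch of the standard CTC forward pass (not the parallel implementation used by Laia), computing $p(z|x)$ from per-frame label posteriors:

```python
import numpy as np

# Didactic sketch of the CTC forward pass (not Laia's implementation):
# computes p(label | x) given per-frame posteriors y[t, k] (rows sum to 1).
def ctc_prob(label, y, blank=0):
    ext = [blank]
    for c in label:                      # interleave blanks: length 2|z|+1
        ext += [c, blank]
    T, S = y.shape[0], len(ext)
    alpha = np.zeros((T, S))
    alpha[0, 0] = y[0, ext[0]]
    if S > 1:
        alpha[0, 1] = y[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]   # skip over the blank
            alpha[t, s] = a * y[t, ext[s]]
    return alpha[T - 1, S - 1] + (alpha[T - 1, S - 2] if S > 1 else 0.0)

# Sanity check: with 2 frames and the alphabet {blank, 'a'}, the only
# reachable labelings are "" and "a", so their probabilities sum to 1.
rng = np.random.default_rng(0)
y = rng.random((2, 2))
y /= y.sum(axis=1, keepdims=True)
print(ctc_prob([], y) + ctc_prob([1], y))  # -> 1.0 (up to rounding)
```

A production implementation works in log space and also computes the backward variables to obtain gradients, but the recursion above is the same one being minimized in the objective function.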
Some special features of our model are that the activation function of the convolutional layers is the leaky ReLU, $f(x)=x$ if $x>0$ and $f(x)=0.01x$ otherwise, and that we use batch normalization to reduce the \textit{internal covariate shift}~\cite{DBLP:journals/corr/IoffeS15}.
\section{Results}
\label{results}
We compare the performance of our methods%
\footnote{Scripts used for the experiments available at \url{http://doi.org/10.5281/zenodo.1174113}}
with the results of the participants of the IEHHR competition \cite{competition}, using the same metric; see Table \ref{results_table}. The evaluation metric counts the words that were correctly transcribed and annotated with their category and person labels, relative to the total number of words in the ground truth. For words that were not correctly transcribed but whose category and person labels match one or more words in the ground truth, $1-\mathrm{CER}$ (character error rate) on the best matching word is added to the score. This means that the named entity recognition part is vital for a good score, since a perfectly transcribed word contributes 0 to the score if its named entity is incorrectly detected.
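Our reading of this per-word scoring can be sketched as follows (an illustration of the idea, not the official IEHHR evaluation script, which also handles word alignment within each record):

```python
# Illustration of the per-word score (not the official IEHHR script):
# a correctly labelled word scores 1 if its transcript is exact, otherwise
# 1 - CER against the best label-matching reference word; 0 if no reference
# word carries the same (category, person) labels.
def cer(hyp, ref):
    """Character error rate: Levenshtein distance normalised by len(ref)."""
    d = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        prev, d[0] = d[0], i
        for j, r in enumerate(ref, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (h != r))
    return d[-1] / max(len(ref), 1)

def word_score(hyp, labels, reference):
    """reference: list of (text, labels); labels = (category, person)."""
    candidates = [text for text, lab in reference if lab == labels]
    if not candidates:
        return 0.0
    return max(0.0, max(1.0 - cer(hyp, text) for text in candidates))

ref = [("Bara", ("location", "husband"))]
print(word_score("Bara", ("location", "husband"), ref))  # -> 1.0
print(word_score("Bora", ("location", "husband"), ref))  # -> 0.75
```

The second call illustrates the partial credit: one wrong character out of four gives a CER of 0.25 and hence a contribution of 0.75, but only because the semantic labels were predicted correctly.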
\begin{table}
\caption{Average scores of the experiments compared with the IEHHR competition participants' methods.\label{tab:results}}
\label{results_table}
\centering
\begin{minipage}{\linewidth}
\relsize{-1}
\newcommand{\hspace{5pt}}{\hspace{5pt}}
\newcommand{\addlinespace[4pt]}{\addlinespace[4pt]}
\begin{tabular}{ccccc}
\toprule
{\bf Method} & \makecell{\bf Segm. \\\bf Level} & \makecell{\bf Proc. \\\bf Level} & \makecell{\bf Track \\\bf Basic} & \makecell{\bf Track \\\bf Complete} \\
\midrule
\multicolumn{5}{c}{\relsize{-1}\bf IEHHR competition results} \\
\midrule
\makecell[l]{Hitsz-ICRC-1 \\\hspace{5pt} CNN HTR+NER} & Word & Record\myfootnote[fn:hitsz]{\relsize{-1}HTR is word based.} & 87.56 & 85.72 \\\addlinespace[4pt]
\makecell[l]{Hitsz-ICRC-2 \\\hspace{5pt} ResNet HTR+NER} & Word & Record\myfootnotemark{fn:hitsz} & 94.16 & 91.97 \\\addlinespace[4pt]
\makecell[l]{Baseline \\\hspace{5pt} HMM+MGGI} & Line & Record & 80.24 & 63.08 \\\addlinespace[4pt]
\makecell[l]{CITlab-ARGUS-1 \\\hspace{5pt} LSTM+CTC+regex} & Line & Record\myfootnote[fn:citlab]{\relsize{-1}Posterior character probabilities computed at line level.}
& 89.53 & 89.16 \\\addlinespace[4pt]
\makecell[l]{CITlab-ARGUS-2 \\\hspace{5pt} LSTM+CTC \\\hspace{5pt} +OOV+regex} & Line & Record\myfootnotemark{fn:citlab}
&\textbf{ 91.93 }& \textbf{91.56} \\
\midrule
\multicolumn{5}{c}{\relsize{-1}\bf Results of our experiments} \\
\midrule
\makecell[l]{Separate-single tags }
& Line & Line
& 73.49
& 61.96 \\\addlinespace[4pt]
\makecell[l]{Separate-\\\hspace{5pt} open-close tags}
& Line & Line
& 73.70
& 64.09 \\\addlinespace[4pt]
\makecell[l]{Combined-single tags}
& Line & Line
& 87.96
& 80.74 \\\addlinespace[4pt]
\makecell[l]{Combined-single tags \\\hspace{5pt} + transfer learn}
& Line & Line
& 87.01
& 80.05 \\\addlinespace[4pt]
\makecell[l]{Change person tag \\\hspace{5pt} + transfer learn}
& Line & Record
& 84.41
& 80.51 \\\addlinespace[4pt]
\makecell[l]{Combined-single tags \\\hspace{5pt} + transfer learn}
& Line & Record
& 86.58
& 84.72 \\\addlinespace[4pt]
\makecell[l]{Combined-single tags \\\hspace{5pt} + transfer learn \\\hspace{5pt} + curriculum learn}
& Line & Record
& \textbf{90.58}
& \textbf{89.39} \\\addlinespace[4pt]
\makecell[l]{Combined-single tags \\\hspace{5pt} + transfer learn \\\hspace{5pt} + curriculum learn \\\hspace{5pt} + alt. line extraction}
& Word\myfootnote[fn:comp]{\relsize{-1}Not fair to compare with the IEHHR results because it uses a different segmentation (alternative line extraction) than the one provided in the competition.}
& Record\myfootnotemark{fn:comp}
& \textbf{96.39}\myfootnotemark{fn:comp}
& \textbf{96.63}\myfootnotemark{fn:comp} \\\addlinespace[4pt]
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
We can observe in the results that our best performance is reached when the model receives the whole marriage record, probably because of the additional contextual information. For example, context can help detect named entities composed of several words when they are written on separate consecutive lines. We also observe that the best performing encoding of the semantic labels is the combined tags setup. This may be due to the smaller number of symbols to predict, which might require the network to store fewer long-term dependencies.
The most significant improvement was achieved when running our best performing configuration with an alternative line extraction. In the competition, the text lines were extracted by including all the bounding boxes of the words within every line. As a result, when there are large ascenders and descenders, the bounding box of the line is too tall and includes sections of other text lines. To cope with this limitation, we used the XML containing the exact location of the segmented words within a page and, for the y-coordinates, took an average of the upper and lower limits of the word bounding boxes, weighted by the word widths. As expected, the performance improves considerably because the segmentation of the text lines is more accurate. However, this result is not directly comparable to the other participants' methods because the segmentation is different.
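The width-weighted y-coordinate computation described above can be sketched as follows (a minimal illustration with a hypothetical word-box format; not the code used in our experiments):

```python
def line_y_limits(word_boxes):
    """Estimate the upper/lower y-limits of a text line as the
    width-weighted average of its word bounding boxes.

    word_boxes: list of (x, y_top, width, y_bottom) tuples, one per word.
    (Hypothetical format; in the paper these come from the dataset XML.)
    """
    total_width = sum(w for _, _, w, _ in word_boxes)
    y_top = sum(t * w for _, t, w, _ in word_boxes) / total_width
    y_bottom = sum(b * w for _, _, w, b in word_boxes) / total_width
    return y_top, y_bottom

# Example: a wider word pulls the average towards its own limits.
boxes = [(0, 0.0, 1.0, 10.0), (1, 4.0, 3.0, 14.0)]
top, bottom = line_y_limits(boxes)  # top = 3.0, bottom = 13.0
```

Weighting by word width prevents a single short word with an unusual ascender or descender from stretching the whole line box.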
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{error1.png}
\caption{Examples of errors committed in the predictions.}
\label{fig_error1}
\end{figure*}
In Figure~\ref{fig_error1} we show some examples of errors. We can see that they consist of small mistakes that are understandable when looking at the text images; it is genuinely difficult to transcribe names that have never been seen before. The proposed approach could be combined with a category-based language model~\cite{VeronicaRomero2016}, which could potentially improve the results.
Our best performing model took 4 hours and 38 minutes to run 133 training epochs on an NVIDIA GTX 1080 GPU. The training and validation error rates can be seen in Figure~\ref{fig_train}. As training configuration we used an adversarial regularizer~\cite{Goodfellow14_arXiv} with weight 0.5, an initial learning rate of $5\cdot 10^{-4}$ with a decay factor of 0.99 per epoch, and a batch size of 6.
\begin{figure}
\includegraphics[width=0.5\textwidth]{train.png}
\caption{Train and validation (green and violet respectively) CER (\%).}
\label{fig_train}
\end{figure}
\section{Conclusion}
\label{conclusion}
In this paper we have proposed to solve a complex task, namely joint text recognition and named entity recognition, with a single end-to-end neural model. Our first conclusion is that, also in information extraction problems, a generic model solving two consecutive tasks can perform at least as well as two separate models. This holds even though less prepared data is available (record-level images instead of sequences of word images) and we do not make use of task-specific tools such as dictionaries or language models.
By investigating different ways of encoding the image transcripts and semantic labels, we have shown that the recognition performance is strongly affected by the choice of encoding, even though every encoding represents the same information. Also, curriculum learning (first text lines, then records) can make the model reach a higher final prediction accuracy.
Future work includes the use of language models to improve the accuracy of the predictions, studying the effect of automatic text line and record detection, and evaluating our method on other datasets.
\section*{Acknowledgments}
This work has been partially supported by the Spanish project TIN2015-70924-C2-2-R, the grant 2016-DI-095 from the Secretaria d'Universitats i Recerca del Departament d'Economia i Coneixement de la Generalitat de Catalunya, the Ramon y Cajal Fellowship RYC-2014-16831, the CERCA Programme /Generalitat de Catalunya, and RecerCaixa (XARXES, 2016ACUP-00008), a research program from Obra Social ``La Caixa'' with the collaboration of the ACUP.
\bibliographystyle{IEEEtran}
\section{Introduction}
Let $D$ be a squarefree integer, $D\neq 0,1$, and let $K= \mb{Q}(\sqrt{D})$
be the corresponding quadratic field. The fundamental discriminant $d$ of $K$ equals
$D$ if $D \equiv 1 \pmod{4}$, and $4D$ if $D \equiv 2,3 \pmod{4}$.
Let $\chi_d(n)$ be the Kronecker symbol $\kr{d}{n}$,
and $L\pr{s,\chi_d}$ the quadratic Dirichlet $L$-function given by the
Dirichlet series
\begin{equation}
L\pr{s,\chi_d} = \sum_{n=1}^\infty \fr{\chi_d(n)}{n^s}, \qquad \Re(s)>0,
\end{equation}
satisfying the functional equation
\begin{equation}
L\pr{s,\chi_d} = \lr{d}^{\fr{1}{2} - s} X(s,a) L\pr{1-s, \chi_d},
\end{equation}
where
\begin{equation}
\label{eq:X}
X(s,a) =
\pi^{s-\fr{1}{2}} \fr{\Gamma\pr{\fr{1-s+ a}{2}}}{\Gamma\pr{\fr{s+a}{2}}}, \qquad a =
\begin{cases} 0 \quad &\tx{if $d> 0$,} \\ 1 \quad &\tx{if $d<0$.} \end{cases}
\end{equation}
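For concreteness, the character values $\chi_d(n)$ entering the Dirichlet series above can be computed from the Jacobi symbol by the standard iterative quadratic-reciprocity algorithm. The following is a self-contained sketch for a fundamental discriminant $d$ and $n \geq 1$; the actual evaluations of $L(1/2,\chi_d)$ in this paper use the methods of Section 3:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via iterated quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # pull out factors of 2 from the numerator
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def chi(d, n):
    """Kronecker symbol chi_d(n) = (d/n) for fundamental discriminant d, n >= 1."""
    e = 0
    while n % 2 == 0:              # split n = 2^e * m with m odd
        n //= 2
        e += 1
    if e and d % 2 == 0:
        return 0
    two = 1 if d % 8 in (1, 7) else -1   # (d/2) for odd d
    return (two ** e) * jacobi(d, n)

# chi(5, n) for n = 1..5 gives the real character mod 5: 1, -1, -1, 1, 0
```

Partial sums of $\sum \chi_d(n)/n^{1/2}$ built from this converge far too slowly to be useful for large $|d|$, which is why smoothed approximate functional equations (or the Epstein zeta expansion for $d<0$) are needed in practice.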
In this paper, we describe some experiments concerning
the moments of quadratic Dirichlet $L$-functions at the central critical point:
\begin{equation}
\sum_{d \in D(X)} L\pr{1/2,\chi_d}^k. \label{eq: integralmoments}
\end{equation}
Here, $k$ is a positive integer, and $D(X)$ denotes the set of fundamental discriminants
with $|d|< X$.
Several conjectures exist for these moments. For instance,
Keating and Snaith \cite{keat: snaith}, motivated by the fundamental work of
Katz and Sarnak \cite{katz: sarnak} and based on an analogous result in Random
Matrix Theory, conjectured a formula for the leading asymptotics of (\ref{eq: integralmoments}).
Specifically, they conjectured
that, as $X\rightarrow\infty$,
\begin{equation}
\fr{1}{\lr{D(X)}} \sum_{d\in D(X)} L(1/2,\chi_d)^k \sim
a_k \prod_{j=1}^k \fr{j!}{(2j)!} \log(X)^{\fr{k(k+1)}{2}}, \label{eq: KeatingSnaith}
\end{equation}
where $a_k$ is an arithmetic factor, described by Conrey and Farmer \cite{cf: meanLfunc},
of the form
$$
a_k = \prod_p \fr{\pr{1 - \fr{1}{p}}^{\fr{k(k+1)}{2}}}{1 + \fr{1}{p}} \pr{\fr{\pr{1 -
\fr{1}{\sqrt{p}}}^{-k} + \pr{1 + \fr{1}{\sqrt{p}}}^{-k}}{2} + \fr{1}{p}}.
$$
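The arithmetic factor $a_k$ can be approximated numerically by truncating this Euler product over primes up to some bound. A sketch (for $k=1$ the local factor simplifies algebraically to $1 - 1/(p^2+p)$, which the test uses as a sanity check):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, n + 1) if sieve[p]]

def a_k(k, prime_bound=10**4):
    """Truncated Euler product for the arithmetic factor a_k."""
    result = 1.0
    for p in primes_up_to(prime_bound):
        q = 1.0 / math.sqrt(p)     # p^{-1/2}
        local = ((1 - 1 / p) ** (k * (k + 1) // 2) / (1 + 1 / p)) * (
            ((1 - q) ** (-k) + (1 + q) ** (-k)) / 2 + 1 / p
        )
        result *= local
    return result
```

Each local factor is $1 + O(1/p^2)$, so the truncated product converges quickly in the bound.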
In a few cases, Keating and Snaith's conjecture
agrees with known theorems, e.g. Jutila for
$k =1,2$ \cite{jutila: character}, and Soundararajan for $k=3$ \cite{sound: quadDir}.
Subsequently, a more precise asymptotic expansion for (\ref{eq: integralmoments}) was predicted by
Conrey, Farmer, Keating, Rubinstein, and Snaith \cite{cfkrs: intmom}. The lower terms are affected
by the form of the Gamma factors in the functional equation for $L(s,\chi_d)$, so naturally they
considered the subset of $d>0$ separately from $d<0$. Therefore, let
\begin{eqnarray}
D_+(X) &=& \{ d \in D(X) : d>0\} \notag \\
D_-(X) &=& \{ d \in D(X) : d<0\}.
\end{eqnarray}
The conjecture of CFKRS states that:
\begin{equation}
\label{eq: moment asympt}
\sum_{d \in D_\pm(X)} L\pr{1/2,\chi_d}^k \sim
\frac{3}{\pi^2} X \mathcal Q_\pm(k,\log X),
\end{equation}
where $\mathcal Q_\pm(k,x)$ is a polynomial of degree $k(k+1)/2$ in $x$.
The fraction $3/\pi^2$ accounts for the density of fundamental discriminants amongst all the integers.
In their paper, CFKRS also conjectured a remainder term of size $O(X^{1/2+\epsilon})$ but the evidence,
both theoretical and numerical (discussed below), suggests the existence of lower order terms
for $k \geq 3$.
The beauty in their conjecture lies in the fact that it gives a formula for
the polynomial in question.
The polynomial $\mathcal Q_\pm(k,\log X)$ is expressed
in terms of a more fundamental polynomial $Q_\pm(k,x)$ of the same degree
that captures the moments locally:
\begin{equation}
\mathcal Q_\pm(k,\log{X}) = \frac{1}{X} \int_1^X Q_\pm(k,\log{t}) dt.
\end{equation}
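Since $Q_\pm(k,x)$ is a polynomial, this average can be evaluated exactly using integration by parts, $\int_1^X (\log t)^j\,dt = X(\log X)^j - j\int_1^X(\log t)^{j-1}\,dt$. A sketch taking the coefficients of $Q$ as input:

```python
import math

def averaged_poly(coeffs, X):
    """Given Q(x) = sum_j coeffs[j] * x**j, return
    (1/X) * integral_1^X Q(log t) dt, via the recursion
    I_j = X (log X)^j - j I_{j-1},  I_0 = X - 1.
    """
    L = math.log(X)
    I = X - 1.0                     # I_0
    total = coeffs[0] * I
    for j in range(1, len(coeffs)):
        I = X * L ** j - j * I      # I_j from I_{j-1}
        total += coeffs[j] * I
    return total / X
```

With the numerically computed coefficients of $Q_\pm(k,x)$, this gives $\mathcal Q_\pm(k,\log X)$ without any numerical quadrature.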
The polynomial $Q_\pm(k,x)$ is described below, and its leading coefficient
agrees with the conjecture of Keating and Snaith (\ref{eq: KeatingSnaith}).
The polynomial $Q_\pm(k,x)$ of CFKRS is given by them as the $k$-fold residue:
\begin{equation}\label{eq: main integral}
Q_\pm(k,x)= \frac{(-1)^{k(k-1)/2} 2^k}{k!} \frac {1}{(2\pi i)^k}
\oint \cdots \oint
\frac{G_\pm(z_1,\ldots,z_k)\Delta(z_1^2,\ldots,z_k^2)^2}{\prod_{j=1}^k z_j^{2k-1}}
e^{\frac x 2 \sum_{j=1}^k z_j} \, dz_1\ldots dz_k
\end{equation}
where
\begin{equation}
G_\pm(z_1,\ldots,z_k) = A_k(z_1,\ldots,z_k) \prod_{j=1}^k X(\frac 1 2 +z_j,
a)^{-1/2} \prod_{1\leq i\leq j \leq k} \zeta(1+z_i+z_j),
\end{equation}
and
\begin{equation}
\Delta(z_1^2,\ldots,z_k^2)
= \prod_{1\leq i < j \leq k} (z_j^2 -z_i^2)
\end{equation}
is a Vandermonde determinant.
Here, $a=0$ for $G_+$ and $a=1$ for $G_-$, $X(s,a)$ is given in~\eqref{eq:X},
and $A_k$ equals the Euler product, absolutely convergent for $|\Re z_j|
< 1/2$, defined by
\begin{multline}
\index{$A_k$}A_k(z_1, \ldots, z_k) = \prod_p \prod_{1\leq i\leq j \leq k}
\left(1-\frac{1}{p^{1+z_i+z_j}} \right)\\
\times \left(\frac 1 2 \left( \prod_{j=1}^{k}\left(1-\frac 1
{p^{\frac 1 2 +z_j}} \right)^{-1} +
\prod_{j=1}^{k}\left(1+\frac 1{p^{\frac 1 2 +z_j}}
\right)^{-1} \right) +\frac 1 p \right) \left(1+\frac 1 p
\right)^{-1}.
\end{multline}
More generally, CFKRS predicted that for suitable
weight functions $g$,
\begin{equation}
\label{eq: cfkrs}
\sum_{d \in D_\pm(\infty)} L(1/2,\chi_d)^k g\pr{\lr{d}}
\sim \frac{3}{\pi^2}
\int_1^\infty Q_\pm(k,\log{t}) g(t) dt.
\end{equation}
The method that CFKRS used
to heuristically derive this formula relies on number theoretic techniques, specifically
the approximate functional equation, but was guided by analogous results in random matrix theory
to help determine the form of the conjecture.
An alternative approach for conjecturing moments exists. In their paper \cite{dgh: multDir},
Diaconu, Goldfeld, and Hoffstein used the double Dirichlet series
$$
Z_k\pr{s,w} = \sum_{d\in D(\infty)} \fr{L\pr{s,\chi_d}^k}{\lr{d}^w}
$$
to study the moments of $L(1/2,\chi_d)$.
In particular, they showed how one can derive a formula for the
cubic moments of $L\pr{1/2,\chi_d}$ by investigating the polar behavior of
$Z_3\pr{s,w}$.
The method of DGH produces a proof for the cubic moment of $L(1/2,\chi_d)$.
Specifically they show that the difference of both sides of \eqref{eq: moment asympt}
for $k=3$ is, for any $\epsilon>0$, of size $O_\epsilon(X^{\theta+\epsilon})$,
where $\theta=.85366\ldots$. They also gave a remainder term for a smoothed
cubic moment with $\theta=4/5$.
Recently, Young~\cite{young2} has obtained an improved
estimate for the remainder term of size $O(X^{3/4+\epsilon})$. The moments he
considered were smoothed, and, for simplicity, he considered the subset of
discriminants divisible by 8 and positive. The appearance of $3/4+\epsilon$ is very
interesting in light of the next paragraph.
The method of DGH predicts the existence of
a further lower order term of size $X^{3/4}$. In particular,
DGH conjectured that there exists a constant $b$ such that
\begin{equation}
\sum_{d\in D(X)} L(1/2,\chi_d)^3
= \frac{6}{\pi^2} X\mc{Q}\pr{3,\log X} + bX^{\fr{3}{4}} + O\pr{X^{\fr{1}{2} + \epsilon}}. \label{eq: 421}
\end{equation}
The existence of such a term comes from a pole of the double
Dirichlet series at $w=3/4$ and $s=1/2$,
the conjectured meromorphic continuation of
$Z_3\pr{1/2,w}$ to $\Re(w) < 3/4$, and assumes a growth condition on
$Z_3\pr{1/2,w}$.
DGH also suggest that additional lower order terms appear, infinitely many for
each $k\geq 4$. The form of these terms is described in Zhang's
survey \cite{zhang: appDir}, along with an exposition of the approach
using double Dirichlet series. Their conjecture
involving lower terms is stated in the following form. For $k\geq 4$, and every $\epsilon>0$,
\begin{equation}
\sum_{d \in D(X)} L\pr{1/2,\chi_d}^k=
\sum_{l=1}^\infty X^{(l+1)/(2l)} P_l(\log{X}) + O(X^{1/2+\epsilon}),
\label{eq: integralmoments_k}
\end{equation}
where every $P_l$ is a polynomial depending on $k$. In particular, $P_1$ is a polynomial
of degree $k(k+1)/2$, presumably agreeing with the polynomial predicted by the CFKRS
conjecture, but to our knowledge this agreement has not been checked.
Zhang \cite{zhang: cubmom} further conjectured that $b \approx - .2154$,
and, in a private communication to one of the authors, reported that
he also computed the constants associated with the $X^{3/4}$ term
when one restricts to $d<0$, or to $d>0$, thus predicting:
\begin{eqnarray}
\sum_{d\in D_\pm(X)} L(1/2,\chi_d)^3
&=& \frac{3}{\pi^2} X\mathcal Q_\pm(3,\log X) +
b_\pm X^{\fr{3}{4}} + O\pr{X^{\fr{1}{2} + \epsilon}},
\end{eqnarray}
with $b_+ \approx -.14$ and $b_- \approx -.07$ (note that $b=b_+ + b_-$).
His evaluation of $b$, $b_+$, and $b_-$ involves an elaborate sieving process,
and also depends on unproven hypotheses regarding the meromorphic continuation
and rate of growth of $Z_3$.
While it might seem that a term as large as $X^{3/4}$ in the cubic moment
should be easy to detect, two things make it very challenging in this context:
the small size of the constants involved, and the fact that the remainder
term, conjecturally of size $O(X^{1/2+\epsilon})$, dominates even in our large
data set; presumably the $X^\epsilon$ can get as large as some power of $\log X$,
which can dominate $X^{1/4}$ even for values of $X$ as large as $10^{11}$.
For this reason, we embarked on a large scale
computation in order to see whether such a lower main term in the cubic moment
could be detected or not. We also carried out extensive
verification of the predictions of CFKRS for $k=1,\ldots,8$. While CFKRS provided
some modest data in \cite{cfkrs: intmom}, for $|d|<10^7$, we carried out tests
for $-5\times10^{10}<d<0$ and $0<d<1.3 \times 10^{10}$.
In order to dampen the effect of the noisy remainder term, we also considered
smoothed moments.
Our numerical results are described in Section 2. Interestingly, they
lend support to both the full asymptotic expansion conjectured by CFKRS,
and to the existence of lower terms predicted by DGH and Zhang.
In Section 3 we describe the two methods that we used to compute a large number
of $L(1/2,\chi_d)$, for $d<0$ and, separately, for $d>0$. The first, for
$d<0$, is based on the theory of binary quadratic forms, and uses Chowla and Selberg's
$K$-Bessel expansion of the Epstein zeta function~\cite{cs: epzeta}. The
second, for $d>0$, uses a traditional smooth approximate functional equation.
Both methods have comparable runtime complexities, but the former has the
advantage of being faster by a constant factor. See the
end of the paper for a discussion that compares the two runtimes.
\section{Numerical Data}
In this section, we numerically examine the conjectures of CFKRS, DGH, and Zhang
for the moments of $L(1/2,\chi_d)$.
The collected data provides further evidence in favour of the
CFKRS conjecture concerning the full asymptotics of the moments of
$L(1/2,\chi_d)$. With respect to the remainder term, the numerics also seem
to suggest the presence of additional lower terms as predicted by DGH and
Zhang.
In Tables 1--2 and Figures 1--4 we depict the quantities
\begin{equation}
R_\pm(k,X) := \fr{\displaystyle \sum_{d\in D_{\pm}(X)} L(1/2,\chi_d)^k}
{\displaystyle \frac{3}{\pi^2}\int_1^X Q_\pm(k,\log{t}) dt} , \label{eq: quotient}
\end{equation}
and the related difference
\begin{equation}
\Delta_\pm(k,X) := \sum_{d\in D_{\pm}(X)} L(1/2,\chi_d)^k\;\;
- \frac{3}{\pi^2}\int_1^X Q_\pm(k,\log{t}) dt ,
\label{eq: difference}
\end{equation}
for $k = 1,\ldots, 8$ and both positive and negative discriminants $d$.
The quantity \eqref{eq: quotient} measures the consistency of the CFKRS prediction,
while~\eqref{eq: difference} allows one to see the associated remainder term.
The numerator of (\ref{eq: quotient}) was calculated by computing many values of
$L(1/2,\chi_d)$, using the methods described in the next two sections. The
denominator was obtained from numerically approximated values of the coefficients of
$Q_\pm(k,\log t)$, computed
in the same manner performed in \cite{cfkrs: intmom}, though to higher precision. Tables of the
coefficients of the polynomials $Q_\pm(k,x)$ can be found in \cite{cfkrs: intmom}.
These values were then also used in graphing the difference (\ref{eq: difference}).
Tables 1--2 provide strong numerical support in favor of the asymptotic
formula predicted by CFKRS, described in equations~\eqref{eq: moment asympt}
to~\eqref{eq: main integral}, for both $d<0$ and $d>0$, agreeing to 7--8 decimal places
for $k=1$, and 4--5 decimal places for $k=8$.
In the figures below we depict thousands of values of the
quantities~\eqref{eq: quotient} and~\eqref{eq: difference}
at multiples of $10^7$, i.e. $X=10^7,2\times10^7,\ldots$. We display data up to
$X=1.3\times10^{10}$ for $d>0$, and $X=5\times 10^{10}$ for $d<0$. The larger
amount of data for $d<0$ reflects the faster method that we used for computing
the corresponding $L$-values.
In Figures~\ref{fig: fig1} and~\ref{fig: fig2}, notice that each graph fluctuates tightly about
one, with the extent of fluctuation becoming progressively larger as
$k$ increases, as indicated by the varying vertical scales. The graphs show
excellent agreement with the full asymptotics as predicted by CFKRS across all
eight moments computed, for both $d<0$ and $d>0$. One also notices a
slight downward shift from $1$ in the $k=3$ plots, as predicted by DGH and Zhang.
We also depict in Figures~\ref{fig: fig3} and~\ref{fig: fig4} the
differences~\eqref{eq: difference} as dots, as well as the running average of
the plotted differences as a solid curve. While we plot the average every
$10^7$, these running averages were computed by sampling the differences every
$10^6$. We chose to display $1/10$th of the computed values in order to make
our plots more readable given the limited resolution of computer displays and
printers.
Averaging has the effect of smoothing the moment and reducing the impact
of the noisy remainder term. It allows one to more clearly see if
there are any biases hiding within the noise. This running average gives a
discrete approximation to the smoothed difference:
\begin{equation}
\Delta_\pm(k,X) := \sum_{d\in D_{\pm}(X)} L(1/2,\chi_d)^k (1-|d|/X)\;\;
- \frac{3}{\pi^2}\int_1^X Q_\pm(k,\log{t}) (1-t/X) dt , \label{eq: smoothed difference}
\end{equation}
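The running average shown as a solid curve is simply the cumulative mean of the sampled differences, sketched as follows (in our computation the differences are sampled every $10^6$):

```python
def running_average(samples):
    """Cumulative mean of a sequence of sampled differences."""
    averages, total = [], 0.0
    for i, value in enumerate(samples, start=1):
        total += value
        averages.append(total / i)
    return averages

# running_average([1.0, 2.0, 3.0, 4.0]) -> [1.0, 1.5, 2.0, 2.5]
```

Averaging the sampled $\Delta_\pm(k,X)$ in this way is what produces the discrete approximation to the smoothed difference above.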
We make several observations concerning Figures~\ref{fig: fig3} and~\ref{fig:
fig4}. First, for $k=1$, the observed remainder term is seemingly of size
$X^{1/4+\epsilon}$, much smaller than Goldfeld and Hoffstein's proven bound of
$O(X^{19/32+\epsilon})$ for the first moment, and also smaller than the bound of
$O(X^{1/2+\epsilon})$ implied in their work for a smoothed first moment (for the
latter, see also Young~\cite{young1}). Furthermore, there appears to be a bias
in the average remainder term. This is especially apparent for $d<0$ and
further supported by our log log plots in Figure~\ref{fig: fig6}.
For $k=2$, the remainder term, even when averaged, fluctuates above and below
0. The largest average remainder for $k=2$ in our data set was of size roughly
$1.3\times10^4$, consistent with a conjectured remainder, for $k=2$, of size
$O(X^{1/2+\epsilon})$.
In Figures~\ref{fig: fig3} and~\ref{fig: fig4}, a bias of the kind predicted by
DGH can be seen. For $k\geq 3$ a noticeable bias is evident in the remainder
term, especially when averaged, and most prominently for $d>0$.
We elaborate on this last point further.
In the case of $k=3$, DGH predict a single main lower term of
the form $b X^{3/4}$ and, as described in the introduction, Zhang worked out
the value of $b$, separately for $d<0$ and $d>0$, as equal to $-.07$ and $-.14$
respectively.
One possible explanation for the fact that the bias appears more prominently
for $d>0$, even though we have less data in that case, is that the `noise'
appears numerically to be much larger for $d<0$. Comparing Figure~\ref{fig:
fig3} with Figure~\ref{fig: fig4}, we notice that the plotted remainder terms
seem to be about ten times larger for $d<0$ as compared to $d>0$, even when restricted
to $|d|<1.3\times 10^{10}$. A larger amount of `noise' in the remainder term
makes it harder to detect a lower order term hiding within the noise,
especially when the the lower terms are married to such small constant factors:
$-.07 X^{3/4}$ and $-.14 X^{3/4}$, as predicted by Zhang, before averaging, and
$4/7$ as large after averaging over $X$.
Thus, even though one expects, in the long run, to see an $X^{3/4}$ term
dominate over noise of size $X^{1/2+\epsilon}$, in the ranges examined it seems that
the noise has a large impact.
Another factor possibly affecting the poorer quality of the lower term detected
when $d<0$ is that the predicted constant factor is about one half as large:
$-.07$ for $d<0$, compared to $-.14$ for $d>0$. Combined with noise that is ten
times bigger, it is not surprising that the quality of the average remainder
term for $d<0$ seems to be more affected by the noise.
In Figure \ref{fig: fig5} we redisplay the $k=3$ plots from Figures~\ref{fig:
fig3} and~\ref{fig: fig4}, zoomed in to allow one to see the average
remainder term in greater detail. Here we also depict the prediction of Zhang
(dashed line). More precisely, the dashed line represents the average predicted
lower term:
\begin{equation}
\frac{1}{X}\int_0^X b_\pm t^{3/4} dt = \frac{4}{7} b_\pm X^{3/4},
\end{equation}
with $b_-=-.07$ for $d<0$ and $b_+=-.14$ for $d>0$.
For $d>0$, the fit of the average remainder term against Zhang's prediction
is very nice. For $d<0$, the plot supports a bias in the sense that the average value
is mainly negative, but the fit against $-.07 \frac{4}{7} X^{3/4}$ is far
from conclusive.
Plots of the average remainder term on a log log scale are shown in
Figure~\ref{fig: fig6}, for $1\leq k \leq 4$, and positive/negative $d$.
On a log log scale, a function of $X$ of the form $f(X) = B X^{3/4}$
is transformed into the function of $u=\log{X}$ given by
$\log{B} + 3/4 u$, i.e. a straight line with slope $3/4$. We compare, in the
$k=3$ plots, the log log plot of the average remainder against Zhang's
prediction. The fit is especially nice for $d>0$, but only in crude qualitative
terms for $d<0$, consistent with the observations made concerning the poorer
fit in the $k=3$, $d<0$, plots of Figures~\ref{fig: fig3}--\ref{fig: fig5}.
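A straightforward way to quantify this comparison is to fit the slope of the log log plot by least squares; a term of the form $B X^{3/4}$ should give a slope near $3/4$. A sketch on synthetic data of the conjectured shape:

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log|y| against log x."""
    u = [math.log(x) for x in xs]
    v = [math.log(abs(y)) for y in ys]
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / sum(
        (a - mu) ** 2 for a in u
    )

# Synthetic lower-order term of the conjectured shape B * X^(3/4):
xs = [1e7 * i for i in range(1, 101)]
ys = [-0.08 * x ** 0.75 for x in xs]
# loglog_slope(xs, ys) recovers 0.75 up to floating-point error
```

On the real data the fitted slope is noisier, since the remainder term of size $X^{1/2+\epsilon}$ perturbs the log log line.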
For $k \geq 4$, DGH predict infinitely many lower terms, with
the largest one of size $X^{3/4}$ times a power of $\log{X}$
which they did not make explicit.
Interestingly, a bias in support of this does appear evident,
especially for $k=4$ and $d>0$. In that log log plot, the remainder term does
seem reasonably straight, suggesting a lower order term obeying a power law,
perhaps with some additional powers of log.
One might reasonably object that the observed biases are due to
a persistent small error in the calculation of the moment polynomials or in the
values of $L(1/2,\chi_d)$. To alleviate such concerns, the
computations yielding our numerics were executed again, in a limited way, using
higher precision. As anticipated, the higher precision results remained
consistent with the initial results, reducing the possibility of such an
error. Furthermore, the overall excellent agreement of the computed moments
with the predicted asymptotic formula of CFKRS supports the correctness
of the computation.
\subsection{Tables and Figures}
\begin{table}[H]
\centerline{\fontsize{11pt}{12pt}\selectfont
\begin{tabular}{|c|c|c|c|c|}
\hline
$k$ & $\sum_{d\in D_-(X)} L(1/2,\chi_d)^k$ & $\frac{3}{\pi^2} \int_1^X Q_-(k,\log{t}) dt$ & $R_-(k,X)$ & $\Delta_-(k,X)$ \\ \hline
1 & 25458527125.376 & 25458526443.085 & 1.0000000268001 & 682.291 \\
1 & 52401254983.398 & 52401252573.351 & 1.0000000459922 & 2410.047 \\
1 & 79904180421.746 & 79904180600.902 & .99999999775786 & -179.156 \\
1 & 107770905413.09 & 107770904521.07 & 1.0000000082770 & 892.02 \\
1 & 135908144579.9 & 135908144595.65 & .99999999988411 & -15.75 \\
\hline
2 & 695798091128.96 & 695797942880.62 & 1.0000002130623 & 148248.34 \\
2 & 1505736931971.7 & 1505736615082.0 & 1.0000002104549 & 316889.7 \\
2 & 2362905062077.2 & 2362905209666.9 & .99999993753888 & -147589.7 \\
2 & 3251727763805.6 & 3251727486319.2 & 1.0000000853351 & 277486.4 \\
2 & 4164586513531.5 & 4164586544704.8 & .99999999251467 & -31173.3 \\
\hline
3 & 35923488939396. & 35923434720074. & 1.0000015093023 & 54219322. \\
3 & 82792501873632. & 82792433101707. & 1.0000008306547 & 68771925. \\
3 & .13470723693602e15 & .13470723096090e15 & 1.0000000443563 & 5975116 \\
3 & .19013982678941e15 & .19013979175101e15 & 1.0000001842770 & 35038394 \\
3 & .24831500039182e15 & .24831501538879e15 & .99999993960505 & -14996973 \\
\hline
4 & .26221677201508e16 & .26221542614856e16 & 1.0000051326749 & 13458665240 \\
4 & .64846065425230e16 & .64845918799277e16 & 1.0000022611439 & 14662595290 \\
4 & .10987196470794e17 & .10987187884822e17 & 1.0000007814531 & .8585972e10 \\
4 & .15956123181403e17 & .15956125546013e17 & .99999985180550 & -.2364610e10 \\
4 & .21299535514803e17 & .21299540911015e17 & .99999974665125 & -.5396212e10 \\
\hline
5 & .23541937472178e18 & .23541622006477e18 & 1.0000134003384 & .315465701e13 \\
5 & .62771726711464e18 & .62771414322685e18 & 1.0000049766089 & .312388779e13 \\
5 & .11106890853615e19 & .11106862772711e19 & 1.0000025282480 & .28080904e13 \\
5 & .16628632428499e19 & .16628683849741e19 & .99999690767817 & -.51421242e13 \\
5 & .22724025077610e19 & .22724048423231e19 & .99999897264693 & -.23345621e13 \\
\hline
6 & .24225487162243e20 & .24224780818937e20 & 1.0000291578822 & .706343306e15 \\
6 & .69880224640908e20 & .69879554487455e20 & 1.0000095901220 & .670153453e15 \\
6 & .12937968210632e21 & .12937887586288e21 & 1.0000062316467 & .80624344e15 \\
6 & .19996752978479e21 & .19997013306315e21 & .99998698166411 & -.260327836e16 \\
6 & .28005925088677e21 & .28006019455853e21 & .99999663046810 & -.94367176e15 \\
\hline
7 & .274712571777e22 & .274697762672e22 & 1.00005391054 & .14809105e18 \\
7 & .859431066562e22 & .859415893116e22 & 1.00001765553 & .15173446e18 \\
7 & .166743403869e23 & .166740957095e23 & 1.00001467410 & .2446774e18 \\
7 & .266330275024e23 & .266339641978e23 & .999964830793 & -.9366954e18 \\
7 & .382588166641e23 & .382591322018e23 & .999991752617 & -.3155377e18 \\
\hline
8 & .3351697755e24 & .3351406841e24 & 1.000086804 & .290914e20 \\
8 & .1139465805e25 & .1139429048e25 & 1.000032259 & .36757e20 \\
8 & .2319359069e25 & .2319282301e25 & 1.000033100 & .76768e20 \\
8 & .3831454627e25 & .3831738559e25 & .9999259000 & -.283932e21 \\
8 & .5649093016e25 & .5649183210e25 & .9999840342 & -.90194e20 \\
\hline
\end{tabular}
}
\caption[Moments of $L(1/2,\chi_d)$ versus the CFKRS conjectured asymptotics,
$d<0$]{Moments $\sum_{d\in D_-(X)} L(1/2,\chi_d)^k$ versus CFKRS' $\frac{3}{\pi^2}
\int_1^X Q_-(k,\log{t}) dt$, for $k=1,\ldots,8$ and $d<0$. Five values for
each $k$ are shown, at $X = 10^{10},2\times 10^{10}, \ldots, 5\times
10^{10}$.}\label{tab:Lhalfchin}
\end{table}
\begin{table}[H]
\centerline{\fontsize{11pt}{12pt}\selectfont
\begin{tabular}{|c|c|c|c|c|}
\hline
$k$ & $\sum_{d\in D_+(X)} L(1/2,\chi_d)^k$ & $\frac{3}{\pi^2} \int_1^X Q_+(k,\log{t}) dt$ & $R_+(k,X)$ & $\Delta_+(k,X)$ \\ \hline
1 & 4074391863.4447 & 4074392042.9388 & .99999995594580 & -179.4941 \\
1 & 8445624718.0243 & 8445624023.3138 & 1.0000000822569 & 694.7105 \\
1 & 12928896894.590 & 12928896383.146 & 1.0000000395582 & 511.444 \\
1 & 17484928279.579 & 17484927921.500 & 1.0000000204793 & 358.079 \\
1 & 22095062063.114 & 22095062690.738 & .99999997159438 & -627.624 \\
\hline
2 & 76310075816.466 & 76310057832.320 & 1.0000002356720 & 17984.146 \\
2 & 168051689378.93 & 168051603484.03 & 1.0000005111222 & 85894.90 \\
2 & 266303938917.29 & 266303916920.62 & 1.0000000825999 & 21996.67 \\
2 & 368948427173.22 & 368948308826.37 & 1.0000003207681 & 118346.85 \\
2 & 474942139636.16 & 474942177549.68 & .99999992017235 & -37913.52 \\
\hline
3 & 2478393690176.2 & 2478391641054.5 & 1.0000008267950 & 2049121.7 \\
3 & 5878735240405.9 & 5878729153410.4 & 1.0000010354271 & 6086995.5 \\
3 & 9720154390088.4 & 9720158187579.5 & .99999960931797 & -3797491.1 \\
3 & 13873264940982. & 13873252832529. & 1.0000008727912 & 12108453. \\
3 & 18271480140004. & 18271496263135. & .99999911758015 & -16123131. \\
\hline
4 & .10868425484737e15 & .10868409751016e15 & 1.0000014476562 & 157337203 \\
4 & .27974980520169e15 & .27974915668497e15 & 1.0000023182079 & 648516719 \\
4 & .48473276073219e15 & .48473329605692e15 & .99999889563038 & -535324726 \\
4 & .71493167429315e15 & .71492961664246e15 & 1.0000028781164 & 2057650687 \\
4 & .96564046289913e15 & .96564334647659e15 & .99999701382764 & -2883577466 \\
\hline
5 & .57022430562904e16 & .57022322406897e16 & 1.0000018967310 & 10815600670 \\
5 & .15999737676260e17 & .15999653478756e17 & 1.0000052624580 & .84197504e11 \\
5 & .29130430291967e17 & .29130495012249e17 & .99999777826357 & -.64720282e11 \\
5 & .44482716417300e17 & .44482376920928e17 & 1.0000076321545 & .339496372e12 \\
5 & .61707290890367e17 & .61707708869778e17 & .99999322646362 & -.417979411e12 \\
\hline
6 & .33658290814098e18 & .33658163201404e18 & 1.0000037914337 & .127612694e13 \\
6 & .10326933113376e19 & .10326816848898e19 & 1.0000112585010 & .116264478e14 \\
6 & .19792425806612e19 & .19792515491256e19 & .99999546875969 & -.89684644e13 \\
6 & .31332379844474e19 & .31331890401641e19 & 1.0000156212353 & .489442833e14 \\
6 & .44685941512069e19 & .44686487402482e19 & .99998778399367 & -.545890413e14 \\
\hline
7 & .215991539086e20 & .215989246213e20 & 1.00001061568 & .2292873e15 \\
7 & .726312167992e20 & .726295668031e20 & 1.00002271797 & .16499961e16 \\
7 & .146733199900e21 & .146734533114e21 & .999990914109 & -.1333214e16 \\
7 & .241042340834e21 & .241036160843e21 & 1.00002563927 & .6179991e16 \\
7 & .353694078736e21 & .353700808054e21 & .999980974547 & -.6729318e16 \\
\hline
8 & .1475899774e22 & .1475859642e22 & 1.000027192 & .40132e17 \\
8 & .5449090667e22 & .5448853612e22 & 1.000043505 & .237055e18 \\
8 & .1161602962e23 & .1161622793e23 & .9999829282 & -.19831e18 \\
8 & .1981618159e23 & .1981550523e23 & 1.000034133 & .67636e18 \\
8 & .2993403001e23 & .2993484649e23 & .9999727248 & -.81648e18 \\
\hline
\end{tabular}
}
\caption[Moments of $L(1/2,\chi_d)$ versus conjectured asymptotics,
$d>0$]{Moments $\sum_{d\in D_+(X)} L(1/2,\chi_d)^k$ versus $\frac{3}{\pi^2}$
\int_1^X Q_+(k,\log{t}) dt$, for $k=1,\ldots,8$ and $d>0$. Five values for
each $k$ are shown, $X = 2\times 10^9, 4\times 10^9, \ldots, 10^{10}$.}
\label{tab:Lhalfchip}
\end{table}
\renewcommand{\thefigure}{\arabic{figure}}
\newpage
\thispagestyle{empty}
\begin{figure}[H]
\centerline{
\includegraphics[width=.48\textwidth,height=2in]{quadratic_reality_vs_conj_1_neg_d.eps}
\includegraphics[width=.48\textwidth,height=2in]{quadratic_reality_vs_conj_2_neg_d.eps}
}
\centerline{
\includegraphics[width=.48\textwidth,height=2in]{quadratic_reality_vs_conj_3_neg_d.eps}
\includegraphics[width=.48\textwidth,height=2in]{quadratic_reality_vs_conj_4_neg_d.eps}
}
\centerline{
\includegraphics[width=.48\textwidth,height=2in]{quadratic_reality_vs_conj_5_neg_d.eps}
\includegraphics[width=.48\textwidth,height=2in]{quadratic_reality_vs_conj_6_neg_d.eps}
}
\centerline{
\includegraphics[width=.48\textwidth,height=2in]{quadratic_reality_vs_conj_7_neg_d.eps}
\includegraphics[width=.48\textwidth,height=2in]{quadratic_reality_vs_conj_8_neg_d.eps}
}
\caption[Plot of the ratio $R_-(k,X)$ for $k=1,\ldots, 8$]{These
plots depict the ratio $R_-(k,X)$ of the numerically computed moments compared to
the CFKRS predictions, for $k=1,\ldots, 8$ and $d<0$, sampled every $10^7$, i.e.
at $X = 10^7, 2\times 10^7, \ldots, 5\times 10^{10}$. The horizontal axis is
$X$, the vertical axis is the ratio $R_-(k,X)$.}\label{fig: fig1}
\end{figure}
\newpage
\thispagestyle{empty}
\begin{figure}[H]
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_reality_vs_conj_1_pos_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_reality_vs_conj_2_pos_d.eps}
}
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_reality_vs_conj_3_pos_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_reality_vs_conj_4_pos_d.eps}
}
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_reality_vs_conj_5_pos_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_reality_vs_conj_6_pos_d.eps}
}
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_reality_vs_conj_7_pos_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_reality_vs_conj_8_pos_d.eps}
}
\caption[Plot of the ratio $R_+(k,X)$ for $k=1,\ldots, 8$]{These
plots depict the ratio $R_+(k,X)$ of the numerically computed moments compared to
the CFKRS predictions, for $k=1,\ldots, 8$ and $d>0$, sampled every $10^7$, i.e.
at $X = 10^7, 2\times 10^7, \ldots, 1.3\times 10^{10}$. The horizontal axis is
$X$, the vertical axis is the ratio $R_+(k,X)$.}\label{fig: fig2}
\end{figure}
\newpage
\thispagestyle{empty}
\begin{figure}[H]
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_1_neg_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_2_neg_d.eps}
}
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_3_neg_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_4_neg_d.eps}
}
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_5_neg_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_6_neg_d.eps}
}
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_7_neg_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_8_neg_d.eps}
}
\caption[Plot of the difference $\Delta_-(k,X)$ for $k=1,\ldots, 8$]{These
plots depict the difference $\Delta_-(k,X)$ between the numerically computed moments and
the CFKRS prediction, for $k=1,\ldots, 8$ and $d<0$,
sampled at $X = 10^7, 2\times 10^7, \ldots, 5\times 10^{10}$. The horizontal axis
is $X$, the vertical axis is the difference $\Delta_-(k,X)$, and the solid curve
is the mean up to $X$ of the plotted differences (see the discussion above).}\label{fig: fig3}
\end{figure}
\newpage
\thispagestyle{empty}
\begin{figure}[H]
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_1_pos_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_2_pos_d.eps}
}
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_3_pos_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_4_pos_d.eps}
}
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_5_pos_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_6_pos_d.eps}
}
\centerline{
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_7_pos_d.eps}
\includegraphics[width=0.48\textwidth,height=2in]{quadratic_diff_reality_vs_conj_8_pos_d.eps}
}
\caption[Plot of the difference $\Delta_+(k,X)$ for $k=1,\ldots, 8$]{These
plots depict the difference $\Delta_+(k,X)$ between the numerically computed moments and
the CFKRS prediction, for $k=1,\ldots, 8$ and $d>0$,
sampled at $X = 10^7, 2\times 10^7, \ldots, 1.3\times 10^{10}$. The horizontal axis
is $X$, the vertical axis is the difference $\Delta_+(k,X)$, and the solid curve
is the mean up to $X$ of the plotted differences (see the discussion above).}\label{fig: fig4}
\end{figure}
\newpage
\begin{figure}[H]
\centerline{
\includegraphics[width =1\textwidth,height=3.7in]{quadratic_diff_reality_vs_conj_3zoom_neg_d.eps}
}
\centerline{
\includegraphics[width=1\textwidth,height=3.7in]{quadratic_diff_reality_vs_conj_3zoom_pos_d.eps}
}
\caption[Plot of $\Delta_\pm(3,X)$]{This graph depicts the remainder term for
the third moment for $d<0$ (top) and $d>0$ (bottom), i.e. $\Delta_-(3,X)$ and $\Delta_+(3,X)$.
The solid line is the average remainder, and the dashed line is Zhang's prediction.
See the discussion above concerning the quality of the fit.
}\label{fig: fig5}
\end{figure}
\newpage
\thispagestyle{empty}
\begin{figure}[H]
\centerline{
\includegraphics[width=.48\textwidth,height=2in]{quadratic_log_log_abs_mean_1_pos_d.eps}
\includegraphics[width=.48\textwidth,height=2in]{quadratic_log_log_abs_mean_2_pos_d.eps}
}
\centerline{
\includegraphics[width=.48\textwidth,height=2in]{quadratic_log_log_abs_mean_3_pos_d.eps}
\includegraphics[width=.48\textwidth,height=2in]{quadratic_log_log_abs_mean_4_pos_d.eps}
}
\centerline{
\includegraphics[width=.48\textwidth,height=2in]{quadratic_log_log_abs_mean_1_neg_d.eps}
\includegraphics[width=.48\textwidth,height=2in]{quadratic_log_log_abs_mean_2_neg_d.eps}
}
\centerline{
\includegraphics[width=.48\textwidth,height=2in]{quadratic_log_log_abs_mean_3_neg_d.eps}
\includegraphics[width=.48\textwidth,height=2in]{quadratic_log_log_abs_mean_4_neg_d.eps}
}
\caption[Log-log plot of the average remainder]{Plots, on a log-log scale, of
the absolute value of the average remainder term depicted in Figures 3--4,
for $1 \leq k \leq 4$, $d>0$ (top four plots), and $d<0$ (bottom four plots).
For the 3rd moment we compare to Zhang's predictions of $.14 \frac{4}{7} x^{3/4}$
(3rd plot) and $.07 \frac{4}{7} x^{3/4}$ (7th plot).
For the 1st moment, $d<0$, there seems to be a bias of size roughly $x^{1/4}$.
}\label{fig: fig6} \end{figure}
\newpage
\section{Our Computational Formulae}
The computations for the moments of $L(1/2,\chi_d)$
hinge on the efficient computation of $L(1/2,\chi_d)$ itself for many discriminants $d$.
This computation is split into two cases according to whether $d$ is positive
or negative. In the former case we calculate $L(1/2,\chi_d)$ using a
smooth approximate functional equation for $L(s,\chi_d)$, which is
representable in terms of the incomplete gamma function. In the latter case,
we consider the Dedekind zeta function for the associated quadratic field, and reduce the
computation of $L(1/2,\chi_d)$ to a sum over binary quadratic forms, and the
$K$-Bessel expansions of their Epstein zeta functions as determined by Chowla and
Selberg \cite{cs: epzeta}.
Testing the conjectures described in the introduction also involves numerical
values for the coefficients of the polynomials $Q_\pm(k,x)$. We reran
the program used in~\cite{cfkrs: intmom} on a faster machine and for
a longer amount of time to get slightly more accurate coefficients
for these polynomials.
\subsection{Computational Formula for $L(1/2,\chi_d)$, $d<0$}
Let
$$
\zeta_{\mb{Q}({\sqrt{D}})}(s)=\zeta(s)\ L(s,\chi_{d}) \label{eq: algDede}
$$
be the Dedekind zeta function of the quadratic number field $\mathbb Q({\sqrt{D}})$,
and $h(d)$ the corresponding class number.
Let $a_j m^2 + b_j mn + c_j n^2$, $j=1,\dots,h(d),$
be representatives for the $h(d)$ equivalence classes of primitive positive
definite binary quadratic forms of discriminant $b_j^2- 4a_j c_j=d<0$
\cite{land: elnumthe}.
Dirichlet proved (see also \cite{hdaven: mnumth}, Chapter 6) that:
\begin{equation}
\label{eq:Dirichlet}
\zeta_{\mb{Q}(\sqrt{D})}(s)=\frac{1}{\omega}\sum_{j=1}^{h(d)}\ \sum{}^{'} \biggl(a_j m^2 +
b_j mn + c_j n^2 \biggr)^{-s}, \qquad \Re{s} > 1,
\end{equation}
where
$$
\omega=\begin{cases}
2 & d<-4, \\
4 & d=-4, \\
6 & d =-3,
\end{cases}
$$
\noindent and where $\sum{}^{'}$ denotes the sum over all pairs
$(m,n) \in \mathbb Z^2$, $(m,n) \ne (0,0)$.
Chowla and Selberg \cite{cs: epzeta} obtained the meromorphic
continuation of the Epstein zeta function
$$
Z(s):=\sum{}^{'} (am^2 + bmn + cn^2)^{-s}, \qquad \Re{s}>1,
$$
with $d=b^2-4ac<0$, $a,c>0$, by giving an expansion for $Z(s)$
as a series of $K$-Bessel functions. Specifically, they proved that
\begin{equation}
\label{eq:Z}
Z(s)=2\zeta(2s)a^{-s} + \frac{2a^{s-1}\ \sqrt\pi}{\Gamma(s) \Bigl(|d|^{1/2}/2 \Bigr)^{2s-1}}
\ \zeta (2s-1)\Gamma (s - 1/2) + B(s)
\end{equation}
where
\begin{align}
B(s)&= \frac{8 \pi^s 2^{s-1/2}}
{a^{1/2}\Gamma(s)\ |d|^{\frac{2s-1}{4}}} \sum_{n=1}^\infty
n^{s-1/2}
\ \sigma_{1-2s}(n)\cos\biggl(\frac{n \pi b}{a}\biggr)K_{s-1/2}
\biggl(\frac{\pi n \ |d|^{1/2}}{a}\biggr), \\
\sigma_\omega(n) &= \sum_{m|n} m^\omega,
\end{align}
and
$$
K_\omega(z)= \frac{1}{2}\int_0^\infty \exp\biggl(-\frac{z}{2}(y+ 1/y)\biggr)\, y^{\omega-1}\, dy, \qquad \Re{z} >0.
$$
$K_{s-1/2}(x)$ decreases exponentially fast as $x \to \infty$,
uniformly for $s$ in compact sets, and
the above expansion gives $Z(s)$ as an analytic function throughout $\mb{C}$ except
for a simple pole at $s=1$. Note that the poles at $s=1/2$ in \eqref{eq:Z} of
the terms with $\zeta(2s)$ and $\Gamma(s-1/2)$ cancel out.
Specializing the above formula to $s = 1/2$ gives
\begin{align}
Z(1/2) &= \fr{2}{a^{\fr{1}{2}}}\pr{\gamma +
\log\pr{\fr{\lr{d}^{\fr{1}{2}}}{8\pi a}}} \notag\\ &+
\fr{8}{a^{\fr{1}{2}}}\sum_{n\geq 1} \sigma_0(n) \cos\pr{\fr{\pi n b}{a}}
K_0\pr{\fr{\pi n\lr{d}^{\fr{1}{2}}}{a}}. \label{eq: expansion}
\end{align}
Substituting this into \eqref{eq:Dirichlet}, for a given set of representative
quadratic forms, one for each equivalence class, yields a formula
that can be used to numerically compute $L(1/2,\chi_d)$.
An explicit bound on $K_0(x)$ can be obtained as follows.
\begin{equation}
K_0(x)=\int_1^\infty \ \exp\biggl(-\frac{x}2 (y+1/y)\biggr) {\frac{dy}{y}}
\end{equation}
so that
\begin{equation}
\label{eq:K-Bessel decay}
|K_0(x)| < {\sqrt{\frac{\pi}{2x}}e^{-x}}.
\end{equation}
The last inequality can be seen by writing $y+1/y = (y^{1/2}-y^{-1/2})^2+2$,
changing variables $u=x^{1/2}(y^{1/2}-y^{-1/2})$, so that
$dy/y =x^{-1/2} 2 du/(y^{1/2}+y^{-1/2})$,
and using, from the AGM inequality, $y^{1/2}+y^{-1/2} \geq 2 $.
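This bound is easy to check numerically. The following sketch (our illustration, not part of the original program) assumes a C++17 standard library providing the mathematical special functions, as GCC's libstdc++ does:

```cpp
#include <cmath>

// Upper bound from \eqref{eq:K-Bessel decay}: |K_0(x)| < sqrt(pi/(2x)) * exp(-x), x > 0.
double k0_bound(double x) {
    const double pi = 3.141592653589793;
    return std::sqrt(pi / (2.0 * x)) * std::exp(-x);
}

// K_0(x) itself, via the C++17 special math functions (supported by GCC's libstdc++).
double k0(double x) {
    return std::cyl_bessel_k(0.0, x);
}
```

The bound is sharp as $x \to \infty$, since $K_0(x) \sim \sqrt{\pi/(2x)}\, e^{-x}$.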
Next, we discuss implementation issues and complexity arising from this formula.
Lagrange proved that each of the equivalence classes of primitive positive definite
forms of discriminant $d$ contains exactly one form $ax^2+bxy+cy^2$ for which
$-a < b \leq a < c$ or $0 \leq b \leq a=c$. Roughly, this is the set
$ 0 \leq |b| \leq a \leq c$, with some exceptions.
Furthermore, $a<(|d|/3)^{1/2}$. Recall that in this context primitive means that
$\gcd(a,b,c)=1$.
Therefore, let $A(X)$ denote the set of triples:
\begin{equation}
A(X):= \{(a,b,c) \in \mb{Z}^3 | 4ac-b^2\leq X, \text{$-a < b \leq a < c$ or $0 \leq b \leq a=c$} \}
\label{eq: A}
\end{equation}
and $A'(X)$ the set of primitive triples in $A(X)$:
\begin{equation}
A'(X):= \{(a,b,c) \in A(X) | \gcd(a,b,c)=1 \}.
\label{eq: A'}
\end{equation}
Our first step was to distribute the computation across several processors,
each one handling a range of discriminants, in order
to speed up the computation and also to reduce the memory requirements per processor.
Therefore, suppose $0 < -d \leq X$ for some positive integer $X$,
and let $\Delta X$ be a positive integer dividing $X$.
We partitioned the interval into blocks, $X_{i-1} < \lr{d} \leq X_i$, of equal length
$\Delta X = X_i - X_{i-1}$ for $i=1,2,\ldots,m$ (so $X_m := X$).
We then looped through all integers $(a,b,c)\in A'(X)$,
with corresponding $|d|$ lying in $(X_{i-1},X_i]$, i.e. satisfying
the following properties:
\begin{equation}
\label{eq: loops}
0< a \leq \sqrt{\fr{X_i}{3}}, \qquad 0 \leq |b| \leq a
\leq c, \qquad \fr{b^2 + X_{i-1}}{4a} < c \leq \fr{b^2 + X_i}{4a},
\end{equation}
(taking care to discard the endpoint terms above that are not in
$A'(X)$, for example terms with $-b=a$).
We computed $d = b^2-4ac$ and updated
$L(1/2,\chi_d)$, stored in an array, using \eqref{eq: expansion} to calculate
the corresponding contribution to~\eqref{eq:Dirichlet} from the triple $(a,b,c)$.
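The loop structure \eqref{eq: loops} can be sketched as follows (our illustration in C++17; for simplicity it merely counts the triples of $A'(X)$ over a single block $0 < |d| \leq X$, whereas the actual program accumulates the K-Bessel contribution \eqref{eq: expansion} of each triple into an array of $L$-values):

```cpp
#include <numeric>  // std::gcd (C++17)

// Count the reduced primitive triples (a,b,c) of \eqref{eq: A'}:
// |d| = 4ac - b^2 <= X, with -a < b <= a < c or 0 <= b <= a = c.
long long count_primitive_triples(long long X) {
    long long count = 0;
    for (long long a = 1; 3 * a * a <= X; ++a) {   // a <= sqrt(X/3)
        for (long long b = -a + 1; b <= a; ++b) {
            long long c0 = (b >= 0) ? a : a + 1;   // c = a requires b >= 0
            for (long long c = c0; 4 * a * c - b * b <= X; ++c)
                if (std::gcd(std::gcd(a, b), c) == 1)   // primitivity test
                    ++count;                       // here d = b*b - 4*a*c
        }
    }
    return count;
}
```

By \eqref{eq: A' asympt}, the count grows like $\frac{\pi}{18\zeta(3)} X^{3/2}$.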
Combining $a<(|d|/3)^{1/2}$ with the exponential decay
of $K_0(x)$ in \eqref{eq:K-Bessel decay} shows that
few terms are needed to compute~\eqref{eq: expansion}
to given precision.
For example, the terms in~\eqref{eq: expansion}
with $n\geq 7$ contribute, in absolute value, less than $10^{-15}$
to the sum, and can be ignored. Smaller $a$ require even fewer terms.
Because we are evaluating $K_0(x)$ for a limited range of values
and to machine double precision (15-16 digits), we used a precomputed
table of the first five terms
of several thousand Taylor series expansions:
\begin{equation}
\label{eq:Ktaylor}
K_0(x) \approx K_0(x_j) + K_0^{(1)}(x_j) (x-x_j)+ \cdots + (K_0^{(4)}(x_j)/4!) (x-x_j)^4,
\end{equation}
each one centred on a point $x_j$ of the form $x_j=j/200$ with
$5< x_j <37$, $j \in \mb{Z}$. The interval $[5,37]$ was used because, on one end,
$\pi \sqrt{3} > 5$, the lhs being a lower bound for the smallest possible $x$
for which we would need to evaluate $K_0(x)$. We chose $37$ for the other end of
the interval because $\exp(-37)<10^{-16}$, i.e. smaller than our desired precision.
While our code was written in {\tt C++},
the precomputation of these Taylor expansions was carried out in Maple, with the coefficients
stored in a file that could be read into our {\tt C++} program. Our {\tt C++} code
was compiled with the GNU compiler {\tt GCC}.
Only five terms were computed and stored because our Taylor expansions were
always applied with $|x-x_j|<1/400$. Combined with the exponential decay
of the Taylor coefficients (as a function of $x$), at most 5 terms,
and often fewer, were needed to evaluate the sum to within $10^{-15}$.
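The following sketch (ours; recall that the paper's table was precomputed in Maple) builds such a table using the C++17 special math functions, obtaining the higher derivatives of $K_0$ at each centre from the differential equation $x y'' + y' - x y = 0$ satisfied by $K_0$, together with $K_0' = -K_1$:

```cpp
#include <array>
#include <cmath>
#include <vector>

// Degree-4 Taylor model of K_0 about x0; c[k] = K_0^{(k)}(x0) / k!.
struct K0Taylor {
    double x0;
    std::array<double, 5> c;
};

K0Taylor make_entry(double x0) {
    double y0 = std::cyl_bessel_k(0.0, x0);    // K_0(x0), C++17
    double y1 = -std::cyl_bessel_k(1.0, x0);   // K_0'(x0) = -K_1(x0)
    double y2 = y0 - y1 / x0;                  // from x y'' + y' - x y = 0
    double y3 = y1 - y2 / x0 + y1 / (x0 * x0); // differentiate the ODE once
    double y4 = y2 - y3 / x0 + 2 * y2 / (x0 * x0) - 2 * y1 / (x0 * x0 * x0);
    return {x0, {y0, y1, y2 / 2.0, y3 / 6.0, y4 / 24.0}};
}

// Table centred at x_j = j/200 on [5,37], as in the text.
std::vector<K0Taylor> build_table() {
    std::vector<K0Taylor> t;
    for (int j = 1000; j <= 7400; ++j) t.push_back(make_entry(j / 200.0));
    return t;
}

double k0_from_table(const std::vector<K0Taylor>& t, double x) {
    int j = (int)std::lround(x * 200.0) - 1000;  // nearest centre, |x - x_j| <= 1/400
    const K0Taylor& e = t[(size_t)j];
    double h = x - e.x0, s = e.c[4];
    for (int k = 3; k >= 0; --k) s = s * h + e.c[k];  // Horner evaluation
    return s;
}
```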
We also make note of a few of the hacks that helped to increase the speed of our
program:
\begin{itemize}
\item For a given $a,b$, only one cosine needs to be computed. Indeed,
given $\cos(\pi b/a)$, we can compute $\cos(\pi nb/a)$,
for $n=1,2,\ldots, 7$, using standard trigonometric identities.
For instance, the double angle identity computes the expression for $n=2$.
\item To test for primitivity, we must check whether $\gcd(a,b,c)=1$.
If one computes $\gcd(a,b)$ outside the $c$ loop as previously
mentioned, then for a given $\gcd(a,b)$, we can use
$$\gcd(a,b,c) = \gcd(\gcd(a,b), \, c \mod \gcd(a,b)),$$
and thus compute, and then store within the $c$ loop,
at most one gcd per residue class mod $\gcd(a,b)$.
\item
When reading an array, the computer loads blocks of consecutive
bytes of the array from RAM into the CPU's cache where it can be accessed
quickly by the CPU.
On profiling, we found, after optimizing and streamlining
the bulk of our code, that a significant amount of time was being spent
accessing the array in which we were storing $L(1/2,\chi_d)$ so
as to increment it by the contribution from a given triple $(a,b,c)$.
The reason was that, as the inner $c$ loop increments by 1,
the value of $d=b^2-4ac$ changes by $4a$, i.e. often a large step.
Non-sequential array accesses are expensive timewise.
We were able to significantly
decrease the time spent on array accesses by anticipating the subsequent $d$'s, and
using {\tt GCC}'s `\_\_builtin\_prefetch' function to fetch the corresponding array
entry for $L(1/2,\chi_d)$ eight turns in advance; the value eight was determined
experimentally on the hardware that we used and for the range of $d$'s that we considered.
\item The computation of \eqref{eq: expansion} involves $\log(|d|)$. Therefore,
we precomputed these and stored them in an array, but were again faced with the same kind of
expensive memory accesses as for the values of $L(1/2,\chi_d)$. Rather than
prefetch these separately, we created a {\tt C++} struct to hold both $L(1/2,\chi_d)$ and
$\log(|d|)$ together. That way a single prefetch would load both at once.
\textbf{Remark.} On combining the last two hacks, the array access portion of our code sped up by a factor of 4,
and the overall running time of the program sped up by a factor of 2. These two hacks were
the last two implemented, and the speed up achieved indicates how expensive non-sequential
memory accesses can be, and how optimized the rest of our code was.
\item To avoid repeatedly checking whether the quantity $d=b^2-4ac$ is a fundamental
discriminant, we precomputed, for each block $X_{i-1} < \lr{d} \leq X_i$ whether
$d$ is a fundamental discriminant and stored that information in an array of boolean
variables. We essentially sieved for squarefree numbers and this
can be done, for each block of length $\Delta X$, in $O(\Delta X)$ steps,
because the sum of the reciprocals of the squares converges. Hence, doing so across
all blocks up to $X$ costs $O(X)$ arithmetic operations and array accesses.
No prefetching was used on this array as it did not seem to give a benefit, perhaps
because the array consists of single bit boolean variables, rather than the 64 bit doubles
of the $L$-value and $\log(|d|)$ arrays, and fits more easily within cache.
\item Since $\cos(x)$ is an even function and $b$ gets squared in the
discriminant equation $\lr{d} = 4ac - b^2$, we can group $\pm b$
together, when possible, and restrict our attention to non-negative $b$ values.
Only a relatively small subset of triples cannot be paired in this fashion, namely
when $a=b$, $b=0$, or $c=a$.
\item Terms such as $$\fr{2}{a^{\fr{1}{2}}}\pr{\gamma - \log\pr{8\pi a}}$$
appearing in the leading term of (\ref{eq: expansion}), depend solely on
$a$. As such, it is to our advantage to compute this, and all other terms
depending solely on $a$, outside the $b$ and $c$ loops. Similarly, we
compute expressions like $\gcd(a,b)$ outside the $c$ loop, and so on.
While this is standard,
we took it to a meticulous extreme to save on as many arithmetic operations as possible.
\end{itemize}
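The first two hacks can be sketched as follows (our illustration; the recurrence $\cos(n\theta)=2\cos(\theta)\cos((n-1)\theta)-\cos((n-2)\theta)$ is one standard choice of trigonometric identity):

```cpp
#include <cmath>
#include <numeric>  // std::gcd (C++17)

// cos(n*theta) for n = 0..7 from a single cosine, via the recurrence
// cos(n t) = 2 cos(t) cos((n-1) t) - cos((n-2) t).
void cos_multiples(double theta, double out[8]) {
    out[0] = 1.0;
    out[1] = std::cos(theta);  // the only trigonometric call
    for (int n = 2; n <= 7; ++n)
        out[n] = 2.0 * out[1] * out[n - 1] - out[n - 2];
}

// gcd(a,b,c) from a cached g_ab = gcd(a,b), computed outside the c loop:
// gcd(a,b,c) = gcd(gcd(a,b), c mod gcd(a,b)).
long long gcd3(long long g_ab, long long c) {
    return std::gcd(g_ab, c % g_ab);
}
```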
\subsection{Complexity for $d<0$}\label{sec-negcomplexity}
First observe that the number of candidate triples $(a,b,c)$ defined by~\eqref{eq: A}
that we must loop over, satisfies
\begin{equation}
|A(X)| \sim \frac{\pi}{18} X^{3/2}.
\label{eq: A asympt}
\end{equation}
See \cite{kuhleitner} for a discussion on this counting problem.
Furthermore, the number of triples that survive the condition $\gcd(a,b,c)=1$
is:
\begin{equation}
|A'(X)| \sim \frac{\pi}{18\zeta(3)} X^{3/2}.
\label{eq: A' asympt}
\end{equation}
The latter asymptotic formula was essentially stated by Gauss in his Disquisitiones
in connection with the sum of class numbers $|A'(X)| = \sum_{-X<d<0} h(d)$.
A proof, with a lower-order term and a bound on the remainder,
as well as further references can be found in \cite{ci}.
We can get a lower bound for the amount of computation required by
simply counting the number of triples $(a,b,c)$ that are considered. Furthermore,
because we are pairing together $\pm b$, the number of triples that survive the
gcd condition is roughly half of~\eqref{eq: A' asympt}, i.e.
\begin{equation}
\sim
\frac{\pi}{36\zeta(3)} X^{3/2}.
\label{eq: number of triples}
\end{equation}
The relatively small constant of $\pi/36 \zeta(3)$ helps to make this approach
very practical.
While the above asymptotic gives a lower bound on the number of operations
of our method
in computing all $L(1/2,\chi_d)$, for $0 < -d \leq X$, it ignores the amount of
work needed for each triple.
Most of the work involves: checking bounds on each loop,
testing whether $\gcd(a,b,c)=1$ and whether $d=b^2-4ac$ is a fundamental discriminant,
and carrying out simple arithmetic and array accesses
related to the evaluation of ~\eqref{eq: expansion}.
Each arithmetic operation can be done in polynomial time in the size of the
numbers involved (in fact, $\log(X)^{1+\epsilon}$ by using the FFT).
However, in the range of
discriminants we considered ($|d| < 5 \times 10^{10}$), the bit length is
quite small: 32 bit {\tt C++} ints
for $a,b,c$ sufficed and 64 bit long longs were used for discriminants.
We also used 64 bit machine doubles for the floating point arithmetic.
Therefore all arithmetic was carried out in hardware.
Recall that we are assuming that
$0 < -d \leq X$, and partitioning this interval as:
$$\underbrace{1,\ldots, \Dl{X}}_{\tx{Block 1}},
\underbrace{\Dl{X} + 1, \ldots, 2\Dl{X}}_{\tx{Block 2}}, \ldots,
\underbrace{(m-1)\Dl{X} + 1, \ldots, m\Dl{X}}_{\tx{Block m}},$$ where
$\Dl{X}$ is a positive integer assumed to divide $X$ (in practice one can
take $X$ and $\Delta X$ to be powers of, say, ten). The number of blocks equals
$X/\Dl{X}$, where, for reasons made clear below, we will eventually take $\Dl{X} \gg X^{1/2} \log(X)^2$.
As mentioned earlier,
we precomputed, via sieving, a table of fundamental discriminants
for each block of length $\Delta X$ using $O(\Delta X)$ arithmetic
operations and array accesses, hence $O(X)$ operations across all blocks,
i.e. a relatively small cost compared to $O(X^{3/2})$.
The number of terms needed in the $K_0$-Bessel expansion
is proportional to the number of digits of precision desired, and the number
of Taylor coefficients used in each Taylor series for $K_0(x)$ also depends on
the output precision. We worked with machine doubles and hence about 15-16
digits precision, and, as explained in the text near~\eqref{eq:Ktaylor}, we
used at most five terms in each Taylor series.
Next we consider the time used to carry out the $\gcd$ computations.
We described in the hacks listed above that, for each triple $(a,b,c)$, we computed
$\gcd(a,b,c)$ by first computing $\gcd(a,b)$ outside the
$c$-loop and then computing $\gcd(\gcd(a,b),c \mod \gcd(a,b))$ inside the
$c$-loop, with the latter calculation being performed at most once per residue
class modulo $\gcd(a,b)$, and then stored in the $c$-loop for subsequent use.
The number of $\gcd(a,b)$'s that are computed across all blocks is
\begin{equation}
\label{eq:gcd bound1}
\ll
X \sum_{m\leq \fr{X}{\Dl{X}}} \sum_{a \leq \sq{\fr{m\Dl{X}}{3}}} \sum_{0 \leq b \leq a} 1 \ll X^2/\Dl{X}.
\end{equation}
Next, we compute $\gcd(a,b,c) = \gcd\pr{\gcd(a,b),c \mod \gcd(a,b)}$,
storing these values, inside the $a,b$ loops, for each residue class
$\mod \gcd(a,b)$ so as to only compute one $\gcd$ per residue class.
Therefore, the number of additional $\gcd$'s required is
\begin{equation}
\ll \sum_{m\leq \fr{X}{\Dl{X}}} \sum_{a \leq \sq{\fr{m\Dl{X}}{3}}}\sum_{0 \leq b \leq a} \gcd(a,b). \label{eq: initialization}
\end{equation}
It is known that
\begin{equation}
\sum_{a\leq x} \sum_{0 \leq b \leq a} \gcd(a,b) \sim \fr{x^2\log x}{2\zeta(2)} \label{eq: gcd sum};
\end{equation}
see, for example, \cite{Bo} or \cite{Br}.
So, from (\ref{eq: initialization}) and the above asymptotic, we see that
the number of additional $\gcd$'s computations required across all blocks is
\begin{equation}
\label{eq: gcd bound 2}
\ll \sum_{m\leq \fr{X}{\Dl{X}}} m\Dl{X} \log\pr{m\Dl{X}} \ll \fr{X^2\log X}{\Dl{X}}.
\end{equation}
Therefore, combining with \eqref{eq:gcd bound1}, the total number of $\gcd$
calls is
\begin{equation}
\label{eq: gcd total bound}
O\pr{\fr{X^2\log X}{\Dl{X}}}.
\end{equation}
Furthermore each $\gcd(a,b)$ can be computed, using the Euclidean algorithm, in
$O\pr{\log(X)^2}$ bit operations, since the binary length of both $a$ and
$b$ is $O(\log{X})$, and so the total number of bit operations coming
from $\gcd$'s is $\ll X^2 \log(X)^3/\Delta X$.
Hence, we can make the overall time required for $\gcd$
evaluations an insignificant portion of the overall time by choosing
\begin{equation}
\Delta X \gg X^{1/2} \log(X)^2,
\end{equation}
as the number of bit operations for the remaining work (looping
through $a,b,c$ and $m$, and carrying out the required integer and
floating point arithmetic) is then $$O(X^{3/2} \log(X)^{1+\epsilon}).$$
The $X^{3/2}$ accounts for the overall number of triples $(a,b,c)$
considered, and the $\log(X)^{1+\epsilon}$ for the cost of arithmetic on
numbers of bit length $O(\log{X})$. The implied constant depends on the
number digits of precision desired for the $L$-values.
While the best choice might seem to be to take $\Delta X$ equal to $X$
so as to minimize the number of $\gcd$ calls,
this would come at a substantial price.
First, such a large $\Delta X$ would prevent us from simply distributing the
computation across several processors, each one handling one block at a time.
Second, the memory (RAM) requirements needed would be enormous.
There is also an advantage to having arrays that can fit entirely or significantly
within the CPU's cache, so as to avoid too many expensive memory fetches
from RAM, and, even with smaller $\Delta X$, there is a tradeoff between
minimizing calls to the Euclidean algorithm and memory accesses.
We determined a good choice of $\Delta X$ experimentally, since, in practice,
the big-Oh constants in the above estimates depend on the speed of
individual arithmetic and memory operations on given hardware and context
in which they are called.
Nonetheless, since the Euclidean algorithm is very simple, and the remaining work
associated with looping, computing the $K$-Bessel function, and updating
$L$-values involves a moderate number of arithmetic and memory operations, one
expects that the benefit should be felt sooner rather than later. Indeed, we
found that, in our range of $d$'s, a choice that eliminated the $\gcd$'s as a
bottleneck, while not paying too high of a cache size penalty, was
$\Delta X = 10^6$, i.e. blocksizes of one million.
\subsection{Computational Formula for $L(1/2,\chi_d)$, $d>0$}
De la Vall\'ee Poussin's proof of the functional equation for $L(s,\chi_d)$ imitates Riemann's
proof for his zeta function. It yields the analytic continuation of $L(s,\chi_d)$ and also the following formula,
an example of a `smoothed approximate functional equation', useful for its evaluation:
\begin{equation}
(d/\pi)^{s/2} \Gamma(s/2) L(s,\chi_d)
= \sum_{n=1}^{\infty}
\chi_d(n)( G(s/2, \pi n^2 /d) + G((1-s)/2, \pi n^2 / d ) )
\end{equation}
where $G(z,w)$ denotes the normalized incomplete gamma function
\begin{equation}
G(z,w) := \int_1^\infty x^{z-1} e^{-wx} dx
= w^{-z}\int_w^\infty x^{z-1}e^{-x} dx
= w^{-z}\Gamma(z,w), \quad \Re{w} > 0,
\label{eq:G(z,w)}
\end{equation}
with $\Gamma(z,w)$ the incomplete gamma function.
See for instance page 69 of \cite{hdaven: mnumth}, or Section 3.4.1 of \cite{rub: annumthe},
and use Gauss' formula for the Gauss sum, namely $\tau(\chi_d)=d^{1/2}$, when $d>0$.
Therefore, on specializing to $s=1/2$, we have
a smooth approximate functional equation for $L(1/2,\chi_d)$, namely
\begin{align}
L(1/2,\chi_d) = 2\pr{\fr{\pi}{d}}^{\fr{1}{4}}
\sum_{n\geq 1} \chi_d(n) \fr{G(1/4, n^2\pi/d)}{\Gamma(1/4)} =
2
\sum_{n\geq 1} \fr{\chi_d(n)}{\sqrt{n}} \frac{\Gamma(1/4, n^2\pi/d)}{\Gamma(1/4)},
\label{eq: pos-appformula}
\end{align}
valid for positive fundamental discriminants $d$.
To estimate the size of the terms being summed, first notice, from the definition, that
$G(1/4,w)>0$ for real $w$. Furthermore,
integrating by parts gives an upper bound:
\begin{equation}
G(1/4,w) = \frac{e^{-w}}{w}- \frac{3}{4w} \int_1^\infty e^{-wx} x^{-7/4} dx < \frac{e^{-w}}{w}.
\label{eq:G bound}
\end{equation}
This inequality tells us that the terms in~(\ref{eq: pos-appformula}) decrease exponentially fast
in the quantity $\pi n^2/d$, so that, roughly speaking, we need to truncate the sum when
$n$ is of size $d^{1/2}$ to achieve a small tail.
Let us consider this estimate more carefully. Set
\begin{equation}
f(t) = \fr{2}{\sqrt{t}} \fr{\Gamma\pr{1/4, t^2\pi/d}}{\Gamma\pr{1/4}}.
\label{eq:f(t)}
\end{equation}
In light of bound~\eqref{eq:G bound}, we have
\begin{equation}
|f(n)| < \frac{2}{\Gamma(1/4)} \left(\frac{d}{\pi}\right)^{3/4} \frac{e^{-\pi n^2/d}}{n^2}.
\label{eq:f bound}
\end{equation}
Let the number of working digits be labelled as `Digits'. Hence, for
\begin{equation}
\label{eq:n drop}
n> \sqrt{\fr{d}{\pi}\log(10) \cdot \tx{Digits}}
\end{equation}
we generously have
\begin{equation}
f(n) < 10^{-\tx{Digits}}.
\end{equation}
Furthermore, notice that
the terms start off, for smaller $n$ and large $d$, with
$$\frac{\Gamma(1/4, n^2\pi/d)}{\Gamma(1/4)} \sim 1.$$ Therefore, it does not make
sense to sum the terms beyond~\eqref{eq:n drop}, as those terms are
lost to numerical imprecision.
We must thus see to what extent the ignored tail end of the sum can
contribute to the value of $L(1/2,\chi_d)$.
Summation by parts yields
\begin{equation}
\sum_{n\leq N} \chi_d(n)f(n) = f(N)\sum_{n\leq N} \chi_d(n) - \int_1^N
\sum_{n\leq t} \chi_d(n) f'(t) dt. \label{eq: partialsum}
\end{equation}
So on letting $N\rightarrow \infty$, we obtain
\begin{equation}
L\pr{1/2, \chi_d} = -\int_1^\infty \sum_{n\leq t} \chi_d(n) f'(t) dt. \label{eq: lim-partial}
\end{equation}
Moreover, by subtracting (\ref{eq: partialsum}) from (\ref{eq: lim-partial}), we get a formula for the tail:
\begin{equation}
\sum_{n = N+1}^\infty \chi_d(n)f(n) = -f(N)\sum_{n\leq N} \chi_d(n) -
\int_N^\infty \sum_{n\leq t} \chi_d(n) f'(t) dt. \label{eq: tail}
\end{equation}
One could use the Polya-Vinogradov inequality, Burgess's bound, or even the trivial bound $\lr{\chi_d(n)}\leq 1$
here to get a reasonable, but not optimal, estimate for the size of the tail.
However, something closer to the truth is obtained by using the conjectured
bound
\begin{equation}
\sum_{n\leq x} \chi_d(n) = O\pr{x^{1/2} d^{\,\epsilon}}. \label{eq: chi-conjecture}
\end{equation}
Combined with \eqref{eq:f bound} this gives
\begin{equation}
f(N)\sum_{n\leq N} \chi_d(n) = O\pr{\frac{d^{3/4+\epsilon}}{N^{3/2}} e^{-\pi N^2/d}}, \label{eq: firstterm}
\end{equation}
and, similarly,
\begin{equation}
\int_N^\infty \sum_{n\leq t} \chi_d(n) f'(t)dt \ll
d^{\,\epsilon}\int_N^\infty t^{\fr{1}{2}} f'(t) dt
\ll \frac{d^{3/4+\epsilon}}{N^{3/2}} e^{-\pi N^2/d}, \label{eq: secondterm}
\end{equation}
where we have applied integration by parts to get the last bound.
Applying these bounds to~\eqref{eq: tail} and choosing
\begin{equation}
N = \sqrt{\fr{d}{\pi}\log(10) \cdot \tx{Digits}}, \label{eq: N}
\end{equation}
gives the following bound for~\eqref{eq: tail}:
\begin{equation}
O\pr{10^{-\tx{Digits}}\frac{d^\epsilon}{\tx{Digits}^{1/2}}}.
\label{eq:tail bound}
\end{equation}
We therefore conclude that the tail is not much bigger than an individual term, and, in principle,
we could compensate for the extra $d^\epsilon$ by taking Digits slightly
larger than our desired output precision, say by an amount equal to $\epsilon \log{d}/\log{10}$.
We remark that, by using the trivial estimate or Polya-Vinogradov inequality, we
could get rigorous estimates with explicit constants, but larger by a factor of roughly $d^{1/4}$
as compared to~\eqref{eq:tail bound}.
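Putting the pieces of this section together, the following self-contained sketch (ours, not the production code) evaluates \eqref{eq: pos-appformula} truncated at $N$ as in \eqref{eq: N}. It computes $\chi_d(n)$ as a Kronecker symbol, by extracting powers of two and applying quadratic reciprocity, and evaluates $\Gamma(1/4,w)$ by a standard series/continued-fraction split rather than a library call:

```cpp
#include <cmath>

// chi_d(n), the Kronecker symbol (d|n), for d > 0 and n >= 1.
int chi(long long d, long long n) {
    int k = 1;
    while (n % 2 == 0) {                // extract powers of two
        n /= 2;
        if (d % 2 == 0) return 0;
        long long r = d % 8;
        if (r == 3 || r == 5) k = -k;   // (d|2) from d mod 8
    }
    long long a = d % n;                // now n is odd; Jacobi symbol loop
    while (a != 0) {
        while (a % 2 == 0) {
            a /= 2;
            long long r = n % 8;
            if (r == 3 || r == 5) k = -k;
        }
        long long t = a; a = n; n = t;  // quadratic reciprocity
        if (a % 4 == 3 && n % 4 == 3) k = -k;
        a %= n;
    }
    return (n == 1) ? k : 0;
}

// Upper incomplete gamma Gamma(z,w), w > 0: lower series for w < z + 1,
// Lentz-style continued fraction otherwise (the standard split).
double upper_gamma(double z, double w) {
    const double eps = 1e-15;
    double pre = std::exp(-w + z * std::log(w));
    if (w < z + 1.0) {
        double sum = 1.0 / z, del = sum;
        for (int j = 1; j < 500 && std::fabs(del) > std::fabs(sum) * eps; ++j) {
            del *= w / (z + j);
            sum += del;
        }
        return std::tgamma(z) - pre * sum;  // Gamma(z) minus the lower gamma
    }
    double b = w + 1.0 - z, c = 1e300, d = 1.0 / b, h = d;
    for (int i = 1; i < 500; ++i) {
        double an = -i * (i - z);
        b += 2.0;
        d = an * d + b; if (std::fabs(d) < 1e-300) d = 1e-300;
        c = b + an / c; if (std::fabs(c) < 1e-300) c = 1e-300;
        d = 1.0 / d;
        double del = d * c;
        h *= del;
        if (std::fabs(del - 1.0) < eps) break;
    }
    return pre * h;
}

// L(1/2, chi_d) via \eqref{eq: pos-appformula}, truncated at N of \eqref{eq: N};
// d must be a positive fundamental discriminant, digits the working precision.
double L_half(long long d, int digits = 15) {
    const double pi = 3.141592653589793;
    double g14 = std::tgamma(0.25);
    long long N = (long long)std::sqrt(d * std::log(10.0) * digits / pi) + 1;
    double s = 0.0;
    for (long long n = 1; n <= N; ++n) {
        int ch = chi(d, n);
        if (ch != 0)
            s += ch * upper_gamma(0.25, pi * (double)n * n / d)
                 / (std::sqrt((double)n) * g14);
    }
    return 2.0 * s;
}
```

As a sanity check, the incomplete gamma routine can be tested against the identity $\Gamma(1/2,w)=\sqrt{\pi}\,\mathrm{erfc}(\sqrt{w})$, and the computed $L(1/2,\chi_5)$ against the small, positive value it is known to take.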
\subsection{Cancellation and accuracy}
We can also use the above analysis to show that our approach to computing $L(1/2,\chi_d)$
using the smooth approximate functional equation
is well balanced, i.e. that little cancellation and hence loss of
precision takes place in summing~\eqref{eq: pos-appformula}.
We consider the maximum size that the partial sums can attain so as to
give us a sense of how many digits accuracy after the decimal place are attained
when working with Digits decimal places.
Consider the partial sums ~\eqref{eq: partialsum} (for
a general $N'$, not just our specific choice of $N$), apply the conjectured
bound~\eqref{eq: chi-conjecture}, and integrate by parts:
\begin{equation}
\sum_{n\leq N'} \chi_d(n)f(n) \ll f(N') {N'}^{1/2} d^{\,\epsilon} + d^{\,\epsilon} t^{1/2} f(t) \Big|_1^{N'}
+ d^{\,\epsilon} \int_1^{N'} t^{-1/2} f(t) dt. \label{eq: estimate partial sum}
\end{equation}
Again, we can get a proven, though weaker, upper bound with explicit constants if we use
a proven bound rather than the conjecture~\eqref{eq: chi-conjecture}.
Next, notice that $\Gamma(1/4,x) < \Gamma(1/4)$, because the definition of the lhs here involves
integrating over a smaller portion of the positive real axis as compared to the rhs.
Thus, from~\eqref{eq:f(t)},
\begin{equation}
f(t) < 2 t^{-1/2}.
\end{equation}
Applying this to~\eqref{eq: estimate partial sum} gives
\begin{equation}
\sum_{n\leq N'} \chi_d(n)f(n) \ll d^{\,\epsilon} \log{N'}.
\end{equation}
Therefore, because we take the partial sums with $N'\leq N = O((d\, \tx{Digits})^{1/2})$,
we have, on adjusting $\epsilon$ to incorporate the $\log{d}$:
\begin{equation}
\sum_{n\leq N'} \chi_d(n)f(n) \ll d^{\,\epsilon} \log{\tx{Digits}}.
\end{equation}
Therefore, the partial sums do not get large, and we thus retain nearly as
many digits of accuracy beyond the decimal point as our working precision.
Using a similar analysis, the effect of accumulated round-off error can be estimated
by replacing $\chi_d(n)$ with random plus and minus ones
multiplied by a factor of size $10^{-\tx{Digits}}$ to model the random rounding up or down
of the terms in the sum. With high probability, we then get an error, due to accumulated
round-off, of size
\begin{equation}
(d^{\,\epsilon} \log{\tx{Digits}}) 10^{-\tx{Digits}}.
\end{equation}
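The random-sign model above is easy to illustrate numerically: with a weight behaving like $f(n)\approx 2n^{-1/2}$, the signed sum behaves like a weighted random walk of typical size $\sqrt{\sum f(n)^2}\approx 2\sqrt{\log N}$, far smaller than the worst-case absolute sum $\sum f(n)\approx 4\sqrt{N}$. The following Python sketch uses illustrative values (the weight $2n^{-1/2}$ and truncation $N=10^5$ are our choices for the demonstration, not the values of the actual computation; the overall scale $10^{-\tx{Digits}}$ is factored out):

```python
import math
import random

random.seed(12345)  # fixed seed for reproducibility

N = 10**5
f = [2.0 / math.sqrt(n) for n in range(1, N + 1)]  # proxy for the weight f(n) ~ 2 n^{-1/2}

# Model accumulated round-off: each term acquires a random sign
# (the common factor 10^{-Digits} is an overall scale, left out here).
signed_sum = sum(random.choice((-1.0, 1.0)) * w for w in f)
absolute_sum = sum(f)  # worst case, if all round-off errors aligned

# The signed sum is typically of size sqrt(sum f(n)^2) ~ 2 sqrt(log N),
# a tiny fraction of the worst-case absolute sum ~ 4 sqrt(N).
print(abs(signed_sum), absolute_sum)
```

Restoring the scale $10^{-\tx{Digits}}$, the accumulated round-off is of size $10^{-\tx{Digits}}$ times a slowly growing factor, in line with the estimate above.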
If one desires rigorous, rather than experimental, values of $L(1/2,\chi_d)$,
an interval arithmetic package should be used in practice. Because our goal was to test
conjectures rather than prove a rigorous numerical result, we were satisfied with an
intuitive understanding of the accuracy
of our computation and carried out several checks of the values attained, for example comparing
a similar smooth approximate functional equation for the case $d<0$ against select values attained by
our implementation using the Epstein zeta function, and also using a high precision version
of Rubinstein's lcalc package to test several values.
\subsubsection{Hacks}
We list a few hacks which were helpful in the implementation of the smooth
approximate functional equation (\ref{eq: pos-appformula}).
\begin{itemize}
\item $\chi_d(n)$ can be efficiently computed by repeatedly extracting
powers of 2 and applying quadratic reciprocity.
\item As in the case for $d<0$, it is to our advantage to partition
$0< d \leq X$ into blocks and farm the work out to many processors.
\item Due to the presence of $\chi_d(n)$ in~(\ref{eq: pos-appformula}),
it is more efficient to place the $d$-loop on the inside and the
$n$-loop on the outside, because $\chi_d(n)$ is periodic in $d$ with
period either $n$ or $8n$, depending on whether the power of two
dividing $n$ is even or odd. Furthermore, $n$ is small
compared to $d$, by~\eqref{eq: N}. Thus, for each $n$ we
precomputed a table of $\chi_d(n)$, so as to compute these values
only once per residue class $d$ mod $n$ or $8n$. This pays off so long as
each residue class gets hit, on average, more than once (perhaps
slightly more than once, because of the overhead involved in storing the values
and looking up the array). In our implementation, with blocks of
length $10^6$, $0< d < 1.3 \times 10^{10}$, and $16$ digits of working
precision, this precomputation was worthwhile.
\item We computed the normalized incomplete gamma function $G\pr{z,w}$,
evaluated at $z=1/4$ and $w = n^2\pi/d$, as follows. For
$w>37$, return $0$ (since $\exp\pr{-37} < 10^{-16}$). For $1 < w <
37$, use a precomputed table of Taylor series, centering each Taylor
series at multiples of $.01$ (so nearly $4000$ Taylor series) and
taking terms up to degree $7$ (less for larger $w$ because of the
exponential decay). Otherwise, for $w<1$, employ the complementary
incomplete gamma function $$\gamma(z,w) := \Gamma(z) - \Gamma(z,w) = \int_0^w
e^{-x} x^{z-1} dx, \qquad \tx{$\Re(z) > 0$, $\lr{\tx{arg} w} <
\pi$}.$$ Specifically, set $$g(z,w) = w^{-z} \gamma\pr{z,w} = \int_0^1
e^{-wt}t^{z-1} dt,$$ so $G(z,w) = w^{-z}\Gamma(z) - g(z,w)$, and
integrate by parts to get $$g(z,w) = e^{-w} \sum_{j=0}^\infty
\fr{w^j}{\pr{z}_{j+1}},$$ where $$\pr{z}_j = \begin{cases}
z\pr{z+1}\cdots \pr{z+j-1} \quad &\tx{if $j>0$};\\ 1 \quad
&\tx{if $j=0$}. \end{cases}$$ We stored the value of $\Gamma(1/4)$ and
calculated the above series for $g(1/4,w)$
by truncating the sum when the tail was less
than $10^{-16}$.
\end{itemize}
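The small-$w$ branch of the last item can be made concrete. Below is a minimal Python sketch of the series for $g(z,w)$ and the recovery of $G(z,w)=w^{-z}\Gamma(z)-g(z,w)$; the function names are ours and the truncation tolerance $10^{-16}$ mirrors the $16$-digit working precision, but this is an illustration, not the original C++ routine:

```python
import math

def g_series(z, w, tol=1e-16):
    """g(z, w) = e^{-w} * sum_{j>=0} w^j / (z)_{j+1},
    where (z)_{j+1} = z (z+1) ... (z+j) is a rising factorial."""
    total = 0.0
    poch = z           # (z)_1 = z
    term = 1.0 / poch  # j = 0 term, before the e^{-w} prefactor
    j = 0
    while abs(term) > tol:
        total += term
        j += 1
        poch *= (z + j)     # update to (z)_{j+1}
        term = w**j / poch
    return math.exp(-w) * total

def G(z, w):
    """Normalized incomplete gamma G(z, w) = w^{-z} Gamma(z, w),
    computed as w^{-z} Gamma(z) - g(z, w); intended for 0 < w < 1."""
    return w**(-z) * math.gamma(z) - g_series(z, w)
```

For $z=1$ the identity $\Gamma(1,w)=e^{-w}$ gives $G(1,w)=e^{-w}/w$, and the recurrence $\Gamma(z+1,w)=z\Gamma(z,w)+w^z e^{-w}$ translates to $G(z+1,w)=(z/w)G(z,w)+e^{-w}/w$; both provide quick checks of the series.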
\subsection{Complexity for $d>0$}
Recall that, as for the case of negative discriminants, we are partitioning the interval
$0 < d < X$ into blocks of length $\Delta X$.
The overall cost for sieving for fundamental discriminants, summed over all
blocks, is a meager $O(X)$ arithmetic operations and array accesses on numbers
of bit length $O(\log{X})$, as for the case of $d<0$.
Next we estimate the overall time, summed over blocks, required to create a
precomputed table of characters $\chi_d(n)$ for all residue classes
mod $n$ or $8n$.
Summing over blocks $m$, and taking the maximum truncation point~\eqref{eq: N}
that occurs for a given block, the time required is
\begin{equation}
\ll \log(X)^2 \sum_{m\leq \fr{X}{\Dl{X}}}
\sum_{n\leq M} n
\end{equation}
where
\begin{equation}
M = \sqrt{\fr{m\Dl{X}}{\pi}\log(10)\cdot \tx{Digits}}.
\label{eq: M}
\end{equation}
Here we have used the fact that
each character $\kr{d}{n}$ can be calculated in time
$O\pr{\tx{size}(d)\,\tx{size}(n)}$, where size means binary length (see, for example,
\cite{hcohen: numth1}), and that both $d$ and $n$ are of size
$O(\log X)$ in this case.
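A standard way to compute the Kronecker symbol within this time bound, by repeatedly extracting powers of $2$ and applying quadratic reciprocity, can be sketched in Python as follows (this is the generic textbook algorithm, in the spirit of \cite{hcohen: numth1}, not our optimized C++ routine):

```python
def kronecker(a, n):
    """Kronecker symbol (a|n) for n > 0, via extraction of powers of 2
    and quadratic reciprocity."""
    if a % 2 == 0 and n % 2 == 0:
        return 0
    # pull out the even part of n; (a|2) depends only on a mod 8
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    k = -1 if (v % 2 == 1 and a % 8 in (3, 5)) else 1
    a %= n
    while a != 0:
        while a % 2 == 0:        # extract powers of 2 from a
            a //= 2
            if n % 8 in (3, 5):
                k = -k
        a, n = n, a              # quadratic reciprocity: (a|n) -> (n|a)
        if a % 4 == 3 and n % 4 == 3:
            k = -k
        a %= n
    return k if n == 1 else 0
```

In the actual computation one evaluates $\chi_d(n)=\kr{d}{n}$ once per residue class of $d$ modulo $n$ or $8n$ and reads the remaining values from the precomputed table, as described in the previous section.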
Summing, the time needed here is therefore
\begin{equation}
\ll \fr{X^2\log(X)^2\tx{Digits}}{\Dl{X}}.
\end{equation}
So, by choosing $\Dl{X}\gg X^{1/2+\epsilon}$, we can make the overall
time spent on computing the Kronecker symbol $o(X^{3/2})$.
As $\Dl{X}$ increases, there is a tradeoff between spending less time
on the character computation and
having larger arrays, similar to our computation of gcd's in the $d<0$ case.
There is a definite advantage, depending on the particular hardware, to having
smaller arrays, i.e. smaller
$\Delta X$, to reduce the number of calls to move data from RAM into cache.
On our hardware, and in our range $0<d<1.3 \times 10^{10}$, we found
that a value of $\Delta X = 10^6$ worked well.
Thus, the bulk of the work is spent on looping, for each block, through $n$ and
$d$, looking up the precomputed character values, computing the normalized
incomplete gamma function $G(1/4,\pi n^2/d)$ to given precision, and updating
the corresponding value of $L(1/2,\chi_d)$ by the amount $\chi_d(n) f(n)$.
The kind of work and operations required is thus very similar to our
approach for the $d<0$ case, with the handling of characters similar
to our handling of gcd's, and the approach to computing the incomplete gamma function
similar to that of the $K$-Bessel function.
However, there is one significant difference in the two methods.
For $d<0$, equation~\eqref{eq: number of triples} tells us that
our Epstein zeta function method
loops through $\frac{\pi}{36\zeta(3)} X^{3/2} \approx 0.0726\, X^{3/2}$
triples $a,b,c$. Not only is the constant $.0726$ small,
but the desired precision does not affect the number
of triples required. Precision
becomes a factor only in regards to computing the particular contribution from
each triple, for example the number of terms needed for the various $K$-Bessel Taylor
series expansions.
But, in the present case of the smooth approximate functional equation, both
the {\it length of the sum} and the amount of work needed to compute the individual
terms of the sum depend on the desired precision. So, the main difference between
these two approaches is the length of the sum.
In the case of $d>0$, the length of the main $d,n$
loops, summed over all blocks of length $\Dl{X}$, is quantified by
\begin{equation}
L_{\tx{pos}} =
\sum_{m \leq \fr{X}{\Dl{X}}}
\sum_{n\leq M}
\sum_{(m-1)\Dl{X} < d \leq m\Dl{X}} 1,
\label{eq: Lpos defn}
\end{equation}
with $M$ given by (\ref{eq: M}). Simplifying the two inner sums,
this quantity is easily estimated to asymptotically be
\begin{equation}
(\Delta X)^{3/2} \sqrt{\fr{\log(10)\cdot \tx{Digits}}{\pi}} \sum_{m \leq \fr{X}{\Dl{X}}} \sqrt{m}
\;\sim\; \frac{2}{3} \sqrt{\fr{\log(10)\cdot \tx{Digits}}{\pi}} X^{3/2}.
\label{eq: Lpositive}
\end{equation}
So, if $\tx{Digits}=16$, then $L_{\tx{pos}} \approx 2.28 X^{\fr{3}{2}}$,
which is roughly thirty times larger than the number of triples, $0.0726 X^{3/2}$,
considered for $d<0$.
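The asymptotics in~\eqref{eq: Lpositive} are easy to check numerically by evaluating the sum~\eqref{eq: Lpos defn} directly; the parameters below ($X=10^6$, $\Delta X=10^3$) are illustrative only, much smaller than in the actual computation:

```python
import math

Digits = 16
X, dX = 10**6, 10**3           # illustrative, far smaller than the real run
num_blocks = X // dX

# Direct evaluation of L_pos: the innermost d-sum contributes dX per n,
# and the n-sum contributes floor(M) terms with M as in eq. (M).
L_pos = sum(dX * int(math.sqrt(m * dX * math.log(10) * Digits / math.pi))
            for m in range(1, num_blocks + 1))

# Asymptotic prediction (2/3) sqrt(log(10) * Digits / pi) * X^{3/2}
prediction = (2.0 / 3.0) * math.sqrt(math.log(10) * Digits / math.pi) * X**1.5

print(L_pos / prediction)  # close to 1
```

Already at this modest size the ratio is within a fraction of a percent of $1$, the deviation coming from the floor function and the lower-order terms of $\sum\sqrt{m}$.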
It is impossible to precisely pin down, theoretically, the constant factor
savings in the runtime of our method for $d<0$ compared to the approach used
for $d>0$ as it depends on the speed of the various arithmetic and memory
operations on particular hardware. Furthermore, these are not easily
quantifiable as they change according to how the various resources of the
machine are being used at a given moment. Another obstacle to a precise
comparison is that one would need to take into account implementation choices
made by the programmer and also by the compiler at the minutest of levels.
Nonetheless, the rough comparison between the lengths of the main loops
involved, i.e. $2.28 X^{\fr{3}{2}}$ for $d>0$ and $0.0726 X^{3/2}$ for $d<0$
(see equation~\eqref{eq: number of triples}), does reflect the different
runtimes when compared experimentally.
We ran our computation for $d<0$ on {\tt mod.math.washington.edu}, which is a
Sun Fire X4450, dating from 2008, with 24 Intel Xeon X7640 2.66 GHz CPUs (we
used 12 of them), and 128 GB RAM. Our computation for $d>0$ was carried out
on {\tt pilatus.uwaterloo.ca} which is an older SGI Altix 3700
machine, dating from around 2003, with 64 Intel Madison Itanium CPUs (we used
55 of these) running at 1.3 GHz, and 192 GB of RAM.
Our computation, for $d>0$, took roughly $18.9$ CPU years, and
about $3.9$ CPU years for $d<0$. Recall that we went up to $0 < - d < 5\times10^{10}$,
whereas for $d>0$, we managed to get to $1.3\times10^{10}$. So not only
did our computation for $d<0$ take much less CPU time, but we went significantly further.
To make this more meaningful, we should compare intervals of similar length, i.e.
the subset of $0 < - d < 1.3\times 10^{10}$. This interval required
$0.4$ CPU years, i.e. about 47 times faster than our computation for the interval
$0 < d < 1.3\times 10^{10}$. However, because different machines were used for
$d<0$ and $d>0$,
we should compensate by dividing the time used for $d>0$ by a factor of $2.5$ to
account for the fact that these $d$ were handled on an older and slower machine.
The value of $2.5$ was decided by rerunning select blocks of $d>0$ on both
machines, using the same {\tt C++} code, and comparing their runtimes, which
were about 2-3 times faster on the newer machine. Therefore, dividing $47$ by
$2.5$, our code ran around 20 times faster for $d<0$ than
it did for $d>0$, consistent with our rough expectations based on the
lengths in both methods of the main loops.
\section{Introduction}
The notion of entropy is more involved in quantum systems than in classical systems, as it also
encodes potential entanglement with another set of dynamical degrees of freedom. This can be another system to which it is (weakly) coupled, the environment, or the measurement apparatus.
In classical equilibrium thermodynamics, the change in the entropy is associated with
heat flow according to the Second Law while
quantum mechanically the entropy can be changed by quantum correlations in the system that may or may not involve heat flow. The field of quantum thermodynamics specifically pursues the question of how
work, heat and entropy
are affected by quantum
correlations including entanglement;
see e.g. \cite{Goold_2016,deffner2019quantum} for recent reviews. This
field
is growing rapidly, even though these many-body entanglement effects are still less well understood than entanglement and decoherence in
few-qubit systems.
Here we take a quantum thermodynamics point of view on non-equilibrium dynamics in many-body systems, with two theoretical models as examples: the Sachdev-Ye-Kitaev (SYK) model and a mixed field Ising chain. The Sachdev-Ye-Kitaev model has a computable non-Fermi liquid ground state
that is long-range many-body entangled
\cite{Sachdev1993Gapless,Kitaev2015Simple}.
Through the holographic duality between anti-de-Sitter quantum gravity and matrix large $N$ quantum systems, such SYK models at finite temperature are also dual descriptions of black holes in anti-de-Sitter gravity
\cite{Sachdev2015Bekenstein}.
Using this duality
to study the profound question of black hole evaporation through Hawking radiation and its information flow \cite{Penington:2019npb,Almheiri2019Entropy,Penington:2019kki,Almheiri2019Page,chen2020evaporating}, recent studies have considered the quenched cooling of a hot thermal SYK state (the black hole) suddenly being able to ``evaporate'' into a cooler or even $T=0$ SYK state (the container for the e\-va\-po\-rated radiation) \cite{Zhang2019Evaporation,Almheiri2019Universal,maldacena2020syk}.\footnote{Early work on SYK quenches is \cite{Eberlein:2017wah}. For other aspects of SYK dynamics, see this and citations thereof.} A surprising finding from the perspective of classical thermodynamics has been that these observe an initial energy {\em increase} \cite{Almheiri2019Universal, Zhang2019Evaporation,maldacena2020syk,larzul2022fast} in the hot subsystem, confirming results from preceding black hole evaporation studies \cite{Almheiri:2018xdw}.
It was
argued, using Schwinger-Keldysh field theory, that many relativistic continuum field theories will exhibit such an energy increase in the hot system when quench coupling two thermal states \cite{Almheiri:2018xdw,Almheiri2019Universal} even though a
fundamental proof or understanding
was
missing.
Indeed, the effect is not universal: a quenched cooling
between two two-level systems provides a counterexample \cite{Almheiri2019Universal}.\footnote{{The thermal state of a two-level system is defined through its density matrix $\rho = \frac{1}{Z} \sum_{n} |n\rangle e^{-\beta E_{n}}\langle n| $ with $n=\downarrow,\uparrow$ and $Z$ the appropriate normalization such that Tr$\rho=1$.}}
In a recent article, we showed that quantum thermodynamics \cite{Goold_2016,deffner2019quantum} provides the universal explanation for this counterintuitive
rise \cite{gnezdilov2021information}. In a quenched cooling protocol where a (hot) thermal quantum system with Hamiltonian $H_A$ is brought into instantaneous contact with a (cooler) thermal reservoir at $t=0$ through $H_{\text{total}}=H_A+H_B+\theta(t)H_{\text{int}}$, the change in the energy of the hot subsystem $A$ equals
\begin{align}
\label{eq:1a}
\Delta E_{A}(t) = T_{A}\Delta S_{\text{vN},A}(t) + T_AD(\rho_A(t)||\rho_{T_A})~.
\end{align}
Here $S_{\text{vN}}=-{\rm Tr}(\rho_A\ln\rho_A)$ is the von-Neumann entropy of the reduced density matrix of the subsystem $A$: $\rho_A={\rm Tr}_B\rho$; the energy of the subsystem $E_A(t)$ is the expectation value of its subsystem Hamiltonian $E_A ={\rm Tr} H_A\rho(t) = {\rm Tr} H_A\rho_A(t)$; and {$D(\rho_A(t)||\rho_{T_A}) = {\rm Tr} \rho_A(t) \log \left(\rho_A(t)/\rho_{T_A}\right)$ }is the relative entropy between the reduced density matrix of system $A$ and the initial thermal density matrix of $A$ at $t=0$. The change $\Delta E(t)=E(t)-E(0)$ is with respect to the same quantity at $t=0$. By symmetry an analogous relation holds for subsystem $B$.
As the relative entropy $D(\rho_A(t)||\rho_{T_A}) \geq 0$ is positive semi-definite, one arrives at an inequality
that holds universally for any model Hamiltonian when such a quenched cooling protocol is considered
\begin{align}
\label{eq:1}
\Delta E_A(t) \geq T_A \Delta S_{\text{vN},A}(t)~.
\end{align}
In a quantum system the von-Neumann entropy can have a significant contribution from quantum correlations including
entanglement over and above the classical thermal
entropy.
As the
quantum correlations
between the system and the reservoir can only increase after a quench,
the quantum thermodynamic inequality Eq.\eqref{eq:1} can therefore force an associated {\em increase} in energy in system $A$ even if its initial energy density was higher. Moreover, in perturbation theory to leading order
the inequality saturates as the contribution of the relative entropy is subleading and one can use the equality as a way to measure the von Neumann entropy in a quenched cooling protocol through the energy difference \cite{gnezdilov2021information}.
A common view on non-equilibrium phenomena is that at the shortest time scales the system is extremely sensitive to microscopic information, details of the quench protocol etc, and it is only the longest-time-scale-relaxation to equilibrium that is universal. Eq.\eqref{eq:1} surprisingly shows
that it need not be so:
at the shortest possible non-equilibrium time scale there is still a notion of the first law linking entropy to energy, even though the standard first law in the absence of work, $dE=TdS$, relates state functions of equilibrium states.
This positive contribution of quantum correlations to the von Neumann entropy is present in {\em any} quantum system, but our classical experience is that the energy in the hot system decreases directly upon contact, because heat must flow from hot to cold. For this intuition to be restored, the positive contribution of quantum correlations and entanglement must be overwhelmed by the semi-classical heat and information flow from hot to cold.
By studying quenched cooling in SYK models, where entanglement is very strong, and in one-dimensional mixed field Ising chains, where entanglement can be made very weak, we exhibit precisely this: classical experience is restored in a particle-like system at high temperatures, where entanglement is weak.
\section{Energy dynamics in quenched cooling}
The setup we study consists of two initially independent quantum subsystems $A$ and $B$ with Hamiltonians $H_A$ and $H_B$ respectively. Initially $(t<0)$, each subsystem is prepared in a thermal state at temperature $T_A$ and $T_B$,
so the full system is in an uncorrelated product state:
\begin{gather}
\rho_0=\rho_{T_A}\otimes \rho_{T_B} \nonumber
\\
\rho_{T_\alpha} = \frac{1}{Z_\alpha}e^{-H_\alpha / T_\alpha}, \quad \alpha =A,B.
\end{gather}
We will study the behaviour of the subsystems when they are brought into instantaneous contact at $t=0$ through an interaction Hamiltonian $H_{\text{int}}$. The complete setup is a closed system that evolves with the full Hamiltonian:
\begin{align}
H_{\text{total}}=H_A+H_B+\theta(t)H_{\text{int}}~.
\end{align}
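The initial product state is straightforward to construct numerically once the subsystem Hamiltonians are diagonalized. A minimal pure-Python sketch for two two-level subsystems (the energy levels and temperatures below are arbitrary placeholders of our own, not the SYK or Ising spectra):

```python
import math

def thermal_weights(energies, T):
    """Boltzmann weights p_n = e^{-E_n/T} / Z for a diagonal Hamiltonian."""
    ws = [math.exp(-e / T) for e in energies]
    Z = sum(ws)
    return [w / Z for w in ws]

# Arbitrary example spectra and temperatures (hot system A, cold reservoir B)
E_A, T_A = [-1.0, 1.0], 0.5
E_B, T_B = [-1.0, 1.0], 0.1

p_A = thermal_weights(E_A, T_A)
p_B = thermal_weights(E_B, T_B)

# rho_0 = rho_TA (x) rho_TB: for commuting (diagonal) states the product
# state is simply the outer product of the two probability vectors.
rho0_diag = [pa * pb for pa in p_A for pb in p_B]

# Tracing out B recovers rho_TA: the state is uncorrelated at t = 0.
marginal_A = [rho0_diag[0] + rho0_diag[1], rho0_diag[2] + rho0_diag[3]]

# Initial subsystem energy; for this unit level splitting E_A(0) = -tanh(1/T_A).
E_A0 = sum(p * e for p, e in zip(p_A, E_A))
```

The same construction, with the thermal weights replaced by the Boltzmann weights of the exact-diagonalization spectra, gives the initial state used in the quench protocols below.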
Motivated by the results discussed in the introduction, we focus our interest on two different models:
\begin{itemize}
\item Finite $N$ Majorana SYK with each subsystem governed by the Hamiltonian
\begin{gather} \label{eq:MSYK}
H_{\alpha} = i^{q/2} \sum_{j_1 \dots j_q=1}^{N_\alpha} J^{\alpha}_{j_1 \dots j_q} \psi_{j_{1}}^{\alpha} \dots \psi_{j_q}^{\alpha} ~~~~~~~~\alpha=A,B
\end{gather}
where $q$ is the same for both dots and can be either $q=2$ or $q=4$, further labeled as SYK$_2$ and SYK$_4$ respectively. The couplings are drawn from a Gaussian distribution with the following parameters:
\begin{gather} \label{eq:SYK_Jcouplings}
\langle J^{\alpha}_{j_1 \dots j_q } \rangle =0,
\quad
\langle \big(J^{\alpha}_{j_1 \dots j_q }\big)^2 \rangle = \frac{(q-1)!\, J^2 }{N_\alpha^{q-1}}.
\end{gather}
These two SYK dots are coupled through a two-Majorana tunneling interaction whose couplings are also sampled from a Gaussian distribution:\footnote{{We have taken a variance in $\lambda$ that is asymmetric in $N_A$ and $N_B$ to readily compare with \cite{Zhang2019Evaporation,Almheiri2019Universal}. These authors chose this such that the interaction stays relevant in the large $N_A$ limit.}}
\begin{gather} \label{eq:MSYK_int}
H_{\text{int}} = i\sum_{ij} \lambda_{ij} \psi_i^{A} \psi_j^{B},
\\
\langle \lambda_{ij} \rangle =0,
\quad
\langle \lambda_{ij}^2 \rangle = \frac{\lambda^2}{N_B}.
\end{gather}
This system is analyzed with exact diagonalization and
averaged over
$R={100}$ different coupling realizations. To reduce the number of free parameters we take two equal-size dots $N_A=N_B \equiv N$.
\item The 1D mixed field Ising model, also analyzed using exact diagonalization, with a particle-like contact interaction:
\begin{gather} \label{eq:H_Ising}
H_\alpha = -\sum_{i=1}^{ N_\alpha} \left(J Z_i^{\alpha} Z_{i+1}^\alpha + g X_i^\alpha + h Z_i^\alpha \right), ~~~~~~~~\alpha=A,B
\\
H_{\text{int}}^{(tunn.)} = -
\lambda (X+iY)_{N_A}^{A} (X-iY)_{1}^B +{h.c.} \label{eq:H_Ising_Interaction}
\end{gather}
\end{itemize}
{Dimensionful parameters are expressed in units of $J$, which is usually set to $J=1$.}
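For the exact diagonalization one needs explicit matrix representations of the Majorana operators. A standard Jordan-Wigner construction, normalized so that $\{\psi_i,\psi_j\}=\delta_{ij}$, can be sketched in pure Python as follows (a generic textbook construction shown for $N=4$ Majoranas, i.e. two spin sites; it is not tied to our specific code):

```python
import math

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def chain(ops):
    out = ops[0]
    for op in ops[1:]:
        out = kron(out, op)
    return out

def anticomm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(len(AB[0]))] for i in range(len(AB))]

def majoranas(n_sites):
    """Jordan-Wigner Majoranas psi_1, ..., psi_{2 n_sites}: a string of Z's,
    then X or Y, then identities, divided by sqrt(2) so {psi_i, psi_j} = delta_ij."""
    psis = []
    for k in range(n_sites):
        for P in (X, Y):
            mat = chain([Z] * k + [P] + [I2] * (n_sites - k - 1))
            psis.append([[v / math.sqrt(2) for v in row] for row in mat])
    return psis

psi = majoranas(2)  # N = 4 Majorana operators as 4x4 matrices
```

The SYK and tunneling Hamiltonians above are then sums of products of these matrices with Gaussian random couplings; verifying the anticommutation relations is a useful sanity check before building them.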
Fig.~\ref{fig:E1_SYK_twoRegimes_bump} shows the classically unexpected rise in energy in system A directly following the cooling quench with $T_A> T_B$ found in \cite{Zhang2019Evaporation,Almheiri2019Universal}.
We shall now show that even though $E_A$ initially increases, there is no energy flux from the cold reservoir to the hot system. The energy increase instead derives solely from the energy contribution of the interaction Hamiltonian, but it is nevertheless a real change of energy, as a subsequent decoupling of $A$ and $B$ shows: at the moment of decoupling, work must be performed on the combined system-reservoir.
\begin{figure}[!t]
\includegraphics[width=0.5\textwidth]{FigsNew/DisAvg_EA_t_Em_SYK_Ns10Js1Nb10Jb1fs1fb1V01_Ts05_Tb01_qs4qb4_t-1_80d.pdf}
\caption{Normalized change of the energy
of the hotter system $A$, $\Delta E_A= {\rm Tr} \big((\rho_A(t)-\rho_{T_A}) H_A\big)$, as a function of time. At short times it increases counter to intuition. Majorana SYK${}_4$ in exact diagonalization averaged over $R=100$ realizations with parameters of both systems on top of the plot. The red dot marks the bump that is reached at time $t_m$ and has a height $E_m$ relative to the initial energy.
}
\label{fig:E1_SYK_twoRegimes_bump}
\end{figure}
The above conclusions follow from the following observations in
SYK systems:
\begin{enumerate}
\item Directly following the quench, the system-energy $E_A(t)$ and the reservoir-energy $E_B(t)$ {\em both} grow
(Fig.\ref{fig:EAll_SYK_twoRegimes_bump}).
The fact that there is no net energy flow from cold to hot means the energy must come from somewhere else.
\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{FigsNew/DisAvg_EA_EB_Eint_Etotal_t_SYK_Ns10Js1Nb10Jb1fs1fb1V01_Ts05_Tb01_qs4qb4_t-1_80d.pdf}
\caption{Normalized change of energy of the two subsystems $A$, $B$, the interaction energy $\Delta E_{\text{int}}= {\rm Tr} \big((\rho(t)-\rho_0) H_{\text{int}}\big)$ and the total energy $E_{\text{total}}$ as a function of time. Directly following the quench both the system-energy $E_A(t)$ and the reservoir-energy $E_B(t)$ grow, whereas the interaction energy $E_{\text{int}}(t)$ decreases. The sum vanishes, as it must, since no energy is put into the combined system/reservoir. {Majorana SYK in exact diagonalization} averaged over $R=100$ realizations with parameters of both systems on top of the plot.}
\label{fig:EAll_SYK_twoRegimes_bump}
\end{figure}
\item The total Hamiltonian $H_{\text{total}}=H_A+H_B+\theta(t)H_{\text{int}}$ contains a third contribution $H_{\text{int}}$. Its contribution to the energy is negative (Fig.\ref{fig:EAll_SYK_twoRegimes_bump}).
\item The change in the expectation value of the total Hamiltonian is nevertheless readily computed to vanish:
\begin{align}\label{eq:dHfdt}
\frac{d}{dt}\langle H_{\text{total}} \rangle = i\langle [H_{\text{total}},H_{\text{total}}]\rangle + \delta(t)\langle H_{\text{int}}\rangle
\end{align}
The first term vanishes trivially.
When
$\langle H_{\text{int}}\rangle (0)=0$ as well, as is the case in all the systems we study, then $\langle H_{\text{total}}\rangle$ is constant in time. The ``binding'' energy $E_{\text{bind}}(t)=-E_{\text{int}}(t)=-{\rm Tr}(H_{\text{int}}\rho(t))$ thus
completely accounts for the rise in both $E_A(t)$ and $E_B(t)$.
\item More precisely, for $E_A(t)$ to correspond to a {\em measurable}
energy change (in the sense of commuting with the Hamiltonian)
one should decouple the system from the reservoir with a second quench at a finite time $t_f$ later, as in the standard two-point measurement protocol in quantum thermodynamics \cite{Goold_2016,deffner2019quantum,Popovic2021Thermodynamics}.
Then $H_A$ commutes again with the full Hamiltonian for $t>t_f$.\footnote{Formally, if one does not decouple, the eigenstates of $H_{\text{tot}}$ are no longer localized within $A$ or $B$, and one cannot really say that the expectation value of $H_A$ is the energy of the sub-system $A$.
The expectation value of $H_A$ nevertheless comes the closest and is therefore what is conveniently called the energy of this subsystem.
}
In other words, as in our previous article \cite{gnezdilov2021information}, one considers the two-quench protocol $H_{\text{total}}=H_A+H_B+(\theta(t)-\theta(t_f))H_{\text{int}}$.
Computing the change in total energy, one clearly sees that the energy that must now be supplied equals the binding-energy $E_{\text{bind}}=-E_{\text{int}}(t_f)$.
\begin{align}
\frac{d}{dt}\langle H_{\text{total}}\rangle = -\delta(t-t_f)\langle H_{\text{int}}\rangle~.
\end{align}
Choosing $t_f$ during the initial time period where both $E_A$ and $E_B$ increase, one concludes that for a two-point measurement protocol of such short duration the total energy in the system has increased.
In particular there are initial configurations of $T_A,T_B$
where the final equilibrium temperature after such a short-time two-point measurement protocol is larger than both $T_A$ and $T_B$; see Fig.~\ref{fig:E_turning_off}.
The decoupling
quench must therefore perform work on the system.
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{FigsNew/DisAvg_EA_EB_Eint_Etotal_t_off1_stitched_SYK_Ns10Js1Nb10Jb1fs1fb1V01_Ts05_Tb01_qs4qb4_t-0d1_2d.pdf}
\caption{Normalized change of energy of the two subsystems $A$, $B$, the interaction energy $\Delta E_{\text{int}}= {\rm Tr} \big((\rho(t)-\rho_0) H_{\text{int}}\big)$ and the total energy $E_{\text{total}}$ as a function of time in a two-quench protocol with the interaction turned off at $t_f=1$. At $t_f$ the change in total energy shows the energy supplied to the system, which exactly equals $-E_{\text{int}}(t_f)$. {Majorana SYK in exact diagonalization} averaged over $R=100$ realizations with parameters of both systems on top of the plot.}
\label{fig:E_turning_off}
\end{figure}
\item
{In general, since the whole system $AB$ is closed, the total change in the energy of each subsystem, $A$ or $B$, can be due to two components, the contribution from/debit to the ``binding''-energy and the thermal exchange between $A$ and $B$:
\begin{subequations}
\begin{gather}
\Delta E_A = \Delta E_{A,bind} + \Delta E_{B~to~A} \label{eq:EAeqEAbind_EBtoA}
\\
\Delta E_B = \Delta E_{B,bind} - \Delta E_{B~to~A}. \label{eq:EBeqEBbind_EBtoA}
\end{gather}
\end{subequations}
We can estimate the binding energy for each subsystem $A, B$ with respective initial temperatures $T_A\neq T_B$ separately from the interaction energy of a second quench experiment with an equal temperature setup $E_{\alpha,bind}\approx -\frac{1}{2}E_{\text{int}}(T_A=T_B=T_\alpha)$, i.e. we determine $E_{A,{bind}}$ from a quench set-up where both system and reservoir have initial temperature $T_A$, and $E_{B,\text{bind}}$ from a quench set-up where both system and reservoir have initial temperature $T_B$.
{Using this estimate in the quenched cooling set-up, provided the two temperatures are not too different,} we can numerically compute the thermal flux from $B$ into $A$ as
\begin{gather}
\Phi_A=\frac{d}{dt}{E}_{B~to~A}=\frac{1}{2}\left(\frac{d}{dt}{E}_A-\frac{d}{dt}{E}_B \right) -\frac{1}{2}\left(\frac{d}{dt}{E}_{A,bind}-\frac{d}{dt}{E}_{B,bind} \right) .
\label{eq:fluxBtoA}
\end{gather}
The flux $\Phi_A$ is always negative, and at early times it is subdominant to the binding energy (Fig.~\ref{fig:Energy_Flux_AtoB}). This shows that even when $E_A$ increases initially, the energy flux/heat transport is nevertheless always from the hot system $A$ to the cold reservoir $B$, and the energy supplied for the increase comes solely from the binding energy, or from the outside when decoupling $A$ and $B$.
}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{FigsNew/DisAvg_PhiAThermal_PhiAInt_PhiA_t_SYK_Ns10Js1Nb10Jb1fs1fb1V01_Ts05_Tb01_qs4qb4_t-1_20d.pdf}
\caption{Time derivatives of the energy $E_A$ of subsystem $A$; time derivative of an estimate of binding energy contribution $E_{A,\text{bind}}$ from considering an equal temperature quench $(T_A=T_B=0.5J)$, and the resultant thermal flux {from cold reservoir $B$ to hot system $A$}. The flux is always negative and always flows from hot to cold.
Majorana SYK in exact diagonalization averaged over $R=100$ realizations with parameters of both systems on top of the plot.
}
\label{fig:Energy_Flux_AtoB}
\end{figure}
\end{enumerate}
\subsection{Energy rise driven by {quantum correlations}}
As previewed in the introduction the quantity that controls this rise in energy $E_A$ from the contribution of the ``binding''-energy to the combined system-reservoir is the von Neumann {entropy of the reduced density matrix of} system $A$: {$\rho_A(t)={\rm Tr}_B\rho(t)$ }. To see this, consider the relative entropy between $\rho_A(t)$ and the initial thermal density matrix
\begin{align}
D(\rho_A(t)||\rho_{T_A}) = {\rm Tr}(\rho_A(t)\ln\rho_A(t))-{\rm Tr}(\rho_A(t)\ln\rho_{T_A})~.
\end{align}
Substituting that $\rho_{T_A}=\frac{1}{Z_A}e^{-\hat{H}_{A}/T_A}$ one immediately has
\begin{align}
\label{eq:RelEntWithThermal}
T_A D(\rho_A(t)||\rho_{T_A})+T_A S_{\text{vN},A}(t) = E_A(t) -F_A~.
\end{align}
where $F_A=-T_A\ln Z_A = E_{A}(0)-T_AS_A(0)$ is the free energy of the initial thermal state.
The time-dependent terms form the definition of the
{\em information free energy}
\begin{align}
{\cal F}(t;{T_A}) = E_A(t)-T_A S_{\text{vN},A}(t) = F_A + T_A D(\rho_A(t)||\rho_{T_A}).
\end{align}
It encodes the energy available for work and its full counting statistics in open quantum systems that decohere due to their interaction with the environment. The loss of information due to decoherence and decorrelation costs work according to Landauer's principle, and the information free energy accounts for that \cite{Goold_2016,deffner2019quantum}.
The change in energy of system $A$ after the quench directly follows from Eq.\eqref{eq:RelEntWithThermal}
and immediately brings us to Eq.(\ref{eq:1a}).
\begin{align}
\Delta E_A(t)=E_A(t)-E_A(0)= T_A\Delta S_{\text{vN},A}(t)+T_A D(\rho_A(t)||\rho_{T_A}),
\nonumber
\end{align}
{and using the positive semi-definiteness of the relative entropy} we recover Eq.\eqref{eq:1}
\begin{align}
\Delta E_A(t) \geq T_A\Delta S_{\text{vN},A}.
\nonumber
\end{align}
Both the equality and the inequality are readily observed in exact diagonalization of Majorana SYK models, see Fig.\ref{fig:OhanesjansLaw}.
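The identity can also be verified in a minimal non-SYK setting. The pure-Python sketch below takes two qubits with $H_\alpha=\varepsilon_\alpha Z_\alpha$ and $H_{\text{int}}=\lambda X_A X_B$ (toy parameters of our own choosing, not the SYK setup); the time evolution then factorizes into two $2\times2$ blocks, $\rho_A(t)$ stays diagonal, and $\Delta E_A(t) = T_A\Delta S_{\text{vN},A}(t) + T_A D(\rho_A(t)||\rho_{T_A})$ holds to machine precision:

```python
import math

eps_a, eps_b, lam = 1.0, 0.7, 0.3   # toy parameters
T_A, T_B = 0.5, 0.1                 # hot system A, cold reservoir B

def thermal(eps, T):
    up, dn = math.exp(-eps / T), math.exp(eps / T)
    return up / (up + dn), dn / (up + dn)   # populations of Z = +1, -1 states

pA_up, pA_dn = thermal(eps_a, T_A)
pB_up, pB_dn = thermal(eps_b, T_B)
# basis |A B>: H_int couples (uu, dd) and (ud, du) into separate 2x2 blocks
p = [pA_up * pB_up, pA_up * pB_dn, pA_dn * pB_up, pA_dn * pB_dn]

def mix(p_hi, p_lo, delta, t):
    """Population transfer inside a 2x2 block with H = delta*Z + lam*X."""
    Om = math.hypot(delta, lam)
    s = (lam * math.sin(Om * t) / Om) ** 2
    return p_hi * (1 - s) + p_lo * s, p_lo * (1 - s) + p_hi * s

def observables(t):
    r_uu, r_dd = mix(p[0], p[3], eps_a + eps_b, t)   # block {uu, dd}
    r_ud, r_du = mix(p[1], p[2], eps_a - eps_b, t)   # block {ud, du}
    q_up, q_dn = r_uu + r_ud, r_du + r_dd            # rho_A(t) stays diagonal
    E_A = eps_a * (q_up - q_dn)
    S_A = -sum(q * math.log(q) for q in (q_up, q_dn))
    D_A = sum(q * math.log(q / pt) for q, pt in ((q_up, pA_up), (q_dn, pA_dn)))
    return E_A, S_A, D_A

E0, S0, _ = observables(0.0)
Et, St, Dt = observables(2.0)
lhs = Et - E0                       # Delta E_A(t)
rhs = T_A * (St - S0) + T_A * Dt    # T_A Delta S_vN + T_A D
```

In this diagonal example the identity reduces to elementary algebra with Boltzmann weights, which makes the mechanism transparent: the relative entropy term is the exact bookkeeping that turns the inequality into an equality.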
\bigskip
\bigskip
Two important remarks can be made:
\begin{enumerate}
\item As the relative entropy is very small at early times the initial rise in energy is completely determined by the rise in the von-Neumann entropy.\footnote{Strictly speaking fine tuned initial conditions
can exist where the von-Neumann entropy decreases, but decreases so little that the small rise in relative entropy nevertheless results in an energy increase in the hotter system.
}
\item This
rise is even present when the reservoir $B$ is at $T_B=0$, as well as when the system and reservoir are at equal $T$ (Fig.\ref{fig:OhanesjansLaw}). This unambiguously points to the growth of quantum entanglement as the contributing factor to the rise in the von-Neumann entropy; (see also \cite{gnezdilov2021information}).
\end{enumerate}
\begin{figure}[t!]
\includegraphics[width=0.7\textwidth]{FigsNew/EA_TASA_TASADA_Leg2_Ta05_05_Tb05_0_SYK_Ns10Js1Nb10Jb1fs1fb1V01_qs4qb4_t-1_10d.pdf}
\caption{{The
change in energy $\Delta E_A$ is verified to equal
the sum
of the change in von Neumann entropy $\Delta S_{\text{vN},A}$ times the initial temperature $T_A$ and the relative entropy $D_A\equiv D(\rho_A(t)||\rho_{T_A})$. The initial rise in the energy in particular is controlled by the initial rise in the von Neumann entropy.}
This persists when the reservoir is in the groundstate $T=0$ and at equal system-and-reservoir temperature $T_A=T_B$ pointing to entanglement as cause of the rise in von-Neumann entropy. Data from Majorana SYK in exact diagonalization averaged over $R=100$ realizations with parameters on top of the plot.
}
\label{fig:OhanesjansLaw}
\end{figure}
Given that it is the von Neumann entropy growth that controls the early time dynamics between the two subsystems, it is natural to also consider the evolution of mutual information between the two:\footnote{When the system and the reservoir have equal $T$, then
\begin{align}
\Delta E_A(t)+\Delta E_B(t)
&=T\Delta I(A:B) +T\,D(\rho_A(t)||\rho_T)+T\,D(\rho_B(t)||\rho_T) ~,
\end{align}
since $\Delta S_{A\cup B}(t)=\Delta S_{\text{total}}(t)=0$ due to unitary evolution of the system-reservoir combination as a whole. In the early time regime where the relative entropies are very small, the combined energy change in $A$ and $B$, equal to the work needed at the moment of a decoupling quench, is then equal to the temperature times the mutual information. This was first pointed out in \cite{Groisman_2005}, where it was shown that the minimum amount of noise needed to decorrelate two systems equals the mutual information. By Landauer's principle this is then also the minimal amount of work. Note, however, that the energy increase here is not directly related to decorrelation between $A$ and $B$.}
\begin{align}
I(A:B,t)=S_{\text{vN},A}(t)+S_{\text{vN},B}(t)-S_{\text{vN},A\cup B}(t)~,
\end{align}
where $S_{\text{vN},A\cup B} = - \mathrm{Tr}_{A,B} \, \rho_{A\cup B} \ln \rho_{A\cup B}$ with $\rho_{A\cup B}$ being the density matrix of the full system.
It displays two qualitatively distinct regimes: an
initial polynomial increase followed by an exponentially decaying approach to equilibrium.
Qualitatively, the early time $(t<t_{m})$ behaviour of the mutual information resembles the results reported in \cite{Deffner_2020,Deffner_2021}, where mutual information was used as a measure of quantum scrambling preferable to the OTOC. In particular, these articles prove that $I(A:B)$ bounds the OTOC from above. This supports our deduction above that the initial energy increase is caused by quantum {correlation- and/or} entanglement-growth and scrambling.
Note that the OTOC of operators between two quenched quantum dots depends on the initial state and the interaction between the two dots, hence the early time polynomial increase in our setup. This should not be confused with the exponential growth of the OTOC within a single SYK dot, which is driven by strong entanglement.
The articles \cite{Deffner_2020,Deffner_2021} also emphasize the role of decoherence in addition to scrambling. It would be interesting to dissect and analyze their interplay further
but we leave this for the future.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{FigsNew/DisAvg_IAB_t_SYK_Ns10Js1Nb10Jb1fs1fb1V01_Ts05_Tb01_qs4qb4_t-1_80d.pdf}
\caption{Growth of the mutual information between subsystems A and B.}
\label{fig:mutual_information}
\end{figure}
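A minimal stand-alone illustration of this definition, using two qubits in a Bell state rather than the SYK dots, exhibits the purely quantum factor of two in the mutual information:

```python
import math

def vn_entropy(eigs):
    """Von Neumann entropy from density-matrix eigenvalues."""
    return -sum(x * math.log(x) for x in eigs if x > 0)

# Bell state (|00> + |11>)/sqrt(2): the joint state is pure,
# so the full density matrix has eigenvalues (1, 0, 0, 0).
S_AB = vn_entropy([1.0])

# Each reduced single-qubit state is maximally mixed: eigenvalues (1/2, 1/2).
S_A = vn_entropy([0.5, 0.5])
S_B = vn_entropy([0.5, 0.5])

I_AB = S_A + S_B - S_AB
# I(A:B) = 2 ln 2, twice the maximum ln 2 attainable by purely
# classical correlations between two bits.
assert abs(I_AB - 2 * math.log(2)) < 1e-12
```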
\section{The transition from quantum to classical cooling}
At late times after the quench the system behaves fully as expected, in that the energy of the hotter system decreases exponentially until it equilibrates.
Given that the initial rise of energy is controlled by the rise in the entanglement-driven von Neumann entropy, there are two clear regimes: this initial rise and the late time relaxation (Fig.~\ref{fig:qu-vs-class}). For the specific case of quenched cooling of two SYK dots, one can use the fact that large $N$ SYK is exactly solvable to make analytic estimates for both these regimes, as well as for the intermediate regime and for the long time hydrodynamic tails which eventually change the relaxation to equilibrium from exponential to power law \cite{Almheiri2019Universal}.
\begin{figure}[t!]
\includegraphics[width=0.6\textwidth]{FigsNew/DisAvg_EA_DA_t_QuantumClassical_VerLine_SYK_Ns10Js1Nb10Jb1fs1fb1V01_Ts05_Tb01_qs4qb4_t-1_80d.pdf}
\caption{The generic contact quench is characterized by an early time quantum scrambling dominated regime (red) that transitions to a regime exhibiting conventional classical relaxation (green). The transitions between these regimes are not sharp, but are roughly indicated by the top of the initial energy bump and the saturation of the relative entropy, where the final density matrix has become approximately thermal.
}
\label{fig:qu-vs-class}
\end{figure}
Here we ask a different question. Having argued that the
initial rise is generically controlled by the rising
quantum correlation
contribution to the von Neumann entropy, under what circumstances does the expected classical physics emerge, where heat immediately flows from hot to cold? The quantum
correlation- and/or
entanglement-growth is always present (except when the full system is purely classical, i.e., when all the terms in the full Hamiltonian, including the coupling term, commute with each other). Classical behavior can therefore only emerge in circumstances where the ``classical''
relaxation overwhelms the quantum growth. Or more precisely, knowing that
\begin{align}
\Delta E_A(t) \geq T_A\Delta S_{\text{vN},A}(t), \nonumber
\end{align}
this transition can only happen if the ``classical'' thermal contribution to the von Neumann entropy dominates over the entanglement contribution already at the earliest possible time.
From the atomistic statistical mechanics underpinning of classical thermodynamics we know that this must happen when we have a theory of well-defined particles with suppressed quantum correlations. This should be the case at high temperatures (weak coupling) and low densities.
However, when we study the high $T$ ($T_A, T_B \gg J^2$ and $T_A \gg T_B$) regime in quenched cooling of two SYK$_4$ dots, this disappearance of the initial rise and a transition to immediate classical energy flow from hot to cold is not observed.
This is so even when we extrapolate our finite size exact diagonalization results to the thermodynamic limit $(N \rightarrow \infty)$ {(with the assumption that the finite $N$ studies do capture the appropriate large $N$ behavior)}. Fig.~\ref{fig:SYKbumpheight_log} shows the height of the energy bump $E_{m}=E_{max}-E(t=0)$ per particle $(E_{m}/N)$ in the Majorana SYK$_4$ model, directly before it starts to decrease, as a function of the temperature $T_A$. Any finite $N$ system will always contain quantum signatures, and classical behavior need only emerge in the thermodynamic limit. The numerics directly shows that $E_m$ has a leading overall scaling with $N$.
Dividing this overall scaling out, a rough extrapolation to $N=\infty$ nevertheless shows that a positive energy bump remains.\footnote{This turns out to also be true for SYK$_2$ models. Though within the random ensemble of SYK$_2$ couplings, there are empirically always realizations for which the energy $E_A$ does decrease instantaneously.
}
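The rough extrapolation referred to here amounts to a least-squares fit of $E_m/N$ against $1/N$, with the intercept at $1/N = 0$ giving the thermodynamic-limit estimate $a$. A stdlib-Python sketch with made-up illustrative values (not the actual SYK data of the figures):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Bump height per particle at several system sizes (illustrative numbers).
Ns = [6, 8, 10, 12]
Em_per_N = [0.031, 0.029, 0.028, 0.0275]

a, b = fit_line([1.0 / N for N in Ns], Em_per_N)
# The intercept a estimates E_m/N as N -> infinity;
# a > 0 signals that the bump survives in the thermodynamic limit.
assert a > 0
```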
\begin{figure}[t!]
\includegraphics[width=0.49\textwidth]{FigsNew/DisAvg_Em_Logbeta_highTInset_SYK_Ns10Js1Nb10Jb1fs1fb1V01_qs4qb4_NR25vsNR100_.pdf}
\includegraphics[width=0.49\textwidth]{FigsNew/DisAvg_tm_Logbeta_highTInset_SYK_Ns10Js1Nb10Jb1fs1fb1V01_qs4qb4_NR25vsNR100_.pdf}
\\[.1in]
\includegraphics[width=1\textwidth]{FigsNew/Em_N_SYK_Ns6_8_10_12Js1qs4Ta05_10_50Nb6_8_10_12Jb1qb4Tb01_01_01_V01.pdf}
\caption{Quenched cooling of two SYK$_4$ dots. {\bf Top:} Height $E_m$ of the energy bump (left) and time $t_m$ of the bump (right) for various initial temperatures $T_A=1/\beta_A$.
{\bf Bottom:} Height $E_m$ of the energy bump roughly extrapolated to larger $N$ for two different initial temperatures $\beta_A$. The height stays finite in this thermodynamic limit, indicated by $a>0$.
Combining the top and the bottom, the
initial rise in the hotter system energy $E_A$ seems to persist for any finite $T_A$ and infinite $N$.
}
\label{fig:SYKbumpheight_log}
\end{figure}
\begin{figure}
\includegraphics[width=0.49\textwidth]{FigsNew/Em_Logbeta_CA_T01_310_highTInset_TIsing_Ns6_gs0_1_1_hs05_0_01_Nb6_gb0_1_1_hb05_0_005_V01.pdf}
\includegraphics[width=0.49\textwidth]{FigsNew/tm_Logbeta_CA_T01_310_highTInset_TIsing_Ns6_gs0_1_1_hs05_0_01_Nb6_gb0_1_1_hb05_0_005_V01.pdf}
\\[.1in]
\includegraphics[width=1\textwidth]{FigsNew/Em_N_CA_TIsing_Ns4_5_6_7gs1_1hs0_005Ta05_50Nb4_5_6_7gb1_1hb0_005Tb01_01_V01.pdf}
\caption{Quenched cooling in two Ising half lines.
{\bf Top:} Height $E_m$ of the energy bump (left) and time $t_m$ of the bump (right) for various parameter choices. {\bf Bottom:} Height $E_m$ of the energy bump extrapolated to larger $N$ for various initial temperatures $T_A=1/\beta_A$. For each initial temperature there is a finite extrapolated value of $N$ for which the bump disappears $(a\leq 0)$ and the system will cool instantaneously upon contact. The higher the initial temperature, the lower this value of $N$.}
\label{fig:IsingBumpheight}
\end{figure}
To try to find the crossover to expected classical behavior where the energy rise in the hot system is absent, we change
the quenched cooling set-up from two SYK quantum dots to two mixed field Ising half-lines Eq. \eqref{eq:H_Ising} with a tunneling interaction at the end point of each line Eq.\eqref{eq:H_Ising_Interaction}.
Both at the free point $g=0,h=0$ \cite{Doyon:2014qsa} and at the conformal fixed point $g=1, h=0$ in the continuum (thermodynamic) limit, one can use conformal field theory techniques to study this type of quenched cooling \cite{Bernard:2012je,Bernard:2013aru,Bhaseen:2013ypa}. There one indeed finds that there is no initial energy rise; instead the energy starts to flow instantaneously from hot to cold.
As is well known by now, in the regime $h=0$
the late time behavior of the two subsystems, if isolated, is controlled by the large number of conserved charges and an associated generalized hydrodynamical relaxation towards a generalized Gibbs ensemble \cite{DeLuca:2013laa,2020GHDLectures}. The presence of the coupling term $\lambda$ makes the full system not integrable.
Indeed for the case $g=0$ {($h\neq 0$)} there is for any system size an immediate energy decrease in the hot subsystem, as shown in Fig. \ref{fig:IsingBumpheight} (top). This case is classical with only a small quantum tunneling between the two subsystems.
For generic values of $g$ and $h$, on the other hand, there is an initial rise in energy in accordance with the universal relation Eq.\eqref{eq:1}. The height of the energy bump $(E_m)$ is now independent of $N$, due to the {more local point-like interaction compared to the SYK non-local all-to-all tunneling.}
This suggests that the bump energy per particle $(E_m/N)$ will vanish in the thermodynamic limit, matching our classical intuition. Beyond such a thermodynamic vanishing, however, we should expect that finite size systems also exist where semi-classical hot-to-cold energy dynamics overwhelms the information-driven gain at short times.
Indeed, for a fixed temperature we can estimate where the bump disappears by extrapolating $E_{m}/N$ to large $N$. We then see the anticipated disappearance of the bump at a fixed finite temperature at a finite value of $N$, restoring our classical intuition (Fig.~\ref{fig:IsingBumpheight}).
An explicit finite $N$ example is given in Fig.~\ref{fig:fixedptIsing}.
This finite $N$ example shows that it is not simply the locality of the interaction, and thus its non-extensiveness in the thermodynamic limit, that causes the bump to vanish at higher temperatures.
\begin{figure}[!t]
\includegraphics[width=0.5\textwidth]{FigsNew/EA_t_Transition_TIsing_Ns6gs_1_hs_005Nb6gb_1_hb_005_V01Ts72_Ts80_Tb01_Tb01_t-0d0002_0d02.pdf}
\caption{Quenched cooling in two Ising half lines. For $T<T_c\simeq 77.845 J$
one still observes the
initial rise in the hotter system $A$, but for $T>T_c$ one transitions to a regime where classical intuition is restored and the system cools
instantaneously upon contact.}
\label{fig:fixedptIsing}
\end{figure}
The most interesting case is the conformal point of the Ising model (Fig.~\ref{fig:IsingBumpheight}) (see also \cite{Kormos_2017}). At exactly $g=1, h=0$ the bump only disappears upon extrapolation to the continuum limit, similar to the SYK$_4$ results. This is still consistent with the earlier results on quenched cooling in conformal systems \cite{Bernard:2012je,Bernard:2013aru,Bhaseen:2013ypa}: the absence of a bump found there relies on conformal symmetry, which is only a true symmetry in the continuum limit.
At the same time, for any finite size quantum system at low $T$, there always appears to be a small but non-zero counterintuitive initial rise. The bump is a correlation-driven effect, as a simple ballistic collision model based on the Boltzmann equation will never have an initial energy rise in the hot system \cite{Doyon:2014qsa}.\footnote{Perhaps the easiest way to see this is to realize that the quenched cooling protocol is the quantum version of the Riemann problem in hydrodynamics. In hydrodynamics one assumes local equilibrium and thus an absence of correlations between different spatial points at distances larger than the local mean free path.} The correlation can still be either quantum or classical statistical. In the latter case, this classical statistical two-particle correlation (the two-particle distribution function) vanishes in the thermodynamic limit, in accordance with the assumption of molecular chaos.
In summary, classical thermodynamics --- or rather hydrodynamics as we are studying time-dependent processes --- emerges in the quasi-particle (high temperature low density) limit with a non-extensive interaction between system and reservoir and after taking the thermodynamic limit.
The converse is that in quantum systems the initial rise in energy in the hot system undergoing quenched cooling is robust and generic, though not required, and universally explained by Eq.\eqref{eq:1a}.
\section{Conclusion}
In this manuscript we have analyzed the origins of the observed counter-intuitive early time energy increase in hotter systems {quench-coupled} to a cooler reservoir in quantum simulations.
Our numerical study of Majorana SYK$_4$, using exact diagonalization, demonstrates that the early time energy behaviour is proportional to the increase of the von Neumann entropy and is not related to a thermal flux from the cold to the hot system, {underscoring} the quantum nature of this phenomenon.
The energy increase is counterbalanced by the negative interaction potential (the expectation value of the tunneling term in the Hamiltonian). In the setup here, the coupling quench does not supply energy to the system and the total energy is conserved. The same potential sets the amount of work needed to decouple the systems at a given later time.
This peculiar phenomenon is well explained by the quantum {non-equilibrium extension} of the first law of thermodynamics Eq.\eqref{eq:1a} where the relative entropy $D(\rho(t)||\rho_T)$ plays a crucial role. Starting from a thermal state $D(\rho(t=0)||\rho_T)=0$ and using the positive semi-definiteness $D(\rho(t)||\rho_T)\geq 0$, the von Neumann entropy, scaled by the initial temperature, then sets a lower bound on the energy change in each subsystem \eqref{eq:1}. This links the observed energy increase even in the hotter subsystem to an increase of the von Neumann entropy. Moreover, at sufficiently early times the change of the relative entropy is negligible compared to the energy, which has two interesting consequences. Firstly, the early time evolution of the energy is almost directly proportional to the von Neumann entropy as we emphasized in our earlier paper \cite{gnezdilov2021information}; this provides a way to measure (dynamical) entanglement between two subsystems.\footnote{As the relative entropy is a measure of how distinguishable two states are, extremely small relative entropy means that at early times the subsystem is nearly indistinguishable from its initial thermal state, implying that the energy increase is not related to a temperature rise, contrary to what was suggested in other papers {\cite{Zhang2019Evaporation,Almheiri2019Universal}}.}
{Secondly, it proves that the initial thermal state is not instantaneously destroyed, and hence that the initial energy rise is not related to a temperature increase.}
The universality of this bound gives rise to an even more puzzling question: Why is such an energy increase not commonly encountered in our daily life? The reason lies in the quantum nature of this phenomenon. We show that at high temperatures in weakly interacting quasi-particle systems the height of the bump is suppressed and the time it crests gets very short. In the thermodynamic limit it vanishes altogether, making it essentially {unnoticeable at everyday macroscopic scales.}
As our results for SYK and the conformal point of the mixed field Ising model show, the more quantum mechanical the system is the closer one must push to the continuum quasiparticle limit for this bump to disappear and classical intuition to be restored. By extrapolation of our numerical simulation this is only ever possible to achieve in the strict thermodynamic limit.
This energy increase of the hotter system defies our intuition and understanding of classical thermodynamics but, as demonstrated here, it is well in accord with the laws of quantum thermodynamics.
There are three notable considerations that follow. First, there has been a substantial amount of research in the past few years on the out-of-time-ordered correlation function as a probe of classical and quantum chaos resulting in information exchange, scrambling and entropy growth (see e.g. \cite{2018NatPh..14..988S}). The standard wisdom is that this information flow is separate from and faster than energy flow, because the latter is constrained by a conservation equation, as recalled for instance in \cite{Khemani:2017nda}. The result here, and in particular the inequality Eq.\eqref{eq:1}, shows that this information flow, even though it is faster, must always drag some energy with it.
Secondly, one of the motivations to study SYK quenched cooling has been the equivalence with black hole evaporation through the holographic AdS/CFT correspondence. Because the evaporation of the black hole must expose the information behind the horizon, the quench can be modeled in the black hole context by a negative energy shock wave \cite{Engelsoy:2016xyb,Almheiri:2018xdw}, which shrinks the horizon upon contact.
The result here shows that at very early times (before the shock hits the horizon in global time), there should be an interesting connection between the Ryu-Takayanagi entanglement surface encoding the von Neumann entropy and the dynamics of the energy wavefront that holographically encodes Eq.\eqref{eq:1}.
Finally, as already emphasized in \cite{gnezdilov2021information}, the inequality Eq.\eqref{eq:1} saturates in perturbation theory and can therefore be used in quenched cooling of weakly coupled systems to probe the von Neumann entropy. Moreover, this is a universal result in the short time scale regime, which is normally considered too sensitive to {peculiar details of the experimental set-up and the system} to be of interest. {It invites an experimental measurement of this universal way the von Neumann entropy determines the energy response.}
\textbf{Acknowledgements} --- We thank Jan Zaanen for discussions in the early stage of this project and we thank Sebastian Deffner and Akram Touil for discussions. This research was supported in part by the Netherlands Organization for Scientific Research/Ministry of Science and Education (NWO/OCW), by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme.
\section{Introduction}
In its most concrete form, a \emph{lens} $S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ is a pair of maps ${\textsc{Get} : S \to A}$ and ${\textsc{Put} : S \times A \to S}$. From an engineering standpoint, such a lens allows us to ``zoom in'' on $S$ to focus on a small part $A$, manipulate $A$ in some way, then ``zoom out'' and have our changes reflected in $S$~\cite{CombinatorsForBidirectionalTreeTransformations}.
So that our lenses better adhere to this intuitive idea of ``zooming in'', we often want them to satisfy some conditions known as the \emph{lens laws}:
\begin{center}
\begin{minipage}[b]{0.33333\textwidth}
\begin{center}
\[
\begin{tikzcd}
S \times A \ar[rr, "\textsc{Put}"] \ar[dr, "\pi_2", swap] && S \ar[dl, "\textsc{Get}"] \\
& A
\end{tikzcd}
\]
\hspace{0.8cm}$\textsc{Put}\textsc{Get}$
\end{center}
\end{minipage}%
\begin{minipage}[b]{0.33333\textwidth}
\begin{center}
\[
\begin{tikzcd}
S \ar[rr, "{[\mathrm{id}_S, \textsc{Get}]}"] \ar[dr, "\mathrm{id}_S", swap] && S \times A \ar[dl, "\textsc{Put}"] \\
& S
\end{tikzcd}
\]
\hspace{-0.6cm}$\textsc{Get}\textsc{Put}$
\end{center}
\end{minipage}%
\begin{minipage}[b]{0.33333\textwidth}
\begin{center}
\[
\begin{tikzcd}
S \times A \times A \ar[r, "\textsc{Put} \times A"] \ar[d, "\pi_{1, 3}", swap] & S \times A\ar[d, "\textsc{Put}"] \\
S \times A \ar[r, "\textsc{Put}", swap] & S
\end{tikzcd}
\]
\quad$\textsc{Put}\textsc{Put}$
\end{center}
\end{minipage}%
\end{center}
We call such lenses \emph{lawful}. The $\textsc{Put}\textsc{Get}$ law states that any update to $A$ is represented faithfully in $S$. The $\textsc{Get}\textsc{Put}$ law states that if $A$ is not changed then neither is $S$; and finally, the $\textsc{Put}\textsc{Put}$ law states that any update to $A$ completely overwrites previous updates.
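As a concrete sanity check, the three laws can be verified mechanically for a simple lens with $S$ a pair and $A$ its first component. The Python sketch below is purely illustrative, playing the role of $\mathbf{Set}$; it is not tied to any particular lens library:

```python
# A lens S -|-> A given concretely as a (Get, Put) pair;
# here S = pairs and A = the first component.
def get(s):
    return s[0]

def put(s, a):
    return (a, s[1])

s, a1, a2 = (1, "x"), 10, 20

# PutGet: reading back a freshly written focus returns that focus.
assert get(put(s, a1)) == a1
# GetPut: writing back what was just read changes nothing.
assert put(s, get(s)) == s
# PutPut: a second write completely overwrites the first.
assert put(put(s, a1), a2) == put(s, a2)
```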
Lenses form a category, with the composition of two lenses $(\textsc{Get}_1, \textsc{Put}_1) : T \ensuremath{\,\mathaccent\shortmid\rightarrow\,} S$ and $(\textsc{Get}_2, \textsc{Put}_2) : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ as indicated:
\begin{align*}
\textsc{Get} &: T \xrightarrow{\textsc{Get}_1} S \xrightarrow{\textsc{Get}_2} A \\
\textsc{Put} &: T \times A \xrightarrow{[\mathrm{id}_T, \textsc{Get}_1] \times A} T \times S \times A \xrightarrow{T \times \textsc{Put}_2} T \times S \xrightarrow{\textsc{Put}_1} T
\end{align*}
If the two input lenses are lawful then the composite is as well, so we find there is a subcategory of lawful lenses.
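The composite formulas transcribe directly into the same concrete setting (again an illustrative Python sketch mirroring the formulas above, not a library API):

```python
def compose(outer, inner):
    """Compose (Get1, Put1) : T -|-> S with (Get2, Put2) : S -|-> A."""
    get1, put1 = outer
    get2, put2 = inner
    # Get : T --Get1--> S --Get2--> A
    def get(t):
        return get2(get1(t))
    # Put : T x A -> T, first routing through the intermediate S.
    def put(t, a):
        return put1(t, put2(get1(t), a))
    return get, put

# T = nested pairs ((A, y), z); compose two first-component lenses
# to focus on the innermost A.
fst = (lambda s: s[0], lambda s, a: (a, s[1]))
get, put = compose(fst, fst)

t = ((1, "y"), "z")
assert get(t) == 1
assert put(t, 99) == ((99, "y"), "z")
# Lawfulness is preserved by composition, e.g. PutGet:
assert get(put(t, 42)) == 42
```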
Lenses were discovered to be just one of a hierarchy of data accessors, including prisms, setters, traversals and more. These are collectively called \emph{optics} and have been best explored in the widely used Haskell \texttt{lens}{} library: see~\cite{LensLibrary}. Each optic variant has a concrete description as a certain collection of maps, with attendant laws under which we consider them well-behaved, similar to the pair $(\textsc{Get}, \textsc{Put})$ above and the lens laws. We begin in Section~\ref{sec:optics} by defining the \emph{category of optics} for a symmetric monoidal category in a sufficiently general way to encompass almost all the optic variants in use in the wild, using lenses as a running example. The category of lenses is precisely the result of this construction when applied to a symmetric monoidal category where the tensor is given by binary product. Section~\ref{sec:lawful-optics} defines the equivalent of the lens laws for a general category of optics. Then in Section~\ref{sec:examples} we see that these generic definitions specialise correctly to the other basic varieties of optic, including the laws.
When implementing optics, library authors often use a form known as the \emph{profunctor encoding}, which at first glance is completely different to that given in Section~\ref{sec:optics}. (The Haskell \texttt{lens}{} library itself actually uses a variant called the \emph{van Laarhoven encoding}, for reasons of efficiency and backwards compatibility.) As this was being written, Milewski~\cite{ProfunctorOpticsPost} and Boisseau and Gibbons~\cite{YouNeeda} independently described the isomorphism between optics and their profunctor encoding. In Section~\ref{sec:profunctor-optics} we review this isomorphism and verify that the folklore profunctor optic laws are equivalent to lawfulness as defined here.
More recently, concrete lenses have found use in compositional game theory~\cite{CompositionalGameTheory}. The $\textsc{Get}$ function is thought of as mapping observations on the state of play to choices of what move to make. The $\textsc{Put}$ function computes the utility of the moves that the players choose. There is interest in generalising this to a probabilistic setting, but it is not yet clear what the right replacement for concrete lenses is.
Much of what is known about optics is folklore, and careful verification of some of their categorical properties has been lacking, especially when working in categories other than $\mathbf{Set}$ (or $\mathbf{Set}$-like categories such as $\mathbf{Hask}$). The aim of the present paper is to fill this gap, with the hope that a better understanding of the general structure of these categories will make it easier to generalise optics to new and exotic settings. This is particularly important with the advent of linear types in Haskell, enabling a new branch of the lens family tree, and also with the new applications to game theory.
\subsection{Contributions}
\begin{itemize}
\item A careful account of the folklore optic construction in an arbitrary symmetric monoidal category $\mathscr{C}$, which we show extends to a functor $\mathbf{Optic} : \mathbf{SymmMonCat} \to \mathbf{SymmMonCat}$ (Section~\ref{sec:optics}),
\item A universal property of the $\mathbf{Optic}$ construction as freely adding counits to a category of `dualisable morphisms' (Section~\ref{sec:teleological-categories}),
\item A definition of lawfulness for a general optic category that specialises in the correct way to known cases and allows us to derive concrete laws for new kinds of optic (Section~\ref{sec:lawful-optics}),
\item Commentary on the optic variants used most frequently in the wild (Section~\ref{sec:examples}),
\item A proof that lawfulness as defined here is equivalent to the folklore profunctor optic laws (Section~\ref{sec:profunctor-optics}).
\end{itemize}
\subsection{(Co)ends and Yoneda Reduction}
In this paper we will make frequent use of the (co)end calculus. For a comprehensive introduction to ends and coends, see~\cite{CoendCofriend}. We write $\copr_X : F(X, X) \to \int^{X \in \mathscr{M}} F(X, X)$ for the structure maps of a coend. The most important results for us regarding ends and coends are:
\begin{lemma}[Coend as coequaliser]\label{lemma:calculate-coend}
If $\mathscr{E}$ is cocomplete and $\mathscr{M}$ is small, the coend of $P : \mathscr{M}^\mathrm{op} \times \mathscr{M} \to \mathscr{E}$ can be calculated as the coequaliser in the diagram
\[
\begin{tikzcd}
\displaystyle \coprod_{M \to N} P(N, M) \ar[r,shift left=.75ex] \ar[r,shift right=.75ex] & \displaystyle\coprod_{M \in \mathscr{M}} P(M, M) \ar[r] & \displaystyle\int^{M \in \mathscr{M}} P(M, M)
\end{tikzcd}
\]
\qed
\end{lemma}
\begin{lemma}[Ninja Yoneda Lemma/Yoneda Reduction]\label{lem:yoneda-reduction}
For every functor $K : \mathscr{C}^\mathrm{op} \to \mathbf{Set}$ and $H : \mathscr{C} \to \mathbf{Set}$, we have the following natural isomorphisms:
\begin{align*}
KX &\cong \int^{C \in \mathscr{C}} KC \times \mathscr{C}(X,C) &
KX &\cong \int_{C \in \mathscr{C}} \mathbf{Set}(\mathscr{C}(C,X), KC) \\
HX &\cong \int^{C \in \mathscr{C}} HC \times \mathscr{C}(C,X) &
HX &\cong \int_{C \in \mathscr{C}} \mathbf{Set}(\mathscr{C}(X,C), HC)
\end{align*}
where the isomorphisms on the left are given by inclusion at the identity morphism $\mathrm{id}_X \in \mathscr{C}(X, X)$, and those on the right by evaluation at the identity morphism.
\qed
\end{lemma}
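As a quick illustration of how these reductions are used in practice (a standard co-Yoneda computation, not specific to optics), take $K = \mathscr{C}(-, A)$ in the first isomorphism to collapse a coend over a composable pair of morphisms:
\begin{align*}
\int^{M \in \mathscr{C}} \mathscr{C}(S, M) \times \mathscr{C}(M, A) \cong \mathscr{C}(S, A)
\end{align*}
The isomorphism sends a pair $(l, r)$ to the composite $rl$; the coend relation identifies exactly those pairs whose composites agree.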
\begin{theorem}[Fubini Theorem]
For a functor $F : \mathscr{C}^\mathrm{op} \times \mathscr{C} \times \mathscr{D}^\mathrm{op} \times \mathscr{D} \to \mathscr{E}$, there are canonical isomorphisms
\begin{align*}
\int^{C \in \mathscr{C}} \int^{D \in \mathscr{D}} F(C,C,D,D) \cong \int^{(C,D) \in \mathscr{C} \times \mathscr{D}} F(C,C,D,D) \cong \int^{D \in \mathscr{D}} \int^{C \in \mathscr{C}} F(C,C,D,D)
\end{align*}
\qed
\end{theorem}
\begin{lemma}[Mute coends]\label{lem:mute-coend}
Consider a functor $F : \mathscr{C} \to \mathscr{E}$ as a functor $\mathscr{C}^\mathrm{op} \times \mathscr{C} \to \mathscr{E}$ that ignores its contravariant argument. Then \[ \int^{C \in \mathscr{C}} F(C) \cong \colim F. \] \qed
\end{lemma}
\section{Optics}\label{sec:optics}
We begin by defining the category of optics for a symmetric monoidal category. This category was first defined in~\cite[Section 6]{Doubles} as the `double' of a monoidal category. There it was used for a completely different purpose---to investigate the relationship between Tambara modules and the `center' of a monoidal category. Our definition is almost identical, the only differences being that we have flipped the direction of the morphisms to match the existing work on lenses and restricted our attention to the unenriched setting.
Our definition of optic has as domain and codomain \emph{pairs} of objects of $\mathscr{C}$, one of which behaves covariantly and the other contravariantly. For example, our lenses will be pairs of maps $\textsc{Get} : S \to A$ and $\textsc{Put} : S \times A' \to S'$. This generality is important for the applications to game theory, and in fact helps in calculations by making the covariant and contravariant positions more difficult to confuse. Readers more familiar with lenses should ignore the primes.
In this section we work with a fixed symmetric monoidal category $(\mathscr{C}, \otimes, I)$, with associator $\alpha$ and unitors $\lambda$ and $\rho$. To avoid getting lost in the notation we will use the standard cheat of omitting associativity morphisms and trust that the dedicated reader could insert them everywhere they are needed.
\begin{definition}
Given two pairs of objects of $\mathscr{C}$, say $(S, S')$ and $(A, A')$, an \emph{optic} $p : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ is an element of the set
\begin{align*}
\mathbf{Optic}_\mathscr{C}((S, S'), (A, A')) := \int^{M \in \mathscr{C}} \mathscr{C}(S, M \otimes A) \times \mathscr{C}(M \otimes A', S')
\end{align*}
\end{definition}
Because this coend takes place in $\mathbf{Set}$, we can use Lemma~\ref{lemma:calculate-coend} to describe $\mathbf{Optic}_\mathscr{C}((S, S'), (A, A'))$ explicitly. It is the set of pairs $(l, r)$, where $l : S \to M \otimes A$ and $r : M \otimes A' \to S'$, quotiented by the equivalence relation generated by relations of the form
\begin{align*}
((f \otimes A) l, r) \sim (l, r (f \otimes A'))
\end{align*}
for any $l : S \to M \otimes A$, $r : N \otimes A' \to S'$ and $f : M \to N$.
For a pair of maps $l : S \to M \otimes A$ and $r : M \otimes A' \to S'$, we write $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ for their image in $\mathbf{Optic}_\mathscr{C}((S, S'), (A, A'))$, and say that the object $M$ is the \emph{residual} for this representative. Optics will always be written with a crossed arrow $\ensuremath{\,\mathaccent\shortmid\rightarrow\,}$ to distinguish them from morphisms of $\mathscr{C}$.
The residual $M$ should be thought of as a kind of `scratch space'; information from $S$ that we need to remember to construct $S'$. The quotienting imposed by the coend means we cannot inspect this temporary information, indeed, given an optic $S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ there is not even a canonical choice for the object $M$ in general.
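For ordinary lenses ($\mathscr{C} = \mathbf{Set}$ with $\otimes = \times$), one canonical representative takes the residual to be $M = S$ itself: $l = [\mathrm{id}_S, \textsc{Get}] : S \to S \times A$ and $r = \textsc{Put} : S \times A' \to S'$. A hypothetical Python sketch of this representative:

```python
# A concrete lens as an optic representative <l | r> with residual M = S:
# l pairs the source with the focus, r is Put itself.
def get(s):
    return s[0]

def put(s, a):
    return (a, s[1])

def l(s):
    """S -> M x A, with M = S."""
    return (s, get(s))

def r(ma):
    """M x A' -> S'."""
    m, a = ma
    return put(m, a)

s = (1, "y")
m, a = l(s)
# Discarding the residual after l recovers Get; feeding a new focus
# back through r recovers Put.
assert a == get(s)
assert r((m, 99)) == put(s, 99)
```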
Elements of $\mathbf{Optic}_\mathscr{C}((S, S'), (A, A'))$ have an appealing
interpretation as string diagrams with a ``hole'' missing. We draw the
pair $\rep{l}{r}$ as
\begin{center}
\input{diagrams/generic-optic.tikz}
\end{center}
reading left to right, so the portion of the diagram to the left of the line represents $l$ and the right portion $r$. The relation expressed by the coend can be drawn graphically as:
\begin{center}
\input{diagrams/coend-relation-left.tikz}
\hspace{0.7cm} \raisebox{1.35cm}{$\sim$} \hspace{1cm}
\input{diagrams/coend-relation-right.tikz}
\end{center}
We will therefore omit the vertical cut between $l$ and $r$ in most subsequent diagrams; any choice yields a representative of the same optic.
A common use of the coend relation is to introduce or cancel isomorphisms. Given $l : S \to M \otimes A$ and $r : M \otimes A' \to S'$, for any isomorphism $f : M \to N$ we have
\begin{align*}
\rep{l}{r} = \rep{(f^{-1} \otimes A)(f \otimes A)l}{r} = \rep{(f \otimes A)l}{r(f^{-1} \otimes A')}
\end{align*}
Diagrammatically, this is the equality
\begin{center}
\input{diagrams/generic-optic.tikz}
\hspace{0.8cm} \raisebox{1.5cm}{$=$} \hspace{1cm}
\input{diagrams/generic-optic-with-iso.tikz}
\end{center}
\begin{example}
~\begin{enumerate}[(1)]
\item For any three objects $M, A, A' \in \mathscr{C}$, there is the \emph{tautological} optic \[t_{M,A,A'} : (M \otimes A, M \otimes A') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')\] given by $\rep{\mathrm{id}_{M \otimes A}}{\mathrm{id}_{M \otimes A'}}$.
This would be drawn as follows:
\begin{center}
\input{diagrams/tautological-optic.tikz}
\end{center}
\item We also have the \emph{identity} optic $\mathrm{id}_{(S, S')} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (S, S')$, given by $\rep{\lambda^{-1}_S}{\lambda_{S'}}$, where $\lambda_S : I \otimes S \to S$ is the left unitor for $S$ and similarly for $S'$.
The identity optic is drawn as
\begin{center}
\input{diagrams/identity-optic-full.tikz}
\end{center}
This dashed line above the diagram represents the unit object. It is common in string diagrams to omit unitors and the unit object unless they are necessary to make sense of the diagram. We therefore prefer to draw the identity morphism as:
\begin{center}
\input{diagrams/identity-optic.tikz}
\end{center}
\end{enumerate}
\end{example}
Optics compose as follows. The easiest interpretation is graphical: composition corresponds to substituting the first optic for the hole of the second:
\begin{center}
\input{diagrams/generic-optic-noline.tikz}
\hspace{0.9cm} \raisebox{1.5cm}{$\circ$} \hspace{1cm}
\input{diagrams/generic-optic2-noline.tikz} \\
\raisebox{1.5cm}{$:=$}\qquad
\input{diagrams/optic-composition.tikz}
\end{center}
More formally, we wish to construct a map
\begin{align*}
&\left(\int^{M \in \mathscr{C}} \mathscr{C}(S, M \otimes A) \times \mathscr{C}(M \otimes A', S')\right) \times \left(\int^{N \in \mathscr{C}} \mathscr{C}(R, N \otimes S) \times \mathscr{C}(N \otimes S', R')\right) \\
&\quad \to \int^{M \in \mathscr{C}} \mathscr{C}(R, M \otimes A) \times \mathscr{C}(M \otimes A', R').
\end{align*}
The product in $\mathbf{Set}$ preserves colimits in each variable, so in particular coends. Using this fact and the Fubini theorem for coends, the domain is isomorphic to
\begin{align*}
\int^{(M, N) \in \mathscr{C} \times \mathscr{C}} \mathscr{C}(S, M \otimes A) \times \mathscr{C}(M \otimes A', S') \times \mathscr{C}(R, N \otimes S) \times \mathscr{C}(N \otimes S', R').
\end{align*}
So by the universal property of coends, it suffices to construct maps
\begin{align*}
& \mathscr{C}(S, M \otimes A) \times \mathscr{C}(M \otimes A', S') \times \mathscr{C}(R, N \otimes S) \times \mathscr{C}(N \otimes S', R') \\ &
\quad \to \int^{P \in \mathscr{C}} \mathscr{C}(R, P \otimes A) \times \mathscr{C}(P \otimes A', R'),
\end{align*}
natural in $M$ and $N$. For these we use the composites
\begin{align*}
&\mathscr{C}(S, M \otimes A) \times \mathscr{C}(M \otimes A', S') \times \mathscr{C}(R, N \otimes S) \times \mathscr{C}(N \otimes S', R')\\
\to \,& \mathscr{C}(N \otimes S, N \otimes M \otimes A) \times \mathscr{C}(N \otimes M \otimes A', N \otimes S') \times \mathscr{C}(R, N \otimes S) \times \mathscr{C}(N \otimes S', R') && \text{(functoriality of $N \otimes -$)} \\
\to \,& \mathscr{C}(R, N \otimes M \otimes A) \times \mathscr{C}(N \otimes M \otimes A', R') && \text{(composition in $\mathscr{C}$)} \\
\to \,&\int^{P \in \mathscr{C}} \mathscr{C}(R, P \otimes A) \times \mathscr{C}(P \otimes A', R') && \text{($\copr_{N \otimes M}$)}
\end{align*}
Written equationally, suppose $\rep{l'}{r'} : (R, R') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (S, S')$ and $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ are optics with $M$ the residual for $\rep{l'}{r'}$. The composite $(R, R') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ is then: \[\rep{l}{r} \circ \rep{l'}{r'} := \rep{(M \otimes l)l'}{r'(M \otimes r)}.\]
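To make the composition formula concrete, the following Python sketch works in $\mathscr{C} = \mathbf{Set}$ with the cartesian product as tensor. The residual is carried around as ordinary data, so the coend quotient is left implicit; the names \texttt{Optic}, \texttt{compose}, \texttt{view} and \texttt{update} are ours, chosen for illustration rather than taken from any library.

```python
class Optic:
    """A representative <l | r> of an optic (S, S') +-> (A, A') in Set."""
    def __init__(self, l, r):
        self.l = l  # l : S -> M x A
        self.r = r  # r : M x A' -> S'

def compose(outer, inner):
    """<l | r> o <l' | r'> = <(M (x) l) l' | r' (M (x) r)>.

    `inner` is the optic (R, R') +-> (S, S') applied first; the composite
    residual is the product of the two residuals."""
    def l(x):
        n, s = inner.l(x)      # split off the inner residual n
        m, a = outer.l(s)      # then the outer residual m
        return (n, m), a
    def r(arg):
        (n, m), a_ = arg
        return inner.r((n, outer.r((m, a_))))
    return Optic(l, r)

def view(o, s):
    return o.l(s)[1]           # discard the residual

def update(o, a_, s):
    m, _ = o.l(s)              # keep the residual, discard the focus
    return o.r((m, a_))

# Example: the optic focusing the first component of a pair; the second
# component is the residual.
first = Optic(lambda s: (s[1], s[0]), lambda ma: (ma[1], ma[0]))
```

For instance, \texttt{compose(first, first)} focuses the first component of a nested pair: \texttt{view(compose(first, first), ((1, 2), 3))} yields \texttt{1}.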
\begin{proposition}\label{prop:optic-is-cat}
The above data form a category $\mathbf{Optic}_\mathscr{C}$.
\end{proposition}
\begin{proof}
In~\cite[Section 6]{Doubles} this is proven abstractly by exhibiting this category as the Kleisli category for a monad in the bicategory $\mathbf{Prof}$. We prefer a direct proof.
Suppose we have representatives of three optics
\begin{align*}
\rep{l_1}{r_1} &: (R, R') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (S, S') \\
\rep{l_2}{r_2} &: (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A') \\
\rep{l_3}{r_3} &: (A, A') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (B, B'),
\end{align*}
that have residuals $M$, $N$ and $P$ respectively. We must choose these representatives simultaneously but, as in the definition of composition, this is allowed by the Fubini theorem. Then:
\begin{align*}
(\rep{l_3}{r_3} \circ \rep{l_2}{r_2}) \circ \rep{l_1}{r_1}
&= \rep{(N \otimes l_3)l_2}{r_2(N \otimes r_3)} \circ \rep{l_1}{r_1} \\
&= \rep{(M \otimes ((N \otimes l_3)l_2))l_1}{r_1(M \otimes (r_2(N \otimes r_3)))} \\
&= \rep{(M \otimes N \otimes l_3)(M \otimes l_2)l_1}{r_1(M \otimes r_2)(M \otimes N \otimes r_3)} \\
&= \rep{l_3}{r_3} \circ (\rep{(M \otimes l_2)l_1}{r_1(M \otimes r_2)}) \\
&= \rep{l_3}{r_3} \circ (\rep{l_2}{r_2} \circ \rep{l_1}{r_1})
\end{align*}
For the unit laws, suppose we have $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ with residual $M$. We calculate:
\begin{align*}
\mathrm{id}_{A, A'} \circ \rep{l}{r}
&= \rep{\lambda^{-1}_A}{\lambda_{A'}} \circ \rep{l}{r} \\
&= \rep{(M \otimes \lambda^{-1}_A) l}{r (M \otimes \lambda_{A'})} \\
&= \rep{(\rho^{-1}_M \otimes A) l}{r (\rho_M \otimes A')} \\
&= \rep{l}{r (\rho_M \otimes A') (\rho^{-1}_M \otimes A')} \\
&= \rep{l}{r} \\
\rep{l}{r} \circ \mathrm{id}_{S, S'}
&= \rep{l}{r} \circ \rep{\lambda^{-1}_S}{\lambda_{S'}} \\
&= \rep{(I \otimes l)\lambda^{-1}_S}{\lambda_{S'} (I \otimes r)} \\
&= \rep{(\lambda^{-1}_M \otimes A)l}{r (\lambda_{M} \otimes A')} \\
&= \rep{l}{r (\lambda_{M} \otimes A')(\lambda^{-1}_M \otimes A')} \\
&= \rep{l}{r}
\end{align*}
In both cases we have used the coend relation to cancel an isomorphism appearing on both sides of an optic.
\end{proof}
Note that the homsets of $\mathbf{Optic}_\mathscr{C}$ are given by a coend indexed by a possibly large category. If $\mathscr{C}$ is small then these coends always exist, but if $\mathscr{C}$ is not small their existence is not guaranteed by the cocompleteness of $\mathbf{Set}$. Because of this we should be careful to only discuss optic categories where we know that the coends exist by some other means, e.g., by exhibiting an isomorphism of $\mathbf{Optic}_\mathscr{C}((S, S'), (A, A'))$ with a set. For all of the examples we give later we provide such an isomorphism.
\begin{proposition}
If $\mathscr{C}$ is a category with finite products, then $\mathbf{Lens} \mathrel{\vcentcolon=} \mathbf{Optic}_\mathscr{C}$ is the category of lenses described in the introduction (so long as we restrict to optics of shape $(S, S) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A)$).
\end{proposition}
\begin{proof}
We see that optics correspond to pairs of $\textsc{Get}$ and $\textsc{Put}$ functions via the following isomorphisms:
\begin{align*}
\mathbf{Lens}((S, S'), (A, A'))
&= \int^{M \in \mathscr{C}} \mathscr{C}(S, M \times A) \times \mathscr{C}(M \times A', S') \\
&\cong \int^{M \in \mathscr{C}} \mathscr{C}(S, M) \times \mathscr{C}(S, A) \times \mathscr{C}(M \times A', S') && \text{(universal property of product)} \\
&\cong \mathscr{C}(S, A) \times \mathscr{C}(S \times A', S') && \text{(Yoneda reduction)}
\end{align*}
This last step deserves some explanation. We are applying the isomorphism $KX \cong \int^{C \in \mathscr{C}} \mathscr{C}(X,C) \times KC$ of Lemma~\ref{lem:yoneda-reduction} to the case $X = S$ and $K = \mathscr{C}(S, A) \times \mathscr{C}(- \times A', S')$.
Explicitly, the isomorphism states that, given an optic $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$, the corresponding concrete lens is the pair $\textsc{Get} : S \to A$ and $\textsc{Put} : S \times A' \to S'$, where $\textsc{Get} = \pi_2 l$ and $\textsc{Put} = r (\pi_1 l \times A')$. In the other direction, given $(\textsc{Get}, \textsc{Put})$, the corresponding optic is represented by $\rep{[\mathrm{id}_S, \textsc{Get}]}{\textsc{Put}}$.
We leave it to the reader to verify that composition in $\mathbf{Lens}$ corresponds to ordinary composition of concrete lenses by using this isomorphism in both directions. (Of course, there is only one sensible way to compose such a collection of morphisms!)
\end{proof}
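In $\mathbf{Set}$ the two directions of this isomorphism can be transcribed directly; the Python sketch below is a transliteration of the formulas in the proof, with the illustrative names \texttt{to\_lens} and \texttt{to\_optic} (ours, not from any library).

```python
class Optic:
    """A representative <l | r> of an optic (S, S') +-> (A, A') in Set."""
    def __init__(self, l, r):
        self.l = l  # l : S -> M x A
        self.r = r  # r : M x A' -> S'

def to_lens(o):
    """Get = pi_2 . l and Put = r . (pi_1 l x A')."""
    get = lambda s: o.l(s)[1]
    put = lambda s, a_: o.r((o.l(s)[0], a_))
    return get, put

def to_optic(get, put):
    """The residual is chosen to be S itself: the representative
    is <(id_S, Get) | Put>."""
    return Optic(lambda s: (s, get(s)), lambda ma: put(ma[0], ma[1]))
```

Round-tripping a concrete lens through \texttt{to\_optic} and \texttt{to\_lens} recovers its behaviour on every input, which is the content of the Yoneda reduction step above.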
\begin{proposition}\label{prop:iota-functor}
There is a functor $\iota : \mathscr{C} \times \mathscr{C}^\mathrm{op} \to \mathbf{Optic}_\mathscr{C}$, which on objects is given by $\iota(S, S') = (S, S')$ and on morphisms $(f, g) : (S, S') \to (A, A')$ by $\iota(f, g) = \rep{\lambda_A^{-1} f}{g \lambda_{A'}}$.
\end{proposition}
\begin{proof}
Graphically, this is:
\begin{center}
\input{diagrams/iota.tikz}
\end{center}
This preserves identities, as the identity on an object $(S, S')$ in $\mathbf{Optic}_\mathscr{C}$ is defined to be exactly $\rep{\lambda^{-1}_S}{\lambda_{S'}}$. To check functoriality, suppose we have $(f, g) : (S, S') \to (A, A')$ and $(f', g') : (A, A') \to (B, B')$ in $\mathscr{C} \times \mathscr{C}^\mathrm{op}$. Then:
\begin{align*}
\iota(f', g') \circ \iota(f, g)
&= \rep{\lambda^{-1}_B f'}{g' \lambda_{B'}} \circ \rep{\lambda^{-1}_A f}{g \lambda_{A'}} \\
&= \rep{(I\otimes (\lambda^{-1}_B f'))\lambda^{-1}_A f}{g \lambda_{A'} (I\otimes (g' \lambda_{B'}))} && \text{(By definition of $\circ$)}\\
&= \rep{(I \otimes \lambda^{-1}_B) (I \otimes f')\lambda^{-1}_A f}{g \lambda_{A'} (I \otimes g')(I\otimes \lambda_{B'})} && \text{(Functoriality of $I \otimes -$)}\\
&= \rep{(I\otimes \lambda^{-1}_B) \lambda^{-1}_B f' f}{g g' \lambda_{B'} (I\otimes \lambda_{B'})} && \text{(Naturality of $\lambda$)}\\
&= \rep{(\lambda^{-1}_I \otimes B) \lambda^{-1}_B f' f}{g g' \lambda_{B'} (\lambda_I \otimes B')} && \text{(Unitality of action)} \\
&= \rep{\lambda^{-1}_B f' f}{g g' \lambda_{B'} (\lambda_I \otimes B') (\lambda^{-1}_I \otimes B')} && \text{(Coend relation)} \\
&= \rep{\lambda^{-1}_B f'f}{g g' \lambda_{B'}} \\
&= \iota(f'f, gg')
\end{align*}
Graphically, there is not much to do:
\begin{center}
\input{diagrams/iota-functorial-left.tikz}
\qquad \raisebox{0.3cm}{$=$} \qquad
\input{diagrams/iota-functorial-right.tikz}
\end{center}
\end{proof}
There are some other easy-to-construct optics; specifically, optics out of and into the monoidal unit $(I, I)$. Such maps in a monoidal category are sometimes called states and costates~\cite{CategoricalQuantumMechanics}.
\begin{proposition}\label{prop:costates}
The set of costates $(S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (I, I)$ is isomorphic to $\mathscr{C}(S, S')$.
\end{proposition}
\begin{proof}
\begin{align*}
\mathbf{Optic}_\mathscr{C}((S, S'), (I, I))
&= \int^{M \in \mathscr{C}} \mathscr{C}(S, M \otimes I) \times \mathscr{C}(M \otimes I, S') \\
&\cong \int^{M \in \mathscr{C}} \mathscr{C}(S, M) \times \mathscr{C}(M, S') \\
&\cong \mathscr{C}(S, S')
\end{align*}
by Yoneda reduction, so a costate $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (I, I)$ corresponds to the morphism $rl : S \to S'$, and a morphism $f : S \to S'$ corresponds to the costate $\rep{\rho_S^{-1}}{f \rho_S} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (I, I)$.
\end{proof}
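In $(\mathbf{Set}, \times)$, where the unit is a one-element set, this isomorphism says a costate is nothing more than a function $S \to S'$; a brief Python sketch (with names of our own choosing):

```python
class Optic:
    """A representative <l | r> of an optic (S, S') +-> (A, A') in Set."""
    def __init__(self, l, r):
        self.l = l  # l : S -> M x A
        self.r = r  # r : M x A' -> S'

UNIT = ()  # the monoidal unit I of (Set, x)

def from_costate(o):
    """A costate <l | r> : (S, S') +-> (I, I) corresponds to r . l."""
    return lambda s: o.r(o.l(s))

def to_costate(f):
    """f : S -> S' corresponds to <rho^{-1} | f . rho>, with residual S."""
    return Optic(lambda s: (s, UNIT), lambda mu: f(mu[0]))
```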
In particular, for any $S \in \mathscr{C}$, the identity $\mathrm{id}_S$ yields an optic $c_S = \rep{\rho_S^{-1}}{\rho_S} : (S, S) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (I, I)$ that we call the \emph{connector}:
\begin{center}
\input{diagrams/connector.tikz}
\end{center}
\begin{proposition}\label{prop:states}
Suppose the monoidal unit $I$ of $\mathscr{C}$ is terminal. Then the set of states $(I, I) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ is isomorphic to $\mathscr{C}(I, A)$.
\end{proposition}
\begin{proof}
First, note that
\begin{align*}
\mathbf{Optic}_\mathscr{C}((I,I), (A,A'))
&= \int^{M \in \mathscr{C}} \mathscr{C}(I, M \otimes A) \times \mathscr{C}(M \otimes A', I) \\
&\cong \int^{M \in \mathscr{C}} \mathscr{C}(I, M \otimes A).
\end{align*}
as $I$ is terminal. The interior of this coend is mute in the contravariant position, so the coend is equal to the colimit of the functor $\mathscr{C}(I, - \otimes A) : \mathscr{C} \to \mathbf{Set}$ by Lemma~\ref{lem:mute-coend}. But $\mathscr{C}$ has terminal object $I$, so
\begin{align*}
\int^{M \in \mathscr{C}} \mathscr{C}(I, M \otimes A)
&\cong \colim \mathscr{C}(I, - \otimes A) \\
&\cong \mathscr{C}(I, I \otimes A) \\
&\cong \mathscr{C}(I, A)
\end{align*}
Explicitly, a state $f : I \to A$ in $\mathscr{C}$ corresponds to the optic $\rep{\lambda_A^{-1} f}{!_{I \otimes A'}} : (I, I) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$, where $!_{I \otimes A'} : I \otimes A' \to I$ is the unique map.
\end{proof}
The remainder of this section comprises a proof of the following fact:
\begin{theorem}\label{thm:optic-functor}
The $\mathbf{Optic}_\mathscr{C}$ construction extends to a functor \[\mathbf{Optic} : \mathbf{SymmMonCat} \to \mathbf{SymmMonCat},\] where $\mathbf{SymmMonCat}$ denotes the (1-)category of (small) symmetric monoidal categories and strong symmetric monoidal functors.
\end{theorem}
\begin{proposition}\label{prop:change-of-action-monoidal}
A monoidal functor $F : \mathscr{C} \to \mathscr{D}$ induces a functor $\mathbf{Optic}(F) : \mathbf{Optic}_\mathscr{C} \to \mathbf{Optic}_\mathscr{D}$, given on objects by $\mathbf{Optic}(F)(S, S') = (FS, FS')$ and on morphisms $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ by
\begin{align*}
\mathbf{Optic}(F)(\rep{l}{r}) := \rep{\phi^{-1}_{M,A} (Fl)}{(Fr) \phi_{M,A'}},
\end{align*}
where $\phi_{M,A} : FM \otimes FA \to F(M \otimes A)$ and $\phi_I : I \to FI$ denote the structure maps of the monoidal functor. Graphically:
\begin{center}
\input{diagrams/induced-by-functor.tikz}
\end{center}
\end{proposition}
\begin{proof}
This preserves identities:
\begin{align*}
&\mathbf{Optic}(F)(\mathrm{id}_{(S, S')}) \\
&= \mathbf{Optic}(F)(\rep{\lambda^{-1}_S}{\lambda_{S'}}) &&\text{(Definition of $\mathrm{id}$)} \\
&= \rep{\phi^{-1}_{I,S} (F\lambda^{-1}_S)}{(F\lambda_{S'}) \phi_{I,S'}} && \text{(Definition of $\mathbf{Optic}(F)$)} \\
&= \rep{(\phi_I^{-1} \otimes FS) \phi^{-1}_{I,S} (F\lambda^{-1}_S)}{(F\lambda_{S'}) \phi_{I,S'}(\phi_I \otimes FS') } && \text{(Introducing isomorphism to both sides)} \\
&= \rep{\lambda^{-1}_{FS}}{\lambda_{FS'}} &&\text{($F$ is a monoidal functor)} \\
&= \mathrm{id}_{(FS, FS')}
\end{align*}
And given two optics $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ and $\rep{l'}{r'} : (R, R') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (S, S')$ with residuals $M$ and $M'$, it preserves composition:
\begingroup
\allowdisplaybreaks
\begin{align*}
&\mathbf{Optic}(F)(\rep{l}{r} \circ \rep{l'}{r'}) \\
&\qquad \text{(Definition of $\circ$)} \\
&= \mathbf{Optic}(F)(\rep{(M' \otimes l)l'}{r'(M' \otimes r)}) \\
&\qquad \text{(Definition of $\mathbf{Optic}(F)$)} \\
&= \rep{\phi^{-1}_{M' \otimes M,A} F((M' \otimes l)l')}{F(r'(M' \otimes r)) \phi_{M' \otimes M,A'}} \\
&\qquad \text{(Functoriality of $F$)} \\
&= \rep{\phi^{-1}_{M' \otimes M,A} F(M' \otimes l)(Fl')}{(Fr') F(M' \otimes r) \phi_{M' \otimes M,A'}} \\
&\qquad \text{(Introducing isomorphism to both sides)} \\
&= \rep{(\phi^{-1}_{M', M} \otimes FA)\phi^{-1}_{M' \otimes M,A} F(M' \otimes l)(Fl')}{(Fr') F(M' \otimes r) \phi_{M' \otimes M,A'}(\phi_{M', M} \otimes FA')} \\
&\qquad \text{(Hexagon axiom for $F$)} \\
&= \rep{(FM' \otimes \phi^{-1}_{M,A})\phi^{-1}_{M',M \otimes A}(F(M' \otimes l)) (Fl')}{(Fr') (F(M' \otimes r)) \phi_{M',M \otimes A'} (FM' \otimes \phi_{M,A'})} \\
&\qquad \text{(Naturality of $\phi$)} \\
&= \rep{(FM' \otimes \phi^{-1}_{M,A})(FM' \otimes Fl)\phi^{-1}_{M',S} (Fl')}{(Fr') \phi_{M',S'} (FM' \otimes Fr) (FM' \otimes \phi_{M,A'})} \\
&\qquad \text{(Functoriality of $\otimes$)} \\
&= \rep{(FM' \otimes \phi^{-1}_{M,A} (Fl))(\phi^{-1}_{M',S} (Fl'))}{((Fr') \phi_{M',S'})(FM' \otimes (Fr) \phi_{M,A'})} \\
&\qquad \text{(Definition of $\circ$)} \\
&= \rep{\phi^{-1}_{M,A} (Fl)}{(Fr) \phi_{M,A'}} \circ \rep{\phi^{-1}_{M',S} (Fl')}{(Fr') \phi_{M',S'}} \\
&\qquad \text{(Definition of $\mathbf{Optic}(F)$)} \\
&= \mathbf{Optic}(F)(\rep{l}{r}) \circ \mathbf{Optic}(F)(\rep{l'}{r'})
\end{align*}
\endgroup
The critical move is adding the isomorphism $\phi_{M', M}$ (tensored with $FA$ on the left and $FA'$ on the right) to both sides of the coend relation, so that the hexagon axiom for $F$ may be applied.
\end{proof}
\begin{lemma}\label{lem:iota-commute-with-opticf}
$\iota$ commutes with $\mathbf{Optic}(F)$, in the sense that
\[ \mathbf{Optic}(F)(\iota(f, g)) = \iota(Ff, Fg) \]
\end{lemma}
\begin{proof}
This is a straightforward calculation:
\begin{align*}
& \mathbf{Optic}(F)(\iota(f, g)) \\
&\qquad \text{(Definition of $\iota$)} \\
&= \mathbf{Optic}(F)(\rep{\lambda_A^{-1} f}{g \lambda_{A'}}) \\
&\qquad \text{(Definition of $\mathbf{Optic}(F)$)} \\
&= \rep{\phi^{-1}_{I,A} (F(\lambda_A^{-1} f))}{(F(g \lambda_{A'})) \phi_{I,A'}} \\
&\qquad \text{(Functoriality of $F$)} \\
&= \rep{\phi^{-1}_{I,A} (F\lambda_A^{-1}) (Ff)}{(Fg)(F \lambda_{A'}) \phi_{I,A'}} \\
&\qquad \text{(Introducing $\phi_I$ to both sides)} \\
&= \rep{(\phi^{-1}_I \otimes FA) \phi^{-1}_{I,A} (F\lambda_A^{-1}) (Ff)}{(Fg)(F \lambda_{A'}) \phi_{I,A'} (\phi_I \otimes FA)} \\
&\qquad \text{($F$ is a monoidal functor)} \\
&= \rep{\lambda_{FA}^{-1} (Ff)}{(Fg)\lambda_{FA'}} \\
&\qquad \text{(Definition of $\iota$)} \\
&= \iota(Ff, Fg)
\end{align*}
\end{proof}
\begin{proposition}\label{prop:iota-naturality}
$\iota : \mathscr{C} \times \mathscr{C}^\mathrm{op} \to \mathbf{Optic}_\mathscr{C}$ ``lifts natural isomorphisms'', in the following sense. Given monoidal functors $F, G : \mathscr{C} \to \mathscr{D}$ and a monoidal natural isomorphism $\alpha : F \Rightarrow G$, there is an induced natural isomorphism $\mathbf{Optic}(\alpha) : \mathbf{Optic}(F) \Rightarrow \mathbf{Optic}(G)$ with components:
\begin{align*}
{\mathbf{Optic}(\alpha)}_{(S, S')} &: (FS, FS') \to (GS, GS') \\
{\mathbf{Optic}(\alpha)}_{(S, S')} &:= \iota(\alpha_{S}, \alpha^{-1}_{S'})
\end{align*}
\end{proposition}
\begin{proof}
Suppose $\phi$ and $\psi$ are the structure maps for $F$ and $G$ respectively. We just have to show naturality, i.e.\ that for $p : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ in $\mathbf{Optic}_\mathscr{C}$, the equation \[\mathbf{Optic}(\alpha)_{(A, A')} \circ \mathbf{Optic}(F)(p) = \mathbf{Optic}(G)(p) \circ \mathbf{Optic}(\alpha)_{(S, S')}\] holds. Suppose $p = \rep{l}{r}$ with residual $M$. On the left we have:
\begin{center}
\input{diagrams/iota-lift-step1.tikz}
\end{center}
We use the coend relation to place an $\alpha$ on either side:
\begin{center}
\input{diagrams/iota-lift-step2.tikz}
\end{center}
And then monoidality of $\alpha$ to commute it past $\phi$.
\begin{center}
\input{diagrams/iota-lift-step3.tikz}
\end{center}
Finally, $\alpha$ commutes with $F l$ and $F r$ by naturality.
\begin{center}
\input{diagrams/iota-lift-step4.tikz}
\end{center}
This is the diagram for $\mathbf{Optic}(G)(p) \circ {\mathbf{Optic}(\alpha)}_{(S, S')}$.
\end{proof}
\begin{theorem}
$\mathbf{Optic}_\mathscr{C}$ is symmetric monoidal, where $(S, S') \otimes (T, T') = (S \otimes T, S' \otimes T')$, the unit object is $(I, I)$, and the action on a pair of morphisms $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ and $\rep{l'}{r'} : (T, T') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (B, B')$ is given by:
\begin{center}
\input{diagrams/tensor-on-morphisms.tikz}
\end{center}
\end{theorem}
\begin{proof}
Suppose the two optics have residuals $M$ and $N$ respectively. Written equationally, their tensor is:
\begin{align*}
\rep{l}{r} \otimes \rep{l'}{r'} &:= \rep{(M \otimes s_{A,N} \otimes B)(l \otimes l')}{(r \otimes r')(M \otimes s_{N,A'} \otimes B')}
\end{align*}
This does not depend on the choice of representatives, as demonstrated by the equivalence of the following diagrams:
\begin{center}
\input{diagrams/tensor-on-morphisms-defined-left.tikz}
\input{diagrams/tensor-on-morphisms-defined-right.tikz}
\end{center}
To check functoriality of $\otimes$, suppose we have optics
\begin{align*}
\rep{l_1}{r_1} : (S_1, S_1') &\ensuremath{\,\mathaccent\shortmid\rightarrow\,} (S_2, S_2') \\
\rep{l_2}{r_2} : (S_2, S_2') &\ensuremath{\,\mathaccent\shortmid\rightarrow\,} (S_3, S_3') \\
\rep{p_1}{q_1} : (T_1, T_1') &\ensuremath{\,\mathaccent\shortmid\rightarrow\,} (T_2, T_2') \\
\rep{p_2}{q_2} : (T_2, T_2') &\ensuremath{\,\mathaccent\shortmid\rightarrow\,} (T_3, T_3').
\end{align*}
The string diagram for $(\rep{l_2}{r_2} \circ \rep{l_1}{r_1}) \otimes (\rep{p_2}{q_2} \circ \rep{p_1}{q_1})$ is:
\begin{center}
\input{diagrams/tensor-functorial-left.tikz}
\end{center}
And for $(\rep{l_2}{r_2} \otimes \rep{p_2}{q_2}) \circ (\rep{l_1}{r_1} \otimes \rep{p_1}{q_1})$ is:
\begin{center}
\input{diagrams/tensor-functorial-right.tikz}
\end{center}
These two diagrams are equivalent: we can use the naturality of the symmetry morphism to push $l_2$ and $r_2$ past the crossing to be next to $p_2$ and $q_2$ respectively. This creates two extra twists that can be cancelled in the center of the diagram.
The structure morphisms are all lifted from the structure morphisms in $\mathscr{C} \times \mathscr{C}^\mathrm{op}$:
\begin{align*}
\alpha_{(R, R'), (S, S'), (T, T')} &:= \iota(\alpha_{R,S,T}, \alpha_{R',S',T'}^{-1}) \\
\lambda_{(S, S')} &:= \iota(\lambda_{S}, \lambda_{S'}^{-1}) \\
\rho_{(S, S')} &:= \iota(\rho_{S}, \rho_{S'}^{-1}) \\
s_{(S, S'), (T, T')} &:= \iota(s_{S, T}, s_{T', S'})
\end{align*}
Note that because $\iota(S, S') = (S, S')$, the equations required to hold for $\iota$ to be a monoidal functor hold by definition (although we don't yet know that $\mathbf{Optic}_\mathscr{C}$ is monoidal). The pentagon and triangle equations then hold in $\mathbf{Optic}_\mathscr{C}$, as they are the image of the same diagrams in $\mathscr{C} \times \mathscr{C}^\mathrm{op}$ under $\iota$. The only remaining thing to verify is that these structure maps are natural in $\mathbf{Optic}_\mathscr{C}$, but this follows from the previous proposition.
\end{proof}
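For a concrete check, the tensor of two optics in $(\mathbf{Set}, \times)$ can be transcribed as follows (again with illustrative names of our own); the symmetry appears as the shuffle collecting both residuals on the left.

```python
class Optic:
    """A representative <l | r> of an optic (S, S') +-> (A, A') in Set."""
    def __init__(self, l, r):
        self.l = l  # l : S -> M x A
        self.r = r  # r : M x A' -> S'

def tensor(o1, o2):
    """<l|r> (x) <l'|r'> with residual M x N: the left leg is
    (M (x) s_{A,N} (x) B)(l (x) l'), the right leg is
    (r (x) r')(M (x) s_{N,A'} (x) B')."""
    def l(st):
        s, t = st
        m, a = o1.l(s)
        n, b = o2.l(t)
        return (m, n), (a, b)   # symmetry moves n past a
    def r(arg):
        (m, n), (a_, b_) = arg  # symmetry moves n past a'
        return o1.r((m, a_)), o2.r((n, b_))
    return Optic(l, r)

def view(o, s):
    return o.l(s)[1]

def update(o, a_, s):
    return o.r((o.l(s)[0], a_))

# The optic focusing the first component of a pair.
first = Optic(lambda s: (s[1], s[0]), lambda ma: (ma[1], ma[0]))
```

Tensoring \texttt{first} with itself then acts componentwise on a pair of pairs, focusing both first components at once.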
\begin{proposition}
For monoidal $F : \mathscr{C} \to \mathscr{D}$, the induced $\mathbf{Optic}(F) : \mathbf{Optic}_\mathscr{C} \to \mathbf{Optic}_\mathscr{D}$ is also monoidal.
\end{proposition}
\begin{proof}
The structure morphisms for monoidality are given by lifting the structure morphisms for $F$:
\begin{align*}
\phi_{(S, S'), (T, T')} &:= \iota(\phi_{S, T}, \phi^{-1}_{S', T'}) &&: F(S, S') \otimes F(T, T') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} F((S, S') \otimes (T, T')) \\
\phi_{(I, I)} &:= \iota(\phi_I, \phi_I^{-1}) &&: (I, I) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} F(I, I)
\end{align*}
The monoidality axioms follow by lifting the axioms for $F$ and naturality follows by Proposition~\ref{prop:iota-naturality}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:optic-functor}]
The functor is well defined on its domain: if $\mathscr{C}$ is small then $\mathbf{Optic}_\mathscr{C}$ exists. The only property left to check is functoriality, i.e. that for monoidal functors $F : \mathscr{C} \to \mathscr{D}$ and $G : \mathscr{D} \to \mathscr{E}$ we have
\[ \mathbf{Optic}(G) \circ \mathbf{Optic}(F) = \mathbf{Optic}(G \circ F).\]
On objects this is clear, as $\mathbf{Optic}(F)(S, S') = (FS, FS')$. On a morphism $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ in $\mathscr{C}$, we check:
\begin{align*}
(\mathbf{Optic}(G) \circ \mathbf{Optic}(F))(\rep{l}{r})
&= \mathbf{Optic}(G) \left(\rep{\phi^{-1}_{M,A} (Fl)}{(Fr) \phi_{M,A'}}\right) \\
&= \rep{\psi^{-1}_{FM,FA} (G(\phi^{-1}_{M,A} (Fl)))}{(G((Fr) \phi_{M,A'}))\psi_{FM,FA'}} \\
&= \rep{\psi^{-1}_{FM,FA} (G\phi^{-1}_{M,A}) (GFl)}{(GFr) (G\phi_{M,A'})\psi_{FM,FA'}} \\
&=\mathbf{Optic}(G \circ F)(\rep{l}{r})
\end{align*}
where $\phi$ and $\psi$ denote the structure maps for $F$ and $G$ respectively, and in the last step we use that $(G\phi_{M,A'})\psi_{FM,FA'}$ is by definition the structure map for $G \circ F$. Checking that the identity is preserved is similar.
\end{proof}
This doesn't extend to a strict 2-functor $\mathbf{SymmMonCat} \to \mathbf{SymmMonCat}$, as there is only an action of $\mathbf{Optic}$ on natural \emph{isomorphisms}. It is however functorial on natural isomorphisms, giving a 2-functor on the `homwise-core' of $\mathbf{SymmMonCat}$. We do not explore this any further in the present note.
\begin{proposition}
If $\mathscr{C}$ is a strict symmetric monoidal category then $\mathbf{Optic}_\mathscr{C}$ is strict, and $\iota : \mathscr{C} \times \mathscr{C}^\mathrm{op} \to \mathbf{Optic}_\mathscr{C}$ is a strict monoidal functor. For $F : \mathscr{C} \to \mathscr{D}$ a strict monoidal functor, the induced functor $\mathbf{Optic}(F) : \mathbf{Optic}_\mathscr{C} \to \mathbf{Optic}_\mathscr{D}$ is also strict.
\end{proposition}
\begin{proof}
The structure maps of $\mathbf{Optic}_\mathscr{C}$ are given by $\iota$ applied to the structure maps of $\mathscr{C}$. If the latter are identities, then so are the former---the identity morphisms in $\mathbf{Optic}_\mathscr{C}$ are by definition $\iota(\mathrm{id}_S, \mathrm{id}_{S'})$.
That $\iota$ is strict is clear, as the structure morphisms in $\mathbf{Optic}_\mathscr{C}$ are exactly the structure morphisms in $\mathscr{C} \times \mathscr{C}^\mathrm{op}$ under $\iota$.
Finally, the structure morphisms of $\mathbf{Optic}(F)$ are lifted from $F$, so if the latter is strict then so is the former.
\end{proof}
\subsection{Teleological Categories}\label{sec:teleological-categories}
In this section we establish a universal property of the $\mathbf{Optic}$ construction. The idea is that every optic $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ consists of a morphism $S \to M \otimes A$ and the `formal dual' of a morphism $M \otimes A' \to S'$, composed with a `formal counit' that traces out the object $M$:
\begin{center}
\input{diagrams/generic-optic-folded.tikz}
\end{center}
It will be convenient to equip $\mathbf{Optic}_\mathscr{C}$ with a slightly different symmetric monoidal structure:
\begin{definition}
The \emph{switched} monoidal product on $\mathbf{Optic}_\mathscr{C}$ is given on objects by
\begin{align*}
(S, S') \mathbin{\tilde{\otimes}} (T, T') := (S \otimes T, T' \otimes S')
\end{align*}
and on morphisms $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ and $\rep{l'}{r'} : (T, T') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (B, B')$ by:
\begin{center}
\input{diagrams/switched-tensor-on-morphisms.tikz}
\end{center}
\end{definition}
The universal property for $\mathbf{Optic}_\mathscr{C}$ given in this section is an argument for this being the ``morally correct'' tensor, although it does seem a little strange. When we later discuss lawful optics, we are forced to use the unswitched tensor to maintain the invariant that our objects are of the form $(X, X)$.
\begin{proposition}
$(\mathbf{Optic}_\mathscr{C}, \mathbin{\tilde{\otimes}}, (I, I))$ is a symmetric monoidal category.
\end{proposition}
\begin{proof}
The proof that $(\mathbf{Optic}_\mathscr{C}, \mathbin{\tilde{\otimes}}, (I, I))$ is symmetric monoidal is nearly identical to that for the unswitched tensor. Note that due to the switching, the structure morphisms are slightly different:
\begin{align*}
\alpha_{(R, R'), (S, S'), (T, T')} &:= \iota(\alpha_{R,S,T}, \alpha_{T',S',R'}^{-1}) \\
\lambda_{(S, S')} &:= \iota(\lambda_{S}, \rho_{S'}^{-1}) \\
\rho_{(S, S')} &:= \iota(\rho_{S}, \lambda_{S'}^{-1}) \\
s_{(S, S'), (T, T')} &:= \iota(s_{S, T}, s_{S', T'})
\end{align*}
\end{proof}
\begin{remark}
Just as in the unswitched case, if $\mathscr{C}$ is a strict monoidal category then so is $(\mathbf{Optic}_\mathscr{C}, \mathbin{\tilde{\otimes}}, (I, I))$.
\end{remark}
We now define the structure on a symmetric monoidal category universally provided by the $\mathbf{Optic}$ construction.
\begin{definition}[Compare {\cite[Definition 5.1]{CoherenceForLenses}}]
A \emph{teleological category} is a symmetric monoidal category $(\mathscr{T}, \mathbin{\boxtimes}, I)$, equipped with:
\begin{itemize}
\item A symmetric monoidal subcategory $\mathscr{T}_d$ of \emph{dualisable morphisms} containing all the objects of $\mathscr{T}$, with an involutive symmetric monoidal functor ${(-)}^* : \mathscr{T}_d \to \mathscr{T}_d^\mathrm{op}$, where---not finding a standard symbol for such a thing---we mean $\mathscr{T}_d^\mathrm{op}$ to be the category with both the direction of the arrows \emph{and} the order of the tensor flipped: ${(A \mathbin{\boxtimes} B)}^* \cong B^* \mathbin{\boxtimes} A^*$. Note that there is therefore also a canonical isomorphism $\phi : I \cong I^*$.
\item A symmetric monoidal extranatural family of morphisms $\varepsilon_X : X \mathbin{\boxtimes} X^* \to I$, called \emph{counits}, natural with respect to the \emph{dualisable} morphisms.
\end{itemize}
\end{definition}
Unpacking the definition, $\varepsilon$ being a symmetric monoidal extranatural transformation amounts to the following diagrams in $\mathscr{T}$ commuting:
\[
\begin{tikzcd}
X \mathbin{\boxtimes} Y^* \ar[r, "f \mathbin{\boxtimes} Y^*"] \ar[d, "X \mathbin{\boxtimes} f^*", swap] & Y \mathbin{\boxtimes} Y^* \ar[d, "\varepsilon_Y"] \\
X \mathbin{\boxtimes} X^* \ar[r, "\varepsilon_X", swap] & I
\end{tikzcd} \hspace{1cm}
\begin{tikzcd}
X^* \mathbin{\boxtimes} X \ar[r, "s"] \ar[d, "\cong" swap] & X \mathbin{\boxtimes} X^* \ar[d, "\varepsilon_X"] \\
X^* \mathbin{\boxtimes} {(X^*)}^* \ar[r, "\varepsilon_{X^*}", swap] & I
\end{tikzcd}\]
\[
\begin{tikzcd}[column sep = large]
X \mathbin{\boxtimes} Y \mathbin{\boxtimes} Y^* \mathbin{\boxtimes} X^* \ar[r, "X \mathbin{\boxtimes} \varepsilon_Y \mathbin{\boxtimes} X^*"] \ar[d, "\cong" swap] & X \mathbin{\boxtimes} X^* \ar[d, "\varepsilon_X"] \\
X \mathbin{\boxtimes} Y \mathbin{\boxtimes} (X \mathbin{\boxtimes} Y)^* \ar[r, "\varepsilon_{X \mathbin{\boxtimes} Y}", swap] & I
\end{tikzcd} \hspace{1cm}
\begin{tikzcd}
I \mathbin{\boxtimes} I^* \ar[r,"I \mathbin{\boxtimes} \phi"] \ar[dr, swap, "\varepsilon_I"] & I \mathbin{\boxtimes} I \ar[d, "\cong"] \\
& I
\end{tikzcd}
\]
where $f : X \to Y$ is dualisable.
Note that because $\mathscr{T}_d$ is symmetric monoidal and has the same collection of objects as $\mathscr{T}$, the symmetric monoidal structure morphisms of $\mathscr{T}$ must be contained in $\mathscr{T}_d$ and so are dualisable.
\begin{example}
~\begin{enumerate}[(1)]
\item Any compact closed category is a teleological category, where every morphism is dualisable and the unit morphisms have been forgotten.
\item Any symmetric monoidal category with terminal monoidal unit is trivially teleological, setting the dualisable morphisms to be all isomorphisms.
\end{enumerate}
\end{example}
This definition of teleological category differs from the original given in~\cite{CoherenceForLenses}, in that the duality switches the order of the tensor product. We do this so that compact closed categories are teleological, but the bookkeeping does admittedly become more confusing.
\begin{definition}
A \emph{teleological functor} $F : \mathscr{T} \to \mathscr{S}$ is a symmetric monoidal functor that restricts to a functor $F_d : \mathscr{T}_d \to \mathscr{S}_d$ on the dualisable subcategories, commutes with the duality via a monoidal natural isomorphism $d_X : F(X^*) \to {(FX)}^*$, and such that the counits are preserved:
\[
\begin{tikzcd}[column sep = large]
F(X \mathbin{\boxtimes} X^*) \ar[r, "\phi_{X, X^*}"] \ar[d, "F\varepsilon_X", swap] & FX \mathbin{\boxtimes} F(X^*) \ar[r, "FX \mathbin{\boxtimes} d_X"] & FX \mathbin{\boxtimes} (FX)^* \ar[d, "\varepsilon_{FX}"] \\
FI \ar[rr, "\phi_I", swap] & & I
\end{tikzcd}
\]
\end{definition}
Together we have $\mathbf{Tele}$, the category of teleological categories and teleological functors. There are evident functors
\begin{align*}
U &: \mathbf{Tele} \to \mathbf{SymmMonCat} \\
{(-)}_d &: \mathbf{Tele} \to \mathbf{SymmMonCat}
\end{align*}
that take a teleological category to its underlying symmetric monoidal category and subcategory of dualisable morphisms respectively.
The definition of teleological category suggests a string diagram calculus similar to that for compact closed categories, but where only counits are allowed and only morphisms known to be dualisable may be passed around a counit. We have of course not proven that such a calculus is sound for teleological categories, but we trust that a sceptical reader could verify our arguments equationally.
\begin{proposition}
$\mathbf{Optic}_\mathscr{C}$ forms a teleological category, where:
\begin{itemize}
\item The dualisable morphisms are all morphisms of the form $\iota(f, g)$;
\item The involution is given on objects by ${(S, S')}^* := (S', S)$, and on morphisms by ${\iota(f, g)}^* := \iota(g, f)$;
\item The counit $\varepsilon_{(S, S')} : (S, S') \mathbin{\tilde{\otimes}} {(S, S')}^* = (S \otimes S', S \otimes S') \to (I, I)$ is given by the connector: \[\varepsilon_{(S, S')} := c_{S \otimes S'}.\]
\end{itemize}
\end{proposition}
\begin{proof}
That morphisms of the form $\iota(f, g)$ constitute a symmetric monoidal subcategory is clear: they are the image of the symmetric monoidal functor $\iota$.
The functor ${(-)}^*$ is a symmetric monoidal involution, in fact it is strictly so:
\begin{align*}
{\left( (S, S') \mathbin{\tilde{\otimes}} (T, T') \right)}^*
&= {\left( S \otimes T, T' \otimes S' \right)}^* \\
&= {\left(T' \otimes S', S \otimes T \right)} \\
&= (T', T) \mathbin{\tilde{\otimes}} (S', S) \\
&= {(T, T')}^* \mathbin{\tilde{\otimes}} {(S, S')}^*
\end{align*}
To check extranaturality of $\varepsilon$, suppose we have a dualisable optic $\iota(f, g) : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (T, T')$, so $f : S \to T$ and $g : T' \to S'$. Happily, all the switching in the definitions cancels out! Extranaturality is witnessed by the equality of the string diagrams:
\begin{center}
\input{diagrams/counit-extranatural-left.tikz}
\qquad \raisebox{1.5cm}{$=$} \qquad
\input{diagrams/counit-extranatural-right.tikz}
\end{center}
Symmetry of $\varepsilon$ is witnessed by:
\begin{center}
\input{diagrams/counit-symmetry-left.tikz}
\qquad \raisebox{1.5cm}{$=$} \qquad
\input{diagrams/counit-symmetry-right.tikz}
\end{center}
And for monoidality of $\varepsilon$ there is essentially nothing to do in the graphical calculus:
\begin{center}
\input{diagrams/counit-monoidal-left.tikz}
\qquad \raisebox{2cm}{$=$} \qquad
\input{diagrams/counit-monoidal-right.tikz}
\end{center}
Note that the diagrams that are required to commute in the definition of teleological category all terminate with the unit $I$, so in view of Proposition~\ref{prop:costates} we should not be surprised that they correspond to equality of maps in $\mathscr{C}$.
\end{proof}
\begin{proposition}
The functor $\mathbf{Optic} : \mathbf{SymmMonCat} \to \mathbf{SymmMonCat}$ of Theorem~\ref{thm:optic-functor} extends to a functor to $\mathbf{Tele}$.
\end{proposition}
\begin{proof}
We have seen that $\mathbf{Optic}_\mathscr{C}$ is always teleological. We must show that for a symmetric monoidal functor $F : \mathscr{C} \to \mathscr{D}$, the induced functor $\mathbf{Optic}(F) : \mathbf{Optic}_\mathscr{C} \to \mathbf{Optic}_\mathscr{D}$ is teleological. That $\mathbf{Optic}(F)$ preserves the dualisable morphisms is exactly Lemma~\ref{lem:iota-commute-with-opticf}. It also preserves the counits:
\begin{align*}
&\mathbf{Optic}(F)(\varepsilon_{(S, S')}) \\
&= \mathbf{Optic}(F)(c_{S \otimes S'}) && \text{(Definition of the counit)} \\
&= \mathbf{Optic}(F)(\rep{\rho_S^{-1}}{\rho_{S'}}) && \text{(Definition of $c$)} \\
&= \rep{\phi^{-1}_{S,I} (F \rho_S^{-1})}{(F \rho_{S'}) \phi_{S',I}}&& \text{(Definition of $\mathbf{Optic}(F)$)} \\
&= \rep{(FS \otimes \phi_I^{-1}) \phi^{-1}_{S,I} (F \rho_S^{-1})}{(F \rho_{S'}) \phi_{S',I} (FS \otimes \phi_I)} && \text{(Introduce $\phi_I$ to both sides)} \\
&= \rep{\rho_{FS}^{-1}}{\rho_{FS'}} && \text{($F$ is monoidal)} \\
&= \varepsilon_{(FS, FS')} && \text{(Definition of the counit)}
\end{align*}
\end{proof}
We will establish the universal property in the somewhat contrived case of \emph{strict} symmetric monoidal categories and \emph{strict} monoidal functors, but anticipate that this result could be weakened to non-strict symmetric monoidal categories at the cost of checking far more coherences.
\begin{definition}
A teleological category is \emph{strict} if it is strict as a symmetric monoidal category and ${(-)}^*$ is a strict monoidal involution, so ${(A \mathbin{\boxtimes} B)}^* = B^* \mathbin{\boxtimes} A^*$ and $I^* = I$, and also ${(A^*)}^* = A$. A teleological functor is \emph{strict} if it is strict as a symmetric monoidal functor and strictly preserves the duality and counits.
\end{definition}
We have previously noted that $\mathbf{Optic}_\mathscr{C}$ is strict monoidal if $\mathscr{C}$ is, and that in that case the duality is strict. There are functors
\begin{align*}
\mathbf{Optic} &: \mathbf{StrictSymmMonCat} \to \mathbf{StrictTele} \\
U &: \mathbf{StrictTele} \to \mathbf{StrictSymmMonCat} \\
{(-)}_d &: \mathbf{StrictTele} \to \mathbf{StrictSymmMonCat}
\end{align*}
The crux is the following proposition that decomposes every optic in a canonical way.
\begin{proposition}\label{prop:optic-decompose}
Suppose $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ has residual $M$. Then
\begin{align*}
\rep{l}{r} = ((A, I) \mathbin{\tilde{\otimes}} \varepsilon_{(M, I)} \mathbin{\tilde{\otimes}} (I, A'))(j(s_{M,A}l) \mathbin{\tilde{\otimes}} {j(rs_{A',M})}^*)
\end{align*}
where $j : \mathscr{C} \to \mathbf{Optic}_\mathscr{C}$ is the functor $j(A) := \iota(A, I)$.
\end{proposition}
The symmetries in the above expression could have been avoided if $\mathbf{Optic}$ had been defined as $\int^{M \in \mathscr{C}} \mathscr{C}(S, A \otimes M) \times \mathscr{C}(A' \otimes M, S')$, but it is too late to change the convention now!
\begin{proof}
First note that because $\mathscr{C}$ is strict monoidal, the counit $\varepsilon_{(M, I)} : (M \otimes I, M \otimes I) = (M, M) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (I, I)$ is equal to the connector $c_M : (M, M) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (I, I)$.
Then, up to strictness of the monoidal unit, we are composing the two optics
\begin{center}
\input{diagrams/optic-decomposed-outer.tikz}
\end{center}
and
\begin{center}
\input{diagrams/optic-decomposed-inner.tikz}
\end{center}
so the two pairs of twists cancel, and we are left exactly with the diagram for $\rep{l}{r}$.
\end{proof}
This also holds for monoidal categories that are not necessarily strict, if the unit object and unitors are inserted in the appropriate places.
\begin{proposition}
Suppose $(\mathscr{C}, \otimes, I)$ is a strict symmetric monoidal category and $(\mathscr{T}, \mathbin{\boxtimes}, I, {(-)}^*, \varepsilon)$ is a strict teleological category. Given a strict symmetric monoidal functor $F : \mathscr{C} \to \mathscr{T}_d$, there exists a unique strict teleological functor $K : \mathbf{Optic}_\mathscr{C} \to \mathscr{T}$ with the property $Kj = F$.
\end{proposition}
\begin{proof}
We construct $K$ as follows. Note that any object $(S, S')$ in $\mathbf{Optic}_\mathscr{C}$ can be written uniquely as $j(S) \mathbin{\tilde{\otimes}} {j(S')}^*$, so we are forced to define $K(S, S') = FS \mathbin{\boxtimes} {(FS')}^*$. Suppose $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ is an optic. By the previous Proposition,
\begin{align*}
\rep{l}{r} = ((A, I) \mathbin{\tilde{\otimes}} \varepsilon_{(M, I)} \mathbin{\tilde{\otimes}} (I, A'))(j(s_{M,A}l) \mathbin{\tilde{\otimes}} {j(rs_{A', M})}^*)
\end{align*}
So if a $K$ with $Kj = F$ exists, it must hold that
\begin{align*}
K\rep{l}{r}
&= K((A, I) \mathbin{\tilde{\otimes}} \varepsilon_{(M, I)} \mathbin{\tilde{\otimes}} (I, A')) K(j(s_{M,A}l) \mathbin{\tilde{\otimes}} {j(rs_{A', M})}^*) \\
&\qquad\text{($K$ is monoidal)} \\
&= (K(A, I) \mathbin{\boxtimes} K\varepsilon_{(M, I)} \mathbin{\boxtimes} K(I, A')) (K(j(s_{M,A}l)) \mathbin{\boxtimes} K( {j(rs_{A', M})}^*)) \\
&\qquad\text{($K$ preserves the counit and duality)} \\
&= (K(A, I) \mathbin{\boxtimes} \varepsilon_{K(M,I)} \mathbin{\boxtimes} K(I, A')) (K(j(s_{M,A}l)) \mathbin{\boxtimes} K( {j(rs_{A', M})})^*) \\
&\qquad\text{($K$ satisfies $Kj = F$)} \\
&= (FA \mathbin{\boxtimes} \varepsilon_{FM} \mathbin{\boxtimes} {(FA')}^*) (F(s_{M,A}l) \mathbin{\boxtimes} F(rs_{A', M})^*)
\end{align*}
We therefore take \[ K\rep{l}{r} = (FA \mathbin{\boxtimes} \varepsilon_{FM} \mathbin{\boxtimes} {(FA')}^*) (F(s_{M,A}l) \mathbin{\boxtimes} {F(rs_{A', M})}^*) \] as our definition of $K$. The diagram for $K\rep{l}{r}$ in $\mathscr{T}$ is as follows:
\begin{center}
\input{diagrams/k-generic-optic.tikz}
\end{center}
It remains to show that $K$ so defined is indeed a strict teleological functor. There are several things to check:
\begin{itemize}
\item Well-definedness: Suppose we have two optics related by the coend relation:
\begin{align*}
\rep{(f \otimes A) l}{r} = \rep{l}{r (f \otimes A')}
\end{align*}
Then well-definedness is shown by the equivalence of diagrams
\begin{center}
\input{diagrams/k-well-defined-left.tikz}
\qquad \raisebox{1.5cm}{$=$} \qquad
\input{diagrams/k-well-defined-right.tikz}
\end{center}
using naturality of the symmetry and extranaturality of the counit.
\item Functoriality: We have an equivalence of diagrams
\begin{center}
\input{diagrams/k-functorial-left.tikz}
\quad \raisebox{1.5cm}{$=$} \quad
\input{diagrams/k-functorial-right.tikz}
\end{center}
using naturality of the symmetry and monoidality of the counit.
\item Monoidality:
\begin{align*}
K((S, S') \mathbin{\tilde{\otimes}} (T, T'))
&= K(S \otimes T, T' \otimes S') \\
&= F(S \otimes T) \mathbin{\boxtimes} {F(T' \otimes S')}^* \\
&= FS \mathbin{\boxtimes} FT \mathbin{\boxtimes} {(FS')}^* \mathbin{\boxtimes} {(FT')}^* \\
&= FS \mathbin{\boxtimes} {(FS')}^* \mathbin{\boxtimes} FT \mathbin{\boxtimes} {(FT')}^* \\
&= K(S, S') \mathbin{\boxtimes} K(T, T')
\end{align*}
and
\begin{align*}
K(I, I)
&= FI \mathbin{\boxtimes} {(FI)}^* \\
&= I \mathbin{\boxtimes} I^* \\
&= I
\end{align*}
\item Preservation of duals:
\begin{align*}
K({(S, S')}^*)
= K(S', S)
= FS' \mathbin{\boxtimes} {(FS)}^*
= {(FS \mathbin{\boxtimes} {(FS')}^*)}^*
= {(K(S, S'))}^*
\end{align*}
\item Preservation of dualisable morphisms: For a morphism $\iota(f, g)$:
\begin{align*}
K(\iota(f, g))
&= K(\rep{\lambda_A^{-1} f}{g \lambda_{A'}}) \\
&= (FA \mathbin{\boxtimes} \varepsilon_{FI} \mathbin{\boxtimes} {(FA')}^*)(F(s_{I,A}\lambda_A^{-1} f) \mathbin{\boxtimes} {(F(g \lambda_{A'}s_{A', I}))}^* ) \\
&= (FA \mathbin{\boxtimes} {(FA')}^*)(Ff \mathbin{\boxtimes} {(Fg)}^* ) \\
&= Ff \mathbin{\boxtimes} {(Fg)}^*
\end{align*}
and this is dualisable, as dualisability is preserved by taking the monoidal product and duals.
\item Preservation of counits:
\begin{align*}
K(\varepsilon_{(S, S')})
&= K(c_{S \otimes S'}) \\
&= K(\rep{\rho_{S \otimes S'}^{-1}}{\rho_{S \otimes S'}}) \\
&= (FI \mathbin{\boxtimes} \varepsilon_{F(S \otimes S')} \mathbin{\boxtimes} {(FI)}^*)(F(s_{S \otimes S',I}\rho_{S \otimes S'}^{-1}) \mathbin{\boxtimes} (F(\rho_{S \otimes S'} s_{I, S \otimes S'}))^* ) \\
&= (\varepsilon_{F(S \otimes S')})(F(S \otimes S') \mathbin{\boxtimes} F{(S \otimes S')}^* ) \\
&= \varepsilon_{F(S \otimes S')} \\
&= \varepsilon_{FS \mathbin{\boxtimes} FS'} \\
&= \varepsilon_{FS}(FS \mathbin{\boxtimes} \varepsilon_{FS'} \mathbin{\boxtimes} {(FS)}^*) \\
&= \varepsilon_{FS}(FS \mathbin{\boxtimes} \varepsilon_{{(FS')}^*} \mathbin{\boxtimes} {(FS)}^*) \\
&= \varepsilon_{FS \mathbin{\boxtimes} {(FS')}^*} \\
&= \varepsilon_{K(S, S')}
\end{align*}
The critical move is applying the equality $\varepsilon_{FS'} = \varepsilon_{{(FS')}^*}$, which follows because $\varepsilon$ is a symmetric monoidal transformation and the duality is strict.
\end{itemize}
\end{proof}
\begin{theorem}\label{thm:optic-is-free-teleological-cat}
$\mathbf{Optic} : \mathbf{StrictSymmMonCat} \to \mathbf{StrictTele}$ is left adjoint to the `underlying dualisable morphisms' functor ${(-)}_d : \mathbf{StrictTele} \to \mathbf{StrictSymmMonCat}$.
\end{theorem}
\begin{proof}
Precomposition with $j$ gives a function
\begin{align*}
\mathbf{StrictTele}(\mathbf{Optic}_\mathscr{C}, \mathscr{T}) \to \mathbf{StrictSymmMonCat}(\mathscr{C}, \mathscr{T}_d)
\end{align*}
and the previous proposition states that this is an isomorphism. This is automatically natural in $\mathscr{T}$. Naturality in $\mathscr{C}$ follows by Lemma~\ref{lem:iota-commute-with-opticf}.
\end{proof}
\begin{remark}
The above theorem and its proof have much in common with~\cite[Proposition 5.2]{JoyalStreetVerity}, which gave a similar universal property for their $\mathrm{Int}$ construction on traced monoidal categories.
\end{remark}
Working with strict monoidal categories made it significantly easier to prove the universal property. There is likely to be a 2-categorical universal property of $\mathbf{Optic}$ for non-strict monoidal categories, so long as we restrict our attention to the sub-2-category $\mathbf{SymmMonCat}_\mathrm{homcore}$ of $\mathbf{SymmMonCat}$ that only contains natural isomorphisms. We leave this to future work:
\begin{definition}
A \emph{teleological natural isomorphism} $\alpha : F \Rightarrow G$ is a monoidal natural isomorphism whose components are all dualisable and that is additionally compatible with the dualisation:
\[
\begin{tikzcd}
{(FX)}^* \ar[r, "\cong"] \ar[d, "(\alpha_X)^*", swap] & F(X^*) \ar[d, "\alpha_{X^*}"] \\
{(GX)}^* \ar[r, "\cong", swap] & G(X^*)
\end{tikzcd}
\]
There is a (strict) 2-category $\mathbf{Tele}$ consisting of teleological categories, functors and natural isomorphisms.
\end{definition}
\begin{conjecture}
\[ \mathbf{Optic} : \mathbf{SymmMonCat}_\mathrm{homcore} \to \mathbf{Tele} \] is left biadjoint to \[(-)_d : \mathbf{Tele} \to \mathbf{SymmMonCat}_\mathrm{homcore}\]
\end{conjecture}
\subsection{Optics for a Monoidal Action}
To capture more of the optic variants available in the Haskell \texttt{lens}{} library, we generalise to the case of a monoidal action of one category on another.
\begin{definition}
Let $\mathscr{C}$ be a category and $(\mathscr{M}, \otimes, I)$ a monoidal category. An \emph{action of $\mathscr{M}$ on $\mathscr{C}$} is a monoidal functor $a : \mathscr{M} \to [\mathscr{C}, \mathscr{C}]$. For two objects $M \in \mathscr{M}$ and $A \in \mathscr{C}$, the action $a(M)(A)$ is abbreviated $M \cdot A$.
\end{definition}
Given such an action, we define
\begin{align*}
\mathbf{Optic}_\mathscr{M}((S, S'), (A, A')) := \int^{M \in \mathscr{M}} \mathscr{C}(S, M \cdot A) \times \mathscr{C}(M \cdot A', S')
\end{align*}
This subsumes the earlier definition, taking $\mathscr{M} = \mathscr{C}$ and having $\mathscr{C}$ act on itself via left-tensor:
\begin{align*}
a : \mathscr{C} &\to [\mathscr{C}, \mathscr{C}] \\
X &\mapsto X \otimes -
\end{align*}
We henceforth write this case as $\mathbf{Optic}_\otimes$, to emphasise the action on $\mathscr{C}$ that is used.
\begin{proposition}
We have a category $\mathbf{Optic}_\mathscr{M}$ and a functor $\iota : \mathscr{C} \times \mathscr{C}^\mathrm{op} \to \mathbf{Optic}_\mathscr{M}$ defined analogously to Propositions~\ref{prop:optic-is-cat} and~\ref{prop:iota-functor}. \qed
\end{proposition}
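It may help to see this representation concretely. In the case $\mathscr{M} = \mathscr{C} = \mathbf{Set}$ with the product action, a representative $\rep{l}{r}$ of an optic is literally a pair of functions $l : S \to M \times A$ and $r : M \times A' \to S'$, and composition pairs up the residuals. The following Python sketch is our own illustration (the class and names are hypothetical, not drawn from any library):

```python
# A representative <l | r> of an optic (S, S') -|-> (A, A') in Set.
# The residual M is implicit: it is whatever appears as the first
# component of the values produced by l and consumed by r.
class Optic:
    def __init__(self, l, r):
        self.l = l  # l : S -> M x A
        self.r = r  # r : M x A' -> S'

    def compose(self, other):
        """Composite of self : (S,S') -|-> (A,A') with
        other : (A,A') -|-> (B,B'); the residual is the pair (M, N)."""
        def l(s):
            m, a = self.l(s)
            n, b = other.l(a)
            return ((m, n), b)
        def r(mn_b):
            (m, n), b = mn_b
            return self.r((m, other.r((n, b))))
        return Optic(l, r)

# Example: the lens onto the first component of a pair, with the
# second component as residual: l(s) = (s[1], s[0]), r(m, a') = (a', m).
fst_lens = Optic(lambda s: (s[1], s[0]), lambda ma: (ma[1], ma[0]))
```

Composing `fst_lens` with itself gives the lens onto the first component of a nested pair, with residual the pair of discarded components.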
\begin{definition}
Given two categories equipped with monoidal actions $(\mathscr{M}, \mathscr{C})$ and $(\mathscr{N}, \mathscr{D})$, a \emph{morphism of actions} is a monoidal functor $F^\bullet : \mathscr{M} \to \mathscr{N}$ together with a functor $F : \mathscr{C} \to \mathscr{D}$ that commutes with the actions, in the sense that there exists a natural isomorphism
\begin{align*}
\phi_{M,A} &: F(M \cdot A) \to (F^\bullet M) \cdot (F A)
\end{align*}
satisfying conditions analogous to those for a monoidal functor.
\end{definition}
\begin{proposition}\label{prop:change-of-action}
If $F : (\mathscr{M}, \mathscr{C}) \to (\mathscr{N}, \mathscr{D})$ is a morphism of actions, there is an induced functor $\mathbf{Optic}(F) : \mathbf{Optic}_\mathscr{M} \to \mathbf{Optic}_\mathscr{N}$. \qed
\end{proposition}
For the remainder of the paper we work in this more general setting.
\section{Lawful Optics}\label{sec:lawful-optics}
Typically we want our optics to obey certain laws. The `constant-complement' perspective suggests declaring an optic $\rep{l}{r}$ to be lawful if $l$ and $r$ are mutual inverses. There are a couple of issues with this definition. Firstly, it is not invariant under the coend relation, so the condition holding for one representative is no guarantee that it holds for any other. Still, we might say that an optic is lawful if it has \emph{some} representative that consists of mutual inverses. In our primary example of an optic variant, lenses in $\mathbf{Set}$, this does indeed correspond to the concrete lens laws. However, this fact relies on some extra structure possessed by $\mathbf{Set}$: the existence of pullbacks, and that all objects (other than the empty set) have a global element.
In this section we make a different definition of lawfulness that at first seems strange, but which in the case of lenses corresponds \emph{exactly} to the three concrete lens laws with no additional assumptions on $\mathscr{C}$ required. As further justification for this definition, in Section~\ref{sec:profunctor-optics} we will see an interpretation of (unlawful) optics as maps between certain comonoid objects. Lawfulness in our sense corresponds exactly to this map being a comonoid homomorphism.
The optic laws only make sense for optics of the form $p : (S,S) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A)$. In this section we will abbreviate $\mathbf{Optic}_\mathscr{M}((S, S), (A, A))$ as $\mathbf{Optic}_\mathscr{M}(S, A)$ and $p : (S, S) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A)$ as $p : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$.
\begin{remark}
We use $;$ to denote composition in $\mathscr{C}$ in diagrammatic order. The reason for this is that the coend relation can be applied simply by shifting the position of $\mid$ in a representative:
\begin{align*}
\rep{l;(\phi \cdot A)}{r} = \rep{l}{(\phi \cdot A);r}
\end{align*}
\end{remark}
Let $\mathbf{Optic}^2_\mathscr{M}(S, A)$ denote the set \[ \int^{M_1, M_2 \in \mathscr{M}} \mathscr{C}(S, M_1 \cdot A) \times \mathscr{C}(M_1 \cdot A, M_2 \cdot A) \times \mathscr{C}(M_2 \cdot A, S). \]
Using the universal property of the coend, we define three maps:
\begin{align*}
\mathsf{outside} &: \mathbf{Optic}_\mathscr{M}(S, A) \to \mathscr{C}(S, S) \\
\mathsf{once}, \mathsf{twice} &: \mathbf{Optic}_\mathscr{M}(S, A) \to \mathbf{Optic}^2_\mathscr{M}(S, A)
\end{align*}
by
\begin{align*}
\mathsf{outside}(\rep{l}{r}) &= l;r \\
\mathsf{once}(\rep{l}{r}) &= \repthree{l}{\mathrm{id}_{M\cdot A}}{r} \\
\mathsf{twice}(\rep{l}{r}) &= \repthree{l}{r;l}{r}
\end{align*}
\begin{definition}
An optic $p : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ is \emph{lawful} if
\begin{align*}
\mathsf{outside}(p) &= \mathrm{id}_S \\
\mathsf{once}(p) &= \mathsf{twice}(p)
\end{align*}
\end{definition}
Returning to ordinary lenses, we can show that this is equivalent to the laws we expect.
\begin{proposition}\label{prop:lawful-lens-laws}
A concrete lens described by $\textsc{Get}$ and $\textsc{Put}$ is lawful (in our sense) iff it obeys the three concrete lens laws.
\end{proposition}
\begin{proof}
We begin by giving $\mathbf{Optic}^2_\times(S, A)$ the same treatment as we did $\mathbf{Optic}_\times(S, A)$. Using the universal property of the product and Yoneda reduction twice each, we have:
\begin{align*}
\mathbf{Optic}^2_\times(S, A)
&= \int^{M_1, M_2 \in \mathscr{C}} \mathscr{C}(S, M_1 \times A) \times \mathscr{C}(M_1 \times A, M_2 \times A) \times \mathscr{C}(M_2 \times A, S) \\
&\cong \int^{M_1, M_2 \in \mathscr{C}} \mathscr{C}(S, M_1) \times \mathscr{C}(S, A) \times \mathscr{C}(M_1 \times A, M_2 \times A) \times \mathscr{C}(M_2 \times A, S) \\
&\cong \int^{M_2 \in \mathscr{C}} \mathscr{C}(S, A) \times \mathscr{C}(S \times A, M_2 \times A) \times \mathscr{C}(M_2 \times A, S) \\
&\cong \int^{M_2 \in \mathscr{C}} \mathscr{C}(S, A) \times \mathscr{C}(S \times A, M_2) \times \mathscr{C}(S \times A, A) \times \mathscr{C}(M_2 \times A, S) \\
&\cong \mathscr{C}(S, A) \times \mathscr{C}(S \times A, A) \times \mathscr{C}(S \times A \times A, S)
\end{align*}
Written equationally, the isomorphism $\Phi : \mathbf{Optic}^2_\times(S, A) \to \mathscr{C}(S, A) \times \mathscr{C}(S \times A, A) \times \mathscr{C}(S \times A \times A, S)$ is given by:
\begin{align*}
\Phi(\repthree{l}{c}{r}) = (\quad&l;\pi_2, \\
&(l;\pi_1 \times A);c;\pi_2, \\
&((l;\pi_1 \times A);c;\pi_1 \times A);r \quad )
\end{align*}
Now suppose we are given a lens $p$ that corresponds concretely to $(\textsc{Get}, \textsc{Put})$, so $p = \rep{[\mathrm{id}_S, \textsc{Get}]}{\textsc{Put}}$. Evaluating $\mathsf{outside}$ on this gives:
\begin{align*}
\mathsf{outside}(\rep{[\mathrm{id}_S, \textsc{Get}]}{\textsc{Put}}) = [\mathrm{id}_S, \textsc{Get}];\textsc{Put}
\end{align*}
so requiring $\mathsf{outside}(p) = \mathrm{id}_S$ is precisely the $\textsc{Get}\textsc{Put}$ law.
We now have to slog through evaluating $\Phi(\mathsf{once}(p))$ and $\Phi(\mathsf{twice}(p))$.
\begingroup
\allowdisplaybreaks
\begin{alignat*}{3}
\Phi(\mathsf{once}(\rep{[\mathrm{id}_S, \textsc{Get}]}{\textsc{Put}})) &=
\Phi(&&\repthree{[\mathrm{id}_S, \textsc{Get}]}{\mathrm{id}_{S \times A}}{\textsc{Put}}) \\
&= (&& [\mathrm{id}_S, \textsc{Get}];\pi_2, \\
&&& ( [\mathrm{id}_S, \textsc{Get}];\pi_1 \times A);\mathrm{id}_{S \times A};\pi_2, \\
&&& (( [\mathrm{id}_S, \textsc{Get}];\pi_1 \times A);\mathrm{id}_{S \times A};\pi_1 \times A) ; \textsc{Put} \quad) \\
%
&= (&&\textsc{Get}, \\
&&& \pi_2, \\
&&& \pi_{1,3} ;\textsc{Put} \quad) \\
\Phi(\mathsf{twice}(\rep{[\mathrm{id}_S, \textsc{Get}]}{\textsc{Put}})) &=
\Phi(&&\repthree{[\mathrm{id}_S, \textsc{Get}]}{\textsc{Put};[\mathrm{id}_S, \textsc{Get}]}{\textsc{Put}}) \\
&= (&& [\mathrm{id}_S, \textsc{Get}];\pi_2, \\
&&& ([\mathrm{id}_S, \textsc{Get}];\pi_1 \times A);\textsc{Put};[\mathrm{id}_S, \textsc{Get}];\pi_2, \\
&&& (([\mathrm{id}_S, \textsc{Get}];\pi_1 \times A);\textsc{Put};[\mathrm{id}_S, \textsc{Get}];\pi_1 \times A);\textsc{Put} \quad) \\
%
&= (&&\textsc{Get}, \\
&&& (\mathrm{id}_S \times A);\textsc{Put};\textsc{Get}, \\
&&& ((\mathrm{id}_S \times A);\textsc{Put} \times A);\textsc{Put} \quad) \\
%
&= (&&\textsc{Get}, \\
&&& \textsc{Put};\textsc{Get}, \\
&&& (\textsc{Put} \times A);\textsc{Put} \quad)
\end{alignat*}
\endgroup
So comparing component-wise, $\Phi(\mathsf{once}(p))$ being equal to $\Phi(\mathsf{twice}(p))$ is exactly equivalent to the $\textsc{Put}\textsc{Get}$ and $\textsc{Put}\textsc{Put}$ laws holding.
\end{proof}
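The equivalence of Proposition~\ref{prop:lawful-lens-laws} can be sanity-checked concretely in $\mathbf{Set}$, at least over finite state spaces. The following Python sketch (our own illustration, not part of the formal development) enumerates the three concrete laws for the lens onto the first component of a pair:

```python
# A concrete lens in Set, given by Get : S -> A and Put : S x A -> S.
# Here S = A x B and the lens focuses on the first component.
def get(s):
    return s[0]

def put(s, a):
    return (a, s[1])

def is_lawful(get, put, states, focuses):
    """Exhaustively check the three concrete lens laws."""
    for s in states:
        # GetPut: putting back what you got changes nothing.
        if put(s, get(s)) != s:
            return False
        for a in focuses:
            # PutGet: you get what you just put.
            if get(put(s, a)) != a:
                return False
            for a2 in focuses:
                # PutPut: the second put overwrites the first.
                if put(put(s, a), a2) != put(s, a2):
                    return False
    return True

states = [(x, y) for x in range(3) for y in range(3)]
focuses = list(range(3))
```

A lens that writes the focus into both components, say `put_bad = lambda s, a: (a, a)`, fails GetPut and is detected by the same check.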
We can also check which of our other basic optics are lawful.
\begin{proposition}
If $p = \rep{l}{r} : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ is an optic such that $l$ and $r$ are mutual inverses, then $p$ is lawful.
\end{proposition}
\begin{proof}
The conditions are easy to check:
\begin{align*}
\mathsf{outside}(\rep{l}{r}) &= l;r = \mathrm{id}_S \\
\mathsf{twice}(\rep{l}{r})
&= \repthree{l}{r;l}{r} \\
&= \repthree{l}{\mathrm{id}_{M\cdot A}}{r} \\
&= \mathsf{once}(\rep{l}{r})
\end{align*}
\end{proof}
\begin{corollary}\label{cor:iota-lawful}
If $f : S \to A$ and $g : A \to S$ are mutual inverses, then $\iota(f, g) : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ is a lawful optic, so $\iota$ restricts to a functor $\iota : \mathrm{Core}(\mathscr{C}) \to \mathbf{Lawful}_\mathscr{M}$. \qed
\end{corollary}
\begin{corollary}\label{cor:tautological-lawful}
For any two objects $A \in \mathscr{C}$ and $M \in \mathscr{M}$, the tautological optic $M \cdot A \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ is lawful. \qed
\end{corollary}
\begin{proposition}
A costate $p : (S, S) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (I, I)$ corresponding to a morphism $f : S \to S$ via Proposition~\ref{prop:costates} is lawful iff $f = \mathrm{id}_S$.
\end{proposition}
\begin{proof}
The first law states that $\mathsf{outside}(p) = \mathrm{id}_S$, so if $p$ is lawful we have
\[ \mathrm{id}_S = \mathsf{outside}(\rep{\rho_S^{-1}}{\rho_S;f}) = \rho_S^{-1};\rho_S;f = f \]
On the other hand, if $f = \mathrm{id}_S$ then $\rep{\rho_S^{-1}}{\rho_S}$ is lawful because its components are mutual inverses.
\end{proof}
\begin{proposition}\label{prop:lawful-category}
There is a subcategory $\mathbf{Lawful}_\mathscr{M}$ of $\mathbf{Optic}_\mathscr{M}$ given by objects of the form $(S, S)$ and lawful optics between them.
\end{proposition}
\begin{proof}
This will follow from our description of lawful profunctor optics later, but we give a direct proof. The identity optic is lawful as by definition it has a representative $\rep{\lambda_S^{-1}}{\lambda_S}$ consisting of mutual inverses. We just have to show that lawfulness is preserved under composition.
Suppose we have two lawful optics $\rep{l}{r} : R \ensuremath{\,\mathaccent\shortmid\rightarrow\,} S$ and $\rep{l'}{r'} : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ with residuals $M$ and $N$ respectively. We must show that $\rep{l;(M\cdot l')}{(M\cdot r');r}$ is also a lawful optic. Showing the first law is straightforward:
\begin{align*}
\mathsf{outside}(\rep{l; (M\cdot l')}{(M\cdot r') ; r})
&= l ; (M \cdot l') ; (M \cdot r') ; r \\
&= l ; (M \cdot (l';r')) ; r \\
&= l ; (M \cdot \mathrm{id}_{S}) ; r \\
&= l ; r \\
&= \mathrm{id}_R
\end{align*}
For the second law, we must show that
\[ \repthree{ l;(M\cdot l')}{(M\cdot r'); r;l;(M\cdot l')}{(M\cdot r') ; r} = \repthree{l;(M\cdot l')}{\mathrm{id}_{M \cdot N \cdot A}}{(M\cdot r') ; r}. \]
The idea is that, by the lawfulness of $\rep{l}{r}$ and $\rep{l'}{r'}$, there are chains of coend relations that prove
\begin{align*}
\repthree{l}{r;l}{r} &= \repthree{l}{\mathrm{id}_{M \cdot S}}{r} \\
\repthree{l'}{r';l'}{r'} &= \repthree{l'}{\mathrm{id}_{N \cdot A}}{r'}
\end{align*}
The result is achieved by splicing these chains of relations together in the following way.
Consider one of the generating relations $\repthree{l;(\phi \cdot S)}{c}{r} = \repthree{l}{(\phi \cdot S) ; c}{r}$ in $\mathbf{Optic}^2_\mathscr{M}(R, S)$, where $\phi : M \to M'$. By the functoriality of the action, we calculate:
\begin{align*}
&\repthree{l;(\phi \cdot S);(M' \cdot l')}{(M'\cdot r'); c ;(M\cdot l')}{(M\cdot r') ; r} \\
&= \repthree{l;(M \cdot l');(\phi \cdot N \cdot A)}{(M' \cdot r'); c ;(M \cdot l')}{(M \cdot r') ; r} && \text{(functoriality)} \\
&= \repthree{l;(M \cdot l')}{(\phi \cdot N \cdot A);(M' \cdot r'); c ;(M \cdot l')}{(M \cdot r') ; r} && \text{(coend relation)} \\
&= \repthree{l;(M \cdot l')}{(M \cdot r');(\phi \cdot S); c ;(M \cdot l')}{(M \cdot r') ; r} && \text{(functoriality)}
\end{align*}
And similarly for the other generating relation, $\repthree{l}{c;(\phi \cdot S)}{r} = \repthree{l}{c}{(\phi \cdot S); r}$.
So indeed by replicating the same chain of relations that proves $\repthree{l}{r;l}{r} =\repthree{l}{\mathrm{id}_{M \cdot S}}{r}$, we see
\begin{align*}
\repthree{l;(M \cdot l')}{(M \cdot r');r;l;(M \cdot l')}{(M \cdot r') ; r}
&= \repthree{l;(M \cdot l')}{(M \cdot r');\mathrm{id}_{M \cdot S};(M \cdot l')}{(M \cdot r') ; r} \\
&= \repthree{l;(M \cdot l')}{(M \cdot (r';l'))}{(M \cdot r') ; r}.
\end{align*}
Now that the $r;l$ in the center has been cleared away, we turn to the chain of relations proving $\repthree{l'}{r';l'}{r'} = \repthree{l'}{\mathrm{id}_{N\cdot A}}{r'}$. A generating relation $\repthree{l';(\psi \cdot A)}{c'}{r'} = \repthree{l'}{(\psi \cdot A) ; c'}{r'}$ in $\mathbf{Optic}^2_\mathscr{M}(S, A)$ implies that
\begin{align*}
\repthree{l;(M \cdot l';\psi \cdot A)}{M \cdot c'}{(M \cdot r') ; r}
&= \repthree{l;(M\cdot l');(M \cdot \psi \cdot A)}{M \cdot c'}{(M \cdot r') ; r} \\
&= \repthree{l;(M \cdot l')}{(M \cdot \psi \cdot A) ; (M \cdot c')}{(M\cdot r') ; r} \\
&= \repthree{l;(M \cdot l')}{M \cdot ((\psi \cdot A);c')}{(M \cdot r') ; r}
\end{align*}
And similarly for the generating relation on the other side. So again we can replicate the chain of relations proving $\repthree{l'}{r';l'}{r'} = \repthree{l'}{\mathrm{id}_{N\cdot A}}{r'}$ to show that
\begin{align*}
\repthree{l;(M \cdot l')}{(M \cdot (r';l'))}{(M \cdot r') ; r}
&= \repthree{l;(M \cdot l')}{M \cdot \mathrm{id}_{N\cdot A}}{(M \cdot r') ; r} \\
&= \repthree{l;(M \cdot l')}{\mathrm{id}_{M \cdot N \cdot A}}{(M \cdot r') ; r}
\end{align*}
as required. We conclude that composition preserves lawfulness, so $\mathbf{Lawful}_\mathscr{M}$ is indeed a subcategory of $\mathbf{Optic}_\mathscr{M}$.
\end{proof}
\begin{proposition}
In the case that $\mathscr{C}$ is symmetric monoidal and $\mathscr{M} = \mathscr{C}$ acts by left-tensor, $\mathbf{Lawful}_\otimes$ is symmetric monoidal with the unswitched tensor.
\end{proposition}
This would of course make no sense with the switched tensor, as the tensor of two objects would typically no longer be of the form $(X, X)$.
\begin{proof}
Due to Corollary~\ref{cor:iota-lawful}, the structure maps of $\mathbf{Optic}_\otimes$ are all lawful. We just have to check that $\otimes : \mathbf{Optic}_\otimes \times \mathbf{Optic}_\otimes \to \mathbf{Optic}_\otimes$ restricts to a functor on $\mathbf{Lawful}_\otimes$.
Given two lawful optics $p : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ and $q : T \ensuremath{\,\mathaccent\shortmid\rightarrow\,} B$, the first law for $p \otimes q$ follows immediately from the first law for $p$ and $q$. To prove the second law, we follow the same strategy as used in the previous proposition: the two chains of relations proving $p$ and $q$ lawful can be combined to prove the law for $p \otimes q$.
\end{proof}
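Concretely, in $\mathbf{Set}$ with the product action, the unswitched tensor of two lenses acts componentwise on a product of sources, and the laws for the tensor visibly reduce to the laws of the factors. A minimal Python sketch of ours (representing a concrete lens as a $(\textsc{Get}, \textsc{Put})$ pair of functions; the names are hypothetical):

```python
# Tensor of two concrete lenses: act componentwise on pairs of sources.
# Each lens is a (get, put) pair with get : S -> A and put : S x A -> S.
def tensor(lens1, lens2):
    get1, put1 = lens1
    get2, put2 = lens2
    get = lambda st: (get1(st[0]), get2(st[1]))
    put = lambda st, ab: (put1(st[0], ab[0]), put2(st[1], ab[1]))
    return (get, put)

# Two component lenses on pairs, and their tensor, which views and
# updates both components of a pair of pairs simultaneously.
fst = (lambda s: s[0], lambda s, a: (a, s[1]))
snd = (lambda s: s[1], lambda s, a: (s[0], a))
both = tensor(fst, snd)
```

If each factor satisfies GetPut, PutGet and PutPut, the tensor does too, with each law holding component by component.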
\begin{proposition}
Suppose $F : (\mathscr{M}, \mathscr{C}) \to (\mathscr{N}, \mathscr{D})$ is a morphism of actions. Then $\mathbf{Optic}(F) : \mathbf{Optic}_\mathscr{M} \to \mathbf{Optic}_\mathscr{N}$ restricts to a functor $\mathbf{Lawful}_\mathscr{M} \to \mathbf{Lawful}_\mathscr{N}$.
\end{proposition}
\begin{proof}
If $p = \rep{l}{r}$ is lawful, then verifying the first equation is easy:
\begin{align*}
\mathsf{outside}(\mathbf{Optic}(F)(\rep{l}{r}))
&= \mathsf{outside}\left(\rep{(Fl);\phi^{-1}_{M,A}}{\phi_{M,A};(Fr)}\right) \\
&= (Fl);\phi^{-1}_{M,A};\phi_{M,A};(Fr)\\
&= (Fl);(Fr)\\
&= \mathrm{id}_{FS}
\end{align*}
where $\phi_{M,A} : (F^\bullet M) \cdot (FA) \to F(M \cdot A)$ is the structure map that commutes $F$ with the actions.
For the second equation, consider a generating relation $\repthree{l;(\psi \cdot A)}{c}{r} = \repthree{l}{(\psi \cdot A) ; c}{r}$. We can use the naturality of $\phi$ to show
\begin{align*}
\repthree{F(l;\psi \cdot A);\phi^{-1}_{M,A}}{\phi_{M,A};Fc;\phi^{-1}_{N,A}}{\phi_{N,A};Fr}
&= \repthree{Fl;F(\psi \cdot A);\phi^{-1}_{M,A}}{\phi_{M,A};Fc;\phi^{-1}_{N,A}}{\phi_{N,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M',A};((F^\bullet \psi) \cdot FA)}{\phi_{M,A};Fc;\phi^{-1}_{N,A}}{\phi_{N,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M',A}}{((F^\bullet \psi) \cdot FA);\phi_{M,A};Fc;\phi^{-1}_{N,A}}{\phi_{N,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M',A}}{\phi_{M',A};F(\psi \cdot A);Fc;\phi^{-1}_{N,A}}{\phi_{N,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M',A}}{\phi_{M',A};F(\psi \cdot A;c);\phi^{-1}_{N,A}}{\phi_{N,A};Fr}
\end{align*}
Similarly,
\begin{align*}
\repthree{Fl;\phi^{-1}_{M,A}}{\phi_{M,A};Fc;\phi^{-1}_{N,A}}{\phi_{N,A};F((\psi \cdot A);r)}
&= \repthree{Fl;\phi^{-1}_{M,A}}{\phi_{M,A};F(c;(\psi \cdot A));\phi^{-1}_{N,A}}{\phi_{N,A};Fr}
\end{align*}
If $\rep{l}{r}$ is lawful, we can therefore replicate the chain of relations proving $\mathsf{twice}(\rep{l}{r}) = \mathsf{once}(\rep{l}{r})$ to show:
\begin{align*}
\mathsf{twice}(\mathbf{Optic}(F)(\rep{l}{r}))
&= \repthree{Fl;\phi^{-1}_{M,A}}{\phi_{M,A};F(r;l);\phi^{-1}_{M,A}}{\phi_{M,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M,A}}{\phi_{M,A};F(\mathrm{id}_{M \cdot A});\phi^{-1}_{M,A}}{\phi_{M,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M,A}}{\mathrm{id}_{(F^\bullet M) \cdot (FA)}}{\phi_{M,A};Fr} \\
&= \mathsf{once}(\mathbf{Optic}(F)(\rep{l}{r}))
\end{align*}
\end{proof}
We end with some commentary on the optic laws. The requirement that $\mathsf{once}(p) = \mathsf{twice}(p)$ is mysterious, but there are sufficient conditions that are easier to verify.
\begin{proposition}
Let $\rep{l}{r} : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ be an optic. If $l;r = \mathrm{id}_S$ and $r;l = \phi \cdot A$ for some $\phi : M \to M$ in $\mathscr{M}$, then $\rep{l}{r}$ is lawful.
\end{proposition}
\begin{proof}
The statement $\mathsf{outside}(\rep{l}{r}) = l;r = \mathrm{id}_S$ is exactly the first law. And for the second, we verify:
\begin{align*}
\mathsf{twice}(\rep{l}{r})
&= \repthree{l}{r;l}{r} \\
&= \repthree{l}{\phi \cdot A}{r} && \text{($r;l = \phi \cdot A$)}\\
&= \repthree{l ; (\phi \cdot A)}{\mathrm{id}_{M\cdot A}}{r} && \text{(coend relation)} \\
&= \repthree{l;r;l}{\mathrm{id}_{M\cdot A}}{r} && \text{($r;l = \phi \cdot A$ again)}\\
&= \repthree{l}{\mathrm{id}_{M\cdot A}}{r} && \text{($l;r = \mathrm{id}_S$)}\\
&= \mathsf{once}(\rep{l}{r})
\end{align*}
\end{proof}
Even if $r;l = \phi \cdot A$ for some $\phi$, the same is not necessarily true for other representatives of the same optic. Let $\mathsf{inside} : \mathbf{Optic}_\mathscr{M}(S, A) \to \int^{M \in \mathscr{M}} \mathscr{C}(M \cdot A, M \cdot A)$ be the map induced by $\mathsf{inside}(\rep{l}{r}) = \langle r ; l \rangle$. We might ask that instead of requiring $r;l = \phi \cdot A$ exactly, we have $\langle r ; l \rangle = \langle \phi \cdot A \rangle$ in $\int^{M \in \mathscr{M}} \mathscr{C}(M \cdot A, M \cdot A)$. In fact, this is equivalent:
\begin{proposition}\label{prop:onthenose}
Suppose $p : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ satisfies $\mathsf{outside}(p) = \mathrm{id}_S$ and $\mathsf{inside}(p) = \langle \phi \cdot A \rangle$. Then there exists a representative $\rep{l}{r}$ such that $r;l = \psi \cdot A$ on the nose for some (possibly different) $\psi : M \to M$.
\end{proposition}
\begin{proof}
The generating relation for $\int^{M \in \mathscr{M}} \mathscr{C}(M \cdot A, M \cdot A)$ is
\[ \langle f; (\phi \cdot A) \rangle = \langle (\phi \cdot A); f \rangle \]
whenever $f : N \cdot A \to M \cdot A$ and $\phi : M \to N$. This relation $f; (\phi \cdot A) \rightsquigarrow (\phi \cdot A); f$ is not likely to be symmetric or transitive in general. Note that if $f; (\phi \cdot A) \rightsquigarrow (\phi \cdot A); f$ then $f ;(\phi \cdot A) ;f; (\phi \cdot A) \rightsquigarrow (\phi \cdot A) ;f ;(\phi \cdot A); f$. More generally, if $f \rightsquigarrow g$ then $f^n \rightsquigarrow g^n$ for any $n$.
Now let $\rep{l}{r}$ be a representative for $p$, so $l;r = \mathrm{id}_S$ and $\langle r;l \rangle = \langle \psi \cdot A\rangle$. There therefore exists a finite chain of relations $r;l = u_1 \leftrightsquigarrow \dots \leftrightsquigarrow u_n = \psi \cdot A$. Suppose the first relation faces rightward, so there exists a $k$ and $\phi$ with $r;l = (\phi \cdot A);k$ and $u_2 = k;(\phi \cdot A)$. Define $l' = l;(\phi \cdot A)$ and $r' = k;r$. Then:
\begin{align*}
\rep{l'}{r'}
&= \rep{l ; (\phi \cdot A)}{k ; r} \\
&= \rep{l}{(\phi \cdot A) ; k ; r} \\
&= \rep{l}{r ; l; r} \\
&= \rep{l}{r}
\end{align*}
This new representative satisfies
\begin{align*}
l';r' &= l;(\phi \cdot A);k;r = l;r;l;r = \mathrm{id}_S \\
r';l' &= k;r;l;(\phi \cdot A) = k;(\phi \cdot A);k;(\phi \cdot A) = u_2^2
\end{align*}
A symmetric argument shows that if instead the relation faces leftward, so $r;l \leftsquigarrow u_2$, there again exists $l'$ and $r'$ so that $\rep{l}{r} = \rep{l'}{r'}$, and both $l';r' = \mathrm{id}_S$ and $r';l' = u_2^2$.
We can now inductively apply the above argument to the shorter chain \[r';l' = u_2^2 \leftrightsquigarrow \dots \leftrightsquigarrow u_n^2 = {(\psi \cdot A)}^2 = \psi^2 \cdot A,\] obtained by squaring each morphism in the original chain, until we are left with a representative $\rep{l^*}{r^*}$ such that $r^*;l^* = \psi^N \cdot A$ for some $N>0$. This pair $\rep{l^*}{r^*}$ is the required representative.
\end{proof}
The above argument has a similar form to those that appear in~\cite{OnTheTrace}, which considered (among other things) coends of the form $\int^{c \in \mathscr{C}} \mathscr{C}(c, Fc)$ for an endofunctor $F : \mathscr{C} \to \mathscr{C}$.
\section{Examples}\label{sec:examples}
The general pattern is as follows. Once we choose a particular monoidal action $\mathscr{M} \to [\mathscr{C}, \mathscr{C}]$, we find an isomorphism between the set of optics $(S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ and a set $\mathbb{C}((S, S'), (A, A'))$ that is easier to describe. We follow~\cite{ProfunctorOptics} (and others) in calling elements of $\mathbb{C}$ \emph{concrete optics}. There is no canonical choice for this set; our primary goal is to find a way to eliminate the coend so that we no longer have to deal with equivalence classes of morphisms.
Ideally, we then also find a simplified description $\mathbb{C}^2(S, A)$ for the set $\mathbf{Optic}^2_\mathscr{M}(S, A)$. We can then ``read off'' what conditions are needed on a concrete optic to ensure that the corresponding element of $\mathbf{Optic}_\mathscr{M}(S, A)$ is lawful. We will call these conditions the \emph{concrete laws}.
It is worth emphasising that once a monoidal action has been chosen and a concrete description of the corresponding optics found, no further work is needed to show that the result forms a category with a subcategory of lawful optics. This is especially useful when devising new optic variants, as we do later.
\subsection{Lenses}
The founding example, that of lenses, has already been discussed in the previous sections. We add a couple of remarks.
\begin{remark}\label{lens-iota-not-faithful}
For the category of sets, the functor $\iota : \mathbf{Set} \times \mathbf{Set}^\mathrm{op} \to \mathbf{Optic}_\times$ is not faithful. The problem is the empty set: the functor $0 \times (-)$ is not faithful. Any pair of maps $f : 0 \to A$, $g : A' \to S'$ yield equivalent optics $\iota(f, g)$, as the corresponding $\textsc{Get}$ and $\textsc{Put}$ functions must be the unique maps from $0$.
\end{remark}
\begin{remark}
In the case that $\mathscr{C}$ is cartesian closed, $\mathbf{Optic}_\times$ is monoidal closed via the astonishing formula
\begin{align*}
[(S, S'), (A, A')] := (\underline{\C}(S, A) \times \underline{\C}(S \times A', S'), \, S \times A')
\end{align*}
where $\underline{\C}(-, -)$ denotes the internal hom. For a proof see~\cite[Section 1.2]{DialecticaCategories}. This cannot be extended to non-cartesian closed categories: the isomorphism
\begin{align*}
\mathbf{Lens}((S, S') \otimes (T, T'), (A, A')) \cong \mathbf{Lens}((S, S'), [(T, T'), (A, A')])
\end{align*}
uses the diagonal maps of $\mathscr{C}$ in an essential way.
\end{remark}
If we ask more of our category $\mathscr{C}$, we can show that a lens $S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ implies the existence of a complement $C$ with $S \cong C \times A$. This doesn't appear to follow purely from the concrete lens laws---an argument that a definition of lawfulness based on constant complements is not the correct generalisation. For completeness we include a proof in our notation.
\begin{proposition}[{Generalisation of~\cite[Corollary 13]{AlgebrasAndUpdateStrategies}}]
Suppose $\mathscr{C}$ has pullbacks and that there is a morphism $x : 1 \to A$. If $p : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ is lawful then there exists $C \in \mathscr{C}$ and mutual inverses $l : S \to C \times A$ and $r : C \times A \to S$ so that $p = \rep{l}{r}$.
\end{proposition}
\begin{proof}
Set $C$ to be the pullback of $\textsc{Get}$ along $x$, so there is a map $i : C \to S$ with $\textsc{Get} \, i = x !_C$. There is also a map $j : S \to C$ induced by the following diagram:
\[
\begin{tikzcd}
S \ar[ddr, bend right = 20] \ar[dr, "j", dashed] \ar[r, "{[\mathrm{id}_S, x !_S]}"] & S \times A \ar[dr, "\textsc{Put}"] & \\
& C \ar[r, "i"] \ar[d] \arrow[dr, phantom, "\lrcorner", very near start] & S \ar[d, "\textsc{Get}"] \\
& 1\ar[r, "x", swap] & A
\end{tikzcd}
\]
which commutes by the $\textsc{Put}\textsc{Get}$ law. Note that $ji = \mathrm{id}_C$ by the universal property of pullbacks.
Now take $l = [j,\textsc{Get}] : S \to C \times A$ and $r = \textsc{Put} (i \times A) : C \times A \to S$. That they are mutual inverses is easily checked:
\begin{align*}
\textsc{Put} (i \times A)[j,\textsc{Get}] &= \textsc{Put} [ij,\textsc{Get}] \\
&= \textsc{Put} [\textsc{Put} [\mathrm{id}_S, x!_S],\textsc{Get}] && \text{(by definition of $j$)} \\
&= \textsc{Put} [\mathrm{id}_S,\textsc{Get}] && \text{(by $\textsc{Put}\fput$)} \\
&= \mathrm{id}_S && \text{(by $\textsc{Get}\textsc{Put}$)}
\intertext{and}
[j,\textsc{Get}]\textsc{Put} (i \times A) &= [j\textsc{Put} (i \times A),\textsc{Get}\,\textsc{Put} (i \times A)] && \text{(by universal property of product)} \\
&= [j\textsc{Put} (i \times A), \pi_2 (i \times A)] && \text{(by $\textsc{Put}\textsc{Get}$)} \\
&= [j\textsc{Put} (i \times A), \pi_2] && \\
&= [jij\textsc{Put} (i \times A), \pi_2] && \\
&= [j\textsc{Put} [\mathrm{id}_S,x !_S] \textsc{Put} (i \times A), \pi_2] && \\
&= [j\textsc{Put} [\mathrm{id}_S, x !_S] \pi_1 (i \times A), \pi_2] && \text{(by $\textsc{Put}\fput$)}\\
&= [jij \pi_1 (i \times A), \pi_2] && \\
&= [jiji \pi_1, \pi_2] && \\
&= [\pi_1, \pi_2] && \\
&= \mathrm{id}_{C \times A}
\end{align*}
Finally, the coend relation gives that \[\rep{[j,\textsc{Get}]}{\textsc{Put} (i \times A)} = \rep{(i \times A)[j,\textsc{Get}]}{\textsc{Put}} = \rep{[\mathrm{id}_S, \textsc{Get}]}{\textsc{Put}}\] as elements of $\mathbf{Optic}_\times(S, A)$.
\end{proof}
\begin{remark}
Much of the work on bidirectional transformations~\cite{CombinatorsForBidirectionalTreeTransformations} considers lenses that are only `well-behaved', not `very well-behaved': they obey the $\textsc{Put}\textsc{Get}$ and $\textsc{Get}\textsc{Put}$ laws but not the $\textsc{Put}\fput$ law.
For example, the ``change counter'' lens $\mathbb{N} \times A \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ from~\cite{AClearPictureOfLensLaws} has $\textsc{Put}$ and $\textsc{Get}$ given by:
\begin{align*}
\textsc{Get}(n, a) &= a \\
\textsc{Put}((n, a), a') &= \begin{cases}
(n, a) & \text{if } a = a' \\
(n+1, a') & \text{otherwise}
\end{cases}
\end{align*}
This example is typical of (merely) well-behaved lenses: there is metadata stored alongside the target of a lens that mutates as the lens is used.
Lenses that satisfy only these two laws correspond to pairs $\rep{l}{r}$ such that $rl = \mathrm{id}_S$ and $\pi_2lr = \pi_2$. This condition seems unavoidably tied to the product structure on $\mathscr{C}$; there is no obvious way to generalise this to other optic variants.
\end{remark}
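The change-counter example can be checked directly. Below is a minimal sketch in Haskell, using plain $\textsc{Get}$/$\textsc{Put}$ functions rather than any particular lens encoding; the names \mintinline{haskell}{get} and \mintinline{haskell}{put} are ours:
\begin{minted}{haskell}
-- The "change counter" lens N x A -|-> A, written as plain functions.
get :: (Int, a) -> a
get (_, a) = a

put :: Eq a => (Int, a) -> a -> (Int, a)
put (n, a) a'
  | a == a'   = (n, a)       -- no change: leave the counter alone
  | otherwise = (n + 1, a')  -- record that an update happened
\end{minted}
One can verify that the $\textsc{Put}\textsc{Get}$ and $\textsc{Get}\textsc{Put}$ laws hold, while $\textsc{Put}\fput$ fails: setting twice increments the counter twice, setting once increments it once.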
\subsection{Prisms}
Prisms are dual to lenses:
\begin{definition}
Suppose $\mathscr{C}$ has finite coproducts. The \emph{category of prisms} is the category of optics with respect to the coproduct $\sqcup$: $\mathbf{Prism} \mathrel{\vcentcolon=} \mathbf{Optic}_\sqcup$.
\end{definition}
Just as optics for $\times$ correspond to a pair of maps $\textsc{Get} : S \to A$ and $\textsc{Put} : S \times A \to S$, optics for $\sqcup$ correspond to pairs of maps $\textsc{Review} : A \to S$ and $\textsc{Matching} : S \to S \sqcup A$. These names are taken from the Haskell \texttt{lens}{} library.
\begin{align*}
\mathbf{Prism}((S, S'), (A, A')) &= \int^{M \in \mathscr{C}} \mathscr{C}(S, M \sqcup A) \times \mathscr{C}(M \sqcup A', S') \\
&\cong \int^{M \in \mathscr{C}} \mathscr{C}(S, M \sqcup A) \times \mathscr{C}(M, S') \times \mathscr{C}(A, S') && \text{(universal property of coproduct)} \\
&\cong \mathscr{C}(S, S' \sqcup A) \times \mathscr{C}(A', S') && \text{(Yoneda reduction)}
\end{align*}
If we are given a prism $\rep{l}{r} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ then the associated $\textsc{Review}$ and $\textsc{Matching}$ morphisms are given by $\textsc{Review} = r \mathrm{inr}$ and $\textsc{Matching} = (r\mathrm{inl} \sqcup A)l$.
The concrete laws for prisms are the obvious duals to the lens laws:
\begin{align*}
\textsc{Matching} \; \textsc{Review} &= \mathrm{inr} \\
[\mathrm{id}_S, \textsc{Review}] \textsc{Matching} &= \mathrm{id}_S \\
(\textsc{Matching} \sqcup A) \textsc{Matching} &= \mathrm{in}_{1,3} \, \textsc{Matching}
\end{align*}
In the \texttt{lens}{} library documentation the third law is missing, on account of the following:
\begin{proposition}
When $\mathscr{C} = \mathbf{Set}$, the third law is implied by the other two.
\end{proposition}
\begin{proof}
The key is that for any map $f : X \to Y$ in $\mathbf{Set}$, the codomain $Y$ is equal to the union of $\im f$ and its complement. The first law implies that $\textsc{Review}$ is injective, so $S \cong C \sqcup A$ for some complement $C$. Identifying $A$ with its image in $S$, the second law implies that if $a\in A \subset S$ then $\textsc{Matching}(a) = \mathrm{inr}(a)$ and if $c\in C \subset S$ then $\textsc{Matching}(c) = \mathrm{inl}(c)$. The third law can then be verified pointwise by checking both cases $a\in A \subset S$ and $c\in C \subset S$ separately.
\end{proof}
The following is then exactly the dual of Proposition~\ref{prop:lawful-lens-laws}.
\begin{proposition}\label{prop:lawful-prism-laws}
If $p : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ is a lawful prism then the associated $\textsc{Matching}$ and $\textsc{Review}$ functions satisfy the concrete prism laws. \qed
\end{proposition}
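For a concrete illustration, consider the prism onto the \mintinline{haskell}{Left} constructor of \mintinline{haskell}{Either}. This is a sketch with our own names \mintinline{haskell}{review} and \mintinline{haskell}{matching}, following the concrete form above:
\begin{minted}{haskell}
-- Prism (Either a c) -|-> a onto the Left constructor.
review :: a -> Either a c
review = Left

matching :: Either a c -> Either (Either a c) a
matching (Left a)  = Right a          -- the match succeeds
matching (Right c) = Left (Right c)   -- no match: return the source
\end{minted}
In this encoding the first two concrete laws read \mintinline{haskell}{matching . review == Right} and \mintinline{haskell}{either id review . matching == id}, both of which can be verified by cases.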
\subsection{Isos}
For any category $\mathscr{C}$, there is a unique action of the terminal category $1$ on $\mathscr{C}$ that fixes every object.
\begin{proposition}
The category of optics for this action is isomorphic to $\mathscr{C} \times \mathscr{C}^\mathrm{op}$.
\end{proposition}
\begin{proof}
\begin{align*}
\mathbf{Optic}_1((S, S'), (A, A')) &= \int^{M \in 1} \mathscr{C}(S, M \cdot A) \times \mathscr{C}(M \cdot A', S') \\
&\cong \mathscr{C}(S, \star \cdot A) \times \mathscr{C}(\star \cdot A', S') \\
&\cong \mathscr{C}(S, A) \times \mathscr{C}(A', S')
\end{align*}
where $\star$ denotes the object of $1$. Composition in $\mathbf{Optic}_1((S, S'), (A, A'))$ does indeed correspond to composition in $\mathscr{C} \times \mathscr{C}^\mathrm{op}$.
\end{proof}
\begin{proposition}
An iso $\rep{l}{r}$ is lawful iff (as expected) $l$ and $r$ are mutual inverses.
\end{proposition}
\begin{proof}
$\mathbf{Optic}^2_\mathscr{M}(S, A)$ specialises in this case to just
\[ \mathscr{C}(S, A) \times \mathscr{C}(A, A) \times \mathscr{C}(A, S) \]
The condition $\mathsf{outside}(\rep{l}{r}) = \mathrm{id}_S$ is the claim that $rl = \mathrm{id}_S$, and $\mathsf{once}(\rep{l}{r}) = (l, \mathrm{id}_A, r)$ is equal to $\mathsf{twice}(\rep{l}{r}) = (l, lr, r)$ iff $lr = \mathrm{id}_A$.
\end{proof}
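In Haskell, a lawful iso is simply a pair of mutually inverse functions; for instance, reassociation of nested pairs (the names are ours):
\begin{minted}{haskell}
-- A lawful iso (a, (b, c)) -|-> ((a, b), c): l and r are mutual inverses.
assocL :: (a, (b, c)) -> ((a, b), c)
assocL (a, (b, c)) = ((a, b), c)

assocR :: ((a, b), c) -> (a, (b, c))
assocR ((a, b), c) = (a, (b, c))
\end{minted}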
\subsection{Coalgebraic Optics}\label{sec:coalgebraic}
There is a common pattern in many of the examples to follow: for every object $A \in \mathscr{C}$, the evaluation-at-$A$ functor $- \cdot A : \mathscr{M} \to \mathscr{C}$ has a right adjoint, say $R_A : \mathscr{C} \to \mathscr{M}$. To fix notation, let \[\overrightarrow{(-)} : \mathscr{C}(F \cdot A, S) \to \mathscr{M}(F, R_{A} S) : \overleftarrow{(-)}\] denote the homset bijection, so the unit and counit are:
\begin{equation*}
\begin{aligned}[t]
\eta_F &: F \to R_A (F \cdot A) \\
\eta_F &:= \overrightarrow{\mathrm{id}_{F \cdot A}}
\end{aligned}
\qquad\qquad\qquad
\begin{aligned}[t]
\varepsilon_{S} &: (R_A S) \cdot A \to S \\
\varepsilon_{S} &:= \overleftarrow{\mathrm{id}_{R_{A} S}}
\end{aligned}
\end{equation*}
It is shown in~\cite[Section 6]{ANoteOnActions} that, at least in the case $\mathscr{M}$ is right closed, to give such an action is equivalent to giving a ``copowered $\mathscr{M}$-category $\mathscr{C}$''. In most cases of interest to us, however, $\mathscr{M}$ is not right closed.
When we have such a right adjoint, we can always find a concrete description of an optic:
\begin{align*}
\mathbf{Optic}_\mathscr{M}((S, S'), (A, A')) &= \int^{F \in \mathscr{M}} \mathscr{C}(S, F\cdot A) \times \mathscr{C}(F\cdot A', S') \\
&\cong \int^{F \in \mathscr{M}} \mathscr{C}(S, F \cdot A) \times \mathscr{M}(F, R_{A'} S') \\
&\cong \mathscr{C}(S, (R_{A'} S') \cdot A)
\end{align*}
A concrete optic is therefore a map $\textsc{Unzip} : S \to (R_{A'} S') \cdot A$. The above isomorphism sends $\textsc{Unzip}$ to the element $\rep{\textsc{Unzip}}{\varepsilon_{S'}}$. In the other direction, given $\rep{l}{r}$ we have $\textsc{Unzip} = (\overrightarrow{r} \cdot A)l$.
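For lenses ($\mathscr{M} = \mathscr{C}$ acting by $\times$ on $\mathbf{Set}$), the right adjoint to $- \times A$ is $(-)^A$, and the comonad $X \mapsto (R_A X) \cdot A$ is the familiar store comonad. A minimal sketch in Haskell (the names are ours):
\begin{minted}{haskell}
-- The store comonad X |-> (A -> X, A), i.e. (R_A X) . A for the
-- cartesian action, with Unzip built from Get/Put.
data Store a s = Store (a -> s) a

counit :: Store a s -> s
counit (Store f a) = f a

unzipL :: (s -> a) -> (s -> a -> s) -> s -> Store a s
unzipL get put s = Store (put s) (get s)
\end{minted}
Here the coalgebra counit law \mintinline{haskell}{counit . unzipL get put == id} is exactly the $\textsc{Get}\textsc{Put}$ law.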
\begin{theorem}\label{thm:optics-are-coalgebras}
A concrete optic $\textsc{Unzip} : S \to (R_{A} S) \cdot A$ is lawful iff it is a coalgebra for the comonad $X \mapsto (R_{A} X) \cdot A$.
\end{theorem}
\begin{proof}
By adjointness and Yoneda reduction we have an isomorphism \[\Phi : \mathbf{Optic}^2_\mathscr{M}(S, A) \to \mathscr{C}(S, R_A (R_A S \cdot A) \cdot A)\] given by
\begin{align*}
&\mathbf{Optic}^2_\mathscr{M}(S, A) \\
&= \int^{M_1, M_2 \in \mathscr{M}} \mathscr{C}(S, M_1 \cdot A) \times \mathscr{C}(M_1 \cdot A, M_2 \cdot A) \times \mathscr{C}(M_2 \cdot A, S) \\
&\cong \int^{M_1, M_2 \in \mathscr{M}} \mathscr{C}(S, M_1 \cdot A) \times \mathscr{M}(M_1, R_A (M_2 \cdot A)) \times \mathscr{M}(M_2, R_A S) \\
&\cong \int^{M_1 \in \mathscr{M}} \mathscr{C}(S, M_1 \cdot A) \times \mathscr{M}(M_1, R_A (R_A S \cdot A)) \\
&\cong \mathscr{C}(S, R_A (R_A S \cdot A) \cdot A)
\end{align*}
which evaluated on an element $\repthree{l}{c}{r}$ is
\begin{align*}
\Phi(\repthree{l}{c}{r}) &= (\overrightarrow{(\overrightarrow{r} \cdot A)c} \cdot A)l
\end{align*}
So now interpreting the optic laws, we find
\[\mathsf{outside}(\rep{\textsc{Unzip}}{\varepsilon_{S} }) = \varepsilon_{S} \; \textsc{Unzip} = \mathrm{id}_S \] is exactly the coalgebra counit law, and equality of
\begin{align*}
\Phi(\mathsf{once}(\rep{\textsc{Unzip}}{\varepsilon_S}))
&= \Phi(\repthree{\textsc{Unzip}}{\mathrm{id}_{(R_{A} S) \cdot A}}{\varepsilon_S }) \\
&= (\overrightarrow{(\overrightarrow{\varepsilon_S} \cdot A)\mathrm{id}_{(R_{A} S) \cdot A}} \cdot A)\textsc{Unzip} \\
&= (\overrightarrow{(\mathrm{id}_{R_A S} \cdot A)} \cdot A)\textsc{Unzip} \\
&= (\overrightarrow{\mathrm{id}_{R_A S \cdot A}} \cdot A)\textsc{Unzip} \\
&= (\eta_{R_A S} \cdot A) \textsc{Unzip} \\
\Phi(\mathsf{twice}(\rep{\textsc{Unzip}}{\varepsilon_S }))
&= \Phi(\repthree{\textsc{Unzip}}{\textsc{Unzip} \; \varepsilon_S}{\varepsilon_S }) \\
&= (\overrightarrow{(\overrightarrow{\varepsilon_S} \cdot A)(\textsc{Unzip} \; \varepsilon_S)} \cdot A)\textsc{Unzip} \\
&= (\overrightarrow{(\textsc{Unzip} \; \varepsilon_S)} \cdot A)\textsc{Unzip} \\
&= (R_A (\textsc{Unzip}) \cdot A)\textsc{Unzip}
\end{align*}
is exactly the coalgebra comultiplication law.
\end{proof}
\subsection{Setters}\label{sec:setters}
\begin{definition}
The \emph{category of setters} $\mathbf{Setter}_\mathscr{C}$ is the category of optics for the action of $[\mathscr{C}, \mathscr{C}]$ on $\mathscr{C}$ by evaluation.
\end{definition}
To devise the concrete form of a setter, we use the following proposition. This is a generalisation of~\cite[Proposition 2.2]{SecondOrderFunctionals}, and helps to explain why the store comonad is so important in the theory of lenses.
\begin{proposition}
If $\mathscr{C}$ is powered over $\mathbf{Set}$ then the evaluation-at-$A$ functor $-A : [\mathscr{C}, \mathscr{C}] \to \mathscr{C}$ has a right adjoint given by $S \mapsto S^{\mathscr{C}(-, A)}$.
If $\mathscr{C}$ is copowered over $\mathbf{Set}$ then $-A : [\mathscr{C}, \mathscr{C}] \to \mathscr{C}$ has a left adjoint given by $S \mapsto \mathscr{C}(A, -) \bullet S$, where $\bullet$ denotes the copower.
\end{proposition}
\begin{proof}
For the first, we have
\begin{align*}
\mathscr{C}(FA, S)
&\cong \int_X \mathbf{Set}(\mathscr{C}(X, A), \mathscr{C}(FX, S)) \\
&\cong \int_X \mathscr{C}(FX, S^{\mathscr{C}(X, A)}) \\
&\cong [\mathscr{C}, \mathscr{C}](F, S^{\mathscr{C}(-, A)})
\end{align*}
and for the second,
\begin{align*}
\mathscr{C}(S, FA)
&\cong \int_X \mathbf{Set}(\mathscr{C}(A, X), \mathscr{C}(S, FX)) \\
&\cong \int_X \mathscr{C}(\mathscr{C}(A, X) \bullet S, FX) \\
&\cong [\mathscr{C}, \mathscr{C}](\mathscr{C}(A, -) \bullet S, F)
\end{align*}
\end{proof}
Recall that any category with coproducts is copowered over $\mathbf{Set}$ and any category with products is powered over $\mathbf{Set}$.
We could immediately use the previous section to give a coalgebraic description of setters and their laws, but with a little manipulation we get a form that looks more familiar:
\begin{align*}
\mathbf{Setter}_\mathscr{C}((S, S'), (A, A')) &= \int^{F \in [\mathscr{C}, \mathscr{C}]} \mathscr{C}(S, FA) \times \mathscr{C}(FA', S') \\
&\cong \int^{F \in [\mathscr{C}, \mathscr{C}]} [\mathscr{C}, \mathscr{C}](\mathscr{C}(A, -) \bullet S, F) \times \mathscr{C}(FA', S') \\
&\cong \mathscr{C}(\mathscr{C}(A, A') \bullet S, S') \\
&\cong \mathbf{Set}(\mathscr{C}(A, A'), \mathscr{C}(S, S'))
\end{align*}
In the Haskell \texttt{lens}{} library, the map $\mathscr{C}(A, A') \to \mathscr{C}(S,S')$ corresponding to a setter is called $\textsc{Over}$: we think of a setter as allowing us to apply a morphism $A \to A'$ over some parts of $S$. Tracing through the isomorphisms, the optic corresponding to $\textsc{Over}$ is $\rep{l}{r}$, where $l : S \to \mathscr{C}(A, A) \bullet S$ is the inclusion at the identity morphism $\mathrm{id}_A$ and $r : \mathscr{C}(A, A') \bullet S \to S'$ is the transpose of $\textsc{Over}$ along the adjunction defining the copower.
The laws for setters in this form are a kind of functoriality:
\begin{proposition}
A setter $p : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ is lawful iff
\begin{align*}
\textsc{Over}(\mathrm{id}_A) &= \mathrm{id}_S \\
\textsc{Over}(f)\textsc{Over}(g)&= \textsc{Over}(fg)
\end{align*}
\end{proposition}
\begin{proof}
The key is concretely describing $\mathbf{Optic}^2_{[\mathscr{C}, \mathscr{C}]}(S, A)$ as \[ \mathbf{Set}( \mathscr{C}(A, A) \times \mathscr{C}(A, A), \mathscr{C}(S, S) ).\] We leave the verification that the conditions are equivalent to the reader.
\end{proof}
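As a concrete illustration of these functoriality laws, take the setter onto both components of a pair; \mintinline{haskell}{overBoth} is our name for its $\textsc{Over}$ map:
\begin{minted}{haskell}
-- The Over map of the setter (a, a) -|-> a targeting both components.
overBoth :: (a -> b) -> (a, a) -> (b, b)
overBoth f (x, y) = (f x, f y)
\end{minted}
The concrete laws say \mintinline{haskell}{overBoth id == id} and \mintinline{haskell}{overBoth f . overBoth g == overBoth (f . g)}, which hold by inspection.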
This characterisation of setters is maybe a little odd, in that we have ended up with a function of $\mathbf{Set}$s, rather than a description internal to $\mathscr{C}$. If we modify our definition of $\mathbf{Setter}$, we do get an internal characterisation. Suppose $\mathscr{C}$ is cartesian closed and let $\mathbf{Strong}_\mathscr{C}$ be the category of \emph{strong functors} on $\mathscr{C}$.
\begin{definition}[\cite{StrongFunctors}]\label{def:strong-functor}
A \emph{(left) strong functor} is a functor $F : \mathscr{C} \to \mathscr{C}$ equipped with a natural transformation called the \emph{strength}:
\begin{align*}
\theta_{A,B} : A \otimes F B \to F(A \otimes B)
\end{align*}
such that the strength commutes with the unitor:
\[
\begin{tikzcd}
I \otimes F A \ar[r, "\theta_{I,A}"] \ar[d, "\cong" left] & F(I \otimes A) \ar[d, "\cong" right] \\
F A \ar[r, equals] & F A
\end{tikzcd}
\]
and with associativity:
\[
\begin{tikzcd}
(A \otimes B) \otimes F C \ar[rr, "\theta_{A \otimes B, C}"] \ar[d, "\alpha_{A,B,FC}" left] && F((A \otimes B) \otimes C) \ar[d, "F\alpha_{A,B,C}" right] \\
A \otimes (B \otimes F C) \ar[r, "A \otimes \theta_{B,C}" below] & A \otimes F(B \otimes C) \ar[r, "\theta_{A, B\otimes C}" below] & F(A \otimes (B \otimes C))
\end{tikzcd}
\]
A \emph{strong natural transformation} $\tau : (F,\theta) \Rightarrow (G,\theta')$ is a natural transformation that respects the strengths. There is an evident category $\mathbf{Strong}_\mathscr{C}$ of strong endofunctors and strong natural transformations, and a forgetful functor $U : \mathbf{Strong}_\mathscr{C} \to [\mathscr{C}, \mathscr{C}]$.
\end{definition}
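In $\mathbf{Set}$ (or Haskell), every functor carries a canonical strength, which is why the strength is usually left implicit in functional programming:
\begin{minted}{haskell}
-- The canonical strength A x F B -> F (A x B) of a Haskell Functor.
strength :: Functor f => (a, f b) -> f (a, b)
strength (a, fb) = fmap (\b -> (a, b)) fb
\end{minted}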
Then, again, $\mathbf{Strong}_\mathscr{C}$ acts on $\mathscr{C}$ by evaluation. We leave it to the reader to verify there is a natural isomorphism \[\mathscr{C}(S, FA) \cong \mathbf{Strong}_\mathscr{C}(\underline{\C}(A, -) \times S, F),\] which we can use to describe optics for this action as elements of $\mathscr{C}(\underline{\C}(A, A'), \underline{\C}(S,S'))$.
\subsection{Traversals}
In this section we work in the case $\mathscr{C} = \mathbf{Set}$. Traversals allow us to traverse through a data structure, accumulating applicative actions as we go. We begin by reviewing the definitions of applicative and traversable functors~\cite{AnInvestigationOfTheLawsOfTraversals}.
\begin{definition}
An \emph{applicative functor} $F : \mathscr{C} \to \mathscr{C}$ is a lax monoidal functor with a strength compatible with the monoidal structure, in the sense that
\[
\begin{tikzcd}[column sep = large]
A \otimes FB \otimes FC \ar[r, "{\theta_{A, B} \otimes FC}"] \ar[d, swap, "{A \otimes \phi_{B, C}}"] & F(A \otimes B) \otimes FC \ar[d, "{\phi_{A \otimes B, C}}"] \\
A \otimes F(B \otimes C) \ar[r, swap, "{\theta_{A, B \otimes C}}"] & F(A \otimes B \otimes C)
\end{tikzcd}
\]
commutes. An \emph{applicative natural transformation} is one that is both monoidal and strong. Applicative functors and natural transformations form a monoidal category $\mathbf{App}$ with the tensor given by functor composition.
\end{definition}
\begin{definition}
A \emph{traversable functor} is a functor $T : \mathscr{C} \to \mathscr{C}$ equipped with a distributive law $\delta_F : TF \to FT$ for $T$ over the action of $\mathbf{App}$ on $\mathscr{C}$ by evaluation.
Explicitly, this means that for every applicative natural transformation $\alpha : F \Rightarrow G$, the diagrams
\[
\begin{tikzcd}
TF \ar[r, "\delta_F"] \ar[d, swap, "T\alpha"] & FT \ar[d, "\alpha T"] \\
TG \ar[r, swap, "\delta_G"] & GT
\end{tikzcd} \hspace{1cm}
\begin{tikzcd}
TFG \ar[dr, swap, "\delta_F G"] \ar[rr, "\delta_{FG}"] & & FGT \\
& FTG \ar[ur, swap, "F \delta_G"] &
\end{tikzcd} \hspace{1cm}
\begin{tikzcd}
T\mathrm{id}_\mathscr{C} \ar[r, bend left, "\mathrm{id}_T"] \ar[r, bend right, swap, "\delta_{\mathrm{id}_\mathscr{C}}"] & \mathrm{id}_\mathscr{C} T
\end{tikzcd}
\]
in $[\mathscr{C}, \mathscr{C}]$ commute.
\end{definition}
\begin{definition}
The category $\mathbf{Traversal}$ of traversals is the category of optics for the action of $\mathbf{Traversable}$ on $\mathbf{Set}$ given by evaluation. (Yes, the names $\mathbf{Traversal}$/$\mathbf{Traversable}$ are confusing!)
\end{definition}
It is known that traversable functors correspond to coalgebras for a particular parameterised comonad. See~{\cite[Definitions 4.1 and 4.2]{SecondOrderFunctionals}}, also~\cite{AlgebrasForParameterisedMonads} for the relevant definitions of parameterised comonads and coalgebras.
\begin{proposition}[{\cite[Theorem 4.10, Proposition 5.4]{SecondOrderFunctionals}}]
Traversable structures on a functor $T : \mathbf{Set} \to \mathbf{Set}$ correspond to parameterised coalgebra structures
\begin{align*}
t_{A, B} : TA \to UR^*_{A, B}(T B)
\end{align*}
where $UR^*_{X,Y}$ is the parameterised comonad
\begin{align*}
UR^*_{X, Y} Z = \Sigma_{n\in \mathbb{N}} X^n \times \mathbf{Set}(Y^n,Z)
\end{align*}
Moreover, this correspondence forms an isomorphism of categories between $\mathbf{Traversable}$ and the Eilenberg-Moore category of coalgebras for $UR^*_{-, -}$, which we denote $\mathscr{E}$. \qed
\end{proposition}
\begin{lemma}
For any objects $A, B \in \mathbf{Set}$ and traversable functor $F$, \[\mathbf{Set}(FA, B) \cong \mathbf{Traversable}(F, \Sigma_n {(-)}^n \times \mathbf{Set}(A^n,B))\]
naturally in $B$ and $F$. In other words, the functor \[(B \mapsto \Sigma_n {(-)}^n \times \mathbf{Set}(A^n,B)) : \mathbf{Set} \to \mathbf{Traversable}\] is right adjoint to the evaluation-at-$A$ functor $-A : \mathbf{Traversable} \to \mathbf{Set}$.
\end{lemma}
\begin{proof}
By~\cite[Proposition 6]{AlgebrasForParameterisedMonads}, there is a parameterised adjunction $L_T \dashv R_T$, where
\begin{align*}
L_T : \mathbf{Set} \times \mathscr{E} &\to \mathbf{Set} \\
(X, (F, f)) &\mapsto FX \\
R_T : \mathbf{Set} \times \mathbf{Set} &\to \mathscr{E} \\
(Y, Z) &\mapsto (UR^*_{-, Y} Z, \varepsilon)
\end{align*}
where $\varepsilon$ is the counit of $UR^*_{-, -}$. Evaluating these with the fixed parameter $A$, we get an ordinary adjunction
\begin{align*}
L_T(A) : \mathscr{E} &\to \mathbf{Set} \\
(F, f) &\mapsto FA \\
R_T(A) : \mathbf{Set} &\to \mathscr{E} \\
Z &\mapsto (UR^*_{-, A} Z, \varepsilon)
\end{align*}
But this is exactly the adjunction we were trying to show.
\end{proof}
We can then use the coalgebraic pattern from earlier to reach the same concrete description of traversals as found in~\cite{ProfunctorOptics}.
\begin{align*}
\mathbf{Traversal}((S, S'), (A, A')) &\cong \mathbf{Set}(S, \Sigma_n A^n \times \mathbf{Set}(A'^n,S'))
\end{align*}
The concrete laws for this representation are the coalgebra laws. These laws, however, are not the ones usually presented for traversals. Instead, versions of the profunctor laws are used; see Section~\ref{sec:profunctor-optics}.
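To make the concrete form tangible, here is a hypothetical Haskell rendering in which a plain list stands in for $\Sigma_n A^n$, so the type does not enforce the length invariant; the names are ours:
\begin{minted}{haskell}
-- Concrete traversal S -> Sigma_n (A^n x (A'^n -> S')), with lists
-- standing in for the dependent sum; lengths are not checked by types.
type Concrete s t a b = s -> ([a], [b] -> t)

-- Traversal targeting both components of a pair.
pairConcrete :: Concrete (a, a) (t, t) a t
pairConcrete (x, y) = ([x, y], rebuild)
  where
    rebuild [x', y'] = (x', y')
    rebuild _        = error "wrong arity"
\end{minted}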
\subsection{Polymorphic Optics}
Haskell's optics allow \emph{polymorphic updates}, where the type of the codomain of the lens may be changed by an update, causing a corresponding change in the type of the domain. As an example, we are permitted to use a lens into the \mintinline{haskell}{first} entry of a tuple in the following way:
\begin{minted}{haskell}
set first "hello" (1, 5) == ("hello", 5)
\end{minted}
This has changed the type from \mintinline{haskell}{(Int, Int)} to \mintinline{haskell}{(String, Int)}.
Polymorphic optics can be captured by the coend formalism as follows. Any action of a monoidal category $\mathscr{M} \times \mathscr{C} \to \mathscr{C}$ can be extended to act object-wise on a functor category:
\begin{align*}
\mathscr{M} \times [\mathscr{D}, \mathscr{C}] &\to [\mathscr{D}, \mathscr{C}] \\
(M, F) &\mapsto M \cdot (F-)
\end{align*}
So in the above example, we have the product $\times$ acting pointwise on the functor category $[\mathbf{Set}, \mathbf{Set}]$. Our example \mintinline{haskell}{first} is then an optic $F \ensuremath{\,\mathaccent\shortmid\rightarrow\,} G$, where $F = (-) \times \mintinline{haskell}{Int}$ and $G$ is the identity functor.
Given such a polymorphic optic in $[\mathscr{D}, \mathscr{C}]$, we can always `monomorphise' to obtain an ordinary optic in $\mathscr{C}$.
\begin{proposition}
There is a functor
\begin{align*}
\mathsf{mono} : \mathscr{D} \times \mathscr{D}^\mathrm{op} \times \mathbf{Optic}_{[\mathscr{D}, \mathscr{C}]} \to \mathbf{Optic}_\mathscr{C}
\end{align*}
that sends an object $(D, D') \in \mathscr{D} \times \mathscr{D}^\mathrm{op}$ and optic $\rep{l}{r} : (F, F') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (G, G')$ in $\mathbf{Optic}_{[\mathscr{D}, \mathscr{C}]}$ to the optic $\rep{l_D}{r_{D'}} : (FD, F'D') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (GD, G'D')$ in $\mathbf{Optic}_\mathscr{C}$. For fixed $(D, D') \in \mathscr{D} \times \mathscr{D}^\mathrm{op}$, this functor preserves lawfulness.
\end{proposition}
\begin{proof}
On an object $(D, D') \in \mathscr{D} \times \mathscr{D}^\mathrm{op}$, that we get a functor $\mathbf{Optic}_{[\mathscr{D}, \mathscr{C}]} \to \mathbf{Optic}_\mathscr{C}$ is essentially the same proof as Proposition~\ref{prop:change-of-action} but with different functors on each side of the lens: the evaluation-at-$D$ functor $[\mathscr{D}, \mathscr{C}] \to \mathscr{C}$ on the left and evaluation-at-$D'$ on the right.
For functoriality in $\mathscr{D} \times \mathscr{D}^\mathrm{op}$, given $(f, g) : (D_1, D'_1) \to (D_2, D'_2) \in \mathscr{D} \times \mathscr{D}^\mathrm{op}$ and an object $(F, F') \in \mathbf{Optic}_{[\mathscr{D}, \mathscr{C}]}$, there is an induced optic $\iota(Ff, F'g) : (FD_1, F'D'_1) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (FD_2, F'D'_2)$. Bifunctoriality of $\mathsf{mono}$ is ensured by the naturality of each $l$ and $r$ in the morphisms of $\mathbf{Optic}_{[\mathscr{D}, \mathscr{C}]}$.
\end{proof}
\subsection{Linear Lenses}\label{sec:linear-lenses}
\newcommand{\mathsf{ev}}{\mathsf{ev}}
\newcommand{\mathsf{coev}}{\mathsf{coev}}
If $\mathscr{C}$ is closed monoidal but not necessarily cartesian, we can still define the category of \emph{linear lenses} to be $\mathbf{Optic}_\otimes$. The internal hom provides a right adjoint to the evaluation-at-$A$ functor, so we have immediately
\begin{align*}
\mathbf{Optic}_\otimes((S, S'), (A, A')) &\cong \mathscr{C}(S, \underline{\C}(A',S') \otimes A)
\end{align*}
where $\underline{\C}(A', S')$ denotes the internal hom. If $\mathscr{C}$ is cartesian, this is of course isomorphic to the set of $(\textsc{Get}, \textsc{Put})$ functions discussed earlier.
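In Haskell terms, with $\otimes$ taken to be the ordinary product, this concrete representation can be sketched as follows; the names \mintinline{haskell}{LinearLens}, \mintinline{haskell}{unzipL} and \mintinline{haskell}{sndL} are ours, not from any library.
\begin{minted}{haskell}
-- A linear lens, concretely: a map S -> hom(A', S') (x) A.
newtype LinearLens s s' a a' = LinearLens { unzipL :: s -> (a' -> s', a) }

-- The lens onto the second component of a pair, in unzipped form:
sndL :: LinearLens (c, a) (c, a') a a'
sndL = LinearLens (\(c, a) -> (\a' -> (c, a'), a))
\end{minted}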
We cannot possibly use the three $\textsc{Put}$/$\textsc{Get}$ style lens laws in this setting as we lack projections, but specialising the coalgebra laws gives us:
\begin{proposition}\label{prop:concrete-linear-lawful}
A linear lens $p : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ is lawful iff the following two concrete laws for $\textsc{Unzip}$ hold:
\begin{align*}
\mathsf{ev}_{A, S} \; \textsc{Unzip} &= \mathrm{id}_S && \textsc{(Rezip)} \\
(\mathsf{coev}_{\underline{\C}(A, S), A} \otimes A)\textsc{Unzip} &= ((\textsc{Unzip} \circ -) \otimes A)\textsc{Unzip} && \textsc{(ZipZip)}
\end{align*}
where \[ \textsc{Unzip} \circ - : \underline{\C}(A, S) \to \underline{\C}(A, \underline{\C}(A, S) \otimes A) \] denotes internal composition and \[\mathsf{coev}_{\underline{\C}(A, S), A} : \underline{\C}(A, S) \to \underline{\C}(A, \underline{\C}(A, S) \otimes A)\] is coevaluation.
\qed
\end{proposition}
We have essentially rederived the result given in~\cite[Section 3.2]{RelatingAlgebraicAndCoalgebraic} for ordinary lenses, but we note that cartesianness was not required.
\subsection{Effectful Optics}
\newcommand{\rtimes}{\rtimes}
Many proposed definitions of effectful lenses~\cite{ReflectionsOnMonadicLenses} have modified one or both of $\textsc{Get}$ and $\textsc{Put}$ to produce results wrapped in a monadic action. There are disadvantages to this approach: it is not obvious what the laws ought to be and there is no clear generalisation to other optic variants. The general definition of optic given in Section~\ref{sec:optics} suggests we instead work with the Kleisli category $\mathscr{C}_T$ of some monad $(T, \eta, \mu) : \mathscr{C} \to \mathscr{C}$.
\begin{definition}
The Kleisli category $\mathscr{C}_T$ of a monad $T$ has the same objects as $\mathscr{C}$, with morphisms $X \to Y$ in $\mathscr{C}_T$ given by morphisms $X \to TY$ in $\mathscr{C}$. Identity morphisms are given by the unit of $T$, and the composite of two morphisms $f : X \to Y$ and $g : Y \to Z$ in $\mathscr{C}_T$ is given by
\begin{align*}
X \xrightarrow{f} TY \xrightarrow{Tg} TTZ \xrightarrow{\mu_Z} TZ
\end{align*}
For $f : X \to Y$ in $\mathscr{C}_T$, we write $\underline{f} : X \to TY$ for its underlying morphism in $\mathscr{C}$.
\end{definition}
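For intuition, Kleisli composition for a Haskell monad is the familiar \mintinline{haskell}{(>=>)} operator from \mintinline{haskell}{Control.Monad}. A small sketch with the \mintinline{haskell}{Maybe} monad (the function names are ours):
\begin{minted}{haskell}
import Control.Monad ((>=>))

-- Morphisms X -> Y in the Kleisli category of Maybe are partial functions.
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- Kleisli composition mu . fmap safeSqrt . safeRecip, packaged as (>=>).
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = safeRecip >=> safeSqrt
\end{minted}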
Working in a Kleisli category presents its own set of difficulties. The product in $\mathscr{C}$ is a monoidal product on $\mathscr{C}_T$ only when the monad in question is \emph{commutative}, which rules out many monads of interest. A premonoidal structure~\cite{PremonoidalCategories} is not sufficient: composition of optics would in that case not be well defined.
But this does not preclude the existence of monoidal actions on $\mathscr{C}_T$. In fact, there is a monoidal action that has long been used under a different guise:
\begin{definition}[{\cite{NotionsOfComputationAndMonads}}]
A \emph{strong monad} $T : \mathscr{C} \to \mathscr{C}$ on a monoidal category $(\mathscr{C}, \otimes, I)$ is a monad that is strong as a functor (Definition~\ref{def:strong-functor}), and such that the strength commutes with the unit and multiplication:
\[
\begin{tikzcd}
A \otimes B \ar[d, swap, "A \otimes \eta_B"] \ar[dr, "\eta_{A \otimes B}"] & \\
A \otimes TB \ar[r, swap, "\theta_{A, B}"] & T(A \otimes B)
\end{tikzcd} \hspace{1cm}
\begin{tikzcd}
A \otimes T^2 B \ar[r, "\theta_{A, TB}"] \ar[d, swap, "A \otimes \mu_B"] & T(A \otimes TB) \ar[r, "T\theta_{A, B}"] & T^2(A \otimes B) \ar[d, "\mu_{A \otimes B}"] \\
A \otimes TB \ar[rr, swap, "\theta_{A, B}"] & & T(A \otimes B)
\end{tikzcd}
\]
\end{definition}
\begin{proposition}
If $T : \mathscr{C} \to \mathscr{C}$ is a strong monad then $\mathscr{C}$ acts on $\mathscr{C}_T$ by $X \cdot Y := X \otimes Y$.
\end{proposition}
The crucial difference between this and a monoidal structure on $\mathscr{C}_T$ is that we only demand $X$ be functorial with respect to \emph{pure functions} in $\mathscr{C}$, whereas $Y$ must be functorial with respect to \emph{computations} in $\mathscr{C}_T$. We will write this action as $X \rtimes Y$ to highlight the different roles played by $X$ and $Y$.
\begin{proof}
Suppose $T$ is a strong monad with strength $\theta_{A, B} : A \otimes T B \to T(A \otimes B)$. For $A \in \mathscr{C}$, we have a functor $A \otimes - : \mathscr{C}_T \to \mathscr{C}_T$ which on a morphism $f : X \to Y$ in $\mathscr{C}_T$ is defined to be the composite
\begin{align*}
A \otimes X \xrightarrow{A \otimes \underline{f}} A \otimes TY \xrightarrow{\theta_{A, Y}} T(A \otimes Y)
\end{align*}
For details, see~\cite[Theorem 4.2]{PremonoidalCategories}. Our goal is to show this extends to a monoidal functor $a : \mathscr{C} \to [\mathscr{C}_T, \mathscr{C}_T]$.
A morphism $f : A \to B$ in $\mathscr{C}$ induces a natural transformation $A \otimes - \Rightarrow B \otimes -$ of functors $\mathscr{C}_T \to \mathscr{C}_T$, with components $A \otimes X \to T(B \otimes X)$ given by composing $A \otimes X \to B \otimes X$ with the unit of the monad. Naturality follows by the naturality of the strength and the unit of $T$.
Monoidality of $a$ is shown exactly by the commutative diagrams in the definition of a strong functor, i.e.\ that the strength commutes with the associator and left unitor of $\mathscr{C}$.
\end{proof}
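In Haskell, every \mintinline{haskell}{Functor} (and hence every \mintinline{haskell}{Monad}) is strong, and the action above can be sketched directly. The names \mintinline{haskell}{strength} and \mintinline{haskell}{actOn} are ours:
\begin{minted}{haskell}
-- The strength theta : (a, t b) -> t (a, b), definable for any Functor.
strength :: Functor t => (a, t b) -> t (a, b)
strength (a, tb) = fmap (\b -> (a, b)) tb

-- The action of a pure object a on a Kleisli morphism f : x -> t y,
-- yielding a morphism (a, x) -> t (a, y) as in the proof above.
actOn :: Functor t => (x -> t y) -> (a, x) -> t (a, y)
actOn f (a, x) = strength (a, f x)
\end{minted}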
Suppose $\mathscr{C}$ is a monoidal closed category and $T : \mathscr{C} \to \mathscr{C}$ is a strong monad. Then the evaluation-at-$A$ functor has a right adjoint:
\begin{align*}
\mathscr{C}_T(M \rtimes A', S')
&= \mathscr{C}(M \times A', T S') \\
&\cong \mathscr{C}(M, \underline{\C}(A', T S'))
\end{align*}
Using the coalgebraic description, we see that concrete effectful lenses consist of a single morphism in $\mathscr{C}$ \[\textsc{Munzip} : S \to T(\underline{\C}(A', T S') \otimes A).\] The optic laws in this case specialise to:
\begin{proposition}
A concrete effectful lens is lawful iff
\begin{align*}
\mu_S T(\mathsf{ev}_{A, TS}) \; \textsc{Munzip} &= \eta_S \\
T(\eta_{\underline{\C}(A, TS) \otimes A}\mathsf{coev}_{\underline{\C}(A, TS), A} \otimes A)\textsc{Munzip} &= T((\textsc{Munzip} \circ_T -) \otimes A)\textsc{Munzip}
\end{align*}
where \[ \textsc{Munzip} \circ_T - : \underline{\C}(A, TS) \to \underline{\C}(A, T(\underline{\C}(A, TS) \otimes A)) \] denotes internal Kleisli composition and \[\mathsf{coev}_{\underline{\C}(A, TS), A} : \underline{\C}(A, TS) \to \underline{\C}(A, \underline{\C}(A, TS) \otimes A) \] is coevaluation. \qed
\end{proposition}
Or, if you prefer do-notation, the two laws are:
\begin{multicols}{2}
\begin{minted}{haskell}
do (c, a) <- munzip s
c a
==
return s
\end{minted}
~\columnbreak
\begin{minted}{haskell}
do (c, a) <- munzip s
let f a' = do
s' <- c a'
munzip s'
return (f, a)
==
do (c, a) <- munzip s
let f a' = (c, a')
return (f, a)
\end{minted}
\end{multicols}
The inclusion of $\mathscr{C}$ into $\mathscr{C}_T$ preserves the action of $\mathscr{C}$, so there is an induced inclusion $\mathbf{Optic}_\otimes \to \mathbf{Optic}_\rtimes$.
If we choose a specific monad, we can hope to simplify the description of a concrete effectful optic and its laws.
\subsubsection{Writer Lenses}
We begin with a simple example. Suppose $\mathscr{C}$ has finite products.
\begin{definition}
The \emph{writer monad} for a monoid $W$ is defined by
\begin{align*}
T_W X = X \times W
\end{align*}
The unit and multiplication of $T_W$ are given by pairing with the unit and multiplication of $W$, and the strength is simply the associativity morphism.
\end{definition}
We can find a more explicit description of concrete effectful lenses for this monad.
\begin{align*}
\mathbf{Optic}_\rtimes((S, S'), (A, A'))
&= \int^{M \in \mathscr{C}} \mathscr{C}_{T_W}(S, M \rtimes A) \times \mathscr{C}_{T_W}(M \rtimes A', S') \\
&= \int^{M \in \mathscr{C}} \mathscr{C}(S, M \times A \times W) \times \mathscr{C}_{T_W}(M \rtimes A', S') \\
&\cong \int^{M \in \mathscr{C}} \mathscr{C}(S, M) \times \mathscr{C}(S, A\times W) \times \mathscr{C}_{T_W}(M \rtimes A', S') \\
&\cong \mathscr{C}_{T_W}(S, A) \times \mathscr{C}_{T_W}(S \times A', S')
\end{align*}
Fortunately, concrete writer lenses correspond to $\textsc{Get}$ and $\textsc{Put}$ functions in the Kleisli category of $T_W$.
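A sketch in Haskell, writing the Kleisli morphisms explicitly as log-producing functions; the names \mintinline{haskell}{WriterLens}, \mintinline{haskell}{wget}, \mintinline{haskell}{wput} and \mintinline{haskell}{firstW} are illustrative, not from a library:
\begin{minted}{haskell}
data WriterLens w s s' a a' = WriterLens
  { wget :: s -> (a, w)          -- Get in the Kleisli category of T_W
  , wput :: (s, a') -> (s', w)   -- Put in the Kleisli category of T_W
  }

-- Example: focus on the first component of a pair, logging each operation.
firstW :: WriterLens [String] (a, c) (a', c) a a'
firstW = WriterLens
  { wget = \(a, _) -> (a, ["get"])
  , wput = \((_, c), a') -> ((a', c), ["put"])
  }
\end{minted}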
\subsubsection{Stateful Lenses}
Suppose $\mathscr{C}$ is cartesian closed.
\begin{definition}
The \emph{state monad} with state $Q$ is defined by
\begin{align*}
T_Q X = \underline{\C}(Q, X \times Q)
\end{align*}
\end{definition}
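A minimal Haskell implementation of this monad, together with the \mintinline{haskell}{getState} and \mintinline{haskell}{putState} operations appearing in the code below (in practice one would use a library such as \texttt{mtl}, whose names differ slightly):
\begin{minted}{haskell}
newtype State q x = State { runState :: q -> (x, q) }

instance Functor (State q) where
  fmap f (State m) = State $ \q -> let (x, q') = m q in (f x, q')

instance Applicative (State q) where
  pure x = State $ \q -> (x, q)
  State mf <*> State mx = State $ \q ->
    let (f, q')  = mf q
        (x, q'') = mx q'
    in (f x, q'')

instance Monad (State q) where
  State m >>= k = State $ \q -> let (x, q') = m q in runState (k x) q'

getState :: State q q            -- read the current state
getState = State $ \q -> (q, q)

putState :: q -> State q ()      -- overwrite the current state
putState q = State $ \_ -> ((), q)
\end{minted}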
We call optics for the action $\rtimes : \mathscr{C} \times \mathscr{C}_{T_Q} \to \mathscr{C}_{T_Q}$ \emph{stateful lenses}. We can find a concrete description that is closer to that for ordinary lenses:
\begin{align*}
\mathbf{Optic}_\rtimes((S, S'), (A, A'))
&= \int^{M \in \mathscr{C}} \mathscr{C}_{T_Q}(S, M \rtimes A) \times \mathscr{C}_{T_Q}(M \rtimes A', S') \\
&= \int^{M \in \mathscr{C}} \mathscr{C}(S, \underline{\C}(Q, M \times A \times Q)) \times \mathscr{C}_{T_Q}(M \rtimes A', S') \\
&\cong \int^{M \in \mathscr{C}} \mathscr{C}(S \times Q, M \times A\times Q) \times \mathscr{C}_{T_Q}(M \rtimes A', S') \\
&\cong \int^{M \in \mathscr{C}} \mathscr{C}(S \times Q, M) \times \mathscr{C}(S \times Q, A\times Q) \times \mathscr{C}_{T_Q}(M \rtimes A', S') \\
&\cong \mathscr{C}(S \times Q, A \times Q) \times \mathscr{C}_{T_Q}((S \times Q) \rtimes A', S') \\
&\cong \mathscr{C}_{T_Q}(S, A) \times \mathscr{C}_{T_Q}(S \times Q \times A', S')
\end{align*}
By analogy with ordinary lenses, let us call these maps $\textsc{MGet}$ and $\textsc{MPut}$.
The induced composition of effectful lenses is a little intricate, and is possibly best explained in code. The composite $\textsc{MGet}$ is straightforward, just the composite of $\textsc{MGet}_1$ and $\textsc{MGet}_2$ in the Kleisli category. For $\textsc{MPut}$ however, there is some curious plumbing of the state into different places. Tracing through the isomorphism, two stateful lenses $(\textsc{MGet}_1, \textsc{MPut}_1) : (T, T') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (S, S')$ and $(\textsc{MGet}_2, \textsc{MPut}_2) : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ compose as follows.
\begin{minted}{haskell}
mget t = do
s <- mget1 t
mget2 s
mput t q a = do
start <- getState
s <- mget1 t
q' <- getState
putState start
s' <- mput1 s q' a
mput2 t q s'
\end{minted}
\begin{proposition}
A stateful lens given by
\begin{minted}{haskell}
mget :: s -> State q a
mput :: s -> q -> a -> State q s
\end{minted}
is lawful iff the following three laws hold: \\
\begin{minipage}{\textwidth}
\begin{multicols}{3}
\begin{minted}{haskell}
do
q <- getState
a <- mget s
mput s q a
==
return s
\end{minted}
~\columnbreak
\begin{minted}{haskell}
do s' <- mput s q a
mget s'
==
return a
\end{minted}
~\columnbreak
\begin{minted}{haskell}
let (s', q')
= runState (mput s q1 a1) q2
in mput s' q' a2
==
mput s q1 a2
\end{minted}
\end{multicols}
\end{minipage}
By analogy we call these the $\textsc{Get}\textsc{Put}$, $\textsc{Put}\textsc{Get}$ and $\textsc{Put}\textsc{Put}$ laws.
\end{proposition}
Of course, this notion of effectful lens may not be useful! It is hard to get intuition for the meaning of the laws, but they seem to suffer from the same deficiency that other attempts at effectful lenses do: they are too strong. The $\textsc{Get}\textsc{Put}$ law here appears easier to satisfy than the $\mathsf{MGetPut_0}$ law of~\cite{ReflectionsOnMonadicLenses}, as $\textsc{MPut}$ is given access to the original state. However, our $\textsc{Put}\textsc{Get}$ law seems very restrictive: no matter what auxiliary state is provided, $\textsc{Put}$ting then $\textsc{Get}$ting must leave the state unchanged.
\subsection{Further Examples}
The dedicated reader may enjoy deriving the concrete representation and laws for the following optic varieties:
\begin{itemize}
\item \emph{``Achromatic'' Lenses}~\cite[Section 5.2]{ProfunctorOpticsThesis} are lenses that also admit an operation $\textsc{Create} : A \to S$. These are optics for the action of $\mathscr{C}$ on itself by $M \cdot A = (M \sqcup 1) \times A$, or equivalently, of the category of pointed objects of $\mathscr{C}$ on $\mathscr{C}$ by cartesian product. Concrete achromatic lenses $(S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ are elements of the set \[\mathscr{C}(S, \underline{\C}(A', S') \sqcup 1) \times \mathscr{C}(S, A) \times \mathscr{C}(A', S').\]
\item \emph{Affine Traversals}~\cite{SecondOrderFunctionals} allow access to a target that may or may not be present. Suppose $\mathscr{C}$ is cartesian closed and has binary coproducts. Let $\mathsf{Aff}$ be the category $\mathscr{C} \times \mathscr{C}$, equipped with the monoidal structure
\begin{align*}
(P', Q') \otimes (P, Q) &= (P' \sqcup (Q' \times P) , Q' \times Q)
\end{align*}
The category $\mathsf{Aff}$ acts on $\mathscr{C}$ by $(P, Q) \cdot A = P \sqcup (Q \times A)$; in fact, $\mathsf{Aff}$ is cooked up to act on $\mathscr{C}$ exactly by the closure of the actions $- \times A$ and $- \sqcup A$ under composition. A concrete affine traversal is an element of \[\mathscr{C}(S, S' \sqcup (\underline{\C}(A', S') \times A)).\]
Affine traversals are described in the folklore as pairs of maps $\mathscr{C}(S, A \sqcup S') \times \mathscr{C}(S\times A', S')$. Such a pair does determine an affine traversal, but gives more information than is necessary: the right-hand map need not be defined on all of $S$.
\item \emph{Grates}~\cite{GratesPost} are optics for the contravariant action of a monoidal closed category $\mathscr{C}$ on itself by $X \cdot A \mapsto \underline{\C}(X, A)$. Concretely these correspond to morphisms \[ \mathscr{C}(\underline{\C}(\underline{\C}(S, A), A'), S'). \]
\end{itemize}
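For reference, the concrete representations of the last two can be sketched in Haskell as follows; the type and field names are ours:
\begin{minted}{haskell}
-- An affine traversal: either fail with a replacement source, or
-- produce the focus together with a function to rebuild the source.
newtype AffineTraversal s s' a a' =
  AffineTraversal { matchA :: s -> Either s' (a' -> s', a) }

-- A grate, concretely: a map C(hom(hom(S, A), A'), S').
newtype Grate s s' a a' = Grate { degrate :: ((s -> a) -> a') -> s' }

-- The affine traversal focusing on the Left of an Either:
leftA :: AffineTraversal (Either a c) (Either a' c) a a'
leftA = AffineTraversal $ \s -> case s of
  Left a  -> Right (Left, a)
  Right c -> Left (Right c)
\end{minted}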
\section{The Profunctor Encoding}\label{sec:profunctor-optics}
To use optics in practice, one could take the definition of the optic category and translate it almost verbatim into code---using an existential type in place of the coend. In Haskell syntax, lenses would be defined as:
\begin{minted}{haskell}
data Lens s s' a a' = forall m. Lens {
l :: s -> (m, a),
r :: (m, a') -> s'
}
\end{minted}
This is not the approach usually taken by implementations! Instead the somewhat indirect \emph{profunctor encoding} is used. (This is not quite true of the Haskell \texttt{lens}{} library: for a few reasons, \texttt{lens}{} uses the closely related \emph{van Laarhoven encoding}; see Section~\ref{sec:van-laarhoven}. The Purescript \texttt{purescript-profunctor-lenses} library~\cite{PurescriptLibrary} does use the profunctor encoding directly.)
The equivalence between the profunctor encoding and optics as described earlier has been explored in~\cite{ProfunctorOptics} and~\cite{ProfunctorOpticsPost}. We begin by reviewing this equivalence from a categorical perspective before investigating how the optic laws manifest in this setting.
\subsection{Tambara Modules}
Let $I = \mathscr{C}(-,{=}) : \mathscr{C} \ensuremath{\,\mathaccent\shortmid\rightarrow\,} \mathscr{C}$ be the identity profunctor and $\odot$ be profunctor composition, written in diagrammatic order. The following section generalises definitions that first appeared in~\cite[Section 3]{Doubles} for monoidal categories to the more general case of a monoidal action.
\begin{definition}
Suppose a category $\mathscr{C}$ is acted on by $(\mathscr{M}, \otimes, I)$ and let $P \in \mathbf{Prof}(\mathscr{C}, \mathscr{C})$ be a profunctor. A \emph{Tambara module structure for $\mathscr{M}$ on $P$} is a family of maps:
\begin{align*}
\zeta_{A,B,M} : P(A,B) \to P(M \cdot A, M\cdot B)
\end{align*}
natural in $A$ and $B$, dinatural in $M$, and such that $\zeta$ commutes with the action of $\mathscr{M}$:
\[
\begin{tikzcd}
P(A,B) \ar[r, "\zeta_{A,B,M}"] \ar[d, "\zeta_{A, B, N\otimes M}" swap] & P(M \cdot A, M \cdot B) \ar[d, "\zeta_{M \cdot A, M \cdot B, N}" right] \\
P((N\otimes M) \cdot A, (N\otimes M) \cdot B) \ar[r, "\alpha_{N, M, A}" swap] & P(N\cdot (M\cdot A), N\cdot (M \cdot B))
\end{tikzcd}
\qquad
\begin{tikzcd}
P(A,B) \ar[r, "\zeta_{A,B,I}"] \ar[dr, equal] & P(I\cdot A, I\cdot B) \ar[d, "{P(\lambda_A^{-1}, \lambda_B)}" right] \\
& P(A, B)
\end{tikzcd}
\]
for all $A, B \in \mathscr{C}$ and $N, M \in \mathscr{M}$.
\end{definition}
Note that the identity profunctor $I$ has a canonical Tambara module structure $\zeta_{A, B, M} : \mathscr{C}(A, B) \to \mathscr{C}(M \cdot A, M \cdot B)$ for any $\mathscr{M}$, given by functoriality.
If $P, Q \in \mathbf{Prof}(\mathscr{C}, \mathscr{C})$ are equipped with module structures $\zeta$ and $\xi$ respectively, there is a canonical module structure on $P \odot Q$. Given $M \in \mathscr{M}$ and $A,B \in \mathscr{C}$, the structure map ${(\zeta \odot \xi)}_{A,B,M}$ is induced by
\begin{align*}
&P(A,C) \times Q(C,B) \\
\xrightarrow{\zeta_{A,C,M} \times \xi_{C,B,M}} \quad& P(M\cdot A, M\cdot C) \times Q(M\cdot C, M\cdot B) \\
\xrightarrow{\copr_{M\cdot C}} \quad&\int^{C \in \mathscr{C}} P(M\cdot A, C) \times Q(C, M\cdot B) \\
= \quad&(P \odot Q)(M\cdot A, M\cdot B)
\end{align*}
\begin{definition}
There is a category $\mathbf{Tamb}_\mathscr{M}$ of Tambara modules and natural transformations that respect the module structure, in the sense that for any $l : P \to Q$, the diagram
\[
\begin{tikzcd}
P(A,B) \ar[r, "\zeta_{A,B,M}"] \ar[d, "l_{A,B}" left] & P(M\cdot A, M\cdot B) \ar[d, "l_{M\cdot A, M\cdot B}" right] \\
Q(A,B) \ar[r, "\xi_{A,B,M}" swap] & Q(M \cdot A, M \cdot B)
\end{tikzcd}
\]
commutes.
\end{definition}
This category is monoidal with respect to $\odot$ as given above, with monoidal unit $I$. There is an evident forgetful functor $U : \mathbf{Tamb}_\mathscr{M} \to \mathbf{Prof}(\mathscr{C}, \mathscr{C})$ that is strong monoidal. This forgetful functor has both a left and a right adjoint; it is the left adjoint that is important for us. (The right adjoint to $U$ is described in~\cite{NotionsOfComputationAsMonoids}, where it is used to investigate Haskell's \mintinline{haskell}{Arrow} typeclass.)
\begin{definition}[{\cite[Section 5]{Doubles}}]
Let $\mathrm{Pastro}_\mathscr{M} : \mathbf{Prof}(\mathscr{C}, \mathscr{C}) \to \mathbf{Tamb}_\mathscr{M}$ be the functor:
\begin{align*}
\mathrm{Pastro}_\mathscr{M}(P) := \int^{M \in \mathscr{M}} \mathscr{C}(-, M\cdot {=}) \odot P \odot \mathscr{C}(M\cdot -, {=})
\end{align*}
Or, in other words,
\begin{align*}
\mathrm{Pastro}_\mathscr{M}(P)(A,B) := \int^{M \in \mathscr{M}} \int^{C,D \in \mathscr{C}} \mathscr{C}(A, M\cdot C) \times P(C,D) \times \mathscr{C}(M \cdot D, B)
\end{align*}
The module structure $\zeta_{A,B,M} : \mathrm{Pastro}_\mathscr{M} P(A,B) \to \mathrm{Pastro}_\mathscr{M} P (M\cdot A, M\cdot B)
$ is induced by the maps
\begin{align*}
&\mathscr{C}(A, N\cdot C) \times P(C,D) \times \mathscr{C}(N\cdot D, B) \\
\xrightarrow{\text{functoriality}} \quad& \mathscr{C}(M\cdot A, M\cdot N\cdot C) \times P(C,D) \times \mathscr{C}(M\cdot N\cdot D, M\cdot B) \\
\xrightarrow{\copr_{M\otimes N}} \quad&\int^{N \in \mathscr{M}} \int^{C,D \in \mathscr{C}} \mathscr{C}(M\cdot A, N\cdot C) \times P(C,D) \times \mathscr{C}(N\cdot D, M\cdot B) \\
= \quad&\mathrm{Pastro}_\mathscr{M} P (M \cdot A, M \cdot B)
\end{align*}
for all $C, D \in \mathscr{C}$ and $N \in \mathscr{M}$. Equationally, this is $\zeta_{A,B,M}(\repthree{l}{p}{r} ) = \repthree{M\cdot l}{p}{M\cdot r} $.
\end{definition}
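Specialised to the action of a cartesian $\mathscr{C}$ on itself by products, $\mathrm{Pastro}_\mathscr{M}$ can be transcribed into Haskell as an existential, directly following the coend formula. This layout is ours and differs from the \mintinline{haskell}{Pastro} type in the \texttt{profunctors} package:
\begin{minted}{haskell}
{-# LANGUAGE GADTs #-}

-- Pastro P (A, B) = exists m c d. C(A, m x c) * P(c, d) * C(m x d, B),
-- with the monoidal action M . C = (m, c).
data Pastro p a b where
  Pastro :: (a -> (m, x)) -> p x y -> ((m, y) -> b) -> Pastro p a b

-- The unit eta of the adjunction embeds p into Pastro p,
-- using the trivial residual ().
eta :: p a b -> Pastro p a b
eta p = Pastro (\a -> ((), a)) p (\((), b) -> b)
\end{minted}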
\begin{proposition}
$\mathrm{Pastro}_\mathscr{M} : \mathbf{Prof}(\mathscr{C}, \mathscr{C}) \to \mathbf{Tamb}_\mathscr{M}$ is left adjoint to $U : \mathbf{Tamb}_\mathscr{M} \to \mathbf{Prof}(\mathscr{C}, \mathscr{C})$.
\end{proposition}
\begin{proof}
For any $P \in \mathbf{Prof}(\mathscr{C}, \mathscr{C})$, there is a map $\eta : P \ensuremath{\,\mathaccent\shortmid\rightarrow\,} U \mathrm{Pastro}_\mathscr{M} P$, given by $\eta(p) = \repthree{\mathrm{id}_A}{p}{\mathrm{id}_B}$. Suppose we have an element $\repthree{l}{p}{r} \in \mathrm{Pastro}_\mathscr{M} P(A,B)$. One can check that this element is equal to
\begin{align*}
\repthree{l}{p}{r} = (\mathrm{Pastro}_\mathscr{M} P(l, r)) \zeta_{A, B, M} \; \eta(p)
\end{align*}
where $\zeta_{A, B, M}$ is the module structure map for $\mathrm{Pastro}_\mathscr{M} P$.
If $T \in \mathbf{Tamb}_\mathscr{M}$ is a Tambara module with structure map $\xi$, we would like to show that for any map $f : P \ensuremath{\,\mathaccent\shortmid\rightarrow\,} UT$ there exists a unique $\hat f : \mathrm{Pastro}_\mathscr{M} P \ensuremath{\,\mathaccent\shortmid\rightarrow\,} T$ so that $f$ factors as \[P \xrightarrow{\eta} U \mathrm{Pastro}_\mathscr{M} P \xrightarrow{U\hat f} UT. \]
The data of such a map $\hat f : \mathrm{Pastro}_\mathscr{M} P \ensuremath{\,\mathaccent\shortmid\rightarrow\,} T$ is a natural transformation between the underlying profunctors. For the factorisation property to hold we must have that $\hat{f}\eta(p) = f(p)$ for any $p \in P(A,B)$, but then the action on the remainder of $\mathrm{Pastro}_\mathscr{M} P(A, B)$ is fixed:
\begin{align*}
\hat{f}(\repthree{l}{p}{r})
&= \hat{f}(\mathrm{Pastro}_\mathscr{M} P(l, r) \zeta_{A, B, M} \; \eta(p)) \\
&=T(l, r) \; \xi_{A, B, M} \; f(p)
\end{align*}
This establishes uniqueness. It remains to show that $\hat{f}$ so defined is actually a Tambara module morphism, but this is easy:
\begin{align*}
\hat{f}\zeta_{A,B,N}(\repthree{l}{p}{r})
&= \hat{f}(\repthree{N\cdot l}{p}{N\cdot r}) && \text{(definition of $\zeta$)}\\
&= T(N\cdot l, N\cdot r) \; \xi_{A, B, N \otimes M} \; f(p) && \text{(definition of $\hat{f}$)}\\
&= T(N\cdot l, N\cdot r) \xi_{M\cdot A,M\cdot B,N} \; \xi_{A, B, M} \; f(p) && \text{($\xi$ commutes with tensor in $\mathscr{M}$)} \\
&= \xi_{A,B,N} T(l, r) \; \xi_{A, B, M} \; f(p) && \text{(naturality of $\xi$)} \\
&= \xi_{A,B,N} \hat{f} (\repthree{l}{p}{r}) && \text{(definition of $\hat{f}$)}
\end{align*}
\end{proof}
\begin{corollary}
$\mathrm{Pastro}_\mathscr{M}$ (and therefore also $U \mathrm{Pastro}_\mathscr{M}$) is oplax monoidal.
\end{corollary}
\begin{proof}
This follows from abstract nonsense as $\mathrm{Pastro}_\mathscr{M}$ is the left adjoint of a strong monoidal functor, see~\cite{Kelly1974}.
\end{proof}
\subsection{Optics}
\begin{definition}
For a pair of objects $A, A' \in \mathscr{C}$, the \emph{exchange profunctor} $E_{A, A'}$ is defined to be $\mathscr{C}(-, A) \times \mathscr{C}(A', {=})$.
\end{definition}
Given a profunctor, or indeed a Tambara module, we can evaluate it at any two objects of $\mathscr{C}$. This process is functorial in the choice of Tambara module, giving a functor $(U-)(A,A') : \mathbf{Tamb}_\mathscr{M} \to \mathbf{Set}$.
\begin{lemma}\label{lemma-rep}
The functor $(U-)(A,A') : \mathbf{Tamb}_\mathscr{M} \to \mathbf{Set}$ is representable: there is an isomorphism $(U-)(A,A') \cong \mathbf{Tamb}_\mathscr{M}(\mathrm{Pastro}_\mathscr{M} E_{A, A'}, -)$.
\end{lemma}
\begin{proof}
We have the chain of isomorphisms:
\begin{align*}
&(U-)(A,A') \\
\cong \;&\int_{X,Y \in \mathscr{C}} \mathbf{Set}(\mathscr{C}(X,A) \times \mathscr{C}(A',Y), (U-)(X,Y)) && \text{(by Yoneda reduction twice)} \\
=\;&\int_{X,Y \in \mathscr{C}} \mathbf{Set}(E_{A, A'}(X,Y), (U-)(X,Y)) && \text{(by definition)}\\
\cong \;&\mathbf{Prof}(E_{A, A'}, U-) && \text{(natural transformations as ends)} \\
\cong \;&\mathbf{Tamb}_\mathscr{M}(\mathrm{Pastro}_\mathscr{M} E_{A, A'}, -) && \text{(by adjointness)}
\end{align*}
\end{proof}
Note that the value of $\mathrm{Pastro}_\mathscr{M} E_{A, A'}$ at $(X,Y)$ is precisely the set of optics $(X, Y) \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$:
\begin{align*}
\mathrm{Pastro}_\mathscr{M} E_{A, A'} (X, Y)
&= \int^{M \in \mathscr{M}} \int^{C,D \in \mathscr{C}} \mathscr{C}(X, M\cdot C) \times E_{A, A'}(C,D) \times \mathscr{C}(M\cdot D, Y) \\
&= \int^{M \in \mathscr{M}} \int^{C,D \in \mathscr{C}} \mathscr{C}(X, M\cdot C) \times \mathscr{C}(C, A) \times \mathscr{C}(A', D) \times \mathscr{C}(M\cdot D, Y) \\
&\cong \int^{M \in \mathscr{M}} \mathscr{C}(X, M\cdot A) \times \mathscr{C}(M\cdot A', Y)
\end{align*}
For convenience we identify $\mathrm{Pastro}_\mathscr{M} E_{A, A'}(X,Y)$ with $\mathbf{Optic}_\mathscr{M}((X, Y), (A, A'))$.
We can now show that profunctor optics are precisely optics in the ordinary sense.
\begin{proposition}[Profunctor Optics are Optics]\label{prop:profunctor-optics-are-optics}
\begin{align*}
[\mathbf{Tamb}_\mathscr{M}, \mathbf{Set}]((U-)(A,A'),(U-)(S,S')) &\cong \mathbf{Optic}_\mathscr{M}((S, S'), (A, A'))
\end{align*}
\end{proposition}
\begin{proof}
We have the chain of isomorphisms:
\begin{align*}
&[\mathbf{Tamb}_\mathscr{M}, \mathbf{Set}]((U-)(A,A'),(U-)(S,S')) \\
\cong \;&[\mathbf{Tamb}_\mathscr{M}, \mathbf{Set}](\mathbf{Tamb}_\mathscr{M}(\mathrm{Pastro}_\mathscr{M} E_{A, A'}, -), (U-)(S,S')) && \text{(by Lemma~\ref{lemma-rep})}\\
\cong \;&(U\mathrm{Pastro}_\mathscr{M} E_{A, A'})(S,S') && \text{(by Yoneda)} \\
= \;&\mathbf{Optic}_\mathscr{M}((S, S'), (A, A'))
\end{align*}
\end{proof}
For $p : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$, let $\tilde{p} : (U-)(A,A') \Rightarrow (U-)(S,S')$ denote the corresponding natural transformation under this isomorphism, and for $t : (U-)(A,A') \Rightarrow (U-)(S,S')$, let $\hat{t} : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ be the corresponding optic.
\begin{corollary}
A profunctor optic $t$ is determined by its component at $\mathrm{Pastro}_\mathscr{M} E_{A, A'}$, and furthermore, this component is determined by its value on $\rep{\lambda_A^{-1}}{\lambda_{A'}} \in (U \mathrm{Pastro}_\mathscr{M} E_{A, A'})(A, A')$.
\end{corollary}
\begin{proof}
This is the content of the first two isomorphisms above. Explicitly, suppose $p = \rep{l}{r}$ with $l : S \to M\cdot A$ and $r : M\cdot A' \to S'$. Then for any Tambara module $P$, the component of $\tilde{p}$ at $P$ is
\begin{align*}
\tilde{p}_P = (UP)(l,r) \zeta_{A,A',M}
\end{align*}
where $\zeta$ is the module structure for $P$. In particular,
\[
\tilde{p}_{\mathrm{Pastro}_\mathscr{M} E_{A, A'}}(\rep{\lambda_A^{-1}}{\lambda_{A'}}) = \rep{l}{r}
\]
\end{proof}
We finish with one final isomorphic description of an optic:
\begin{proposition}
$\mathbf{Optic}_\mathscr{M}((S, S'), (A, A'))$ is isomorphic to $\mathbf{Tamb}_\mathscr{M}(\mathrm{Pastro}_\mathscr{M} E_{S, S'}, \mathrm{Pastro}_\mathscr{M} E_{A, A'})$.
\end{proposition}
\begin{proof}
This follows from the previous two propositions and the Yoneda lemma:
\begin{align*}
&\mathbf{Optic}_\mathscr{M}((S, S'), (A, A')) \\
&\cong [\mathbf{Tamb}_\mathscr{M}, \mathbf{Set}]((U-)(A,A'),(U-)(S,S')) \\
&\cong [\mathbf{Tamb}_\mathscr{M}, \mathbf{Set}](\mathbf{Tamb}_\mathscr{M}(\mathrm{Pastro}_\mathscr{M} E_{A, A'}, -),\mathbf{Tamb}_\mathscr{M}(\mathrm{Pastro}_\mathscr{M} E_{S, S'}, -)) \\
&\cong \mathbf{Tamb}_\mathscr{M}(\mathrm{Pastro}_\mathscr{M} E_{S, S'}, \mathrm{Pastro}_\mathscr{M} E_{A, A'})
\end{align*}
Explicitly, an optic $p = \rep{l}{r}$ corresponds to the natural transformation with components:
\begin{align*}
t_{X, Y} : \mathrm{Pastro}_\mathscr{M} E_{S, S'}(X, Y) \to \mathrm{Pastro}_\mathscr{M} E_{A, A'}(X, Y) \\
t_{X, Y}(\rep{f}{g}) = \rep{(M\cdot l)f}{g(M\cdot r)}
\end{align*}
where $M$ is the residual for the representative $\rep{f}{g}$.
This is exactly the formula for optic composition!
\end{proof}
\subsection{Lawful Profunctor Optics}
The next goal is to characterise the profunctor optics that correspond to lawful optics.
The exchange profunctor $E_{A, A}$, hereafter abbreviated to $E_A$, has a comonoid structure, where the comultiplication $\Delta : E_A \to E_A \odot E_A$ and counit $\varepsilon : E_A \to I$ are given by
\begin{align*}
\Delta_{X, Y} : (E_A)(X, Y) &\to (E_A \odot E_A)(X, Y) \\
\Delta_{X, Y}(\rep{f}{g}) &= \repthree{f}{\mathrm{id}_A}{g} \\
\varepsilon_{X, Y} : (E_A)(X, Y) &\to \mathscr{C}(X, Y) \\
\varepsilon_{X, Y}(\rep{f}{g}) &= gf
\end{align*}
respectively. Here we have identified $E_A \odot E_A$ with the profunctor $\mathscr{C}(-, A) \times \mathscr{C}(A, A) \times \mathscr{C}(A, =)$, via the isomorphism
\begin{align*}
E_A \odot E_A
&= \int^{Z \in \mathscr{C}} E_A(-, Z) \times E_A(Z, =) \\
&= \int^{Z \in \mathscr{C}} \mathscr{C}(-, A) \times \mathscr{C}(A, Z) \times \mathscr{C}(Z, A) \times \mathscr{C}(A, =) \\
&\cong \mathscr{C}(-, A) \times \mathscr{C}(A, A) \times \mathscr{C}(A, =)
\end{align*}
Because $\mathrm{Pastro}_\mathscr{M}$ is oplax monoidal, the Tambara module $\mathrm{Pastro}_\mathscr{M} E_A$ has an induced comonoid structure, in this case given by
\begin{align*}
\Delta_{X, Y} : (\mathrm{Pastro}_\mathscr{M} E_A)(X, Y) &\to (\mathrm{Pastro}_\mathscr{M} E_A \odot \mathrm{Pastro}_\mathscr{M} E_A)(X, Y) \\
\Delta(\rep{l}{r}) &= \repthree{l}{\mathrm{id}_{M\cdot A}}{r} \\
\varepsilon_{X, Y} : (\mathrm{Pastro}_\mathscr{M} E_A)(X, Y) &\to \mathscr{C}(X, Y) \\
\varepsilon(\rep{l}{r}) &= rl
\end{align*}
The connection with lawfulness is hopefully now evident!
\begin{proposition}\label{prop:lawful-if-homomorphism}
An optic $p : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ is lawful iff the corresponding natural transformation $\mathrm{Pastro}_\mathscr{M} E_S \rightarrow \mathrm{Pastro}_\mathscr{M} E_A$ is a comonoid homomorphism.
\end{proposition}
\begin{proof}
For $t : \mathrm{Pastro}_\mathscr{M} E_S \rightarrow \mathrm{Pastro}_\mathscr{M} E_A$ to be a comonoid homomorphism means that the following diagrams commute for every $X, Y \in \mathscr{C}$:
\[
\begin{tikzcd}
(\mathrm{Pastro}_\mathscr{M} E_S)(X, Y) \ar[r, "t_{X, Y}"] \ar[d, "\varepsilon_{X,Y}" swap] & (\mathrm{Pastro}_\mathscr{M} E_A)(X, Y) \ar[d, "\varepsilon_{X,Y}"] \\
\mathscr{C}(X, Y) \ar[r, equals] & \mathscr{C}(X, Y)
\end{tikzcd}
\quad
\begin{tikzcd}
(\mathrm{Pastro}_\mathscr{M} E_S)(X, Y) \ar[r, "t_{X, Y}"] \ar[d, "\Delta_{X, Y}" swap] & (\mathrm{Pastro}_\mathscr{M} E_A)(X, Y) \ar[d, "\Delta_{X, Y}"] \\
(\mathrm{Pastro}_\mathscr{M} E_S \odot \mathrm{Pastro}_\mathscr{M} E_S)(X, Y) \ar[r, "(t \odot t)_{X, Y}" swap] & (\mathrm{Pastro}_\mathscr{M} E_A \odot \mathrm{Pastro}_\mathscr{M} E_A)(X, Y)
\end{tikzcd}
\]
Suppose $t$ corresponds to an optic with representative $\rep{l}{r}$ with residual $M$, and that we have an element $\rep{f}{g} \in (\mathrm{Pastro}_\mathscr{M} E_S)(X, Y)$ with residual $N$. The left diagram requires that
\begin{align*}
g(N\cdot r)(N\cdot l)f = gf,
\end{align*}
as an element of $\mathscr{C}(X, Y)$. This is certainly true as $rl = \mathrm{id}_S$. The right diagram claims that
\begin{align*}
\repthree{(N\cdot l)f}{\mathrm{id}_{N\cdot M\cdot A}}{g(N \cdot r)} = \repthree{(N\cdot l)f}{(N\cdot r)(N\cdot l)}{g(N\cdot r)}
\end{align*}
But this holds by exactly the same argument as used in Proposition~\ref{prop:lawful-category} to show that the composite of lawful optics is lawful: by transplanting the relations showing the second optic law for $\rep{l}{r}$.
For the backward direction, consider the above diagrams specialised to $X = Y = S$. Tracing the element $\rep{\lambda_S^{-1}}{\lambda_S} \in (\mathrm{Pastro}_\mathscr{M} E_S)(S, S)$ around the commutative diagrams yields precisely the first and second optic laws respectively.
\end{proof}
All that is needed to complete the connection with profunctor optics is the following standard result in category theory.
\begin{lemma}
For an object $X$ in a monoidal category $(\mathscr{C}, \otimes, I)$, a comonoid structure $(X,\Delta,\varepsilon)$ is equivalent to a lax monoidal structure on the functor $\mathscr{C}(X, -) : \mathscr{C} \to \mathbf{Set}$, considering $\mathbf{Set}$ as a monoidal category with respect to $\times$.
Further, a morphism $(X_1,\Delta_1,\varepsilon_1) \to (X_2,\Delta_2,\varepsilon_2)$ is a comonoid homomorphism iff the induced natural transformation $\mathscr{C}(X_2, -) \Rightarrow \mathscr{C}(X_1, -)$ is monoidal.
\end{lemma}
\begin{proof}
This is a follow-your-nose result!
\end{proof}
\begin{theorem}
$p : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ is a lawful optic iff the associated natural transformation $\tilde{p} : (U-)(A,A) \Rightarrow (U-)(S,S)$ is monoidal with respect to the canonical lax monoidal structures on $(U-)(A,A)$ and $(U-)(S,S)$.
\end{theorem}
\begin{proof}
\begin{align*}
& p : S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A \text{ is lawful} \\
\Leftrightarrow\; & \mathrm{Pastro}_\mathscr{M} E_S \to \mathrm{Pastro}_\mathscr{M} E_A \text{ is a comonoid homomorphism} \\
\Leftrightarrow\; & \mathbf{Tamb}_\mathscr{M}(\mathrm{Pastro}_\mathscr{M} E_A, -) \Rightarrow \mathbf{Tamb}_\mathscr{M}(\mathrm{Pastro}_\mathscr{M} E_S, -) \text{ is a monoidal natural transformation} \\
\Leftrightarrow\; & (U-)(A, A) \Rightarrow (U-)(S, S) \text{ is a monoidal natural transformation}
\end{align*}
\end{proof}
\subsection{Implementation}
We quickly review how the profunctor encoding is translated into code in the Haskell~\cite{LensLibrary} and Purescript~\cite{PurescriptLibrary} libraries. We define a typeclass for profunctors:
\begin{minted}{haskell}
class Profunctor p where
dimap :: (a -> b) -> (c -> d) -> p b c -> p a d
\end{minted}
To be considered a valid instance of \mintinline{haskell}{Profunctor}, the function \mintinline{haskell}{dimap} must behave functorially. Now, for each optic variant we wish to define, we create a typeclass for the corresponding Tambara module. In the case of $\mathbf{Lens}$es, this typeclass is named \mintinline{haskell}{Strong}:
\begin{minted}{haskell}
class Profunctor p => Strong p where
second :: p a b -> p (c, a) (c, b)
\end{minted}
This \mintinline{haskell}{second} function is the equivalent of the structure map $\zeta$ for the Tambara module. We require this map to satisfy the Tambara module coherences, but as with any definition in Haskell, these equations must be checked manually.
Now the type of lenses $(S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ is the direct translation of the set of natural transformations $(U-)(A,A') \Rightarrow (U-)(S,S')$:
\begin{minted}{haskell}
type Lens s s' a a' = forall p. Strong p => p a a' -> p s s'
\end{minted}
where we use parametricity in \mintinline{haskell}{p} as a proxy for naturality. A profunctor lens
\begin{minted}{haskell}
l :: forall p. Strong p => p a a -> p s s
\end{minted}
is lawful if it is monoidal as a natural transformation. In code this is:
\begin{minted}{haskell}
l id == id
l (Procompose p q) == Procompose (l p) (l q)
\end{minted}
where
\begin{minted}{haskell}
data Procompose p q d c where
Procompose :: p x c -> q d x -> Procompose p q d c
\end{minted}
denotes profunctor/Tambara module composition, once equipped with appropriate \mintinline{haskell}{Profunctor} and \mintinline{haskell}{Strong} instances.
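To make the above concrete, here is a minimal, self-contained sketch. The helper names \mintinline{haskell}{lens}, \mintinline{haskell}{view}, \mintinline{haskell}{over} and \mintinline{haskell}{Forget} are ours for illustration (the real libraries spell these slightly differently); the point is how a profunctor lens is built from a getter/setter pair, and how the concrete accessors are recovered by instantiating the quantified \mintinline{haskell}{p}:
\begin{minted}{haskell}
{-# LANGUAGE RankNTypes #-}

class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

class Profunctor p => Strong p where
  second :: p a b -> p (c, a) (c, b)

type Lens s s' a a' = forall p. Strong p => p a a' -> p s s'

-- Build a lens from a getter and a setter: pair the source with its
-- focus, let `second` act on the focus, then reassemble.
lens :: (s -> a) -> (s -> a' -> s') -> Lens s s' a a'
lens get put p = dimap (\s -> (s, get s)) (\(s, a') -> put s a') (second p)

-- Instantiating p = (->) recovers modification...
instance Profunctor (->) where
  dimap f g h = g . h . f
instance Strong (->) where
  second f (c, a) = (c, f a)

over :: Lens s s' a a' -> (a -> a') -> s -> s'
over l = l

-- ...and the constant profunctor `Forget` recovers the getter.
newtype Forget r a b = Forget { runForget :: a -> r }
instance Profunctor (Forget r) where
  dimap f _ (Forget h) = Forget (h . f)
instance Strong (Forget r) where
  second (Forget h) = Forget (h . snd)

view :: Lens s s' a a' -> s -> a
view l = runForget (l (Forget id))

-- Example: the lens focusing on the first component of a pair.
_1 :: Lens (a, c) (a', c) a a'
_1 = lens fst (\(_, c) a' -> (a', c))
\end{minted}
For instance, \mintinline{haskell}{view _1 (1, "x")} yields \mintinline{haskell}{1}, while \mintinline{haskell}{over _1 (+1) (1, "x")} yields \mintinline{haskell}{(2, "x")}.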
\subsection{The van Laarhoven Encoding}\label{sec:van-laarhoven}
Some optic variants can be encoded in a profunctor-like style without requiring the full complexity of profunctors. Chronologically this development came before profunctor optics, and was first introduced by Twan van Laarhoven~\cite{VanLaarhovenPost}.
The van Laarhoven encoding for \mintinline{haskell}{Lens}es, \mintinline{haskell}{Traversal}s and \mintinline{haskell}{Setter}s is:
\begin{minted}{haskell}
type Lens s a = forall f. Functor f => (a -> f a) -> (s -> f s)
type Traversal s a = forall f. Applicative f => (a -> f a) -> (s -> f s)
type Setter s a = forall f. Settable f => (a -> f a) -> (s -> f s)
\end{minted}
What allows such a description to work for these particular optic variants is that the Tambara module that characterises them, $\mathrm{Pastro}_\mathscr{M} E_A$, can be written in the form $\mathscr{C}(-, \mintinline{haskell}{f}=)$ for some \mintinline{haskell}{f} that is an instance of the corresponding typeclass. This is possible in particular for the optic variants that admit a coalgebraic description; the ones for which the evaluation-at-$A$ functor has a right adjoint.
No expressive power is lost by defining an optic to operate only on functions of the shape \mintinline{haskell}{a -> f a'}, as the entire concrete description of the optic can be extracted from its value on that particular Tambara module. The same is not true for other optic variants, and indeed in the Haskell \texttt{lens}{} library, \mintinline{haskell}{Prism}s and \mintinline{haskell}{Review}s take a form much closer to the profunctor encoding. (The \texttt{lens}{} library does not use \emph{precisely} the profunctor encoding even here, for backwards compatibility reasons.)
A consequence is that the laws typically given for $\mathbf{Traversal}$s actually only need to be checked for the applicative functor we earlier called $UR^*$. In Haskell this functor is implemented as \mintinline{haskell}{FunList}~\cite{FunListPost} or \mintinline{haskell}{Bazaar}~\cite{LensLibrary}.
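As a small illustration (again with helper names of our own choosing), here is the van Laarhoven lens onto the first component of a pair, together with the recovery of its getter and modifier by instantiating the quantified functor at \mintinline{haskell}{Const} and \mintinline{haskell}{Identity}:
\begin{minted}{haskell}
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

type Lens s a = forall f. Functor f => (a -> f a) -> (s -> f s)

-- The van Laarhoven lens onto the first component of a pair.
_1 :: Lens (a, c) a
_1 f (a, c) = fmap (\a' -> (a', c)) (f a)

-- Instantiating f = Const a reads the focus...
view :: Lens s a -> s -> a
view l = getConst . l Const

-- ...while f = Identity modifies it.
over :: Lens s a -> (a -> a) -> s -> s
over l g = runIdentity . l (Identity . g)
\end{minted}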
\section{Future Work}
There are many avenues for future exploration!
\subsection{Mixed Optics}
One can generalise the definition of $\mathbf{Optic}$ so that the two halves lie in different categories. Suppose $\mathscr{C}_L$ and $\mathscr{C}_R$ are categories that are acted on by a common monoidal category $\mathscr{M}$. Write these actions as ${\circled{\tiny$\mathsf{L}$}} : \mathscr{M} \to [\mathscr{C}_L, \mathscr{C}_L]$ and ${\circled{\tiny$\mathsf{R}$}} : \mathscr{M} \to [\mathscr{C}_R, \mathscr{C}_R]$ respectively.
\begin{definition}
Given two objects of $\mathscr{C}_L \times \mathscr{C}_R^\mathrm{op}$, say $(S, S')$ and $(A, A')$, a \emph{mixed optic} $p : (S, S') \ensuremath{\,\mathaccent\shortmid\rightarrow\,} (A, A')$ for ${\circled{\tiny$\mathsf{L}$}}$ and ${\circled{\tiny$\mathsf{R}$}}$ is an element of the set
\begin{align*}
\mathbf{Optic}_{{\circled{\tiny$\mathsf{L}$}}, {\circled{\tiny$\mathsf{R}$}}}((S, S'), (A, A')) := \int^{M \in \mathscr{M}} \mathscr{C}_L(S, M {\circled{\tiny$\mathsf{L}$}} A) \times \mathscr{C}_R(M {\circled{\tiny$\mathsf{R}$}} A', S')
\end{align*}
\end{definition}
$\mathbf{Optic}_{{\circled{\tiny$\mathsf{L}$}}, {\circled{\tiny$\mathsf{R}$}}}$ forms a category. It is not so clear what notion of lawfulness is appropriate in this setting.
Examples of mixed optics include the \emph{degenerate optics} of the \texttt{lens}{} library: \mintinline{haskell}{Getter}s, \mintinline{haskell}{Review}s and \mintinline{haskell}{Fold}s. The mixed optic formalism also appears able to capture \emph{indexed optics} such as \mintinline{haskell}{IndexedLens}es and \mintinline{haskell}{IndexedTraversal}s~\cite{ProfunctorOpticsPost}.
\subsection{Monotonic Lenses}
In the bidirectional transformation community, the $\textsc{Put}\textsc{Put}$ law is often considered too strong. In particular, we have seen that in $\mathbf{Set}$, together with the other laws, it implies that $\textsc{Get}$ must be a projection from a product.
To overcome this we work in $\mathbf{Cat}$, so that the objects under consideration have internal morphisms that we think of as updates. We modify $\textsc{Put}$ so that, instead of accepting an object $a$ of $A$ with which to overwrite the original in $S$, it requires a morphism in $A$ of the form $\textsc{Get}(s) \to a$. In this way we are restricted in what updates we may perform. This is captured in the following definition:
\begin{definition}[{\cite[Definition 4.1]{LensesFibrationsAndUniversalTranslations}}]
A \emph{c-lens} $S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ in $\mathbf{Cat}$ is a pair of functors
\begin{align*}
\textsc{Get} &: S \to A \\
\textsc{Put} &: (\textsc{Get} \downarrow \mathrm{id}_{A}) \to S
\end{align*}
such that a version of the three lens laws holds, where $(\textsc{Get} \downarrow \mathrm{id}_{A})$ denotes the comma category construction.
\end{definition}
We can rewrite this in a form that gives hope for a correspondence with some optic category:
\begin{theorem}
The data of a c-lens $S \ensuremath{\,\mathaccent\shortmid\rightarrow\,} A$ corresponds to a functor \[ S \to \int [(- / A), S] \] where $(-/A)$ denotes the slice category and $\int$ denotes (confusingly!) the Grothendieck construction.
Furthermore, a c-lens is lawful iff it is a coalgebra for the comonad of the adjunction
\[
\begin{tikzcd}[column sep = large]
{[A^\mathrm{op}, \mathbf{Cat}]} \ar[r, bend left, "\int"] \ar[r, phantom, "\bot" pos = 0.4] & \mathbf{Cat} \ar[l, bend left, "{X \mapsto [(-/A), X]}" below]
\end{tikzcd}
\]
\qed
\end{theorem}
It is not clear whether there is an action on $\mathbf{Cat}$ that generates this description as its concrete optics. There doesn't seem to be a natural place for an $A'$ to appear! We remain optimistic:
\begin{conjecture}
c-lenses are the lawful (possibly mixed) optics for some action on $\mathbf{Cat}$.
\end{conjecture}
\subsection{Functor and Monad Transformer Lenses}
These were considered by Edward Kmett~\cite{MonadTransformerLensesTalk} as a method for embedding pieces of a monad transformer stack into the whole. There is some debate about the correct categorical description of monad transformers~\cite{MonadTransformersAsMonoidTransformers, CalculatingMonadTransformersCategoryTheory}, so we do not attempt to say anything precise, but the perspective given here could help in a couple of ways.
Kmett considers optics for the operation of composing two monad transformers. The primary test-case was to embed \mintinline{haskell}{ReaderT} actions into \mintinline{haskell}{StateT} actions, but from the constant-complement perspective this is impossible: \mintinline{haskell}{StateT} does not factor as the composite of \mintinline{haskell}{ReaderT} with some other monad transformer. In this setting the constant-complement laws may be asking too much; the optic laws given here might be the correct notion of lawfulness for monad transformers.
Also, instead of considering optics within a category of monad transformers, we could instead look at optics for the action of monad transformers on monads. One can indeed define an optic $\mintinline{haskell}{State} \ensuremath{\,\mathaccent\shortmid\rightarrow\,} \mintinline{haskell}{Reader}$ that uses residual $\mintinline{haskell}{StateT}$. Whether this is lawful or useful is not clear!
\subsection{Learners}
A recent paper in applied category theory~\cite{BackpropAsFunctor} describes a compositional approach to machine learning, with a category whose morphisms describe learning algorithms.
\begin{definition}[{\cite[Definition 2.1]{BackpropAsFunctor}}]
For $A$ and $B$ sets, a \emph{learner} $A \ensuremath{\,\mathaccent\shortmid\rightarrow\,} B$ is a tuple $(P, I, U, r)$ where $P$ is a set, and $I$, $U$, and $r$ are functions of shape:
\begin{align*}
I &: P \times A \to B \\
U &: P \times A \times B \to P \\
r &: P \times A \times B \to A
\end{align*}
\end{definition}
To form a category, one must consider learners up to an equivalence relation on the sets $P$. There is an alternative slick description of the set of learners $A \ensuremath{\,\mathaccent\shortmid\rightarrow\,} B$, which goes as follows. Note that the data of a learner describes an element of the coend
\begin{align*}
\int^{P, Q \in \mathbf{Set}} \mathbf{Set}(P \times A, Q \times B) \times \mathbf{Set}(Q \times B, P \times A)
\end{align*}
via the isomorphisms
\begin{align*}
&\int^{P, Q \in \mathbf{Set}} \mathbf{Set}(P \times A, Q \times B) \times \mathbf{Set}(Q \times B, P \times A) \\
&\cong \int^{P, Q \in \mathbf{Set}} \mathbf{Set}(P \times A, Q) \times \mathbf{Set}(P \times A, B) \times \mathbf{Set}(Q \times B, P \times A) \\
&\cong \int^{P \in \mathbf{Set}} \mathbf{Set}(P \times A, B) \times \mathbf{Set}(P \times A \times B, P \times A) \\
&\cong \int^{P \in \mathbf{Set}} \mathbf{Set}(P \times A, B) \times \mathbf{Set}(P \times A \times B, P) \times \mathbf{Set}(P \times A \times B, A)
\end{align*}
Composition of learners can be defined analogously to composition for optics. This perspective explains the slight fussing around required in dealing with equivalence classes of learners, and suggests a generalisation to other monoidal categories.
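For concreteness, here is a direct Haskell sketch of learners and their composition (the record and helper names are ours; the composition threads requests backwards through $r$, following the formulas of \cite{BackpropAsFunctor}):
\begin{minted}{haskell}
-- A learner A -|-> B with parameter space p.
data Learner p a b = Learner
  { impl    :: p -> a -> b        -- I : P x A -> B
  , update  :: p -> a -> b -> p   -- U : P x A x B -> P
  , request :: p -> a -> b -> a   -- r : P x A x B -> A
  }

-- Composition: implementations chain forwards, while updates and
-- requests use the outer learner's request to translate feedback.
compose :: Learner q b c -> Learner p a b -> Learner (p, q) a c
compose l2 l1 = Learner
  { impl    = \(p, q) a -> impl l2 q (impl l1 p a)
  , update  = \(p, q) a c ->
      let b = impl l1 p a
      in  (update l1 p a (request l2 q b c), update l2 q b c)
  , request = \(p, q) a c ->
      let b = impl l1 p a
      in  request l1 p a (request l2 q b c)
  }

-- The identity learner has a trivial parameter space.
idLearner :: Learner () a a
idLearner = Learner (\_ a -> a) (\p _ _ -> p) (\_ _ b -> b)
\end{minted}
The equivalence relation mentioned above then identifies \mintinline{haskell}{compose idLearner l} with \mintinline{haskell}{l}, despite their different parameter spaces.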
\bibliographystyle{alpha}
\section*{Abstract}
The set $\mathfrak{L}$ of infinite-dimensional, symmetric stable tail dependence functions associated with exchangeable max-stable sequences of random variables with unit Fr\'echet margins is shown to be a simplex. Except for a single element, the extremal boundary of $\mathfrak{L}$ is in one-to-one correspondence with the set $\mathfrak{F}_1$ of distribution functions of non-negative random variables with unit mean. Consequently, each $\ell \in \mathfrak{L}$ is uniquely represented by a pair $(b,\mu)$ of a constant $b$ and a probability measure $\mu$ on $\mathfrak{F}_1$. A canonical stochastic construction for arbitrary exchangeable max-stable sequences and a stochastic representation for the Pickands dependence measure of finite-dimensional margins of $\ell$ are immediate corollaries. As by-products, a canonical analytical description and an associated canonical LePage series representation for non-decreasing stochastic processes that are strongly infinitely divisible with respect to time are obtained.
\textbf{Keywords:} exchangeable sequence, max-stable sequence, stable tail dependence function, extreme-value copula, strong IDT process, Pickands representation
\section{Motivation and mathematical background}
Before we start, we clarify some notation. We will denote the indexing argument of a stochastic process $f$ as a subindex by writing $f_t$, instead of $f(t)$. This is in order to emphasize that $f_t$ is a random variable, whereas when writing $f(t)$ we mean the (non-random) value of a deterministic function $f$ in the variable $t$. Further, for $x>0$ we define $x/0:=\infty$ in order to simplify notation. We denote by $[0,\infty)_{00}^{\mathbb{N}}$ the set of sequences $\vec{t}=(t_1,t_2,\ldots)$ with non-negative members that are eventually zero, i.e.\ $t_k=0$ for almost all $k \in \mathbb{N}$.
\par
An (infinite) sequence of random variables is called \emph{exchangeable} if its probability distribution is invariant with respect to permutations of finitely many (but arbitrarily chosen) constituents, see \cite{aldous85} for a textbook treatment. Let $\vec{Y}=(Y_1,Y_2,\ldots)$ be an exchangeable sequence of random variables with $\mathbb{E}[Y_1]=1$ and such that $\min_{k \geq 1}\{Y_k/t_k\}$ has an exponential distribution with rate $\ell(\vec{t})$ for arbitrary $\vec{t} \in [0,\infty)_{00}^{\mathbb{N}}$ except the zero sequence (or including it, when the exponential distribution with rate zero is interpreted as an atom at $\infty$). The sequence $\vec{Y}$ is said to be \emph{min-stable multivariate exponential} and its law is uniquely described by the function $\ell$, which is called the \emph{stable tail dependence function} of the sequence $\vec{Y}$. Indeed,
\begin{gather*}
\mathbb{P}(\vec{Y}>\vec{t}) = \mathbb{P}\big(\min_{k \geq 1}\{Y_k/t_k\}>1\big)= \exp\big\{ -\ell(\vec{t})\big\},\quad \vec{t} \in[0,\infty)_{00}^{\mathbb{N}},
\end{gather*}
with the first ``$>$''-sign understood componentwise. The function $\ell$ is symmetric in its arguments by exchangeability of $\vec{Y}$. Our goal is to determine the shape of the set of all symmetric stable tail dependence functions $\ell:[0,\infty)_{00}^{\mathbb{N}} \rightarrow [0,\infty)$, which we denote by $\mathfrak{L}$ in the sequel. The sequence $1/\vec{Y}=(1/Y_1,1/Y_2,\ldots)$ is called \emph{max-stable with unit Fr\'echet margins}, which makes the probability law of $\vec{Y}$ interesting for the field of multivariate extreme-value theory; for background, the interested reader is referred to \cite{segers12} and \cite[Chapter 6]{joe97}. Concretely, the set of all $d$-variate functions $C_{\ell}:[0,1]^d\rightarrow [0,1]$, defined by
\begin{gather*}
C_{\ell}(u_1,\ldots,u_d):=\exp\Big\{ - \ell\big( -\log(u_1),\ldots,-\log(u_d),0,0,\ldots\big)\Big\},\quad \ell \in \mathfrak{L},
\end{gather*}
constitutes a (proper) subfamily of exchangeable, $d$-dimensional \emph{extreme-value copulas}\footnote{For background, the interested reader is referred to \cite{gudendorf09}.}. More precisely, an exchangeable $d$-dimensional extreme-value copula $C$ is of the form $C_{\ell}$ if and only if there is an infinite exchangeable sequence $\vec{U}=(U_1,U_2,\ldots)$ on some probability space $(\Omega,\mathcal{G},\mathbb{P})$ such that $C$ equals the distribution function of $(U_1,\ldots,U_d)$. By virtue of De Finetti's Theorem this is the case if and only if there exists a sub-$\sigma$-algebra $\mathcal{T} \subset \mathcal{G}$ such that $U_1,\ldots,U_d$ (and $U_{d+1},\,U_{d+2},\ldots$ as well) are independent and identically distributed (iid) conditioned on $\mathcal{T}$. We recall from \cite[p.\ 26, Corollary 3.12]{aldous85} that $\mathcal{T}$ almost surely coincides with the tail-$\sigma$-field $\cap_{k \geq 1}\sigma(U_k,\,U_{k+1},\ldots)$ of $\vec{U}$. It is educational to remark, however, that there are exchangeable $d$-dimensional extreme-value copulas that are not of the form $C_{\ell}$, because in general the notion of ``infinite exchangeability'' (or, more loosely, ``conditionally iid'') is stronger than that of finite ($d$-dimensional) exchangeability. In analytical terms, a $d$-margin of some $\ell \in \mathfrak{L}$ is always a symmetric stable tail dependence function, but not every symmetric, $d$-variate stable tail dependence function is a $d$-margin of some $\ell \in \mathfrak{L}$.
\par
It is well known at least since \cite{dehaan84} that $\ell$ can be represented as
\begin{gather}
\ell(\vec{t}) = -\log\big\{ \mathbb{P}(\vec{Y}>\vec{t}) \big\}=\mathbb{E}\Big[ \max_{k \geq 1}\{t_k\,X_k\}\Big],
\label{spectralrepr}
\end{gather}
for some sequence $\vec{X}=(X_1,X_2,\ldots)$ of random variables with finite means\footnote{Even though we are only interested in distributional statements throughout, for the sake of a more intuitive exposition we find it sometimes convenient to express formulas like (\ref{spectralrepr}) in probabilistic notation with probability measure $\mathbb{P}$ and expectation $\mathbb{E}$ (as compared to writing integrals), with the generic random objects $\vec{X},\vec{Y}$ being viewed as defined on some generic probability space $(\Omega,\mathcal{G},\mathbb{P})$, on which we do not necessarily work.}. As an example\footnote{This is the example on page 1198 in \cite{dehaan84}.} for the representation (\ref{spectralrepr}), if $\vec{Y}$ has independent components which all have the unit exponential distribution, that is $\ell(\vec{t})=\sum_{k \geq 1}t_k$, the probability law of $\vec{X}$ can be defined via a vector of discrete probabilities $\vec{p}=(p_1,p_2,\ldots)$ as
\begin{gather}
\mathbb{P}\Big(\vec{X} = \frac{\vec{e}_k}{p_k} \Big) = p_k > 0,\quad k \geq 1,\quad \sum_{k \geq 1}p_k=1,
\label{repr_nonunique}
\end{gather}
where $\vec{e}_k$ denotes the sequence with all members equal to zero except for the $k$-th. The representation (\ref{spectralrepr}) of a stable tail dependence function is called a \emph{spectral representation}. As (\ref{repr_nonunique}) shows, it is not unique in general (i.e.\ different $\vec{X}$ can imply the same $\ell$, hence $\vec{Y}$). Furthermore, even though $\vec{Y}$ is assumed to be exchangeable in the present work, $\vec{X}$ need not be exchangeable and, in fact, the proof in \cite{dehaan84} constructs $\vec{X}$ from $\vec{Y}$ in such a way that $\vec{X}$ is not exchangeable (the particular example (\ref{repr_nonunique}) demonstrates this). Conversely, however, the spectral representation (\ref{spectralrepr}) can be used to construct models for $\vec{Y}$ by choosing convenient models for $\vec{X}$ that allow the expected value in (\ref{spectralrepr}) to be computed in closed form, as highlighted in \cite{segers12}. If one pursues this strategy and starts with an exchangeable $\vec{X}$, one obtains an exchangeable sequence $\vec{Y}$, but to the best of our knowledge it is an open question (solved by the present article) whether all exchangeable min-stable multivariate exponential $\vec{Y}$ can be obtained in such a way.
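As a quick sanity check, plugging the law (\ref{repr_nonunique}) into (\ref{spectralrepr}) indeed recovers the independence case: on the event $\{\vec{X}=\vec{e}_k/p_k\}$ the maximum equals $t_k/p_k$, so that
\begin{align*}
\mathbb{E}\Big[ \max_{j \geq 1}\{t_j\,X_j\}\Big] = \sum_{k \geq 1} p_k\,\max_{j \geq 1}\Big\{t_j\,\frac{(\vec{e}_k)_j}{p_k}\Big\} = \sum_{k \geq 1} p_k\,\frac{t_k}{p_k} = \sum_{k \geq 1}t_k = \ell(\vec{t}).
\end{align*}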
\par
We denote by $\ell^{(d)}$ the restriction of $\ell$ to the first $d \in \mathbb{N}$ components, i.e.\ $\ell^{(d)}$ determines the law of $(Y_1,\ldots,Y_d)$. For stable tail dependence functions in finite dimensions, such as $\ell^{(d)}$, there exist different methods to obtain uniqueness of the spectral representation (\ref{spectralrepr}) by imposing certain restrictions on the law of $\vec{X}$. The most prominent one is the \emph{Pickands representation}, named after \cite{pickands81}, see also \cite{dehaan77,ressel13}, which states that if $\ell^{(d)}$ is the stable tail dependence function associated with some min-stable multivariate exponential random vector $\vec{Y}^{(d)}=(Y_1,\ldots,Y_d)$, then there is a random vector $\vec{X}^{(d)}=(X^{(d)}_1,\ldots,X^{(d)}_d)$, uniquely determined in law, which takes values on the $d$-dimensional unit simplex $S_d:=\{\vec{q} \in [0,1]^d\,:\,q_1+\ldots+q_d=1\}$ and satisfying $\mathbb{E}[X^{(d)}_k]=1/d$ for all $k=1,\ldots,d$, such that
\begin{gather}
\ell^{(d)}(\vec{t}) = d\,\mathbb{E}\big[ \max\{t_1\,X^{(d)}_1,\ldots,t_d\,X^{(d)}_d\}\big].
\label{Pickandsrepr}
\end{gather}
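For example, in the independence case $\ell^{(d)}(\vec{t})=t_1+\ldots+t_d$ the Pickands vector $\vec{X}^{(d)}$ is uniformly distributed on the vertices of $S_d$, that is $\mathbb{P}(\vec{X}^{(d)}=\vec{e}^{(d)}_j)=1/d$ for the standard unit vectors $\vec{e}^{(d)}_1,\ldots,\vec{e}^{(d)}_d$ of $\mathbb{R}^d$. Indeed, then $\mathbb{E}[X^{(d)}_k]=1/d$ as required and
\begin{gather*}
d\,\mathbb{E}\big[ \max\{t_1\,X^{(d)}_1,\ldots,t_d\,X^{(d)}_d\}\big] = d\,\sum_{j=1}^{d}\frac{1}{d}\,t_j = t_1+\ldots+t_d.
\end{gather*}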
In our infinite-dimensional setting, even though we assume exchangeability of $\vec{Y}=(Y_1,Y_2,\ldots)$, an unfortunate aspect of the Pickands representation is that the relation between the laws of $\vec{X}^{(d)}$ and $\vec{X}^{(d+1)}$ is not easy to understand; in particular, $\vec{X}^{(d)}$ is not a re-scaled $d$-margin of $\vec{X}^{(d+1)}$, as one might naively hope at first glance. Consequently, describing the infinite-dimensional, symmetric stable tail dependence function $\ell$ in terms of the collection of its finite-dimensional Pickands measures is neither easily accomplished nor algebraically natural.
\par
In the main body of this article, we derive a natural and convenient spectral representation for symmetric stable tail dependence functions. To wit, each $\ell \in \mathfrak{L}$ can be represented as
\begin{gather}
\ell(\vec{t}) = b\,\sum_{k \geq 1}t_k + (1-b)\,\mathbb{E}\big[ \max_{k \geq 1}\{t_k\,X_k\}\big],\quad \vec{t} \in [0,\infty)^{\mathbb{N}}_{00},
\label{STDFrepr_prob}
\end{gather}
with a constant $b \in [0,1]$ and an exchangeable sequence $\vec{X}$ satisfying $\mathbb{E}[X_1]=1$ (hence $\mathbb{E}[X_k]=1$ for each $k \geq 1$). Whereas the constant $b$ is unique, the law of $\vec{X}$ is still not unique in general, but it becomes unique if we postulate in addition that the conditional mean of $\vec{X}$, that is $\lim_{n \rightarrow \infty}\frac{1}{n}\sum_{k=1}^{n}X_k$, is identically equal to one. In particular, the stable tail dependence function $\ell(\vec{t})=\sum_{k \geq 1}t_k$, corresponding to independent members in $\vec{Y}$, cannot be represented via an exchangeable $\vec{X}$. However, the canonical representation (\ref{STDFrepr_prob}) shows that the independence case occupies an isolated role in this regard.
\par
By virtue of De Finetti's Theorem, see \cite{definetti31,definetti37} and \cite[p.\ 19 ff]{aldous85}, studying the law of the exchangeable sequence $\vec{Y}$ is tantamount to a study of the law of a random distribution function $F=\{F_t\}_{t \geq 0}$ that is defined by $F_t:=\mathbb{P}(Y_1 \leq t\,|\,\mathcal{T})$, with $\mathcal{T}=\cap_{k \geq 1}\sigma(Y_k,\,Y_{k+1},\ldots)$ denoting the tail-$\sigma$-field of $\vec{Y}$. A result of \cite{maischerer13} shows that the stochastic process $H:=-\log(1-F)$ is\footnote{\cite{molchanov18} call the ``strong IDT'' processes ``time-stable'' processes, but we prefer to stick with the original nomenclature.} \emph{strongly infinitely divisible with respect to time (strong IDT)}, meaning that
\begin{gather*}
\{H_t\}_{t \geq 0} \stackrel{d}{=} \big\{ H^{(1)}_{\frac{t}{n}}+\ldots+ H^{(n)}_{\frac{t}{n}}\big\}_{t \geq 0},\quad \forall n \in \mathbb{N},
\end{gather*}
where $\stackrel{d}{=}$ denotes equality in law and $H^{(i)}$ are independent copies of $H$. Conversely, given a non-decreasing, right-continuous strong IDT process $H$ and an independent sequence $\{\eta_k\}_{k \geq 1}$ of iid unit exponential variables, the exchangeable, min-stable multivariate exponential sequence $\vec{Y}$ can be represented as
\begin{gather}
Y_k:=\inf\{t>0\,:\,H_t>\eta_k\},\quad k \in \mathbb{N},
\label{ciiddef}
\end{gather}
establishing a canonical stochastic representation, which is conditionally iid in the sense of De Finetti's Theorem. If $H$ is normalized to satisfy $\mathbb{E}[\exp(-H_1)]=\exp(-1)$, it follows that $\mathbb{E}[Y_1]=1$, so that the function
\begin{gather*}
\ell(\vec{t}):=-\log\{\mathbb{P}(\vec{Y}>\vec{t})\}=-\log\Big\{\mathbb{E}\Big[e^{-\sum_{k \geq 1}H_{t_k}}\Big]\Big\},\quad \vec{t} \in [0,\infty)^{\mathbb{N}}_{00},
\end{gather*}
lies in $\mathfrak{L}$.
\par
In fact, in our proof of (\ref{STDFrepr_prob}) we rely heavily on the concept of strong IDT processes, for which \cite{molchanov18} recently have derived a convenient series representation, which we make use of. Translating the analytical result (\ref{STDFrepr_prob}) on symmetric stable tail dependence functions into the language of these processes then implies that each non-decreasing strong IDT process is uniquely determined by a triplet $(b,c,\mu)$ of constants $b \geq 0$, $c>0$ and a probability measure $\mu$ on the set of distribution functions of non-negative random variables with unit mean, as we will see.
\par
Regarding the organization of the remaining article, we prove and discuss the main result (\ref{STDFrepr_prob}) in Section \ref{sec_proof}, and we conclude in Section \ref{sec_concl}.
\section{The structure of $\mathfrak{L}$}\label{sec_proof}
We denote by $\ell_{\Pi}(\vec{t}):=\sum_{k \geq 1}t_k$ the stable tail dependence function associated with an iid sequence of unit exponentials. We denote by $\mathfrak{F}$ (resp.\ $\mathfrak{F}_1$) the set of all distribution functions of non-negative random variables with finite (resp.\ unit) mean. Then with $F \in \mathfrak{F}_1$ the function
\begin{gather*}
\ell_F(\vec{t}):=\int_0^{\infty}1-\prod_{k \geq 1}F\Big( \frac{s}{t_k}\Big)\,\mathrm{d}s,\quad \vec{t} \in [0,\infty)^{\mathbb{N}}_{00},
\end{gather*}
defines a symmetric stable tail dependence function, which is investigated thoroughly in \cite{mai17}. In this definition, implicitly we mean $F(\infty)=1$ for those $t_k$ that are zero. We remark that $\ell_F$ has the stochastic representation $\ell_F(\vec{t})=\mathbb{E}[\max_{k \geq 1}\{t_k\,X_k\}]$, where $\vec{X}=(X_1,X_2,\ldots)$ is an iid sequence drawn from $F$. We seek to show that $\mathfrak{L}$ (equipped with the topology of pointwise convergence) is a simplex with extremal boundary $\partial_e \mathfrak{L}=\{\ell_{\Pi}\}\cup \{\ell_F\,:\,F \in \mathfrak{F}_1\}$, which is the main contribution of the present work, see Theorem \ref{thm_main} and Corollary \ref{cor_uni} below.
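For illustration, let $F(x)=1-e^{-x}$ be the unit exponential distribution function, an element of $\mathfrak{F}_1$. For $\vec{t}=(t_1,t_2,0,0,\ldots)$ with $t_1,t_2>0$ we obtain
\begin{align*}
\ell_F(\vec{t}) &= \int_0^{\infty}1-\big(1-e^{-\frac{s}{t_1}}\big)\big(1-e^{-\frac{s}{t_2}}\big)\,\mathrm{d}s\\
&= \int_0^{\infty}e^{-\frac{s}{t_1}}+e^{-\frac{s}{t_2}}-e^{-s(\frac{1}{t_1}+\frac{1}{t_2})}\,\mathrm{d}s = t_1+t_2-\frac{t_1\,t_2}{t_1+t_2},
\end{align*}
a symmetric bivariate stable tail dependence function.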
\par
The key idea of the presented proof relies on the aforementioned link to strong IDT processes, found in \cite[Theorem 5.3]{maischerer13}. In a recent article, \cite[Theorem 4.2]{molchanov18} show that a non-negative\footnote{More generally, \cite{molchanov18} consider strong IDT processes without Gaussian component, but only the subclass of non-negative ones is of interest in the present article.}, c\`adl\`ag, stochastically continuous, strong IDT process without Gaussian component admits a LePage series representation
\begin{gather}
\{H_t\}_{t \geq 0} \stackrel{d}{=} \Big\{b\,t+\sum_{k \geq 1}f^{(k)}_{\frac{t}{\epsilon_1+\ldots+\epsilon_k}}\Big\}_{t \geq 0},
\label{molchanov_repr}
\end{gather}
where $\{f^{(k)}\}_{k \geq 1}$ is a sequence of independent copies of a non-vanishing c\`adl\`ag stochastic process $f=\{f_t\}_{t \geq 0}$ with $f_0=0$ (denote the space of all such functions by $\mathfrak{D}$ in the sequel) and, independently, $\{\epsilon_k\}_{k \geq 1}$ is a list of iid unit exponentials. Furthermore, the process $f$ satisfies
\begin{gather}
\int_{\mathfrak{D}} \int_0^{\infty}\min\{1,|f(u)|\}\,\frac{\mathrm{d}u}{u^2}\,\gamma(\mathrm{d}f)<\infty,
\label{intcondition}
\end{gather}
with $\gamma$ denoting the probability law of $f$ on $\mathfrak{D}$. In general, the law of $f$ in this representation of a non-negative strong IDT process is non-unique. However, the following auxiliary lemma shows that if $H$ is non-decreasing, we can at least learn that $f$ is non-decreasing as well. This proof is the most technical step towards Theorem \ref{thm_main} below, and it is a result of independent interest as well.
\begin{lemma}[Non-decreasing strong IDT processes] \label{lemma_molchanov}
If $H$ is a non-decreasing, right-continuous strong IDT process, the stochastic process $f$ of any LePage series representation (\ref{molchanov_repr}) is necessarily non-decreasing and $b \geq 0$.
\end{lemma}
\begin{proof}
First notice that non-decreasingness, right-continuity, and the strong IDT property imply stochastic continuity of $H$. This follows from the fact that for each fixed $t>0$ the random variable $H_{t-}:=\lim_{x \uparrow t}H_x$ exists in $[0,\infty]$ by non-decreasingness and has the same infinitely divisible law as the random variable $H_t$ (since $\mathbb{E}[\exp(-u\,H_t)]=\exp\{-t\,\Psi_H(u)\}$ for some Bernstein function\footnote{See \cite{schilling10} for a textbook treatment on Bernstein functions.} $\Psi_H$, see \cite[Lemma 3.7]{maischerer13}). Thus, the non-negative random variable $H_t-H_{t-}$ has zero expectation and is therefore almost surely equal to zero. On the other hand, right-continuity implies that $H_{t+}:=\lim_{x \downarrow t}H_x=H_t$ almost surely. Consequently, the stochastic continuity assumption of \cite[Theorem 4.2]{molchanov18} is satisfied, hence there is a LePage series representation.
\par
Consider a probability space $(\Omega,\mathcal{G},\mathbb{P})$, on which $H$ is defined by the right-hand side of (\ref{molchanov_repr}). We first prove that $f$ is non-decreasing. The heuristic idea is to consider
\begin{gather*}
H_{\epsilon_1\,t} = f^{(1)}_t+ b\,\epsilon_1\,t+ \sum_{k \geq 2}f^{(k)}_{\frac{\epsilon_1\,t}{\epsilon_1+\ldots+\epsilon_k}},
\end{gather*}
and argue that there is a positive probability that $\epsilon_1$ is so small that $f^{(1)}_t$ is the dominating part on the right-hand side, from which one concludes that a violation of the non-decreasingness of $f^{(1)}$ on some interval $[x_1,x_2]$ would imply a violation of the non-decreasingness of $H$ on $[x_1/\epsilon_1,x_2/\epsilon_1]$.
\par
We now make this intuitive idea mathematically rigorous. We assume a violation of non-decreasingness of $f^{(1)}$, which means that there exist $\epsilon>0$ and $0 \leq x_1<x_2<\infty$ such that $\mathbb{P}(A_{f})>0$ for the event
\begin{gather*}
A_{f}:=\Big\{ f^{(1)}\mbox{ not non-decreasing on }[x_1,x_2] \mbox{ and }\inf_{x \in [x_1,x_2]}\{f^{(1)}_x-f^{(1)}_{x_1}\} \leq -\epsilon\Big\}.
\end{gather*}
Our goal is to show that this implies a violation of the non-decreasingness of $H$. For later use we define the $\sigma$-algebra $\mathcal{H}:=\sigma(\epsilon_k,f^{(k)}\,:\,k \geq 2)$ generated by all involved stochastic objects except for $\epsilon_1,f^{(1)}$.
\par
For a moment consider independent copies $\{g^{(k)}\}_{k \geq 1}$ of $g:=|f|$ and for each $x \in (0,1],\,t \geq 0$, let
\begin{align*}
\tilde{H}_t&:=\sum_{k \geq 2}g^{(k)}_{\frac{t}{\epsilon_2+\ldots+\epsilon_k}},\quad \tilde{H}^{(x)}_t:=\sum_{k \geq 2}g^{(k)}_{\frac{x\,t}{x+\epsilon_2+\ldots+\epsilon_k}},\\
\hat{H}^{(x)}_t&:=\sum_{k \geq 2}g^{(k)}_{\frac{x\,t}{\epsilon_2+\ldots+\epsilon_k}}\,1_{\{\epsilon_2+\ldots+\epsilon_k > x\}}.
\end{align*}
By \cite[Theorem 4.2]{molchanov18}, the process $\tilde{H}$ is non-negative, strong IDT, c\`adl\`ag, and satisfies $\tilde{H}_0=0$. Right-continuity at zero implies for each $x \in (0,1]$ that the first exit time of $\{\tilde{H}_{x\,t}\}_{t \geq 0}$ from the interval $[0,\epsilon/4]$ is almost surely positive, i.e.\
\begin{gather*}
\tilde{T}_x:=\inf\{t>0\,:\,\tilde{H}_{x\,t}>\epsilon/4\}>0.
\end{gather*}
Furthermore, it is obvious from the definition that $\tilde{T}_x=\tilde{T}_1/x$, which implies that $\lim_{x \searrow 0}\tilde{T}_x = \infty$ almost surely. We use the Laplace functional formula \cite[Proposition 3.6]{resnick87} for Poisson random measure to observe for $d \in \mathbb{N}$ and $t_1,y_1,\ldots,t_d,y_d \geq 0$ arbitrary that
\begin{align*}
\mathbb{E}\Big[e^{-\sum_{i=1}^{d}y_i\,\tilde{H}^{(x)}_{t_i}}\Big] &= \exp\Big( -\int_{\mathfrak{D}}\int_0^{\infty}1-e^{-\sum_{i=1}^{d}y_i\,|f|(x\,t_i/(x+s))}\,\mathrm{d}s\,\gamma(\mathrm{d}f)\Big)\\
& = \exp\Big( -x\,\int_{\mathfrak{D}}\int_0^{1}1-e^{-\sum_{i=1}^{d}y_i\,|f|(u\,t_i)}\,\frac{\mathrm{d}u}{u^2}\,\gamma(\mathrm{d}f)\Big),
\end{align*}
having applied the substitution $u=x/(x+s)$. An analogous computation with the substitution $u=x/s$ shows that
\begin{align*}
\mathbb{E}\Big[e^{-\sum_{i=1}^{d}y_i\,\hat{H}^{(x)}_{t_i}}\Big] &= \exp\Big( -\int_{\mathfrak{D}}\int_0^{\infty}1-e^{-\sum_{i=1}^{d}y_i\,|f|(x\,t_i/s)\,1_{\{s>x\}}}\,\mathrm{d}s\,\gamma(\mathrm{d}f)\Big)\\
& = \exp\Big( -x\,\int_{\mathfrak{D}}\int_0^{1}1-e^{-\sum_{i=1}^{d}y_i\,|f|(u\,t_i)}\,\frac{\mathrm{d}u}{u^2}\,\gamma(\mathrm{d}f)\Big),
\end{align*}
which shows that $\{\tilde{H}^{(x)}_t\}_{t \geq 0}$ has the same law as $\{\hat{H}_t^{(x)}\}_{t \geq 0}$. In particular, the first exit time of $\tilde{H}^{(x)}$ from $[0,\epsilon/4]$ is equal in law to that of $\hat{H}^{(x)}$. The process $\hat{H}^{(x)}$ evidently satisfies
\begin{gather*}
\hat{H}^{(x)}_t = \sum_{k \geq 2}g^{(k)}_{\frac{x\,t}{\epsilon_2+\ldots+\epsilon_k}}\,1_{\{\epsilon_2+\ldots+\epsilon_k > x\}} \leq \sum_{k \geq 2}g^{(k)}_{\frac{x\,t}{\epsilon_2+\ldots+\epsilon_k}}=\tilde{H}_{t\,x}.
\end{gather*}
For its first exit time from $[0,\epsilon/4]$ this gives the lower bound
\begin{gather*}
\inf\{t>0\,:\,\hat{H}^{(x)}_t>\epsilon/4\} \geq \inf\{t>0\,:\,\tilde{H}_{x\,t}>\epsilon/4\} = \tilde{T}_x,
\end{gather*}
which was shown to converge to infinity almost surely as $x \searrow 0$. Since $\tilde{H}^{(x)}\stackrel{d}{=}\hat{H}^{(x)}$, the first exit time of $\tilde{H}^{(x)}$ from $[0,\epsilon/4]$ is thus also shown to converge to infinity as $x \searrow 0$.
\par
Now the process of interest for us is
\begin{gather*}
{H}^{(x)}_t:=\sum_{k \geq 2}f^{(k)}_{\frac{x\,t}{x+\epsilon_2+\ldots+\epsilon_k}},\quad t \geq 0,
\end{gather*}
which satisfies $|H^{(x)}_t| \leq \tilde{H}^{(x)}_t$ for all $t$. Consequently, the first exit time $T_x$ of $H^{(x)}$ from the interval $[-\epsilon/4,\epsilon/4]$ is almost surely at least as large as that of $\tilde{H}^{(x)}$, which was shown above to converge to infinity almost surely as $x \searrow 0$. We may thus find an $\mathcal{H}$-measurable random variable $Z>0$ (notice that $H^{(x)}$ is $\mathcal{H}$-measurable) such that $T_x \geq x_2$ for all $x \leq Z$. In particular, on the event $\{\epsilon_1 \leq Z\}$ we have $T_{\epsilon_1} \geq x_2$ and hence $\sup_{t \in [0,x_2]}|H^{(\epsilon_1)}_t| \leq \epsilon/4$. Finally, on the event $A_{\epsilon}:=\{\epsilon_1 < \epsilon/(4\,x_2\,|b|)\}$, to be read as $A_{\epsilon}=\Omega$ in case $b=0$, we have $|b\,\epsilon_1\,t| \leq \epsilon/4$ for all $t \leq x_2$; notice that $A_{\epsilon}$ has positive probability. Combining these estimates, we now show that the event
\begin{align*}
A_H&:= \Big\{ \{H_{\epsilon_1\,t}\}_{t \geq 0} \mbox{ not non-decreasing on }[x_1,x_2]\\
& \qquad \mbox{ and }\inf_{x \in [x_1,x_2]}\{H_{\epsilon_1\,x}-H_{\epsilon_1\,x_1}\} \leq -\frac{\epsilon}{4}\Big\}
\end{align*}
has positive probability. Indeed, by construction $(A_{f} \cap A_{\epsilon} \cap \{\epsilon_1 \leq Z\}) \subset A_H$, since on this event we have
\begin{align*}
\inf_{x \in [x_1,x_2]}\{H_{\epsilon_1\,x}-H_{\epsilon_1\,x_1}\} &= \hspace{-.4cm}\inf_{x \in [x_1,x_2]}\{f^{(1)}_x-f^{(1)}_{x_1}+\underbrace{b\,\epsilon_1\,(x-x_1)}_{\leq \epsilon/4} + \hspace{-.4cm}\underbrace{H^{(\epsilon_1)}_x-H^{(\epsilon_1)}_{x_1}}_{\leq |H_x^{(\epsilon_1)}|+|H_{x_1}^{(\epsilon_1)}| \leq \epsilon/2}\hspace{-.4cm}\} \\
& \leq \underbrace{\inf_{x \in [x_1,x_2]}\{f^{(1)}_x-f^{(1)}_{x_1}\}}_{\leq -\epsilon}+\frac{3\,\epsilon}{4} \leq -\frac{\epsilon}{4}.
\end{align*}
Hence,
\begin{align*}
\mathbb{P}(A_H) & \geq \mathbb{E}[1_{A_{f}}\,1_{A_{\epsilon}}\,1_{\{\epsilon_1 \leq Z\}}]\\
& = \mathbb{P}(A_f)\,\mathbb{E}[\mathbb{E}[1_{A_{\epsilon}}\,1_{\{\epsilon_1 \leq Z\}}\,|\,\mathcal{H}]] = \mathbb{P}(A_f)\,\mathbb{E}\Big[ 1-e^{-\min\{Z,\epsilon/(4\,|b|\,x_2)\}}\Big]>0.
\end{align*}
That the last expression is strictly positive follows from the fact that the random variable $\min\{Z,\epsilon/(4\,|b|\,x_2)\}$ is almost surely positive. Since $A_H$ has positive probability, $H$ cannot be non-decreasing almost surely, contradicting our assumption; hence $f$ must be non-decreasing.
\par
Next, we prove that $b \geq 0$. Since $H_1$ is non-negative and infinitely divisible, it has a non-negative drift $b_H \geq 0$ in its L\'evy-Khinchin representation, see \cite{bertoin99,schilling10} for background. Also, the stochastic process
\begin{gather*}
\tilde{H}_t:=\sum_{k \geq 1}f^{(k)}_{\frac{t}{\epsilon_1+\ldots+\epsilon_k}}, \quad t \geq 0,
\end{gather*}
is strong IDT by \cite[Theorem 4.2]{molchanov18}, hence $\tilde{H}_1$ is infinitely divisible. Since each $f^{(k)}$ is non-negative almost surely by the first part of the proof, we have $\tilde{H}_1 \geq 0$. Moreover, the Bernstein function $\Psi_{\tilde{H}}$ associated with $\tilde{H}_1$ via $\Psi_{\tilde{H}}(x)=-\log(\mathbb{E}[\exp(-x\,\tilde{H}_1)])$ can be computed using the Laplace functional formula for Poisson random measure, cf.\ \cite[Proposition 3.6]{resnick87}, which yields
\begin{gather*}
\Psi_{\tilde{H}}(x) = \int_{\mathfrak{D}} \int_0^{\infty}1-e^{-x\,f(u)}\,\frac{\mathrm{d}u}{u^2}\,\gamma(\mathrm{d}f).
\end{gather*}
By non-negativity of $f$, we get for all $x \geq 1$ and all $u>0$ the estimate $(1-\exp(-x\,f(u)))/x \leq \min\{1/x,f(u)\} \leq \min\{1,f(u)\}$. By the dominated convergence theorem, using (\ref{intcondition}), we may thus conclude that $\Psi_{\tilde{H}}(x)/x$ converges to zero as $x \rightarrow \infty$. Consequently, the random variable $\tilde{H}_1$ has no drift in its L\'evy-Khinchin representation. This implies $b=b_H \geq 0$.
\end{proof}
We are now in the position to derive the main contribution of the present article. We denote by $M_+^{1}( \mathfrak{E})$ the set of Radon probability measures on some Hausdorff space $\mathfrak{E}$. In particular, we consider the sets $\mathfrak{F}$ and $\mathfrak{F}_1$ equipped with the topology induced by convergence in distribution of the associated non-negative random variables (i.e.\ weak convergence of distribution functions). This topology is well known to be metrizable, hence Hausdorff, see \cite{sibley71}.
\begin{theorem}[The structure of $\mathfrak{L}$] \label{thm_main}
Let $\ell \in \mathfrak{L}$, not equal to $\ell_{\Pi}$. There exists a pair $(b,\mu) \in [0,1) \times M_+^{1}(\mathfrak{F}_1)$ such that $\ell = b\,\ell_{\Pi}+ (1-b)\,\int_{\mathfrak{F}_1}\ell_F\,\mu(\mathrm{d}F)$.
\end{theorem}
\begin{proof}
Given $\ell \in \mathfrak{L}$, there exists an exchangeable sequence $\vec{Y}=(Y_1,Y_2,\ldots)$ of random variables on a probability space $(\Omega,\mathcal{G},\mathbb{P})$ such that $\mathbb{P}(\vec{Y}>\vec{t})=\exp(-\ell(\vec{t}))$ for $\vec{t} \in [0,\infty)_{00}^{\mathbb{N}}$. By \cite[Theorem 5.3]{maischerer13}, the stochastic process $H_t:=-\log\big( \mathbb{P}(Y_1>t\,|\,\mathcal{T})\big)$, $t \geq 0$, with $\mathcal{T}$ the tail-$\sigma$-field of $\vec{Y}$, is strong IDT, non-decreasing, and not identically equal to $t$ (since $\ell \neq \ell_{\Pi}$). Lemma \ref{lemma_molchanov} yields the existence of $b \geq 0$ and a non-vanishing, right-continuous, non-decreasing stochastic process $f=\{f_t\}_{t \geq 0}$ with $f_0=0$ (we denote the space of all such functions by $\mathfrak{D}_+$ in the sequel) such that (\ref{molchanov_repr}) holds. Denoting the probability law of $f$ by $\gamma$, (\ref{intcondition}) now reads
\begin{gather}
\int_{\mathfrak{D}_+} \int_0^{\infty}\min\{1,f(u)\}\,\frac{\mathrm{d}u}{u^2}\,\gamma(\mathrm{d}f)<\infty.
\label{DCTcond}
\end{gather}
From the properties of $f$ (non-decreasingness, right-continuity and $f(0)=0$) we conclude that the function
\begin{gather*}
\tilde{F}_t:= e^{-\lim_{x \downarrow t} f_{1/x}} ,\quad t \geq 0,
\end{gather*}
is almost surely the distribution function of some non-negative random variable, which is not identically zero (since $f$ is non-vanishing). In the sequel, we denote the probability measure of $\tilde{F}$ by $\tilde{\mu}$. We get from (\ref{DCTcond}) with the estimate $1-x \leq \min\{1,-\log(x)\}$ for $x \in [0,1]$ and the substitution $t=1/u$ that
\begin{align}
\int_{\mathfrak{F}} \int_{0}^{\infty}1-F(t)\,\mathrm{d}t \,\tilde{\mu}(\mathrm{d}F) &\leq \int_{\mathfrak{F}} \int_{0}^{\infty}\min\{1,-\log(F(t))\}\,\mathrm{d}t \tilde{\mu}(\mathrm{d}F) \nonumber\\
& = \int_{\mathfrak{D}_+} \int_0^{\infty}\frac{\min\{1,f(u)\}}{u^2}\,\mathrm{d}u \,\gamma(\mathrm{d}f) <\infty,
\label{finite_mean}
\end{align}
i.e.\ $\tilde{\mu}$ is an element of $M_+^1({\mathfrak{F}})$. For arbitrary $F \in {\mathfrak{F}}$ we denote by $M_F:=\int_0^{\infty}1-F(t)\,\mathrm{d}t$ its mean. By (\ref{finite_mean}), the positive random variable $M_{\tilde{F}}$ has finite mean $c > 0$ (note that $M_{\tilde{F}}$ is positive almost surely and $c=0$ is ruled out since $f$ is non-vanishing, hence $\tilde{F}$ not almost surely identically equal to one), i.e.\ $\int_{{\mathfrak{F}}} M_{F} \,\tilde{\mu}(\mathrm{d}F)=c $. Consequently,
\begin{gather*}
\hat{\mu}(\mathrm{d}F) := \frac{M_F}{c}\,\tilde{\mu}(\mathrm{d}F)
\end{gather*}
defines an equivalent probability measure on $\mathfrak{F}$. Finally, we denote by $\mu$ the probability measure that describes the law of the process $\{\tilde{F}_{M_{\tilde{F}}\,t}\}_{t \geq 0}$ under the measure $\hat{\mu}$, and observe that
\begin{gather*}
M_{\tilde{F}_{M_{\tilde{F}}\,.}} = \int_0^{\infty}1-\tilde{F}_{M_{\tilde{F}}\,s}\,\mathrm{d}s = \frac{1}{M_{\tilde{F}}}\,\int_0^{\infty}1-\tilde{F}_{s}\,\mathrm{d}s =1
\end{gather*}
almost surely. Consequently, $\mu \in M_+^1(\mathfrak{F}_1)$. Putting together the pieces, we may re-write (\ref{molchanov_repr}) as
\begin{gather*}
\{H_t\}_{t \geq 0} \stackrel{d}{=} \Big\{b\,t+\sum_{k \geq 1}-\log\Big[\tilde{F}^{(k)}_{\frac{\epsilon_1+\ldots+\epsilon_k}{t}-}\Big]\Big\}_{t \geq 0},
\end{gather*}
and we observe\footnote{Here, $\delta_{e}$ denotes the Dirac measure at a point $e$ in some Hausdorff space $\mathfrak{E}$.} that $\sum_{k \geq 1}\delta_{(\epsilon_1+\ldots+\epsilon_k,\tilde{F}^{(k)})}$ is a Poisson random measure on $[0,\infty) \times \mathfrak{F}$ with mean measure $\mathrm{d}x \times \tilde{\mu}(\mathrm{d}F)$. Hence, the Laplace functional formula \cite[Proposition 3.6]{resnick87}, applied in the third equality below, gives
\begin{align*}
\mathbb{P}(\vec{Y}>\vec{t}) & = \mathbb{E}\Big[ e^{-\sum_{i \geq 1}H_{t_i}}\Big] = e^{-b\,\ell_{\Pi}(\vec{t})}\,\mathbb{E}\Big[ \exp\Big\{{-\sum_{k \geq 1} -\log\big[\prod_{i \geq 1} \tilde{F}^{(k)}_{\frac{\epsilon_1+\ldots+\epsilon_k}{t_i}-}\big]}\Big\}\Big]\\
& = e^{-b\,\ell_{\Pi}(\vec{t})}\,\exp\Big\{ -\int_{{\mathfrak{F}}}\int_0^{\infty}1-\prod_{i \geq 1}F\Big( \frac{x}{t_i}\Big)\,\mathrm{d}x\,\tilde{\mu}(\mathrm{d}F)\Big\}\\
& = e^{-b\,\ell_{\Pi}(\vec{t})}\,\exp\Big\{ -\int_{{\mathfrak{F}}}\int_0^{\infty}1-\prod_{i \geq 1}F\Big( \frac{M_F\,x}{t_i}\Big)\,\mathrm{d}x\,M_F\,\tilde{\mu}(\mathrm{d}F)\Big\}\\
& = e^{-b\,\ell_{\Pi}(\vec{t})}\,\exp\Big\{ -c\,\int_{{\mathfrak{F}}}\int_0^{\infty}1-\prod_{i \geq 1}F\Big( \frac{M_F\,x}{t_i}\Big)\,\mathrm{d}x\,{\hat{\mu}}(\mathrm{d}F)\Big\}\\
& = e^{-b\,\ell_{\Pi}(\vec{t})}\,\exp\Big\{ -c\,\int_{{\mathfrak{F}}_1}\int_0^{\infty}1-\prod_{i \geq 1}F\Big( \frac{x}{t_i}\Big)\,\mathrm{d}x\,{{\mu}}(\mathrm{d}F)\Big\}\\
& = \exp\Big\{ -b\,\ell_{\Pi}(\vec{t})-c\,\int_{{\mathfrak{F}}_1}\ell_F(\vec{t})\,\mu(\mathrm{d}F)\Big\}.
\end{align*}
The normalizing assumption $\mathbb{E}[Y_1]=1$ in the definition of $\mathfrak{L}$ means that the exponential rate of the exponential random variable $Y_1$ equals one, which implies that $\ell(1,0,0,\ldots)=1$. We thus observe from the last equation that
\begin{gather*}
1 = \ell(1,0,0,\ldots) = b+c.
\end{gather*}
From this we conclude that $c=1-b$, hence
\begin{gather*}
\ell = b\,\ell_{\Pi}+(1-b)\,\int_{{\mathfrak{F}}_1}\ell_{F}\,\mu(\mathrm{d}F),
\end{gather*}
as claimed.
\end{proof}
The following corollary is of particular relevance when thinking about potential further research concerning the parameter estimation of non-decreasing strong IDT processes or exchangeable max-stable sequences, resp.\ extreme-value copulas.
\begin{corollary}[Uniqueness]\label{cor_uni}
The pair $(b,\mu)$ in Theorem \ref{thm_main} is unique.
\end{corollary}
\begin{proof}
It is convenient to study uniqueness in terms of the probability law of the uniquely associated strong IDT process $H$, determined by
\begin{gather*}
\mathbb{E}\Big[ e^{-\sum_{k \geq 1}H_{t_k}}\Big] = e^{-b\,\ell_{\Pi}(\vec{t})-(1-b)\,\int_{\mathfrak{F}_1}\ell_F(\vec{t})\,\mu(\mathrm{d}F)},\quad \vec{t} \in [0,\infty)^{\mathbb{N}}_{00}.
\end{gather*}
The constant $b$ is unique, because it is the unique drift constant in the L\'evy-Khinchin representation of the infinitely divisible random variable $H_1$. To explain this, denote by $\bm{1}_n=(1,\ldots,1)$ an $n$-dimensional row vector with all entries equal to one. For each fixed $F \in \mathfrak{F}_1$ the stable tail dependence function $\ell_F$ satisfies $\ell_F(\bm{1}_n,0,0,\ldots)=\Psi_F(n)$ for a Bernstein function $\Psi_F$ without drift, see \cite[Lemma 3]{mai17}. This property carries over to probability mixtures, so that $\mathbb{E}[\exp(-n\,H_1)]=\exp(-b\,n-(1-b)\,\Psi(n))$ for some Bernstein function $\Psi$ without drift. Hence, the Bernstein function associated with $H_1$ has (unique) drift $b$ and its L\'evy measure equals $1-b$ times the L\'evy measure of $\Psi$.
\par
Regarding $\mu$, it follows from Lemma \ref{lemma_molchanov} and the proof of Theorem \ref{thm_main} that the probability law of $f$ in any Le Page series representation for $H$ is necessarily supported by the set
\begin{align*}
\mathfrak{G}&:=\Big\{ g:[0,\infty) \rightarrow [0,\infty]\,:\,g(0)=0,\,g\mbox{ right-continuous, non-decreasing,}\\
& \qquad \qquad \lim_{t \rightarrow \infty}g(t)=\infty,\,\int_0^{\infty}1-e^{-g_{\frac{1}{s}}}\,\mathrm{d}s<\infty\Big\}.
\end{align*}
For each $g \in \mathfrak{G}$ there is a unique $c>0$, namely
\begin{gather*}
c:=\int_0^{\infty}1-e^{-g_{\frac{1}{s}}}\,\mathrm{d}s,
\end{gather*}
such that\footnote{We use the notation of \cite{molchanov18}, denoting $(c \circ g^{(1)})(t):=g^{(1)}(c\,t)$, for $t \geq 0$ and $c>0$.} $g = c \circ g^{(1)}$, where $g^{(1)}$ lies in the smaller set
\begin{gather*}
\mathfrak{G}_1 := \Big\{g \in \mathfrak{G}\,:\,\int_0^{\infty}1-e^{-g_{\frac{1}{s}}}\,\mathrm{d}s=1\Big\}.
\end{gather*}
By \cite[Remark 4.1]{molchanov18} the law of $f^{(1)}$ is unique. This implies that the measure $\mu$ is unique, since by the proof of Theorem \ref{thm_main} it equals the law of
\begin{gather*}
F_t:=e^{-\lim_{x \downarrow t} f^{(1)}_{1/x}},\quad t \geq 0,
\end{gather*}
and this transformation maps $\mathfrak{G}_1$ to $\mathfrak{F}_1$ in a bijective manner.
\end{proof}
\begin{remark}[Consequences and explanations of the results]\label{rmk}
We collect a few explanatory remarks ((b),(c),(f)) and immediate consequences ((a),(d),(e)) of the main results:
\begin{itemize}
\item[(a)] \textbf{$\mathfrak{L}$ is a simplex:}\\
We first provide a proof that $\mathfrak{L}$ is compact (in the topology of pointwise convergence).
\begin{proof}
Let $\{\ell_n\}_{n \in \mathbb{N}} \subset \mathfrak{L}$. Then we find non-decreasing strong IDT processes $\{H^{(n)}\}_{n \in \mathbb{N}}$ associated with these $\ell_n$. The processes $F^{(n)}:=1-\exp(-H^{(n)})$ define random elements of the space of distribution functions of non-negative random variables. The set of distribution functions of random variables taking values in $[0,\infty]$ (equipped with the topology of pointwise convergence at all continuity points of the limit) is compact by Helly's Selection Theorem and Hausdorff (since it is metrizable by the L\'evy metric, see \cite{sibley71}). Thus, the Radon probability measures on this set (equipped with the weak topology) form a Bauer simplex by \cite[Corollary II.4.2, p.\ 104]{alfsen71}, in particular a compact set. Since the probability laws of the given sequence $\{F^{(n)}\}$ lie in this set, we find a convergent subsequence $\{F^{(n_i)}\}$ and a limiting law, hence a limiting stochastic process $F$, and we define $H:=-\log(1-F)$. Then $F \in \mathfrak{F}$ almost surely, which follows from
\begin{align*}
\mathbb{E}\Big[ \int_0^{\infty}1-F_s\,\mathrm{d}s\Big] &= \mathbb{E}\Big[ \int_0^{\infty}e^{-H_s}\,\mathrm{d}s\Big]= \int_0^{\infty}\mathbb{E}\Big[e^{-H_s}\Big]\,\mathrm{d}s\\
& = \int_0^{\infty}\lim_{i \rightarrow \infty}\mathbb{E}\Big[e^{-H^{(n_i)}_s}\Big]\,\mathrm{d}s = \int_0^{\infty}e^{-s}\,\mathrm{d}s = 1<\infty,
\end{align*}
where the third equality follows from the bounded convergence theorem and the fourth from the fact that $\mathbb{E}[\exp(-H^{(n)}_s)]=\exp(-s\,\ell_n(1,0,0,\ldots))=\exp(-s)$ for each $n \in \mathbb{N}$. We define
\begin{gather*}
\ell(\vec{t}):=-\log\Big\{ \mathbb{E}\Big[ e^{-\sum_{k \geq 1}H_{t_k}}\Big]\Big\},\quad \vec{t} \in [0,\infty)^{\mathbb{N}}_{00},
\end{gather*}
and claim that $\ell$ equals the limit of $\ell_{n_i}$. To see this, for fixed $\vec{t}$ we compute
\begin{align*}
\ell(\vec{t}) & = -\log\Big\{ \mathbb{E}\Big[ e^{-\sum_{k \geq 1}H_{t_k}}\Big]\Big\} = -\log\Big\{ \mathbb{E}\Big[ e^{-\sum_{k \geq 1}\lim_{i \rightarrow \infty}H^{(n_i)}_{t_k}}\Big]\Big\}\\
& \stackrel{(\ast)}{=} -\log\Big\{ \mathbb{E}\Big[ e^{-\lim_{i \rightarrow \infty}\sum_{k \geq 1}H^{(n_i)}_{t_k}}\Big]\Big\} \stackrel{(\ast\ast)}{=} -\log\Big\{\lim_{i \rightarrow \infty} \mathbb{E}\Big[ e^{-\sum_{k \geq 1}H^{(n_i)}_{t_k}}\Big]\Big\} \\
& = -\log\Big\{\lim_{i \rightarrow \infty}e^{-\ell_{n_i}(\vec{t})}\Big\} = \lim_{i \rightarrow \infty}\ell_{n_i}(\vec{t}).
\end{align*}
We have used the fact that almost all entries of $\vec{t}$ are zero in $(\ast)$ and bounded convergence in $(\ast\ast)$. Finally, to see that $H$ is strong IDT, it suffices to verify that the homogeneity of order one of the $\ell_{n_i}$ carries over to the limit $\ell$ (obviously), hence $\ell \in \mathfrak{L}$.
\end{proof}
$\mathfrak{L}$ is obviously convex and by Theorem \ref{thm_main} the extremal boundary of $\mathfrak{L}$ is $\partial_e \mathfrak{L}=\{\ell_{\Pi}\}\cup \{\ell_F\,:\,F \in \mathfrak{F}_1\}$. Thus, $\mathfrak{L}$ is a simplex, since the boundary integral representation is unique by Corollary \ref{cor_uni}. Whether $\mathfrak{L}$ is a Bauer simplex, i.e.\ whether $\partial_e \mathfrak{L}$ is closed, is not obvious, since $\mathfrak{F}_1$ is not compact. It is left as an open question at this point.
\item[(b)] \textbf{The isolated nature of $\ell_{\Pi}$:}\\
One noticeable aspect about the topology on $\mathfrak{L}$ is that the seemingly isolated point $\ell_{\Pi}$ is in fact not isolated. A sequence $\{\ell_{F_n}\}_{n \in \mathbb{N}} \subset \partial_e \mathfrak{L}$ converges (pointwise) to $\ell_{\Pi}$ if and only if $\{F_n\}_{n \in \mathbb{N}} \subset \mathfrak{F}_1$ converges to $F_0$, the distribution function of the constant zero, at all continuity points of $F_0$ (which means at all $x>0$, but not necessarily at $x=0$). To see this, we point out for $F_n \in \mathfrak{F}_1$ that
\begin{gather*}
\ell^{(2)}_{F_n}(1,1) = 2-\int_0^{\infty}\big(1-F_n(s)\big)^2\,\mathrm{d}s=2-||F_0-F_n||_{L^2}^{2}.
\end{gather*}
Thus, $\ell^{(2)}_{F_n}(1,1)$, which is always $\leq 2$, converges to $2$ as $n \rightarrow \infty$ if and only if $F_n(x)$ converges to $F_0(x)=1$ for all $x>0$. But the only element $\ell \in \mathfrak{L}$ satisfying $\ell^{(2)}(1,1) = 2$ is $\ell_{\Pi}$, since $\ell^{(2)}_F(1,1)<2$ for each $F \in \mathfrak{F}_1$.
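The convergence $\ell^{(2)}_{F_n}(1,1) \rightarrow 2$ can be made concrete with a hypothetical sequence in $\mathfrak{F}_1$: let $F_n$ be the distribution function of a random variable taking the value $n$ with probability $1/n$ and zero otherwise (mean one). The following Python sketch, an illustration rather than part of the formal argument, evaluates $\ell^{(2)}_{F_n}(1,1)$ in closed form:

```python
def ell2_Fn(n):
    """ell^(2)_{F_n}(1,1) = 2 - integral of (1 - F_n)^2, where F_n is the
    distribution function of a random variable equal to n with probability
    1/n and to 0 otherwise (mean one, so F_n lies in F_1).
    Here 1 - F_n(s) = 1/n on [0, n), so the integral equals n*(1/n)^2 = 1/n."""
    return 2.0 - n * (1.0 / n) ** 2
```

For $n=1$ one recovers the comonotone value $1$, while $\ell^{(2)}_{F_n}(1,1)=2-1/n$ approaches the independence value $2$ although $F_n(x) \rightarrow 1$ for every $x>0$.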
\item[(c)] \textbf{Probabilistic interpretation of the results:}\\
As already remarked in the introduction, rewriting Theorem \ref{thm_main} and Corollary \ref{cor_uni} in probabilistic terms, instead of in the analytical terms of $\mathfrak{L}$, means that $\ell \in \mathfrak{L} \setminus\{\ell_{\Pi}\}$ has the spectral representation
\begin{gather}
\ell(\vec{t}) = b\,\ell_{\Pi}(\vec{t})+(1-b)\,\mathbb{E}\big[ \max_{k \geq 1}\{t_k\,X_k\}\big],\quad \vec{t} \in [0,\infty)^{\mathbb{N}}_{00},
\label{specshort}
\end{gather}
where $b \in [0,1)$ and $\vec{X}$ is an exchangeable sequence of non-negative random variables with $\mathbb{E}[X_1]=1$ and the property that $\lim_{n \rightarrow \infty} \frac{1}{n}\,\sum_{k=1}^{n}X_k$ is almost surely identically equal to one. This representation is canonical in the sense that the constant $b$ as well as the probability law of $\vec{X}$ are unique.
\par
Recalling the classical extreme-value theory based on Poisson random measure, see \cite{resnick87} for a textbook treatment, a stochastic representation for $\vec{Y}$ based on the spectral representation (\ref{specshort}) is given by
\begin{gather*}
\vec{Y} \stackrel{d}{=} \Big( \min\Big\{\frac{ Y_1^{(1)}}{b},\frac{ Y_1^{(0)}}{1-b}\Big\},\,\min\Big\{\frac{ Y_2^{(1)}}{b},\frac{ Y_2^{(0)}}{1-b}\Big\},\ldots\Big),
\end{gather*}
where $\vec{Y}^{(1)}=(Y^{(1)}_1,Y^{(1)}_2,\ldots)$ is a sequence of iid unit exponentials (corresponding to the case $b=1$), and, independently, $\vec{Y}^{(0)}=(Y^{(0)}_1,Y^{(0)}_2,\ldots)$ corresponds to the case $b=0$ and satisfies
\begin{gather*}
\vec{Y}^{(0)} \stackrel{d}{=} \Big(\min_{n \geq 1}\Big\{ \frac{\epsilon_1+\ldots+\epsilon_n}{X_1^{(n)}}\Big\},\,\min_{n \geq 1}\Big\{ \frac{\epsilon_1+\ldots+\epsilon_n}{X_2^{(n)}}\Big\},\ldots\Big),
\end{gather*}
where $\vec{X}^{(n)}$ are independent copies of $\vec{X}$ and, independently, $\epsilon_1,\epsilon_2,\ldots$ is an iid sequence of unit exponentials. This classical representation does not make explicit use of exchangeability, but is only an instance of the general (non-exchangeable) theory. An alternative stochastic representation for $\vec{Y}$, due to \cite[Theorem 5.3]{maischerer13} and making use of exchangeability, is given by (\ref{ciiddef}) with the stochastic process $H$ defined via its Le Page representation
\begin{gather*}
H_t = b\,t+\sum_{k \geq 1}-\log\Big\{ F^{(k)}_{\frac{\epsilon_1+\ldots+\epsilon_k}{(1-b)\,t}-} \Big\},
\end{gather*}
where $F^{(k)}$ are independent copies of the random distribution function $F$ from which $\vec{X}$ is drawn. By the Glivenko-Cantelli theorem, $F$ may be written in terms of $\vec{X}$ as $F_t = \lim_{n \rightarrow \infty}\frac{1}{n}\,\sum_{i=1}^{n}1_{\{X_i \leq t\}}$. In other words, the sequence $\vec{Y}$ is an iid sequence drawn from the random distribution function $t \mapsto 1-\exp(-H_t)$. Notice in particular that $M_F=\int_0^{\infty}1-F_t\,\mathrm{d}t=\lim_{n \rightarrow \infty}\frac{1}{n}\,\sum_{i=1}^{n}X_i$ is identically one by the normalization to $\mathfrak{F}_1$ (instead of $\mathfrak{F}$). Depending on the law of $\vec{X}$ (and thus the properties of $H$), this representation is particularly convenient for simulating the random vector $(Y_1,\ldots,Y_d)$ for arbitrarily large $d$. An alternative strategy to accomplish this simulation, which rather builds on the general (non-exchangeable) theory, is presented in the following bullet point (d).
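The classical min-representation of $\vec{Y}^{(0)}$ above can be illustrated in the simplest special case in which $F$ is deterministic, namely the unit exponential distribution function, so that $\vec{X}$ is an iid sequence of unit exponentials. The following Python sketch truncates the infinite minimum after a fixed number of Poisson arrivals (a heuristic choice); it is an illustration under these assumptions, not a general-purpose implementation:

```python
import random

def simulate_Y0(d, n_terms=100, rng=random):
    """Approximate draw of (Y_1^(0), ..., Y_d^(0)) via the min-representation
    Y_i^(0) = min_n (eps_1 + ... + eps_n) / X_i^(n),
    for the special case where each X^(n) is an iid unit-exponential sequence
    (i.e. F is the deterministic unit exponential distribution function).
    The infinite minimum is truncated after n_terms Poisson arrivals, which is
    harmless since late arrival times S_n grow linearly."""
    y = [float("inf")] * d
    s = 0.0  # partial sum eps_1 + ... + eps_n of unit-exponential arrivals
    for _ in range(n_terms):
        s += rng.expovariate(1.0)
        for i in range(d):
            x = rng.expovariate(1.0)  # X_i^(n)
            y[i] = min(y[i], s / x)
    return y
```

In this special case $\ell_F(t,0,0,\ldots)=t\,M_F=t$, so each marginal $Y_i^{(0)}$ is unit exponential, which serves as a sanity check for the sampler.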
\item[(d)] \textbf{Pickands representation of finite-dimensional margins:}\\
The Pickands representation of $\ell^{(d)}_F$ has been derived in \cite[Lemma 4]{mai17}. Furthermore, the Pickands representation of $\ell^{(d)}_{\Pi}$ is well known to correspond to a uniform distribution on $\{1,\ldots,d\}$. Combining these facts with Theorem \ref{thm_main} immediately implies for the random vector $\vec{X}^{(d)}$ in (\ref{Pickandsrepr}) associated with $\ell^{(d)}$ for $\ell \in \mathfrak{L}$, represented by $(b,\mu)$, that
\begin{gather}
\vec{X}^{(d)} \stackrel{d}{=} \Big(\frac{W^{(d)}_1}{\sum_{i=1}^{d}W^{(d)}_i},\ldots, \frac{W^{(d)}_d}{\sum_{i=1}^{d}W^{(d)}_i} \Big),
\label{repr_Pick_ell}
\end{gather}
where the random vector $\vec{W}^{(d)}$ can be simulated as follows:
\begin{itemize}
\item Draw a random variable $D$ which is uniformly distributed on $\{1,\ldots,d\}$.
\item Draw a Bernoulli random variable with success probability $b$. If success, define $W^{(d)}_k:=1_{\{k=D\}}$ for $k=1,\ldots,d$ and return. Otherwise, proceed with the following steps.
\item Simulate the random distribution function $F=\{F_t\}_{t \geq 0}$ from the probability measure $\mu \in M_+^1(\mathfrak{F}_1)$ and draw a random variable $Z$ with distribution function $t \mapsto \int_0^{t}s\,\mathrm{d}F_s$, $t \geq 0$.
\item Draw iid random variables $Z_1,\ldots,Z_d$ with distribution function $F$.
\item Define $W^{(d)}_k:=1_{\{k=D\}}\,Z+1_{\{k \neq D\}}\,Z_k$ for $k=1,\ldots,d$ and return.
\end{itemize}
This algorithm to simulate the random vector $\vec{X}^{(d)}$ can be used to derive an exact simulation algorithm for the random vector $(Y_1,\ldots,Y_d)$, see \cite[Algorithm 1]{dombry16}.
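The simulation steps above can be sketched in Python for an illustrative special case: here $\mu$ is hypothetically taken as the point mass at the unit exponential distribution function $F(t)=1-e^{-t} \in \mathfrak{F}_1$, for which the distribution function $t \mapsto \int_0^{t}s\,\mathrm{d}F_s$ has density $s\,e^{-s}$, i.e.\ is Gamma(2,1):

```python
import random

def simulate_W(d, b, rng=random):
    """One draw of W^(d), following the simulation steps above, for the
    illustrative special case where mu is the point mass at the unit
    exponential distribution function F(t) = 1 - exp(-t).  For this F the
    size-biased law t -> int_0^t s dF_s is Gamma(2,1), i.e. a sum of two
    independent unit exponentials."""
    D = rng.randrange(d)                      # uniform on {0, ..., d-1}
    if rng.random() < b:                      # Bernoulli(b): success branch
        return [1.0 if k == D else 0.0 for k in range(d)]
    Z = rng.expovariate(1.0) + rng.expovariate(1.0)   # Gamma(2,1) draw
    return [Z if k == D else rng.expovariate(1.0) for k in range(d)]

def simulate_X(d, b, rng=random):
    """Draw X^(d) = W^(d) / sum(W^(d)), taking values in the simplex S_d."""
    w = simulate_W(d, b, rng)
    s = sum(w)
    return [wi / s for wi in w]
```

The normalization in `simulate_X` realizes (\ref{repr_Pick_ell}) for this particular choice of $(b,\mu)$.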
\par
We emphasize again at this point that the probability law of $\vec{X}^{(d)}$ is uniquely determined by the function $\ell^{(d)}$. However, there exist exchangeable random vectors $\vec{X}^{(d)}$ taking values in $S_d$ which lead via (\ref{Pickandsrepr}) to symmetric $d$-dimensional stable tail dependence functions, but which cannot be represented as in (\ref{repr_Pick_ell}). These correspond to $d$-dimensional exchangeable max-stable random vectors that do not satisfy the stronger notion of ``infinite (De Finetti) exchangeability'' (or ``conditionally iid''), and are thus outside the realm of the present article, since they do not arise as $d$-margins of some $\ell \in \mathfrak{L}$. An example for $d=3$ can be retrieved from \cite[Example 2.4]{maibrazil13}. We recall from this example that the $3$-variate stable tail dependence function
\begin{gather*}
\ell^{(3)}(t_1,t_2,t_3) = \frac{\lambda_1}{\lambda_1+2\,\lambda_2+\lambda_3}\,t_{[1]}+\frac{\lambda_1+\lambda_2}{\lambda_1+2\,\lambda_2+\lambda_3}\,t_{[2]}+t_{[3]},
\end{gather*}
with $t_{[1]} \leq t_{[2]} \leq t_{[3]}$ the ordered list of $t_1,t_2,t_3$ and with positive parameters $\lambda_1,\lambda_2,\lambda_3>0$, arises as the $3$-margin of some $\ell \in \mathfrak{L}$ if and only if $\lambda_2^2 \leq \lambda_1\,\lambda_3$. If this condition is violated, $\ell^{(3)}$ is still a symmetric stable tail dependence function, but the associated random vector $(Y_1,Y_2,Y_3)$ cannot have a stochastic representation that is ``conditionally iid''.
\item[(e)] \textbf{Arbitrary, non-decreasing strong IDT processes:}\\
The probability law of a non-decreasing, right-continuous, strong IDT process $H=\{H_t\}_{t \geq 0}$, which is not deterministic (i.e.\ not identically $H_t=b\,t$ for some $b \geq 0$), is uniquely described by a triplet $(b,c,\mu) \in [0,\infty)\times (0,\infty) \times M_+^{1}(\mathfrak{F}_1)$, and has the Le Page series representation
\begin{gather*}
H_t=b\,t+\sum_{k \geq 1}-\log\Big\{ F^{(k)}_{\frac{\epsilon_1+\ldots+\epsilon_k}{c\,t}-}\Big\},
\end{gather*}
where $\sum_{k \geq 1}\delta_{(\epsilon_1+\ldots+\epsilon_k,F^{(k)})}$ is a Poisson random measure on $[0,\infty) \times \mathfrak{F}_1$ with mean measure $\mathrm{d}t \times \mu$. Lemma \ref{lemma_molchanov} and the proof of Theorem \ref{thm_main} show that other Le Page series representations of the form (\ref{molchanov_repr}) can only differ from this canonical one by changing from the unique measure $\mu \in M_+^{1}(\mathfrak{F}_1)$ to some $\tilde{\mu} \in M_+^{1}(\mathfrak{F})$ (and potentially adjusting the constant $c$ accordingly). The unboundedness of $b$ as well as the additional constant $c>0$ in the triplet $(b,c,\mu)$, when compared to $(b,\mu)$ in Theorem \ref{thm_main}, are due to the fact that a stable tail dependence function $\ell$ is normalized to satisfy $\ell(1,0,0,\ldots)=b+c=1$ (corresponding to $\mathbb{E}[Y_1]=1$), while the triplet $(b,c,\mu)$ describes the law of an arbitrary non-decreasing, right-continuous strong IDT process $H$ via the relation
\begin{gather*}
\mathbb{E}\Big[ e^{-\sum_{k \geq 1}H_{t_k}}\Big] = e^{-b\,\ell_{\Pi}(\vec{t})-c\,\int_{\mathfrak{F}_1}\ell_F(\vec{t})\,\mu(\mathrm{d}F)}.
\end{gather*}
\item[(f)] \textbf{The measure change in Theorem \ref{thm_main}:}\\
If $\tilde{F}$ is a random variable on $(\Omega,\mathcal{G},\mathbb{P})$ taking values in $\mathfrak{F}$ with the additional property that $\mathbb{E}[M_{\tilde{F}}]=1$, with $M_{\tilde{F}}=\int_0^{\infty}1-\tilde{F}_s\,\mathrm{d}s$, then
\begin{gather*}
\ell(\vec{t}) := \mathbb{E}\Big[ \int_0^{\infty}1-\prod_{k \geq 1}\tilde{F}_{\frac{s}{t_k}}\,\mathrm{d}s\Big]
\end{gather*}
defines an element in $\mathfrak{L}$. What is its canonical boundary integral representation? To this end, we define $F_t:=\tilde{F}_{M_{\tilde{F}}\,t}$, $t \geq 0$, and observe that $F$ takes values in $\mathfrak{F}_1$. We further define an equivalent probability measure $\mathbb{Q}$ via the measure change $\mathrm{d}\mathbb{Q} = M_{\tilde{F}}\,\mathrm{d}\mathbb{P}$. Denoting expectation with respect to $\mathbb{Q}$ by $\mathbb{E}^{\mathbb{Q}}$, we observe that
\begin{align*}
\ell(\vec{t}) &= \mathbb{E}\Big[ \int_0^{\infty}1-\prod_{k \geq 1}\tilde{F}_{\frac{s}{t_k}}\,\mathrm{d}s\Big] = \mathbb{E}\Big[M_{\tilde{F}}\, \int_0^{\infty}1-\prod_{k \geq 1}\tilde{F}_{M_{\tilde{F}}\,\frac{s}{t_k}}\,\mathrm{d}s\Big]\\
&=\mathbb{E}\big[M_{\tilde{F}}\,\ell_F(\vec{t})\big]=\mathbb{E}^{\mathbb{Q}}\big[ \ell_{F}(\vec{t})\big].
\end{align*}
Example \ref{ex_logistic} below provides an example for such $\ell$.
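The measure change can also be illustrated numerically in a hypothetical special case: take $\tilde{F}_t = 1-e^{-t/M}$ with $M$ uniformly distributed on $\{1/2,3/2\}$, so that $M_{\tilde{F}}=M$ and $\mathbb{E}[M_{\tilde{F}}]=1$. For $\vec{t}=(1,1,0,\ldots)$ both sides of the identity evaluate to $\mathbb{E}[3M/2]=3/2$, which the following Python sketch (a crude Riemann sum with ad hoc truncation, illustration only) reproduces:

```python
import math

def ell_direct(t1, t2, ms=(0.5, 1.5), n_grid=200_000, s_max=60.0):
    """Riemann-sum evaluation of E[ int_0^infty 1 - Ftilde(s/t1)*Ftilde(s/t2) ds ]
    for the hypothetical choice Ftilde(t) = 1 - exp(-t/M), with M uniform on the
    finite set ms; then M_Ftilde = M and E[M_Ftilde] = 1 as required."""
    h = s_max / n_grid
    total = 0.0
    for m in ms:
        F = lambda t: 1.0 - math.exp(-t / m)
        total += sum(h * (1.0 - F(k * h / t1) * F(k * h / t2))
                     for k in range(n_grid)) / len(ms)
    return total
```

For this choice, $F_t=\tilde{F}_{M\,t}=1-e^{-t}$ is deterministic, so the right-hand side $\mathbb{E}^{\mathbb{Q}}[\ell_F(1,1,0,\ldots)]=2-1/2=3/2$ matches the direct evaluation.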
\end{itemize}
\end{remark}
We end this section with a few examples.
\begin{example}[The L\'evy subordinator case]\label{exlevy}
Recall that a \emph{L\'evy subordinator} $L=\{L_t\}_{t \geq 0}$, see \cite{bertoin99} for a textbook treatment, is a non-decreasing, right-continuous strong IDT process with independent and stationary increments, which implies that its law is fully determined by the law of $L_1$. By the well known L\'evy-Khinchin formula for infinitely divisible distributions, the probability law of $L_1$ is canonically described in terms of a pair $(b_L,\nu_L)$ of a drift constant $b_L \geq 0$ and a Radon measure $\nu_L$ on $(0,\infty]$ subject to the condition $\int_0^{1}x\,\nu_L(\mathrm{d}x)<\infty$, the so-called L\'evy measure. The (non-deterministic) L\'evy subordinator $L$ associated with the pair $(b_L,\nu_L)$ is obtained when specifying $b:=b_L$, $c:=\int_{(0,\infty]}1-\exp(-x)\,\nu_L(\mathrm{d}x)$, and $\mu$ as the law of the random distribution function
\begin{gather*}
F_t:=e^{-\Theta}+\big(1-e^{-\Theta} \big)\,1_{\{1-e^{-\Theta} \geq 1/t\}},\quad t \geq 0, \quad \Theta \sim \big(1-e^{-x}\big)\,\nu_L(\mathrm{d}x)/c.
\end{gather*}
Conditionally on the randomized parameter $\Theta$, this $F$ corresponds to a random variable taking the value $1/(1-\exp(-\Theta))$ with probability $1-\exp({-\Theta})$, and the value zero with complementary probability $\exp(-\Theta)$. The random parameter $\Theta$ itself is drawn from the probability measure $\big(1-e^{-x}\big)\,\nu_L(\mathrm{d}x)/c$. Notice that every probability measure on $(0,\infty]$ is possible for $\Theta$, but the law of $\Theta$ is invariant with respect to changes of $c$, that is, when changing from $\nu_L$ to $\beta\,\nu_L$ for some $\beta>0$.
\par
If $L$ is normalized to satisfy $b+c=1$, this example corresponds to a stable tail dependence function $\ell \in \mathfrak{L}$, given by
\begin{align*}
\ell(\vec{t}) &= \sum_{k=1}^{d(\vec{t})}t_{[k]}\,\big(\Psi(d(\vec{t})-k+1)-\Psi(d(\vec{t})-k)\big),\\
\Psi(x)&=b\,x+\int_{(0,\infty]}1-e^{-x\,t}\,\nu_L(\mathrm{d}t),
\end{align*}
where $d(\vec{t}):=\max\{n \in \mathbb{N}\,:\,t_n>0\}$ and $t_{[1]} \leq t_{[2]} \leq \ldots \leq t_{[d(\vec{t})]}$ denotes an ordered list of $t_1,\ldots,t_{d(\vec{t})}$. The associated extreme-value copulas $C_{\ell}$ form precisely the ``conditionally iid'' subfamily of the survival copulas of the Marshall-Olkin exponential distribution, see \cite[Chapter 3.3]{maischerer17} for a textbook treatment of this connection. Furthermore, $\ell \in \partial_e \mathfrak{L}$ if and only if either $L_t=t$ (corresponding to $b=1$ and $\ell=\ell_{\Pi}$) or if $L$ is a compound Poisson subordinator with constant jump sizes. In the latter case, $b=0$ and $\ell=\ell_F$ with $F$ as described above, but $\Theta$ a non-random, positive constant.
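For a concrete instance of this example, one may take the hypothetical choice $\nu_L = \lambda\,\delta_{\theta}$ (a compound Poisson subordinator with constant jump size $\theta$), so that $\Psi(x)=b\,x+\lambda\,(1-e^{-\theta\,x})$ and the normalization reads $b+\lambda\,(1-e^{-\theta})=1$. The following Python sketch evaluates $\ell$ via the order-statistics formula above:

```python
import math

def make_levy_ell(b, lam, theta):
    """Returns the stable tail dependence function ell of the normalized
    Levy-subordinator model for the hypothetical choice nu_L = lam*delta_theta
    (compound Poisson, constant jump size theta), i.e.
    Psi(x) = b*x + lam*(1 - exp(-theta*x)); normalization needs b + c = 1
    with c = lam*(1 - exp(-theta))."""
    assert abs(b + lam * (1.0 - math.exp(-theta)) - 1.0) < 1e-12
    def psi(x):
        return b * x + lam * (1.0 - math.exp(-theta * x))
    def ell(t):
        ts = sorted(ti for ti in t if ti > 0)  # t_[1] <= ... <= t_[d]
        d = len(ts)
        return sum(ts[k] * (psi(d - k) - psi(d - k - 1)) for k in range(d))
    return ell
```

Since $\Psi$ is increasing with $\Psi(1)=1$ and concave increments, the sketch can be checked against the normalization $\ell(1,0,0,\ldots)=1$, homogeneity of order one, and the bounds $\max_i t_i \leq \ell(\vec{t}) \leq \sum_i t_i$.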
\end{example}
\begin{example}[(Generalized) logistic model]\label{ex_logistic}
Let $\ell \in \mathfrak{L}$ be arbitrary and $\alpha \in (0,1)$. Furthermore, we denote by $H=\{H_t\}_{t \geq 0}$ a non-decreasing strong IDT process associated with $\ell$, and by $H^{(k)}$ independent copies thereof, $k \in \mathbb{N}$. Let $M$ be a positive random variable with Laplace transform $\mathbb{E}[\exp(-x\,M)]=\exp(-x^{\alpha})$, i.e.\ an $\alpha$-stable random variable, independent of $H$. It is not difficult to see that $\{H_{M\,t^{1/\alpha}}\}_{t \geq 0}$ is another non-decreasing strong IDT process, and its associated stable tail dependence function is given by
\begin{gather*}
\ell_{\alpha}(\vec{t})=\ell\big(t_1^{1/\alpha},\,t_2^{1/\alpha},\ldots\big)^{\alpha},\quad \vec{t}=(t_1,\,t_2,\ldots) \in [0,\infty)^{\mathbb{N}}_{00},
\end{gather*}
satisfying $\ell_{\alpha} \in \mathfrak{L}$, since $\ell_{\alpha}(1,0,0,\ldots)=1$. With the constant $c_{\alpha}:=\Gamma(1-\alpha)^{-1/{\alpha}}$, a Le Page series representation for this stochastic process is given by
\begin{gather*}
\{H_{M\,t^{1/\alpha}}\}_{t \geq 0} \stackrel{d}{=}\Big\{\sum_{k \geq 1}-\log\big( \tilde{F}^{(k)}_{\frac{\epsilon_1+\ldots+\epsilon_k}{t}-}\big)\Big\}_{t \geq 0},
\end{gather*}
where $\tilde{F}^{(k)}$ are independent copies of $\tilde{F}_t:= \exp(-H_{c_{\alpha}\,t^{-1/\alpha}})$. We notice that each realization of $\tilde{F}$ is an element of $\mathfrak{F}$ (though not necessarily of $\mathfrak{F}_1$), since
\begin{gather*}
\mathbb{E}\Big[ \int_0^{\infty}1-\tilde{F}_t\,\mathrm{d}t\Big] = \int_0^{\infty}1-e^{-c_{\alpha}\,t^{-1/\alpha}}\,\mathrm{d}t=1<\infty.
\end{gather*}
The canonical Le Page series representation is obtained when changing from $\tilde{F}$ to $F$, where $F_t := \tilde{F}_{M_{\tilde{F}}\,t}$, and additionally changing the measure as in Remark \ref{rmk}(f). As a final remark, if $\ell=\ell_{\Pi}$, meaning $H_t=t$, then $\ell_{\alpha}=\ell_{F_{\alpha}} \in \partial_e \mathfrak{L}$ corresponds precisely to the well known logistic model based on the Fr\'echet distribution function $F_{\alpha}(x)=\exp(-c_{\alpha}\,x^{-1/\alpha}) \in \mathfrak{F}_1$.
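The transformation $\ell \mapsto \ell_{\alpha}$ is straightforward to implement. A small sketch (illustrative, not from the text), starting from $\ell=\ell_{\Pi}$, which yields the classical logistic model $\ell_{\alpha}(\vec{t})=\big(\sum_k t_k^{1/\alpha}\big)^{\alpha}$:

```python
def ell_alpha(ell, alpha):
    """Map ell to ell_alpha(t) = ell(t_1^{1/alpha}, t_2^{1/alpha}, ...)^alpha."""
    return lambda t: ell(tuple(x ** (1.0 / alpha) for x in t)) ** alpha

ell_pi = lambda t: float(sum(t))       # independence: ell_Pi(t) = t_1 + t_2 + ...
logistic = ell_alpha(ell_pi, 0.5)      # logistic model: (t_1^2 + t_2^2 + ...)^(1/2)
```

Note that $\ell_{\alpha}$ interpolates between the comonotone bound $\max_k t_k$ (as $\alpha \downarrow 0$) and independence $\sum_k t_k$ (at $\alpha=1$).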
\end{example}
\begin{example}[A convenient umbrella for many well known models]
As already highlighted in \cite{mai17}, some well known models correspond to extremal points $\ell_F \in \partial_e \mathfrak{L}$, for instance the logistic model (see previous example) and the negative logistic model. However, Example \ref{exlevy} shows that the popular Marshall-Olkin model, resp.\ the infinite exchangeable subfamily thereof, is not an extremal element in general. With the motivation to establish an analytically tractable umbrella for many well known parametric models, in particular including both $\partial_e \mathfrak{L}$ and Example \ref{exlevy}, a rich semi-parametric specification for $\ell \in \mathfrak{L}$ can be constructed as follows. For $F \in \mathfrak{F}_1$ the function $\Psi_F(z):=\int_0^{\infty}1-F^z(t)\,\mathrm{d}t$ defines a Bernstein function with $\Psi_F(1)=1$, see \cite[Lemma 3]{mai17}. This implies that for arbitrary $z \in (0,\infty)$ the function $F_z(x):= F(x\,\Psi_F(z))^z$ lies again in $\mathfrak{F}_1$. With a probability law $\rho$ on $(0,\infty)$ we see $\ell_{\rho,F} := \int_{(0,\infty)}\ell_{F_z}\,\rho(\mathrm{d}z) \in \mathfrak{L}$. Many parametric models from the literature are covered by this construction. In particular, the elements $\ell_F \in \partial_e \mathfrak{L}$ correspond to $\rho = \delta_{1}$ by construction, and Example \ref{exlevy} corresponds to the special case when $F(x)=\exp(-1)+(1-\exp(-1))\,1_{\{x \geq 1/(1-\exp(-1))\}}$ is held fixed, but $\rho$ is varied, corresponding to the law of $\Theta$.
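For the two-point distribution function $F$ above, $\Psi_F$ admits the closed form $\Psi_F(z)=(1-e^{-z})/(1-e^{-1})$; the following numerical sketch (illustrative, not from the text) confirms this together with the normalization $\Psi_F(1)=1$:

```python
import math

p = 1.0 - math.exp(-1.0)     # jump height 1 - e^{-1}
x0 = 1.0 / p                 # location of the atom of F

def F(x):
    return 1.0 if x >= x0 else math.exp(-1.0)

def psi_F(z, n=20000):
    """Psi_F(z) = int_0^infty (1 - F(t)^z) dt via the midpoint rule on [0, x0];
    the integrand vanishes for t >= x0 because F = 1 there."""
    h = x0 / n
    return h * sum(1.0 - F((i + 0.5) * h) ** z for i in range(n))
```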
\end{example}
\begin{example}[Constructing $\ell$ via inclusion-exclusion]
Let $\ell_X \in \mathfrak{L}$ be arbitrary and assume that $\vec{X}$ is min-stable multivariate exponential with stable tail dependence function $\ell_X$. Now consider $\vec{Y}$ with a spectral representation given in terms of $\vec{X}$, i.e.\ $-\log \{\mathbb{P}(\vec{Y}>\vec{t})\}=\mathbb{E}[\max_{k \geq 1}\{t_k\,X_k\}]$. Using the principle of inclusion and exclusion it is not difficult to compute the stable tail dependence function $\ell_Y$ of $\vec{Y}$, to wit
\begin{gather*}
\ell_Y(\vec{t}) = \sum_{k=1}^{d(\vec{t})}(-1)^{k+1}\,\sum_{1 \leq i_1<\ldots<i_k \leq d(\vec{t})}\ell_X\Big(\frac{1}{t_{i_1}},\ldots,\frac{1}{t_{i_k}},0,0,\ldots\Big)^{-1},
\end{gather*}
with $d(\vec{t}):=\max\{n \in \mathbb{N}\,:\,t_n>0\}$ as in Example \ref{exlevy}. In particular, if $\ell_X=\ell_{F_{\alpha}}$ with $F_{\alpha}$ from Example \ref{ex_logistic} (logistic model), then $\ell_Y$ corresponds to a negative logistic model. The mapping $\ell_X \mapsto \ell_Y$ on $\mathfrak{L}$ seems to be ``association-increasing''. For instance, we observe that $\ell_Y^{(2)}(1,1)=2-\ell_X^{(2)}(1,1)^{-1} \leq \ell_X^{(2)}(1,1)$. Furthermore, maximal dependence $\ell_X(\vec{t})=\ell_Y(\vec{t})=\max_{k \geq 1}\{t_k\}$ is a fixed point. It might be interesting to study the relationship between the spectral representations of $\ell_X$ and $\ell_Y$.
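The inclusion-exclusion formula and the two observations that follow it can be checked mechanically; a small sketch (the two input models are illustrative, not from the text):

```python
from itertools import combinations

def ell_Y(ell_X, t):
    """Inclusion-exclusion transform: for t with positive entries t_1, ..., t_d,
    ell_Y(t) = sum_k (-1)^(k+1) sum_{|I|=k} ell_X((1/t_i)_{i in I})^(-1)."""
    d = len(t)
    return sum((-1) ** (k + 1) / ell_X(tuple(1.0 / t[i] for i in idx))
               for k in range(1, d + 1)
               for idx in combinations(range(d), k))

ell_max = lambda t: max(t)                          # complete dependence
ell_log = lambda t: sum(x * x for x in t) ** 0.5    # logistic model, alpha = 1/2
```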
\end{example}
\section{Conclusion}\label{sec_concl}
It has been shown that the set $\mathfrak{L}$ of infinite-dimensional, symmetric stable tail dependence functions is a simplex. The boundary of the simplex and a respective boundary integral representation for $\ell \in \mathfrak{L}$ have been derived in terms of a pair $(b,\mu)$ of a constant $b\in [0,1]$ and a probability measure $\mu$ on the set of distribution functions of non-negative random variables with unit mean. Equivalently, the pair $(b,\mu)$ was shown to conveniently describe the probability law of a non-decreasing, right-continuous stochastic process which is strongly infinitely divisible with respect to time, subject to a normalizing condition.
\section*{Acknowledgments}
Inspiring discussions with Paul Ressel and his helpful comments on earlier versions of this manuscript are gratefully acknowledged. His remarks in particular made me aware of the somewhat special role of the point $\ell_{\Pi}$. Helpful comments by the anonymous referees and the handling editor are also gratefully acknowledged.
\section{Introduction}
The main motivation behind this paper is the question of the global well-posedness of the DKG system in dimensions higher than one. This is a relativistic field model that describes nuclear interactions of subatomic particles and plays an important role in relativistic quantum electrodynamics, see \cite{BD}. The system generates significant mathematical interest too. Mathematically, its main feature is that it is a system for two quantities with an a priori bound for only one of them, in the $L^2$-class, and no positive definite energy; at the same time, a special null-form structure in both nonlinearities allows the system to be studied at very low regularities, see \cite{GS}, \cite{DFS}, \cite{Se2}, \cite{Pe}, \cite{Ma}, \cite{DFS2}.
Strichartz estimates with spherical symmetry have attracted a lot of interest recently. The gain of regularity of these estimates over the standard Strichartz estimates varies with the equation but, for example, in the context of the wave equation this gain is significant. Most attention has been dedicated to the homogeneous setting, see Sterbenz \cite{Sz}, Fang and Wang \cite{FW}, Hidano and Kurokawa \cite{HK}, Machihara et al.\ \cite{MNNO}, Tao \cite{T2}, and Vilela \cite{V2}, with only a few special inhomogeneous estimates being proved. Below we address this issue by making use of duality arguments to show that every homogeneous Strichartz estimate with spherical symmetry has its dual counterpart which is an inhomogeneous Strichartz estimate with spherical symmetry, much in the same way as with the standard Strichartz estimates.
The main idea of our proof is to identify the class of $L^p$-functions on $\mathbb R^n$ with spherical symmetry with the class of $L^p$-functions on $[0,\infty)$ with the weighted measure $\rho^{n-1}d\rho$. This allows us to use duality and to proceed to a large extent as in the standard case.
Once we get our Strichartz estimates the main challenge will be to come up with the correct definition of spherical symmetry for spinors. It is well-known that the Dirac operator does not preserve spherical symmetry, at least not in the way one expects if one erroneously treats spinors as ordinary scalar functions. Thus, we investigate the action of rotations on spinor space and define spherical symmetry for spinors as invariance with respect to that action. However, we do not know whether this definition has been used in the physics literature before.
\section{Preliminaries}
We shall make use of the following two results in the sequel.
\begin{lem}[Christ-Kiselev, see lemma 3.1 of \cite{T2}, or \cite{T}] \label{lem: Christ-Kiselev}
Suppose that the integral operator
\begin{equation} \label{eq: Boch int}
T[F](t)=\int_{-\infty}^{\infty} K(t,s)F(s)ds
\end{equation}
is bounded from $L^p(\mathbb R;\mathcal B_1)$ to $L^q(\mathbb R;\mathcal B_2)$ for some Banach spaces ${\mathcal B}_1$, ${\mathcal B}_2$ and $1\leq p< q\leq \infty$. The operator-valued kernel $K(t,s)$ maps ${\mathcal B}_1$ to ${\mathcal B}_2$ for all $t,s \in \mathbb R$. Assume also that $K$ is regular enough to ensure that (\ref{eq: Boch int}) makes sense as a ${\mathcal B}_2$-valued Bochner integral for almost all $t \in \mathbb R$. Then the operator
\[
\tilde{T}[F](t)=\int_{-\infty}^{t} K(t,s)F(s)ds
\]
is also bounded on the same spaces.
\end{lem}
\begin{thm}[D'Ancona, Foschi, Selberg \cite{DFS}] \label{thm: DKG}
Consider the IVP for the DKG system \eqref{eq: DKG 1}, \eqref{eq: DKG 2}
for initial data in the class $\psi_{|t=0} =\psi_0 \in L^2$, $\phi_{|t=0}=\phi_0 \in H^r$ and $\partial_t \phi_{|t=0} =\phi_1 \in H^{r-1}$, where $1/4 < r< 3/4$. Then there exists a time $T>0$, depending continuously on the $L^2 \times H^r \times H^{r-1}$-norm of the data, and a solution
\[
\psi \in C([0, T]; L^2), \quad \phi \in C([0, T]; H^r) \cap C^1([0, T]; H^{r-1}),
\]
of the DKG system \eqref{eq: DKG 1}, \eqref{eq: DKG 2} on $(0, T ) \times \mathbb R^2$, satisfying the initial condition above. Moreover, the solution is unique in this class, and depends continuously on the data.
\end{thm}
\section{Inhomogeneous Strichartz estimates with spherical symmetry}
To every spherically symmetric function $f(x) \in L^p(\mathbb R^n)$ we associate a function $f_\rho(\rho) \in L^p([0, \infty); \rho^{n-1}d\rho)$ by the rule $f_\rho(\rho) = f(\rho, 0,\dots,0)$. This mapping is one-to-one and an isometry up to the constant factor $\omega_n^{1/p}$, where $\omega_n$ is the area of the unit sphere in $\mathbb R^n$. The inverse mapping is defined by the rule $g_x(x)=g(\abs{x})$, where $g(\rho) \in L^p([0, \infty); \rho^{n-1}d\rho)$, and obviously we have $(g_x)_\rho=g(\rho)$ and $(f_\rho)_x=f(x)$. Note that the dual space to $L^p([0, \infty); \rho^{n-1}d\rho)$ is the space $L^{p'}([0, \infty); \rho^{n-1}d\rho)$, where $1\leq p < \infty$, and $p$ and $p'$ are H\"older conjugate.
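The identification rests on the polar-coordinates identity $\int_{\mathbb R^n}|f|^p\,dx=\omega_n\int_0^\infty |f_\rho|^p\rho^{n-1}\,d\rho$ for radial $f$. A quick numerical sketch for $n=3$ and $f(x)=e^{-\abs{x}}$, where the left-hand side equals $8\pi/p^3$ in closed form (the quadrature parameters below are arbitrary illustrative choices):

```python
import math

def lp_norm_pth_power(f_rho, p, n, R=60.0, m=200000):
    """omega_n * int_0^R f_rho(r)^p r^(n-1) dr by the midpoint rule, where
    omega_n = 2 pi^(n/2) / Gamma(n/2) is the area of the unit sphere in R^n."""
    omega_n = 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)
    h = R / m
    return omega_n * h * sum(
        f_rho((i + 0.5) * h) ** p * ((i + 0.5) * h) ** (n - 1) for i in range(m))

# f(x) = exp(-|x|) on R^3:  int |f|^p dx = 4 pi int_0^inf r^2 e^{-p r} dr = 8 pi / p^3
p = 2.0
val = lp_norm_pth_power(lambda r: math.exp(-r), p, 3)
```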
Suppose now that $\mathcal{H}(\mathbb R^n)$ is a Hilbert space of functions on $\mathbb R^n$ on which space rotations act as unitary operators. Examples of such include the Sobolev spaces $H^s(\mathbb R^n)$ for any $s\in \mathbb R$. The class of spherically symmetric functions in $\mathcal{H}(\mathbb R^n)$ is a Hilbert space, too, which we shall identify with the Hilbert space $\mathcal{H}_\rho([0,\infty))$ of functions on $[0, \infty)$ with a scalar product
\[
\langle f,g \rangle_{\mathcal{H}_\rho([0,\infty))} = \langle f_x,g_x \rangle_{\mathcal{H}(\mathbb R^n)}.
\]
\begin{lem} Suppose that a linear continuous operator $U(t): \mathcal{H}(\mathbb R^n) \rightarrow L^2(\mathbb R^n)$ commutes with rotations, i.e.\ $U(t)[f(R\,\cdot)](x)=U(t)[f](Rx)$, where $R$ denotes a space rotation on $\mathbb R^n$. Then its dual $U^*(t): L^2(\mathbb R^n) \rightarrow \mathcal{H}(\mathbb R^n)$ does too.
\end{lem}
\begin{defn} Let $U(t): \mathcal{H}(\mathbb R^n) \rightarrow L^2(\mathbb R^n)$ be a linear continuous operator that commutes with space rotations. We define the linear continuous operator $U_\rho(t): \mathcal{H_\rho}([0,\infty)) \rightarrow L^2([0,\infty);\rho^{n-1}d\rho)$ by the rule $U_\rho(t)f=(U(t)f_x)_\rho$ for every $f \in \mathcal{H_\rho}([0,\infty))$.
\end{defn}
\begin{lem}
Let $U(t): \mathcal{H}(\mathbb R^n) \rightarrow L^2(\mathbb R^n)$ be a linear continuous operator and let
$U^*(t): L^2(\mathbb R^n) \rightarrow \mathcal{H}(\mathbb R^n) $ be its dual. Then the dual to $U_\rho(t): \mathcal{H_\rho}([0,\infty)) \rightarrow L^2([0,\infty);\rho^{n-1}d\rho)$ is the operator
$(U^*(t))_\rho: L^2([0,\infty);\rho^{n-1}d\rho) \rightarrow \mathcal{H_\rho}([0,\infty))$.
\end{lem}
\begin{proof}
\begin{equation}
\begin{split}
\langle U_{\rho}(t)f, g \rangle_{L^2([0,\infty);\rho^{n-1}d\rho)} = \rp {\omega_n}
\langle U(t)f_x, g_x \rangle_{L^2(\mathbb R^n)}=\\
\rp {\omega_n} \langle f_x, U^*(t)g_x \rangle_{\mathcal{H}(\mathbb R^n)}=
\langle f, (U^*(t)g)_{\rho} \rangle_{\mathcal{H_\rho}([0,\infty))},
\end{split}
\end{equation}
where $\omega_n$ is the area of the unit sphere in $\mathbb R^n$.
\end{proof}
Suppose that we have the following estimates for $U(t)$
\begin{align}\label{est: hom}
\normQR{U(t)f}{q}{r} \lesssim \normH{f}{s}
\end{align}
for all $f \in \mathcal{H}^s$ whenever $(q, r) \in A$, where $\mathcal{H}^s$, $s=s(q,r)$, are a collection of Hilbert spaces of functions on $\mathbb R^n$, $f \in \mathcal{H}^s$ is spherically symmetric, and $A$ is the index set of admissibility for the estimate \eqref{est: hom}. We can express this more succinctly by saying that the operator
\[
T: \mathcal{H}_\rho^s \rightarrow L_t^qL_\rho^r, \qquad Tf=U_\rho(t)f,
\]
where $L_t^qL_\rho^r = L^q((0, \infty); L^r([0, \infty); \rho^{n-1}d\rho))$, is bounded whenever $(q, r) \in A$. Then by duality the operator
\[
TT^*: L_t^{\tilde q'}L_\rho^{\tilde r'} \rightarrow L_t^qL_\rho^r, \qquad TT^*F=U_\rho(t)\int_{-\infty}^{\infty} U_\rho^*(s)F(s)ds
\]
is bounded too whenever $(q,r), (\tilde q, \tilde r) \in A$. Consider now the operator
\begin{align}\label{oper: W}
W_\rho(t)F = U_\rho(t)\int_{0}^{t} U_\rho^*(s)F(s)ds,
\end{align}
which due to Duhamel's formula expresses the solution to an inhomogeneous PDE whenever $U(t)$ is the linear continuous group associated with that equation and $F(t)$ is a spherically symmetric function with respect to the space variables.
\begin{thm}[Inhomogeneous Strichartz estimates with spherical symmetry] \label{thm: inhom abs}
Suppose that the homogeneous Strichartz estimate \eqref{est: hom} holds for all spherically symmetric $f \in \mathcal{H}^s$ whenever $(q, r) \in A$. Then the following inhomogeneous Strichartz estimate
\begin{align}\label{est: inhom}
\normQR{U(t)\int_{0}^{t} U^*(s)F(s)ds}{q}{r} \lesssim \normQR{F}{\tilde q'}{\tilde r'}
\end{align}
holds for all spherically symmetric $F \in L_t^{\tilde q'}L_x^{\tilde r'}$, whenever $(q,r), (\tilde q, \tilde r) \in A$ and $q > \tilde q'$ or $(q,r)=(\tilde q, \tilde r)$.
\end{thm}
\begin{proof}
In the case when $q> \tilde q'$ we apply the Christ-Kiselev lemma to the $TT^*$-operator, otherwise we use symmetry considerations as in Keel and Tao \cite{KT}.
\end{proof}
\section{Strichartz estimates for the wave equation}
Define the operators
\begin{align*}
\widehat {U_{\pm}(t)f} = e^{\pm it\abs{\xi}}\hat{f}(\xi), \\
U_0(t)f = \frac {U_{+}(t)- U_{-}(t)} {2iD}\, f,\\
W_0(t)F = \int_{0}^{t} U_0(t-s)\, F(s)\, ds,
\end{align*}
where the operator $D$ has a Fourier symbol $\abs{\xi}$. Note that $D$ commutes with rotations and thus preserves spherical symmetry.
Then the solution to the IVP for the wave equation
\begin{align}
\Box u = F(t,x), \quad (t,x) \in [0, \infty) \times \mathbb R^n, \label{eq: wave 1}\\
u(0)=f, \quad \partial_t u(0) = g. \label{eq: wave 2}
\end{align}
is given by the formula
\[
u(t) = \partial_t U_0(t)f + U_0(t)g + W_0(t)F.
\]
For simplicity, we denote by $U_0(t)[f,g]=\partial_t U_0(t)f + U_0(t)g$ the propagation of the free wave with initial data $f$ and $g$.
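On the Fourier side the free wave is $\widehat{U_0(t)[f,g]}=\cos(t\abs{\xi})\hat f+\frac{\sin(t\abs{\xi})}{\abs{\xi}}\hat g$, and per mode the quantity $|\partial_t\hat u|^2+\abs{\xi}^2|\hat u|^2$ is conserved. A discrete sketch on a periodic $2$-D grid (the grid and the initial data are illustrative choices, not from the paper):

```python
import numpy as np

N, L = 128, 20.0
k1 = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k1, k1)
K = np.hypot(KX, KY)                                # |xi| on the Fourier grid

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
fh = np.fft.fft2(np.exp(-(X ** 2 + Y ** 2)))        # \hat f, Gaussian bump
gh = np.fft.fft2(X * np.exp(-(X ** 2 + Y ** 2)))    # \hat g, some initial velocity

def wave_energy(t):
    """||u_t||^2 + ||Du||^2 (up to a fixed Parseval constant) for the free wave
    u_hat(t) = cos(t K) fh + (sin(t K)/K) gh."""
    s = np.where(K > 0, np.sin(t * K) / np.where(K > 0, K, 1.0), t)
    uh = np.cos(t * K) * fh + s * gh
    uth = -K * np.sin(t * K) * fh + np.cos(t * K) * gh
    return float(np.sum(np.abs(uth) ** 2 + (K * np.abs(uh)) ** 2))
```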
\begin{defn}
We say that the exponent pair $(q,r)$ is radially wave-admissible if
\begin{equation} \label{eq: defn rad Strich}
\rp q + \frac {n-1} r < \frac {n-1} 2, \qquad n >1,
\end{equation}
where $2 \leq q,r \leq \infty$, $(q,r) \neq (\infty, \infty)$, or if $(q,r)$ coincides with $(\infty, 2)$.
\end{defn}
\begin{thm}[\cite{FW}, \cite{HK}] \label{thm: wave hom}
The following estimate
\begin{equation} \label{est: rad Strich}
\normQR{U_0(t)[f,g]}{q}{r} \lesssim \normHt{f}{s} + \normHt{g}{s-1},
\end{equation}
holds for all spherically symmetric $f \in \dot{H}^s(\mathbb R^n)$, $g \in \dot{H}^{s-1}(\mathbb R^n)$, whenever the exponent pair $(q,r)$ is radially wave-admissible and the Sobolev exponent $s$ satisfies the scaling condition
\[
\rp q + \frac n r = \frac n 2 - s.
\]
\end{thm}
\begin{thm} \label{thm: wave full}
Let $u(t)$ be the solution to the IVP for the wave equation \eqref{eq: wave 1}, \eqref{eq: wave 2}, where $f$, $g$, and $F(t)$ are spherically symmetric. Then the following estimate
\begin{equation*}
\normQR{D^{\sigma_1}u(t)}{q}{r} \lesssim \norm{f}_{\dot H^s} + \norm{g}_{\dot H^{s-1}} + \normQR{D^{\sigma_2}F}{\tilde q'}{\tilde r'}
\end{equation*}
holds for all $f \in \dot{H}^s(\mathbb R^n)$, $g \in \dot{H}^{s-1}(\mathbb R^n)$, and $D^{\sigma_2}F(t) \in L^{\tilde q'}_tL^{\tilde r'}_x$ whenever $(q,r)$, $(\tilde q, \tilde r)$ are two radially wave-admissible pairs\footnote{Except when $q=\tilde q=2$, $r\neq \tilde r$, and either $r$ or $\tilde r$ is equal to $\infty$.} and satisfy the following scaling condition
\begin{align}\label{eq: dim cond}
\rp q + \frac n r - \sigma_1= \frac n 2 - s = \rp {\tilde q'} + \frac n {\tilde r'} - 2-\sigma_2.
\end{align}
\end{thm}
\begin{proof}
The homogeneous Strichartz estimates of theorem \ref{thm: wave hom} hold for each of the operators $U_{\pm}$ separately. For simplicity let us consider $U_{-}(t)$ first. For $U_{-}(t): H^s(\mathbb R^n) \rightarrow L^2(\mathbb R^n)$ we have that $U_{-}^*(t): L^2(\mathbb R^n) \rightarrow H^s(\mathbb R^n)$ and $U_{-}^*(t)=D^{-2s}U_{+}(t)$. In view of theorem \ref{thm: wave hom}, the operators $T_1: H^s(\mathbb R^n) \rightarrow L^q_t L_x^r$, $T_1f = D^{\sigma_1}U_{-}(t)f$, and $T_2: H^s(\mathbb R^n) \rightarrow L^{\tilde q}_tL_x^{\tilde r}$, $T_2f = D^{s-\beta}U_{-}(t)f$ are bounded on spherically symmetric data $f \in H^s(\mathbb R^n)$, where
\[
s= \frac n 2 - \frac n r - \rp q + \sigma_1, \quad \beta = \frac n 2 - \frac n {\tilde r} - \rp {\tilde q},
\]
and $(q,r)$, $(\tilde q, \tilde r)$ are two radially wave-admissible pairs and $q>\tilde q'$. Hence, in view of theorem \ref{thm: inhom abs}, we obtain the estimate
\begin{align*}
\normQR{\int_{0}^{t} U_{-}(t-s)D^{s-\beta-2s}F(s)ds}{q}{r} \lesssim \normQR{F}{\tilde q'}{\tilde r'}.
\end{align*}
Repeating the same argument for $U_{+}(t)$, we obtain the estimate
\begin{align*}
\normQR{\int_{0}^{t} W_0(t-s)F(s)ds}{q}{r} \lesssim \normQR{D^{s+\beta-1}F}{\tilde q'}{\tilde r'}.
\end{align*}
Setting $\sigma_2 = s+\beta-1$ gives condition \eqref{eq: dim cond}.
The case when $(q,r)$, $(\tilde q, \tilde r)$ are two radially wave-admissible pairs with $q=\tilde q=2$ and $r=\tilde r$ is treated similarly.
And finally, the case when $(q,r)$, $(\tilde q, \tilde r)$ are two radially wave-admissible pairs with $q=\tilde q=2$ and $r \neq \tilde r$ is reduced to the previous one by Sobolev embedding.
\end{proof}
\section{Applications}
The two-dimensional DKG system reads
\begin{align}
(\partial_t + \sigma_1 \partial_x + \sigma_2 \partial_y + iM\sigma_3) \psi(t,x,y) &=i\phi\sigma_3\psi, \quad (t,x,y) \in [0,\infty)\times\mathbb R\times\mathbb R, \label{eq: DKG 1}\\
(\partial_t^2-\Delta+m^2)\phi(t,x,y) &=\langle \sigma_3\psi, \psi\rangle, \label{eq: DKG 2}
\end{align}
where
\begin{align}
\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},\quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
\end{align}
are the Pauli spin matrices and $M$ and $m$ are nonnegative constants. The unknown quantities are a two-spinor $\psi(t,x,y): [0,\infty)\times\mathbb R^2 \rightarrow \mathbb C^2$, and a real scalar field $\phi(t,x,y): [0,\infty)\times\mathbb R^2 \rightarrow \mathbb R$.
Let us recall that the system \eqref{eq: DKG 1}, \eqref{eq: DKG 2} is form covariant with respect to Lorentz transformations and in particular to spatial rotations. Suppose that the coordinate system $Oxy$ is changed into $Ox'y'$ by a spatial rotation $R(\varphi)$ of an angle $\varphi$
\begin{align*}
\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos \varphi & -\sin \varphi \\ \sin \varphi & \cos \varphi \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.
\end{align*}
Then we want to find a rule $\psi \rightarrow \psi'$ as $Oxy \rightarrow Ox'y'$ of the form $\psi'(t,z')=S(\varphi) \psi(t,z)$, where
$S(\varphi)$ is a $2\times2$ matrix and $z$ denotes $(x,y)$, that leaves \eqref{eq: DKG 1}, \eqref{eq: DKG 2} form invariant. Of course, for the scalar field $\phi$ we have $\phi'(t,z')=\phi(t,z)$. Substituting in \eqref{eq: DKG 1}, \eqref{eq: DKG 2}
\begin{align*}
\psi(t,z) = S^{-1}(\varphi)\psi'(t, R(\varphi)z),\\
\phi(t,z) = \phi'(t,R(\varphi)z),
\end{align*}
we obtain
\begin{align}
(\partial_t + \sigma_1' \partial_{x'} + \sigma_2' \partial_{y'} + iM\sigma_3') \psi'(t, z') &=i\phi\sigma_3'\psi', \label{eq: DKG 1'}\\
(\partial_t^2-\Delta+m^2)\phi'(t,z') &=\langle \sigma_3'\psi', \psi'\rangle, \label{eq: DKG 2'}
\end{align}
where
\begin{align*}
\sigma_1' &= S(\varphi) \left( \sigma_1 \cos \varphi - \sigma_2 \sin \varphi \right) S^{-1}(\varphi)\,, \\
\sigma_2' &= S(\varphi) \left( -\sigma_1 \sin \varphi + \sigma_2 \cos \varphi \right) S^{-1}(\varphi)\,,\\
\sigma_3' &= S(\varphi)\, \sigma_3\, S^{-1}(\varphi)\,.
\end{align*}
Thus the matrix $S(\varphi)$ must be such that $\sigma'_j=\sigma_j$, for $j=1,2,3$. One can check that if we set
\[
S(\varphi) = \begin{pmatrix} e^{i\varphi} & 0 \\ 0 & 1 \end{pmatrix}
\]
all of the above conditions are satisfied. Note that the Klein-Gordon part of the system is form invariant as $\langle \sigma_3'\psi', \psi'\rangle = \langle \sigma_3\psi, \psi\rangle$ due to the fact that $S(\varphi)$ is unitary and the well-known invariance of the Laplacian $\Delta$ with respect to rotations. Thus we arrive at the following
\begin{defn} We say that the two-spinor $\psi_0(z): \mathbb R^2 \rightarrow \mathbb C^2$ is spherically symmetric if it satisfies
\begin{align} \label{eq: spin sym}
\psi_0(R(\varphi)z) = S(\varphi)\psi_0(z).
\end{align}
\end{defn}
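The claimed invariances can be verified numerically. Note that the sign of the $\sin\varphi$ terms in the conjugation identities depends on the orientation convention for the rotation; the sketch below (not from the paper) uses the convention under which $S(\varphi)=\mathrm{diag}(e^{i\varphi},1)$ maps the rotated pair back to $(\sigma_1,\sigma_2)$, and also checks unitarity, the invariance of $\sigma_3$, and the invariance of the bilinear $\langle \sigma_3\psi,\psi\rangle$:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def S(phi):
    """Spinor rotation matrix S(phi) = diag(e^{i phi}, 1) from the text."""
    return np.array([[np.exp(1j * phi), 0.0], [0.0, 1.0]], dtype=complex)

def conj_by_S(phi, M):
    """Conjugation S(phi) M S(phi)^{-1}."""
    return S(phi) @ M @ np.linalg.inv(S(phi))
```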
\begin{lem} A function $\psi_0(z): \mathbb R^2 \rightarrow \mathbb C^2$ satisfies \eqref{eq: spin sym} if and only if it has the form
\[
\psi_0(z) = S(\varphi)\chi(\abs{z}),
\]
where $\varphi$ is the argument of the complex number $x+iy$ and $\chi(\rho) : [0,\infty) \rightarrow \mathbb C^2$.
\end{lem}
\begin{proof}
Trivial.
\end{proof}
\begin{rem} From the explicit representation above and the fact that $e^{i\varphi}=(x+iy)/ \abs{z} \in C^\infty(\mathbb R^2 \setminus O)$, we see that the smoothness of $\psi_0$ depends on the smoothness of $\chi$ and the behavior of $\chi$ around the origin.
\end{rem}
\begin{lem} Suppose that the IVP for \eqref{eq: DKG 1}, \eqref{eq: DKG 2} has a unique solution in some class of initial data. Then for spherically symmetric data from that class the solution to \eqref{eq: DKG 1}, \eqref{eq: DKG 2} remains spherically symmetric for all time.
\end{lem}
\begin{proof}
Trivial.
\end{proof}
\begin{lem} \label{lem: DKG}
Suppose that $u(t)$ is the solution to the IVP for the wave equation (\ref{eq: wave 1}), (\ref{eq: wave 2}) in space dimension $n=2$. Suppose that the data $f$ and $g$ and the forcing term $F(t)$ are spherically symmetric with $f \in H^s(\mathbb R^2)$, $g \in H^{s-1}(\mathbb R^2)$, and $F(t) \in L^\infty_tL^1_x(\mathbb R^2)$. Then we have the estimate
\begin{equation} \label{est: L1}
\normQR{D^s u(t)}{\infty}{2} \lesssim_T \norm{f}_{\dot H^s(\mathbb R^2)} + \norm{g}_{\dot H^{s-1}(\mathbb R^2)} + \norm{F}_{L^{\tilde q'}_t([0,T]; L^{1}_x)},
\end{equation}
for $s \in [0, 1/2)$ and $1/{\tilde q}=s$.
\end{lem}
\begin{proof}
We apply theorem \ref{thm: wave full} with $(q,r)=(\infty, 2)$, $(\tilde q, \tilde r)=(\tilde q, \infty)$, $\tilde q > 2$, $\sigma_1=s$, and $\sigma_2=0$.
\end{proof}
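The exponent bookkeeping in this proof can be checked mechanically. A sketch with the illustrative value $s=0.3$ (any $s\in(0,1/2)$ works the same way):

```python
def radially_wave_admissible(q, r, n=2):
    """1/q + (n-1)/r < (n-1)/2, with the endpoint pair (inf, 2) also allowed."""
    if (q, r) == (float("inf"), 2):
        return True
    return 1.0 / q + (n - 1.0) / r < (n - 1.0) / 2.0

def scaling_gap(q, r, sigma, n=2):
    """The quantity 1/q + n/r - sigma appearing in the scaling condition."""
    return 1.0 / q + float(n) / r - sigma
```

With $(q,r)=(\infty,2)$, $\sigma_1=s$ the left-hand side of the scaling condition equals $1-s$, and with $(\tilde q,\tilde r)=(1/s,\infty)$, $\sigma_2=0$ the right-hand side $\frac1{\tilde q'}+\frac{n}{\tilde r'}-2-\sigma_2$ equals $1-s$ as well, which is exactly the choice of exponents in the lemma.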
\begin{thm} \label{thm: DKG sph}
Consider the IVP for the DKG system \eqref{eq: DKG 1}, \eqref{eq: DKG 2}, with $m=0$,
for initial data in the class $\psi_{|t=0} =\psi_0 \in L^2$, $\phi_{|t=0}=\phi_0 \in H^r$ and $\partial_t \phi_{|t=0} =\phi_1 \in H^{r-1}$, where $1/4 < r< 1/2$ and $\psi_0$, $\phi_0$, and $\phi_1$ are spherically symmetric. Then there exists a spherically symmetric solution
\[
\psi \in C((0, \infty); L^2), \quad \phi \in C((0, \infty); H^r) \cap C^1((0, \infty); H^{r-1}),
\]
of the DKG system \eqref{eq: DKG 1}, \eqref{eq: DKG 2} on $(0, \infty) \times \mathbb R^2$, satisfying the initial condition above. Moreover, the solution is unique in this class, and depends continuously on the data.
\end{thm}
\begin{proof}
The fundamental conservation law of the system is the charge identity
\[
\normP{\psi(t)}{2} = \normP{\psi_0}{2}.
\]
Using this, the proof follows by standard arguments from theorem \ref{thm: DKG} and lemma \ref{lem: DKG}.
\end{proof}
\nocite{Bou2}
\bibliographystyle{plain}
\section{General Remarks}
Many attempts to construct a gauge model of gravitation exist. In particular, the works by Utiyama \cite{utiyama56} and by Kibble \cite{kibble61} were the starting points for various gauge approaches to gravitation (as discussed in Section 2). As a result, Poincar\'e gauge theory (PGT), see \cite{hehl-von74,hehl-von76,blago1,blago2,deser}, is a generalization of the Einstein scheme of gravity, in which not only the energy-momentum tensor, but also the spin of matter plays a dynamical role when coupled to spin connections, in a non-Riemannian space-time.
To include spinor fields consistently, it is necessary to extend the framework of General Relativity (GR), as already realized by Hehl \emph{et al.} \cite{hehl-von76}: this necessity is strictly connected with the non-existence in GR of an independent concept of spin momentum for physical fields, as the Lorentz Group (LG) does not have an independent status as a gauge group in GR. In fact, we will demonstrate that an isometric diffeomorphism can induce a local Lorentz rotation, thus standard spin connections $\omega_\mu^{\phantom1 ab}$ no longer have a gauge role in this framework (being only a function of tetrads and behaving like vectors under the diffeomorphism-induced Lorentz rotation). New gauge connections $A_\mu^{\phantom1 ab}$ have to be introduced into the dynamics to appropriately recover the Lorentz invariance of the scheme, when spinor fields are taken into account (as discussed in Section 3).
This paradigm is well established in flat space-time, where isometric diffeomorphisms are allowed and spin connections can be set to zero. Within this framework, fermion dynamics is analyzed including the effects of the new gauge fields (as discussed in Section 4). A Modified Dirac Equation is the starting point to study the non-relativistic limit and the resulting Generalized Pauli Equation in the presence of a Coulomb central potential. For a hydrogen-like atom, energy-level splits are predicted but no new spectral lines are allowed (as discussed in Section 5).
The analysis developed in flat space-time can be extended to a curved one. In the First-Order Approach, the geometrical identification of the LG gauge fields with a suitable bein projection of the contortion field is allowed when a non-standard interaction term between generalized connections and these gauge fields is postulated, if fermion matter is absent. On the other hand, when spinors are present, both spin connections and the spin current contribute to the torsion term (as discussed in Section 6).\vspace{0.3cm}
{\textbf{\emph{Notation:}} Greek indices (\emph{e.g.}, $\mu=0,1,2,3$) change as tensor ones under general coordinate transformations (\emph{i.e.}, world transformations); Latin indices (\emph{e.g.}, $a=0,1,2,3$) are the tetradic indices and refer to Lorentz transformations; \emph{Only} indices $i,\;j,\;k$ are 3-dimensional indices and run from 1 to 3.}
\section{Internal and Space-Time Symmetries} This Section is aimed at analyzing the internal symmetries of the space-time. We focus on the description of GR as a gauge model, underlining the ambiguity that arises from this approach.
We can introduce the usual orthonormal basis $e^{\phantom1 a}_{\mu}$ (tetrads) for the local Minkowskian tangent space-time of a 4-dimensional manifold. Tetrads are locally defined in curved space-time and their transformations can be read as generic reference-system changes. This way, such a standard formalism allows one to recover Lorentz symmetry because tetrad changes are defined as local Lorentz transformations linking the different inertial references they describe.
The relations between tetrads and the metric $g_{\mu\nu}$ are
\begin{equation}\label{relation}
g_{\mu\nu}=\eta_{ab}\,e_{\mu}^{\phantom1 a}\,e_{\nu}^{\phantom1 b},\qquad\quad
e_{\mu}^{\phantom1 a}\,e^{\mu}_{\phantom1 b}=\delta^{a}_{b},\qquad\quad
e_{\mu}^{\phantom1 a}\,e^{\nu}_{\phantom1 a}=\delta^{\nu}_{\mu}\;,
\end{equation}
where $\eta_{ab}$ is the local Minkowski metric. Projecting tensor fields from the 4-dimensional manifold to the Minkowskian space-time allows us to emphasize the local Lorentz invariance of the scheme in presence of spinor fields. In fact, fermions transform like a particular representation $S$ of the LG, \emph{i.e.}, $\psi\to S\psi$, where
\begin{align}\label{LG}
S=I-\tfrac{i}{4}\;\epsilon^{{a}{b}}\,\Sigma_{{a}{b}}\;,\qquad\quad
\Sigma_{ab}=\tfrac{i}{2}\,[\gamma_{a},\gamma_{b}]\;,\qquad\quad
[\Sigma_{cd},\Sigma_{ef}]=i\mathcal{F}^{ab}_{cdef}\,\Sigma_{ab}\;,
\end{align}
here the $\Sigma_{ab}$'s and the $\mathcal{F}^{ab}_{cdef}$'s are the generators and the structure constants of the LG, respectively, and $\epsilon^a_b(x)$ is the infinitesimal Lorentz rotation parameter. To ensure the Lorentz covariance of the spinor derivative $\partial_\mu\,\psi$, connections $\omega_{\mu}^{\phantom1 ab}$ must be introduced to define a covariant derivative as
\begin{equation}\label{spin_connections}
D^{\scriptscriptstyle{(\omega)}}_\mu=\partial_\mu+\Gamma^{\scriptscriptstyle{(\omega)}}_\mu\;,\qquad
\Gamma^{\scriptscriptstyle{(\omega)}}_\mu=\tfrac{1}{2}\;\omega_{\mu}^{\phantom1 ab}\, \Sigma_{ab}\;,\qquad
\omega_{\mu}^{\phantom1 ab}=e^{a\nu}\nabla_\mu e^{\phantom1 b}_{\nu}=
e^{\phantom1 c}_{\mu}\,\gamma^{ba}_{\phantom1\ph c}\;,
\end{equation}
where the $\omega_{\mu}^{\phantom1 ab}$'s denote the so-called \emph{spin connections} and $\gamma_{abc}=e^{\mu}_{\phantom1 c} e^{\nu}_{\phantom1 b} \nabla_\mu e_{\nu a}$ are the Ricci Rotation Coefficients ($\nabla_\mu$ is the usual coordinate covariant derivative). It is worth underlining that the introduction of the tetrad formalism enables us to include spinor fields in the dynamics \cite{lusanna}. In this sense, spin connections are introduced to restore the correct Dirac algebra in curved space-time, \emph{i.e.}, $D^{\scriptscriptstyle{(\omega)}}_\mu\;\gamma^{\nu}=0$ \cite{hammond}. In other words, the correct treatment of spinors in curved space-time leads to the introduction of those connections, which guarantee an appropriate gauge model for the LG.
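Relations \eqref{relation} and the Lorentz algebra behind \eqref{LG} can be verified numerically. In the sketch below (not from the paper) the diagonal tetrad, the scale $a$, the signature $\eta=\mathrm{diag}(1,-1,-1,-1)$ and the Dirac representation of the $\gamma$'s are all illustrative assumptions:

```python
import numpy as np

# --- tetrad relations for an illustrative diagonal metric -------------------
a = 2.0                                   # arbitrary scale factor
eta = np.diag([1.0, -1.0, -1.0, -1.0])    # local Minkowski metric (+,-,-,-)
e = np.diag([1.0, a, a, a])               # tetrad e^a_mu (rows a, columns mu)
g = e.T @ eta @ e                         # g_{mu nu} = eta_ab e^a_mu e^b_nu
e_inv = np.linalg.inv(e)                  # inverse tetrad e^mu_a

# --- Clifford algebra and Lorentz generators (Dirac representation) ---------
sig = [np.array(m, dtype=complex) for m in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
gam = [np.block([[I2, Z2], [Z2, -I2]])] + \
      [np.block([[Z2, s], [-s, Z2]]) for s in sig]
# Sigma_ab = (i/2) [gamma_a, gamma_b], as in the definition above
Sigma = {(A, B): 0.5j * (gam[A] @ gam[B] - gam[B] @ gam[A])
         for A in range(4) for B in range(A + 1, 4)}
```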
This picture suggests, at first sight, a description of gravity as a gauge model \cite{cho1,cho2}. Spin connections are a bein projection of the Ricci Rotation Coefficients and this formalism leads to the usual definition of the curvature tensor:
\begin{equation}\label{Riemann}
R^{\phantom1\ph ab}_{\mu\nu}=\partial_\nu\omega_{\mu}^{\phantom1 ab}-
\partial_\mu\omega_{\nu}^{\phantom1 ab}+\mt{F}^{ab}_{cdef}\omega_{\mu}^{\phantom1 cd}\omega_{\nu}^{\phantom1 ef}\;,
\end{equation}
which corresponds to the I Cartan Structure Equation. The Hilbert-Einstein Action now rewrites as
\begin{equation}\label{action for o}
S_G(e,\omega)=-\tfrac{1}{4}{\textstyle \int}
\mathrm{det}(e)\,d^{4}x\;\;e^{\phantom1\mu}_{a}e^{\phantom1\nu}_{b} R^{\phantom1\ph ab}_{\mu\nu}\;.
\end{equation}
Variation wrt connections leads to the II Cartan Structure Equation,
\begin{equation} \label{Cartan eq}
\partial_\mu e^{\phantom1 a}_{\nu} -\partial_\nu e^{\phantom1 a}_{\mu}-\omega_{\mu}^{\phantom1 ab}e_{\nu b}
+\omega_{\nu}^{\phantom1 ab}e_{\mu b}=0\;,
\end{equation}
which links the tetrad fields to the spin connections. Since $\omega_{\mu}^{\phantom1 ab}=e^{\phantom1 c}_{\mu}\,\gamma^{ba}_{\phantom1\ph c}$, we underline that such connections behave like ordinary vectors under general coordinate transformations (\emph{i.e.}, world transformations), and variation wrt tetrads gives rise to the Einstein Equations, once the solution of eq. (\ref{Cartan eq}) is addressed.
In the standard approach, spin connections transform like \emph{Lorentz gauge vectors} under infinitesimal local Lorentz transformations (described by the Lorentz matrix $\Lambda_{a}^{b}=\delta_{a}^{b}+\epsilon_{a}^{b}$):
\begin{equation}\label{gaugetr}
\omega_{\mu}^{\phantom1 ab}\stackrel{L}{\to} \omega_{\mu}^{\phantom1 ab}-\partial_\mu\epsilon^{ab}+
\tfrac{1}{4}\mt{F}^{ab}_{cdef}\epsilon^{cd}\omega_{\mu}^{\phantom1 ef}
\end{equation}
and the Riemann tensor is preserved by such a change; therefore, in flat space-time, we deal with non-zero gauge connections, but a vanishing curvature. In both flat and curved space-time, the connections $\omega_{\mu}^{\phantom1 ab}$ exhibit the right behavior to play the role of Lorentz gauge fields and GR exhibits the features of a gauge theory. On the other hand, the presence of tetrad fields (introduced by the Principle of General Covariance) is an ambiguous element for the gauge paradigm. In fact, spin connections can be uniquely determined as functions of tetrads in terms of the Ricci Rotation Coefficients $\gamma_{abc}$. This relation generates an ambiguity in the interpretation of the $\omega_{\mu}^{\phantom1 ab}$'s as the only fundamental fields of the gauge scheme, since the theory would be based on two dependent degrees of freedom.
It is just the introduction of fermions that requires treating local Lorentz transformations as the real independent gauge of GR. In fact, when spinor fields are taken into account, their transformations under the local Lorentz symmetry imply that the Dirac Equation is endowed with non-zero spin connections, even in flat space-time. Because of the behavior of spinor fields, it becomes crucial to investigate whether diffeomorphisms can be reinterpreted to some extent as local Lorentz transformations.
\section{A Novel Approach for a Gauge Theory of the LG} Here we want to fix some guidelines that lie at the basis of this approach to Lorentz gauge theory. We first note that spin connections $\omega_{\mu}^{\phantom1 ab}$ are gauge potentials and not physical fields, in the sense that they are subject to gauge transformations \reff{gaugetr} that do not alter the curvature tensor.
The \emph{key point} is that, if we are able to show (as we will demonstrate in the following) that diffeomorphisms can induce local Lorentz transformations, we can conclude that the $\omega_{\mu}^{\phantom1 ab}$'s can no longer be regarded as gauge potentials for the LG, because of the ambiguity of their transformation properties: do they behave like gauge fields or ordinary (coordinate) vectors? In this sense, new gauge fields must be added to restore the Lorentz invariance of the theory.
In other words, an inconsistency arises when the spinor behavior is analyzed under diffeomorphism-induced Lorentz rotations. Fermions are coordinate scalars and transform under Lorentz rotations according to the usual laws, \emph{i.e.}, a spinor representation of the LG. They live in the tangent bundle without experiencing coordinate changes and, if the two transformations overlap, an ambiguity on the nature of spinors comes out.
In flat space-time, in the case $e^{\phantom1 a}_{\mu}=\delta^{\phantom1 a}_{\mu}$, spin connections vanish and they must remain identically zero under local Lorentz transformations. In fact, the coordinate transformations can now be reinterpreted as Lorentz rotations and the request for external Lorentz gauge fields appears well grounded. This picture remains still valid in curved space-time, as long as we accept that \emph{spin connections transform like vectors when diffeomorphism-induced rotations are implemented}. In this scheme, new gauge fields, transforming according to their Lorentz indices, are the only fields able to restore Lorentz invariance when local rotations are induced by coordinate changes. In fact, the nature of gauge potentials is naturally lost by the gravitational connections, which behave like tensors only. It is worth noting that, if the $\omega_{\mu}^{\phantom1 ab}$'s are assumed to behave like gauge vectors under the induced Lorentz transformation, the standard approach can be recovered and the ambiguity of the tetrad dependence arises. It can also be interesting to compare our approach with that of \cite{ortin, kosmann}, where the formalism of the \emph{Lie Derivative} is extended to spinor fields.
We are now going to demonstrate that such a correspondence between coordinate transformations and local rotations takes place only if we deal with \emph{isometric diffeomorphisms}. This request is naturally expected if we want to reproduce a Lorentz symmetry: an isometric diffeomorphism induces orthonormally-transformed bases and, to this extent, an isometry generates a local Lorentz transformation of the basis. An infinitesimal isometric diffeomorphism is described by
\begin{equation}\label{newdiff}
x^{\mu}\to x^{{\scriptscriptstyle\prime}\mu}=x^{\mu}+\xi^{\mu}(x)\;,\qquad
\qquad\;\;\nabla_{\mu}\xi_{\nu}+\nabla_{\nu}\xi_{\mu}=0\;,
\end{equation}
and it induces the following transformation of the basis vectors
\begin{equation}\label{taylor}
e_{\mu}^{\phantom1 a}(x)\stackrel{D}{\to}e_{\mu}^{{\scriptscriptstyle\prime}\phantom1 a}(x^{{\scriptscriptstyle\prime}})=
e_{\nu}^{\phantom1 a}(x)\;\nicefrac{\partial x^{\nu}}{\partial x^{{\scriptscriptstyle\prime}\mu}}=
e^{\phantom1 a}_{\mu}(x)-e^{\phantom1 a}_{\nu}(x)\;\nicefrac{\partial \xi^{\nu}}{\partial x^{{\scriptscriptstyle\prime}\mu}}\;.
\end{equation}
If we deal with an infinitesimal local Lorentz transformation $\Lambda_{a}^{b}(x)=\delta^b_a +\epsilon^{b}_{a}(x)$, we get, up to the leading order, the tetrad change (evaluated in $x^{{\scriptscriptstyle\prime}}$ of eq. \reff{newdiff}):
\begin{equation}
e_{\mu}^{\phantom1 a}(x)\stackrel{L}{\to}e_{\mu}^{{\scriptscriptstyle \prime}\phantom1 a}(x^{{\scriptscriptstyle \prime}})=\Lambda^{a}_{b}(x^{\scriptscriptstyle \prime})e_{\mu}^{\phantom1 b}(x^{{\scriptscriptstyle \prime}})=
e_{\mu}^{\phantom1 a}(x^{{\scriptscriptstyle \prime}})+e_{\mu}^{\phantom1 b}(x)\epsilon^{a}_{b}(x)\;.
\end{equation}
In this sense, we can infer that the two transformation laws overlap if we assume the condition:
\begin{equation}
\epsilon_{ab}=\nabla_{[a}\xi_{b]}-\gamma_{abc}\xi^c\;,
\end{equation}
where, to pick up local Lorentz transformations from the set of generic diffeomorphisms, the isometry condition $\nabla_{(\mu}\xi_{\nu)}=0$ has to be taken into account in order to get the antisymmetry condition $\epsilon_{ab}=-\epsilon_{ba}$ for the infinitesimal parameter $\epsilon_{ab}$.
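The interplay between the isometry condition and the antisymmetry of $\epsilon_{ab}$ can be made explicit in Cartesian Minkowski coordinates, where $\nabla_\mu=\partial_\mu$ and the Ricci Rotation Coefficients vanish. The following sympy sketch (an illustrative check of ours) shows that for a boost Killing vector the symmetric part of $\partial_a\xi_b$ vanishes, so the induced parameter is automatically antisymmetric, while for a non-isometric (dilation) vector field it is not:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
X = (t, x, y, z)
eta = sp.diag(1, -1, -1, -1)

def grad_lower(xi_up):
    """Matrix d_a xi_b in Cartesian Minkowski coordinates, where the
    covariant derivative reduces to the partial one."""
    xi_low = [sum(eta[b, c] * xi_up[c] for c in range(4)) for b in range(4)]
    return sp.Matrix(4, 4, lambda a, b: sp.diff(xi_low[b], X[a]))

# a boost Killing vector: xi^t = x, xi^x = t (an isometry of Minkowski space)
Db = grad_lower((x, t, 0, 0))
assert sp.simplify(Db + Db.T) == sp.zeros(4, 4)   # Killing condition holds
# hence eps_ab = d_a xi_b is automatically antisymmetric

# a dilation xi^mu = x^mu is NOT an isometry: the symmetric part survives
Dd = grad_lower((t, x, y, z))
assert sp.simplify(Dd + Dd.T) != sp.zeros(4, 4)
print("boost parameter eps_ab:", Db)
```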
\section{Spinors and Gauge Theory of the LG in Flat Space-Time}
Let us now analyze the formulation of a gauge model for the LG in a flat Minkowski space-time when diffeomorphism-induced Lorentz transformations are allowed. The choice of flat space is due to the fact that the Riemann curvature tensor vanishes and, consequently, the usual spin connections $\omega_\mu^{\phantom1 ab}$ can be set to zero choosing the gauge $e_\mu^{\phantom1 a}=\delta_\mu^{\phantom1 a}$ $^{(}$\footnote{In general, the $\omega_\mu^{\phantom1 ab}$'s are allowed to be non-vanishing quantities.}$^{)}$. This allows one to introduce new Lorentz connections $A_\mu^{\phantom1 ab}$ as the gauge fields as far as the correspondence between an infinitesimal diffeomorphism and a local Lorentz rotation is recovered, as shown in the previous section.
In a 4-dimensional flat manifold, the metric tensor reads $g_{\mu\nu}=\eta_{ab}e^{\phantom1 a}_{\mu}e^{\phantom1 b}_{\nu}$ and an infinitesimal diffeomorphism and a local Lorentz transformation write as
\begin{equation}
x^a\stackrel{D}{\to}x^a +\xi^a (x^c)\;,\qquad\qquad
x^a\stackrel{L}{\to}x^a +\epsilon^a_{b}(x^c)\;x^b\;,
\end{equation}
respectively. If vectors are taken into account, no inconsistencies arise if the two transformations coincide. In fact, if we set $\epsilon_a^b\equiv \partial^b\xi_a(x^c)$, the two transformation laws
\begin{align}
^{D}V^{{\scriptscriptstyle\prime}}_a (x^{{\scriptscriptstyle\prime}})= V_a(x)+\partial_a \xi^b(x)\;V_b(x)\;,\qquad\quad
^{L}V^{{\scriptscriptstyle\prime}}_a (x^{{\scriptscriptstyle\prime}})= V_a(x)+\epsilon^b_a(x)\;V_b(x)\;,
\end{align}
overlap and the LG loses its status of independent gauge group \cite{hehl-von76}. Here, the isometry condition $\partial_b\xi_a+\partial_a\xi_b=0$ has to be imposed to restore the proper number of degrees of freedom of a Lorentz transformation, \emph{10}, out of that of a generic diffeomorphism, \emph{16}.
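This counting can be reproduced symbolically (an illustrative sympy sketch of ours): imposing the isometry condition on a generic affine vector field $\xi_b=M_{ba}x^a+c_b$ removes the symmetric part of $M$, leaving exactly the \emph{10} independent parameters quoted above.

```python
import sympy as sp

# generic affine vector field xi_b = M_{ba} x^a + c_b: 16 + 4 parameters;
# the isometry condition forces M to be antisymmetric, leaving 6 + 4 = 10
xs = sp.symbols('x0:4', real=True)
M = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'M{i}{j}'))
xi = [sum(M[b, a] * xs[a] for a in range(4)) for b in range(4)]

killing = [sp.diff(xi[a], xs[b]) + sp.diff(xi[b], xs[a])
           for a in range(4) for b in range(4)]
constraints = sp.solve(killing, list(M))   # M_aa = 0 and M_ab = -M_ba
free = 16 - len(constraints) + 4           # surviving M entries + translations
print("independent isometry parameters:", free)   # -> 10
```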
On the other hand, spin$-\nicefrac{1}{2}$ fields are described by the usual Lagrangian density
\begin{equation}
\mathcal{L}_F=\tfrac{i}{2}\;\bar{\psi}\gamma^ae^{\mu}_{\phantom1 a}\partial_{\mu}\psi-
\tfrac{i}{2}\;e^{\mu}_{\phantom1 a}\partial_{\mu}\bar{\psi}\gamma^a\psi
\end{equation}
and, if accelerated coordinates are taken into account, they have to recognize the isometric components of the diffeomorphism as a local Lorentz transformation, unlike vectors: a spinor can in no way be a Lorentz scalar. In this scheme, new Lorentz connections $A_\mu^{\phantom1 ab}$ have to be introduced for matter fields since, by assumption, the standard connections $\omega_{\mu}^{\phantom1 ab}$ do not follow Lorentz gauge transformations and they are not able to restore Lorentz invariance. Let us implement an infinitesimal local Lorentz transformation considering the spin$-\nicefrac{1}{2}$ representation $S=S(\Lambda(x))$ of eqs. \reff{LG}, \emph{i.e.},
\begin{align}
S=I-\tfrac{i}{4}\;\epsilon^{{a}{b}}\,\Sigma_{{a}{b}}\;,\qquad\qquad
\Sigma_{ab}=\tfrac{i}{2}\,[\gamma_{a},\gamma_{b}]\;,\qquad\qquad
[\Sigma_{cd},\Sigma_{ef}]=i\mathcal{F}^{ab}_{cdef}\,\Sigma_{ab}\;.
\end{align}\newcommand{\as}{\scriptscriptstyle{(A)}}Such a transformation acts on the spinor in the standard way $\psi(x)\to S\;\psi(x)$ and $\gamma$ matrices are assumed to transform like Lorentz vectors, \emph{i.e.},
$S\,\gamma^{{a}}\,S^{-1}=(\Lambda^{-1})^{{a}}_{{b}}\,\gamma^{{b}}$. In this approach, gauge invariance is restored by a new covariant derivative, \emph{i.e.}, $\partial_\mu\to D^{\as}_\mu$,
\begin{equation}
D^{\as}_\mu\psi=(\partial_\mu-\tfrac{i}{4}\,A_\mu)\,\psi=
(\partial_\mu-\tfrac{i}{4}\,A_\mu^{\phantom1 ab}\,\Sigma_{ab})\,\psi\;,
\end{equation}
which behaves correctly like $\gamma^{\mu}D^{\as}_{\mu}\psi\to S(\Lambda)\gamma^{\mu}D^{\as}_{\mu}\psi$, since the field $A_\mu=A^{\phantom1 ab}_\mu\Sigma_{ab}$ transforms under the following law:
$A_\mu\rightarrow S\,A_\mu\,S^{-1}-4i\,S\,\partial_\mu\,S^{-1}$. Connections $A_\mu^{\phantom1 ab}\,\,(\neq\omega_\mu^{\phantom1 ab})$ transform like
\begin{equation}
A_\mu^{\phantom1 ab}\rightarrow A_\mu^{\phantom1 ab}-\partial_\mu\epsilon^{ab}+4\mathcal{F}^{\,ab}_{cdef}\,\, \epsilon^{ef}\,A^{\phantom1 cd}_\mu\;,
\end{equation}
\emph{i.e.}, as natural Yang-Mills fields associated to the LG, living in the tangent bundle $^{(}$\footnote{The tangent bundle coordinates differ for the presence of an infinitesimal displacement from those of the Minkowskian space \cite{lecian}.}$^{)}$. A Lagrangian associated to the gauge connections can be constructed by the introduction of the gauge field strength
\begin{equation}
F_{\mu\nu}^{\phantom1\ph ab}=\partial_\mu A_\nu^{\phantom1 ab}-\partial_\nu A_\mu^{\phantom1 ab}
+\tfrac{1}{4}\mt{F}^{ab}_{cdef}A_\mu^{\phantom1 cd}A_\nu^{\phantom1 ef}\;,
\end{equation}
which is not invariant under gauge transformations, as usual in Yang-Mills gauge theories, but the gauge invariant Lagrangian for the model
\begin{equation}\label{action-for-A}
\mt{L}_A=-\tfrac{1}{4}\;F_{\mu\nu}^{\phantom1\ph ab}F^{\mu\nu}_{\phantom1\ph ab}\;,
\end{equation}
can be introduced. In flat space, the only real dynamical fields are the Lorentz gauge fields. Variation of the total action, derived from the total Lagrangian density $\mt{L}_{tot}=\mt{L}_F(D^{\as}_\mu\psi)+\mt{L}_A$, wrt the new gauge connections leads to the dynamical equations
\begin{equation}
\nabla_{\mu}F^{\mu\nu}_{\phantom1\ph ab}= J^{\nu}_{\phantom1 ab}\;,\qquad\qquad
J^{\nu}_{\phantom1 ab}=-\tfrac{1}{4}\,\epsilon^{cd}_{ab}e_{\phantom1 c}^{\nu}\,j^{\,(ax)}_d\;,
\end{equation}
where $j^{\,(ax)}_d=\bar{\psi}\,\gamma_5\gamma_d\,\psi$ is the spin axial current. Such field equations correspond to the Yang-Mills Equations for the non-Abelian gauge fields of the LG in flat space-time. The source of these gauge fields is the conserved spin density of the fermion matter whose dynamics will be analyzed in the next section.
\newcommand{\mj}{m_{\textrm{\tiny$j$}}}
\newcommand{\ml}{m_{\textrm{\tiny$\ell$}}}
\newcommand{\ms}{m_{\textrm{\tiny$s$}}}
\section{Generalized Pauli Equation}
The aim of this Section is to investigate the effects that the new gauge fields can generate in a flat space-time. In particular, we treat the interaction between the new connections $A_\mu^{\phantom1 ab}$ and the 4-spinor $\psi$ (of mass $m$) to generalize the well-known Pauli Equation, which corresponds to the equation of motion of an electron in the presence of an electro-magnetic field \cite{shapiro02}.
The implementation of the diffeomorphism-induced local Lorentz symmetry ($\partial_{\mu}\to D^{\as}_{\mu}$) in flat space, leads to the fermion Lagrangian density
\begin{equation}\label{lagrangian-tot}
\mathcal{L}_F=\tfrac{i}{2}\;\bar{\psi}\gamma^a e^{\mu}_{\phantom1 a}\partial_{\mu}\psi-
\tfrac{i}{2}\;e^{\mu}_{\phantom1 a}\partial_{\mu}\bar{\psi}\gamma^a\psi\,-\,m\,\bar{\psi}\psi\;+\;
\tfrac{1}{8}\,e^{\mu}_{\phantom1 c}\,\bar{\psi}\,\{\gamma^{c},
\Sigma_{ab}\}\,A^{\phantom1 ab}_{\mu}\,\psi\;,
\end{equation}
where $\{\gamma^c,\Sigma_{ab}\}=2\,\epsilon^{c}_{abd}\,\gamma_5\,\gamma^d$. To study the interaction term, let us now start from the explicit expression
\begin{equation}
\mathcal{L}_{int}=\tfrac{1}{4}\;\bar{\psi}\;
\epsilon^{c}_{abd}\,\gamma_5\,\gamma^d\,A^{ab}_{c}\;\psi\;.
\end{equation}
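The anticommutator identity $\{\gamma^{c},\Sigma_{ab}\}=2\,\epsilon^{c}_{abd}\,\gamma_5\,\gamma^{d}$ quoted above can be verified numerically. The sketch below is our own check, assuming the Dirac representation, signature $(+,-,-,-)$ and $\gamma_5=i\gamma^0\gamma^1\gamma^2\gamma^3$; since the text does not spell out its Levi-Civita convention, the overall sign is fitted rather than postulated.

```python
import numpy as np

# Dirac representation, signature (+,-,-,-), gamma5 = i g^0 g^1 g^2 g^3,
# Levi-Civita convention eps^{0123} = +1 (the paper's convention is not
# stated, so the overall sign of the identity is fitted below)
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
gam = [np.block([[I2, Z2], [Z2, -I2]])] + \
      [np.block([[Z2, si], [-si, Z2]]) for si in s]           # gamma^a
eta = np.diag([1.0, -1.0, -1.0, -1.0])
gam_low = [eta[a, a] * gam[a] for a in range(4)]              # gamma_a
g5 = 1j * gam[0] @ gam[1] @ gam[2] @ gam[3]

def Sigma(a, b):                     # Sigma_ab = (i/2)[gamma_a, gamma_b]
    return 0.5j * (gam_low[a] @ gam_low[b] - gam_low[b] @ gam_low[a])

def lc(*idx):                        # Levi-Civita symbol, lc(0,1,2,3) = +1
    if len(set(idx)) < 4:
        return 0
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

def fits(sgn):
    # test {gamma^c, Sigma_ab} = 2*sgn * eps^c_{abd} gamma5 gamma^d,
    # with eps^c_{abd} = eta_aa eta_bb eta_dd eps^{cabd}
    for c in range(4):
        for a in range(4):
            for b in range(4):
                lhs = gam[c] @ Sigma(a, b) + Sigma(a, b) @ gam[c]
                rhs = 2 * sgn * sum(
                    eta[a, a] * eta[b, b] * eta[d, d] * lc(c, a, b, d)
                    * (g5 @ gam[d]) for d in range(4))
                if not np.allclose(lhs, rhs):
                    return False
    return True

assert fits(+1) or fits(-1)
print("identity holds, overall sign:", +1 if fits(+1) else -1)
```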
Let us now consider the role of gauge fields by analyzing their components $A^{0i}_{0}\,,\;A^{ij}_{0}\,,\;A^{0i}_{k}\,,
\;A^{ij}_{k}$, imposing the \emph{time-gauge} condition $A^{ij}_{0}\;=0$ associated to this picture and neglecting the term $A^{0i}_{0}\,$, since it is contracted with the completely anti-symmetric symbol $\epsilon^{0}_{0i d}\equiv0$. The interaction Lagrangian density now rewrites as
\begin{equation}\label{lagrangian-split}
\mathcal{L}_{int}=\;\psi^{\dagger}\,C_0\,\gamma^{0}\gamma_5\gamma^0\,\psi\;+
\;\psi^{\dagger}\,C_i\,\gamma^{0}\gamma_5\gamma^i\,\psi\;,
\end{equation}
with the following identifications
\begin{equation}
C_0=\tfrac{1}{4}\;\epsilon^{k}_{ij0}A^{ij}_{k}\;,\qquad\quad
C_i=\tfrac{1}{4}\;\epsilon^{k}_{0ji}A^{0j}_{k}\;.
\end{equation}
Here the component $C_0$ is related to rotations, while $C_i$ to the boosts. Varying now the total action built up from the fermion Lagrangian density wrt $\psi^{\dagger}$, we get the \emph{Modified Dirac Equation}
\begin{equation}\label{modified-dirac}
(i\,\gamma^0\gamma^0\partial_0\;+\;
C_i\,\gamma^0\gamma_5\gamma^i\;+
\;i\,\gamma^0\gamma^i\partial_i\;+\;
C_0\,\gamma^0\gamma_5\gamma^0)\,\psi\;=
\;m\,\gamma^0\,\psi\;,
\end{equation}
which governs the 4-spinor $\psi$ interacting with the new Lorentz gauge fields described here by the fields $C_0$ and $C_i$.
Let us now look for stationary solutions of the Dirac Equation expanded as
\begin{equation}\nonumber
\psi(\textbf{r},t)\to\psi(\textbf{r})\;e^{-i\mathcal{E}t}\;,\qquad\quad
\psi=\left(\begin{array}{l}\!\chi\!\\\!\phi\!\end{array}\right)\;,\qquad\quad
\psi^\dagger=(\,\chi^\dagger\;,\;\phi^\dagger\,)\;,
\end{equation}
where $\mathcal{E}$ denotes the spinor total energy and the 4-component spinor $\psi(\textbf{r})$ is expressed in terms of the two 2-spinors $\chi(\textbf{r})$ and $\phi(\textbf{r})$. Using now the standard representation of the Dirac matrices, the modified Dirac Equation \reff{modified-dirac} splits into two coupled equations (here we write explicitly the \emph{3}-momentum $p^{\,i}$):
\begin{subequations}\label{stationary-eq}
\begin{align}
(\mathcal{E}-\sigma_i\,C^i)\,\chi\;
-\;(\sigma_i\,p^{\,i}+C_0)\,\phi\;&=\;m\,\chi\;,\label{stationary-eq1}\\
(\mathcal{E}-\sigma_i\,C^i)\,\phi\;
-\;(\sigma_i\,p^{\,i}+C_0)\,\chi\;&=-\;m\,\phi\;.\label{stationary-eq2}
\end{align}
\end{subequations}
Let us now investigate the non-relativistic limit by splitting the spinor energy in the form
$\mathcal{E}=E+m$. Substituting this expression in the system \reff{stationary-eq}, we note that both the $|E|$ and $|\,\sigma_i\,C^i|$ terms are small compared with the mass term $m$ in the low-energy limit. Then, eq. \reff{stationary-eq2} can be solved approximately as
\begin{equation}\label{small-components}
\phi\;=\;\tfrac{1}{2m}\;(\sigma_i\,p^{\,i}+C_0)\,\chi\;.
\end{equation}
It is immediate to see that $\phi$ is smaller than $\chi$ by a factor of order $\nicefrac{p}{m}$ (\emph{i.e.}, $\nicefrac{v}{c}$ where $v$ is the magnitude of the velocity): in this scheme, the 2-component spinors $\phi$ and $\chi$ form the so-called \emph{small} and \emph{large components}, respectively \cite{MandlShaw}.
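The smallness of $\phi$ can be exhibited numerically by diagonalizing the $4\times4$ stationary problem built from eqs. \reff{stationary-eq} for sample non-relativistic values of $p$, $C_0$ and $C_i$ (illustrative numbers of ours, $\hbar=c=1$) and comparing the exact lower components with eq. \reff{small-components}:

```python
import numpy as np

# exact diagonalization of the 4x4 stationary problem, compared with the
# small-component approximation phi ~ (sigma.p + C0) chi / 2m
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

m = 1.0
p = np.array([0.01, 0.005, 0.0])            # |p| << m
C0 = 0.002
C = np.array([0.0, 0.0, 0.001])             # weak Lorentz gauge fields

sp_ = p[0] * s1 + p[1] * s2 + p[2] * s3     # sigma . p
sC = C[0] * s1 + C[1] * s2 + C[2] * s3      # sigma . C
H = np.block([[m * I2 + sC, sp_ + C0 * I2],
              [sp_ + C0 * I2, -m * I2 + sC]])

evals, evecs = np.linalg.eigh(H)
k = int(np.argmax(evals))                   # a positive-energy state, E ~ +m
chi, phi = evecs[:2, k], evecs[2:, k]
phi_approx = (sp_ + C0 * I2) @ chi / (2 * m)

ratio = np.linalg.norm(phi) / np.linalg.norm(chi)
err = np.linalg.norm(phi - phi_approx)
print("|phi|/|chi| =", ratio)               # of order p/m
print("approximation error:", err)          # parametrically smaller
```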
Substituting the small components \reff{small-components} in eq. \reff{stationary-eq1}, after standard manipulation we get
\begin{equation}\label{generalized-pauli}
E\,\chi=
\tfrac{1}{2m}\left[p^{2}\,+\,C_0^{2}\,+\,2\,C_0\,(\sigma_i\,p^{\,i})\,+\,
\sigma_i\,C^i\right]\,\chi\;.
\end{equation}
This equation exhibits strong analogies with the electro-magnetic case. In particular, it is interesting to investigate the analogue of the so-called Pauli Equation used in the analysis of the energy levels as in the Zeeman effect \cite{bransden}:
\begin{equation}
E\,\chi=
\left[\tfrac{1}{2m}\;(p^{2}+e^{2}\mt{A}^{2}+2e\mt{A}_ip^i)+\mu_B
(\sigma_i B^i)\,-e\,\Phi^{\scriptscriptstyle (E)}\right]\chi\;,
\end{equation}
where $\mu_B=e/2m$ is the Bohr magneton and $\mt{A}_i$ denotes the vector-potential components, $B^i$ being the components of the external magnetic field and $\Phi^{\scriptscriptstyle (E)}$ the electric potential.
Let us now neglect the second order term $C_0^{2}$ in eq. \reff{generalized-pauli} and implement the symmetry: $\partial_\mu\to\partial_\mu+\mt{A}_\mu^{U(1)}+A_\mu^{\phantom1 ab}\Sigma_{ab}$, with a vanishing electromagnetic vector potential, \emph{i.e.}, $\mt{A}_i\equiv0$. Introducing a Coulomb central potential $V(r)$ through the substitution $E\to E-V(r)$, we can derive the total Hamiltonian of the system, \emph{i.e.}, $H_{tot}=H_0+H^{{\scriptscriptstyle\prime}}$, where
\begin{align}
H_0&=\frac{p^{2}}{2m}-\frac{Ze^{2}}{(4\pi\epsilon_0)r}\;,\qquad\qquad H^{{\scriptscriptstyle\prime}}=H_1+H_2\;,\\
H_1&=\;C_0\,(\sigma_i\,p^{\,i})\;/\;m\;,\quad\qquad\quad
H_2=\sigma_i\,C^i\;/\;2m\;,
\end{align}
which characterize the electron dynamics in a hydrogen-like atom in the presence of a gauge field of the LG (here, $Z$ is the atomic number and $\epsilon_0$ denotes the vacuum dielectric constant). The solutions of the unperturbed Hamiltonian are the well-known two-component Schroedinger wave functions, satisfying $H_0\,\psi_{n\,\ell\,m_{\textrm{\tiny$\ell$}}\,m_{\textrm{\tiny$s$}}}=E_n\;\psi_{n\,\ell\,m_{\textrm{\tiny$\ell$}}}(\textbf{r})\;\chi_{\nicefrac{1}{2},\,m_{\textrm{\tiny$s$}}}$, where the energy levels are $E_n=-m\,(Z\alpha)^{2}\;/\;2n^{2}$.
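A quick numerical sanity check of the unperturbed spectrum (plain Python, using standard values of the electron mass and of the fine-structure constant) recovers the familiar Rydberg scale:

```python
# E_n = -m (Z alpha)^2 / (2 n^2) in natural units (hbar = c = 1)
m_e = 0.51099895e6            # electron mass in eV
alpha = 1 / 137.035999        # fine-structure constant
E = lambda n, Z=1: -m_e * (Z * alpha)**2 / (2 * n**2)

print(round(E(1), 2), "eV")   # -13.61 eV, the hydrogen ground state
print(round(E(2), 2), "eV")   # -3.4 eV
```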
Since $H_1$ and $H_2$ have to be treated as perturbations, the gauge fields can be considered as independent in the low-energy (linearized) regime. The analysis of $H_1$ can be performed by substituting the operator $(\sigma_i\,p^i)$ with $(J_i p^i)$, where $J_i$ denote the components of the total angular momentum operator (in fact, $L_i p^i=0$). $H_1$ is analyzed in the basis $\mid\!n;\,\ell\,s\,j\,\mj\rangle$ and, according to basic tensor analysis, we decompose the term $(J_i p^i)$ into spherical-harmonics components. As a result, by means of the Wigner-Eckart Theorem \cite{Sakurai}, we find non-vanishing matrix elements corresponding to
\begin{equation}
j^{{\scriptscriptstyle\prime}}=j+1\;,\qquad\qquad m_{\textrm{\tiny$j$}}^{{\scriptscriptstyle\prime}}=m_{\textrm{\tiny$j$}}\;.
\end{equation}
However, since $(J_i p^i)$ is a pseudo-scalar operator (\emph{i.e.}, it connects states of opposite parity), no transition is eventually allowed.
The analysis of $H_2$ requires a different approach. We assume that the new Lorentz fields are directed along the $z$ direction. This way, only the component $C_3$ is considered and, for the sake of simplicity, we impose that only one of $A_1^{02}$ and $A_2^{01}$ contributes, in order to recast the correct degrees of freedom. The effect of $C_3$ corresponds to that of an external ``magnetic'' field generated by the fields $A_k^{0j}$, which can be considered the vector bosons (spin$-1$ and massless particles) of such an interaction. $H_2$ is now diagonal in the unperturbed basis $\mid\!n;\,\ell\,\ml\,s\,\ms\rangle$ and produces an energy-level splitting of the order
\begin{equation}
\Delta E=\tfrac{C_3}{\hbar m}\; m_{\textrm{\tiny$s$}}\;,
\end{equation}
where $m_{\textrm{\tiny$s$}}=\pm\nicefrac{1}{2}$. Nevertheless, because of electric-dipole selection rules \cite{bransden}, we have to impose $\Delta m_{\textrm{\tiny$s$}}=0$, and no correction to the well-known transitions is detectable.
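The splitting above is simply the eigenvalue spread of $H_2$ when only $C_3$ survives, as a short numerical illustration shows (arbitrary sample values, $\hbar=1$):

```python
import numpy as np

C3, m = 0.3, 1.0
H2 = np.diag([1.0, -1.0]) * C3 / (2 * m)   # H2 = sigma_3 C3 / 2m
print(np.linalg.eigvalsh(H2))              # +- C3/(2m), i.e. C3 m_s / m for m_s = +-1/2
```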
Collecting all the results together, we conclude that no new spectral line arises. Because of these properties of the Hamiltonian, it is not possible to evaluate an upper bound for the coupling constant of the interaction.
\section{Curved Space-Time and the Role of Torsion}
The considerations developed in flat space-time can be generalized to curved space-time keeping in mind that the torsion-less assumption of GR (perfectly realized by the Hilbert-Palatini Action) does not allow for an independent gauge field of the LG \cite{afldb}. In what follows, we provide, in the First-Order Approach, a link between the dynamics of the contortion field $\mt{K}_\mu^{\phantom1 ab}$ and the new Lorentz gauge fields.
The need to introduce Lorentz connections $A_\mu^{\phantom1 ab}$ in curved space-time is motivated by the restoration of the spinor Lagrangian local invariance under diffeomorphism-induced Lorentz transformations, while spin connections $\omega_\mu^{\phantom1 ab}$ allow one to recover the proper Dirac algebra. The generalization consists in considering a curved manifold, in which the tetrad basis is the dynamical field describing pure gravity. Local Lorentz transformations are still considered as a gauge freedom and, like in flat space, new connection fields have to be introduced.
Considering the Riemann-Cartan space $U^{4}$ filled with \emph{general} affine connections $\tilde{\Gamma}_{\mu\nu}^{\rho}$, the torsion field $\mt{T}_{\mu\nu}^{\rho}$ is defined as
$\mt{T}_{\mu\nu}^{\rho}=\tilde{\Gamma}_{[\mu\nu]}^{\rho}$. Within the framework of the First-Order Approach \cite{kim}, the II Cartan Structure Equation writes
\begin{equation}\label{Cartan eq general}
\partial_\mu e^{\phantom1 a}_{\nu} -\partial_\nu e^{\phantom1 a}_{\mu}-\tilde{\omega}_{\mu}^{\phantom1 ab}e_{\nu b}
+\tilde{\omega}_{\nu}^{\phantom1 ab}e_{\mu b}=
e^{\phantom1 a}_{\rho}\,\tilde{\Gamma}_{[\mu\nu]}^{\rho}=
e^{\phantom1 a}_{\rho}\mt{T}_{\mu\nu}^{\rho}=
\mt{T}_{\mu\nu}^{\phantom1\ph a}\;.
\end{equation}
The total connections $\tilde{\omega}_{\mu}^{\phantom1 ab}$, solution of this equation, are
\begin{equation}\label{connection}
\tilde{\omega}_{\mu}^{\phantom1 ab}=\omega_{\mu}^{\phantom1 ab}+\mt{K}_{\mu}^{\phantom1 ab}\;,
\end{equation}
where $\mt{K}_{\mu}^{\phantom1 ab}$ is the projected contortion field which is derived by the usual relation
\begin{equation}
\mt{K}^{\mu}_{\nu\rho}=-\tfrac{1}{2}(\mathcal{T}^{\mu}_{\nu\rho}-\mathcal{T}_{\rho\nu}^{\phantom{\rho\nu}\mu}+\mathcal{T}_{\nu\phantom{\mu}\rho}^{\phantom{\nu}\mu})\;,
\end{equation}
while the $\omega_\mu^{\phantom1 ab}$'s are the standard spin connections, \emph{i.e.}, $\omega_{\mu}^{\phantom1 ab}=e^{\phantom1 c}_{\mu}\,\gamma^{ba}_{\phantom1\ph c}$.
As far as the formulation of a diffeomorphism-induced Lorentz gauge theory is concerned, new connections $A_\mu^{\phantom1 ab}$ have to be introduced. To establish the proper geometrical interpretation of such gauge fields, let us now introduce generalized connections $\bar{\omega}_\mu^{\phantom1 ab}$ for our model and postulate the following interaction term
\begin{equation}\label{interacting term}
S_{conn}=2{\textstyle \int} \mathrm{det}(e)\,d^{4}x\;\;
e_{\phantom1 a}^{\mu}e_{\phantom1 b}^{\nu}\;\bar{\omega}_{\mu c}^{\phantom1 [a}\,A_{\nu}^{\phantom1 bc]}\;.
\end{equation}
In such an approach, the action describing the dynamics of the fields $A_\mu^{\phantom1 ab}$ is derived from the gauge Lagrangian \reff{action-for-A}, \emph{i.e.},
\begin{equation}
S_A=-\tfrac{1}{4}{\textstyle \int} \mathrm{det}(e)\,d^{4}x
\;F_{\mu\nu}^{\phantom1\ph ab}F^{\mu\nu}_{\phantom1\ph ab}\;,
\end{equation}
while the action that accounts for the generalized connections can be taken as the gravitational action $S_G$ \reff{action for o}, but now the projected Riemann Tensor \reff{Riemann} is constructed by the generalized connections $\bar{\omega}_\mu^{\phantom1 ab}$. Such a new fundamental Lorentz invariant can be denoted by $\bar{R}_{\mu\nu}^{\phantom1\ph ab}$ yielding
\begin{equation}\label{action-for-o1}
S_G(e,\bar{\omega})=-\tfrac{1}{4}{\textstyle \int}
\mathrm{det}(e)\,d^{4}x\;\;e^{\phantom1\mu}_{a}e^{\phantom1\nu}_{b} \bar{R}^{\phantom1\ph ab}_{\mu\nu}\;.
\end{equation}
Collecting all terms together, one can get the total action for the model. Two cases can now be distinguished according to the absence or presence of spinors. If fermion matter is absent, variation of the total action wrt connections $\bar{\omega}_\mu^{\phantom1 ab}$ gives the generalized equation
\begin{align}\label{equation for omega}
\partial_\mu e^{\phantom1 a}_{\nu} -\partial_\nu e^{\phantom1 a}_{\mu}-\bar{\omega}_{\mu}^{\phantom1 ab}e_{\nu b}+
\bar{\omega}_{\nu}^{\phantom1 ab}e_{\mu b}=
A_{\mu}^{\phantom1 ab}e_{\nu b}-A_{\nu}^{\phantom1 ab}e_{\mu b}\,,
\end{align}
which admits the solution
\begin{equation}\label{solution vacuum}
\bar{\omega}_\mu^{\phantom1 ab}=\omega_\mu^{\phantom1 ab}+A_{\mu}^{\phantom1 ab}\;.
\end{equation}
As a result, comparing the expression above with the solution (\ref{connection}), the new gauge fields $A_\mu^{\phantom1 ab}$ mimic the dynamics of the contortion field $\mt{K}_\mu^{\phantom1 ab}$, once field equations are considered.
If the fermion matter contribution is taken into account in the total action, variation wrt generalized connections leads to an additional term in the rhs of eq. \reff{equation for omega}, \emph{i.e.},
\begin{equation}\label{general equation for omega}
...-\tfrac{1}{4}\,\epsilon^{ab}_{cd}\,e^{\phantom1 c}_{\mu}e_{b\nu}\,j^{d}_{\,(ax)}+
\tfrac{1}{4}\,\epsilon^{ab}_{cd}\,e^{\phantom1 c}_{\nu}e_{b\mu}\,j^{d}_{\,(ax)}\;,
\end{equation}
where $j_{\,(ax)}^d=\bar{\psi}\,\gamma_5\gamma^d\,\psi$ is the spin axial current. The presence of spinors prevents one from identifying the connections $A_\mu^{\phantom1 ab}$ as the only torsion-like components, since all the terms in the rhs of the II Cartan Structure Equation (which in this model can be identified with eqs. (\ref{equation for omega})+(\ref{general equation for omega})) have to be interpreted as torsion. This way, both the gauge fields and the spinor axial current contribute to the torsion of space-time. It is worth noting that, if the fields $A_\mu^{\phantom1 ab}$ vanish, we obtain the usual result of PGT \cite{blago1,blago2}, \emph{i.e.}, the Einstein-Cartan contact Theory, in which torsion is directly connected with the density of spin and does not propagate \cite{hayashi}. In our scheme, collecting eqs. (\ref{equation for omega}) and (\ref{general equation for omega}) together, we obtain the unique solution:
\begin{equation}
\bar{\omega}_\mu^{\phantom1 ab}=\omega_\mu^{\phantom1 ab}+A_{\mu}^{\phantom1 ab}
+\tfrac{1}{4}\,\epsilon^{ab}_{cd}\,e^{\phantom1 c}_{\mu}\,j^{d}_{\,(ax)}\;.
\end{equation}
Furthermore, the spin density of the fermion matter is present in the source term of the Yang-Mills Equations for the Lorentz connections, and the Einstein Equations contain in the rhs not only the energy-momentum tensor of the matter, but also a four-fermion interaction term. The dynamical equations of spinors are formally the same as those ones of the Einstein-Cartan Model with the addition of the interaction with the LG connections $A_\mu^{\phantom1 ab}$.
\section{Concluding Remarks}
The considerations developed in this paper have been prompted by observing that GR admits two physically different symmetries, namely the diffeomorphism invariance, defined in the real space-time, and the local Lorentz invariance, associated to the tangent fiber. These two local symmetries reflect the different behavior of tensors and spinors: while tensors do not experience the difference between the two transformations, spinors do. In our proposal, the diffeomorphism invariance concerns the metric structure of the space-time, while the real gauge symmetry corresponds to local rotations in the tangent fiber and admits new geometrical gauge fields.
In our analysis, the key point has been fixing the equivalence between isometric diffeomorphisms and local Lorentz transformations. In fact, under the action of the former, spin connections behave like a tensor and are not able to ensure invariance under the corresponding induced local rotations. This picture has led us to infer the existence of (metric-independent) compensating fields of the LG which interact with spinors. Ricci spin connections could not be identified with the suitable gauge fields, for they are not primitive objects (they depend on bein vectors).
In flat space-time, we have developed the model by choosing vanishing spin connections. In treating spinor fields, a covariant derivative that accounts for the new gauge fields (behaving like natural Yang-Mills ones) has been formulated. The analysis, in flat space, is addressed by considering the non-relativistic limit of the interaction between spin$-\nicefrac{1}{2}$ fields and the Lorentz gauge ones. This way, a generalization of the so-called Pauli Equation has been formulated and applied to a hydrogen-like atom in the presence of a Coulomb central potential. Energy-level modifications are present, but selection rules do not allow for new detectable spectral lines.
In curved space-time, a mathematical relation between the Lorentz gauge fields and the contortion field has been found from the II Cartan Structure Equation if a (unique) interaction term between the gauge fields and generalized internal connections is introduced. \vspace{0.3cm}
\textbf{*} We would like to thank Simone Mercuri for his advice about these topics.
% --- arXiv:0903.4645 ---
\section{Preliminaries}
{\defi \label{def1}\textbf{Pre-Crystalline Graded Ring}\\
Let $A$ be an associative ring with unit $1_A$. Let $G$ be an arbitrary group. Consider an injection $u: G \rightarrow A$ with $u_e = 1_A$, where $e$ is the neutral element of $G$ and $u_g \neq 0$, $\forall g \in G$. Let $R \subset A$ be an associative ring with $1_{R}=1_A$. We consider the following properties:
\begin{description}
\item[(C1)]\label{def2} $A = \bigoplus_{g \in G} R u_g$.
\item[(C2)]\label{def3} $\forall g \in G$, $R u_g = u_g R$ and this is a free left $R$-module of rank $1$.
\item[(C3)]\label{def4} The direct sum $A = \bigoplus_{g \in G} R u_g$ turns $A$ into a $G$-graded ring with $R = A_e$.
\end{description}
We call a ring $A$ fulfilling these properties a \textbf{Pre-Crystalline Graded Ring}.}\\
{\prop \label{def5} With conventions and notation as in Definition \ref{def1}:
\begin{enumerate}
\item For every $g \in G$, there is a set map $\sigma_g : R \rightarrow R$ defined by: $u_g r = \sigma_g(r)u_g$ for $r \in R$. The map $\sigma_g$ is in fact a surjective ring morphism. Moreover, $\sigma_e = \textup{Id}_{R}$.
\item There is a set map $\alpha : G \times G \rightarrow R$ defined by $u_g u_h = \alpha(g,h)u_{gh}$ for $g,h \in G$. For any triple $g,h,t \in G$ the following equalities hold:
\begin{eqnarray}
\alpha(g,h)\alpha(gh,t)&=&\sigma_g(\alpha(h,t))\alpha(g,ht) \label{def6},\\
\sigma_g(\sigma_h(r))\alpha(g,h)&=& \alpha(g,h)\sigma_{gh}(r) \label{def7}.
\end{eqnarray}
\item $\forall g \in G$ we have the equalities $\alpha(g,e) = \alpha(e,g) = 1$ and $\alpha(g,g^{-1}) = \sigma_g(\alpha(g^{-1},g)).$
\end{enumerate}
}
\begin{flushleft}\textbf{Proof}\end{flushleft} See \cite{NVO6}. $\hfill \Box$\\
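As a concrete illustration (our own example, not taken from \cite{NVO6}): the quaternions $\mathbb{H}=\mathbb{C}\oplus\mathbb{C}u_g$ form a $\mathbb{Z}/2$-crystalline graded ring over $R=\mathbb{C}$, with $\sigma_g$ complex conjugation and $\alpha(g,g)=-1$. The sketch below spot-checks the two compatibility identities of the proposition in this example:

```python
# Hypothetical example (not from the text): the quaternions H = C + C*u,
# a Z/2-graded ring over R = C with u*c = conj(c)*u and u^2 = -1, so
# sigma_g = complex conjugation and alpha(g,g) = -1.  We spot-check the
# two identities of the proposition on random ring elements.
import random

G = ['e', 'g']                       # the group Z/2
def mul_G(x, y):                     # group law
    return 'e' if x == y else 'g'

def sigma(x, r):                     # sigma_e = id, sigma_g = conjugation
    return r if x == 'e' else r.conjugate()

def alpha(x, y):                     # u_e = 1, u_g = u with u^2 = -1
    return -1 + 0j if x == y == 'g' else 1 + 0j

random.seed(0)
for _ in range(100):
    g, h, t = (random.choice(G) for _ in range(3))
    r = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    # cocycle identity: alpha(g,h)alpha(gh,t) = sigma_g(alpha(h,t))alpha(g,ht)
    assert abs(alpha(g, h)*alpha(mul_G(g, h), t)
               - sigma(g, alpha(h, t))*alpha(g, mul_G(h, t))) < 1e-12
    # twisted commutation: sigma_g(sigma_h(r))alpha(g,h) = alpha(g,h)sigma_{gh}(r)
    assert abs(sigma(g, sigma(h, r))*alpha(g, h)
               - alpha(g, h)*sigma(mul_G(g, h), r)) < 1e-12
print("both identities hold")
```

Since here all values of $\alpha$ are central and $\sigma_{gh}=\sigma_g\sigma_h$, this example is in fact centrally crystalline.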
{\prop Notation as above, the following are equivalent:
\begin{enumerate}
\item $R$ is $S(G)$-torsionfree.
\item $A$ is $S(G)$-torsionfree.
\item $\alpha(g,g^{-1})r=0$ for some $g \in G$ implies $r = 0$.
\item $\alpha(g,h)r=0$ for some $g,h \in G$ implies $r = 0$.
\item $R u_g = u_g R$ is also free as a right $R$-module with basis $u_g$ for every $g \in G$.
\item for every $g \in G$, $\sigma_g$ is bijective hence a ring automorphism of $R$.
\end{enumerate}
}
\begin{flushleft}\textbf{Proof}\end{flushleft} See \cite{NVO6}. $\hfill \Box$\\
{\defi Any $G$-graded ring $A$ with properties \textbf{(C1),(C2),(C3)}, and which is $S(G)$-torsionfree, is called a \textbf{crystalline graded ring}. In case $\alpha(g,h) \in Z(R)$, or equivalently $\sigma_{gh}=\sigma_g \sigma_h$, for all $g,h \in G$, then we say that $A$ is \textbf{centrally crystalline}.}\\
{\lem \label{def9}Let $R \mathop \diamondsuit \limits_{\sigma ,\alpha} G$ be a pre-crystalline graded ring with $R$ a domain, and define $K$ to be the quotient field of $R$. Let $x \in R$ and $g,h \in G$. Then
\begin{enumerate}
\item $u_g^{-1} = u_{g^{-1}}\alpha^{-1}(g,g^{-1})=\alpha^{-1}(g^{-1},g)u_{g^{-1}}$.
\item $\sigma_g^{-1}(x)u_g^{-1} = u_g^{-1}x$.
\item $\sigma_{hg}^{-1}[\alpha(h,g)]=\sigma_g^{-1}[\sigma_h^{-1}(\alpha(h,g))]$.
\item $\sigma_g^{-1}[\alpha(g,g^{-1}h)]=\alpha^{-1}(g^{-1}, h)\sigma_g^{-1}[\alpha(g, g^{-1})]$.
\end{enumerate}
}
\begin{flushleft}\textbf{Proof}\end{flushleft}(inverses are defined in $K$ or $K \mathop \diamondsuit \limits_{\sigma ,\alpha} G$)
\begin{enumerate}
\item Just calculate the product and use that in an associative ring the left and right inverse coincide.
\item Let $g,h \in G, x \in R$:
\begin{align*}
&\sigma_g[\sigma_h(x)]\alpha(g,h)=\alpha(g,h)\sigma_{gh}(x)\\
\Rightarrow &\sigma_g[\sigma_{g^{-1}}(x)]\alpha(g,g^{-1})=\alpha(g,g^{-1})x\\
\Rightarrow &\sigma_{g^{-1}}(x)\sigma_g^{-1}(\alpha(g,g^{-1}))=\sigma_g^{-1}(\alpha(g,g^{-1}))\sigma_{g}^{-1}(x)\\
\Rightarrow &\sigma_g^{-1}(x) = \sigma_g^{-1}[\alpha^{-1}(g,g^{-1})]\sigma_{g^{-1}}(x)\sigma_g^{-1}[\alpha(g,g^{-1})].
\end{align*}
So
\begin{align*}
\sigma_g^{-1}(x)u_g^{-1} &= \sigma_g^{-1}[\alpha^{-1}(g,g^{-1})]\sigma_{g^{-1}}(x)\sigma_g^{-1}[\alpha(g,g^{-1})]\alpha^{-1}(g^{-1},g)u_{g^{-1}}\\
\ &= \sigma_g^{-1}[\alpha^{-1}(g,g^{-1})]\sigma_{g^{-1}}(x)\alpha(g^{-1},g)\alpha^{-1}(g^{-1},g)u_{g^{-1}}\\
\ &= \sigma_g^{-1}[\alpha^{-1}(g,g^{-1})]\sigma_{g^{-1}}(x)u_{g^{-1}}\\
\ &= \sigma_g^{-1}[\alpha^{-1}(g,g^{-1})]u_{g^{-1}}x\\
\ &= \alpha^{-1}(g,g^{-1})u_{g^{-1}}x\\
\ &= u_g^{-1}x.
\end{align*}
\item Let $g,h \in G, x \in R$:
\begin{align*}
&\sigma_h[\sigma_g(x)]\alpha(h,g) = \alpha(h,g)\sigma_{hg}(x)\\
\Rightarrow &\sigma_h[\sigma_g(\sigma_{hg}^{-1}(\alpha(h,g)))]\alpha(h,g) = \alpha(h,g)\sigma_{hg}(\sigma_{hg}^{-1}(\alpha(h,g)))\\
\Rightarrow &\sigma_{hg}^{-1}[\alpha(h,g)] = \sigma_g^{-1}[\sigma_h^{-1}(\alpha(h,g))].
\end{align*}
\item Let $g,h \in G$. The cocycle identity (\ref{def6}), applied to the triple $(g,g^{-1},h)$, gives
\[\alpha(g,g^{-1})\alpha(e,h)=\sigma_g[\alpha(g^{-1},h)]\alpha(g,g^{-1}h).\]
Since $\alpha(e,h)=1$, solving for $\alpha(g,g^{-1}h)$ and applying $\sigma_g^{-1}$ yields the claim. $\hfill \Box$
\end{enumerate}
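The lemma can likewise be spot-checked numerically in the quaternion example $\mathbb{H}=\mathbb{C}\oplus\mathbb{C}j$ (again our own illustration, not from the text): with $u_g=j$, $\sigma_g$ complex conjugation and $\alpha(g,g)=-1$, items (1) and (2) become $j^{-1}=-j$ and $\overline{x}\,j^{-1}=j^{-1}x$:

```python
# Numerical check of the lemma in the quaternions H = C + C*j (hypothetical
# example): elements are pairs (a, b) ~ a + b*j with multiplication
# (a + b*j)(c + d*j) = (a*c - b*conj(d)) + (a*d + b*conj(c))*j.
def qmul(x, y):
    (a, b), (c, d) = x, y
    return (a*c - b*d.conjugate(), a*d + b*c.conjugate())

one = (1+0j, 0j)
u = (0j, 1+0j)                        # u = j, so u^2 = -1 = alpha(g, g)
u_inv = (0j, -1+0j)                   # item (1): u^{-1} = u*alpha(g,g)^{-1} = -j
assert qmul(u, u_inv) == one and qmul(u_inv, u) == one

x = (0.3+0.7j, 0j)                    # an element of R = C
# item (2): sigma_g^{-1}(x) u^{-1} = u^{-1} x, with sigma_g = conjugation
lhs = qmul(((0.3+0.7j).conjugate(), 0j), u_inv)
assert lhs == qmul(u_inv, x)
print("lemma items (1) and (2) hold for u = j")
```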
\section{Global Dimension}
{\st \label{dim1} Let $R,S$ be rings with $R \subseteq S$ such that $R$ is an $R$-bimodule direct summand of $S$, then $\textup{r gld}R \leq \textup{r gld}S + \textup{pd} S_R$.}\\
\textbf{Proof} See \cite{MR},p. 237. $\hfill \Box$\\
{\st \label{dim2} Let $R$ be a ring, $G$ a finite group with $|G|$ a unit in $R$ and $A = R \mathop \diamondsuit \limits_{\sigma ,\alpha} G$ a pre-crystalline graded ring with $u_g$ units. Let $M$ be any right $A$-module. Then:
\begin{enumerate}
\item If $N \triangleleft M_A$ and $N$ is a direct summand of $M$ as an $R$-module, then $N$ is a direct summand over $A$.
\item $\textup{pd}M_{R} = \textup{pd}M_A$.
\item $\textup{r gld}R = \textup{r gld}A$.
\end{enumerate}
}
\begin{flushleft}\textbf{Proof}\end{flushleft}
\begin{enumerate}
\item \label{dim3} Let $\pi : M \rightarrow N$ be the $R$-module splitting morphism. Define the map $\lambda$ by
\[\lambda: M \rightarrow N :m \mapsto |G|^{-1}\sum_{g \in G}\pi(m u_g) u_g^{-1}.\]
\textbf{$\lambda$ is well-defined} : trivial.\\
\textbf{$\lambda$ is the identity on $N$} : let $k \in N$; since $N$ is an $A$-submodule, $k u_g \in N$ and so $\pi(k u_g) = k u_g$:
\begin{align*}
\lambda(k) &=|G|^{-1}\sum_{g \in G} \pi(k u_g)u_g^{-1}\\
&= |G|^{-1}\sum_{g\in G}k u_g u_g^{-1} = k.
\end{align*}
\textbf{$\lambda$ is $A$-linear} : Let $m \in M, a\in A$:
\begin{align*}
\lambda(ma) =& |G|^{-1} \sum_{g\in G} \pi(mau_g)u_g^{-1}\\
=& |G|^{-1} \sum_{g\in G} \pi\left[m\left(\sum_{h \in G} t_h u_h\right)u_g\right]u_g^{-1}\\
=& |G|^{-1} \sum_{g,h\in G} \pi\left(m t_h u_h u_g\right)u_g^{-1}\\
{}^{(\textup{Lemma }\ref{def9}(2))}=& |G|^{-1} \sum_{g,h\in G} \pi\left(m u_h u_g\right)u_g^{-1}\sigma_h^{-1}(t_h)\\
\end{align*}
\begin{align*}
=& |G|^{-1} \sum_{g,h\in G} \pi\left(m \alpha(h,g)u_{hg}\right)u_g^{-1}\sigma_h^{-1}(t_h)\\
=& |G|^{-1} \sum_{g,h\in G} \pi\left(m u_{hg}\right)\sigma_{hg}^{-1}[\alpha(h,g)]u_g^{-1}\sigma_h^{-1}(t_h)\\
{}^{(\textup{Lemma }\ref{def9}(3))} =&|G|^{-1} \sum_{g,h\in G} \pi\left(m u_{hg}\right)\sigma_g^{-1}[\sigma_h^{-1}(\alpha(h,g))]u_g^{-1}\sigma_h^{-1}(t_h)\\
{}^{(\textup{Lemma }\ref{def9}(2))} =&|G|^{-1} \sum_{g,h\in G} \pi\left(m u_{hg}\right)u_g^{-1}\sigma_h^{-1}[\alpha(h,g)]\sigma_h^{-1}(t_h)\\
{}^{(x=hg)}=& |G|^{-1} \sum_{h\in G} \sum_{x \in G}\pi\left(m u_x\right)u_{h^{-1}x}^{-1}\sigma_h^{-1}[\alpha(h,h^{-1}x)]\sigma_h^{-1}(t_h)\\
{}^{(\textup{Lemma \ref{def9}(4)})}=& |G|^{-1} \sum_{h\in G} \sum_{x \in G}\pi\left(m u_x\right)[\alpha^{-1}(h^{-1},x)u_{h^{-1}}u_x]^{-1}\cdot\\
&\qquad \qquad \qquad \alpha^{-1}(h^{-1},x)\sigma_h^{-1}[\alpha(h,h^{-1})]\sigma_h^{-1}(t_h)\\
=& |G|^{-1} \sum_{h\in G} \sum_{x \in G}\pi\left(m u_x\right)u_x^{-1} u_{h^{-1}}^{-1}\sigma_h^{-1}[\alpha(h,h^{-1})]\sigma_h^{-1}(t_h)\\
=& |G|^{-1}\sum_{h \in G}\sum_{x \in G} \pi(mu_x)u_x^{-1}u_h \sigma_h^{-1}(t_h)\\
=& |G|^{-1}\sum_{x \in G}\pi(m u_x)u_x^{-1}\sum_{h \in G}t_h u_h\\
=& \lambda(m)\cdot a.
\end{align*}
\item\label{dim4}
Suppose $M_{R}$ is projective and
\[0 \rightarrow N \rightarrow F \rightarrow M \rightarrow 0\]
is a short exact sequence of $A$-modules with $F$ free, then the sequence splits over $R$ and hence over $A$ by (\ref{dim3}). So $M_A$ is also projective. Furthermore, $A_{R}$ is free. It now follows that an $A$-projective resolution of any module $M_A$ is also an $R$-projective resolution, which terminates exactly when a kernel is $R$-projective, equivalently $A$-projective; so $\textup{pd}M_{R} = \textup{pd}M_A$.
\item
Any $A$-module is naturally an $R$-module. So, since $\textup{pd}M_{R} = \textup{pd}M_A$, we find
\begin{eqnarray*}
\textup{r gld}A &=& \textup{sup}\left\{\textup{pd}M_A | M_A \textup{ right } A-\textup{module}\right\}\\
\ &\leq& \textup{sup}\left\{\textup{pd}M_{R} | M_{R} \textup{ right } R-\textup{module}\right\}\\
\ &=& \textup{r gld}R.
\end{eqnarray*}
So by Theorem \ref{dim1}:
\begin{eqnarray*}
\textup{r gld}R &\leq& \textup{r gld}A + \textup{pd}A_{R}\\
\ &\mathop = \limits^{(\ref{dim4})}& \textup{r gld}A + \textup{pd}A_A\\
\ &=& \textup{r gld}A.
\end{eqnarray*}
And in conclusion $\textup{r gld}R = \textup{r gld}A$.$\hfill \Box$
\end{enumerate}
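The averaging argument in part 1 can be made concrete in the simplest case (a toy example of our own, with trivial $\sigma$ and $\alpha$): take $A=\mathbb{Q}[\mathbb{Z}/2]$ acting on $M=A$, with $N=A(1+u)$, and average an $R$-linear projection $\pi$ that is not itself $A$-linear:

```python
# Toy check of the averaging map (hypothetical example, not from the text):
# A = Q[Z/2] = {(a, b) ~ a + b*u, u^2 = 1}, acting on M = A itself, and
# N = {(a, a)} = A*(1 + u).  The map pi(a, b) = (a, a) is R-linear (R = Q)
# and restricts to the identity on N, but it is not A-linear; its average
#   lambda(m) = |G|^{-1} * sum_g pi(m u_g) u_g^{-1}
# is A-linear.
from fractions import Fraction as F

def mul(x, y):                        # multiplication in Q[Z/2]
    (a, b), (c, d) = x, y
    return (a*c + b*d, a*d + b*c)

u = (F(0), F(1))                      # the group element u; u^{-1} = u

def pi(m):                            # R-linear projection onto N
    return (m[0], m[0])

def lam(m):                           # averaged map: (pi(m) + pi(m u) u)/2
    return tuple(sum(t)/2 for t in zip(pi(m), mul(pi(mul(m, u)), u)))

m, a = (F(3), F(5)), (F(2), F(-7))
assert lam(mul(m, a)) == mul(lam(m), a)      # lambda is A-linear
assert lam((F(4), F(4))) == (F(4), F(4))     # identity on N
assert pi(mul(m, u)) != mul(pi(m), u)        # while pi itself is not A-linear
print("averaged projection is A-linear")
```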
The following result is well-known:
{\lem \label{dim5} Let $S$ be an Ore set for $R$ and suppose there is no $S$-torsion. Let $\{s_1, \ldots, s_n\} \subset S$, then $\exists s \in S \cap \bigcap_{i =1}^{n} R s_i$.}\\
\textbf{Proof} By induction. Take $s_1$; there exists $t_1 \in S^{-1}R$ such that $t_1 s_1 = 1$. Then of course we find $q_1 \in S$ such that $q_1t_1 \in R$. This means that $q_1 = (q_1 t_1) s_1 \in R s_1$, and $q_1 \in S$. Now suppose $q \in S \cap \bigcap_{i=1}^{k} R s_i$ has been found. Applying the left Ore condition to $q \in S \subset R$ and $s_{k+1} \in S$, we find $v \in R$ and $q' \in S$ such that $v s_{k+1} = q' q$. Then $q'q \in S \cap \bigcap_{i=1}^{k+1} R s_i$, which completes the induction. $\hfill \Box$\\
{\lem Let $A = R \mathop \diamondsuit \limits_{\sigma ,\alpha} G$ be crystalline graded, then the set of regular elements in $R$, $\textup{reg}R$, is a subset of $\textup{reg}A$, the regular elements of $A$. Furthermore, if $R$ is semiprime Goldie, $\textup{reg}R$ is a left (and right) Ore set in $A$. We have
\[\left(\textup{reg}R\right)^{-1}A = \bigoplus_{g \in G}Q_{\textup{cl}}(R)u_g.\]}
\begin{flushleft}\textbf{Proof}\end{flushleft} For the first part, take $a \in \textup{reg}R$, $x = \sum_{g \in G} x_g u_g$ and suppose $ax=0$; then $\sum_{g \in G} ax_gu_g =0$. This implies $a x_g=0$ $\forall g \in G$, and this means $x_g = 0$, $\forall g \in G$. Suppose $xa = 0$, then $\sum_{g \in G}x_g u_g a = 0$. This implies $x_g \sigma_g(a)u_g = 0$, or $x_g \sigma_g(a) = 0, \forall g \in G$. Since $\textup{reg}R$ is invariant under $\sigma_g, \forall g \in G$, we again find $x_g = 0, \forall g \in G$. So we have proven $\textup{reg}R \subset \textup{reg}A$.\\
By Goldie's Theorem, we know that $\textup{reg}R$ is an Ore set in $R$. We first need to prove that $S=\textup{reg}R$ satisfies the left Ore condition for $A$: $\forall r \in A$, $s \in S$ we can find $r' \in A, s' \in S$ such that $s'r = r's$. Let $r = \sum_{g \in G} a_g u_g$. Since $S$ is left Ore for $R$, we can find $\forall g \in G$ elements $a'_g \in R$ and $s_g \in S$ such that $a'_g \sigma_g(s)=s_g a_g$. Now, we find $s' \in S \cap \bigcap_{g \in G} R s_g$ from Lemma \ref{dim5}; in other words, we find $s' \in S$ and $v_g \in R$ such that $s' = v_g s_g$ $\forall g \in G$. Now set $b_g = v_g a'_g$ $\forall g \in G$, and set $r' = \sum_{g \in G}b_g u_g$. Then $r' s = s' r$. The right Ore condition is similar. The third assertion is now clear. $\hfill \Box$
{\st \label{dim6} Let $A$ be crystalline graded over $R$, $R$ a semiprime Goldie ring. Assume $\textup{char}R$ does not divide $|G|$, then $A$ is semiprime Goldie.}\\
\textbf{Proof} Since $A$ is crystalline graded, the elements $\alpha(g,h), g,h \in G$ are regular elements. Denote $S = \textup{reg}R$. Since $R$ is semiprime Goldie, $S^{-1}R$ is semisimple Artinian. This implies that from Theorem \ref{dim2}, $S^{-1}A$ is semisimple Artinian, in particular, it is Noetherian. Let $I$ be an ideal in $A$, and consider $(S^{-1}A) I$. Claim: this is an ideal. Let $s \in S$ and consider the following chain:
\[(S^{-1}A) I\subset (S^{-1}A) I s^{-1} \subset (S^{-1}A) I s^{-2} \subset \ldots.\]
This implies that $(S^{-1}A) Is^{-n} = (S^{-1}A) Is^{-m}$ for some $m >n$, and so $(S^{-1}A) I = (S^{-1}A) I s^{n-m}$; hence we find $(S^{-1}A) I(S^{-1}A)\subset (S^{-1}A) I$, \emph{i.e.} $(S^{-1}A) I$ is an ideal in $S^{-1}A$. If $J$ is the nilradical of $A$, then $(S^{-1}A\cdot J)^n = S^{-1}A \cdot J^n$ follows. For some $n$ we have that $(S^{-1}A\cdot J)^n = 0$ in the semisimple Artinian ring $S^{-1}A$, thus $S^{-1}A \cdot J = 0$ and $J=0$. $\hfill \Box$\\
{\gev If $A$ is crystalline graded over a Dedekind domain $D$ and $\textup{char}D$ does not divide $|G|$, then $A$ is semiprime.}\\
{\prop In the situation of Theorem \ref{dim6}, prime ideals of $S^{-1}A$ intersect in prime ideals of $A$, where $S = \textup{reg}R$.}\\
\textbf{Proof} Let $P$ be a prime of $S^{-1}A$. Then $P \cap A$ is an ideal such that, for $IJ \subset P \cap A$ with $I$ and $J$ ideals of $A$, we have $S^{-1}A\cdot IJ \subset P$, hence $(S^{-1}A\cdot I)(S^{-1}A\cdot J) \subset P$, or $S^{-1}A\cdot I \subset P$ if $S^{-1}A \cdot J \not\subset P$. Thus $I \subset P\cap A$ if $J \not\subset P\cap A$, and conversely. $\hfill \Box$\\
{\opm The situation of Theorem \ref{dim6} arises when $A$ is centrally crystalline graded over the semiprime Goldie ring $R$, with $\textup{char}R$ not dividing $|G|$, such that $A$ (or $R$) is a P.I. ring.}
\section{Krull Dimension}
{\prop Let $A$ be crystalline graded over $D$, $D$ a Dedekind domain. Then the (Krull) dimension of $A$ is at most $2$.}\\
\textbf{Proof} Consider the set $F = \{I \triangleleft A \mid I \cap D =0\}$ ordered by inclusion. If it is nonempty, then there is a maximal element for this family, say $P$. Suppose $IJ \subset P$, with $I \not\subset P$ and $J \not\subset P$. Then, by the maximality of $P$, there are $0\neq d_1 \in (P + I) \cap D$ and $0 \neq d_2 \in (P+J) \cap D$. This implies $0 \neq d_1d_2 \in (P+I)(P+J) \cap D \subset (P+IJ) \cap D \subset P \cap D$, a contradiction. So if $F \neq \emptyset$, there always exists a prime ideal $P$ in $A$ with $P \cap D = 0$.\\
Denote $S = D \backslash\{0\}$. Suppose that $0 \neq Q \subset P$, $Q$ a prime ideal in $A$. Then, since $S^{-1}A$ is Artinian semisimple (Theorem \ref{dim2}), we find that $S^{-1}Q = S^{-1}P$, since they are both primes ($Q \cap D = 0 = P \cap D$, as $Q \subset P$). Now let $y \in P \backslash Q$. Then $y \in S^{-1}P = S^{-1}Q$. This means $\exists d \in S$ such that $dy \in Q$. So if we set $d'= \prod_{g \in G} \sigma_g(d)$ then $d'y\in Q$. Since $d' \in Z(A)$ we find $d'Ay \subset Q$ and since $y \notin Q$ we see that $d' \in Q$, \emph{i.e.} $Q \cap D \neq 0$. Contradiction. We have established that two prime ideals that don't intersect $D$ cannot contain each other.\\
Suppose there exists a prime ideal $M$ of $A$ with $M \cap D \neq 0$. This means $A/M$ is Artinian, and prime, in other words it is a simple ring, or $M$ is a maximal ideal. We find that a maximal chain of prime ideals always is of the form
\[0 \subset P \subset M \subset A,\]
where $P \cap D = 0$ and $M \cap D \neq 0$. $\hfill \Box$\\
% --- arXiv:0903.4417 ---
\section{Introduction}
Radio galaxies are identified with strong radio sources in the range of $10^{41}$ to $10^{46}$\,erg/s. Most of them are giant elliptical galaxies that contain little dense and cold interstellar medium (ISM) (\cite{Kellerman88}). They have been detected in the far infrared (FIR), which is assumed to be thermal and to originate from dust heated by young massive stars or an AGN (\cite{Wiklind95b}). The powerful radio-AGNs are usually very poor in molecular gas; nevertheless, the central black hole (BH) needs gas to feed the nuclear activity, and this is why we need to study the origin, the distribution and the kinematics of the molecular gas in such objects. \cite{Antonucci93} suggests that AGNs could be powered by accretion of ISM onto supermassive black holes (SMBHs), and according to \cite{deRuiter02} the presence of ISM in the circumnuclear region of AGNs is indeed inevitable and the large-scale dust/gas systems are related to nuclear activity.\\
\section{Sample and observations}
We have a total of 52 nearby radio galaxies in our sample, chosen on the basis of their radio continuum power. The selection criteria of this sample make it different from other samples \cite{Evans05,Mazzarella93,Bertram07}, which are chosen by their IR fluxes or because they show signs of interactions. The galaxies were observed using the IRAM-30m telescope in CO(1-0) and CO(2-1) at 115 and 230 GHz, respectively.\\
Out of the 52 galaxies, 43 have $^{12}$CO(1-0) and $^{12}$CO(2-1) data and 9 of them only have $^{12}$CO(1-0) data. Of the total 52 galaxies, 55\% were detected either in the $^{12}$CO(1-0) line, the $^{12}$CO(2-1) line or in both lines together; note that this 55\% includes tentatively detected as well as clearly detected galaxies. Counting only the clearly detected galaxies, we have a 38\% detection rate in either one of the transition lines. 7 of our galaxies have been detected in both transition lines and, except for one of them, all have a higher integrated emission in the $^{12}$CO(2-1) line than in the $^{12}$CO(1-0) line. 90\% of the 52 galaxies were detected in the continuum at 3\,mm, and 55\% of the 43 were detected in the continuum at 1\,mm.\\
\section{Molecular gas mass}
\begin{wrapfigure}[19]{r}{80mm}
\centering
\includegraphics[scale=0.30]{AsurvDetected_17dec08.eps}
\caption{$M_{H_2}$ histogram of all the galaxies calculated using ASURV. On the front, the darker color represents the part of the histogram that belongs to the detected galaxies only, leaving the rest of the histogram to the upper limits exclusively.}
\label{mh2histo}
\end{wrapfigure}
The total H$_2$ mass was calculated using the standard CO-to-H$_2$ conversion factor of $2.3\times10^{20}\,\mathrm{cm^{-2}\,(K\,km\,s^{-1})^{-1}}$, giving an average gas mass of $1\times10^8~ M_{\odot}$. This average was calculated using survival analysis statistics (ASURV), which takes the upper limits into account. We compared the molecular gas mass average of our sample with that of other samples. We noticed that \cite{Wiklind95a}, with a sample of elliptical galaxies, find an average molecular gas mass of the same order of magnitude as our sample. We also compared with \cite{Evans05,Mazzarella93,Bertram07}; their samples are FIR-selected or consist of galaxies in interaction, and their molecular gas masses are in the range of $10^9~M_{\odot}$. Finally, we also compared with \cite{Solomon97}, whose sample of ULIRGs has an average of $10^{10}~ M_{\odot}$ of molecular gas.
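For illustration, the mass estimate can be sketched as follows (a simplified calculation of our own; the intensity, beam size and distance below are hypothetical, only the conversion factor is from the text, and the helium contribution is neglected):

```python
# Sketch of a single-beam molecular gas mass estimate (illustrative numbers):
# convert a velocity-integrated CO intensity into an H2 column density with
# X_CO = 2.3e20 cm^-2 (K km/s)^-1, then into a mass over the beam area.
import math

X_CO = 2.3e20                         # cm^-2 (K km s^-1)^-1, from the text
M_H2_G = 2.0 * 1.6605e-24             # mass of one H2 molecule in grams
PC_CM = 3.0857e18                     # parsec in cm
MSUN_G = 1.989e33                     # solar mass in grams

def gas_mass(i_co, beam_fwhm_arcsec, dist_pc):
    """H2 mass (Msun) for an integrated intensity i_co (K km/s) in one beam."""
    n_h2 = X_CO * i_co                               # column density, cm^-2
    # beam radius on the sky -> physical radius at the source (small angle)
    r_cm = 0.5 * beam_fwhm_arcsec * dist_pc * PC_CM / 206265.0
    area = math.pi * r_cm**2                         # projected beam area, cm^2
    return n_h2 * M_H2_G * area / MSUN_G

# hypothetical detection: 5 K km/s in the 22'' CO(1-0) beam at 50 Mpc
print(f"{gas_mass(5.0, 22.0, 50e6):.2e} Msun")
```

With these (made-up) numbers the result is of order $10^8~M_\odot$, i.e. the order of magnitude quoted for the sample average.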
\subsection{Fanaroff and Riley classification}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.25]{FRall_histogram_16Jan09.eps} & \includegraphics[scale=0.3]{histograms.eps}
\end{tabular}
\caption{$M_{H_2}$ distribution of the galaxies depending on their Fanaroff-Riley classification. On the left we represent a histogram with all of the galaxies together, with the white color for the FR-II type, the darker color for the FR-I and the brighter color for the FR-c; and on the right we can see the same classification but shown separately for each type.}
\label{FRhisto}
\end{center}
\end{figure}
\begin{wrapfigure}[16]{r}{75mm}
\centering
\includegraphics[scale=0.3]{MassZ_arrows_ll.eps}
\caption{$M_{H_2}$ versus z and a lower limit of the Theoretical value for the mass with respect to the distance.}
\label{malmquist}
\end{wrapfigure}
In our sample 69.23\% of the galaxies are FR-I galaxies, 19.23\% are FR-II galaxies and 11.54\% are FR-c. The median value of the masses calculated using ASURV for each subsample shows that the FR-I galaxies ($6.9\times10^7~M_{\odot}$) have less molecular gas mass than the FR-II galaxies ($2.9\times10^8~M_{\odot}$) by an order of magnitude. Note that there are only 6 FR-c galaxies ($5.2\times10^6~M_{\odot}$), so for statistical purposes this subsample is not big enough.\\%, anyhow the median value of this type of galaxies for this sample is $5.2\times10^6~M_{\odot}$.\\% and the mean value is $1.67\times10^8~M_{\odot}$.
From Figure \ref{FRhisto} it is clear that: (1) Elliptical galaxies do not need much molecular gas mass to host a radio AGN -- the molecular gas mass lies in the range 6.3-9.7 (logarithmic values, in units of $M_{\odot}$). (2) There are more FR-I galaxies than FR-II or FR-c in our sample. (3) The FR-I galaxies cover a range of molecular gas mass as large as the FR-c and larger than the FR-II. (4) In terms of the median value, FR-II galaxies are clearly more massive than the FR-I and FR-c galaxies.\\
The difference between the subsamples can be explained by the Malmquist bias. Figure \ref{malmquist} plots the molecular gas mass vs. $z$, where it is clearly visible that galaxies at higher $z$ tend to have more molecular gas mass. There is a larger number of FR-II galaxies, compared to FR-I and FR-c galaxies, at higher $z$, implying a large upper-limit threshold and suggesting that this difference could be due to the Malmquist bias.\\% It is also represented on the Figure, the theoretical lower limit of the mass respect to the distance. For this we use the standard values, for the 300 km/s velocity width and 0.001 K for the noise. Most of the galaxies in the sample are over this limit.\\
The line ratio between the CO(2-1) and CO(1-0) transitions is computed as the integrated intensity ratio $I_{CO(2-1)}/I_{CO(1-0)}$ of the CO lines, where the intensity was measured at one point at the center of each galaxy. As previously noticed by \cite{Lim2000}, the 2 galaxies studied in their paper have a stronger detection in CO(2-1) than in CO(1-0). As \cite{Henkel97} noted, such a ratio, well over unity, implies a warm ($>$20 K) gas of small column density. This sample has a line ratio well over unity ($>$2). 7 of our galaxies have been detected in both frequencies and seven more have been detected only in CO(2-1). Of the galaxies detected in both frequencies, only one -- NGC 7052 -- has an integrated emission more intense in the CO(1-0) than in the CO(2-1) line. The maximum line ratio is found to be 2.48 (for 3CR 31 and 3CR 272.1) and the average value is 2.2 $\pm$ 0.2. Although the beam size is different at the two frequencies, according to \cite{Braine92} this ratio can be compared with the value of about 0.8 typical for spiral galaxies, a value which also appears closer to that of perturbed galaxies; they also suggest that high line ratios are associated with star formation. \\
For those galaxies detected in the CO(2-1) emission line but not in CO(1-0), we derived a value for the integrated intensity $I_{CO(1-0)}$ using the ratio of 2.2 calculated here from the galaxies detected in both lines.\\
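This conversion is a simple scaling; as a sketch (with hypothetical intensities, only the mean ratio being from the text):

```python
# Infer I_CO(1-0) for CO(2-1)-only detections via the mean line ratio of
# the doubly detected sources (the intensities below are hypothetical).
i21 = {'gal A': 4.4, 'gal B': 7.0}    # integrated I_CO(2-1), K km/s
MEAN_RATIO = 2.2                      # I_CO(2-1)/I_CO(1-0) from the text

i10 = {name: v / MEAN_RATIO for name, v in i21.items()}
print(i10)                            # estimated I_CO(1-0) per galaxy
```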
7 of our galaxies show a double horn line profile typical of a molecular gas disk, representing 36\% of the detected galaxies. We have IRAM-PdBI observations (see Figure \ref{pdbi-3c31}) of 3CR31 in CO(1-0), where a molecular gas disk is clearly visible. \\
\begin{wrapfigure}[16]{r}{70mm}
\centering
\includegraphics[scale=0.3]{3c31co10.mom0.eps}
\caption{CO(1-0) PdBI map of the galaxy 3CR 31.}
\label{pdbi-3c31}
\end{wrapfigure}
\begin{wrapfigure}[17]{r}{75mm}
\centering
\includegraphics[scale=0.3]{LfirLco_all_23Jan09.eps}\\
\caption{ Our galaxies compared to all the other samples used for comparison. The galaxy sample that fits best is that of \cite{Solomon97} with ULIRGs.} \label{lum}
\end{wrapfigure}
\section{Dust}
The median value of the dust temperature for this sample, using the galaxies detected by IRAS, is 35.8\,K, and the median value of the dust mass is $2.5\times10^6~M_{\odot}$. The dust in this sample is hotter than in spiral galaxies. Possible explanations are that the dust could be heated by an XDR or by a high level of star formation. Moreover, the FIR could come directly from the AGN and not from the dust. Finally, we note that the star formation rate is found to be quite low. \\
The $M_{H_2}/M_{dust}$ ratio for this sample, using the galaxies detected for both IRAS and IRAM-30m, is $\sim$300. \\
\newpage
\subsection{FIR vs. CO}
The $L_{CO}$/$L_{FIR}$ ratio is normally related to the star formation efficiency in spiral galaxies. Figure \ref{lum} plots $L_{FIR}$ vs. $L_{CO}$ for the galaxies in our sample plus the comparison samples, for the galaxies detected in both FIR and CO luminosities. This plot shows a correlation that appears to be universal; a linear fit to all the samples together gives $L_{CO}=0.9L_{FIR}-0.9$.\\
Since our data fit well in the plot compared to the other samples, we could argue that elliptical galaxies should exhibit at least a low level of star formation.
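A sketch of such a fit (on synthetic luminosities, presumably in logarithmic units as in the plot; not the actual data of the samples):

```python
# Least-squares fit of the L_CO - L_FIR correlation in log-log space
# (synthetic sample; the text quotes log LCO = 0.9 log LFIR - 0.9).
import numpy as np

rng = np.random.default_rng(0)
log_lfir = rng.uniform(8.0, 12.0, 50)                  # hypothetical sample
log_lco = 0.9*log_lfir - 0.9 + rng.normal(0, 0.2, 50)  # 0.2 dex scatter

slope, intercept = np.polyfit(log_lfir, log_lco, 1)
print(f"log LCO = {slope:.2f} log LFIR + {intercept:.2f}")
```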
\section{Summary and conclusions}
- We performed a survey of CO(1-0) and CO(2-1) in a sample of nearby radio galaxies chosen only on the basis of their radio continuum.\\
- The detection rate in CO(1-0) and/or CO(2-1) was 38\%.\\
- There is a molecular gas disk in the center for 36\% of the detected galaxies.\\
- Our sample has a high CO(2-1)/CO(1-0) line ratio of 2.2.\\
- The sample has a low molecular gas mass content compared to FIR selected samples.\\
- Radio galaxies do follow the $L_{FIR}-L_{CO}$ correlation.
\bibliographystyle{astron}
% --- arXiv:1704.02628 ---
\subsection{Infrared imaging of the burst.}
\label{sec:irimaging}
Near-infrared imaging at various epochs was performed with PANIC~\citep{baumeister} at the Calar Alto 2.2-m telescope and the Gamma-Ray Burst Optical/Near-Infrared Detector (GROND)~\citep{greiner} at the La Silla 2.2-m telescope.
Basic image processing was performed by the instrument teams using the corresponding data pipelines. The photometric calibration was done using the Two Micron All Sky Survey (2MASS) catalogue~\citep{2mass}.
Although short detector integration times were applied for the $K\!s$ band, partial saturation of the bright target and field stars of similar brightness was unavoidable,
in particular under good seeing conditions (our typical seeing was $\lesssim$1''). This was accounted for by a continuous extension of the linear fit between catalog and instrumental magnitudes with a parabola
for the brightest objects.
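This correction scheme can be sketched as follows (our own illustration on synthetic photometry, not the pipeline code): the bright end is modelled by a parabola attached to the linear fit with matching value and slope at the junction, so that only the curvature remains to be fitted:

```python
# Illustrative saturation correction (synthetic data): extend a linear
# catalog-vs-instrumental magnitude fit continuously (in value and slope)
# with a parabola for partially saturated bright stars.
import numpy as np

def calibrate(m_inst, m_cat, m_break):
    """Return a function mapping instrumental to catalog magnitudes."""
    faint = m_inst > m_break
    a, b = np.polyfit(m_inst[faint], m_cat[faint], 1)     # linear regime
    # bright side: m_cat = a*m + b + c*(m - m_break)^2; the quadratic term
    # vanishes with zero slope at m_break, so the join is smooth and only
    # the curvature c is fitted (1-parameter least squares).
    d = m_inst[~faint] - m_break
    resid = m_cat[~faint] - (a*m_inst[~faint] + b)
    c = np.sum(resid*d**2) / np.sum(d**4)
    def f(m):
        m = np.asarray(m, dtype=float)
        corr = np.where(m < m_break, c*(m - m_break)**2, 0.0)
        return a*m + b + corr
    return f

# synthetic stars: linear relation plus a quadratic saturation term
rng = np.random.default_rng(1)
mi = rng.uniform(8.0, 16.0, 200)
mc = 1.0*mi + 2.0 + np.where(mi < 11.0, 0.05*(mi - 11.0)**2, 0.0)
f = calibrate(mi, mc, 11.0)
print(np.max(np.abs(f(mi) - mc)))        # residuals of the recovered relation
```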
Flux densities for NIRS\,3 were generally derived using the APER procedure from the IDL Astronomy Library~\citep{landsman}, taking the local background into account.
Mid-and far-infrared flux densities were obtained by performing target-of-opportunity observations with FORCAST~\citep{adams} and FIFI-LS~\citep{klein,klein2} aboard SOFIA (PI J. Eisl\"offel, proposal ID 04\_0047). FORCAST images
were taken using narrow-band filters centered at 7.7, 11.1, 19.7, 31.5, and 37.1\,$\mu$m. The spectral windows for FIFI-LS were chosen to match the central wavelengths of the
far-infrared AKARI filters, i.e., 60, 90, 140, and 160\,$\mu$m. The FIFI-LS spectral data cubes, calibrated by a FIFI-LS team member (C. F.), were collapsed and photometry was performed on the mean image.
The photometric values of the burst shown in Fig.~\ref{fig:sed}, along with their error, photometric aperture, instrument and observation date, are reported in Table~\ref{tab:photometry}.
\subsection{Light echo.}
\label{sec:light echo}
To cancel the influence of non-uniform extinction for the assessment of the change of the scattered light distribution due to the burst
and to compensate the decreasing surface brightness with growing distance from the source, a ratio image between PANIC $K\!s$ and UKIDSS $K$ frames was calculated. Before doing so,
the PSF of the $K$ frame was convolved with a proper kernel to match that of the $K\!s$ frame. The applied photometric scaling factor was derived from the corresponding zero points of the images.
This turned out to be correct, since the brightness ratio for field stars is of the order of unity.
The resulting distribution has an asymmetric bipolar morphology.
The asymmetry results from the inclination of the scattering cavities relative to the sky plane, leading to larger light distances for the blue-shifted lobe
for a given propagation period and vice versa.
The surfaces of scattered light of fixed travel time can be approximated as paraboloids with the star at the origin. From this it can be shown that at the onset of an outburst ($t=0$),
the size ratio between the back- and forward-scattering lobes is zero and increases to unity over time.
Thus, an approximately equal extent of the scattering lobes of a YSO seen close to edge-on is only expected for steady-state illumination.
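A simplified numerical illustration of this asymmetry (our own sketch with hypothetical numbers; it only models scatterers on the cavity axis, not the full paraboloid geometry): a point at distance $s$ along an axis inclined by $i$ to the sky plane is delayed by $s(1-\sin i)/c$ on the near (blue-shifted) side and $s(1+\sin i)/c$ on the far (red-shifted) side, so at a fixed time after the onset the blue-shifted lobe extends farther:

```python
# Simplified light-echo geometry (illustrative sketch, hypothetical numbers):
# relative to direct light, a photon scattered at distance s on the cavity
# axis is delayed by s*(1 -+ sin(i))/c for the near/far lobe, where i is the
# inclination of the axis to the plane of the sky.
import math

C_PC_PER_DAY = 1.0 / 1190.0          # light travels ~1 pc in about 1190 days

def lobe_extents(t_days, incl_deg):
    """Maximum illuminated distance (pc) along each lobe t_days after onset."""
    ct = t_days * C_PC_PER_DAY
    s = math.sin(math.radians(incl_deg))
    return ct / (1.0 - s), ct / (1.0 + s)   # (blue-shifted, red-shifted)

blue, red = lobe_extents(230.0, 20.0)       # ~8 months, 20 deg inclination
print(f"blue lobe: {blue:.3f} pc, red lobe: {red:.3f} pc, ratio {red/blue:.2f}")
```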
Moreover, for the purpose of judging the lobe sizes, it must also be taken into account that forward
scattering dominates in the blue-shifted lobe while backward scattering is
prevalent in the red-shifted lobe. Because of the different scattering
efficiencies, the red-shifted lobe will be less bright in general, and thus
appear smaller for a given surface brightness sensitivity.
The same analysis on a later PANIC $K\!s$ image (February 2016) confirms the light echo by verifying both its propagation and dilution. To derive the onset of the burst (June 2015),
we estimated the light travel time from the mean of the extents of both lobes.
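The onset estimate amounts to converting the angular extent of the echo into a light travel time at the adopted distance of 1.8\,kpc; a sketch with hypothetical lobe extents:

```python
# Estimate the burst onset from a light-echo extent (illustrative sketch;
# the angular extents below are hypothetical, the 1.8 kpc distance is from
# the text).
D_PC = 1800.0                        # distance to S255IR in parsec
LIGHT_DAYS_PER_AU = 499.0 / 86400.0  # light crosses 1 au in about 499 s

def travel_time_days(theta_arcsec, dist_pc=D_PC):
    """Light travel time for an angular extent theta_arcsec at dist_pc."""
    extent_au = theta_arcsec * dist_pc   # small angle: 1'' at 1 pc = 1 au
    return extent_au * LIGHT_DAYS_PER_AU

# mean of (hypothetical) blue- and red-lobe extents
mean_extent = 0.5 * (24.0 + 16.0)        # arcsec
print(f"onset ~{travel_time_days(mean_extent):.0f} days before the image")
```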
\subsection{Spectral energy distribution.}
\label{sec:sed}
As the energy released by the burst is thermalized by dust grains and radiated away in the infrared, pre-burst fluxes in this wavelength range are crucial for deriving the increase in luminosity.
For this purpose, pre-outburst non-saturated IRAC images of S255IR (taken in sub-array mode, courtesy of G. Fazio, program ID 40440) were retrieved from the IPAC infrared science archive. Image mosaics
were obtained from the dithered images for each channel using a custom IDL\footnote{IDL is a trademark of Exelis Visual Information Solutions, Inc.} procedure. Flux densities for NIRS\,3 were estimated as described above.
Similarly, flux densities for the N60 and N160 AKARI bands were derived from the corresponding images after retrieval from the ISAS/JAXA archive (the wide AKARI channels centered at 90 and 140\,$\mu$m are saturated).
These data were complemented with an archival ISO/SWS spectrum (courtesy D. Whittet), $H$ and $K\!s$ VLT/ISAAC photometry (private communication by S. Correia, ESO proposal ID 074.C-0772(B)) as well as flux densities
from the literature~\citep{wang,zin,longmore,itoh,simpson} and surveys (AKARI, BGPS, MSX, UKIDSS).
The outburst SED was obtained using data from PANIC, GROND, SINFONI, FORCAST and FIFI-LS taken in February 2016. The pre- and burst luminosities were derived by integrating the dereddened SEDs and assuming a distance
of 1.8$\pm$0.1\,kpc.
To deredden the SED we adopt our visual extinction $A_V$=\,44$\pm$16\,mag
and $R_V$\,=\,3.1 extinction law~\citep{draine}. The resulting pre- and outburst luminosities are
(2.9$^{+1.0}_{-0.7}$)$\times$10$^4$\,L$_\odot$ and (1.6$^{+0.4}_{-0.3}$)$\times$10$^5$\,L$_\odot$, respectively. The uncertainties
were inferred from the small distance error and the uncertainty on the visual extinction.
We also note that because of the close to edge-on view of its circumstellar disk, the estimated luminosity might represent a lower limit. The proper value may be up to two times higher~\citep{whitney}.
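The luminosity integration can be sketched as follows (synthetic, already-dereddened SED samples of our own; only the distance is from the text):

```python
# Sketch of the luminosity integration: L = 4*pi*d^2 * integral(F_nu d nu)
# over the dereddened SED (the SED samples below are synthetic).
import numpy as np

PC_CM = 3.0857e18                    # parsec in cm
LSUN = 3.828e33                      # solar luminosity in erg/s

def luminosity(wave_um, fnu_jy, dist_pc):
    """Trapezoidal integral of F_nu over frequency -> luminosity in L_sun."""
    nu = 2.998e14 / np.asarray(wave_um, dtype=float)   # Hz (c / lambda)
    fnu = np.asarray(fnu_jy, dtype=float) * 1e-23      # erg s^-1 cm^-2 Hz^-1
    order = np.argsort(nu)
    nu, fnu = nu[order], fnu[order]
    f_int = np.sum(0.5*(fnu[1:] + fnu[:-1])*np.diff(nu))   # erg s^-1 cm^-2
    return 4.0*np.pi*(dist_pc*PC_CM)**2 * f_int / LSUN

# hypothetical dereddened SED samples (wavelength in micron, flux in Jy)
w = [2.2, 7.7, 19.7, 37.1, 60.0, 90.0, 160.0]
f = [40.0, 300.0, 900.0, 1500.0, 1200.0, 700.0, 250.0]
print(f"L ~ {luminosity(w, f, 1800.0):.2e} Lsun")
```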
\subsection{Infrared Integral Field Unit Spectroscopy.}
Our $K$-band (1.95--2.5\,$\mu$m) integral field unit (IFU) spectroscopic data of S255IR\,NIRS\,3 consist of three datasets
taken with SINFONI~\citep{eisenhauer} on VLT (ESO, Chile) with R$\sim$4000 and NIFS~\citep{mcgregor} on the Gemini North telescope with R$\sim$5300. Adaptive-optics assisted mode was used for all runs.
The first SINFONI dataset (26th of February 2016) was centred on NIRS\,3 (25 mas pixel scale and field of view - FoV - of 0''.8$\times$0''.8).
The second SINFONI dataset (9th of March 2016) was taken with the lowest spatial sampling (250\,mas pixel scale and FoV of 8''$\times$8'') and maps an area of $\sim$11''$\times$11'' around NIRS\,3, covering NIRS\,3, NIRS\,1
and their outflow cavities.
NIFS data (100\,mas pixel scale and FoV of 3''$\times$3'') were collected on the 8th of April 2016 and map the red-shifted outflow cavity, covering an area of $\sim$6''$\times$6''.
SINFONI data were reduced with the standard reduction pipeline in GASGANO~\citep{modigliani} that includes dark and bad pixel removal, flat-field and optical distortion correction, wavelength calibration with arc lamps,
and image combination to obtain the final 3D data cube.
NIFS data reduction was accomplished in a similar fashion using the Gemini package in IRAF.
All data were corrected for atmospheric transmission and flux calibrated by means of standard stars.
SINFONI pre-outburst IFU spectra, taken between February and March 2007 and already published in a previous paper~\citep{wang}, were retrieved from the ESO Data Archive. They map an area (70''$\times$70'') larger
than that of our observations.
To compare pre- and outburst data, spectra were extracted from our data cubes within an area of 1''.5$\times$1''.5 (centred on the NIRS\,3 source; $\rm RA(J2000):6^h12^m54.0^s; DEC(J2000):+17\degree59^{'}23.1^{''}$) and
6''$\times$6'' (centred on $\rm RA(J2000):6^h12^m54.4^s; DEC(J2000):+17\degree59^{'}24.7^{''}$) for NIRS\,3 (Figure~\ref{fig:spectra}, left panel)
and the red-shifted outflow cavity (Figure~\ref{fig:spectra}, right panel), respectively.
\subsection{Visual extinction variability vs. accretion burst.}
\label{sec:Av}
In principle, large variations of the extinction towards NIRS\,3 could cause its
infrared variability. However, this explanation does not fit our observations, for the following reasons.
{\it a)} The increase in luminosity is detected at NIR, MIR and FIR wavelengths.
This implies that the variation in luminosity cannot be due to a change in visual extinction, which would strongly affect the NIR part of the SED,
only marginally affect the MIR part of the spectrum, and leave its FIR portion unaffected.
{\it b)} The increase in luminosity at IR wavelengths temporally matches the flares of the methanol masers in the radio.
Moreover, the maser positions match the position of NIRS\,3 (Sanna et al., in preparation).
{\it c)} In addition, the light echo observed at NIR wavelengths matches the timing of the CH$_3$OH maser flares.
{\it d)} The increase in the SED luminosity matches the appearance (CO, He\,I, Na\,I, lines) and increase in luminosity (Br$\gamma$, H$_2$) of the IR lines.
{\it e)} Visual extinction affects the intensity of both lines and continuum, as well as the continuum's color.
Because extinction affects lines and continuum to the same extent, the equivalent width (EW) of the lines should not change.
On the other hand, EWs and fluxes of Br$\gamma$ and H$_2$ lines, already present in the pre-outburst spectrum in the outflow cavity, show a large variability
and are anti-correlated, as expected in accretion events~\citep{lorenzetti13}.
This cannot be explained with extinction variability.
Moreover, the slope of the K-band spectra on source and outflow cavities does not show a significant change
before and during the outburst, i.e., we do not detect any blueing of the spectra in 2016.
{\it f)} As reported in the next subsection, the visual extinction towards the outflow cavities does not change significantly.
Therefore we infer that the visual extinction did not significantly change between 2007 and 2016.
\subsection{Visual extinction towards the outflow cavities and on-source.}
To estimate the visual extinction towards both blue and red-shifted lobes, we use pairs of lines from [FeII]
(2.016/2.254\,$\mu$m) and H$_2$ (2.034/2.437\,$\mu$m, 2.122/2.424\,$\mu$m, 2.223/2.413\,$\mu$m)
species that originate from the same upper level.
We detect shocked emission lines ([FeII] and H$_2$) in two knots positioned in the blue- and red-shifted lobes, respectively.
Assuming that the emission arises from optically thin gas, the observed line ratios depend only on the differential extinction.
The theoretical values are derived from the Einstein coefficients~\citep{deb}
and frequencies of the transitions. We adopt the Rieke \& Lebofsky~\citep{rieke} extinction
law to correct for the differential extinction and compute A$_V$.
Values inferred are A$_V$=18$\pm$5\,mag (A$_V$([FeII])=16$\pm$10\,mag and A$_V$(H$_2$)=19$\pm$5\,mag) for the blue-shifted lobe and A$_V$=28$\pm$9\,mag (A$_V$([FeII])=27$\pm$14\,mag and A$_V$(H$_2$)=29$\pm$12\,mag)
for the red-shifted lobe.
Similar values, but with larger uncertainties, are inferred from the pre-outburst spectra (2007) of the blue-shifted (A$_V$(H$_2$)=18$\pm$7\,mag) and red-shifted (A$_V$(H$_2$)=27$\pm$15\,mag) outflow cavities.
These latter measurements suggest that the visual extinction towards the lobes did not significantly change.
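The differential-extinction estimate from a line pair sharing the same upper level can be sketched as below. The snippet uses a simple NIR power-law approximation to the extinction law ($A_\lambda \propto \lambda^{-1.75}$) rather than the tabulated Rieke \& Lebofsky values, and the input flux ratios are hypothetical; the actual analysis combines several [FeII] and H$_2$ pairs.

```python
import numpy as np

def av_from_line_pair(lam1, lam2, ratio_obs, ratio_theo, alpha=1.75):
    """A_V from a pair of optically thin lines sharing the same upper level.

    lam1, lam2 : wavelengths [micron] (lam1 < lam2)
    ratio_obs  : observed flux ratio F(lam1)/F(lam2)
    ratio_theo : intrinsic ratio from Einstein coefficients and frequencies
    alpha      : assumed NIR extinction power-law index (A_lam ~ lam^-alpha)
    """
    # differential extinction in magnitudes between the two wavelengths
    dm = -2.5 * np.log10(ratio_obs / ratio_theo)
    # A_lam / A_V for a power law normalized at the V band (0.55 micron)
    f = lambda lam: (lam / 0.55) ** (-alpha)
    return dm / (f(lam1) - f(lam2))

# hypothetical observed ratio for the H2 2.122/2.424 micron pair: the bluer
# line is more extincted, so the observed ratio falls below the intrinsic
# one and A_V comes out positive
av = av_from_line_pair(2.122, 2.424, ratio_obs=1.60, ratio_theo=1.90)
print(f"A_V ~ {av:.0f} mag")
```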
We also infer the visual extinction towards NIRS\,3 from the H$_2$ lines detected in the outburst spectrum (Fig.~\ref{fig:spectra}, left panel), obtaining A$_V$(H$_2$)=44$\pm$16\,mag.
The inferred value is consistent with A$_V$=46\,mag from Simpson et al. 2009~\citep{simpson}.
Finally, from the pre-outburst $J-H$ and $H-K$ colors of the UKIDSS photometry, we obtain A$_V\sim$48--62\,mag by assuming that NIRS\,3 is an O6 spectral type positioned on the ZAMS.
The latter is consistent with our previous estimate.
Therefore we adopt A$_V$(H$_2$)=44$\pm$16\,mag towards the source and use this value to deredden the SED.
\subsection{Line Luminosity.}
The line luminosities in the red-shifted lobe were inferred from the dereddened line fluxes using A$_V$=28$\pm$9\,mag and assuming a distance to the object of 1.8$\pm$0.1\,kpc~\citep{burns}.
\subsection{Energy of burst, accreted mass and mass accretion rate.}
The burst energy ($E=\Delta L_{acc} \times \Delta t$, where $\Delta t$ is the length of the burst) delivered so far (until mid-April 2016, the date of the last available observation)
is inferred from $\Delta L_{acc}$=(1.3$\pm^{0.4}_{0.3}$)$\times$10$^5$\,L$_\odot$, obtained from the pre- and outburst SEDs,
and considering that the burst began around mid-June 2015. The accreted mass is inferred assuming that the stellar radius is $R_*$=10\,R$_\odot$ and using $E=GM_*M_{acc}/R_*$, where G is the gravitational constant, $M_*$ is
the mass of the star and $M_{acc}$ is the accreted mass. Finally, the mass accretion rate is obtained from $\dot{M}_{acc}\sim(2 \Delta L_{acc} R_*)/(G M_*)$.
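With the numbers quoted above, these estimates can be sketched as follows. The burst duration of $\sim$305 days (mid-June 2015 to mid-April 2016) and the stellar mass $M_*$\,=\,20\,M$_\odot$ (the value adopted in the next subsection) are illustrative assumptions, as are the CGS constants; the snippet only reproduces the order of magnitude of the formulas, not the published error analysis.

```python
# Burst energetics sketch (CGS units); the ~305 d burst duration and
# M* = 20 M_sun are assumptions taken from the surrounding text.
G, msun, rsun, lsun = 6.674e-8, 1.989e33, 6.957e10, 3.828e33

dL = 1.3e5 * lsun           # accretion-luminosity increase [erg/s]
dt = 305 * 86400.0          # burst duration so far [s]
mstar, rstar = 20 * msun, 10 * rsun

E = dL * dt                           # burst energy [erg]
m_acc = E * rstar / (G * mstar)       # accreted mass, from E = G M* M_acc / R*
mdot = 2 * dL * rstar / (G * mstar)   # accretion rate [g/s]

yr = 3.156e7
print(f"E ~ {E:.1e} erg")
print(f"M_acc ~ {m_acc / msun:.1e} M_sun")
print(f"dM/dt ~ {mdot * yr / msun:.1e} M_sun/yr")
```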
\subsection{Disk accretion by fragmentation vs merging.}
A conceptual question is whether our observations can rule out the possibility that what we are seeing is not disk fragmentation but
stellar capture and merger via tidal disruption~\citep{bally}.
In this scenario, massive stars build up by capturing other stars in disks and then tidally
disrupting them. However, both the timescales
and the energetics of the outburst of S255 NIRS\,3 seem to be
inconsistent with such a scenario.
For example, assuming that the mass of the central object is $\sim$20\,M$_\odot$,
the merger with a brown dwarf of 0.1\,M$_\odot$ would produce an energy of $\sim$5$\times$10$^{47}$\,erg released
in $\sim$10$^4$\,yr. These values are much larger than those we inferred for the outburst of NIRS\,3.
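A back-of-the-envelope check of the merger energy, $E \sim G M_* m_{BD}/R_*$, can be sketched as below; the choice $R_*$\,=\,10\,R$_\odot$ is an illustrative assumption (the same value adopted for the accretion estimate above) and reproduces the quoted order of magnitude.

```python
# Tidal-capture/merger energy estimate in CGS; R* = 10 R_sun is an
# illustrative choice, consistent with the stellar radius adopted earlier.
G, msun, rsun = 6.674e-8, 1.989e33, 6.957e10

mstar = 20 * msun   # central object
m_bd = 0.1 * msun   # brown-dwarf companion
rstar = 10 * rsun

E_merge = G * mstar * m_bd / rstar   # gravitational binding energy released
print(f"E_merge ~ {E_merge:.1e} erg")
```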
\subsection{Data availability.}
The datasets generated and analysed during the current study are not publicly available due to a proprietary period restriction of 12 months.
After this period ESO/SINFONI and GROND data will become publicly available from the European Southern Observatory science archive (http://archive.eso.org/eso/eso\_archive\_main.html)
under programs ID 296.C-5037(A) and 096.A-9099(A); Gemini/NIFS data from the Gemini Observatory archive (https://archive.gemini.edu/searchform) under program ID GN-2016A-DD-5;
SOFIA/FORCAST and FIFI-LS data from the SOFIA science archive (https://dcs.sofia.usra.edu/dataRetrieval/SearchScienceArchiveInfoBasic.jsp) under program ID 04\_0047.
Upon request the authors will provide all data supporting this study.
\end{methods}
\begin{addendum}
\item A.C.G., R.G.L., and T.P.R. were supported by Science Foundation Ireland, grant 13/ERC/I2907.
A.S. was supported by the Deutsche
Forschungsgemeinschaft (DFG) Priority Program 1573.
We thank the ESO Paranal and Gemini Observatory staff for their support.
B.S. thanks Sylvio Klose for helpful discussions concerning the light echo.
This research is partly based on observations collected at the VLT (ESO Paranal, Chile) with programme 296.C-5037(A) and
at the Gemini Observatory (Program ID GN-2016A-DD-5). Gemini Observatory
is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF
on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile),
Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), and Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil).
\item[Competing Interests] The authors declare that they have no
competing financial interests.
\item[Correspondence] Correspondence and requests for materials
should be addressed to Alessio Caratti o Garatti~(email: alessio@cp.dias.ie).
\item[Author contributions]
A.C.G., B.S. and R.G.L. wrote the initial manuscript and worked on the data reduction and analysis.
R.G.L. supported SINFONI observations.
A.C.G., B.S. and J.E. are the PIs of the ESO, Calar Alto as well as Gemini, and SOFIA proposals, respectively.
A.S., R.C., and L.M. worked on the maser and radio data.
T.P.R, A.S., R.C., L.M., C.M.W., R.D.O., W.J.dW. are coauthors of the proposals.
J.G. provided GROND data. A.K., C.F., and R.K. supported SOFIA observations.
J.M.I. supported PANIC observations.
All coauthors commented on the manuscript.
\end{addendum}
\begin{figure}
\centering
\includegraphics[width=15cm]{Fig1.eps}
\caption{
{
Pre-outburst, outburst and brightness-ratio images of S255IR\,NIRS\,3.
{\bf [Upper left]} UKIDSS pre-outburst $K$-band image, December 2009: NIRS\,3 is centered on a bipolar nebula, towards north-east and south-west, namely the red- and blue-shifted outflow cavities of the protostar.
Another HMYSO (NIRS\,1) is situated $\sim$2.5'' west of NIRS\,3.
{\bf [Upper right]} PANIC outburst $K\!s$-band image (November 2015), showing the brightening of NIRS\,3 and its outflow cavities.
{\bf [Lower left]} Ratio between PANIC $K\!s$ (Nov. 2015) and UKIDSS $K$ (Dec. 2009) images.
The gradual increment of brightness ratio towards the HMYSO represents the light echo -- a record of the burst history.
The echo asymmetry is primarily due to the outflow inclination with respect to the sky plane. For guidance, concentric circles mark light-travel distances in the plane of the sky, separated by one month.
{\bf [Lower right]} Ratio between PANIC $K\!s$ (Feb. 2016) and UKIDSS $K$ (Dec. 2009) images showing the
motion of the light echo. The lower bar indicates the range of the relative brightness increase.
}
\label{fig:Kband}
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=6.5cm]{Fig2b.eps}
\includegraphics[width=10.4cm]{Fig2a.eps}
\caption{
{
Pre- and outburst K-band spectra of S255IR\,NIRS\,3 (left) and its red-shifted outflow cavity (right).
{\bf [Left]} SINFONI/VLT pre-outburst (in red) and outburst (in black) K-band spectra of S255IR\,NIRS\,3.
{\bf [Right]} SINFONI/VLT pre-outburst (in red) and outburst (in black) K-band spectra of the red-shifted outflow cavity of S255IR\,NIRS\,3. The spectrum in the outburst phase shows a large number
of emission lines typical of disk-mediated accretion outbursts.
}
\label{fig:spectra}
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=11cm]{Fig3.eps}
\caption{
Pre- (cyan and blue) and outburst (orange and red) spectral energy distributions (SEDs) of S255IR\,NIRS\,3. Full colors indicate photometric measurements while light colors denote spectra.
\label{fig:sed}
}
\end{figure}
\begin{table}
\caption{Flux densities of the outburst SED}
\centering
\begin{tabular}{ c c c c c c }
Wavelength & Flux density & Error & Aperture radius & Instrument & Date\\
$\mu$m & [Jy] & [Jy] & [''] & & [D/M/Y] \\
1.66 & 0.0011 & 0.0005 & 0.6 & PANIC & 15 01 2016\\
2.2 & 0.1620 & 0.0032 & 0.6 & GROND & 18 02 2016\\
7.7 & 352.8 & 18.8 & 5 & FORCAST & 04 02 2016\\
11.1 & 74.8 & 8.6 & 6 & FORCAST & 04 02 2016\\
19.7 & 580.9 & 24.1 & 8 & FORCAST & 04 02 2016\\
31.5 & 3223 & 56.8 & 10 & FORCAST & 04 02 2016\\
37.1 & 5136 & 71.7 & 11 & FORCAST & 04 02 2016\\
60 & 13720 & 97.2 & 10 & FIFI-LS & 01 03 2016\\
86 & 11880 & 81.3 & 12 & FIFI-LS & 01 03 2016\\
142 & 3490 & 24.2 & 20 & FIFI-LS & 01 03 2016\\
162 & 2630 & 29.0 & 22 & FIFI-LS & 01 03 2016\\
\end{tabular}
\label{tab:photometry}
\end{table}
\section{References}
\bibliographystyle{naturemag}
|
2103.02876
|
\section{The comparison of the $^{125}$T\MakeLowercase{e}-NMR spectrum with the previous report}
Figure \ref{FigS.1} shows normal-state $^{125}$Te-NMR spectra of the $^{125}$Te-enriched sample used in this measurement and of the natural-Te sample used in the previous measurement\cite{NakamineJPSJ2019}.
The NMR-signal intensity of the enriched sample is much stronger than that of the previous sample, although the volumes of the two samples are almost the same.
In addition, the linewidths of the two samples are almost the same, ensuring that the quality of the enriched sample is comparable to that of the natural one.
\begin{figure}[H]
\begin{center}
\includegraphics[width=6cm,clip]{FigS1.eps}
\end{center}
\caption{\label{FigS.1}Normal-state $^{125}$Te-NMR spectra of the $^{125}$Te-enriched sample used in this study and of the natural-Te sample used in the previous measurement\cite{NakamineJPSJ2019}.
The $\Delta f$ on the horizontal axis is the difference in NMR frequency from the peak position ($\Delta f = f - f_{\rm peak}$).}
\end{figure}
\section{Alignment of the magnetic field }
In order to apply the magnetic field exactly parallel to the $b$ or $c$ axis, we used a split superconducting (SC) magnet generating a horizontal field, combined with a single-axis rotator whose rotation axis is the $a$ axis.
Figure \ref{FigS.2} shows the angular dependence of $^{125}$Te-NMR spectra (A) and the resonance peaks (B) of both the Te(1) and Te(2) sites at 4.2 K under the field of 1 T in the $bc$ plane.
The obtained angular dependence is consistent with the previous reports\cite{TokunagaJPSJ2019, NakamineJPSJ2019}.
As in the previous study\cite{NakamineJPSJ2019}, we measured the spectra near the $b$ axis in small angle steps, plotted the resonance peaks as shown in Fig.~\ref{FigS.2}(C), and aligned the field exactly parallel to the $b$ axis.
\begin{figure}[H]
\begin{center}
\includegraphics[width=8.7cm,clip]{FigS2.eps}
\end{center}
\caption{\label{FigS.2}(A) The angular dependence of the $^{125}$Te-NMR spectra of both the Te(1) and Te(2) sites at 4.2 K.
The magnetic field is 1 T in the $bc$ plane.
(B) The angular dependence of the resonance peaks for both the Te(1) and Te(2) sites.
(C) The zoomed view of the angular dependence of the resonance frequency of the Te(2) site around the $b$ axis.}
\end{figure}
\section{Evaluation of the Knight shift and linewidth of NMR spectrum in $\bm{H \parallel c}$}
As shown in Fig.~S2, the Te(1) and Te(2) signals were distinct when $H \parallel b$ but overlapped when $H \parallel c$.
It is difficult to distinguish between the two signals because the line width is broad in $H \parallel c$.
Therefore, we determined the Knight shift from the spectral-peak frequency because the contribution of the Te(2) site was dominant even in the broad spectrum when $H \parallel c$.
In addition, we defined the linewidth as the left-side half width at half maximum (LHWHM) as shown in the inset of Fig.~\ref{FigS.3} to avoid the contribution of the Te(1) signal.
The broken curves in Figs.~1(D), 2(B), 2(C), and the inset of Fig.~S3 indicating the Te(1) and Te(2) sites are guides to the eye and were obtained as follows.
As seen in $H \parallel b$, the signal-intensity ratio of Te(1) and Te(2) is not 1:1, for reasons such as differences in the nuclear spin-spin relaxation rates and in the pulse conditions.
Thus, the broken curves were determined by a double-Lorentzian fit with a fixed signal-area ratio [Te(1)/Te(2)\,$\sim$\,0.56], referring to the spectrum reported by Tokunaga {\it et al.}\cite{TokunagaJPSJ2019}.
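A minimal sketch of such a constrained double-Lorentzian fit is given below. Only the fixed area ratio of 0.56 is taken from the text; the synthetic spectrum, peak positions, widths and noise level are made-up illustrations, not our measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

RATIO = 0.56  # fixed Te(1)/Te(2) signal-area ratio

def lorentzian(f, f0, w):
    """Area-normalized Lorentzian with center f0 and HWHM w."""
    return (w / np.pi) / ((f - f0) ** 2 + w ** 2)

def two_site_model(f, area2, f1, w1, f2, w2):
    """Te(1) + Te(2) spectrum with the Te(1)/Te(2) area ratio held fixed."""
    return area2 * (RATIO * lorentzian(f, f1, w1) + lorentzian(f, f2, w2))

# synthetic spectrum with made-up parameters [arbitrary frequency units]
rng = np.random.default_rng(0)
f = np.linspace(-1.0, 1.0, 400)
true = (1.0, -0.25, 0.08, 0.20, 0.10)
y = two_site_model(f, *true) + 0.02 * rng.standard_normal(f.size)

# fit all free parameters; the area ratio stays pinned inside the model
popt, _ = curve_fit(two_site_model, f, y, p0=(1.0, -0.2, 0.1, 0.15, 0.1))
print("fitted centers:", popt[1], popt[3])
```

Fixing the ratio inside the model function, rather than fitting two independent areas, is what keeps the overlapping Te(1) contribution from being absorbed into the Te(2) peak.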
\section{Temperature dependence of the linewidth of the NMR spectrum}
In general, the linewidth of the NMR spectrum becomes broader in the SC state than in the normal state, owing to the SC diamagnetic shielding and the distribution of the spin susceptibility.
Therefore, the broadening of the linewidth below $T_{\rm c}$ is a reliable confirmation that the NMR spectrum in the SC state was indeed measured.
\begin{figure}[H]
\begin{center}
\includegraphics[width=5.5cm,clip]{FigS3.eps}
\end{center}
\caption{\label{FigS.3}Temperature dependence of the increase of the FWHM in the SC state ($\Delta$FWHM) for $H \parallel b$ (A) and of the increase of the left-side HWHM in the SC state for $H \parallel c$ (B).
$\Delta$FWHM is defined as $\Delta$FWHM $\equiv$ FWHM($T$) $-$ FWHM in the normal state.
Dashed lines are guides to the eye, and arrows mark the superconducting transition temperatures obtained from the AC susceptibility measurements.
(inset) A typical NMR spectrum in $H \parallel c$, illustrating the definition of the left-side HWHM.
Dotted lines represent the result of fitting with the two-Lorentzian function.}
\end{figure}
Figure~\ref{FigS.3} shows the behavior of the linewidth increase of the NMR spectrum for $H\parallel b$ (A) and $H \parallel c$ (B).
The arrows in the figure show the $T_{\rm c}$ determined with the AC susceptibility measurement.
The full width at the half maximum (FWHM) in $H\parallel b$ is obtained by the Gaussian fitting of the whole spectrum.
The FWHMs below $\sim 0.8$ K are not plotted because a shoulder peak appears at the lower-frequency side (i.e. at the lower Knight shift) than the original peak frequency below 0.9 K, as mentioned in the main text.
The spectrum in $H \parallel b$ is broadened from just below $T_{\rm c}$.
When $H \parallel c$, we plotted LHWHM as mentioned above.
The spectra in $H\parallel c$ are also broadened from just below $T_{\rm c}$, as in the case of $H \parallel b$.
The broadening of the spectrum from just below $T_{\rm c}$ for both $H\parallel b$ and $H\parallel c$ ensures that the NMR spectrum in the SC state was indeed measured.
|
2209.13271
|
\section{Introduction}
Let ${\boldsymbol x}_\star({\boldsymbol \theta})$ be a function defined implicitly as the solution to an optimization problem, \mbox{${\boldsymbol x}_\star({\boldsymbol \theta}) = \argmin_{{\boldsymbol x} \in {\mathbb R}^d} f({\boldsymbol x}, {\boldsymbol \theta})$}.
Implicitly defined functions of this form appear in different areas of machine learning, such as
reinforcement learning~\citep{pfau2016connecting,du2017stochastic}, generative adversarial networks~\citep{metz2016unrolled}, hyper-parameter optimization~\citep{bengio_2000,pedregosa2016hyperparameter,franceschi_2017,lorraine_2020,bertrand2020implicit},
meta-learning \citep{franceschi_2018,metalearning-implicit}, deep equilibrium models, \citep{bai2019deep} or optimization as a layer \citep{kim_2017,amos_2017, wang2019satnet}, to name a few.
The main computational burden of using implicit functions in a machine learning pipeline is that the Jacobian computation $\partial_{\boldsymbol \theta} {\boldsymbol x}_\star({\boldsymbol \theta})$ is challenging:
since the implicit function ${\boldsymbol x}_\star({\boldsymbol \theta})$ does not usually admit an explicit formula, classical automatic differentiation techniques cannot be applied directly.
Two main approaches have emerged to compute the Jacobian of implicit functions: \emph{implicit} differentiation and \emph{unrolled} differentiation. This paper focuses on unrolled differentiation; for recent surveys on implicit differentiation, see \citep{tutorial_implicit,blondel2021efficient}.
Unrolled differentiation, also known as iterative differentiation, starts by approximating the implicit function ${\boldsymbol x}_\star(\cdot)$ by the output of an iterative algorithm, which we denote ${\boldsymbol x}_t(\cdot)$, and then differentiates through the algorithm's computational path
\citep{wengert_1964,domke_2012,deledalle_2014, franceschi_2018,shaban2019truncated}.
\textbf{Contributions.} We analyze the convergence of the unrolled Jacobian by establishing worst-case bounds on the Jacobian suboptimality $\|\partial {\boldsymbol x}_t({\boldsymbol \theta})-\partial{\boldsymbol x}_{\star}({\boldsymbol \theta})\|$ for different methods, more precisely:
\begin{enumerate}[leftmargin=*]
\item We provide a general framework for analyzing unrolled differentiation on quadratic objectives for any gradient-based method (Theorem \ref{thm:master_identity}). For gradient descent and the Chebyshev iterative method, we derive closed-form worst-case convergence rates in Corollary \ref{cor:worst_case_rates_gd} and Theorem \ref{thm:bound_cheby}.
\item We identify the ``curse of unrolling'' as a consequence of this analysis: A fast asymptotic rate inevitably leads to a condition number-long burn-in phase where the Jacobian suboptimality increases. While it is possible to reduce the length, or the peak, of this burn-in phase, this comes at the cost of a slower asymptotic rate (Figure \ref{fig:empirical_comparison}).
\item Finally, we describe a novel approach to mitigate the curse of unrolling, motivated by the theory of Sobolev orthogonal polynomials (Theorem \ref{thm:soboloev_method}).
\end{enumerate}
\textbf{Related work.} The analysis of unrolling was pioneered in the work of \citet{gilbert1992automatic}, who showed the asymptotic convergence of this procedure for a class of optimization methods that includes gradient descent and Newton's method. These results were recently extended by \citet{ablin_2020}, who developed a complexity analysis for non-quadratic functions. The rates they obtain are valid only for monotone optimization algorithms, such as gradient descent with a small step size. In our case, we instead focus on the more restrictive quadratic optimization setting but obtain rates for a more general class of algorithms, including non-monotone and accelerated algorithms such as gradient descent with a large step size, or the Chebyshev method.
Thanks to the more restrictive quadratic setting, we are able not only to provide a tighter analysis, but also to derive \emph{accelerated} variants. Finally, none of the discussed works observes or provides an explanation for the \emph{burn-in} phase that we discuss in Section \ref{sec:cvg_rate}.
\section{Preliminaries and Notations}
In this paper, we consider an objective function $f$ parametrized by two variables ${\boldsymbol x} \in {\mathbb R}^d$ and ${\boldsymbol \theta} \in {\mathbb R}^k$. We are interested in the derivative of optimization problem solutions:
\begin{empheq}[box=\mybluebox]{equation*}\tag{OPT}\label{eq:obj_fun}
\begin{aligned}
\vphantom{\sum_i^k} &\text{{\bfseries Goal}: approximate Jacobian } \partial {\boldsymbol x}_\star({\boldsymbol \theta})\,, \text{ where } {\boldsymbol x}_\star({\boldsymbol \theta}) = \argmin_{{\boldsymbol x} \in {\mathbb R}^d} f({\boldsymbol x}, {\boldsymbol \theta}) \,.
\end{aligned}
\end{empheq}
We also assume ${\boldsymbol x}_\star({\boldsymbol \theta})\in {\mathbb R}^d$ is the unique minimizer of $f({\boldsymbol x}, {\boldsymbol \theta})$, for some fixed value of ${\boldsymbol \theta}$. In particular, we will describe the rate of convergence of $\|\partial {\boldsymbol x}_t({\boldsymbol \theta})-\partial {\boldsymbol x}_\star({\boldsymbol \theta})\|$ in the specific case where $f$ is a quadratic function in its first argument, and ${\boldsymbol x}_t$ is generated by a first-order method.
\textbf{Notations.} In this paper, we use upper-case letters for polynomials ($P,\, Q$), bold lower-case for vectors (${\boldsymbol x},\, {\boldsymbol b}$), and bold upper-case for matrices (${\boldsymbol H}$). We write $\mathcal{P}_t$ for the set of polynomials of degree at most $t$. We distinguish $\nabla f({\boldsymbol x}, {\boldsymbol \theta})$, which refers to the gradient of the function $f$ in its first argument, from $\partial_{\boldsymbol \theta} f({\boldsymbol x}, {\boldsymbol \theta})$, the partial derivative of $f$ w.r.t.\ its second argument, evaluated at $({\boldsymbol x}, {\boldsymbol \theta})$ (if there is no ambiguity, we write $\partial$ instead of $\partial_{\boldsymbol \theta}$). Similarly, $\partial {\boldsymbol x}({\boldsymbol \theta})$ is the Jacobian of the vector-valued function ${\boldsymbol x}(\cdot)$ evaluated at ${\boldsymbol \theta}$, and $P'(\cdot)$ is the derivative of the polynomial $P$.
The Jacobian $\partial {\boldsymbol H}({\boldsymbol \theta})$ is a tensor of size $k \times p \times p$. We denote its tensor multiplication by a matrix $\boldsymbol{Q} \in {\mathbb R}^{p \times q}$ by $\partial {\boldsymbol H}({\boldsymbol \theta})\boldsymbol{Q}$, with the understanding that this denotes multiplication along the first axis; that is, the resulting tensor is characterized by $[\partial {\boldsymbol H}({\boldsymbol \theta})\boldsymbol{Q}]_i = \partial {\boldsymbol H}({\boldsymbol \theta})_i \boldsymbol{Q}$ for $1 \leq i \leq k$.
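This first-axis convention can be written down directly; a small NumPy sketch (the shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
k, p, q = 3, 4, 2
dH = rng.standard_normal((k, p, p))  # Jacobian tensor: one p x p slice per theta_i
Q = rng.standard_normal((p, q))

# multiplication along the first axis: [dH Q]_i = dH_i Q
out = np.einsum('ipr,rq->ipq', dH, Q)

# equivalent slice-by-slice definition
ref = np.stack([dH[i] @ Q for i in range(k)])
print(np.allclose(out, ref))
```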
Finally, we denote by
$\ell$ the strong convexity constant of the objective function $f$, by $L$ its smoothness constant, and by $\kappa=\ell/L$ its inverse condition number.
\subsection{Problem Setting and Main Assumptions}
Throughout the paper, we make the following three assumptions. The first one assumes the problem is quadratic. The second one is more technical and assumes that the Hessian commutes with its derivative, which simplifies the formulas considerably. As we discuss in the Experiments section, we believe that some of these assumptions could potentially be relaxed. The third assumption restricts the class of algorithms to first-order methods.
\begin{assumption}[Quadratic objective] \label{assump:quadratic}
The function $f$ is a quadratic function in its first argument,
\begin{equation}
f({\boldsymbol x}, {\boldsymbol \theta}) \stackrel{\text{def}}{=} \tfrac{1}{2} {\boldsymbol x}^\top {\boldsymbol H}({\boldsymbol \theta})\, {\boldsymbol x} + {\boldsymbol b}({\boldsymbol \theta})^\top {\boldsymbol x}\,,
\end{equation}
where $\ell {\boldsymbol I} \preceq {\boldsymbol H}({\boldsymbol \theta}) \preceq L{\boldsymbol I}$ for $0< \ell < L$\,.
We write ${\boldsymbol x}_\star({\boldsymbol \theta})$ the minimizer of $f$ w.r.t. the first argument.
\end{assumption}
\vspace{0.5em}\begin{assumption}[Commutativity of Jacobian] \label{assump:commutative}
We assume that ${\boldsymbol H}({\boldsymbol \theta})$ commutes with its Jacobian, in the sense that
\begin{equation}
\partial {\boldsymbol H}({\boldsymbol \theta})_i {\boldsymbol H}({\boldsymbol \theta}) = {\boldsymbol H}({\boldsymbol \theta})\partial {\boldsymbol H}({\boldsymbol \theta})_i~\text{ for } 1 \leq i \leq k\,.
\end{equation}
In the case in which ${\boldsymbol \theta}$ is a scalar ($k=1$), this condition amounts to the commutativity between matrices, $\partial {\boldsymbol H}({\boldsymbol \theta}) {\boldsymbol H}({\boldsymbol \theta}) = {\boldsymbol H}({\boldsymbol \theta}) \partial {\boldsymbol H}({\boldsymbol \theta})$.
\end{assumption}
\paragraph{Importance.} The previous assumption allows for a simpler expression for the Jacobian of ${\boldsymbol H}^t$. Notably, with this assumption the Jacobian of ${\boldsymbol H}^t$ can be expressed as
\begin{equation}
\partial \left[{\boldsymbol H}({\boldsymbol \theta})^t\right] = t \partial {\boldsymbol H}({\boldsymbol \theta}) {\boldsymbol H}^{t-1}({\boldsymbol \theta})\,.\label{eq:derivative_commute}
\end{equation}
This assumption is verified, for example, by ridge regression (see below).
Although quite restrictive, empirical evidence (see Appendix~\ref{scs:commutatity}) suggests that this assumption could potentially be relaxed or even lifted entirely.
\begin{example}[Ridge regression] \label{example:ridge} Let us fix ${\boldsymbol A} \in {\mathbb R}^{n \times d}, \bar{{\boldsymbol x}} \in {\mathbb R}^d, \, {\boldsymbol y} \in {\mathbb R}^n$, and let ${\boldsymbol H}(\theta) = {\boldsymbol A}^\top{\boldsymbol A} + \theta{\boldsymbol I}$, where $\theta$ in this case is a scalar. The ridge regression problem
\[
f({\boldsymbol x}, \theta) = \tfrac{1}{2} \left(\|{\boldsymbol A} {\boldsymbol x}-{\boldsymbol y}\|_2^2 +\theta \|{\boldsymbol x}-\bar{{\boldsymbol x}}\|_2^2\right) ,
\]
satisfies Assumptions \ref{assump:quadratic}, \ref{assump:commutative}, as $f({\boldsymbol x}, \theta)$ is quadratic in ${\boldsymbol x}$, and $\partial {\boldsymbol H}(\theta) {\boldsymbol H}(\theta) = {\boldsymbol H}(\theta)\partial {\boldsymbol H}(\theta) = {\boldsymbol H}(\theta)$.
\end{example}
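The commutativity assumption and the derivative identity \eqref{eq:derivative_commute} can be checked numerically for the ridge Hessian; in the sketch below the problem sizes, $\theta$ and the exponent $t$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
theta, t, eps = 0.5, 3, 1e-6

H = lambda th: A.T @ A + th * np.eye(4)  # ridge Hessian; dH/dtheta = I
dH = np.eye(4)

# commutativity: dH H = H dH (trivially, since dH = I)
assert np.allclose(dH @ H(theta), H(theta) @ dH)

# d/dtheta [H^t] = t dH H^{t-1}, checked by central finite differences
fd = (np.linalg.matrix_power(H(theta + eps), t)
      - np.linalg.matrix_power(H(theta - eps), t)) / (2 * eps)
analytic = t * dH @ np.linalg.matrix_power(H(theta), t - 1)
print(np.allclose(fd, analytic, atol=1e-4))
```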
In our last assumption we restrict ourselves to \textit{first-order methods}, widely used in large-scale optimization. These include methods such as gradient descent and Polyak's heavy ball.
\begin{assumption}[First-order method] \label{assump:first-order}
The iterates $\{{\boldsymbol x}_t\}_{t=0\ldots}$ are generated from a first-order method:
\[
{\boldsymbol x}_t({\boldsymbol \theta}) \in {\boldsymbol x}_0({\boldsymbol \theta}) + \spn\{ \nabla f({\boldsymbol x}_0({\boldsymbol \theta}), {\boldsymbol \theta}) ,\; \ldots, \; \nabla f({\boldsymbol x}_{t-1}({\boldsymbol \theta}), {\boldsymbol \theta}) \}\,.
\]
\end{assumption}
\subsection{Polynomials and First-Order Methods on Quadratics} \label{sec:link_poly}
The polynomial formalism has seen a revival in recent years, thanks to its simple and constructive analysis~\citep{scieur2020regularized, pedregosa2020averagecase, pmlr-v139-agarwal21a}.
It starts from a connection between optimization methods and polynomials that allows one to cast the complexity analysis of optimization methods as a polynomial bounding problem.
\subsubsection{Connection with Residual Polynomials}
After $t$ iterations, one can associate to any optimization method a degree-$t$ polynomial $P_t(\lambda) = a_t \lambda^t + \cdots + 1$ such that $P_t(0) = 1$, and
the error at iteration $t$ can then be expressed as
\begin{equation}\label{eq:residual_polynomial}
{\boldsymbol x}_t({\boldsymbol \theta}) - {\boldsymbol x}_\star({\boldsymbol \theta}) = {\color{purple}P_t({\boldsymbol H}({\boldsymbol \theta}))}({\boldsymbol x}_0({\boldsymbol \theta}) - {\boldsymbol x}_\star({\boldsymbol \theta}))\,.
\end{equation}
This polynomial ${\color{purple}P_t({\boldsymbol H}({\boldsymbol \theta}))}$ is called the {\color{purple}\textit{residual polynomial}} and represents the output of evaluating the originally real-valued polynomial $P_t(\cdot)$ at the matrix ${\boldsymbol H}$.
\begin{example} \label{example:gd}
In the case of gradient descent, the update reads ${\boldsymbol x}_{t+1} - {\boldsymbol x}^\star = (\boldsymbol{I} - \gamma {\boldsymbol H}({\boldsymbol \theta}))({\boldsymbol x}_t - {\boldsymbol x}^\star)$, which yields the residual polynomial $P_t({\boldsymbol H}({\boldsymbol \theta})) = (\boldsymbol{I} - \gamma {\boldsymbol H}({\boldsymbol \theta}))^t$\,.
\end{example}
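The identity in this example can be verified numerically on a small quadratic; the problem data below are arbitrary, and the step size $1/L$ is one standard choice.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
H = A.T @ A + 0.1 * np.eye(5)       # positive-definite Hessian
b = rng.standard_normal(5)

x_star = -np.linalg.solve(H, b)     # minimizer of (1/2) x^T H x + b^T x
gamma = 1.0 / np.linalg.norm(H, 2)  # step size 1/L
x = np.zeros(5)
t = 10
for _ in range(t):                  # gradient descent on the quadratic
    x = x - gamma * (H @ x + b)

# residual-polynomial form: x_t - x* = (I - gamma H)^t (x_0 - x*)
pred = np.linalg.matrix_power(np.eye(5) - gamma * H, t) @ (np.zeros(5) - x_star)
print(np.allclose(x - x_star, pred))
```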
\subsubsection{Worst-Case Convergence Bound}\label{scs:worstcase}
From the above identity, one can quickly compute a worst-case bound on the associated optimization method.
Using the Cauchy--Schwarz inequality on \eqref{eq:residual_polynomial}, we obtain
\[
\|{\boldsymbol x}_t({\boldsymbol \theta}) - {\boldsymbol x}_\star({\boldsymbol \theta})\| \leq \|{\color{purple}P_t({\boldsymbol H}({\boldsymbol \theta}))}\|\,\|{\boldsymbol x}_0({\boldsymbol \theta}) - {\boldsymbol x}_\star({\boldsymbol \theta})\|\,.
\]
We are interested in the performance of the first-order method on a class of quadratic functions (see Assumption \ref{assump:quadratic}) whose Hessians have bounded eigenvalues. Using the fact that the $\ell_2$-norm of a matrix is equal to its largest singular value, the worst-case performance of the algorithm reads
\begin{equation}\label{eq:worst_case_optim}
\|{\boldsymbol x}_t({\boldsymbol \theta}) - {\boldsymbol x}_\star({\boldsymbol \theta})\| \leq \max_{\lambda\in[\ell,L]}|{\color{purple}P_t(\lambda)}|\,\|{\boldsymbol x}_0({\boldsymbol \theta}) - {\boldsymbol x}_\star({\boldsymbol \theta})\|\,.
\end{equation}
Therefore, the worst-case convergence bound is a function of the polynomial associated with the first-order method, and depends on the bounds on the eigenvalues of the Hessian ($\lambda\in[\ell,L]$).
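The worst-case factor $\max_{\lambda\in[\ell,L]}|P_t(\lambda)|$ in \eqref{eq:worst_case_optim} can be evaluated on a grid. The sketch below (the values of $\ell$, $L$ and $t$ are arbitrary) does so for gradient descent with step size $h=2/(L+\ell)$, for which the maximum is attained at both edges of the spectrum:

```python
import numpy as np

ell, L, t = 0.5, 10.0, 20
h = 2.0 / (L + ell)                  # worst-case optimal step size for GD
lams = np.linspace(ell, L, 10_001)   # grid over the eigenvalue interval
worst = np.abs((1 - h * lams) ** t).max()

# closed form: |1 - h*ell| = |1 - h*L| = (L - ell)/(L + ell)
assert np.isclose(worst, ((L - ell) / (L + ell)) ** t)
```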
\subsubsection{Expected Spectral Density and Average-Case Complexity}
We recall the average-case complexity framework \citep{pedregosa2020averagecase, paquette2022halting, cunha2021only}, which provides a finer-grained convergence analysis than the worst-case. This framework is crucial in developing an accelerated method for unrolled differentiation (Section \ref{sec:accelerated_unroling}).
Instead of considering the worst instance from a class of quadratic functions, average-case analysis considers functions drawn \textit{at random} from the class. This means that, in Assumption \ref{assump:quadratic}, the matrix ${\boldsymbol H}({\boldsymbol \theta})$, the vector ${\boldsymbol b}({\boldsymbol \theta})$ and the initialization ${\boldsymbol x}_0({\boldsymbol \theta})$ in Assumption \ref{assump:first-order} are sampled from some (potentially unknown) probability distributions. Surprisingly, we do not require knowledge of these distributions; instead, the quantity of interest is the \textit{expected spectral density} $\mu(\lambda)$, defined as
\begin{equation}\label{eq:def_expected_spectral_density}
\textstyle \mu(\lambda) \stackrel{\text{def}}{=} \mathbb{E}_{{\boldsymbol H}({\boldsymbol \theta})}[\mu_{{\boldsymbol H}({\boldsymbol \theta})}(\lambda)], \qquad \mu_{{\boldsymbol H}({\boldsymbol \theta})}(\lambda) \stackrel{\text{def}}{=} \frac{1}{d}\sum_{i=1}^d \delta(\lambda-\lambda_i({\boldsymbol H}({\boldsymbol \theta}))).
\end{equation}
In Equation~\eqref{eq:def_expected_spectral_density}, $\lambda_i({\boldsymbol H}({\boldsymbol \theta}))$ is the $i$-th eigenvalue of ${\boldsymbol H}({\boldsymbol \theta})$, $\delta(\cdot)$ is the Dirac delta and $\mu_{{\boldsymbol H}({\boldsymbol \theta})}(\lambda)$ is the \textit{empirical spectral density} (i.e., a uniform mixture of point masses of weight $1/d$ located at the eigenvalues of ${\boldsymbol H}({\boldsymbol \theta})$).
Assuming ${\boldsymbol x}_0({\boldsymbol \theta})-{\boldsymbol x}_\star({\boldsymbol \theta})$ is independent of ${\boldsymbol H}({\boldsymbol \theta})$,\footnote{This assumption can be removed as in \citep{cunha2021only} at the price of a more complicated expected spectral density.} the average-case complexity of the first-order method associated to the polynomial $P_t$ reads
\begin{equation}\label{eq:average-case_analysis}
\mathbb{E}[\|{\boldsymbol x}_t({\boldsymbol \theta})-{\boldsymbol x}_\star({\boldsymbol \theta})\|^2] = \mathbb{E}[\|{\boldsymbol x}_0({\boldsymbol \theta})-{\boldsymbol x}_\star({\boldsymbol \theta})\|^2]\int P_t^2(\lambda) \;\mathop{}\!\mathrm{d}\mu(\lambda)\,.
\end{equation}
Here, the term in $P_t$ is algorithm-related, while the term in $\mathop{}\!\mathrm{d} \mu$ is related to the difficulty of the (distribution over the) problem class. As opposed to the worst-case analysis, where only the \textit{worst} value of the polynomial impacts the convergence rate, the average-case rate depends on the expected value of the squared polynomial $P_t$ over the \textit{whole} distribution. Note that in the previous equation, the expectation is taken over problem instances and not over any stochasticity of the algorithm.
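Identity \eqref{eq:average-case_analysis} can also be checked by Monte Carlo sampling. In the sketch below (the Wishart-type distribution over ${\boldsymbol H}$ and all constants are illustrative choices), the left-hand side is estimated by applying the gradient-descent residual polynomial to random instances, and the right-hand side from the eigenvalues of the sampled Hessians; here ${\boldsymbol x}_0-{\boldsymbol x}_\star$ is standard Gaussian, so $\mathbb{E}\|{\boldsymbol x}_0-{\boldsymbol x}_\star\|^2 = d$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, t, h, trials = 30, 5, 0.05, 1000

lhs = rhs_integral = 0.0
for _ in range(trials):
    A = rng.standard_normal((d, d)) / np.sqrt(d)
    H = A @ A.T                          # random PSD Hessian
    v = rng.standard_normal(d)           # v = x0 - x_star, isotropic
    P_t_H = np.linalg.matrix_power(np.eye(d) - h * H, t)
    lhs += np.linalg.norm(P_t_H @ v) ** 2 / trials
    lam = np.linalg.eigvalsh(H)
    # eigenvalue average = integral of P_t(lam)^2 against the spectral density
    rhs_integral += np.mean((1 - h * lam) ** (2 * t)) / trials

rhs = d * rhs_integral                   # E||x0 - x_star||^2 = d
assert abs(lhs - rhs) / rhs < 0.05
```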
\section{The Convergence Rate of differentiating through optimization}
\label{sec:cvg_rate}
We now analyze the convergence rate of gradient descent and of the Chebyshev algorithm (optimal on quadratics). We first introduce the \textit{master identity} (Theorem~\ref{thm:master_identity}), which links how well a first-order method estimates $\partial {\boldsymbol x}_\star({\boldsymbol \theta})$ to its associated residual polynomial ${\color{purple}P_t}$ and its derivative ${\color{teal}P_t'}$.
\begin{restatable}[Master identity]{theorem}{masteridentity}\label{thm:master_identity}
Under Assumptions \ref{assump:quadratic}, \ref{assump:commutative}, \ref{assump:first-order}, let ${\boldsymbol x}_t({\boldsymbol \theta})$ be the $t^{\text{th}}$ iterate of a first-order method associated to the residual polynomial $P_t$. Then the Jacobian error can be written as
\begin{align}
\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta}) & = \big({\color{purple}P_t({\boldsymbol H}({\boldsymbol \theta}))} - {\color{teal}P_t'({\boldsymbol H}({\boldsymbol \theta}))}{\boldsymbol H}({\boldsymbol \theta})\big) (\partial {\boldsymbol x}_0({\boldsymbol \theta})-\partial {\boldsymbol x}_\star({\boldsymbol \theta})) \nonumber\\
&\quad+ {\color{teal}P_t'({\boldsymbol H}({\boldsymbol \theta}))} \partial_{\boldsymbol \theta}\nabla f({\boldsymbol x}_0({\boldsymbol \theta}), {\boldsymbol \theta})\,.
\label{eq:distance_to_optimum}
\end{align}
\end{restatable}
The above identity involves the {\color{teal}{\bfseries derivative}} of the residual polynomials and not only the {\color{purple}{\bfseries residual}} polynomial, as was the case for minimization \eqref{eq:residual_polynomial}. This difference is crucial and will result in different rates for the Jacobian suboptimality than classical ones for objective or iterate suboptimality.
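The master identity can be verified numerically on a small ridge-type family with $\nabla f({\boldsymbol x},\theta) = ({\boldsymbol H}_0+\theta {\boldsymbol I}){\boldsymbol x} - {\boldsymbol b}$, so that $\partial{\boldsymbol H} = {\boldsymbol I}$ commutes with ${\boldsymbol H}$ and $\partial_\theta\nabla f({\boldsymbol x}_0,\theta) = {\boldsymbol x}_0$ (here ${\boldsymbol x}_0$ does not depend on $\theta$). The sketch below (this parametrization and all constants are our illustrative choices) unrolls gradient descent together with its Jacobian with respect to $\theta$ and compares against the right-hand side of \eqref{eq:distance_to_optimum}:

```python
import numpy as np

rng = np.random.default_rng(2)
d, t, h, theta = 4, 8, 0.05, 0.1
A = rng.standard_normal((d, d))
H = A @ A.T + (1 + theta) * np.eye(d)   # H(theta) = H0 + theta*I, H0 = AA' + I
b = rng.standard_normal(d)

x_star = np.linalg.solve(H, b)
J_star = -np.linalg.solve(H, x_star)    # dx*/dtheta = -H^{-1} (dH) x*, dH = I

x = x0 = rng.standard_normal(d)
J0 = np.zeros(d)                        # dx0/dtheta = 0: x0 independent of theta
J = J0
for _ in range(t):                      # unroll GD and its Jacobian wrt theta
    x, J = x - h * (H @ x - b), J - h * (H @ J + x)

lam, U = np.linalg.eigh(H)              # matrix functions via eigendecomposition
P = (1 - h * lam) ** t                  # P_t(lam)
dP = -t * h * (1 - h * lam) ** (t - 1)  # P_t'(lam)
rhs = (U * (P - dP * lam)) @ (U.T @ (J0 - J_star)) + (U * dP) @ (U.T @ x0)
assert np.allclose(J - J_star, rhs)
```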
For conciseness, our bounds make use of the following shorthand notation
\[
G \stackrel{\text{def}}{=} \| \partial_{\boldsymbol \theta}\nabla f({\boldsymbol x}_0({\boldsymbol \theta}), {\boldsymbol \theta})\|_F.
\]
\subsection{Worst-case Rates for Gradient Descent}
\label{sub:worst_case_gd}
We consider the fixed-step gradient descent algorithm,
\begin{align}
{\boldsymbol x}_t({\boldsymbol \theta}) = {\boldsymbol x}_{t-1}({\boldsymbol \theta})-h\nabla f({\boldsymbol x}_{t-1}({\boldsymbol \theta}), {\boldsymbol \theta}).
\end{align}
As mentioned in Example \ref{example:gd}, the associated polynomial reads $P_t(\lambda) = (1-h\lambda)^t$. We deduce its convergence rate by injecting this polynomial into the master identity of Theorem \ref{thm:master_identity}.
\begin{theorem} \label{thm:rate_gd}
Under Assumptions \ref{assump:quadratic}, \ref{assump:commutative}, let ${\boldsymbol x}_t({\boldsymbol \theta})$ be the $t^{\text{th}}$ iterate of gradient descent scheme with step size $h>0$. Then,
\begin{align*}
\|\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F & \leq \max_{\lambda \in[\ell, L]} \Big| \left( 1-h\lambda \right)^{t-1}\big\{ (1+(t-1)h\lambda) \|\partial {\boldsymbol x}_0({\boldsymbol \theta})-\partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F + h t G \big\}\,\Big|.
\end{align*}
\end{theorem}
\paragraph{Discussion.}
The bound above is a product of two terms: the first, $\left( 1-h\lambda \right)^{t-1}$, is the convergence rate of gradient descent and decreases exponentially for $h \leq \frac{2}{L + \ell}$, while the second term increases with $t$. This results in two distinct phases of training: an initial {\bfseries burn-in} phase, where the second term dominates and the Jacobian suboptimality may increase, followed by a {\bfseries linear convergence} phase, where the exponential term dominates. This phenomenon is visible empirically in Figure \ref{fig:intro_jac}. As an illustration, the next corollary exhibits explicit rates for gradient descent in the two special cases $h=\frac{1}{L}$ (short steps) and $h=\frac{2}{L+\ell}$ (large steps, which maximize the asymptotic rate).
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/phases.pdf}
\caption{{\bfseries The Phases of Unrolling}.
Gradient descent and the Chebyshev method exhibit two distinct phases during unrolled differentiation: an initial burn-in phase, where the Jacobian suboptimality increases, followed by a convergent phase with asymptotic linear convergence. The maximum of the suboptimality is similar for both algorithms, but Chebyshev peaks sooner than gradient descent, after a number of iterations roughly equal to the square root of the number required by gradient descent. The distinct phases, as well as their relative durations, are predicted by the theoretical worst-case bounds of Corollary \ref{cor:worst_case_rates_gd} and Theorem \ref{thm:bound_cheby}.
Both plots are run on the same problem, a ridge regression loss on the \texttt{breast-cancer} dataset. }
\label{fig:phase_unrolling}
\end{figure}
\vspace{1em}
\begin{corollary}\label{cor:worst_case_rates_gd}
For the step size $h=1/L$, the rate of Theorem \ref{thm:rate_gd} reads
\begin{align*}
\|\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F & \leq \text{\colorbox{linearphase}{$\underbrace{\left(1-\kappa\right)^{t-1}}_{\text{exponential decrease}} $}}\Bigg\{\text{\colorbox{burnin}{$\underbrace{\vphantom{\left(1-\kappa\right)^{t-1}}\left(1+\kappa(t-1)\right)}_{\text{\vphantom{p}linear increase}}$}}\|\partial {\boldsymbol x}_0({\boldsymbol \theta})-\partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F + \text{\colorbox{burnin}{$\frac{t}{L}$}}G\,\Bigg\}.
\end{align*}
Assuming $G = 0$, the above bound is monotonically decreasing.
If instead we take the worst-case optimal (for minimization) step size $h = 2/(L+\ell)$, then we have
\begin{align*}
\|\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F & \leq \text{\colorbox{linearphase}{$\underbrace{\left(\tfrac{1-\kappa}{1+\kappa}\right)^{t-1}}_{\text{exponential decrease}}$}} {\Bigg\{}\text{\colorbox{burnin}{$\underbrace{\vphantom{\left(\tfrac{1-\kappa}{1+\kappa}\right)^{t-1}}|2t-1|}_{\text{\vphantom{p}linear increase}}$}}\,\|\partial {\boldsymbol x}_0({\boldsymbol \theta})-\partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F + \text{\colorbox{burnin}{$\frac{2t}{L+\ell}$}}{G\Bigg\}}\,.
\end{align*}
Moreover, assuming $G = 0$, the maximum of the upper bound over $t$ can be as large as
\[
\|\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F \leq O_{\kappa\rightarrow 0}\left(\tfrac{1}{\kappa}\|\partial {\boldsymbol x}_0({\boldsymbol \theta})-\partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F\right) \quad
\text{at} \quad t \approx \tfrac{1}{\kappa}\,.
\]
\end{corollary}
In this corollary, we see a trade-off between the linear convergence rate (exponential in $t$) and the linear growth in $t$. When the step size is small ($h=1/L$), the linear rate is slightly slower than with the larger step size ($h=2/(\ell+L)$). However, the term in $t$ is \textit{much smaller} for the small step size. This makes a big difference in the convergence: for $h=1/L$, there is \textit{no} local increase, which is not the case for $h=\frac{2}{\ell+L}$. In the next corollary, we provide a bound on the step size $h$ that guarantees monotone convergence of the Jacobian.
\begin{restatable}{corollary}{monotonicity}\label{cor:monotonicity}
Assuming $G=0$, the bound of Theorem \ref{thm:rate_gd} is monotonically decreasing for $t\geq 1$ if the step size $h$ from Theorem \ref{thm:rate_gd} satisfies $0<h<\sqrt{2}/L$.
\end{restatable}
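The corollary can be illustrated by evaluating the bound of Theorem \ref{thm:rate_gd} (with $G=0$) on a grid of eigenvalues. In the sketch below (the values $\ell=0.5$, $L=10$ are arbitrary), a step size just below the $\sqrt{2}/L$ threshold yields a monotonically decreasing bound, while the step size $2/(L+\ell)$ produces the burn-in bump:

```python
import numpy as np

ell, L = 0.5, 10.0
lams = np.linspace(ell, L, 2001)
ts = np.arange(1, 101)

def bound(h):
    # worst-case Jacobian bound of Theorem (G = 0), up to ||dx0 - dx*||_F
    g = np.abs((1 - h * lams[None, :]) ** (ts[:, None] - 1)
               * (1 + (ts[:, None] - 1) * h * lams[None, :]))
    return g.max(axis=1)

safe = bound(0.99 * np.sqrt(2) / L)     # just below the sqrt(2)/L threshold
large = bound(2 / (L + ell))            # classical worst-case optimal step

assert np.all(np.diff(safe) <= 1e-12)   # monotone decrease
assert large.max() > large[0]           # burn-in bump for the large step
```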
This bound contrasts with the condition on the step size of gradient descent, which is $h\leq 2/L$, with an optimal value of $h=2/(\ell+L)$ \citep{nesterov2004introductory}. This trade-off between asymptotic rate and length of the burn-in phase leads us to formulate:
\titlebox{The curse of unrolling}{\parbox{12.5cm}{
To ensure convergence of the Jacobian with gradient descent, we must either \textbf{1)} accept that the algorithm has a burn-in period proportional to the condition number $1/\kappa$, or \textbf{2)} choose
a small step size that will \textit{slow down} the algorithm's asymptotic convergence.
}}
\subsection{Worst-case Rates for the Chebyshev method}
\label{sub:worst_case_cheby}
We now derive a convergence-rate analysis for the Chebyshev method, which achieves the best worst-case convergence rate for the minimization of a quadratic function with a bounded spectrum.
\paragraph{Chebyshev method and Chebyshev polynomials.} We recall the properties of the Chebyshev method (see e.g. \citep[Section 2]{d2021acceleration} for a survey). As mentioned in \S \ref{scs:worstcase}, the rate of convergence of a first-order method associated with the residual polynomial $P_t$ can be upper bounded by $\max_{\lambda \in[\ell,\,L]} |P_t(\lambda)|$. Let $\tilde C_t$ be the Chebyshev polynomial of the first kind of degree $t$, and define
\begin{align}\label{def:chebyshev_poly}
C_t(\lambda) = \tfrac{\tilde C_t(m(\lambda))}{\tilde C_t(m(0))},\qquad m : [\ell,L]\rightarrow [-1,1],\,\;\; m(\lambda) = \tfrac{2\lambda-L-\ell}{L-\ell}.
\end{align}
A known property of Chebyshev polynomials is that the \textit{shifted and normalized} Chebyshev polynomial $C_t$ is the residual polynomial with smallest maximum value in the $[\ell, L]$ interval.
This implies that the Chebyshev method, which is the method associated with this polynomial, enjoys the \textit{best} worst-case convergence bound on quadratic functions. Algorithmically speaking, the Chebyshev method reads,
\[
{\boldsymbol x}_{t}({\boldsymbol \theta}) = {\boldsymbol x}_{t-1}({\boldsymbol \theta}) - h_t \nabla f({\boldsymbol x}_{t-1}({\boldsymbol \theta}), {\boldsymbol \theta}) + m_t({\boldsymbol x}_{t-1}({\boldsymbol \theta})-{\boldsymbol x}_{t-2}({\boldsymbol \theta}))\,,
\]
where $h_t$ is the step size and $m_t$ the momentum. Those parameters are time-varying and depend only on $\ell$ and $L$.
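The optimality claim can be checked directly on the polynomial side. The sketch below (interval and degree are arbitrary) evaluates the shifted and normalized Chebyshev polynomial of \eqref{def:chebyshev_poly} with NumPy, recovers the classical value $\max_{[\ell,L]}|C_t| = 2/(\xi^t+\xi^{-t})$ with $\xi=(1-\sqrt{\kappa})/(1+\sqrt{\kappa})$, and verifies that it beats the gradient-descent polynomial on the same interval:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

ell, L, t = 0.5, 10.0, 15
kappa = ell / L
lams = np.linspace(ell, L, 20_001)
m = (2 * lams - L - ell) / (L - ell)            # maps [ell, L] to [-1, 1]
m0 = (0 - L - ell) / (L - ell)                  # image of lambda = 0

Tt = Chebyshev.basis(t)                         # Chebyshev poly of 1st kind
Ct = Tt(m) / Tt(m0)                             # shifted, C_t(0) = 1
xi = (1 - np.sqrt(kappa)) / (1 + np.sqrt(kappa))

assert np.isclose(np.abs(Ct).max(), 2 / (xi**t + xi**(-t)), rtol=1e-6)
# gradient descent with the optimal step cannot beat it on the same interval
h = 2 / (L + ell)
assert np.abs(Ct).max() < np.abs((1 - h * lams) ** t).max()
```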
The following theorem gives the convergence rate of the Chebyshev method.
\begin{restatable}{theorem}{boundcheby}\label{thm:bound_cheby} Under Assumptions~\ref{assump:quadratic},\ref{assump:commutative},
let $\xi \stackrel{\text{def}}{=} (1-\sqrt{\kappa})/(1+\sqrt{\kappa})$, and ${\boldsymbol x}_t({\boldsymbol \theta})$ denote the $t^{\text{th}}$ iterate of the Chebyshev method. Then, we have the following convergence rate
\begin{align*}
\|\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F & \leq \text{\colorbox{linearphase}{$\underbrace{\left(\tfrac{2}{\xi^t+\xi^{-t}}\right)}_{\text{exponential decrease}}$}} \Bigg\{\text{\colorbox{burnin}{$\underbrace{\vphantom{\left(\tfrac{1-\kappa}{1+\kappa}\right)^{t-1}}\left| \tfrac{2t^2}{1-\kappa}-1 \right|}_{\text{\vphantom{p}quadratic increase}}$}}\|\partial {\boldsymbol x}_0({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F + \text{\colorbox{burnin}{$\vphantom{\left(\tfrac{1-\kappa}{1+\kappa}\right)^{t-1}} \frac{2t^2}{L-\ell}$}} G \Bigg\}\,.
\end{align*}
In short, the rate of the Chebyshev algorithm for unrolling is in $O( t^2\xi^t)$. Moreover, assuming $G = 0$, the maximum of the upper bound over $t$ can be as large as
\[
\|\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F \leq O_{\kappa\rightarrow 0}\left(\tfrac{2}{\kappa}\|\partial {\boldsymbol x}_0({\boldsymbol \theta})-\partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F\right) \quad \text{at}\quad t\approx 2\sqrt{\tfrac{1}{\kappa}}\,.
\]
\end{restatable}
\paragraph{Discussion.} Despite being optimal for minimization, the rate of the Chebyshev method carries an additional $O(t^2)$ factor. Due to this term, the bound diverges at first, similarly to gradient descent with the optimal step size $h=\frac{2}{\ell+L}$, but peaks sooner. This behavior is visible in Figure \ref{fig:phase_unrolling}.
\section{Accelerated Unrolling: How fast can we differentiate through optimization?}
\label{sec:accelerated_unroling}
We now show how to accelerate unrolled differentiation. We first derive a lower bound on the Jacobian suboptimality and then propose a method based on \textit{Sobolev orthogonal polynomials} \citep{marcellan2015sobolev}, which are extremal polynomials for a norm involving both the polynomial and its derivative.
\subsection{Unrolling is at least as hard as optimization}
\begin{restatable}{proposition}{lowerboundunrolling}\label{prop:lower_bound_unrolling}
Let ${\boldsymbol x}_t$ be the $t$-th iterate of a first-order method. Then, for all iterations $t$ and for all ${\boldsymbol \theta}$, there exists a quadratic function $f$ that verifies Assumption \ref{assump:quadratic} such that $G=0$, and
\begin{equation}\label{eq:lower_bound_unrolling}
\|\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F \geq \frac{2}{\xi^t+\xi^{-t}}\|\partial {\boldsymbol x}_0({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F, \quad \xi = \frac{1-\sqrt{\kappa}}{1+\sqrt{\kappa}}\,.
\end{equation}
\end{restatable}
This result tells us that unrolling is \textit{at least} as difficult as optimization. Indeed, the rate in \eqref{eq:lower_bound_unrolling} is known to be the lower bound on the accuracy for minimizing smooth and strongly convex functions \citep{nemirovski1995information}. Moreover, although we do not know whether the lower bound is tight for all $t$, the rate of the Chebyshev method matches it as $t\rightarrow \infty$.
\subsection{Average-Case Accelerated Unrolling with Sobolev Polynomials}
\label{sub:acc_sobolev}
We now describe an accelerated method for unrolling based on Sobolev polynomials. We first introduce the definition of the Sobolev scalar product for polynomials.
\begin{definition}
The Sobolev scalar product (and its norm) for two polynomials $P,\,Q$ and a density function $\mu$ is defined as
\[
\langle P,\, Q\rangle_{\eta} \stackrel{\text{def}}{=} \int_{\mathbb{R}} P(\lambda)Q(\lambda) \mathop{}\!\mathrm{d} \mu + \eta \int_{\mathbb{R}} P'(\lambda)Q'(\lambda) \mathop{}\!\mathrm{d} \mu, \quad \|P\|^2_{\eta} \stackrel{\text{def}}{=} \langle P, P \rangle_{\eta}\,.
\]
\end{definition}
In the following, we assume $\mu$ is the \textit{expected spectral density} associated with the current problem class, and we discuss practical choices in the next section.
Using this scalar product, we can compute a (loose) upper bound for $\|\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F$ and, in Proposition \ref{prop:optimal_sobolev}, a polynomial minimizing this bound.
\begin{restatable}{proposition}{upperboundjacobian}\label{prop:upper_bound_jacobian}
Assume that $\|\partial{\boldsymbol H}({\boldsymbol \theta}) ({\boldsymbol x}_0({\boldsymbol \theta}) - {\boldsymbol x}_\star({\boldsymbol \theta}))\|_F\leq \eta \|\partial {\boldsymbol x}_0({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F$. Then, under Assumption \ref{assump:quadratic}, \ref{assump:commutative} and \ref{assump:first-order}, we have the following bound for the average-case rate
\[
\mathbb{E}_{{\boldsymbol H}({\boldsymbol \theta})}\|\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F^2 \leq 2 \|P_t\|^2_{\eta} \, \mathbb{E}_{{\boldsymbol H}({\boldsymbol \theta})}\|\partial {\boldsymbol x}_0({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F^2 .
\]
\end{restatable}
\begin{restatable}{proposition}{optimalsobolev}\label{prop:optimal_sobolev}
Let $\{S_t\}$ be a sequence of orthogonal Sobolev polynomials, i.e., $\langle S_i,\;S_j \rangle_{\eta} >0$ if $i=j$ and $0$ otherwise, normalized such that $S_i(0)=1$. Then, the residual polynomial that minimizes the Sobolev norm can be constructed as
\[
P_t^\star = \argmin_{P\in\mathcal{P}_t:P(0)=1} \langle P,\;P\rangle_{\eta} = \frac{1}{A_t}\sum_{i=0}^t a_i S_i,\quad \text{where} \quad a_i = \frac{1}{\|S_i\|^2_{\eta}} \quad \text{and} \quad A_t = \sum_{i=0}^t a_i\,.
\]
Moreover, we have that $\|P_t^{\star}\|^2_{\eta} = 1/A_t$.
\end{restatable}
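The construction of $P_t^\star$ can be verified numerically: build Sobolev-orthogonal polynomials by Gram-Schmidt, assemble $P_t^\star$ from the stated formula, and compare with a direct constrained minimization of $\langle P,P\rangle_\eta$ over $\{P: P(0)=1\}$. In the sketch below, $\mu$ is taken uniform on $[\ell,L]$ and all constants ($\ell$, $L$, $\eta$, the degree) are illustrative choices:

```python
import numpy as np
from numpy.polynomial import Polynomial, legendre

ell, L, eta, deg = 0.5, 10.0, 2.0, 3

# Gauss-Legendre quadrature on [ell, L]: exact for polynomial integrands
nodes, w = legendre.leggauss(12)
lam = 0.5 * (L - ell) * nodes + 0.5 * (L + ell)
w = 0.5 * (L - ell) * w

def sob(P, Q):   # Sobolev product <P, Q>_eta for the uniform mu on [ell, L]
    return w @ (P(lam) * Q(lam)) + eta * (w @ (P.deriv()(lam) * Q.deriv()(lam)))

# Gram-Schmidt -> Sobolev-orthogonal S_i, normalized so that S_i(0) = 1
S = []
for i in range(deg + 1):
    p = Polynomial.basis(i)
    for q in S:
        p = p - (sob(p, q) / sob(q, q)) * q
    S.append(p / p(0.0))

a = np.array([1.0 / sob(s, s) for s in S])
P_star = sum(si * ai for si, ai in zip(S, a)) / a.sum()

# direct minimization of <P, P>_eta subject to P(0) = 1 (monomial basis)
Gram = np.array([[sob(Polynomial.basis(i), Polynomial.basis(j))
                  for j in range(deg + 1)] for i in range(deg + 1)])
c = np.zeros(deg + 1); c[0] = 1.0       # P(0) is the constant coefficient
coef = np.linalg.solve(Gram, c)
P_direct = Polynomial(coef / (c @ coef))

assert np.allclose(P_star(lam), P_direct(lam), atol=1e-6)
assert np.isclose(sob(P_star, P_star), 1.0 / a.sum())
```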
\paragraph{Limited burn-in phase.} Using the algorithm associated with $P^\star$ with parameters $\eta$ and $\mu$, we have
\[
\mathbb{E}_{{\boldsymbol H}({\boldsymbol \theta})} \|\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F^2 \leq 2\cdot \mathbb{E}_{{\boldsymbol H}({\boldsymbol \theta})} \|\partial {\boldsymbol x}_0({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F^2\,.
\]
This inequality follows directly from Proposition \ref{prop:upper_bound_jacobian} and the optimality of $P_t^\star$ for $\|\cdot\|_\eta$: we have $\|P_t^\star\|_\eta \leq \|P_{t-1}^\star\|_\eta$ (because $P_{t-1}^\star$ is a feasible solution of the degree-$t$ problem) and $\|P_t^\star\|_\eta \leq 1$ (because the constant polynomial $P=1$ is a feasible solution for any $t$). This is much better than the maximum bump of $O(1/\kappa)$ of gradient descent (Corollary~\ref{cor:worst_case_rates_gd}) and of Chebyshev (Theorem \ref{thm:bound_cheby}).
\subsection{Gegenbauer-Sobolev Algorithm}\label{scs:sobolevalgo}
\label{sub:avg_gegen}
In most practical scenarios, one does not have access to the expected spectral density $\mu$. Fortunately, the rates of average-case accelerated algorithms have been shown to be robust to distribution mismatch~\citep{cunha2021only}.
In these cases, we can approximate the expected spectral density by some distribution with the same support. A classical choice is the Gegenbauer parametric family indexed by $\alpha \in\mathbb{R}$, which encompasses important distributions such as the density associated with Chebyshev polynomials or the uniform distribution:
\begin{equation}\label{eq:gegenbaueur_density}
\mu(\lambda) = \tilde \mu(m(\lambda)), \quad \tilde \mu(x) = (1-x^2)^{\alpha-\frac{1}{2}}\quad \text{and} \quad m : [\ell,L]\rightarrow [-1,1],\, m(\lambda) = \frac{2\lambda-L-\ell}{L-\ell}\,.
\end{equation}
We call the sequence of Sobolev orthogonal polynomials for this distribution \emph{Gegenbauer-Sobolev} polynomials.
Although in general Sobolev orthogonal polynomials do not enjoy a three-term recurrence as classical orthogonal polynomials do, for this class it is possible to build a recurrence for $S_t$ involving only $S_{t-2}$, $Q_t$ and $Q_{t-2}$, where $\{Q_t\}$ is a sequence of Gegenbauer polynomials \citep{marcellan1994gegenbauer}.
Unfortunately, existing work on Gegenbauer and Gegenbauer-Sobolev polynomials considers the unshifted distribution $\tilde \mu$, but not $\mu$, and does not consider the residual normalization $Q_t(0)=1$ or $S_t(0)=1$. After (painful) changes to shift and normalize the polynomials, we obtain a three-stage algorithm, summarized in the next theorem.
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{figures/upper_bound_algo.pdf}
\caption{(Theoretical) \textbf{Worst-case convergence rates} for different algorithms, with $\ell = 0.5$, $L=10$, $\alpha = 1$ and $G = 0$. The upper bound of a method associated with the polynomials $\{P_t\}$ is defined as $\max_{\lambda\in[\ell,L]}|P_t(\lambda)|$, where $t$ is the iteration counter. The plot compares gradient descent with large ($\tfrac{2}{L + \ell}$) and small ($\tfrac{1}{L}$) step sizes, Chebyshev, Sobolev (with $\eta = L/\ell$), and their asymptotic variants. We recognize in these curves the peaks of gradient descent and Chebyshev from Figure \ref{fig:phase_unrolling}.
}
\label{fig:upper_bound_algo}
\end{figure}
\begin{theorem}[Accelerated Unrolling] \label{thm:soboloev_method}
Let $P_t^{\star}$ be defined in Proposition \ref{prop:optimal_sobolev}, where the Sobolev product is defined with the density function $\mu$ \eqref{eq:gegenbaueur_density}. Then, the optimization algorithm associated to $P_t^\star$ reads
\begin{align}
{\boldsymbol y}_t & = {\boldsymbol y}_{t-1} - h_t \nabla f({\boldsymbol y}_{t-1}, {\boldsymbol \theta}) + m_t ({\boldsymbol y}_{t-1}-{\boldsymbol y}_{t-2}) \label{eq:gegenbaueur_algo} \\
{\boldsymbol z}_t & = c^{(1)}_t {\boldsymbol z}_{t-2} + c^{(2)}_t {\boldsymbol y}_t - c^{(3)}_t {\boldsymbol y}_{t-2} \label{eq:sobolev_algo} \\
{\boldsymbol x}_t & = \frac{A_{t-1}}{A_t} {\boldsymbol x}_{t-1} + \frac{a_t}{A_t} {\boldsymbol z}_t\,, \label{eq:avg_sobolev_algo}
\end{align}
for some step size $h_t$, momentum $m_t$, parameters $c^{(1)}_t,\,c^{(2)}_t,\,c^{(3)}_t$ that depend on $\alpha$, $\ell$, $L$ and $\eta$, and $A_t,\, a_t$ are defined in Proposition \ref{prop:optimal_sobolev}, whose recurrence are detailed in Appendix \ref{sec:sequences_sobolev}. Moreover, when $t \rightarrow \infty$, the recurrence simplifies into
\begin{align}
{\boldsymbol y}_t & = {\boldsymbol y}_{t-1} - h \nabla f({\boldsymbol y}_{t-1}, {\boldsymbol \theta}) + m ({\boldsymbol y}_{t-1}-{\boldsymbol y}_{t-2})\\
{\boldsymbol x}_t & = {\boldsymbol y}_{t} + m({\boldsymbol x}_{t-1}-{\boldsymbol y}_{t-2})\,,
\end{align}
where $m = \left(\frac{1-\sqrt{\kappa}}{1+\sqrt{\kappa}}\right)^2$ and $h = \left(\frac{2}{\sqrt{\ell}+\sqrt{L}}\right)^2$ are the momentum and step size of Polyak's Heavy Ball. Moreover, as $t\rightarrow\infty$, we have the same asymptotic linear convergence as the Chebyshev method, $
\lim\limits \sqrt[t]{\frac{\|\partial {\boldsymbol x}_t({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F}{\|\partial {\boldsymbol x}_0({\boldsymbol \theta}) - \partial {\boldsymbol x}_\star({\boldsymbol \theta})\|_F}} \leq \sqrt{m}
$.
\end{theorem}
The accelerated algorithm for unrolling is divided into three parts. First,~\eqref{eq:gegenbaueur_algo} corresponds to an algorithm whose associated polynomials are Gegenbauer polynomials. This is expected, as \citet{pedregosa2020averagecase} identified that all average-case optimal methods take the form of gradient descent with momentum. Second,~\eqref{eq:sobolev_algo} builds the Sobolev polynomial as a weighted combination of the iterates ${\boldsymbol y}_t$. Finally,~\eqref{eq:avg_sobolev_algo} is the weighted average of Sobolev polynomials that builds $P_t^\star$ in Proposition \ref{prop:optimal_sobolev}.
The non-asymptotic algorithm is rather complicated to implement; see Appendix \ref{sec:sequences_sobolev}. Moreover, it requires a bound on the spectrum of ${\boldsymbol H}({\boldsymbol \theta})$, namely $[\ell,\,L]$, and one also has to choose an associated expected spectral density $\mu(\lambda)$ (parametrized by $\alpha$) and the parameter $\eta$. Nevertheless, this is the method that achieves the best performance for problems that satisfy our assumptions.
Surprisingly, the \textit{asymptotic} version is extremely simple, as it corresponds to \textit{a weighted average of Heavy-Ball iterates}: the only required parameters are $\ell$ and $L$. This means that, asymptotically, the algorithm is \textit{universally} optimal, i.e., it achieves the optimal asymptotic rate as long as we can identify bounds on the spectrum of ${\boldsymbol H}({\boldsymbol \theta})$.
Such universal properties were identified previously by \citet{scieur2020universal}, who showed that all average-case optimal algorithms converge to Polyak's momentum independently of the expected spectral density $\mu$ (up to mild assumptions).
We have the same phenomenon here, but with the additional surprising (and counter-intuitive) feature that the asymptotic algorithm is also \textit{independent of $\eta$}.
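The asymptotic recurrence is indeed simple to implement. Below is a sketch of the averaged Heavy-Ball scheme on a random quadratic (the problem instance and the initialization ${\boldsymbol y}_{-1}={\boldsymbol y}_0={\boldsymbol x}_0$ are our illustrative choices), checking that the averaged iterate converges to the minimizer:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 20
A = rng.standard_normal((d, d))
H = A @ A.T + np.eye(d)                  # well-conditioned quadratic
b = rng.standard_normal(d)
x_star = np.linalg.solve(H, b)

lam = np.linalg.eigvalsh(H)
ell, L = lam.min(), lam.max()
kap = ell / L
m = ((1 - np.sqrt(kap)) / (1 + np.sqrt(kap))) ** 2   # Heavy-Ball momentum
h = (2 / (np.sqrt(ell) + np.sqrt(L))) ** 2           # Heavy-Ball step size

ys = [rng.standard_normal(d)]            # y_{-1} = y_0 = x_0 (our choice)
ys.append(ys[0])
x = ys[0]
for _ in range(300):
    y_new = ys[-1] - h * (H @ ys[-1] - b) + m * (ys[-1] - ys[-2])
    ys.append(y_new)
    x = y_new + m * (x - ys[-3])         # x_t = y_t + m (x_{t-1} - y_{t-2})

assert np.linalg.norm(x - x_star) < 1e-8 * np.linalg.norm(x_star)
```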
\section{Experiments and Discussion} \label{sec:experiments}
\subsection{Experiments on least squares objective}
We compare multiple algorithms for estimating the Jacobian \eqref{eq:obj_fun} of the solution of a ridge regression problem (Example \ref{example:ridge}) for a fixed value of $\theta=10^{-3}$.
Figure~\ref{fig:intro_jac} shows the objective and Jacobian suboptimality on a ridge regression problem with the breast-cancer\footnote{\url{https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)}} as underlying dataset.
Figure~\ref{fig:empirical_comparison} shows the Jacobian suboptimality as a function of the number of iterations, on both the breast-cancer and bodyfat\footnote{\url{http://lib.stat.cmu.edu/datasets/}} dataset, and for a synthetic dataset (where ${\boldsymbol H}({\boldsymbol \theta})$ is generated as ${\boldsymbol A}^\top{\boldsymbol A}$, where each entry in ${\boldsymbol A}$ is generated from a standard Gaussian distribution).
Appendix \ref{apx:experiments} contains further details and experiments on a logistic regression objective.
We observe the early suboptimality increase of gradient descent and of the Chebyshev algorithm, as predicted by Theorems \ref{thm:rate_gd} and \ref{thm:bound_cheby}. Compared to Figure \ref{fig:upper_bound_algo}, which showed the theoretical rates, there is a remarkable agreement between theory and practice: the early increase, the asymptotic rate and the ordering of the methods all match the theoretical prediction. We also see that Sobolev is the best performing algorithm in practice, as it avoids the early increase while matching the accelerated asymptotic rate of Chebyshev.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/empirical_comparison.pdf}
\caption{{\bfseries Empirical comparison} of the Sobolev method introduced in \S \ref{scs:sobolevalgo} (with $\alpha=1$ and $\eta=1$), the Chebyshev method and Gradient descent on 3 different datasets. The Sobolev algorithm has the shortest burn-in phase, does not locally diverge and has an accelerated asymptotic rate of convergence. }
\label{fig:empirical_comparison}
\end{figure}
\subsection{Experiments on logistic regression objective}
In this section we provide some extra experiments on a non-quadratic objective. We choose the following regularized logistic regression objective
\begin{equation}
f({\boldsymbol x}, \theta) = \sum_{i=1}^n \varphi({\boldsymbol A}_i^\top {\boldsymbol x}, \sign(b_i)) + \frac{\theta}{2}\|{\boldsymbol x}\|^2,
\end{equation}
where $\varphi$ is the binary logistic loss and $({\boldsymbol A}, {\boldsymbol b})$ are the data, generated both from a synthetic dataset and from the breast-cancer dataset as described in \S \ref{sec:experiments}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{figures/phases_logistic.pdf}
\caption{{\bfseries Two-phase dynamics in logistic regression.} The two-phase dynamics predicted by Corollary \ref{cor:worst_case_rates_gd} and Theorem \ref{thm:bound_cheby} empirically hold for a logistic regression objective. This objective is not covered by our theory, since it violates the quadratic assumption (Assumption \ref{assump:quadratic}).}
\label{fig:phase_unrolling_logistic}
\end{figure}
The only significant difference with the least squares loss is the range of step-size values that exhibit the initial burn-in phase. While for the quadratic loss these are step sizes close to $2/(L + \ell)$, in the case of logistic regression $L$ is a crude upper bound, and so this step size is not necessarily the one that achieves the fastest convergence rate. The featured two-phase curve was computed using the step size with the fastest asymptotic rate, found through a grid search over step-size values.
\textbf{Limitations.} Our theoretical results are limited to first-order methods applied to quadratic functions. While many applications use first-order methods, the quadratic Assumption \ref{assump:quadratic}, as well as the commutativity Assumption \ref{assump:commutative}, are somewhat restrictive. However, experiments on objectives violating the quadratic and commutativity assumptions (Appendix \ref{apx:experiments} and \ref{scs:commutatity}) show that the two-phase dynamics empirically carry over to more general objectives.
The Sobolev algorithm developed in this paper, however, might not generalize well outside the scope of quadratics. Nevertheless, the development of this accelerated method for unrolling highlights that we can adapt the design of current optimization algorithms so that they might perform better for automatic differentiation.
\clearpage
\printbibliography
\clearpage
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{}
\item Did you discuss any potential negative societal impacts of your work?
\answerNA{} (paper of theoretical nature)
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerYes{}
\item Did you include complete proofs of all theoretical results?
\answerYes{}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerNo{}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerNo{}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerNA{}
\item Did you mention the license of the assets?
\answerNA{}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerNA{}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNA{}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{}
(Datasets are not sensitive)
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\answerNA{}
\end{enumerate}
\clearpage
astro-ph/9311051
\section*{Abstract}
We review how the various large-scale data constrain cosmological parameters
and, consequently, theories for the origin of large-scale structure in the
Universe. We discuss the form of the power spectrum implied by the correlation
data of galaxies and argue by comparing the velocity field implied by the
distribution of light with the observed velocity flows that the bias parameter,
$b$,
is likely to be constant in the linear regime. This then allows one to estimate
the density parameter, $\Omega$,
and $b$ directly from the \underline{data} on $\xi(r)$ and
the velocity fields. We show that the data are consistent with low values of
$\Omega^{0.6}/b$. We discuss the ways to normalise the optical data at $z\sim0$
directly to the COBE (or other microwave background) data. The data on high-$z$
\underline{galaxies} allow one to further constrain the shape of the
\underline{primordial} power spectrum at scales which are non-linear today
($< 8h^{-1}$Mpc) and we discuss the consistency of the data with inflationary
models normalised to the large-scale structure observations.
\section{Introduction}
The purpose of this review is to discuss the constraints the current
astronomical data place on the values of cosmological parameters, such
as $\Omega$, the \underline{primordial} power spectrum $P(k)$ etc. Indeed,
the last few years have brought a wealth of observational data in the optical,
radio and microwave bands that now allows one to constrain these parameters
more tightly and thus to gain further insight into early-Universe physics,
particularly in light of the inflationary picture.
We first discuss the implications for the inflationary scenario of
the (realistic) possibility that the Universe may turn out to be open.
We point out that the Grischuk-Zeldovich (1978) effect combined
with the COBE observations of the quadrupole
anisotropy of the microwave background radiation (MBR) would then preclude the
possibility that the Universe's homogeneity was produced by inflationary
expansion during its early evolution. Next, in Section 3 we discuss the
constraints on the \underline{primordial} power spectrum of the
\underline{light} distribution on scales $10-100
h^{-1}$Mpc from the optical data at $z\sim0$. Section 4
deals with comparing the (mass) power spectrum deduced from the peculiar
velocities data and that of the light distribution. We point out that the
two are consistent with each other and therefore it is indeed plausible to
assume that they are proportional to each other, i.e. the bias factor $b$
is constant. We further note that the comparison of the two leads to low
values of $\Omega^{0.6}/b$. In Section 5 we discuss the normalisation
of the power spectrum to the COBE results and whether the latter necessarily
imply flat Universe and/or Harrison-Zeldovich spectrum of the primordial
density fluctuations. Section 6 discusses the constraints on $P(k)$ on small
scales, which are non-linear today, from the data on high-redshift
\underline{galaxies} and the Uson et al (1992) object. We conclude in Section 7.
\section{$\Omega$ and inflation}
The value of the density parameter, or more precisely the curvature radius
of the Universe, is certainly the most critical test of inflation. Some
data seem consistent with a flat Universe (Kellerman 1993 and these
proceedings; Yahil 1993, these proceedings). But I think it is fair to say
that most observational data point to low values of $\Omega$. In
particular, the age ($t_0$) measurements indicate that $H_0t_0 > \frac{2}{3}$
(e.g. Lee 1992 and these proceedings) and the dynamics of the Local Group
strongly prefer $\Omega \sim 0.1-0.2$ (Peebles 1989 and Tully 1993, these
proceedings; see also Kashlinsky 1992a and the discussion in Sec. 4).
What would it mean if the \underline{data} were to prove that the Universe is
old and open? There were some suggestions in the past attempting to accommodate
an open Universe within finely-tuned inflation (e.g. Ellis 1988; Steinhardt
1990), the idea being that the Universe has undergone only the minimal number
of $e$-foldings (of order $\sim$65) necessary to make it homogeneous on scales
not much greater than the present horizon, $R_{hor} \sim 6000h^{-1}$Mpc. This
would then make the curvature radius $\sim R_{hor}$, leading to $\Omega$ as low
as $\sim 0.1-0.3$, but would also imply that we just happened to be born at the
time when the horizon had not yet grown large enough to encompass more than the
inflation-blown homogeneous bubble.
The problem with such finely-tuned inflationary models is that they would
violate the COBE measurements of the quadrupole anisotropy,
as discussed by Frieman, Kashlinsky and Tkachev (1993) (see also Turner 1991).
In that case the Universe would be inhomogeneous on scales greater than the
horizon. If the amplitude of the superhorizon harmonics of lengthscale $l$
is $h$, it would cause, via the Grischuk-Zeldovich (1978) effect, a
quadrupole microwave background anisotropy of amplitude
$Q \simeq h(R_{hor}/l)^2$. Now both $l$ and $\Omega$ are, in inflationary
scenarios, functions of the number of $e$-foldings, $N$, while the amplitude
of $Q$ produced by the superhorizon inhomogeneities cannot exceed the COBE
value of $Q_{COBE} \simeq 5\times 10^{-6}$.
Thus one can express $l$ in terms of $\Omega$ and then
rewrite the constraint imposed by the Grischuk-Zeldovich
effect \underline{and} the COBE data
directly as a constraint on the present value
of the density parameter (Frieman, Kashlinsky and Tkachev 1993):
\begin{equation}
1-\Omega \leq \frac{Q_{COBE}}{h} \simeq O(10^{-6})
\end{equation}
The above means that if the Universe went through an inflationary phase
during its evolution (so that $h \gg 1$ on scales where inflationary smoothing
was inefficient), the COBE \underline{observations} require it to have a
density parameter within a factor $\sim 10^{-6}$ of unity.
Conversely, if observations prove that the Universe is open, this would mean
that the homogeneity of the Universe did not originate in inflationary
expansion.
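To make the scaling of eq.(1) concrete, here is a minimal numerical reading of the bound. Only $Q_{COBE}$ comes from the text; the illustrative values of the superhorizon amplitude $h$ are ours.

```python
# Eq.(1): 1 - Omega <= Q_COBE / h for a superhorizon amplitude h >> 1.
Q_COBE = 5e-6
for h in (1.0, 10.0, 100.0):
    print(f"h = {h:5.0f}:  1 - Omega <= {Q_COBE / h:.0e}")
```

Even a modest superhorizon amplitude $h \sim 1$ already pins $\Omega$ to unity within a few parts in a million.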
\section{Structure at $z\sim 0$ and the primordial power spectrum}
The distribution of galaxies and galaxy systems (light) is now measured
fairly accurately
on scales up to $\sim 100h^{-1}$Mpc from various independent datasets. All
the datasets give results which are in good agreement with each other and,
if the light distribution is at least proportional to that of mass, all find
substantially more (light) power on large scales than simple inflationary
models (e.g. cold dark matter -- CDM) would predict. As
was mentioned, the data discussed
in this section measure the distribution of light and in order to get
information about the mass power spectrum one has to make assumptions about
the interdependence between the light and mass distributions.
We discuss below in chronological order the most accurate datasets
for determining the power spectrum of light on large scales:
1) Cluster-cluster correlation function was measured for Abell (Bahcall and
Soneira 1983) and Zwicky (Postman et al 1986) clusters. The data showed
that the correlation function of light remains positive on scales up to
$\simeq 100h^{-1}$Mpc and is roughly $\xi(r) \propto r^{-2}$ thus implying
the power spectrum of light, $P(k) \propto k^n$, of $n\simeq -1$ (Kaiser 1984;
Kashlinsky 1987,1991a). Furthermore, the data showed a systematic increase of
the correlation amplitude with cluster richness (or mass). This, as suggested
by Kashlinsky (1987, 1991a), can be explained within the gravitational
clustering model of cluster formation; the dependence of the increase on
cluster masses requires $n \simeq -1$ and is inconsistent with the standard
CDM model (Kashlinsky 1991a). There were some suggestions that the
cluster-cluster
correlation is strongly biased on large scales by projection effects
(Sutherland
1988), but there is some evidence to the contrary (e.g. Szalay et al 1989).
Furthermore, since the power
implied by it is consistent with other, later findings discussed below,
there is a good chance that it reflects the true distribution of light, if not
of the mass itself.
2) The two-point correlation function of galaxies has been measured
in the APM survey (Maddox et al 1990). The measurements were done in the
$b$-band and covered approximately one square steradian of $\sim$2.5 million
galaxies. This is probably the most accurate measurement of the projected
angular correlation function $w(\Theta)$ with the systematic error being
less than $2\times 10^{-3}$. The results show significantly more power than
the standard CDM model would predict and imply $n<0$ on scales $<100h^{-1}$Mpc.
3) Picard (1991) has compiled the POSS catalogue in the $r$-band
of approximately 400,000
galaxies in projection and determined $w(\Theta)$ which is roughly
consistent with the APM survey. Similarly, the COSMOS machine results give
$w(\Theta)$ in good agreement with the APM data and the cluster-cluster
correlations (Collins et al 1992).
4) Redshift surveys allow one to map the galaxy distribution in the 3-D space,
but here one has to correct for the distortions induced by the peculiar
velocity flows (Kaiser 1988). Vogeley et al (1992) have measured the power
spectrum of the light distribution from the CfA redshift survey and
find results consistent with the ones mentioned above. Fisher et al
(1993) find similar results from the catalogue of $\sim$5,000 IRAS galaxies.
5) QDOT analysis of counts in cells (Saunders et al 1991) for
$\sim$2,000 IRAS galaxies gives results consistent with the above.
Thus the consistency of the above independent studies of different objects,
in different wavebands and at different depths indicates a good probability
of the
reality of these results. Consequently, it may make little sense to discuss
cosmological models (e.g. CDM) in the context of only one set of measurements
(e.g. COBE). One has to compare theories with \underline{all} observational
data, i.e. to normalise them simultaneously to the large-scale data, that
map the distribution of matter at the present epoch ($z\sim0$) and MBR
observations which map the mass distribution at the last scattering surface
(in the absence of reheating $z\sim 1100$). We devote the rest of this review
to discussing how such normalisation can be done (Juszkiewicz et al 1987;
Gorski 1991; Kashlinsky, 1991b;1992a,b,c) and what information it carries.
So what is the power spectrum (of light) implied by the large-scale structure
data? Below, we use mainly the APM data, but as discussed above the other
datasets would give consistent results. For small angles $\Theta$ the two-point
correlation function $w(\Theta)$ decreases with $\Theta$ as $\Theta^{-\gamma}$
with $\gamma\simeq 0.7$. This implies a power spectrum $P(k) \propto
k^{\gamma-2}$ for sufficiently large $k$ (small scales). On larger angular
scales $w(\Theta)$ falls off rapidly and its signal becomes lost in the
systematic noise ($\simeq 2\times 10^{-3}$). The fall-off may imply the
following things: 1) the power spectrum goes into the white-noise regime
($n=0$) leading to $\xi(r)\simeq0$; 2) $P(k)$ goes into the Harrison-Zeldovich
regime, $n=1$, where the correlation function is negative and falls off rapidly
with $r$: $\xi(r) \propto -r^{-4}$; 3) the power spectrum has a power index
$n$ even greater than the Harrison-Zeldovich value of 1, in which case the
correlation function decreases even more rapidly with $r$: $\xi(r) \propto
-r^{-(n+3)}$ and the signal gets lost in the noise. Thus the only conclusion one
can make from the fall-off in $w(\Theta)$ at large $\Theta$ is that
these scales correspond to the power index $n\geq0$ at sufficiently small $k$.
Various forms for the fit were proposed by Kashlinsky (1991a,b) and Peacock
(1991).
Below we will adopt the form from Kashlinsky (1992c) which is particularly
simple to use for quick estimates and which gives essentially the same results
as the former two:
\begin{equation}
P(k) = \frac{2 \pi^2 \xi(r_8)}{k_0^3 \Phi_\xi(k_0 r_8; n)} \times
\left\{ \begin{array}{ll}
(\frac{k}{k_0})^n, & k<k_0\\
(\frac{k}{k_0})^{\gamma-2}, & k>k_0
\end{array}
\right.
\end{equation}
The above has been normalised to the observed value, $\xi(r_8)=
(r_8/r_*)^{-\gamma-1}$, at $r_8=8h^{-1}$Mpc with $r_*=5.5h^{-1}$Mpc, and
$\Phi_\xi(y;n) \equiv \int_0^1 x^{2+n} j_0(xy)\,dx + \int_1^\infty x^{\gamma}
j_0(xy)\,dx$.
The value of the transition scale, $k_0^{-1}$, must be determined from matching
eq.(2) to the APM data on $w(\Theta)$. Note, that because of the rapid fall-off
in $\xi(r)$ for $n\geq0$, the value of $k_0^{-1}$ is rather insensitive to the
precise value of the free parameter $n$ as long as it remains positive on
large scales. Kashlinsky (1992c) has analysed the APM
data in the narrow magnitude slices, each slice $\Delta m\simeq 0.5$ wide,
thereby reducing effects due to evolution of galaxies lying at different
depths and finds $k_0^{-1}=50h^{-1}$Mpc from matching (2) to the APM data.
Peacock's (1991) form for $P(k)$ would give similar results when applied to
the non-scaled APM data in the narrow magnitude bins.
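As a sketch, the broken power law of eq.(2) can be coded up directly; here the overall prefactor $2\pi^2\xi(r_8)/(k_0^3\Phi_\xi)$ is a single constant, which we set to unity for illustration. Parameter values are the ones quoted in the text.

```python
import numpy as np

gamma = 0.7        # small-angle slope of w(Theta) from the APM data
n = 1.0            # large-scale index, a free parameter with n >= 0
k0 = 1.0 / 50.0    # transition wavenumber, k0^{-1} = 50 h^{-1} Mpc

def P_shape(k):
    """Shape of eq.(2), up to the constant prefactor 2*pi^2*xi(r_8)/(k0^3*Phi)."""
    k = np.asarray(k, dtype=float)
    return np.where(k < k0, (k / k0) ** n, (k / k0) ** (gamma - 2.0))

# the two branches agree at k = k0, so the spectrum is continuous there
assert np.isclose(P_shape(k0), 1.0)
```

By construction the spectrum rises as $k^n$ on large scales and falls as $k^{\gamma-2}$ on small scales, matching continuously at $k_0$.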
\section{Normalising to velocity field: does light trace mass?}
How is the power spectrum of the light distribution discussed in the previous
section related to the power spectrum of mass, which, if one accepts the
inflationary prejudices, is uniquely specified by the early Universe physics?
Obviously within the framework of inflationary models, the transition scale
to the Harrison-Zeldovich regime is not a free parameter and is approximately
equal to the horizon scale at the epoch of matter-radiation equality.
It is thus expected that for such models $k_0^{-1}$ should be $\simeq 13
(\Omega h)^{-1}h^{-1}$Mpc, so that the correlation function of mass
for the standard CDM model ($\Omega=1, h=1/2$) has zero crossing at $\simeq
30h^{-1}$Mpc. This is considerably smaller than the value of $k_0^{-1}$
indicated by the data.
The answer to the above
question comes from comparing the power spectrum implied
by the peculiar velocity data (which map the \underline{mass} distribution)
with that of light discussed in the previous section. This has been done by
Kashlinsky (1992a), who proposed a method to relate the velocity field directly
to the correlation function, thereby eliminating the power spectrum entirely
from discussion. He then computed the peculiar velocity field implied directly
by the APM data on $w(\Theta)$ (assuming that the latter is proportional to
the mass power spectrum) and compared the results with the Great Attractor
peculiar field. The comparison is plotted on Fig.2 of Kashlinsky (1992a), which
shows that the peculiar velocity field (its amplitude \underline{and} coherence
length) due to the APM data is in good agreement with the Great Attractor
data. This therefore suggests that it is indeed reasonable to assume that at
least in the linear regime (scales $r>8h^{-1}$Mpc) the bias factor, $b$, can
(and must?) be assumed to be constant with scale.
Furthermore, using the methods developed in Kashlinsky (1992a) one can
determine
the density parameter $\Omega$ and the bias factor $b$ by comparing the
peculiar velocity data with the velocity field predicted by the APM. For
simplicity, we reformulate this method in terms of the three-dimensional
correlation function, $\xi(r)$. The ``dot'' peculiar velocity correlation
function is defined as $\nu(r) =<\!\mbox{\boldmath$v$}(\mbox{\boldmath$x$})
\cdot \mbox{\boldmath$v$}(\mbox{\boldmath$x$}+\mbox{\boldmath$r$})\!>$, and,
if the light and mass power spectra are proportional to each
other, the power spectrum can be eliminated from discussion
and one can relate $\nu(r)$ directly
to $\xi(r)$ (see Kashlinsky 1992a for details). The relation between the two is
given by the second order differential equation:
\begin{equation}
\nabla^2 \nu(r) = -\frac{\Omega^{1.2}}{b^2} H_0^2 \xi(r)
\end{equation}
As discussed in the previous section, the APM data suggest that the zero
crossing of $\xi(r)$ occurs on a large scale,
$\simeq 2.5k_0^{-1}\simeq 130h^{-1}$Mpc. Thus, as follows
from eq.(2), on scales on which the velocity
data are most reliably determined ($\leq 60-70h^{-1}$Mpc), $\xi(r)$ is to a good
approximation given by $\xi(r) \simeq (r/r_*)^{-\gamma-1}$. Using this
approximation for $\xi(r)$ (rooted in \underline{observations}), one can
solve eq.(3)
analytically using as one boundary condition the fact that $\nu(0)$ must be
finite. Defining $V_* \equiv H_0r_* = 550$km/sec, the solution is given by:
\begin{equation}
\nu(r) = \nu(0) - \frac{\Omega^{1.2}}{b^2} \frac{V_*^2}{(1-\gamma)(2-\gamma)}
(\frac{r}{r_*})^{1-\gamma}
\end{equation}
The observed value of
the dot velocity correlation function at zero is given by, e.g.,
pairwise velocities, cluster velocity dispersions, etc., and is thought to be
$\sqrt{\nu(0)}\simeq 500-700$km/sec (e.g. Peebles 1987 and references cited
therein). The lower end of that range would be in better agreement with the
dipole MBR anisotropy (local) motion of $\simeq 630$km/sec.
The data on $\nu(r)$ were determined on scales $\leq 60h^{-1}$Mpc by
Bertschinger et al (1990) and we use their values at two linear
scales, 40 and 60$h^{-1}$Mpc, to determine from eq.(4) the values of $\nu(0)$
implied by the data for various values of $\Omega^{0.6}/b$. The results are
shown in the Table 1 below for $\gamma=0.7$.\vspace{0.3cm}\\
Table 1.
\begin{tabular}
{c | c c c } & $\Omega^{0.6}/b$ & {$r=40h^{-1}$Mpc} & {$r=60h^{-1}$Mpc} \\
$\sqrt{\nu(r)}$ & & 388 km/sec & 330 km/sec \\
\hline
& 1 & 1250 km/sec & 1300 km/sec \\
$\sqrt{\nu(0)}$ & 0.5 & 708 km/sec & 709 km/sec \\
& 0.3 & 526 km/sec & 500 km/sec\\
\hline
\end{tabular}
\vspace{0.3cm}\\
The second row shows the data values of $\sqrt{\nu(r)}$ at 40 and
60$h^{-1}$Mpc used in computing the $\sqrt{\nu(0)}$ shown in Table 1 against the
values of $\Omega^{0.6}/b$. One can see from the Table that the values of
$\Omega^{0.6}/b$ preferred by the data on $\xi(r), \nu(r)$ and $\nu(0)$ are:
\begin{equation}
(0.2-0.3) < \frac{\Omega^{0.6}}{b} < (0.4-0.5)
\end{equation}
This analytical estimate
is consistent with our earlier results (Kashlinsky 1992a) and the
results presented by Brent Tully at this meeting, but disagrees with the POTENT
determination of these parameters (Yahil 1993, these proceedings).
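Eq.(4) makes Table 1 straightforward to reproduce. The sketch below inverts it for $\nu(0)$, with $V_*$, $r_*$ and $\gamma$ taken from the text; it recovers the tabulated $\sqrt{\nu(0)}$ to within a few km/sec of rounding.

```python
import numpy as np

gamma, r_star, V_star = 0.7, 5.5, 550.0   # r_* in h^-1 Mpc, V_* = H_0 r_* in km/s

def sqrt_nu0(sqrt_nu_r, r, beta):
    """Invert eq.(4); beta stands for Omega^{0.6}/b, so Omega^{1.2}/b^2 = beta^2."""
    term = (beta**2 * V_star**2 / ((1 - gamma) * (2 - gamma))
            * (r / r_star) ** (1 - gamma))
    return np.sqrt(sqrt_nu_r**2 + term)

# rows of Table 1: beta, sqrt(nu(0)) from r = 40 and from r = 60 h^-1 Mpc
for beta in (1.0, 0.5, 0.3):
    print(beta, round(sqrt_nu0(388.0, 40.0, beta)), round(sqrt_nu0(330.0, 60.0, beta)))
```

The requirement $\sqrt{\nu(0)}\simeq 500-700$ km/sec then singles out the range of $\Omega^{0.6}/b$ in eq.(5).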
\section{Normalising to COBE}
In order to determine/constrain $P(k)$ one must use the large scale data
discussed in the previous sections in conjunction with the COBE observations
on the MBR correlation function $C(\theta)$. However, such determination
can at present be made reliably only on scales below the curvature radius,
$R_{curv}=cH_0^{-1}/\sqrt{1-\Omega}$. Since the smallest scales subtended by
COBE ($\sim 10^o$) exceed the curvature radius for values of $\Omega$ even as
high as $\Omega \simeq 0.2-0.3$ (and the quadrupole scale always exceeds
$R_{curv}$ in open Universe) this analysis can be done only for the case of
flat Universe. We therefore concentrate in this section on discussing how well
the standard inflationary scenario fits \underline{both} the APM/peculiar
velocity data which restrict $P(k)$ on scales $< 100h^{-1}$Mpc and the COBE
data
which subtend scales $>$$600h^{-1}$Mpc if $\Omega$=1.
In its conventional form, the inflationary
scenario makes two predictions: 1) the Universe must be flat ($\Omega$$=$$1$)
to a very high accuracy, and 2) the \underline{initial} spectrum of the
primordial density fluctuations must have the Harrison-Zeldovich
form, $P(k)$$\propto$$k$. In the standard model fluctuations do not grow
on sub-horizon scales during the radiation-dominated era; on larger scales
the growth would be self-similar. This leads to a unique shape
of the transfer function
accounting for the modification of the power spectrum, such that
the Harrison-Zeldovich
shape must be preserved on sufficiently large scales. (Constraints on the
transfer function or $P(k)$ on small scales are discussed in the next section).
The first-year COBE data give the values of
$\sqrt{C_{10^o}}$, the signal convolved with the $10^o$ FWHM beam and the
quadrupole anisotropy $Q$. For $P(k)$ given by (2) with the Harrison-Zeldovich
spectrum ($n=1$) and normalised to the APM
data via $k_0$ we obtain within the uncertainty of the bias factor:
\begin{equation}
\sqrt{C_{10^o}(0)} \simeq
\frac{2.6\times 10^{-5}}{b} \frac{k_0^{-1}}{50h^{-1}Mpc}
\; \; ; \; \;
Q \simeq \frac{1.2\times 10^{-5}}{b} \frac{k_0^{-1}}{50h^{-1}Mpc}
\end{equation}
One can see that it is possible to fit the COBE results if the bias factor is
sufficiently large, $b$$>$2. Indeed, the intrinsic uncertainty in the APM data
would probably restrict $k_0$ to lie in the range 40$h^{-1}$Mpc$<$$k_0^{-1}$$
<$60$h^{-1}$Mpc with $k_0^{-1}$=50$h^{-1}$Mpc being the best fit.
COBE data give $\sqrt{C_{10^o}(0)}$=(1.1$\pm$0.18)$\times
$$10^{-5}$ and $Q$=(4.8$\pm$1.5)$\times$$10^{-6}$ (Smoot et al 1992),
thus leading to $b$=(2.3$\pm$0.4).
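The quoted bias follows from a one-line division of eq.(6) by the COBE central values; the following check uses only numbers stated in the text.

```python
# Bias implied by eq.(6) with k0^{-1} = 50 h^-1 Mpc, against the
# first-year COBE central values quoted in the text.
sqrt_C10, Q = 1.1e-5, 4.8e-6
b_from_C10 = 2.6e-5 / sqrt_C10   # from the 10-degree smoothed amplitude
b_from_Q   = 1.2e-5 / Q          # from the quadrupole
print(round(b_from_C10, 2), round(b_from_Q, 2))
```

Both estimates land comfortably inside the stated $b=(2.3\pm0.4)$.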
One can tighten these constraints further by normalising the APM data
to the observed peculiar velocities in the Great Attractor region,
thus explicitly eliminating $b$ (Kashlinsky 1992c). We do this using
the peculiar velocity data at $r$=$40h^{-1}$Mpc from Bertschinger et al (1990).
The numbers
for the quadrupole anisotropy, $Q$, and the smoothed MBR correlation amplitude,
$\sqrt{C_{10^o}(0)}$, we obtain are consistent within the error bars
with the COBE
results and can then be interpreted as supporting the standard inflationary
picture.
We emphasize at the same time that inflation would be inconsistent
with the COBE results and the large-scale structure data if either the
transition to the HZ regime is less sharp than assumed in (2) or if more power
is found on scales that currently cannot be probed by galaxy samples.
Furthermore, the presence of the gravitational
wave background, which is an inevitable consequence of inflation as discussed
by Paul Steinhardt in these proceedings, would produce an extra contribution
to (6) and lead to (much) \underline{larger}
values of $b$ required by matching the APM data to COBE.
\section{High-$z$ objects: constraints on $P(k)$ on small scales}
On scales which are non-linear \underline{today},
$r<8h^{-1}$Mpc, the APM data and eq.(2)
give little direct information on the \underline{primordial} form of $P(k)$.
However, inflation also makes very robust predictions for what this form
should be once $P(k)$ is normalised to the large-scale data.
Precisely because inflationary models have no free parameters
($b$ can now be fixed as discussed above), the early
evolution of density fluctuations would lead to
a unique (for a given $\Omega_{CDM}$, $\Omega_{HDM}$ and $\lambda$) transfer
function thus also constraining the small scale power spectrum
which is responsible for collapse of objects at high $z$. This was discussed by
Cavaliere and Szalay (1986) and Efstathiou and Rees (1988) and in the context
of the inflationary models normalised to the large-scale data by Kashlinsky
(1993). We briefly review the results here.
As discussed above, in order to account for all the data,
inflationary models have to
be normalised to the power spectrum seen in the APM catalogue. I.e. the zero
crossing of the two-point correlation function
should occur on scale $ r $$ \simeq $$
2.5k_0^{-1} $$ \simeq $100$ - $$ 150h^{-1} $Mpc instead of 30$(\Omega h)^{-1}
h^{-1}$Mpc for the standard, $\Omega$=1 and $h$=$\frac{1}{2}$, CDM model.
Two ways have been suggested to overcome this problem and to increase
the power on large scales: 1) Introduce the
cosmological constant $\Lambda$$\equiv$$3H_0^2\lambda$ such that
$\Omega$+$\lambda$=1; in Efstathiou et al (1990)
it is shown that such a model with
$\Omega h$$\sim$0.1$-$0.2 would produce the large scale power seen in the APM.
2) Introduce two types of dark matter: HDM+CDM; e.g. Davis et al (1992) and
Taylor and Rowan-Robinson (1992)
show that if $\Omega_{HDM}$$\simeq$0.3, with the remaining contribution
to $\Omega_{total}$=1 coming from CDM, such model gives good fits to a variety
of large-scale structure data.
However, CDM/inflationary models would at the same time suppress the small
scale
power and hence have difficulty
in accounting for the observed objects at $z$$>$3$-$4. Indeed, for the
$\Omega$$+$$\lambda$=1 models the transfer function is given by
$T(k)$$=\{1+[ak+(bk)^{3/2}+(ck)^2]^\nu\}^{-1/\nu}$, where $\nu$=1.13 and
$a$=$6.4 (\Omega h)^{-1} h^{-1}$Mpc; $b$=$3.2 (\Omega h)^{-1} h^{-1}$Mpc;
and $c$=$1.7 (\Omega h)^{-1} h^{-1} $Mpc (Bond and Efstathiou 1984).
The range where $T(k)$$\simeq$1, i.e. the effective power
spectrum index $n$=1, corresponds to scales where the Harrison-Zeldovich
form of the power spectrum is preserved, so that $P(k)$ enters the
Harrison-Zeldovich regime
for scales $>$$2a$. On smaller scales the power index
varies from $n$$\simeq$1 through $n$$\simeq$$-1$, required by the APM data,
to $n$$\simeq$$-3$ for scales $\ll$$c$. The scales where
$n$$\simeq$$-3$ correspond to very little small scale power and this
suppresses collapse of fluctuations (and galaxy formation) until a fairly
low $z$ for CDM models.
Lowering $\Omega h$ increases $a$ and
thus can provide the power found in the APM survey; at the same time this
increases $c$$\propto$$(\Omega h)^{-1}$ and further suppresses early
collapse of density fluctuations on the relevant scales.
A similar effect would be
achieved if part of the contribution to $\Omega_{total}$ is due to HDM
(van Dalen and Schaeffer 1992).
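The trade-off described above can be seen directly from the quoted transfer function. A minimal sketch, with the fitting constants exactly as given in the text:

```python
def transfer(k, omega_h, nu=1.13):
    """Bond & Efstathiou (1984) CDM transfer function as quoted in the text;
    k in h Mpc^-1, and a, b, c scale as (Omega h)^-1 in h^-1 Mpc."""
    a, b, c = 6.4 / omega_h, 3.2 / omega_h, 1.7 / omega_h
    return (1.0 + (a * k + (b * k) ** 1.5 + (c * k) ** 2) ** nu) ** (-1.0 / nu)

# Lowering Omega*h shifts the turnover to larger scales but, at a fixed
# small scale, suppresses T(k) -- and hence early collapse -- further.
k = 0.5  # h Mpc^-1
print(transfer(k, 0.2), transfer(k, 0.5))
```

On very large scales $T(k)\rightarrow 1$ and the Harrison-Zeldovich form survives; on small scales the suppression deepens as $\Omega h$ is lowered, which is precisely the difficulty with high-$z$ objects discussed next.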
The predictions of so normalised CDM/inflationary models can and must be
compared to the
observational data on 1) QSOs at $z$$\geq$$4.5$; 2) the recently
found protogalaxies at $z$$\sim$4; and 3) the protocluster-size object
recently discovered by Uson et al (UBC) (1992). As discussed (Kashlinsky
1993) one can avoid the difficulty with the currently
observed QSO abundances ($z\simeq 4.5$) mainly
because of the freedom one has in determining their total collapsed masses
(cf. Nusser and Silk 1993). But the data on the high-$z$ galaxies ($z\simeq 4$
with the \underline{total} collapsed masses of $>3\times 10^{12}M_\odot$
as the data indicate, see e.g. Chambers and Charlot 1990 and the references
cited
therein) would be difficult to account for on the basis of the modified
CDM models. In other words, it may be
difficult within the framework of CDM models to
account simultaneously for 1) large-scale optical data; 2) COBE results and
3) high-$z$ objects. I.e. if one normalises CDM
models to the large-scale and COBE data, by lowering $\Omega h$ and putting
in $\lambda$=1$-$$\Omega$ or by having HDM as well as CDM, thereby
reducing the small-scale power of the density field, one should then expect
to see 1) significant reduction in the QSO number densities at $z$$>$(4$-$6)
in any modified CDM models;
2) no protogalaxies collapsing at $z$$\geq$4; and 3)
no protocluster-size objects, such as the UBC object, at $z$=3.4.
Thus the data on the high-$z$
galaxies may require a power spectrum that has more power on small scales
than CDM models,
e.g. one that has $n$$\sim$$-1$ on scales down to at least
$10^8 M_\odot$, which
scale-invariant inflationary models cannot provide. The existence of the UBC
object would put even stronger constraints on hierarchical models: indeed,
this object corresponds to the comoving scale $\sim$$8h^{-1}$Mpc;
this fixes its r.m.s. density contrast to be $b^{-1}$ at the present epoch
almost independent of $P(k)$. Its existence at $z$=3.4 would in e.g. CDM
models correspond to a $>$10$-$$\sigma$ fluctuation, and one should not see
any such objects within the horizon. This number
can be reduced by \underline{both} 1) requiring
light to trace mass, i.e. $b$=1, and 2) making the Universe open, which would
slow down the collapse of fluctuations and lead to structures forming by
$z_{in}$$\simeq$$\Omega^{-1}$$-$1 as
opposed to $z_{in}$$\simeq$$\Omega^{-\frac{1}{3}}$$-$1 for
$\Omega$$+$$\lambda$=1 models. Such models would require one to reconsider the
validity of the standard inflationary assumptions.
\section{Conclusions}
In this review we have discussed the shape of the
\underline{primordial} power spectrum and the values of the cosmological
parameters implied by the data. We have shown that the peculiar velocity
field is in good agreement with that predicted by galaxy correlation data and
that this suggests that the bias factor is constant at least in the linear
regime. The comparison also allows one to estimate $\Omega^{0.6}/b$ and the
data are consistent with $0.2<\Omega^{0.6}/b \leq 0.5$. If the Universe turns
out to be open, it would be impossible to fine-tune inflation, since in that
case super-horizon scale inhomogeneities would induce, via the
Grischuk-Zeldovich effect, an MBR quadrupole in excess of the COBE data.
We further discussed how 1) the microwave background
data from COBE, 2) the optical data on the distribution of galaxies at $z
\sim0$, and 3) the data on high-$z$ \underline{galaxies} constrain the
primordial $P(k)$ over a range of scales from $<1h^{-1}$Mpc to $>1000h^{-1}$Mpc
and the implications of \underline{all} the data for inflationary scenario(s).
\vspace{1pc}
\section*{References}
Bahcall, N. and Soneira, R. 1983, {\it Ap. \ J.}, {\bf 270}, 20.\\
Bertschinger, E. et al 1990, {\it Ap. \ J.}, {\bf 364}, 370.\\
Bond, J.R. and Efstathiou, G. 1984, {\it Ap. \ J.}, {\bf 285}, L45.\\
Cavaliere, A. and Szalay, A. 1986, {\it Ap. \ J.}, {\bf 311}, 589.\\
Chambers, K.C. and Charlot, S. 1990, {\it Ap. \ J. \ Lett.}, {\bf 348}, L1.\\
Collins, C.A. et al 1992, {\it Ap. \ J.}, {\bf 254}, 295.\\
Davis, M. et al 1992, {\it Nature}, {\bf 359}, 393.\\
Efstathiou, G. and Rees, M. 1988, {\it MNRAS}, {\bf 230}, 5p.\\
Efstathiou, G. et al 1990, {\it Nature}, {\bf 348}, 705.\\
Ellis, G. 1988, {\it Class. \ Quantum \ Grav.}, {\bf 5}, 891.\\
Fisher, K. et al 1993, {\it Ap. \ J.}, {\bf 402}, 42.\\
Frieman, J., Kashlinsky, A. and Tkachev, I. 1993, {\it in preparation}.\\
Gorski, K. 1991, {\it Ap. \ J. \ Lett.}, {\bf 370}, L5.\\
Grischuk, L. and Zeldovich, Ya.B. 1978, {\it Sov. Astron.}, {\bf 22}, 125.\\
Juszkiewicz, R. et al 1987, {\it Ap. \ J. \ Lett.}, {\bf 323}, L1.\\
Kaiser, N. 1984, {\it Ap. \ J. \ Lett.}, {\bf 284}, L9.\\
Kaiser, N. 1988, {\it MNRAS}, {\bf 231}, 149.\\
Kashlinsky, A. 1987, {\it Ap. \ J.}, {\bf 317}, 19.\\
Kashlinsky, A. 1991a, {\it Ap. \ J. \ Lett.}, {\bf 376}, L5.\\
Kashlinsky, A. 1991b, {\it Ap. \ J. \ Lett.}, {\bf 383}, L1.\\
Kashlinsky, A. 1992a, {\it Ap. \ J. \ Lett.}, {\bf 386}, L37.\\
Kashlinsky, A. 1992b, {\it Ap. \ J. \ Lett.}, {\bf 387}, L1.\\
Kashlinsky, A. 1992c, {\it Ap. \ J. \ Lett.}, {\bf 399}, L1.\\
Kashlinsky, A. 1993, {\it Ap. \ J. \ Lett.}, {\bf 406}, L1.\\
Kellerman, K. 1993, {\it Nature}, {\bf 361}, 134.\\
Kellerman, K. 1993, {\it these proceedings}.\\
Lee, Y.-W. 1992, {\it Astron. J}, {\bf 104}, 1780.\\
Lee, Y.-W. 1993, {\it these proceedings}.\\
Maddox, S. et al 1990, {\it MNRAS}, {\bf 242}, 43p.\\
Nusser, A. and Silk, J. 1993, {\it Ap. \ J. \ Lett.}, {\bf 411}, L1.\\
Peacock, J. 1991, {\it MNRAS}, {\bf 253}, 1p.\\
Peebles, P.J.E. 1987, {\it Nature}, {\bf 327}, 210.\\
Peebles, P.J.E. 1989, {\it Ap. \ J. \ Lett.}, {\bf 339}, L5.\\
Picard, A. 1991, {\it Ap. \ J. \ Lett.}, {\bf 368}, L7.\\
Postman, M. et al 1986, {\it Astron. J.}, {\bf 91}, 1267.\\
Saunders, W. et al 1991, {\it Nature}, {\bf 349}, 32.\\
Smoot, G. et al 1992, {\it Ap. \ J. \ Lett.}, {\bf 396}, L1.\\
Sutherland, W. 1988, {\it MNRAS}, {\bf 234}, 159.\\
Steinhardt, P. 1990, {\it Nature}, {\bf 345}, 47.\\
Steinhardt, P. 1993, {\it these proceedings}.\\
Szalay, A. et al 1989, {\it Ap. \ J. \ Lett.}, {\bf 339}, L5.\\
Taylor, A.N. and Rowan-Robinson, M. 1992, {\it Nature}, {\bf 359}, 396.\\
Turner, M. 1991, {\it Phys. \ Rev. \ D}, {\bf 44}, 3737.\\
Tully, B. 1993, {\it these proceedings}.\\
Uson, J. et al 1992, {\it Phys. \ Rev. \ Lett.}, {\bf 67}, 3328.\\
van Dalen, A. and Schaeffer, R. 1992, {\it Ap. \ J.}, {\bf 398}, 33.\\
Vogeley, M. et al 1992, {\it Ap. \ J. \ Lett.}, {\bf 391}, L5.\\
Yahil, A. 1993, {\it these proceedings}.
\end{document}
\section{Introduction}
In a symmetrical key cryptosystem, such as AES (Advanced Encryption
Standard), two users Alice and Bob must first agree on a common secret
key [1]. If Alice communicates the secret key to Bob, a third
party, Eve, might intercept the key, and decrypt the messages. In
order to avoid such a situation one can use an asymmetric public key
cryptosystem, which provides a mechanism to securely exchange information
via open networks [1].
In public key cryptography a user has a pair of cryptographic keys,
consisting of a widely distributed public key and a secret private
key. These keys are related through a hard mathematical inversion
problem, such that the private key cannot be practically derived from
the public key. The two main directions of public key cryptography
are the public key encryption and the digital signatures. Public key
encryption is used to ensure confidentiality. In this case, the data
encrypted with the public key can only be decrypted with the corresponding
private key. Digital signatures are used to ensure authenticity or
prevent repudiation. In this case, any message signed with a user's
private key can be verified by anyone who has access to the user's
public key, proving the authenticity of the message.
A standard implementation of public key cryptography is based on the
Diffie-Hellman (DH) key agreement protocol [2]. The protocol allows
two users to exchange a secret key over an insecure communication
channel. The platform of the DH protocol is the multiplicative group
$\mathbb{Z}_{p}^{*}$ of integers modulo a prime $p$. The DH protocol
can be described as follows:
\begin{enumerate}
\item Alice and Bob agree upon the public integer $g\in\mathbb{Z}_{p}$.
\item Alice chooses the secret integer $a$.
\item Alice computes $A=g^{a}\,\mathrm{mod}\, p$, and publishes $A$.
\item Bob chooses the secret integer $b$.
\item Bob computes $B=g^{b}\mathrm{\, mod}\, p$, and publishes $B$.
\item Alice computes the secret integer $K_{A}=B^{a}\mathrm{\, mod\,}p=g^{ba}\,\mathrm{mod\,}p$.
\item Bob computes the secret integer $K_{B}=A^{b}\mathrm{\, mod\,}p=g^{ab}\mathrm{\, mod\,}p$.
\end{enumerate}
It is obvious that both Alice and Bob calculate the same integer $K\equiv K_{A}=K_{B}$,
which then can be used as a secret shared key for symmetric encryption.
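As a quick sanity check, the whole exchange fits in a few lines of Python. The prime and exponents below are toy values chosen purely for illustration; as discussed next, a real deployment needs a prime of hundreds of digits.

```python
# Toy Diffie-Hellman key exchange (illustrative parameters only;
# a real p must be a very large safe prime, as discussed in the text).
p = 2147483647        # the Mersenne prime 2^31 - 1
g = 7                 # public base agreed upon by Alice and Bob

a = 912408731         # Alice's secret integer
b = 1370020279        # Bob's secret integer

A = pow(g, a, p)      # Alice publishes A = g^a mod p
B = pow(g, b, p)      # Bob publishes B = g^b mod p

K_A = pow(B, a, p)    # Alice computes B^a = g^(ba) mod p
K_B = pow(A, b, p)    # Bob computes A^b = g^(ab) mod p
assert K_A == K_B     # both arrive at the same shared secret
```

The three-argument `pow` performs modular exponentiation efficiently, so each step costs only $O(\log p)$ multiplications.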
Assuming that the eavesdropper Eve knows $p,g,A$ and $B$, she needs
to compute the secret key $K$, that is to solve the discrete logarithm
problem:
\begin{equation}
A=g^{a}\mathrm{\, mod}\, p,
\end{equation}
for the unknown $a$. If $p$ is a very large prime of at least 300
digits, and $a$ and $b$ are at least 100 digits long, then the problem
becomes computationally hard (exponential time in $\log\, p$), and
it is considered infeasible. For maximum security it is also recommended
that $p$ is a safe prime, i.e. $(p-1)/2$ is also a prime, and $g$
is a primitive root of $p$. This protocol is secure, assuming that
it is infeasible to solve the discrete logarithm problem modulo such
a large prime.
Here, we discuss a public key cryptosystem, which extends the key
exchange to a matrix protocol, and avoids the traditional number theory
approach based on the discrete logarithm problem. The key exchange
mechanism is based on a commutative function defined as the product
of two matrix polynomials. We provide a detailed description of this
encoding mechanism, and a practical numerical implementation.
\section{Commutativity of Matrix Polynomials}
Let us consider the complex matrix $X\in\mathbb{C}^{N\times N},$
and two complex matrix polynomials of the form:
\begin{equation}
P(X,a):\mathbb{C}^{N\times N}\times\mathbb{C}^{M}\rightarrow\mathbb{C}^{N\times N},
\end{equation}
\begin{equation}
P(X,a)=\sum_{m=1}^{M}a_{m}X^{m},
\end{equation}
and
\begin{equation}
P(X,b):\mathbb{C}^{N\times N}\times\mathbb{C}^{K}\rightarrow\mathbb{C}^{N\times N},
\end{equation}
\begin{equation}
P(X,b)=\sum_{k=1}^{K}b_{k}X^{k},
\end{equation}
where $a\in\mathbb{C}^{M}$ and $b\in\mathbb{C}^{K}$.
One can easily show that the product of these polynomials is commutative:
\[
P(X,a)P(X,b)=\left(\sum_{m=1}^{M}a_{m}X^{m}\right)\left(\sum_{k=1}^{K}b_{k}X^{k}\right)=
\]
\[
=\sum_{m=1}^{M}\sum_{k=1}^{K}a_{m}b_{k}X^{m+k}=
\]
\begin{equation}
=\left(\sum_{k=1}^{K}b_{k}X^{k}\right)\left(\sum_{m=1}^{M}a_{m}X^{m}\right)=P(X,b)P(X,a).
\end{equation}
However, because the matrix product is non-commutative, we also have:
\begin{equation}
P(X,a)P(Y,b)\neq P(Y,b)P(X,a),
\end{equation}
if $X\neq Y\in\mathbb{C}^{N\times N}$ and $a\in\mathbb{C}^{M}$,
$b\in\mathbb{C}^{K}$.
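Both properties are easy to verify numerically. The following Python sketch uses small integer matrices and arbitrary illustrative coefficients (the paper itself works over $\mathbb{C}^{N\times N}$):

```python
def matmul(A, B):
    # Plain dense product of small square matrices.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def poly(X, coeffs):
    # P(X, a) = sum_{m=1}^{M} a_m X^m, with powers starting at m = 1.
    n = len(X)
    acc = [[0] * n for _ in range(n)]
    power = [row[:] for row in X]                # X^1
    for c in coeffs:
        acc = [[acc[i][j] + c * power[i][j] for j in range(n)]
               for i in range(n)]
        power = matmul(power, X)
    return acc

X = [[1, 2], [3, 4]]
Y = [[0, 1], [1, 1]]
a = [2, 3]            # coefficients a_1, a_2
b = [1, 4, 5]         # coefficients b_1, b_2, b_3

# Same matrix: the polynomial products commute.
assert matmul(poly(X, a), poly(X, b)) == matmul(poly(X, b), poly(X, a))
# Different (non-commuting) matrices: in general they do not.
assert matmul(poly(X, a), poly(Y, b)) != matmul(poly(Y, b), poly(X, a))
```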
\section{Key Exchange Protocol}
Using the above property, the proposed key exchange protocol can be
formulated as follows:
\begin{enumerate}
\item Alice chooses the secret vectors $a\in\mathbb{C}^{M_{1}}$ and $\tilde{a}\in\mathbb{C}^{M_{2}}$
(Alice's private key).
\item Alice randomly generates and publishes the matrix $U\in\mathbb{C}^{N\times N}$
(Alice's matrix public key).
\item Bob chooses the secret vectors $b\in\mathbb{C}^{J_{1}}$ and $\tilde{b}\in\mathbb{C}^{J_{2}}$
(Bob's private key).
\item Bob randomly generates and publishes the matrix $V\in\mathbb{C}^{N\times N}$
(Bob's matrix public key).
\item Alice computes and publishes the matrix $A=P(U,a)P(V,\tilde{a})$
(Alice's public key).
\item Bob computes and publishes the matrix $B=P(U,b)P(V,\tilde{b})$ (Bob's
public key).
\item Alice calculates the secret matrix $S_{a}=P(U,a)BP(V,\tilde{a})$
(Alice's secret key).
\item Bob calculates the secret matrix $S_{b}=P(U,b)AP(V,\tilde{b})$ (Bob's
secret key).
\end{enumerate}
One can see that both Alice and Bob obtain the same secret key $S\equiv S_{a}=S_{b}$,
since the matrix polynomials satisfy the commutativity property. Also
we assume that we always have $U\neq V$.
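The complete exchange can be sketched in Python with small integer matrices (illustrative values only; the actual implementation described later uses complex double precision matrices in C):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def poly(X, coeffs):
    # P(X, c) = sum_{m=1}^{M} c_m X^m
    n = len(X)
    acc = [[0] * n for _ in range(n)]
    power = [row[:] for row in X]
    for c in coeffs:
        acc = [[acc[i][j] + c * power[i][j] for j in range(n)]
               for i in range(n)]
        power = matmul(power, X)
    return acc

U = [[1, 2], [0, 3]]          # Alice's public matrix
V = [[2, 1], [1, 0]]          # Bob's public matrix
a, aa = [3, 1], [2, 5]        # Alice's private vectors a, a~
b, bb = [4, 2, 1], [1, 3]     # Bob's private vectors b, b~

A = matmul(poly(U, a), poly(V, aa))   # Alice's public key
B = matmul(poly(U, b), poly(V, bb))   # Bob's public key

S_a = matmul(poly(U, a), matmul(B, poly(V, aa)))  # Alice's secret
S_b = matmul(poly(U, b), matmul(A, poly(V, bb)))  # Bob's secret
assert S_a == S_b             # shared secret matrix
```

The final assertion holds exactly because $P(U,a)P(U,b)=P(U,b)P(U,a)$ and $P(V,\tilde{a})P(V,\tilde{b})=P(V,\tilde{b})P(V,\tilde{a})$.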
Assuming that the eavesdropper Eve knows $U,V$ and $A,B$, the hard
problem is to compute $S$, which requires the unknown private key
vectors $a,\tilde{a}$ or $b,\tilde{b}$. We should note that the
number of elements ($M_{1}$, $M_{2}$, $J_{1}$, $J_{2}$) in these
vectors is also unknown.
One may attempt to find these quantities directly from $A$ and $B$.
For example we have:
\begin{equation}
A=\sum_{m=1}^{M_{1}}\sum_{n=1}^{M_{2}}a_{m}\tilde{a}_{n}U^{m}V^{n}=\sum_{k=1}^{K}\alpha_{k}\Psi^{(k)}.
\end{equation}
Thus, we obtain a system of equations, $\Psi\alpha=A$, with the unknowns
$\alpha_{k}=a_{m}\tilde{a}_{n}$, $k\equiv(m,n)$. The columns of
the matrix $\Psi$, $\Psi^{(k)}=U^{m}V^{n}$, and the matrix $A$
are written as vectors by stacking their constituent columns. In total
there are $K=M_{1}M_{2}$ columns in $\Psi$, and each column has
$N^{2}$ elements. Therefore, the resulting system is badly underdetermined
if $M_{1},M_{2}\gg N$. Thus, we should require $M_{1},M_{2},J_{1},J_{2}\gg N$,
in order to increase the security.
\section{Practical Implementation Aspects}
The goal of this section is to discuss some practical aspects regarding
the numerical implementation of the proposed matrix public key cryptosystem.
We choose the \texttt{C} language for implementation, and the standard
\texttt{GCC} compiler on a desktop running \texttt{Ubuntu 14.04 LTS}.
We focus only on building a simulation environment, where we can test
the functionality of the algorithms. The code is listed in the Appendix
1, and here we provide a detailed discussion of its main components.
The required compilation and run steps are:
\begin{lstlisting}[basicstyle={\ttfamily},tabsize=2]
gcc -O3 keys.c -o keys -lm
./keys > results.txt
\end{lstlisting}
The program starts by setting the size of the required square matrices.
We choose the size value \texttt{N = 4}, which means that all the
matrices will contain \texttt{NN = N{*}N = 16} double precision complex
numbers. This value of \texttt{N} is enough to obtain a secret key
with the length \texttt{K = NN{*}8 = 128 bytes}.
Since this is a simulation only, we choose also to use the standard
random number generator from \texttt{C}, \texttt{rand()}, in order
to initialize the variables in the problem. Of course in a real application
\texttt{rand()} is a poor choice, and should be avoided, however our
goal here is to simply show that the proposed algorithm works. For
a secure implementation one can use successive applications of secure
hash functions (\texttt{sha256)}, or secure ciphers (\texttt{AES}),
to generate the required pseudo-random numbers.
The private keys, $a$, $\tilde{a}$, $b$, $\tilde{b}$, are stored
in the byte arrays \texttt{a}, \texttt{aa}, \texttt{b}, \texttt{bb},
and they are generated with the \texttt{private\_key(...)} function.
The private keys are stored as byte arrays of length \texttt{M1{*}16},
\texttt{M2{*}16}, \texttt{J1{*}16}, \texttt{J2{*}16}. Longer
private keys will provide a better security. For simulation purposes,
the \texttt{M1}, \texttt{M2}, \texttt{J1}, \texttt{J2} values are
randomly generated in the range \texttt{{[}NN, 2{*}NN-1{]}}, such
that the condition $M_{1},M_{2},J_{1},J_{2}\gg N$ is marginally satisfied
(one can make them much longer if necessary).
Each double precision complex number needs \texttt{16 bytes} to store
it. The real and imaginary part of the complex numbers are generated
with the \texttt{drand()} function. This function is designed to generate
``dense'' double precision random numbers in the \texttt{(0, 1)}
interval. For each decimal in the mantissa, we use a separate \texttt{rand()}
call, until the precision gets below $2\cdot10^{-16}$, which is the
machine epsilon for double precision numbers.
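A Python paraphrase of this digit-by-digit construction (a sketch only; the C code in Appendix 1 is the authoritative version):

```python
import random

def drand():
    # Build a "dense" double in [0, 1) one decimal digit at a time,
    # using a separate RNG call per digit of the mantissa, and stop
    # once the digit weight drops below the double precision machine
    # epsilon (~2e-16), mimicking the paper's drand().
    x, weight = 0.0, 0.1
    while weight > 2e-16:
        x += random.randint(0, 9) * weight
        weight /= 10.0
    return x
```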
The matrix public keys, $U$ and $V$, are stored in the byte arrays
\texttt{U} and \texttt{V}, and are generated with the \texttt{matrix\_public\_key(...)}
function. Each array holds \texttt{NN} double precision complex numbers
in \texttt{NN{*}16 bytes}, using the same approach described above
for the \texttt{private\_key(...)} function.
The public keys, $A$ and $B$, are stored in the byte arrays \texttt{A}
and \texttt{B}, and are generated with the \texttt{public\_key(...)}
function. Since these are also double precision complex matrices with
the size \texttt{NN}, they will need \texttt{NN{*}16 bytes} of storage.
The power matrices $U^{m}$ are also normalized as following:
\begin{equation}
\left\Vert U^{m}\right\Vert _{F}^{-1}U^{m}\leftarrow U^{m},
\end{equation}
where $\left\Vert .\right\Vert _{F}$ is the Frobenius norm. Thus,
all the polynomials are rewritten as:
\begin{equation}
P(U,a)=\sum_{m=1}^{M}a_{m}\left\Vert U^{m}\right\Vert _{F}^{-1}U^{m}.
\end{equation}
This way, the elements of the power matrices will be in the same ``range'',
while the polynomial commuting properties described before are preserved.
Also, since the argument order ($U$, $V$ versus $V$, $U$) cannot be
specified exactly, we adopt the convention that if the first element of $U$
is smaller than the first element of $V$, then the order $U$, $V$
is switched to $V$, $U$. This makes the order in which the arguments
$U$ and $V$ are passed to the functions irrelevant.
The secret keys, $S_{a}$ and $S_{b}$, are stored in the byte arrays
\texttt{Sa} and \texttt{Sb} with a length \texttt{K = NN{*}8 = 128 bytes}, and
they are computed with the \texttt{secret\_key(...)} function. The
secret keys are extracted from the matrices $S_{a}$ and $S_{b}$,
each containing \texttt{NN} elements. The computation of these matrices
is affected by rounding errors, due to the finite floating point number
representation, and inherently will lead to $S_{a}\neq S_{b}$, which
of course is undesirable. In order to counter the accumulation of
the floating point rounding errors, we extract the significand of
each double precision number, and from the significand we extract
a\texttt{ 4 byte} unsigned integer (see the code for more details).
Thus, from each double precision complex number we extract two unsigned
integers in the range $[0,2^{32}-1]$. The \texttt{128 bytes} corresponding
to these \texttt{32} integers form the secret keys, and are stored
into the byte arrays \texttt{Sa} and \texttt{Sb}.
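The extraction idea can be illustrated in Python with the \texttt{struct} module (a sketch only; the exact byte layout kept by the C code may differ): discarding the low bits of the IEEE-754 significand makes the extracted integer insensitive to last-bit rounding differences.

```python
import struct

def extract_uint32(x):
    # Reinterpret the double as 64 bits, keep its 52-bit significand,
    # and return the top 32 significand bits as an unsigned integer.
    # The discarded low bits are where rounding errors accumulate.
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    significand = bits & ((1 << 52) - 1)
    return significand >> 20

# Two computations of the "same" value that differ by one ulp
# nevertheless yield identical key material:
assert extract_uint32(0.1 + 0.2) == extract_uint32(0.3)
```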
For illustration purposes, the results obtained for one instance run
are given in Appendix 2. One can see that both Alice and Bob compute
the same secret key (\texttt{Sa = Sb}), using completely different
secret and public keys.
\section{Conclusion}
In conclusion, we have presented a matrix public key cryptosystem,
which avoids the cumbersome key generation of the traditional number
theory approach. The key exchange mechanism is ensured by the commutative
property of the product of two matrix polynomials. Also, we have provided
a detailed description of this encoding mechanism, and a practical
numerical implementation.
\section{Introduction \& Motivation}
The astonishing amount of information available on the Web
and the highly variable quality of its content generate the need for an absolute measure of importance for Web pages that can be used to improve the performance of Web search. Link Analysis algorithms, such as the celebrated \textit{PageRank}, try to answer this need by using the link structure of the Web to assign authoritative weights to the pages~\cite{pagerank}.
\begin{figure}
\centering
\subfloat[][PageRank]{\epsfig{file=PageRank.eps}}
\qquad \qquad
\subfloat[][NCDawareRank]{\hspace*{2.0em}\epsfig{file=NCDawareRank.eps}\hspace*{1.3em}}
\caption{In the left figure we see a tiny graph as viewed by PageRank and in the right, the same graph as viewed by NCDawareRank. Same colored nodes belong to the same block and are considered related according to a given criterion.}
\label{tiny_web}
\end{figure}
PageRank's approach is based on the assumption that links convey human endorsement. For example, the existence of
a link from page $3$ to page $7$ in Fig.~\ref{tiny_web}(a) is seen as a testimonial of the importance of page $7$. Furthermore, the amount of importance conferred to page $7$ is proportional to
the importance of page $3$ and inversely proportional to the number of pages
$3$ links to. In their original paper, Page et al.~\cite{pagerank} imagined a
\textit{random surfer} who, with probability $\alpha$, follows the links of a Web page, and with probability $1-\alpha$ jumps to a different page uniformly at random. Then, following this metaphor, the overall importance of a page was defined to be equal to the fraction of time this random surfer spends on it, in the long run.
Formulating PageRank's basic idea with a mathematical model, involves viewing the Web as a directed graph with Web pages as vertices and hyperlinks as edges. Given this graph, we can construct a \textit{row-normalized hyperlink matrix} $\mathbf{H}$, whose element $[\mathbf{H}]_{uv}$ is one over the outdegree of $u$ if there is a link from $u$ to $v$, or zero otherwise. The matter of dangling nodes is fixed with some sort of stochasticity adjustment, thereby transforming the initial matrix $\mathbf{H}$, to a stochastic matrix.
A second adjustment is needed to certify that the final matrix is irreducible and aperiodic, so that it possesses a unique positive stationary probability distribution. That is ensured by the introduction of the \textit{damping factor} $\alpha$ and a \textit{teleportation} matrix $\mathbf{E}$, usually defined by $\mathbf{E} = \frac{1}{n}\mathbf{e}\mathbf{e}^{\intercal}$. The resulting matrix is given by:
\begin{equation}
\label{PageRank_Matrix_G}
\mathbf{G} = \alpha \mathbf{H} + (1-\alpha)\mathbf{E}
\end{equation}
PageRank vector is the unique stationary distribution of the Markov chain corresponding to matrix $\mathbf{G}$.
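The stationary distribution can be approximated by power iteration without ever forming the dense matrix $\mathbf{G}$; a minimal Python sketch on a hypothetical three-node graph:

```python
def pagerank(H, alpha=0.85, iters=100):
    # Power iteration pi <- alpha * pi H + (1 - alpha)/n, equivalent
    # to iterating with G = alpha H + (1 - alpha) E under uniform
    # teleportation, given a row-stochastic H (dangling nodes fixed).
    n = len(H)
    pi = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - alpha) / n] * n
        for u in range(n):
            for v in range(n):
                nxt[v] += alpha * pi[u] * H[u][v]
        pi = nxt
    return pi

# Toy graph: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
H = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [0.5, 0.5, 0.0]]
pi = pagerank(H)
assert abs(sum(pi) - 1.0) < 1e-9 and all(p > 0 for p in pi)
```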
The choice of the damping factor has received much attention since it determines the fraction of the importance of a node that is propagated through the edges rather than scattered throughout the graph via the teleportation matrix. Obviously, picking a very small damping factor ignores the link structure of the graph and results in uninformative ranking vectors. On the other hand, setting the damping factor very close to one causes a number of serious problems. From a computational perspective, as $\alpha\to 1$, the number of iterations till convergence to the PageRank vector grows prohibitively, and the computation of the rankings becomes numerically ill-conditioned~\cite{kamvar2003condition,LangvilleMeyer06}. Moreover, from a qualitative point of view, various studies indicate that damping factors close to 1
result into counterintuitive ranking vectors where all the PageRank gets concentrated mostly in irrelevant nodes, while the Web's core component is assigned null rank~\cite{Avrachenkov:2007:DPM:1777879.1777881,DBLP:conf/dagstuhl/BoldiSV07,Boldi:2009:PFD:1629096.1629097,Nikolakopoulos:2013:NNR:2433396.2433415}. Finally, the very existence of the damping factor and the related teleportation matrix ``opens the door'' to direct manipulation of the ranking score through link spamming~\cite{constantine2009random,Eiron:2004:RWF:988672.988714}.
In the literature there have been proposed several ranking methods that try to address these issues. Boldi~\cite{Boldi:2005:TRW:1062745.1062787} proposed an algorithm that eliminates PageRank's dependency on the arbitrarily chosen parameter $\alpha$ by integrating the ranking vector over the entire range of possible damping factors. Baeza-Yates et al.~\cite{Baeza-Yates:2006:GPD:1148170.1148225} introduced a family of link-based ranking algorithms parametrised by the selection of a damping function that describes how rapidly the importance of paths decays as the path length increases. Constantine and Gleich~\cite{constantine2009random} proposed a ranking method that considers the influence of a population of random surfers, each choosing its own damping factor from a probability distribution.
All the above methods attack the problem from the damping factor point of view, while taking the teleportation matrix for granted. Nikolakopoulos and Garofalakis~\cite{Nikolakopoulos:2013:NNR:2433396.2433415}, on the other hand, focus on the teleportation model itself. Building on the intuition behind Nearly Decomposable Systems~\cite{Courtois:1985:TSD:3812.3814,Simon:1996:SA:237774,simon1961aggregation}, the authors proposed \textit{NCDawareRank}, a novel ranking framework that generalizes and
refines PageRank by enriching the teleportation model in a computationally efficient way. NCDawareRank decomposes the underlying space into NCD blocks, and uses these blocks to define indirect relations between the nodes in the graph (Fig.~\ref{tiny_web}(b)), which lead to the introduction of a new inter-level proximity component. A comprehensive set of experiments done by the authors using real snapshots of the Web Graph showed that the introduction of this decomposition alleviates the negative effects of uniform teleportation and produces ranking vectors that display low sensitivity to sparsity and, at the same time, exhibit resistance to direct manipulation through link spamming (see the discussion in Sections 4.2 and 4.3 in~\cite{Nikolakopoulos:2013:NNR:2433396.2433415} for further details). However, albeit reducing some of its negative effects, the NCDawareRank model still includes the standard teleportation matrix as a purely mathematical necessity. But is it?
The main questions we try to address in this work are the following:
\textit{Is it possible to discard the uniform teleportation altogether? And if so, under which conditions?} Thankfully, the answer is yes. In particular, we show that the definition of the NCD blocks can be enough to ensure the production of well-defined ranking vectors without resorting to uniform teleportation. The criterion for this to be true is expressed solely in terms of properties of the proposed decomposition, which makes it very easy to check and, at the same time, gives insight that can lead to better decompositions for the particular ranking problems under consideration.
The rest of the paper is organized as follows: After discussing the NCDawareRank model (Section~\ref{Sec:NCDawareRank}), we derive necessary and sufficient conditions under which the inter-level proximity matrix enables us to discard the teleportation matrix completely (Section~\ref{SubSec:Primitivity}). In Section~\ref{Sec_Overlapping}, we generalize the NCDawareRank model in order to allow the definition of overlapping blocks without compromising its theoretical and computational properties. Finally, in Section~\ref{Sec_Conclussions} we discuss future directions and conclude this work.
\section{NCDawareRank Model}
\label{Sec:NCDawareRank}
Before we proceed to our main result, we present here the basic definitions behind the NCDawareRank model. Our presentation follows the one given in~\cite{Nikolakopoulos:2013:NNR:2433396.2433415}.
\subsection{Notation}
All vectors are represented by bold lower case letters and they are column vectors (e.g., $\boldsymbol{\pi}$). All matrices are represented by bold upper case letters (e.g., $\mathbf{P} $). The $i^{\text{th}}$ row and $j^{\text{th}}$ column of matrix $\mathbf{P}$ are denoted $\mathbf{p}^\intercal_{i}$ and $\mathbf{p}_{j}$, respectively. The $ij^{th}$ element of matrix $\mathbf{P}$ is denoted $[\mathbf{P}]_{ij}$. We use $\operatorname{\textbf{Diag}}(\boldsymbol{\omega})$ to denote the matrix having vector $\boldsymbol{\omega}$ on its diagonal, and zeros elsewhere. We use calligraphic letters to denote sets (e.g., $\mathcal{U,V}$). $[1,n]$ is used to denote the set of integers $\{1,2,\dots,n\}$. Finally, symbol $\triangleq$ is used in definition statements.
\subsection{Definitions}
Let $\mathcal{U}$ be a set of nodes (e.g. the universe of Web pages) and denote $n\triangleq|\mathcal{U}|$. Consider a node $u$ in $\mathcal{U}$. We denote $\mathcal{G}_u$ to be the set of nodes that can be visited in a single step from $u$. Clearly, $d_u\triangleq|\mathcal{G}_u|$ is the out-degree of $u$, i.e. the number of outgoing edges of $u$.
We consider a partition of the underlying space $\mathcal{U}$ that defines a \textbf{decomposition}:
\begin{equation}
\mathcal{M} \triangleq \{\mathcal{D}_1,\dots,\mathcal{D}_K\}
\end{equation}
such that, $\mathcal{D}_k\neq \emptyset$, for all $k$ in $[1,K]$.
Each set $\mathcal{D}_k$ is referred to as an \textbf{NCD Block}, and its elements are considered related according to a given criterion, chosen for the particular ranking problem (e.g. the partition of the set of Web pages into websites).
We define $\mathcal{M}_u$ to be the set of \textit{proximal} nodes of $u$, i.e the union of the NCD blocks that contain $u$ and the nodes it links to. Formally, the set $\mathcal{M}_u$ is defined by:
\begin{equation}
\mathcal{M}_u \triangleq \bigcup_{w \in \{u\}\cup\mathcal{G}_u}\mathcal{D}_{(w)}
\label{def:proximal}
\end{equation}
where $\mathcal{D}_{(u)}$ is used to denote the unique block that includes node $u$.
Finally, $N_u$ denotes the number of different blocks in $\mathcal{M}_u$.
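As a quick illustration, the set of blocks forming $\mathcal{M}_u$ (and hence $N_u$) can be computed from the out-link lists and a node-to-block map; the graph below is hypothetical toy data:

```python
def proximal_blocks(u, out_links, block_of):
    # The NCD blocks whose union is M_u: the blocks containing u
    # itself and every node u links to; N_u is the size of this set.
    return {block_of[w] for w in ({u} | set(out_links[u]))}

# Toy data: nodes 0, 1 in block D1 and node 2 in block D2.
out_links = {0: [1, 2], 1: [0], 2: [0]}
block_of = {0: 'D1', 1: 'D1', 2: 'D2'}

M_0 = proximal_blocks(0, out_links, block_of)
assert M_0 == {'D1', 'D2'}                              # so N_0 = 2
assert proximal_blocks(1, out_links, block_of) == {'D1'}  # N_1 = 1
```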
\begin{description}
\item[Hyperlink Matrix.] The hyperlink matrix $\mathbf{H}$, as in the standard PageRank Model, is a row normalized version of the adjacency matrix induced by the graph, and its $uv^{th}$ element is defined as follows:
\begin{equation}
[\mathbf{H}]_{uv} \triangleq \left\{
\begin{array}{l l}
\frac{1}{d_u} & \quad \mbox{if $v \in \mathcal{G}_u$}\\
0 & \quad \mbox{otherwise}\\
\end{array} \right.
\end{equation}
Matrix $\mathbf{H}$ is assumed to be a row-stochastic matrix. The matter of dangling nodes (i.e. nodes with no outgoing edges) is considered fixed through some sort of stochasticity adjustment.
\item[Inter-Level Proximity Matrix.] The Inter-Level Proximity matrix $\mathbf{M}$ is created to depict the inter-level connections between the nodes in the graph.
In particular, each row of matrix $\mathbf{M}$ denotes a probability vector $\mathbf{m}^\intercal_u$, that distributes evenly its mass between the $N_u$ blocks of $\mathcal{M}_u$, and then, uniformly to the included nodes of each block. Formally, the $uv^{th}$ element of matrix $\mathbf{M}$, that relates the node $u$ with node $v$, is defined as
\begin{equation}
[\mathbf{M}]_{uv}\triangleq \left\{
\begin{array}{l l}
\frac{1}{N_u|\mathcal{D}_{(v)}|} & \quad \mbox{if $v \in \mathcal{M}_u$}\\
0 & \quad \mbox{otherwise}\\
\end{array} \right.
\label{def:M}
\end{equation}
From the definition of the NCD blocks and the proximal sets, it is clear that whenever the number of blocks is smaller than the number of nodes in the graph, i.e. $K<n$, matrix $\mathbf{M}$ is necessarily low-rank; in fact, a closer look at the definitions~(\ref{def:proximal}) and~(\ref{def:M}) above, suggests that matrix $\mathbf{M}$ admits a very useful factorization, which was shown in~\cite{Nikolakopoulos:2013:NNR:2433396.2433415} to ensure the tractability of the resulting model. In particular, matrix
$\mathbf{M}$ can be expressed as a product of two extremely sparse matrices,
$\mathbf{R}$ and $\mathbf{A}$, defined below.
Matrix $\mathbf{A} \in \mathbb{R}^{K\times n}$ is defined as follows:
\begin{equation}
\mathbf{A} \triangleq
\begin{bmatrix}
\mathbf{e}^{\intercal}_{|\mathcal{D}_1|} & \boldsymbol{0} & \boldsymbol{0} & \cdots & \boldsymbol{0} \\
\boldsymbol{0} & \mathbf{e}^{\intercal}_{|\mathcal{D}_2|} & \boldsymbol{0} & \cdots & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{0} & \mathbf{e}^{\intercal}_{|\mathcal{D}_3|} & \cdots & \boldsymbol{0} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \cdots & \mathbf{e}^{\intercal}_{|\mathcal{D}_K|}
\end{bmatrix}
\label{matrixA}
\end{equation}
where $\mathbf{e}^{\intercal}_{|\mathcal{D}_k|}$ denotes a row vector in
$\mathbb{R}^{|\mathcal{D}_k|}$ whose elements are all 1. Now, using the diagonal matrix $\mathbf{\Delta}$:
\begin{equation}
\mathbf{\Delta} \triangleq \operatorname{\mathbf{Diag}}\left(\begin{bmatrix}|\mathcal{D}_1| & |\mathcal{D}_2| & \cdots & |\mathcal{D}_K|\end{bmatrix}\right)
\label{matrixDelta}
\end{equation}
and a row normalized matrix $\mathbf{\Gamma} \in \mathbb{R}^{n\times K}$, whose rows correspond to nodes and columns to blocks and its elements are given by
\begin{equation}
[\mathbf{\Gamma}]_{ij} \triangleq \left\{
\begin{array}{l l}
\frac{1}{N_{u_i}} & \quad \mbox{if $\mathcal{D}_{j} \in \mathcal{M}_{u_i}$}\\
0 & \quad \mbox{otherwise}\\
\end{array} \right.
\label{matrixGamma}
\end{equation}
we can define the matrix $\mathbf{R}$ as follows:
\begin{equation}
\mathbf{R} \triangleq \mathbf{\Gamma}\mathbf{\Delta}^{-1}
\label{matrixR}
\end{equation}
Using~(\ref{matrixA}) and~(\ref{matrixR}), it is straightforward to verify that:
\begin{eqnarray}
\mathbf{M} &=& \mathbf{R} \mathbf{A} \\
\mathbf{R} \in \mathbb{R}^{n\times K}& & \mathbf{A} \in \mathbb{R}^{K\times n} \nonumber
\end{eqnarray}
As pointed out by the authors~\cite{Nikolakopoulos:2013:NNR:2433396.2433415}, this factorization can lead to significant advantages in realistic scenarios, in terms of both storage and computability (see~\cite{Nikolakopoulos:2013:NNR:2433396.2433415}, Section 3.2.1).
\item[Teleportation Matrix.] Finally, NCDawareRank model also includes a teleportation matrix $\mathbf{E}$,
\begin{equation}
\mathbf{E} \triangleq \mathbf{e} \mathbf{v}^\intercal
\end{equation} where $\mathbf{v}>\mathbf{0}$ is such that $\mathbf{v}^\intercal\mathbf{e} = 1$. The introduction of this matrix can be seen as a remedy to ensure that the underlying Markov chain, corresponding to the final matrix, is irreducible and aperiodic and thus has a unique positive stationary probability distribution~\cite{Nikolakopoulos:2013:NNR:2433396.2433415}.
\end{description}
\bigskip
\noindent The resulting matrix which we denote $\mathbf{P}$ is expressed by:
\begin{equation}
\label{NCDawareRank}
\mathbf{P} = \eta \mathbf{H} + \mu \mathbf{M} + (1-\eta - \mu) \mathbf{E}
\end{equation}
Parameter $\eta$ controls the fraction of importance delivered to the outgoing edges and parameter $\mu$ controls the fraction of importance that will be propagated to the proximal nodes. In order to ensure the irreducibility and aperiodicity of the final stochastic matrix in the general case, $\eta + \mu$ must be less than $1$. This leaves $1-\eta-\mu$ of importance scattered throughout the graph through matrix $\mathbf{E}$.
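Putting the pieces together, the following Python sketch builds $\mathbf{H}$, $\mathbf{M}=\mathbf{R}\mathbf{A}$ and $\mathbf{P}$ for a hypothetical four-node graph with two NCD blocks, and checks both the factorization and the row-stochasticity of $\mathbf{P}$:

```python
def matmul(A, B):
    # Dense product of (possibly rectangular) matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Toy graph: 0 -> 1 -> 2 -> 3 -> 0, blocks D1 = {0, 1}, D2 = {2, 3}.
links = {0: [1], 1: [2], 2: [3], 3: [0]}
blocks = [[0, 1], [2, 3]]
n, K = 4, 2
block_of = {u: k for k, blk in enumerate(blocks) for u in blk}

H = [[0.0] * n for _ in range(n)]
for u, outs in links.items():
    for v in outs:
        H[u][v] = 1.0 / len(outs)

# Proximal blocks of each node, and the factors of M = R A.
M_u = [{block_of[w] for w in {u} | set(links[u])} for u in range(n)]
A = [[1.0 if v in blocks[k] else 0.0 for v in range(n)] for k in range(K)]
R = [[1.0 / (len(M_u[u]) * len(blocks[k])) if k in M_u[u] else 0.0
      for k in range(K)] for u in range(n)]
M = matmul(R, A)

# Direct definition of M for comparison with the factorized form.
M_direct = [[1.0 / (len(M_u[u]) * len(blocks[block_of[v]]))
             if block_of[v] in M_u[u] else 0.0
             for v in range(n)] for u in range(n)]
assert M == M_direct

eta, mu = 0.7, 0.2
P = [[eta * H[u][v] + mu * M[u][v] + (1 - eta - mu) / n
      for v in range(n)] for u in range(n)]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```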
\bigskip
\section{Necessary and Sufficient Conditions for Random Surfing Without Teleportation}
Although in the general case the teleportation matrix is required to ensure the final stochastic matrix produces a well-defined ranking vector, in this Section we show that NCDawareRank model carries the possibility of discarding matrix $\mathbf{E}$ altogether. Before we proceed to the proof of our main result (Section~\ref{SubSec:Primitivity}) we present here the necessary preliminary definitions and theorems.
\subsection{Preliminaries}
\begin{definition}[Irreducibility]
An $n\times n$ non-negative matrix $\mathbf{P}$ is called \textit{irreducible} if for every pair of indices $i,j \in [1,n]$, there exists a positive integer $m \equiv m(i,j)$ such that $[\mathbf{P}^m]_{ij}>0$. The class of all non-negative irreducible matrices is denoted $\mathfrak{I}$.
\end{definition}
\begin{definition}[Period]
The \textit{period} of an index $i\in[1,n]$ is defined to be the greatest common divisor of all positive integers $m$ such that $[\mathbf{P}^m]_{ii}>0$.
\end{definition}
\begin{proposition}[Periodicity as a Matrix Property]
For an irreducible matrix, the period of every index is the same and is referred to as the period of the matrix.
\end{proposition}
\begin{definition}[Primitivity]
An irreducible matrix with period $d=1$, is called \textit{primitive}. The important subclass of all primitive matrices will be denoted $\mathfrak{P}$.
\end{definition}
Finally, we give here, without proof, the following fundamental result of the theory of non-negative matrices\footnote{For a thorough treatment of the theory, as well as proofs of several formulations of the Perron-Frobenius theorem, the interested reader is referred to~\cite{seneta2006non}.}.
\begin{theorem}[Perron-Frobenius Theorem for Primitive Matrices\cite{Frobenius-1908-theorem,Perron-1907-theorem}]
Suppose $\mathbf{T}$ is an $n\times n$ non-negative primitive matrix. Then, there exists an eigenvalue $r$ such that:
\begin{enumerate}
\item[{\upshape(a)}] $r$ is real and positive,
\item[{\upshape(b)}] with $r$ can be associated strictly positive left and right eigenvectors,
\item[{\upshape(c)}] $r>\lvert\lambda\rvert$ for any eigenvalue $\lambda\neq r$,
\item[{\upshape(d)}] the eigenvectors associated with $r$ are unique up to constant multiples,
\item[{\upshape(e)}] if $0\leq \mathbf{B} \leq \mathbf{T}$ and $\beta$ is an eigenvalue of $\mathbf{B}$, then $\lvert \beta \rvert \leq r$. Moreover,
\begin{displaymath}
\lvert \beta \rvert = r \quad \Longrightarrow \quad \mathbf{B}=\mathbf{T}
\end{displaymath}
\item[{\upshape(f)}] $r$ is a simple root of the characteristic equation of $\mathbf{T}$.
\end{enumerate}
\end{theorem}
\subsection{NCDawareRank Primitivity Criterion}
\label{SubSec:Primitivity}
Mathematically, in the standard PageRank model the introduction of the teleportation matrix can be seen as a \textit{primitivity adjustment} of the final stochastic matrix. Indeed, the hyperlink matrix is typically reducible~\cite{LangvilleMeyer06,pagerank}, so if the teleportation matrix had not existed the PageRank vector would not be well-defined.
In the general case, the same holds for NCDawareRank as well. However, for suitable decompositions of the underlying graph, matrix $\mathbf{M}$ opens the door to achieving primitivity without resorting to the uninformative teleportation matrix. Here, we show that this ``suitability'' of the decompositions can, in fact, be reflected in the properties of a low dimensional \textbf{Indicator Matrix} defined below:
\begin{definition}[Indicator Matrix]
For every decomposition $\mathcal{M}$, we define an Indicator Matrix $\mathbf{W}\in \mathbb{R}^{K \times K}$ designed to capture the inter-block relations of the underlying graph. Concretely, matrix $\mathbf{W}$ is defined as follows:
\begin{displaymath}
\mathbf{W} \triangleq \mathbf{A}\mathbf{R},
\end{displaymath}
where $\mathbf{A,R}$ are the factors of the inter-level proximity matrix $\mathbf{M}$.
\end{definition}
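To make the definition concrete, here is a hypothetical toy construction of $\mathbf{W}=\mathbf{A}\mathbf{R}$. The factor matrices $\mathbf{A}$, $\mathbf{R}$ below are illustrative stand-ins with the shapes and normalization described in the text, not the exact matrices of the NCDawareRank model:

```python
import numpy as np

# Toy decomposition: 4 nodes, 2 blocks D1 = {0,1}, D2 = {2,3}.
# A (K x n): row-normalized block-membership indicator (block -> nodes),
# R (n x K): row-normalized proximal-set indicator (node -> blocks).
# Concrete numbers are illustrative only.
A = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5]])
R = np.array([[1.0, 0.0],
              [0.5, 0.5],     # node 1 also "sees" block D2
              [0.0, 1.0],
              [0.5, 0.5]])

W = A @ R                      # K x K indicator matrix
M = R @ A                      # n x n inter-level proximity matrix
```

Here `W` comes out entrywise positive, hence irreducible, which is exactly the condition exploited in the theorem that follows; `M` is row-stochastic by construction.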
Clearly, whenever $[\mathbf{W}]_{IJ}$ is positive, there exists a node $u \in \mathcal{D}_I$ such that $\mathcal{D}_J \in \mathcal{M}_u$. Intuitively, one can see that a positive element in matrix $\mathbf{W}$ implies the existence of possible inter-level ``random surfing paths'' between the nodes belonging to the corresponding blocks. Thus, if the indicator matrix $\mathbf{W}$ is irreducible, these paths exist between every pair of nodes in the graph, which makes the stochastic matrix $\mathbf{M}$ also irreducible.
In fact, in the following theorem we show that the irreducibility of matrix $\mathbf{W}$ suffices to certify the primitivity of the final NCDawareRank matrix $\mathbf{P}$. Then, choosing positive numbers $\eta,\mu$ that sum to one leads to a well-defined ranking vector produced by an NCDawareRank model without a teleportation component.
\begin{theorem}[Primitivity Criterion]
The NCDawareRank matrix $\mathbf{P} = \eta\mathbf{H} + \mu\mathbf{M}$, with $\eta$ and $\mu$ positive real numbers such that $\eta + \mu = 1$, is primitive if and only if the indicator matrix $\mathbf{W}$ is irreducible. Concretely, $\mathbf{P} \in \mathfrak{P} \iff \mathbf{W} \in \mathfrak{I}$.
\label{Theorem:PrimitivityConditions}
\end{theorem}
\begin{proof}
We will first prove that
\begin{equation}
\mathbf{W} \in \mathfrak{I} \implies \mathbf{P} \in \mathfrak{P}
\end{equation}
First notice that whenever matrix $\mathbf{W}$ is irreducible, it is also primitive. In particular, it is known that a non-negative irreducible matrix with at least one positive diagonal element is primitive. In the case of matrix $\mathbf{W}$, notice that by the definition of the proximal sets and of the matrices $\mathbf{A,R}$, we get that $[\mathbf{W}]_{ii}>0$ for every $i$ in $[1,K]$. Thus, the irreducibility of the indicator matrix also ensures its primitivity. Formally, we have
\begin{equation}
\mathbf{W} \in \mathfrak{I} \implies \mathbf{W} \in \mathfrak{P}
\end{equation}
Now if the indicator matrix $\mathbf{W}$ is primitive the same is true for the inter-level proximity matrix $\mathbf{M}$. We prove this in the following lemma.
\begin{lemma} The primitivity of the indicator matrix $\mathbf{W}$ implies the primitivity of the inter-level proximity matrix $\mathbf{M}$, defined over the same decomposition, i.e.,
\begin{equation}
\mathbf{W} \in \mathfrak{P} \implies \mathbf{M} \in \mathfrak{P}
\end{equation}
\label{lemma:W2M}
\end{lemma}
\begin{proof}
It suffices to show that there exists a number $m$ such that $[\mathbf{M}^m]_{ij}>0$ holds for every pair of indices $i,j$; equivalently, that there exists a positive integer $m$ such that $\mathbf{M}^m$ is a positive matrix (see \cite{seneta2006non}).
This can be seen easily using the factorization of matrix $\mathbf{M}$ given above. In particular, since $\mathbf{W}\in\mathfrak{P}$, there exists a positive integer $k$ such that $\mathbf{W}^k>0$. Now, if we choose $m = k+1$, we get:
\begin{eqnarray}
\mathbf{M}^m & = & (\mathbf{RA})^{k+1} \nonumber \\
& = & \underbrace{\mathbf{(RA)(RA)\cdots (RA)}}_{k+1\text{ times}} \nonumber \\
& = & \mathbf{R} \underbrace{\mathbf{(AR)(AR)\cdots (AR)}}_{k\text{ times}}\mathbf{A} \nonumber \\
& = & \mathbf{R} \mathbf{W}^k \mathbf{A}
\label{rel:M_Prim}
\end{eqnarray}
However, matrix $\mathbf{W}^k$ is positive and since every row of matrix $\mathbf{R}$ and every column of matrix $\mathbf{A}$ are -- by definition -- non-zero, the final matrix, $\mathbf{M}^m$, is also positive. Thus, $\mathbf{M} \in \mathfrak{P}$, and the proof is complete. \qed
\end{proof}
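The factorization identity $\mathbf{M}^{k+1} = \mathbf{R}\mathbf{W}^{k}\mathbf{A}$ behind relation~(\ref{rel:M_Prim}) can be spot-checked numerically; the factor matrices below are hypothetical toy data with the non-zero rows and columns the lemma assumes:

```python
import numpy as np

# Illustrative factors: every row of R and every column of A is non-zero,
# as required by the lemma. These are toy values, not from the paper.
R = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0],
              [0.5, 0.5]])
A = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5]])
M, W = R @ A, A @ R

k = 3
lhs = np.linalg.matrix_power(M, k + 1)   # M^{k+1} = (RA)^{k+1}
rhs = R @ np.linalg.matrix_power(W, k) @ A   # telescoped form R W^k A
assert np.allclose(lhs, rhs)
assert (lhs > 0).all()    # W^k > 0 forces M^{k+1} > 0, as in the proof
```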
Now, in order to obtain the primitivity of the final stochastic matrix $\mathbf{P}$, we use the following lemma, which shows that any convex combination of stochastic matrices in which at least one primitive matrix enters with a positive coefficient is also primitive.
\begin{lemma}
Let $\mathbf{A}$ be a primitive stochastic matrix and $\mathbf{B_1,B_2,\dots,B_n}$ stochastic matrices, then matrix
\begin{displaymath}
\mathbf{C} = \alpha \mathbf{A}+\beta_1\mathbf{B_1}+\dots+\beta_n\mathbf{B_n}
\end{displaymath} where $\alpha>0$ and $\beta_1,\dots,\beta_n\geq0$ such that $\alpha+\beta_1+\dots+\beta_n=1$ is a primitive stochastic matrix.
\label{Lemma1}
\end{lemma}
\begin{proof}
Clearly matrix $\mathbf{C}$ is stochastic as a convex combination of stochastic matrices (see~\cite{horn2012matrix}).
For the primitivity part it suffices to show that there exists a natural number, $m$, such that $\mathbf{C}^m>0$. This can be seen very easily. In particular, since matrix $\mathbf{A} \in \mathfrak{P}$, there exists a number $k$ such that every element in $\mathbf{A}^{k}$ is positive.
Consider the matrix $\mathbf{C}^m$:
\begin{eqnarray}
\mathbf{C}^m & = & (\alpha\mathbf{A}+\beta_1\mathbf{B_1}+\dots+\beta_n\mathbf{B_n})^m \nonumber \\
& = & \alpha^m\mathbf{A}^m + (\text{sum of non-negative matrices})
\end{eqnarray}
Now letting $m=k$, we get that every element of matrix $\mathbf{C}^{k}$ is strictly positive, which completes the proof. \qed
\end{proof}
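A quick numerical illustration of Lemma~\ref{Lemma1}, with hypothetical toy matrices: mixing a primitive stochastic matrix, with positive weight, into a non-primitive permutation matrix yields a primitive combination.

```python
import numpy as np

def is_primitive(T):
    # Wielandt's bound: primitive iff T**(n^2 - 2n + 2) is entrywise positive
    n = T.shape[0]
    return bool((np.linalg.matrix_power(T, n * n - 2 * n + 2) > 0).all())

# A_p: primitive stochastic matrix (irreducible, positive diagonal).
# B: cyclic permutation matrix -- stochastic but periodic, hence not primitive.
A_p = np.array([[0.5, 0.5, 0.0],
                [0.0, 0.5, 0.5],
                [0.5, 0.0, 0.5]])
B = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)

C = 0.3 * A_p + 0.7 * B      # alpha = 0.3 > 0, weights sum to 1
```

As the lemma predicts, `is_primitive(B)` is `False`, while both `is_primitive(A_p)` and `is_primitive(C)` are `True`.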
As we have seen, when $\mathbf{W}\in \mathfrak{I}$, matrix $\mathbf{M}$ is primitive. Furthermore, $\mathbf{M}$ and $\mathbf{H}$ are by definition stochastic. Thus, Lemma~\ref{Lemma1} applies and we get that the NCDawareRank matrix $\mathbf{P}$, is also primitive.
In conclusion, we have shown that:
\begin{equation}
\mathbf{W}\in\mathfrak{I}
\implies\mathbf{W}\in\mathfrak{P}\implies\mathbf{M}\in\mathfrak{P}\implies\mathbf{P}\in\mathfrak{P}
\label{ReverseProof}
\end{equation}
which proves the reverse direction of the theorem.
To prove the forward direction (i.e. $\mathbf{P} \in \mathfrak{P} \implies \mathbf{W} \in \mathfrak{I}$) it suffices to show that whenever matrix $\mathbf{W}$ is reducible, matrix $\mathbf{P}$ is also reducible (and thus, not primitive \cite{seneta2006non}). First observe that when matrix $\mathbf{W}$ is reducible the same holds for matrix $\mathbf{M}$.
\begin{lemma}
The reducibility of the indicator matrix $\mathbf{W}$ implies the reducibility of the inter-level proximity matrix $\mathbf{M}$. Concretely,
\begin{equation}
\mathbf{W} \notin \mathfrak{I} \implies \mathbf{M} \notin \mathfrak{I}
\end{equation}
\end{lemma}
\begin{proof}
Assume that matrix $\mathbf{W}$ is reducible. Then, there exists a permutation matrix $\mathbf{\Pi}$ such that $\mathbf{\Pi W \Pi^\intercal}$ has the form
\begin{equation}
\begin{bmatrix}
\mathbf{X} & \mathbf{Z} \\
\mathbf{0} & \mathbf{Y}
\end{bmatrix}
\label{rel:BlockUpperDiagonal}
\end{equation}
where $\mathbf{X,Y}$ are square matrices~\cite{seneta2006non}. Notice that a similar block upper triangular form can then be achieved for matrix $\mathbf{M}$. In particular, the existence of the zero block in~(\ref{rel:BlockUpperDiagonal}), together with the definition of matrices $\mathbf{A,R}$, ensures the existence of a set of blocks with the property that none of their constituent nodes has outgoing edges to the rest of the nodes in the graph\footnote{Notice that if this were not the case, there would necessarily be a nonzero element in the block below the diagonal.}. Thus, reorganizing the rows and columns of matrix $\mathbf{M}$ so that these nodes are assigned the last indices results in a matrix $\mathbf{M}$ with a similar block upper triangular form. This makes $\mathbf{M}$ reducible too. \qed
\end{proof}
Thus, we only need to show that the reducibility of matrix $\mathbf{M}$ implies the reducibility of matrix $\mathbf{P}$ as well. This follows from the fact that, by definition,
\begin{equation}
[\mathbf{M}]_{ij}=0 \implies [\mathbf{H}]_{ij}=0.
\end{equation} So, the permutation matrix that brings $\mathbf{M}$ into the form of~(\ref{rel:BlockUpperDiagonal}) has exactly the same effect on matrix $\mathbf{H}$. Consequently, the final stochastic matrix $\mathbf{P}$, as a combination of matrices $\mathbf{H}$ and $\mathbf{M}$, has the same block upper triangular form. This makes matrix $\mathbf{P}$ reducible and hence non-primitive.
Therefore, we have shown that $\mathbf{W} \notin \mathfrak{I} \implies \mathbf{P} \notin \mathfrak{P}$, which is equivalent to
\begin{equation}
\mathbf{P} \in \mathfrak{P} \implies \mathbf{W} \in \mathfrak{I}
\label{ForwardProof}
\end{equation}
Putting everything together, we see that both directions of our theorem have been established. Thus we get,
\begin{equation}
\mathbf{P} \in \mathfrak{P} \iff \mathbf{W} \in \mathfrak{I}
\end{equation} and our proof is complete. \qed
\end{proof}
Now, when the stochastic matrix $\mathbf{P}$ is primitive, from the Perron-Frobenius theorem it follows that its largest eigenvalue -- which is equal to 1 -- is unique and it can be associated with strictly positive left and right eigenvectors. Therefore, under the conditions of Theorem~\ref{Theorem:PrimitivityConditions}, the ranking vector produced by the NCDawareRank model -- which is defined to be the stationary distribution of the stochastic matrix $\mathbf{P}$: (a) is uniquely determined as the (normalized) left eigenvector of $\mathbf{P}$ that corresponds to the eigenvalue 1 and, (b) its support includes every node in the underlying graph. The following corollary, summarizes the result.
\begin{corollary}
When the indicator matrix $\mathbf{W}$ is irreducible, the ranking vector produced by NCDawareRank with $\mathbf{P} = \eta\mathbf{H} + \mu\mathbf{M}$, where $\eta,\mu$ are positive real numbers such that $\eta + \mu = 1$, is a well-defined distribution that assigns positive ranking to every node in the graph.
\end{corollary}
\section{Generalizing the NCDawareRank Model}
\label{Sec_Overlapping}
\subsection{The Case of Overlapping Blocks}
In our discussion so far, we assumed that the block decomposition defines a partition of the underlying space. However, in many realistic ranking scenarios it would be useful to allow the blocks to overlap. For example, if one wants to produce top-$N$ lists of movies for a ranking-based recommender system using NCDawareRank, a very intuitive criterion for decomposition would be the categorization of movies into genres~\cite{NikolakopoulosG14}. Of course, such a decomposition naturally results in overlapping blocks, since a movie usually belongs to more than one genre.
Fortunately, the factorization of the inter-level proximity matrix paves the way towards a straightforward generalization that inherits all the useful mathematical properties and computational characteristics of the standard NCDawareRank model.
In particular, it suffices to modify the definition of decompositions as indexed families of non-empty sets
\begin{equation}
\mathcal{\hat{M}} \triangleq \{\mathcal{\hat{D}}_1,\dots,\mathcal{\hat{D}}_K\}
\end{equation}
that collectively cover the underlying space, i.e.
\begin{equation}\mathcal{U}=\bigcup_{k=1}^{K}\mathcal{\hat{D}}_k
\end{equation}
and to slightly modify the definitions of:
\begin{itemize}
\item Proximal Sets: \begin{equation}
\mathcal{\hat{M}}_u \triangleq \bigcup_{{w \in (u\cup\mathcal{G}_u),w \in \mathcal{\hat{D}}_k}}\mathcal{\hat{D}}_k
\label{def:proximal_ovelapping}
\end{equation}
\item Inter-Level Proximity Matrix: \begin{equation}
[\mathbf{\hat{M}}]_{uv}\triangleq \sum_{\mathcal{\hat{D}}_k \in \mathcal{\hat{M}}_{u}, v \in \mathcal{\hat{D}}_k}\frac{1}{N_{u}\lvert \mathcal{\hat{D}}_k\rvert}
\label{def:M_overlapping}
\end{equation}
\item Factor Matrices $\mathbf{\hat{A}},\mathbf{\hat{R}}$: We first define a matrix $\mathbf{X}\in \mathbb{R}^{n\times K}$, whose $ik^{\textit{th}}$ element is 1 if $\mathcal{\hat{D}}_k \in \mathcal{\hat{M}}_i$ and zero otherwise, and a matrix $\mathbf{Y}\in \mathbb{R}^{K\times n}$, whose $kj^{\textit{th}}$ element is 1 if $v_j \in \mathcal{\hat{D}}_k$ and zero otherwise.
Then, if $\mathbf{\hat{R}}$, $\mathbf{\hat{A}}$ denote the row-normalized versions of $\mathbf{X}$ and $\mathbf{Y}$ respectively, matrix $\mathbf{\hat{M}}$ can be expressed as:
\begin{equation}
\mathbf{\hat{M}} = \mathbf{\hat{R}} \mathbf{\hat{A}}, \quad \mathbf{\hat{R}} \in \mathbb{R}^{n\times K}, \mathbf{\hat{A}} \in \mathbb{R}^{K\times n}.
\end{equation}
\end{itemize}
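The construction of $\mathbf{\hat{R}}, \mathbf{\hat{A}}$ by row normalization can be sketched as follows; the indicator matrices $\mathbf{X}$, $\mathbf{Y}$ below encode a hypothetical overlapping decomposition with three nodes and two blocks:

```python
import numpy as np

# Overlapping toy decomposition: 3 nodes, blocks D1 = {0,1}, D2 = {1,2}
# (node 1 belongs to both). X marks which blocks lie in each node's
# proximal set; Y marks block membership. Entries are illustrative.
X = np.array([[1, 0],
              [1, 1],
              [0, 1]], dtype=float)
Y = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)

R_hat = X / X.sum(axis=1, keepdims=True)   # row-normalize X
A_hat = Y / Y.sum(axis=1, keepdims=True)   # row-normalize Y
M_hat = R_hat @ A_hat                      # inter-level proximity matrix
```

Since neither $\mathbf{X}$ nor $\mathbf{Y}$ has a zero row, `M_hat` is row-stochastic, in line with the remark that follows.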
\begin{remark}
Notice that the Inter-Level Proximity Matrix above is a well-defined stochastic matrix for every possible decomposition. Its stochasticity follows immediately from the row normalization of matrices $\mathbf{\hat{R}}, \mathbf{\hat{A}}$, together with the fact that neither matrix $\mathbf{X}$ nor matrix $\mathbf{Y}$ has zero rows. Indeed, the existence of a zero row in matrix $\mathbf{X}$ implies
\begin{math}
\mathcal{U} \neq \bigcup_{k=1}^{K}\mathcal{\hat{D}}_k,
\end{math}
which contradicts the definition of $\mathcal{\hat{M}}$; similarly, the existence of a zero row in matrix $\mathbf{Y}$ contradicts the definition of the NCD blocks $\mathcal{\hat{D}}_k$, which are defined to be non-empty.
\end{remark}
\begin{remark}
Also notice that our primitivity criterion, given by Theorem~\ref{Theorem:PrimitivityConditions}, applies in the overlapping case as well, since our proof made no assumption of mutual exclusivity for the NCD blocks. In fact, it is intuitively evident that overlapping blocks promote the irreducibility of the indicator matrix $\mathbf{W}$.
\end{remark}
\section{Discussion and Future Work}
\label{Sec_Conclussions}
In this work, using an approach based on the theory of non-negative matrices,
we study NCDawareRank's inter-level proximity model and derive necessary and sufficient conditions under which the underlying decomposition alone can result in a well-defined ranking vector -- eliminating the need for uniform teleportation. Our goals here were mainly theoretical. However, our first findings in applying this ``no teleportation'' approach to realistic problems suggest that the conditions for primitivity are not prohibitively restrictive, especially when the criterion behind the definition of the decomposition implies overlapping blocks~\cite{NikolakopoulosG14,Nikolakopoulos2015,nikolakopoulos2015top}.
A very exciting direction we are currently pursuing involves the spectral implications of the absence of the teleportation matrix. In particular, a very interesting problem would be to determine bounds of the subdominant eigenvalue of the stochastic matrix $\mathbf{P} = \eta\mathbf{H} + \mu\mathbf{M}$, when the indicator matrix $\mathbf{W}$ is irreducible. Another important direction would be to proceed to randomized definitions of blocks that satisfy the primitivity criterion and to test the effect on the quality of the ranking vector.
In conclusion, we believe our results suggest that the NCDawareRank model presents a promising approach towards generalizing and enriching the standard random surfer model, and that it carries the potential of providing an intuitive alternative teleportation scheme for the many applications of PageRank in hierarchical or otherwise specially structured graphs.
\bibliographystyle{splncs03}
\section{Introduction}
\subsection{Setting the problem}
It is common wisdom that the spectral properties of second-order differential operators and Jacobi operators are in many respects similar. Recently, this analogy was used in the articles \cite{Y/LD, JLR} to recover known, and obtain some new, results for
Jacobi operators with coefficients stabilizing at infinity. In this paper we move in the opposite direction and study
differential operators in the limit circle case, relying on an analogy with similar problems for Jacobi operators.
In fact, we follow rather closely the approach developed for Jacobi operators in \cite{Jacobi-LC}.
We consider second-order differential operators ${\mathcal A}$ defined by the formula
\begin{equation}
( {\mathcal A} u)(x)=- \big(p(x) u' (x)\big)' + q(x) u(x) ,\qquad x\in {\mathbb R}_{+},
\label{eq:ZP+}\end{equation}
and acting in
the space $L^2 ({\mathbb R}_{+})$. The scalar product in this space is denoted $\langle \cdot, \cdot\rangle$; $I$ is the identity operator. We always suppose that the functions $p(x)$ and $q(x)$ are real. Then the operator~${\mathcal A}$ defined on the set $ C_{0} ^\infty ({\mathbb R}_{+}) $ is symmetric, but to make it self-adjoint, one has to add boundary conditions at $x=0$ and, possibly, as $x\to\infty$. We suppose that the conditions at these two points are separated. The boundary condition at the point $x=0$ reads
\begin{equation} \label{eq:BCz}
u'(0)= \alpha u(0), \qquad\mbox{where}\quad \alpha =\bar{\alpha}.
\end{equation}
The value $\alpha=\infty$ is not excluded. In this case \eqref{eq:BCz} should be understood as the equality $u(0)=0$. We always require condition \eqref{eq:BCz}, fix $\alpha$, and do not keep track of $\alpha$ in the notation.
Our objective is to study the singular case where all solutions $u$ of the equation ${\mathcal A}u=z u $ for $z\in{\mathbb C}$ are in $L^2 ({\mathbb R}_{+})$. This instance is known as the limit circle (LC) case. In this case the operator ${\mathcal A}$ with boundary condition~\eqref{eq:BCz} has a one-parameter family of self-adjoint realizations, distinguished by some conditions as $x\to\infty$. Their description can be performed in various terms. Here we adopt an approach similar to the one used for Jacobi operators, as presented in the book \cite[Section~16.3]{Schm} or in the survey \cite[Section~2]{Simon}.
\subsection{Structure of the paper}\label{section1.2}
In Sections~\ref{section2.1} and~\ref{section2.2}, we collect standard information about differential equations of second-order and realizations of differential operators ${\mathcal A}$ in the space $L^2 ({\mathbb R}_{+})$. We first define symmetric operators $A_{\min}$ with minimal domains
$\mathcal{D}(A_{\min})$. Their self-adjoint extensions $A$ satisfy the condition
\begin{equation}
A_{\min}\subset A =A^*\subset A_{\min}^*=: A_{\max}.
\label{eq:Neum3}\end{equation}
In the LC case the operators $A_{\max}$ are not symmetric. In Section~\ref{section2.3}, we recall the traditional procedure of constructing self-adjoint extensions of the operator $A_{\min}$ in terms of some boundary conditions for $x\to\infty$. Then we suggest in Section~\ref{section2.4} an alternative approach to this problem, where self-adjoint extensions $A_{t}$, $t\in {\mathbb R}\cup\{ \infty\}$, of $A_{\min}$ are defined in a way analogous to the case of the Jacobi operators.
Our main result, an explicit formula for the resolvents $R_t (z)= (A_t-zI)^{-1}$, is obtained in Section~\ref{section3.2}, Theorem~\ref{RES}. As a preliminary step, we construct in Section~\ref{section3.1} (see Theorem~\ref{res}) an operator~${\mathcal R}(z)$ playing, in some sense, the role of the resolvent of the maximal operator $A_{\max}$. The operator~${\mathcal R}(z)$, which we call the {\it quasiresolvent}, is the key element of our construction.
Note that the operator-valued function~${\mathcal R}(z)$ depends analytically on $z\in{\mathbb C}$. Then, using the operator~${\mathcal R}(z)$, we prove Theorem~\ref{RES}. This also yields a representation (see Section~\ref{section3.3}) for the spectral families~$E_{t}(\lambda)$ of~$A_{t}$, which is a modification of the Nevanlinna formula in the theory of Jacobi operators; see the original paper \cite{Nevan} or \cite{Schm, Simon}.
\section{Differential equations and associated operators}\label{section2}
We refer to the books \cite[Section~17]{Nai} and \cite[Section~X.1]{RS} for necessary background information on the theory of symmetric differential operators. A lot of relevant results can also be found in the encyclopedic book \cite{Zettl};
see, in particular, Chapter~10.
\subsection{Limit point versus limit circle}\label{section2.1}
Let us consider a second-order differential equation
\begin{equation}
- (p(x) u' (x))' + q(x) u(x)=z u(x)
\label{eq:Jy}\end{equation}
associated with operator \eqref{eq:ZP+}. To avoid inessential technical complications, we
always suppose that $ p \in C^1 ({\mathbb R}_{+})$, $ q \in C ({\mathbb R}_{+})$ and the functions $p(x)$, $q(x)$ have finite limits as $x\to 0$.
More general conditions on the regularity of $p(x)$ and $q(x)$ are stated, for example, in \cite[Section~15]{Nai}. We assume that $p(x)> 0$ for $x\geq 0$. The solutions of equation~\eqref{eq:Jy} exist, belong to $C^2 ({\mathbb R}_{+})$, and have limits $u(+0)= : u(0)$, $u'(+0)= : u'(0)$. A solution $u(x)$ is distinguished uniquely by the boundary conditions $u(0)=u_{0}$, $u'(0)=u_{1}$.
Recall that for arbitrary solutions $u$ and $v$ of equation \eqref{eq:Jy} their Wronskian
\[
\{ u, v \} : = p(x) (u' (x) v (x)- u (x) v' (x))
\]
does not depend on $x\in{\mathbb R}_{+}$. Clearly, the Wronskian $\{ u, v \} =0$ if and only if the solutions $u$ and $v$ are proportional.
We introduce a couple of standard solutions of equation \eqref{eq:Jy} by boundary conditions
\begin{equation}
\begin{cases}
\varphi_{z}(0)= 1,&\quad \varphi'_{z}(0)=\alpha,
\\
\theta_{z}(0)= 0,& \quad \theta'_{z}(0)=- p(0)^{-1},
\end{cases} \qquad \mbox{if} \quad \alpha\in{\mathbb R}
\label{eq:Pz}\end{equation}
and
\begin{equation}
\begin{cases}
\varphi_{z}(0)= 0,&\quad \varphi'_{z}(0)=1,
\\
\theta_{z}(0)= p(0)^{-1},&\quad \theta'_{z}(0)= 0,
\end{cases} \qquad \mbox{if}\quad \alpha=\infty.
\label{eq:Pzi}\end{equation}
Clearly, $\varphi_{z} (x)$ (but not $\theta_{z} (x)$) satisfies boundary condition \eqref{eq:BCz}. Note also that the Wronskian
$\{\varphi_{z}, \theta_{z}\}=1$.
The Weyl limit point/circle theory (see, e.g., \cite[Chapter~IX]{CoLe}) states that dif\-fe\-ren\-tial equation~\eqref{eq:Jy} always has a non-trivial solution in $L^2 ({\mathbb R}_{+})$ for $\operatorname{Im} z \neq 0$. This solution is either unique (up to a constant factor), or all solutions of~\eqref{eq:Jy} belong to $L^2 ({\mathbb R}_{+})$.
The first instance is known as the limit point (LP) case, and the second one as the limit circle (LC) case.
In the LC case we have
\begin{equation}
\varphi_{z}\in L^2 ({\mathbb R}_{+}),\qquad \theta_{z}\in L^2 ({\mathbb R}_{+})\qquad \mbox{for all}\quad z\in {\mathbb C}.
\label{eq:PQz}\end{equation}
\subsection{Minimal and maximal operators}\label{section2.2}
We first define a minimal
operator $A_{00}$ by the equality $A_{00} u= {\mathcal A} u$ on domain
$ {\mathcal D} (A_{00}) $ that consists of functions $u\in C ^2 ({\mathbb R}_{+}) $ such that $u(x)=0$ for sufficiently large $x$, limits $u(+0)=: u(0)$, $u'(+0)=: u'(0)$ exist and condition \eqref{eq:BCz} is satisfied.
Thus, the boundary condition \eqref{eq:BCz} at $x=0$ is included in the definition of the operator $A_{00}$ so that its self-adjoint extensions are determined by conditions for $x\to\infty$.
The closure of $A_{00}$ will be denoted $A_{\min} $. This operator is
symmetric in the space $L^2 ({\mathbb R}_{+})$, but without additional assumptions on the coefficients $p(x)$ and $q(x)$ its domain $ {\mathcal D} (A_{\min}) $ does not admit an efficient description. The adjoint operator $A^*_{\min} =: A_{\max}$ is again given by the formula $A_{\max} u={\mathcal A} u$ on a set $ {\mathcal D} (A_{\max})$ that consists of functions
$u(x)$ belonging locally to the Sobolev space ${\sf H}^2$, satisfying boundary condition \eqref{eq:BCz} and such that $u\in L^2 ({\mathbb R}_{+})$, ${\mathcal A} u\in L^2 ({\mathbb R}_{+})$.
In the LC case, the operator $A_{\max}$ is not symmetric. Integrating by parts, we see that
for all $u, v \in {\mathcal D} (A_{\max})$
\[
\langle {\mathcal A} u,v \rangle - \langle u, {\mathcal A} v \rangle = \lim_{x\to\infty} p(x) \big(u'(x)\bar{v} (x)- \bar{u}' (x) v(x)\big),
\]
where the limit on the right-hand side exists but is not necessarily zero.
Recall that
\[
A_{\min}= A_{\min}^{**} = A_{\max}^{*}.
\]
The operator $A_{\min}$ is self-adjoint if and only if the LP case occurs.
In this paper we are interested in the LC case when
\begin{equation}
A_{\min}\neq A_{\max}=A_{\min}^*.
\label{eq:cl}\end{equation}
Since the operator $A_{\min} $ commutes with the complex conjugation, its deficiency indices
\[
d_{\pm}: =\dim\ker (A_{\max}-z I),\qquad \pm \operatorname{Im} z>0,
\]
are equal, i.e., $d_{+} =d_{-}=:d $, and, so, $A_{\min} $ admits self-adjoint extensions. For an arbitrary $z\in{\mathbb C}$, all solutions of equation \eqref{eq:Jy} with boundary condition~\eqref{eq:BCz}
are given by the formula $u (x)= c \varphi_{z}(x)$ for some $c\in{\mathbb C}$. They belong to $ {\mathcal D} (A_{\max})$ if and only if $\varphi_{z} \in L^2 ({\mathbb R}_{+})$. Therefore $d=0$ if $\varphi_{z} \not\in L^2 ({\mathbb R}_{+})$ for $\operatorname{Im} z\neq 0$; otherwise $d=1$.
\subsection{Boundary conditions at infinity}\label{section2.3}
In this paper we are interested in the case, where equation \eqref{eq:Jy} is in the limit circle (LC) case at infinity. This means that all solutions of this equation for some (and then for all) $z\in{\mathbb C}$ are in $L^2({\mathbb R}_{+})$ or, equivalently, that relation~\eqref{eq:cl} is satisfied.
First, we briefly recall the traditional description of self-adjoint extensions of the minimal operator $A_{\min}$ in terms of boundary conditions at infinity.
We refer to the classical books \cite[Chapter~IX, Section~4]{CoLe} and \cite[Section~18]{Nai} for detailed presentations. We mention also the relatively recent book~\cite{Schm}, where a concise exposition of the case of second-order differential operators is given. The results stated below can be found, for example, in \cite[Proposition~15.14]{Schm}.
Let $v_{j}(x)$, $j=1,2$, be some real valued functions of $x\in{\mathbb R}_{+}$ such that
\begin{equation}
\lim_{x\to\infty} p(x) \big(v_{1}'(x) v_2 (x)- v_{1}(x) v_{2}'(x)\big)=1.
\label{eq:bc1}\end{equation}
Let a set ${\mathcal D}^{(s)} \subset {\mathcal D} ( A_{\max})$ consist of functions $u(x)$ satisfying the condition
\begin{equation}
\lim_{x\to\infty} p(x) \big(u'(x) (s v_{1}(x) + v_{2}(x))- u(x) (s v_{1}'(x) + v_{2}'(x))\big)=0
\label{eq:bc}\end{equation}
if $s\in{\mathbb R}$; if $s=\infty $, then the function $s v_{1}(x) + v_{2}(x)$ in this formula should be replaced by~$v_{1} (x)$.
Then the restriction ${\sf A}^{(s)}$ of the operator $ A_{\max}$ to the domain $ {\mathcal D} ({\sf A}^{(s)}):= {\mathcal D}^{(s)} $ is self-adjoint, and each self-adjoint extension of the operator $ A_{\min}$ coincides with the operator
${\sf A}^{(s)}$ for some $s\in{\mathbb R}\cup\{\infty\}$.
The resolvents of the operators ${\sf A}^{(s)}$ are determined by a formula similar to that of the regular case (see formula~\eqref{eq:R-LP} below). It turns out that equation~\eqref{eq:Jy} with $\operatorname{Im} z\neq 0$ has a solution $u(x)=: f^{(s)}_{z} (x)$ satisfying boundary condition~\eqref{eq:bc}. Then, for all $ h\in L^2 ({\mathbb R}_{+})$ and $ \operatorname{Im} z\neq 0$, one has
\[
\big( \big({\sf A}^{(s)}-z I \big)^{-1} h\big) (x) = \frac{1}{\big\{\varphi_{z}, f^{(s)}_{z}\big\}} \bigg(f_{z}^{(s)}(x) \int_{0}^x \varphi_{z}(y) h (y) \,{\rm d} y+
\varphi_{z}(x) \int_x^\infty f_{z}^{(s)}(y) h (y) \,{\rm d}y\bigg),
\]
where $\big\{\varphi_{z}, f^{(s)}_{z}\big\}$ is the Wronskian of the solutions $\varphi_{z}$ and $ f^{(s)}_{z}$ of equation~\eqref{eq:Jy}.
Note that the results stated above are obtained by approximating the problem on the half-axis~${\mathbb R}_{+}$ by regular problems on intervals $(0,\ell)$ and studying the limit $\ell\to\infty$.
\subsection{Self-adjoint extensions}\label{section2.4}
The description of self-adjoint extensions of the operator $A_{\min}$ given in the previous subsection does not seem very efficient. In particular, it depends on the choice of the functions $v_{j}(x)$, $j=1,2$, satisfying condition \eqref{eq:bc1}. We suggest an alternative approach, motivated by an analogy with Jacobi operators, in Theorem~\ref{Neum2} (cf.\ \cite[Lemma~6.22 and Theorem~6.23]{Schm} or \cite[Theorem~2.6]{Simon}). In the long run, it relies on the von Neumann formulas, but it is adapted to operators~\eqref{eq:ZP+} with real coefficients~$p(x)$ and~$q(x)$.
Our descriptions of the various domains are given in terms of the solutions $\varphi_{z}(x)$ and $\theta_{z}(x)$ of differential equation \eqref{eq:Jy}. Note that the function $\varphi_{z}(x)$ satisfies boundary condition \eqref{eq:BCz}, so that $\varphi_{z} \in {\mathcal D}(A_{\max})$, but this is not the case for $\theta_{z}(x)$. To get rid of this nuisance, we introduce the function $\tilde{\theta}_z (x)= \omega(x) \theta_z (x)$, where the cut-off function $\omega\in C^\infty ({\mathbb R}_{+})$ satisfies $\omega(x)=0$ for small $x$ and $\omega(x)=1$ for large $x$; then $\tilde\theta_{z} \in {\mathcal D}(A_{\max})$. A direct calculation shows that
\begin{equation}
\big( {\mathcal A}\tilde{\theta}_z \big)(x)-z\tilde{\theta}_z (x)=\psi_{z}(x),
\label{eq:the}\end{equation}
where
\begin{equation}
\psi_{z}(x)= -p(x) \omega'(x) \theta'_{z} (x) - \big(p(x) \omega'(x) \theta_{z} (x) \big)'
\label{eq:the1}\end{equation}
has a compact support.
Now we are in a position to describe ${\mathcal D}(A_{\max})$.
For a vector $h\in L^2 ({\mathbb R}_{+})$, we denote by $\{h\}$ the one-dimensional subspace of $L^2 ({\mathbb R}_{+})$ spanned by the vector $h$. The symbol $\dotplus$ denotes the direct sum of subspaces.
\begin{Theorem}\label{Neum}
Let inclusions \eqref{eq:PQz} hold true.
Then
\begin{equation}
{\mathcal D}(A_{\max})= {\mathcal D}(A_{\min}) \dotplus \{\varphi_{0 } \}\dotplus \big\{\tilde{\theta}_{0} \big\}.
\label{eq:Neum}\end{equation}
\end{Theorem}
\begin{Remark}\label{Neur}
Since the difference of functions $\tilde{\theta}_0 $ corresponding to two different cut-offs $\omega(x)$ is in ${\mathcal D}(A_{\min})$, the direct sum in~\eqref{eq:Neum} does not depend on a particular choice of $\tilde{\theta}_0$.
\end{Remark}
\begin{proof}
We begin the proof of Theorem~\ref{Neum} with a direct calculation.
\begin{Lemma}\label{Neum1}
Suppose that
\begin{equation}
u=u_{0}+ \alpha_{1} \varphi_{0}+ \alpha_{2} \tilde{\theta}_{0 }\qquad\mbox{and}\qquad v=v_{0}+ \beta_{1} \varphi_{0}+ \beta_{2} \tilde{\theta}_{0 },
\label{eq:Ne1}
\end{equation}
where $u_{0}, v_{0} \in {\mathcal D}(A_{\min})$ and $\alpha_{j}, \beta_{j}\in {\mathbb C}$. Then
\begin{equation}
\langle A_{\max} u,v\rangle -\langle u, A_{\max} v\rangle = \alpha_{2} \overline{\beta_{1}}-\alpha_{1} \overline{\beta_{2}}.
\label{eq:Ne6}\end{equation}
\end{Lemma}
\begin{proof}
Let us calculate
\[
\langle A_{\max} u,v\rangle = \big\langle A_{\max} \big(u_{0}+ \alpha_{1} \varphi_{0}+ \alpha_{2} \tilde{\theta}_{0 }\big),v_{0}+ \beta_{1} \varphi_0+ \beta_{2} \tilde{\theta}_{0 }\big\rangle.
\]
Using \eqref{eq:the}, we see that
\[
A_{\max} \big(u_{0}+ \alpha_{1} \varphi_{0}+ \alpha_{2} \tilde{\theta}_{0 }\big)= A_{\min} u_{0} + \alpha_{2}\psi_{0}
\]
and
\begin{align*}
\big \langle A_{\min} u_{0} ,v_{0}+ \beta_{1} \varphi_0+ \beta_{2} \tilde{\theta}_{0 }\big\rangle
& =
\langle A_{\min} u_{0} ,v_{0}\rangle + \big\langle u_{0} , A_{\max} \big(\beta_{1} \varphi_0+ \beta_{2} \tilde{\theta}_{0 }\big)\big\rangle
\nonumber \\
& = \langle A_{\min} u_{0} ,v_{0}\rangle + \overline{\beta_{2}} \langle u_{0} , \psi_{0}\rangle,
\end{align*}
whence
\[
\langle A_{\max} u,v\rangle = \langle A_{\min} u_{0} ,v_{0}\rangle + \overline{\beta_{2}} \langle u_{0} , \psi_{0}\rangle
+ \alpha_{2} \langle \psi_{0},v_{0}\rangle
+ \alpha_{2}\overline{\beta_{1}} \langle \psi_{0} , \varphi_{0}\rangle
+ \alpha_{2}\overline{\beta_{2}} \big\langle \psi_{0} , \tilde{\theta}_{0}\big\rangle .
\]
Similarly, we find that
\[
\langle u, A_{\max} v\rangle = \langle u_{0} , A_{\min}v_{0}\rangle + \alpha_{2} \langle \psi_{0}, v_{0}\rangle
+ \overline{\beta_{2}}\langle u_{0}, \psi_{0}\rangle
+ \overline{\beta_{2}}\alpha_{1}\langle \varphi_{0}, \psi_{0}\rangle
+ \overline{\beta_{2}}\alpha_{2} \big\langle \tilde{\theta}_{0}, \psi_{0}\big\rangle .
\]
Comparing the last two equalities and taking into account that the functions $\psi_{0}$, $\varphi_{0}$ and $\tilde{\theta}_{0}$ are real, we see that
\begin{equation}
\langle A_{\max} u,v\rangle - \langle u, A_{\max} v \rangle= \big(\alpha_{2} \overline{\beta_{1}} - \alpha_{1} \overline{\beta_{2}} \big)\langle \varphi_{0}, \psi_{0}\rangle.
\label{eq:Ne5}\end{equation}
Using definition \eqref{eq:the1} and integrating by parts, it is easy to calculate
\[
\langle \varphi_{0}, \psi_{0}\rangle=\int_{0}^\infty p(x) \omega'(x) \big(\theta_{0} (x)\varphi_{0}' (x) - \theta'_{0} (x)\varphi_{0} (x)
\big) \,{\rm d}x.
\]
Since $p(x) \big(\theta_{0} (x)\varphi_{0}' (x) - \theta'_{0} (x)\varphi_{0} (x)\big) = \{ \varphi_{0}, \theta_{0}\}=1$, the integrand reduces to $\omega'(x)$, so the integral equals $\omega(\infty)-\omega(0)=1$. Therefore identity~\eqref{eq:Ne5} can be rewritten as~\eqref{eq:Ne6}.
\end{proof}
Now it is easy to prove Theorem~\ref{Neum}. First we check that the sum on the right-hand side of~\eqref{eq:Neum} is direct, that is, that the inclusion
$
\alpha_{1} \varphi_{0}+ \alpha_{2} \tilde{\theta}_{0 }\in {\mathcal D}(A_{\min})
$
implies that $\alpha_{1}= \alpha_{2} =0$. Indeed, if this inclusion is true, then
\[
\big\langle A_{\max} \big(\alpha_{1} \varphi_{0}+ \alpha_{2} \tilde{\theta}_{0 }\big), \beta_{1} \varphi_{0}+ \beta_{2} \tilde{\theta}_{0 }\big\rangle = \big\langle \alpha_{1} \varphi_{0}+ \alpha_{2} \tilde{\theta}_{0 }, A_{\max} \big(\beta_{1} \varphi_{0}+ \beta_{2} \tilde{\theta}_{0 }\big)\big\rangle
\]
for all $\beta_{1}, \beta_{2}\in {\mathbb C}$. Therefore it follows from Lemma~\ref{Neum1} for the particular case $u_{0}=v_{0}=0$ that $ \alpha_{2} \overline{\beta_{1}}-\alpha_{1} \overline{\beta_{2}}=0$ whence $ \alpha_{1} = \alpha_{2}=0$ because $ \beta_{1} $ and $ \beta_{2}$ are arbitrary.
Obviously, the right-hand side of~\eqref{eq:Neum} is contained in its left-hand side. In fact, equality holds here because the operator $A_{\min}$ has deficiency indices $(1,1)$, so that the dimension of the quotient space $ {\mathcal D}(A_{\max}) / {\mathcal D}(A_{\min}) $ equals~$2$. This concludes the proof of Theorem~\ref{Neum}.
\end{proof}
All self-adjoint extensions $A_{t}$ of the operator $A_{\min}$ are parametrized by numbers $ t\in {\mathbb R}$ and $t=\infty$. The sets ${\mathcal D}(A_{t})\subset {\mathcal D}(A_{\max})$ are defined by the conditions
\begin{equation}
{\mathcal D}(A_{t})= {\mathcal D}(A_{\min})\dotplus \big\{ t \varphi_{0} + \tilde{\theta}_{0}\big\}, \qquad t\in {\mathbb R},
\label{eq:Neum1}\end{equation}
and
\begin{equation}
{\mathcal D}(A_\infty)= {\mathcal D}(A_{\min})\dotplus \{ \varphi_{0 } \}.
\label{eq:Neum2}\end{equation}
\begin{Theorem}\label{Neum2}
Let inclusions \eqref{eq:PQz} hold true. Then all operators $A_{t}$ are self-adjoint. Conversely, every operator $A$ satisfying condition
\eqref{eq:Neum3}
coincides with one of the operators $A_{t}$ for some $t\in{\mathbb R}\cup \{\infty\}$.
\end{Theorem}
\begin{proof}
We proceed from Lemma~\ref{Neum1}. Let $u,v \in{\mathcal D}(A_{\max})$, so that equalities \eqref{eq:Ne1} are satisfied.
If $u,v \in{\mathcal D}(A_{t})$, then according to \eqref{eq:Neum1} or \eqref{eq:Neum2} we have $\alpha_{1}= t\alpha_{2}$, $\beta_{1}= t \beta_{2}$ if $t\in {\mathbb R}$ and $\alpha_2= \beta_{2}=0$ if $t=\infty$. Therefore it follows from relation \eqref{eq:Ne6} that $\langle A_{t}u,v \rangle= \langle u, A_{t} v \rangle$, and hence the operators $A_{t} $ are symmetric.
If $v\in {\mathcal D}(A_{t}^*)$, then $\langle A_{t}u,v \rangle= \langle u, A_{t} v \rangle$ for all $u\in {\mathcal D}(A_{t})$.
Thus, according again to \eqref{eq:Ne6}, $\alpha_{2} \overline{\beta_{1}}-\alpha_{1} \overline{\beta_{2}}=0$ for all $\alpha_{1}$, $\alpha_{2}$ such that $\alpha_{1}=t \alpha_{2}$ if $t\in {\mathbb R}$ and such that $ \alpha_{2}= 0$ if $t=\infty$.
First, let $t\in {\mathbb R}$. Then $\alpha_{2} \big( \overline{\beta_{1}}- t \overline{\beta_{2}}\big)=0$, whence $\beta_{1}=t\beta_{2}$ because $\alpha_{2}$ is arbitrary. If $t=\infty$, we have $\alpha_{1} \overline{\beta_{2}}=0$, whence $\beta_{2}=0$ because $\alpha_1$ is arbitrary. It follows that $v\in {\mathcal D}(A_{t})$, and consequently $A_{t}=A_{t}^*$.
Suppose that an operator $A$ satisfies \eqref{eq:Neum3}. Since $A$ is symmetric, it follows from Lemma~\ref{Neum1} that $\alpha_{2} \overline{\beta_{1}}=\alpha_{1} \overline{\beta_{2}}$ for all $u, v\in {\mathcal D} (A)$ and the corresponding coefficients $\alpha_{j}$, $\beta_{j}$ defined in \eqref{eq:Ne1}. Suppose that $\alpha_{2}\neq 0$ for some $u \in {\mathcal D} (A)$. Then setting $u=v$, we see that $\alpha_{2} \overline{\alpha_{1}}=\alpha_{1} \overline{\alpha_{2}}$ whence $ \alpha_{1}\alpha_{2}^{-1}=:t \in{\mathbb R}$. Now equality $\alpha_{2} \overline{\beta_{1}}=\alpha_{1} \overline{\beta_{2}}$ implies that
$\beta_{1}= t \beta_{2}$ for all $v \in {\mathcal D} (A)$ so that $A=A_{t}$. If $\alpha_{2}= 0$ for all $u \in {\mathcal D} (A)$, then $A=A_\infty$.
\end{proof}
\section{Resolvents of self-adjoint extensions}\label{section3}
Our goal in this section is to construct resolvents of the operators $A_{t}$. We start however with a~construction of a similar object for the operator $A_{\max}$.
\subsection{Quasiresolvent of the maximal operator}\label{section3.1}
Recall that in the LC case
inclusions~\eqref{eq:PQz} are satisfied. Let us define, for all $z\in {\mathbb C}$, a~bounded operator ${\mathcal R} (z)$ in the space $L^2 ({\mathbb R}_{+})$ by the equality
\begin{equation}
( {\mathcal R} (z)h) (x) = \theta_{z} (x) \int_{0}^x \varphi_{z} (y) h(y) \,{\rm d}y+
\varphi_{z} (x) \int_x^\infty \theta_{z} (y) h(y) \,{\rm d} y .
\label{eq:RR11}\end{equation}
We prove (see Theorem~\ref{res}) that, in a natural sense, ${\mathcal R} (z)$ can be considered as a quasiresolvent of the operator~$A_{\max}$: it plays the role of the resolvent of this operator.
Let us enumerate some simple properties of the operator ${\mathcal R} (z)$.
Obviously, the operator ${\mathcal R} (z)$ belongs to the Hilbert--Schmidt class. It depends analytically on $z\in {\mathbb C}$ and ${\mathcal R} (z)^*={\mathcal R} (\bar{z})$. Differentiating
definition \eqref{eq:RR11}, we see that
\begin{equation}
( {\mathcal R} (z)h)' (x) = \theta_{z}' (x) \int_{0}^x \varphi_{z} (y) h(y) \,{\rm d}y+
\varphi_{z}' (x) \int_x^\infty \theta_{z} (y) h(y) \,{\rm d}y
\label{eq:RR12}\end{equation}
for all $h\in L^2 ({\mathbb R}_{+})$.
In particular, it follows from relations~\eqref{eq:RR11} and~\eqref{eq:RR12} that
\begin{equation}
({\mathcal R} (z)h)(0)= \varphi_{z} (0)\langle h, \theta_{\bar{z}}\rangle
\label{eq:r01}\end{equation}
and
\begin{equation}
({\mathcal R} (z)h)'(0)= \varphi_{z}' (0)\langle h, \theta_{\bar{z}}\rangle,
\label{eq:r02}\end{equation}
where $\varphi_{z} (0)$ and $\varphi_{z}' (0)$ are defined by equalities \eqref{eq:Pz} or \eqref{eq:Pzi}.
The proof of the following statement is similar to the construction of the resolvent for essentially self-adjoint Schr\"odinger operators.
\begin{Theorem}\label{res}
Let inclusions \eqref{eq:PQz} hold true.
For all $z \in {\mathbb C}$, we have
\begin{equation}
{\mathcal R} (z)\colon \ L^2 ({\mathbb R}_{+})\to {\mathcal D} (A_{\max} )
\label{eq:qres}\end{equation}
and
\begin{equation}
(A_{\max} -zI) {\mathcal R} (z)=I.
\label{eq:qres1}\end{equation}
\end{Theorem}
\begin{proof}
Let $h\in L^2 ({\mathbb R}_{+})$ and $u(x)= ({\mathcal R} (z)h)(x)$. Boundary condition \eqref{eq:BCz} is a direct consequence of relations \eqref{eq:r01} and \eqref{eq:r02}. Differentiating \eqref{eq:RR12}, we see that
\begin{gather}
(p(x)u' (x))' = (p(x) \theta_{z}' (x) )'\int_{0}^x \varphi_{z} (y) h(y) \,{\rm d}y+ (p(x) \varphi_{z}' (x) )' \int_x^\infty \theta_{z} (y) h(y) \,{\rm d}y \nonumber \\
\hphantom{(p(x)u' (x))' =}{} + p(x) \big( \theta'_{z}(x) \varphi_{z}(x)-\theta
_{z}(x) \varphi'_{z}(x)\big) h(x).
\label{eq:RR13}\end{gather}
Since the Wronskian $\{ \varphi_{z}, \theta_{z}\}=1 $, the last term on the right-hand side equals $-h(x)$. Putting now equalities~\eqref{eq:RR11} and~\eqref{eq:RR13} together and using equation~\eqref{eq:Jy} for the functions $\varphi_{z}(x)$ and $\theta_{z}(x)$, we obtain the equation
\[
- (p(x) u' (x))' + q(x) u(x)-z u(x)=h(x) .
\]
Taking also into account boundary condition \eqref{eq:BCz}, we see that $ A_{\max} u -z u=h $. Since $h\in L^2 ({\mathbb R}_{+})$, this yields both \eqref{eq:qres} and \eqref{eq:qres1}.
\end{proof}
\begin{Remark}\label{res3}
In definition \eqref{eq:RR11}, only boundary condition \eqref{eq:BCz} for $\varphi_{z} (x)$ and the relation $\{ \varphi_{z}, \theta_{z}\}=1 $ for the Wronskian are essential. For example, one can replace the solution $\theta_{z}(x)$ by $\theta_{z}(x)+ \delta \varphi_{z} (x)$ for some $\delta \in {\mathbb C}$. Then the operator
$ {\mathcal R} (z)$ will be replaced by
$\widetilde{\mathcal R} (z)= {\mathcal R} (z) +\delta \langle \cdot, \varphi_{\bar{z} }\rangle \varphi_{z}$ and formulas \eqref{eq:qres}, \eqref{eq:qres1} remain true for $\widetilde{\mathcal R} (z)$.
\end{Remark}
Note that solutions $u(x)$ of differential equation \eqref{eq:Jy} satisfying condition \eqref{eq:BCz} are given by the formula $u(x)= \Gamma \varphi_{z} (x)$ for some $ \Gamma \in{\mathbb C}$. Therefore we
can state
\begin{Corollary}\label{res1}
All solutions of the equation
\[
(A_{\max} -zI) u =h, \qquad \mbox{where} \quad z\in {\mathbb C} \quad \mbox{and}\quad h\in L^2 ({\mathbb R}_{+}),
\]
for $u\in {\mathcal D} (A_{\max} )$ are given by the formula
\begin{equation}
u = \Gamma \varphi_{z}+ {\mathcal R} (z) h \qquad \mbox{for some} \quad \Gamma=\Gamma (z;h) \in{\mathbb C}.
\label{eq:qres3}\end{equation}
\end{Corollary}
The relation below is a direct consequence of definition \eqref{eq:RR11}
and condition \eqref{eq:PQz}:
\[
( {\mathcal R} (z)h) (x) = \theta_{z} (x) \langle h, \varphi_{\bar{z}} \rangle+ o(|\varphi_{z} (x)| +|\theta_{z} (x)| )\qquad {\rm as}\quad x\to\infty.
\]
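This can be checked directly. By definition \eqref{eq:RR11} and the equality $\overline{\varphi_{\bar{z}}}=\varphi_{z}$ (the coefficients of equation \eqref{eq:Jy} are real), we have
\[
( {\mathcal R} (z)h) (x) - \theta_{z} (x) \langle h, \varphi_{\bar{z}} \rangle = -\theta_{z} (x) \int_x^\infty \varphi_{z} (y) h(y) \,{\rm d}y+
\varphi_{z} (x) \int_x^\infty \theta_{z} (y) h(y) \,{\rm d} y ,
\]
and both integrals on the right-hand side tend to zero as $x\to\infty$ by the Cauchy--Schwarz inequality because, under condition \eqref{eq:PQz}, $\varphi_{z}, \theta_{z}\in L^2 ({\mathbb R}_{+})$.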
This asymptotic formula can be supplemented by the following result.
\begin{Proposition}\label{AS}
For all $z\in {\mathbb C}$ and all $h\in L^2 ({\mathbb R}_{+})$, we have
\begin{equation}
u:= {\mathcal R} (z)h - \tilde{\theta}_{z} \langle h, \varphi_{ \bar{z}} \rangle\in {\mathcal D}(A_{\min}).
\label{eq:Ras1}\end{equation}
\end{Proposition}
\begin{proof}
If the support of $h(x)$ is compact in ${\mathbb R}_{+}$, then $ ( {\mathcal R} (z)h) (x) = \varphi_{z}(x) \langle h, \theta_{\bar{z}} \rangle$ for sufficiently small $x$ and $ ( {\mathcal R} (z)h) (x) = \theta_{z}(x) \langle h, \varphi_{\bar{z}} \rangle$ for sufficiently large $x$. Therefore $u(x)$ satisfies boundary condition \eqref{eq:BCz} at $x=0$ and $u(x)=0$ for large $x$ whence $u\in{\mathcal D}(A_{00})$.
Now let $h$ be an arbitrary vector in $L^2 ({\mathbb R}_{+})$. Observe that $ u \in {\mathcal D}(A_{\min})$ if and only if there exists a sequence $u^{(k)}\in {\mathcal D}(A_{00})$ such that
\begin{equation}
u^{(k)}\to u \qquad \mbox{and} \qquad {\mathcal A}u^{(k)}\to {\mathcal A}u
\label{eq:Ras2}\end{equation}
in $L^2 ({\mathbb R}_{+})$ as $ k\to\infty$.
Let us take a sequence of functions $h^{(k)} $ with compact supports in ${\mathbb R}_{+}$
such that $h^{(k)}\to h$ and set
\[
u^{(k)}= {\mathcal R} (z)h^{(k)} - \big \langle h^{(k)}, \varphi_{\bar{z}} \big\rangle \tilde{\theta}_{z} .
\]
Then, as was already shown, $u^{(k)} \in {\mathcal D} (A_{00})$ and $u^{(k)}\to u$ as $k\to\infty$ because the operator $ {\mathcal R} (z)$ is bounded. It follows from formula \eqref{eq:qres1} that
\[
({\mathcal A} -z) u^{(k)}= h^{(k)} - \big\langle h^{(k)}, \varphi_{\bar{z}} \big\rangle ({\mathcal A} -z) \tilde{\theta}_{z} \to h - \langle h, \varphi_{\bar{z}} \rangle ({\mathcal A} -z) \tilde{\theta}_{z}
\]
as $k\to\infty$. The right-hand side equals $({\mathcal A} -z) u$ by formula~\eqref{eq:qres1} and definition~\eqref{eq:Ras1}. This proves relations~\eqref{eq:Ras2} whence $u\in {\mathcal D}(A_{\min})$.
\end{proof}
\subsection{Resolvent representation}\label{section3.2}
First, we find a link between the solutions $\varphi_{z}$, $\theta_{z}$ of equation \eqref{eq:Jy} for an arbitrary $z \in {\mathbb C}$ and for $z=0$.
\begin{Lemma}\label{PQ}
For all $z \in {\mathbb C}$, we have
\begin{equation}
\varphi_{z} - z {\mathcal R}(0) \varphi_{z}=\big( 1-z \langle \varphi_{z}, \theta_{0}\rangle \big) \varphi_{0}
\label{eq:PP1}\end{equation}
and
\begin{equation}
\theta_{z} - z {\mathcal R}(0) \theta_{z}= - z \langle \theta_{z}, \theta_{0} \rangle \varphi_{0} + \theta_{0}.
\label{eq:QQ}\end{equation}
\end{Lemma}
\begin{proof}
To prove \eqref{eq:PP1}, we set
$ u= \varphi_{z} -z {\mathcal R}(0) \varphi_{z} $
and observe that $ {\mathcal A } u= {\mathcal A }\varphi_{z}-z \varphi_{z} =0$ according to equation \eqref{eq:Jy} for $ \varphi_{z} $ and relation \eqref{eq:qres1} (where $z=0$). Since both $ \varphi_{z} (x) $ and $( {\mathcal R}(0) \varphi_{z} ) (x) $ satisfy boundary condition \eqref{eq:BCz}, it follows that $u(x)=c \varphi_{0} (x)$ and hence
\begin{equation}
\varphi_{z}(x) - z ({\mathcal R}(0) \varphi_{z} )(x)= c \varphi_{0} (x)
\label{eq:PP1A}\end{equation}
for some constant $c\in{\mathbb C}$.
It remains to find this constant. If $\alpha\in {\mathbb R}$, we set $x=0$. Then $\varphi_{z} (0)=\varphi_0 (0)=1$ and
$({\mathcal R}(0) \varphi_{z} )(0)$ is given by \eqref{eq:r01} so that \eqref{eq:PP1A} for $x=0$ yields $c= 1-z \langle \varphi_{z}, \theta_{0}\rangle$. This proves \eqref{eq:PP1}. In the case
$\alpha=\infty$, we first differentiate \eqref{eq:PP1A} and then set $x=0$. Since $\varphi_{z}' (0)=\varphi_0' (0)=1$,
using \eqref{eq:r02} we again get equality \eqref{eq:PP1}.
The proof of \eqref{eq:QQ} is quite similar. We now set
\begin{equation}
v=\theta_{z} - \theta_{0} -z {\mathcal R}(0) \theta_{z}
\label{eq:QQ1}\end{equation}
and find that $ {\mathcal A } v= 0$ according to equation \eqref{eq:Jy} for $\theta_{z} $ and relation \eqref{eq:qres1} (where $z=0$). Next, we observe that
$v (0)= -z \varphi_{z}(0)\langle \theta_{z}, \theta_{0}\rangle$ because $\theta_{z} (0)= \theta_{0} (0)$ and $ ({\mathcal R}(0) \theta_{z})(0)$ is given by~\eqref{eq:r01}. Similarly, it follows from equalities $\theta_{z}' (0)= \theta_{0}' (0)$ and \eqref{eq:r02} that $v' (0)= -z \varphi_{z}'(0)\langle \theta_{z}, \theta_{0}\rangle$. Thus, $v(x)$ satisfies equation~\eqref{eq:Jy}
and boundary condition~\eqref{eq:BCz} whence $v(x)=c \varphi_{0} (x)$ for some constant $c\in{\mathbb C}$.
In the case
$\alpha\in {\mathbb R}$ we use this equality for $x=0$ and in the case
$\alpha=\infty$ we use that $v'(x)=c \varphi_{0}' (x)$. In both cases we obtain that $c=- z \langle \theta_{z}, \theta_{0} \rangle$. In view of \eqref{eq:QQ1} this ensures~\eqref{eq:QQ}.
\end{proof}
Putting together Lemma~\ref{PQ} with Proposition~\ref{AS} (for $z=0$), we can also state the following result.
\begin{Lemma}\label{PQ+}
For all $z \in {\mathbb C}$, we have
\[
\varphi_{z} -\big( 1-z \langle \varphi_{z}, \theta_{0} \rangle \big) \varphi_{0} - z \langle \varphi_{z}, \varphi_0\rangle \tilde\theta_{0} \in {\mathcal D} (A_{\min})
\]
and
\[
\tilde\theta_{z} + z \langle \theta_{z}, \theta_0\rangle \varphi_0 - (1+z \langle \theta_{z}, \varphi_{0}\rangle ) \tilde\theta_{0} \in {\mathcal D} (A_{\min}).
\]
\end{Lemma}
Now we are in a position to construct the resolvents of the self-adjoint operators $A_t$.
\begin{Theorem}\label{RES}
Let inclusions \eqref{eq:PQz} hold true.
For all $z\in {\mathbb C}$ with $\operatorname{Im} z\neq 0$ and all $h\in L^2 ({\mathbb R}_{+})$, the resolvent $R_t (z)= (A_t-zI)^{-1}$ of the operator $A_t$ is given by the equality
\begin{equation}
R_{t} (z) h = \gamma_{t} (z) \langle h, \varphi_{\bar{z}}\rangle \varphi_{z}+ {\mathcal R}(z)h,
\label{eq:RES1}\end{equation}
where
\begin{equation}
\gamma_{t}(z)= \frac{z \langle \theta_{z}, \theta_{0}\rangle +\big( 1+ z \langle \theta_{z}, \varphi_{0}\rangle \big)t }
{1- z \langle \varphi_{z}, \theta_{0}\rangle -z \langle \varphi_{z}, \varphi_{0}\rangle t } \qquad \mbox{if} \quad t\in {\mathbb R}
\label{eq:RES2}\end{equation}
and
\begin{equation}
\gamma_{\infty}(z)= - \frac{ 1+ z \langle \theta_{z}, \varphi_{0}\rangle }
{z \langle \varphi_{z}, \varphi_{0}\rangle }.
\label{eq:RES3}\end{equation}
\end{Theorem}
\begin{proof} According to Theorem~\ref{res} and Corollary~\ref{res1} a vector $u=R_{t} (z) h$ is given by equali\-ty~\eqref{eq:qres3}, where
$ \Gamma= \Gamma_{t}(z;h)$ is a bounded linear functional of $h\in L^2 ({\mathbb R}_{+})$ so that
$\Gamma_{t}(z;h)= \big\langle h, f_{z}^{(t)}\big\rangle$
for some vector $ f_{z}^{(t)} \in L^2 ({\mathbb R}_{+})$.
Since $R_{t} (z)^*=R_{t} (\bar{z})$ and ${\mathcal R} (z)^*={\mathcal R} (\bar{z})$, we see that
\[
\big\langle h, f_{\bar{z}}^{(t)} \big\rangle \varphi_{\bar{z}} =\langle h, \varphi_{z}\rangle f_{z}^{(t)}
\]
for all $h\in L^2 ({\mathbb R}_{+})$. It follows that $f_{z}^{(t)} = \overline{\gamma_{t} (z)} \varphi_{\bar{z}}$ for some $\gamma_{t} (z) \in{\mathbb C}$. This yields representation~\eqref{eq:RES1}, where the constant $\gamma_{t} (z) $ is determined by the condition
\begin{equation}
R_{t} (z) h\in {\mathcal D} (A_{t}).
\label{eq:RER}\end{equation}
Let us show that this inclusion leads to expressions~\eqref{eq:RES2} or~\eqref{eq:RES3} for $\gamma_{t}(z)$.
By definitions~\eqref{eq:Neum1} or~\eqref{eq:Neum2} of the set ${\mathcal D} (A_{t})$, inclusion \eqref{eq:RER}
means that
\begin{gather}
R_{t} (z) h - X \big( t \varphi_{0} + \tilde{\theta}_{0}\big)\in {\mathcal D} (A_{\min}) \quad \mbox{if} \ t\in {\mathbb R}
\qquad \mbox{and} \qquad R_\infty (z) h - X \varphi_{0} \in {\mathcal D} (A_{\min})
\label{eq:REA}\end{gather}
for some number $X=X_{t}(z)\in {\mathbb C}$. On the other hand, it follows from
relations \eqref{eq:Ras1} and \eqref{eq:RES1} that
\begin{equation}
R_{t} (z) h -\langle h, \varphi_{\bar{z}}\rangle \big( \gamma_{t} (z) \varphi_{z} + \tilde{\theta}_{z}\big)\in{\mathcal D} (A_{\min}).
\label{eq:RES4}\end{equation}
Comparing \eqref{eq:REA} and \eqref{eq:RES4}, we see that \eqref{eq:RER} is equivalent to inclusions
\begin{equation}
\langle h, \varphi_{\bar{z}}\rangle \big(\gamma_{t} (z) \varphi_{z} + \tilde{\theta}_{z} \big) - X \big( t \varphi_{0} + \tilde{\theta}_{0}\big)\in {\mathcal D} (A_{\min})\qquad \mbox{if} \quad t\in {\mathbb R}
\label{eq:REA1}\end{equation}
and
\begin{equation}
\langle h, \varphi_{\bar{z}}\rangle \big(\gamma_\infty (z) \varphi_{z}+ \tilde{\theta}_{z} \big) - X \varphi_{0} \in {\mathcal D} (A_{\min}).
\label{eq:REA2}\end{equation}
Note that $\langle h, \varphi_{\bar{z}}\rangle\neq 0$ because the sum in \eqref{eq:Neum} is direct, and set $Y= \langle h, \varphi_{\bar{z}}\rangle^{-1}X$.
It follows from Lemma~\ref{PQ+} that inclusion \eqref{eq:REA1} is equivalent to the equality
\begin{gather*}
\!\gamma_{t} (z) \big( ( 1-z \langle \varphi_{z}, \theta_{0}\rangle ) \varphi_{0} +z \langle \varphi_{z}, \varphi_{0}\rangle \tilde{\theta}_{0}\big)
+ \big({-} z \langle \theta_{z}, \theta_{0} \rangle \varphi_{0} + ( 1+ z \langle \theta_{z}, \varphi_{0}\rangle ) \tilde{\theta}_{0} \big) = Y \big(t \varphi_{0} + \tilde{\theta}_{0}\big) .
\end{gather*}
Comparing the coefficients of $\varphi_{0}$ and $ \tilde{\theta}_{0}$ here, we obtain the equations
\begin{gather*}
\gamma_{t} (z) ( 1-z \langle \varphi_{z}, \theta_{0}\rangle )- z \langle \theta_{z}, \theta_{0}\rangle =t Y,
\\
\gamma_{t} (z) z \langle \varphi_{z}, \varphi_{0}\rangle + 1+ z \langle \theta_{z}, \varphi_{0}\rangle=Y,
\end{gather*}
which yield
\[
\frac{ \gamma_{t} (z) ( 1-z \langle \varphi_{z}, \theta_{0}\rangle )- z \langle \theta_{z}, \theta_{0}\rangle }
{\gamma_{t} (z) z \langle \varphi_{z}, \varphi_{0}\rangle + 1+ z \langle \theta_{z}, \varphi_{0}\rangle } =t.
\]
Solving this equation with respect to $\gamma_{t} (z) $, we arrive at formula \eqref{eq:RES2}.
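Explicitly, clearing the denominator and collecting the terms containing $\gamma_{t} (z)$ gives
\[
\gamma_{t} (z) \big( 1-z \langle \varphi_{z}, \theta_{0}\rangle - z \langle \varphi_{z}, \varphi_{0}\rangle t \big) = z \langle \theta_{z}, \theta_{0}\rangle + \big( 1+ z \langle \theta_{z}, \varphi_{0}\rangle \big) t ,
\]
which is exactly formula \eqref{eq:RES2}.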
Similarly, using Lemma~\ref{PQ+} again, we see that inclusion \eqref{eq:REA2} is equivalent to the equality
\[
\gamma_\infty (z) \big(( 1-z \langle \varphi_{z}, \theta_{0}\rangle ) \varphi_{0} + z \langle \varphi_{z}, \varphi_{0}\rangle \tilde{\theta}_{0} \big)
+ \big({-} z \langle \theta_{z}, \theta_{0} \rangle \varphi_{0} + ( 1+ z \langle \theta_{z}, \varphi_{0}\rangle ) \tilde{\theta}_{0} \big) = Y \varphi_{0} .
\]
Inclusion \eqref{eq:REA2} holds true if and only if the coefficient of $\tilde{\theta}_{0} $ equals zero, that is, $\gamma_\infty (z)\, z \langle \varphi_{z}, \varphi_{0}\rangle + 1+ z \langle \theta_{z}, \varphi_{0}\rangle =0$. This yields formula~\eqref{eq:RES3}.
\end{proof}
\begin{Corollary}\label{RESb}
If $z\in {\mathbb C}$ is a regular point of the operator $A_{t}$, then its resolvent~$R_{t} (z)$ is in the Hilbert--Schmidt class.
In particular, the spectra of all operators~$A_{t}$ are discrete.
\end{Corollary}
The result of this corollary is well known. It follows, for example, from Theorem~1 in Section~19.1 of the book \cite{Nai}.
We emphasize that, for different~$t$, the resolvents $R_{t}(z)$ of the operators $A_{t}$ differ from each other only by the coefficient $\gamma_{t}(z)$ of the rank one operator $\langle\cdot, \varphi_{\bar{z}}\rangle \varphi_{z}$.
This is consistent with the fact that the operator $A_{\min}$ has deficiency indices $(1,1)$.
Observe also that
$ \overline{\gamma_{t}(z)}= \gamma_t (\bar{z})$.
The functions $z \langle \theta_{z}, \theta_{0}\rangle$, $-1 +z \langle \varphi_{z}, \theta_{0}\rangle$, $ 1+ z \langle \theta_{z}, \varphi_{0}\rangle$ and $z \langle \varphi_{z}, \varphi_{0}\rangle$ in formulas~\eqref{eq:RES2} and~\eqref{eq:RES3} play the role of the Nevanlinna functions (usually denoted $A$, $B$, $C$ and~$D$) in the theory of Jacobi operators.
\subsection{Spectral measure}\label{section3.3}
In view of the spectral theorem, Theorem~\ref{RES} yields a representation for the Cauchy--Stieltjes transform of the spectral measure ${\rm d}E_{t} (\lambda)$ of the operator $A_{t}$.
\begin{Theorem}\label{RESc}
Let inclusions \eqref{eq:PQz} hold true.
Then for all $z\in {\mathbb C}$ with $\operatorname{Im} z\neq 0$ and all $h\in L^2 ({\mathbb R}_{+})$, we have an equality
\begin{equation}
\int_{-\infty}^\infty
(\lambda-z)^{-1} \,{\rm d} (E_{t}(\lambda)h,h)= \gamma_{t} (z)|\langle \varphi_{z},h\rangle |^2 + ({\mathcal R}(z) h,h).
\label{eq:E1}\end{equation}
\end{Theorem}
Recall that the operators ${\mathcal R}(z)$ are defined by formula \eqref{eq:RR11}. Therefore $({\mathcal R}(z) h,h)$ is an entire function of $z\in{\mathbb C}$, and the singularities of the integral in \eqref{eq:E1} are determined by the function~$\gamma_{t} (z)$. Thus, \eqref{eq:E1} can be considered as a modification of the classical Nevanlinna formula (see his original paper \cite{Nevan} or, for example, formula (7.6) in the book \cite{Schm}) for the Cauchy--Stieltjes transform of the spectral measure in the theory of Jacobi operators. We mention, however, that for Jacobi operators acting in the space $\ell^2 ({\mathbb Z}_{+})$, there is a canonical choice of a generating vector and of a spectral measure. This is not the case for differential operators in $L^2 ({\mathbb R}_{+})$.
Let us discuss spectral consequences of Theorem~\ref{RES}. Since the functions $ \langle \varphi_{z}, \varphi_{0}\rangle $ and $ \langle \varphi_{z}, \theta_{0}\rangle $ are entire, it again follows from \eqref{eq:RES2} and \eqref{eq:RES3} that the spectra of the operators $A_{t}$ are discrete. Theorem~\ref{RES} yields also an equation for their eigenvalues.
\begin{Theorem}\label{RESp}
Let inclusions \eqref{eq:PQz} hold true.
Then eigenvalues $\lambda $ of the operators $A_{t}$ are given by the equations
\begin{equation}
1 - \lambda \langle \varphi_{\lambda}, \varphi_{0}\rangle t - \lambda \langle \varphi_{\lambda}, \theta_{0}\rangle =0 \qquad \mbox{if}\quad t\in {\mathbb R}
\label{eq:RESp2}\end{equation}
and
\begin{equation}
\lambda \langle \varphi_{\lambda}, \varphi_{0}\rangle =0 \qquad \mbox{if} \quad t=\infty.
\label{eq:RES3p}\end{equation}
\end{Theorem}
This assertion is a modification of a result of R.~Nevanlinna obtained for Jacobi operators.
Finally, we note an obvious fact: if $\lambda $ is an eigenvalue of an operator $A_{t}$, then the corresponding eigenfunction equals $c\varphi_{\lambda} (x)$, where $c\in{\mathbb C}$. In particular, this implies that all eigenvalues of the operators~$A_{t}$ are simple.
\subsection{Concluding remarks}\label{section3.4}
Here are some final observations.
A. Equations \eqref{eq:RESp2} and \eqref{eq:RES3p} can be rewritten in terms of the asymptotics as $x\to\infty$ of solutions to equation \eqref{eq:Jy}.
Multiplying differential equation \eqref{eq:Jy} for $\varphi_{\lambda}
$ by $\varphi_{0}$, integrating over a bounded interval $(0,x)$ and then integrating by parts, we find that
\begin{gather*}
\lim_{x\to\infty} \bigg( {-}p(x) \varphi_{\lambda}' (x)\varphi_{0}(x)+\int_{0}^x p(y) \varphi_{\lambda}' (y)\varphi_{0}' (y)\,{\rm d}y
+\int_{0}^x q(y) \varphi_{\lambda} (y) \varphi_{0} (y)\,{\rm d}y\bigg)
\\
\qquad{} + p(0) \varphi_{\lambda}' (0)\varphi_{0}(0) =\lambda\langle \varphi_{\lambda},\varphi_{0}\rangle.
\end{gather*}
Similarly, multiplying equation~\eqref{eq:Jy} for $\varphi_{0}$ by $\varphi_{\lambda}$, integrating over a bounded interval $(0,x)$ and then integrating by parts, we see that
\begin{gather*}
\lim_{x\to\infty} \bigg( {-}p(x) \varphi_{0}' (x)\varphi_{\lambda}(x)+\int_{0}^x p(y) \varphi_{0}' (y) \varphi_{\lambda}' (y)\,{\rm d}y
+\int_{0}^x q(y) \varphi_{0} (y)\varphi_{\lambda} (y)\,{\rm d}y\bigg)
\\
\qquad{} + p(0) \varphi_{0}' (0)\varphi_{\lambda}(0)=0.
\end{gather*}
Comparing these two formulas and taking into account boundary condition \eqref{eq:BCz}, we find that
\begin{equation}
\lambda\langle \varphi_{\lambda},\varphi_{0}\rangle = \lim_{x\to\infty} p(x) \big( \varphi_{\lambda} (x)\varphi_{0}'(x) - \varphi_{\lambda}' (x)\varphi_{0}(x)\big).
\label{eq:Na1}\end{equation}
Thus, equation \eqref{eq:RES3p} is satisfied if and only if the right-hand side of \eqref{eq:Na1} is zero.
The scalar product $\langle \varphi_{\lambda}, \theta_{0}\rangle$ can be calculated in an analogous way. We only have to observe that according to \eqref{eq:Pz} or \eqref{eq:Pzi}
\[
p(0) \varphi_{\lambda}'(0) \theta_{0} (0) - p(0) \theta_{0}' (0)\varphi_{\lambda}(0) =1,
\]
so that instead of \eqref{eq:Na1} we now have
\begin{equation}
\lambda\langle \varphi_{\lambda},\theta_{0}\rangle = 1+ \lim_{x\to\infty} p(x) \big( \varphi_{\lambda} (x)\theta_{0}'(x) - \varphi_{\lambda}' (x)\theta_{0}(x)\big).
\label{eq:Na2}\end{equation}
Putting together \eqref{eq:Na1} and \eqref{eq:Na2}, we see that equation \eqref{eq:RESp2} for $\lambda$ can be written as
\[
\lim_{x\to\infty} p(x) \big( \varphi_{\lambda} (x)\big(\theta_{0}'(x) + t\varphi_{0}'(x)\big)- \varphi_{\lambda}' (x) \big(\theta_{0}(x) + t\varphi_{0}(x)\big) \big)=0.
\]
Of course, the scalar products $\langle \theta_z, \varphi_{0}\rangle$ and $\langle \theta_z, \theta_{0}\rangle$ in the numerators of \eqref{eq:RES2} and \eqref{eq:RES3} can be written in the same way as~\eqref{eq:Na1} and~\eqref{eq:Na2}.
B. Starting from Theorem~\ref{Neum}, we can everywhere replace the functions $\varphi_{0}$ and $\theta_{0}$ by $\varphi_{\zeta}$ and $\theta_{\zeta}$, where $\zeta$ is an arbitrary fixed real number. Then the construction of the paper remains unchanged if the factor $z$ in formulas
\eqref{eq:RES2} and \eqref{eq:RES3} for $\gamma_{t}(z)$ is replaced by $z-\zeta$. The simplest way to see this is to apply the results obtained above to the operator ${\mathcal A}-\zeta I$ instead of~${\mathcal A}$.
C.~Finally, we compare resolvent formulas in the LP and LC cases. In the LP case the resolvent $R(z)$ of a self-adjoint operator $A= A_{\min}$ is given by the relation
\begin{equation}
( R (z)h) (x) = \frac{1}{\{\varphi_{z}, f_{z}\}} \bigg(f_{z}(x) \int_{0}^x \varphi_{z}(y) h (y) \,{\rm d}y+
\varphi_{z}(x) \int_x^\infty f_{z}(y) h (y)\,{\rm d}y\bigg),
\label{eq:R-LP}\end{equation}
where $f_{z}(x)$ is the unique (up to a constant factor) solution of equation \eqref{eq:Jy} belonging to $L^2 ({\mathbb R}_{+})$. It can be chosen in the form $f_{z}= \theta_{z}+ w(z) \varphi_{z}$, where $w(z)$ is known as the Weyl function. Substituting this expression into~\eqref{eq:R-LP}, we see that {\it formally}
\begin{equation}
R (z) = w(z)\langle\cdot, \varphi_{\bar{z}}\rangle \varphi_{z} + {\mathcal R} (z),
\label{eq:R-LP1}\end{equation}
where $ {\mathcal R} (z)$ is given by equality \eqref{eq:RR11}. This relation looks algebraically similar to~\eqref{eq:RES1}, where~$\gamma_{t}(z)$ plays the role of the Weyl function $w(z)$. Note, however, that in the LP case~$w(z)$ is determined uniquely by the condition $ \theta_{z}+ w(z) \varphi_{z} \in L^2 ({\mathbb R}_{+})$, while in the LC case
$\gamma_{t}(z)$ depends on the choice of a self-adjoint extension of the operator $ A_{\min}$. We also emphasize that relation~\eqref{eq:R-LP1} is only formal because $\varphi_{z}$ and $\theta_{z}$ are not in $L^2 ({\mathbb R}_{+})$
in the LP case.
\subsection*{Acknowledgements}
This work was supported by the Russian Science Foundation, project no.~17-11-01126.
\pdfbookmark[1]{References}{ref}
\section{Introduction}
The increasingly significant evidence for the dark universe has established a strong paradigm in cosmology, in which the dynamics of the universe at the largest scales are governed by two components of energy which, up to this point, have only been observed through their gravitational consequences \cite{Colless:2001gk,Tegmark:2003uf,Riess:1998cb,Perlmutter:1998np,Spergel:2006hy}. These two, dark matter and dark energy, appear to behave in fundamentally different ways: dark matter clusters into galaxies and dilutes as the universe expands, while dark energy appears to remain smooth and dilutes either slowly or not at all, with an equation of state near $w=-1$.
In spite of this, there is great effort to explore whether or not these substances might somehow be related. The strongest motivation for this is the similarity of the energy densities $\rho_{DM}$ and $\rho_{DE}$ at the present epoch. Such attempts to connect these substances inevitably must confront the above differences, and attempts to unify them into one fluid often lead to dramatic observational consequences (see, for example, \cite{Sandvik:2002jz}).
There is a slightly more restrained approach, however. Rather than viewing these substances as necessarily the same fluid, we might instead view them as being of a similar type. That is, dark matter may be a substance which, at some time in the past, behaved as dark energy, and dark energy may, in the future, behave as dark matter. The fact that physics in the standard model has a generational structure, with repeated fields at different mass scales, especially motivates such duplication. In particular, in theories where the dark energy is connected to a new neutrino force, as recently explored in \cite{Fardon:2003eh,Peccei:2004sz,Fardon:2005wc}, such generational structure is expected in the dark energy sector.
Such a consideration immediately raises the question: for how long must dark matter have behaved as dark matter? Certainly, at least since matter radiation equality dark matter has been clustering and diluting more or less as $a^{-3}$. However, even at eras earlier than this, the clustering behavior of the dark matter can be observed in the power spectrum, at least to scales of $0.1h^{-1}\ Mpc$, where the matter power spectrum becomes non-linear.
It is quite natural to consider a scalar field which at some point in the history of the universe transitions to a dark matter state. Chaotic inflation \cite{Linde:1981mu,Albrecht:1982wi}, for instance, ends when the slow-roll condition fails, and, for a suitable potential, the field then begins to evolve as dark matter. A very familiar example of such dark matter is the axion \cite{Abbott:1982af,Dine:1982ah,Preskill:1982cy}, which acquires a (relatively) large mass after the QCD phase transition, at which point it begins to behave as dark matter. A conversion to dark matter is the natural final state of numerous quintessence theories \cite{Peebles:1987ek,Frieman:1995pm,Chacko:2004ky}.
Our focus here will be on a transition that occurs much later in the universe, in order to make connections to theories of dark energy. In fact, we shall see that this transition naturally occurs near the era of matter-radiation equality. With such a late-time transition, effects on the CDM power spectrum are possible.
This ``Late Forming Dark Matter'' (LFDM), arises simply from a scalar field coupled to a thermal bath, initially sitting in a metastable state, behaving like a cosmological constant. When the thermal bath dilutes, the scalar transitions near matter-radiation equality (MRE) to dark matter, yielding interesting observable consequences.
The layout of this paper is as follows: in section \ref{sec:lfdm}, we lay out the basic structure of a general theory. In section \ref{sec:pheno} we will explore the effects of such a scenario on the power spectrum of dark matter. In section \ref{sec:neutrinos} we will see how this sort of dark matter naturally arises in theories of neutrino dark energy. In section \ref{sec:experiments} we will review what experimental studies constrain this scenario, and may test it in the future. Finally, in section \ref{sec:conclusions} we conclude.
\section{Late Forming Dark Matter}
\label{sec:lfdm}
Let us consider a single scalar field $\phi$ coupled to some other relativistic particle $\psi$ which is in thermal equilibrium. For simplicity, we will assume that $\phi$ is at zero temperature (i.e., its couplings to $\psi$ are sufficiently small that it is not thermalized). At zero temperature for $\psi$, we assume a potential of the form
\begin{equation}
V(\phi) = V_0-\frac{m^2}{2} \phi^2 - \epsilon \phi^3 + \frac{\lambda}{4} \phi^4,
\end{equation}
where $V_0$ is a constant which sets the true cosmological constant to zero. We assume the presence of the thermal $\psi$ contributes a term to the potential
\begin{equation}
\delta V = D T^2 \phi^2
\end{equation}
where $D$ is a coefficient determined by the spin, coupling, and number of degrees of freedom in $\psi$.
The behavior of this system is simple to understand. At high temperature, there is a thermal mass for $\phi$ which will confine it to the origin. At
\begin{equation}
T = \sqrt{\frac{\lambda m^2 + 2 \epsilon^2}{2 D \lambda}}
\end{equation}
a new minimum appears at energy lower than at $\phi=0$. However, because of the thermal mass, $\phi$ remains trapped at the origin.
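The temperature just quoted, at which a minimum degenerate with the origin first appears, can be cross-checked symbolically. The sketch below is our own illustration (the paper does not use sympy): it factors $\phi^2$ out of the full potential and demands that the remaining quadratic acquire a double root.

```python
import sympy as sp

phi, m, eps, lam, D, T = sp.symbols('phi m epsilon lambda D T', positive=True)
# zero-temperature potential (dropping the constant V_0) plus the thermal mass
V = -m**2/2 * phi**2 - eps * phi**3 + lam/4 * phi**4 + D * T**2 * phi**2

# V = phi^2 * q(phi); the nonzero extremum is degenerate with phi = 0
# exactly when the quadratic q develops a double root
q = sp.expand(V / phi**2)
T_deg = sp.solve(sp.Eq(sp.discriminant(q, phi), 0), T)[0]
# T_deg**2 simplifies to (lam*m**2 + 2*eps**2)/(2*D*lam), as in the text
```

This reproduces the expression for $T$ displayed above.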
At a temperature
\begin{equation}
T_{tach}=\frac{m}{\sqrt{2 D}}
\end{equation}
$\phi$ becomes tachyonic about the origin, and will begin to oscillate about what then evolves into its true minimum. These oscillations then behave as dark matter. Note that the energy in the dark matter is set by the depth of the global minimum relative to $\phi=0$ at $T_{tach}$, in this case $O(\epsilon^4/\lambda^3)$. If all the dimensionful parameters are of the same order (i.e., $\epsilon \sim m$), then the temperature at which dark matter is formed is soon followed by matter-radiation equality. Such a correlation leaves a strong imprint on the power spectrum, which we will discuss in section \ref{sec:pheno}.
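Both $T_{tach}$ and the quoted $O(\epsilon^4/\lambda^3)$ depth of the minimum can be verified directly; the following is a minimal symbolic sketch of ours, not part of the original analysis.

```python
import sympy as sp

phi, m, eps, lam, D, T = sp.symbols('phi m epsilon lambda D T', positive=True)
V = -m**2/2 * phi**2 - eps * phi**3 + lam/4 * phi**4 + D * T**2 * phi**2

# the origin turns tachyonic when the curvature there changes sign
curv = sp.diff(V, phi, 2).subs(phi, 0)       # 2*D*T**2 - m**2
T_tach = sp.solve(sp.Eq(curv, 0), T)[0]      # m/sqrt(2*D)

# at T_tach the quadratic terms cancel, leaving -eps*phi^3 + lam/4*phi^4
V_t = sp.simplify(V.subs(T, T_tach))
phi_min = [s for s in sp.solve(sp.diff(V_t, phi), phi) if s != 0][0]
depth = sp.simplify(V_t.subs(phi, phi_min))  # -27*eps**4/(4*lam**3)
```

The depth $-27\epsilon^4/4\lambda^3$ confirms the $O(\epsilon^4/\lambda^3)$ scaling quoted in the text.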
The above gives an extremely simple example of a model in which for most of the history of the universe, $\phi$ acted as a cosmological constant and only at very late times does $\phi$ begin to behave as conventional dark matter.
Such a form of dark matter is very natural when similar structures explain the existence of dark energy, for instance in neutrino theories of dark energy.
\section{Cosmological Consequences}
\label{sec:pheno}
Unlike weak-scale dark matter, which necessitates some interactions with ordinary matter that may be tested at underground experiments, and unlike axions, which require a coupling to photons, again giving an experimental test, LFDM theories need not have strong couplings to standard model fields. Even within theories of neutrino dark energy, where LFDM is motivated, direct tests are difficult, if not impossible.
The best hope of detection for such a scenario is cosmological. Because we expect $z_{tach}$ naturally to lie near $z_{MRE}$, we expect deviations in the power spectrum of DM at small $(k \mathop{}_{\textstyle \sim}^{\textstyle >} h Mpc^{-1})$ scales.
In this section we will discuss the signatures of LFDM and the predictions it makes for cosmological experiments.
In general, for our scenario, effects on the CMB are negligible. We will return to this issue later. As LFDM behaves as ordinary CDM after $z_{tach}$, we should not expect visible consequences on scales $k<k_{tach}$, where $k_{tach}$ is the scale of the horizon at $z_{tach}$.
\subsection{Power Spectra}
Let us consider the power spectrum for dark matter near \ensuremath{z_{tach}}. Since this is when CDM is formed, after this point we can evolve it quite simply. The relevant quantity for the local density of dark matter is the redshift when it formed. Since all dark matter forms with the same initial energy density, regions where it forms earlier will have diluted more at later times, and regions where it forms later will have diluted less.
Dark matter forms at $\ensuremath{z_{tach}} = \ensuremath{{\bar z}_{tach}} + \delta \ensuremath{z_{tach}} (x)$. By definition $\ensuremath{z_{tach}}$ is the redshift when $T(\ensuremath{z_{tach}},x) = T_{tach}$. We can re-express the local temperature as
\begin{eqnarray}
T(\ensuremath{{\bar z}_{tach}} + \delta z(x)) &=& \bar T(\ensuremath{{\bar z}_{tach}}+\delta z)+\delta T(\ensuremath{{\bar z}_{tach}} + \delta z,x)\\&=&(\bar T(\ensuremath{{\bar z}_{tach}})+\delta T(\ensuremath{{\bar z}_{tach}} ,x))\frac{(1+\ensuremath{{\bar z}_{tach}})}{(1+\ensuremath{{\bar z}_{tach}} + \delta z)}
.
\end{eqnarray}
The last equality is clearly true only for regions over which sound waves cannot propagate between $\ensuremath{{\bar z}_{tach}}$ and $\ensuremath{{\bar z}_{tach}} +\delta \ensuremath{z_{tach}}$. Since such propagation occurs at scales of order $10^5$ smaller than the horizon size, we can neglect it for our purposes. By definition, $T_{tach}=T(\ensuremath{{\bar z}_{tach}}+\delta \ensuremath{z_{tach}},x)=
{\bar T} (\ensuremath{{\bar z}_{tach}})$, and thus we can easily find that
\begin{equation}
\delta T(\ensuremath{{\bar z}_{tach}},x)/\bar T(\ensuremath{{\bar z}_{tach}}) = \delta \ensuremath{z_{tach}}/(1+\ensuremath{{\bar z}_{tach}})
\end{equation}
Similarly, $\rho(z,x)/\bar \rho(z) =(1+\ensuremath{{\bar z}_{tach}})^3/(1+\ensuremath{{\bar z}_{tach}} + \delta z(x))^3$, from which we can find
\begin{equation}
\delta \rho(\ensuremath{{\bar z}_{tach}},x)/\bar \rho (\ensuremath{{\bar z}_{tach}}) = 3 \delta \ensuremath{z_{tach}}(x)/(1+ \ensuremath{{\bar z}_{tach}}) = 3 \delta T(\ensuremath{{\bar z}_{tach}}, x)/T(\ensuremath{{\bar z}_{tach}}).
\end{equation} Thus, at $z=\ensuremath{z_{tach}}$ the CDM power spectrum is proportional to the $\psi$ temperature power spectrum at \ensuremath{z_{tach}}. From this point, the density perturbations will grow as CDM, so determining the power spectrum of CDM today is tantamount to determining the $\psi$-temperature power spectrum at \ensuremath{z_{tach}}.
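The factor of 3 in the last relation comes from linearizing the cubic dilution factor; a quick symbolic check of ours (the overall sign depends on the convention chosen for $\delta z$):

```python
import sympy as sp

zbar, dz = sp.symbols('zbar deltaz', positive=True)
# dark matter everywhere forms at the same density, so the density contrast
# is set purely by the extra dilution accumulated since formation
ratio = (1 + zbar)**3 / (1 + zbar + dz)**3
delta_rho = sp.series(ratio - 1, dz, 0, 2).removeO()  # -3*deltaz/(1 + zbar)
```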
We will ultimately want to identify $\psi$ with a more conventional particle-physics candidate, and, in particular, the neutrino.
In general, the neutrino is highly relativistic at the time of its decoupling,
after which it free-streams until it becomes non-relativistic, yielding a suppression of its power at scales
$k> k_{fs}=0.018\, \Omega^{1/2} \left(\frac{m_{\nu}}{{\rm eV}}\right)^{1/2} {\rm Mpc}^{-1}$. However, in models of neutrino dark energy, there are additional neutrino interactions, and these may serve to keep the neutrino tightly coupled until \ensuremath{z_{tach}}. If this is the case, this should be imprinted on the CDM power spectrum. Similar studies have been performed for scenarios where the neutrino was significantly heavier, and such strong interactions were proposed in order to retain neutrinos as dark matter
\cite{Raffelt:1987ah}.
More recently, the implications of such neutrino interactions for cosmology have been studied \cite{Beacom:2004yd,Bell:2005dr,Cirelli:2006kt,Sawyer:2006ju}.
\subsection{Calculation of power spectra for LFDM}
We will consider LFDM with both an interacting and a non-interacting coupled bath. As described above, we will compute the power spectrum of the relativistic fluid, and match that to the initial power spectrum of the CDM at $z=\ensuremath{z_{tach}}$. The non-interacting case is straightforward. The interacting case can be obtained by considering earlier studies of the evolution of density perturbations for interacting neutrinos \cite{Atrio-Barandela:1996ur,Hannestad:2004qu}, where the interaction makes different components behave as a single tightly coupled fluid. Under this assumption, the shear or anisotropic stress in the perturbation is negligible. The evolution is characterized by density and velocity perturbations only, and we can truncate all the higher order moments. The evolution of density and velocity perturbations is given by \cite{Ma:1995ey}
\begin{equation}
\dot{\delta}=-(1+w) (\theta +\dot{h}/2)-3\frac{\dot{a}}{a} (c_{s}^{2}-w)\delta
\end{equation}
\begin{equation}
\dot{\theta}=-\frac{\dot{a}}{a} (1-3w) \theta -\frac{\dot{w}}{1+w} \theta + \frac{c_{s}^{2}}{1+w}k^{2} \delta
\end{equation}
We are interested in the case where the thermal bath is made of essentially massless particles, so the equation of state and sound speed are given by $w=1/3=c_{s}^{2}$. To get the amplitude of the perturbation at any redshift, the above two equations need to be solved together with the background equations of motion for the metric perturbations. We have modified the publicly available CAMB and CMBFAST codes to solve these equations and obtain the power spectra at $z_{tach}$.
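For $w=c_{s}^{2}=1/3$ the Hubble-drag terms in both equations vanish identically, so the only simplification in the toy limit below is dropping the metric source $\dot h$. Under that assumption (our own illustration, not the modified CAMB/CMBFAST computation used for the figures), the system reduces to acoustic oscillations at the sound speed $c_s=1/\sqrt{3}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# tightly coupled radiation fluid, w = c_s^2 = 1/3, in conformal time;
# the metric source term hdot is set to zero for illustration only
k = 0.1  # toy wavenumber, arbitrary units
w = 1/3

def rhs(tau, y):
    delta, theta = y
    ddelta = -(1 + w) * theta             # -(1+w)*(theta + hdot/2), hdot -> 0
    dtheta = w / (1 + w) * k**2 * delta   # c_s^2/(1+w) * k^2 * delta
    return [ddelta, dtheta]

# delta'' = -(k^2/3) delta: oscillation at frequency c_s*k = k/sqrt(3)
period = 2 * np.pi / (k / np.sqrt(3))
sol = solve_ivp(rhs, (0.0, period), [1.0, 0.0], rtol=1e-10, atol=1e-12)
delta_final = sol.y[0, -1]  # returns to its initial value after one period
```

These are the oscillations that, in the full calculation, can leave acoustic features in the LFDM power spectrum.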
After \ensuremath{z_{tach}}, LFDM follows the same evolution equation as CDM, and it is straightforward to grow the perturbations to today.
We are principally interested in situations where LFDM makes up all or nearly all of the dark matter, but we can also consider situations where it is only some fraction. As we see in figure \ref{fig:LFDMpower}, there is a suppression of power at small scales, and the possibility of acoustic oscillations imprinted on the power spectrum.
For comparison, we also include the power spectrum for $\Lambda$CDM with a $0.75$ eV massive neutrino, near the experimental limit
\cite{Seljak:2004xh,MacTavish:2005yk,Hannestad:2004bu}. Though both LFDM and a massive neutrino give suppression in power, there is a distinct difference in power spectra between the two. The suppression of power for a massive neutrino turns on much more gradually than the abrupt suppression for LFDM.
As we make \ensuremath{z_{tach}}\ smaller (larger), we move the break to larger (smaller) scales. At scales much smaller than \ensuremath{k_{tach}}\ we would expect the acoustic oscillations to be damped out (which is not captured by our tightly coupled approximation). If LFDM is merely a fraction of the dark matter, the observability of such oscillations would depend on how much LFDM existed. If LFDM is all or nearly all of the dark matter, the oscillations are already severely constrained, and must lie in the non-linear regime \footnote{Though in different context, \cite{Mangano:2006mp} has found similar oscillations for power spectra of a neutrino interacting with dark matter.}.
It is important to point out that, although we find a large suppression beyond $k\approx 0.01 h Mpc^{-1}$, we cannot compare it directly to the linear power spectra of standard $\Lambda$CDM cosmology in this regime, as the non-linear effects in structure formation \cite{nl1,seljak} become very important for $k \mathop{}_{\textstyle \sim}^{\textstyle >} 0.15 h Mpc^{-1}$. We return to this issue in section \ref{sec:experiments}. Only if LFDM forms later in time ($z_{tach} \ll 15000$) does the power get suppressed in the linear regime. In this case a rigorous statistical analysis would be needed to place legitimate constraints on this scenario, which is beyond the scope of this paper.
\begin{figure*}[t]
a)
\includegraphics[width=120mm]{srig.eps}
\vskip 0.01in
b) \includegraphics[width=120mm]{srig1.eps}
\caption{Power spectra (a), and power compared to CDM (b) for CDM, LFDM (with different fractions, interacting and free-streaming) and a 0.75 eV free-streaming neutrino, for $z_{tach}=50,000$.
}
\label{fig:LFDMpower}
\end{figure*}
\section{Models of LFDM in theories of Neutrino Dark Energy}
\label{sec:neutrinos}
The idea of LFDM is appealing, largely because it offers to make a connection to theories of dark energy. If the dark energy is associated with a scalar field trapped at a false minimum in its potential due to thermal effects, then, quite likely, ``copies'' of such physics may have existed earlier. If so, the energy stored there would now behave as dark matter.
Remarkably, there is already a class of models that fit these criteria, specifically the recently discussed ``hybrid'' models of neutrino dark energy \cite{Fardon:2005wc}.
There has long been motivation to make a connection between neutrino masses and dark energy \cite{Hill:1988vm,Hung:2000yg,Gu:2003er,Fardon:2003eh,Peccei:2004sz}. In these most recent models, the generational structure of the neutrinos is copied in the dark energy sector. The finite density of relic neutrinos modifies the potential and stabilizes a scalar field at a false minimum. These hybrid models arise naturally when MaVaN models are promoted to a supersymmetric theory (see \cite{Takahashi:2005kw,Takahashi:2006ha,Takahashi:2006be} for other supersymmetric extensions).
The natural extension to LFDM comes in these supersymmetric theories. We refer readers to \cite{Fardon:2005wc} for details, and only briefly summarize here. Because there are three neutrinos, these theories contain three singlet neutrinos $N_i$. Each one of these is associated by supersymmetry with a scalar field. Arguments related to naturalness suggest that the lightest of the three neutrinos is associated with the dark energy today. Energy previously trapped in the other scalar neutrinos would appear as dark matter today, and it is this that we consider.
We shall now present a simple model of LFDM within the context of neutrino dark energy theories. It is not intended to be representative of all such models, but merely a simple example of one with the relevant phenomenology.
Consider the fermion fields $\psi_{2,3}$, and scalars $n_{2,3}$, with a Lagrangian
\begin{equation}
{\cal L}=\lambda n_2 \psi_3^2+ 2 \lambda \psi_2 \psi_3+m_3 \psi_3 \nu_3+m_2 \psi_2 \nu_2+V_{susy}+V_{soft}+V_{\epsilon}
\end{equation}
where
\begin{equation}
V_{susy} = 4 \lambda^2 |n_2|^2 |n_3|^2 + \lambda^2 |n_3|^4,
\end{equation}
\begin{equation}
V_{soft} = \tilde m_2^2 |n_2|^2-\tilde m_3^2 |n_3|^2+ (\tilde a_3 n_3^3+h.c.)
\end{equation}
and
\begin{equation}
V_{\epsilon} = 4 \lambda \epsilon (n_3^* n_2^3 + n_3^3 n_2^*+h.c.) + \epsilon^2 (|n_2|^4 + 4 |n_3|^2 |n_2|^2).
\end{equation}
Such a Lagrangian can easily be constructed supersymmetrically with soft terms of their natural size. The terms in $V_\epsilon$ are included in order to generate a Majorana mass for the neutrino in the vacuum.
We also expect couplings to the ``acceleron'' (again, see \cite{Fardon:2003eh,Fardon:2005wc}), which is directly tied to the stability of dark energy today. Both these couplings as well as $V_\epsilon$ do not influence our discussion here. It has been demonstrated that the vevs of such fields do not spoil the success of the dark energy theory in these hybrid models \cite{Spitzer:2006hm}.
The natural size of each soft term is of the order of the associated Dirac mass (i.e., $\tilde a_3 \sim \tilde m_3 \sim m_3$ which is expected to be of order 0.1 eV), assuming the dark energy sector has no approximate R-symmetry.
If $n_2$ has a large expectation value, it generates a Majorana mass for $\psi_3$ of order $m_3^2/{\lambda n_2}$. The presence of the relic neutrinos affects the dynamics of $n_2$, in particular by driving it to larger values.
Assuming that the relic neutrinos are in the light mass eigenstate (see \cite{bbn}), the relic neutrinos contribute a term to the effective potential for $n_2$,
\begin{equation}
V_{eff} = \frac{T^2 m_3^4}{24 \lambda^2 n_2^2}
\end{equation}
which is minimized for large $n_2$, competing with the $n_2$ mass term which is minimized at $n_2=0$. The competition drives an expectation value $\vev{\lambda n_2} \sim m_3 \sqrt{\lambda T/m_2}$. (We should note all temperatures here refer to neutrino temperature, which are slightly lower than the CMB temperature.) The non-zero value of $n_2$ creates a positive value for the mass squared of $n_3$, stabilizing it in the false vacuum with an effective cosmological constant. Such a model is analogous to hybrid inflation models, with $n_2$ playing the role of the slow-roll field, and $n_3$ playing the role of the waterfall field.
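The scaling $\langle \lambda n_2\rangle \sim m_3\sqrt{\lambda T/m_2}$ follows from balancing $V_{eff}$ against the $n_2$ soft mass term. A symbolic sketch of ours, which takes $\tilde m_2 \sim m_2$ as suggested by the natural sizes discussed above:

```python
import sympy as sp

T, m2, m3, lam, n2 = sp.symbols('T m_2 m_3 lambda n_2', positive=True)
# finite-density neutrino contribution competing with the n_2 mass term
V = T**2 * m3**4 / (24 * lam**2 * n2**2) + m2**2 * n2**2
n2_min = sp.solve(sp.diff(V, n2), n2)[0]

# lam*n2_min / (m3*sqrt(lam*T/m2)) should be a pure O(1) number
ratio = sp.simplify(lam * n2_min / (m3 * sp.sqrt(lam * T / m2)))
# ratio**4 == 1/24, i.e. ratio = 24**(-1/4), consistent with the ~ scaling
```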
The temperature where $n_3$ becomes tachyonic is $T_{\rm tach}=\sqrt{3/2}\, m_2 \tilde m_3^2/\lambda m_3^2$, and the energy converted to dark matter at that time is $\rho_{LFDM} \sim 10^{-3} \tilde a_3^4/\lambda^6$. The temperature of matter-radiation equality is $T_{MRE} = 3 \sqrt{3/2}\, \tilde a_3^4 m_3^6/64 \lambda^3 m_2^3 \tilde m_3^6$. Because of the high powers of parameters, each uncertain by factors of order one, there is a high uncertainty in $T_{MRE}$. Simply varying the mass parameters in the ranges $10^{-1.5}\ eV <\tilde m_3, \tilde a_3, m_3 < 10^{-0.5}\ eV$, $10^{-2}\ eV < m_2 < 10^{-1}\ eV$ and the coupling in the range $10^{-2} <\lambda < 1$, we find $10^{-3}\ eV \mathop{}_{\textstyle \sim}^{\textstyle <} T_{MRE} \mathop{}_{\textstyle \sim}^{\textstyle <} 10^7\ eV$. Similarly, we find $10^{-1} \mathop{}_{\textstyle \sim}^{\textstyle <} T_{MRE}/T_{DMDE} \mathop{}_{\textstyle \sim}^{\textstyle <} 10^{13}$. Hence, a solution to the coincidence problem is present, in the sense that such a crossing should occur relatively soon after matter-radiation equality; however, the precise value is clearly uncertain, so the success is limited.
A more precise statement of the success of the solution to the coincidence problem is that {\em if} this is the explanation of the present coincidence (by relating the energy densities to neutrino masses), then a break should exist in the power spectrum. Given that we can set $\lambda$ by fixing $T_{MRE}$, we can more precisely determine $T_{tach}$, even with the uncertainties of parameters. Thus, using the same ranges above, and requiring $\lambda<1$, one finds that $1 {\rm eV} \mathop{}_{\textstyle \sim}^{\textstyle <} T_{tach} \mathop{}_{\textstyle \sim}^{\textstyle <} 10^3 {\rm eV}$ and thus $2\times 10^{-2} h Mpc^{-1} \mathop{}_{\textstyle \sim}^{\textstyle <} k_{tach} \mathop{}_{\textstyle \sim}^{\textstyle <} 20 h Mpc^{-1}$. Such limits are certainly model dependent, but clearly there is a strong expectation of a break in the power spectrum in the observable range.
\section{Experiments}
\label{sec:experiments}
A great deal of existing data already constrains such a scenario. For instance,
one immediate concern would be from the CMB.
In this scenario, neutrinos are not free-streaming at recombination; this modifies the gravitational potential wells and boosts the first peak of the CMB. Such constraints have been considered \cite{Hannestad:2005ex,Bell:2005dr}, but one interacting neutrino appears to be acceptable.
One must also consider the constraint on the total number of free-streaming neutrinos during decoupling ($1.6 \leq N_{eff}^{\nu} \leq 7.1$), because having extra radiation degrees of freedom could delay matter-radiation equality, resulting in an early ISW effect \cite{Crotty:2003th,Hannestad:2003xv,Pierpaoli,Hannestad:2006mi}.
Structure formation is where LFDM is most likely to be tested. Many experiments, such as the 2dF Galaxy Redshift Survey \cite{Colless:2001gk,Percival:2001hw}, the Sloan Digital Sky Survey (SDSS) \cite{SDSS}, the Ly-$\alpha$ forest \cite{Viel:2004bf,Viel:2004np,Seljak:2006bg,Viel:2005ha,McDonald:2004eu}, and weak gravitational lensing \cite{Refregier:2003xe}, have measured the matter power spectrum over a wide range of scales. Though these experiments are in good agreement with the $\Lambda$CDM model, small scales remain an open question, with possible modifications seen in Lyman-$\alpha$ systems \cite{seljak}, as well as in some studies of dwarf galaxies \cite{Gilmore:2006iy}.
The studies most promising to test this scenario in the future would include Ly-$\alpha$ data, but one still needs non-linear simulations to extract the linear power spectra information on these length scales. Future weak lensing experiments \cite{Abazajian:2002ck} will measure the power at higher $z$ when the relevant scales would be more linear.
Other experiments like 21 cm tomography \cite{Loeb:2003ya} will also measure power at very small scales (sub-Mpc) and may find signatures of LFDM. As discussed before, to compare LFDM power spectra with experiments in this range we need detailed N-body simulations which include the non-linear hydrodynamical effects of gravitational clustering.
\section{Conclusions}
\label{sec:conclusions}
We have considered the scenario of ``Late Forming Dark Matter'' (LFDM), in which a scalar field converts the energy of a metastable state to dark matter late in the history of the universe, near the era of matter-radiation equality. Such effects arise when the potential of the scalar field is strongly affected by finite-temperature effects from some additional thermal species. These theories arise naturally in hybrid models of neutrino dark energy, in which new scalar fields arise in association with neutrinos.
The power spectrum of such theories naturally has a sharp break near the scale of the horizon at matter-radiation equality, due to the streaming of the thermal species. The presence of strong scattering of these particles can modify the depth of the break, and the presence of acoustic oscillations.
Within the context of theories of neutrino dark energy, the scale of dark energy is controlled by the scale of neutrino masses, and, similarly, the amount of dark matter, and the redshift at which it forms, \ensuremath{z_{tach}}, are also determined by the neutrino masses. In these simple theories, consistency requires a sharp break in the CDM power spectrum in the approximate range $10^2\ h Mpc^{-1} \mathop{}_{\textstyle \sim}^{\textstyle >} k_{tach} \mathop{}_{\textstyle \sim}^{\textstyle >} 10^{-3}\ h Mpc^{-1}$. Future studies at small scales, such as of Lyman-$\alpha$ systems, gravitational lensing or 21 cm absorption may be able to test these theories.
\section*{Acknowledgements} We thank Kris Sigurdson for his help in including interactions into CMBFAST and CAMB, as well as Roman Scoccimarro and Ann Nelson for reading a draft of the manuscript and providing useful comments. This work was supported by NSF CAREER grant PHY-0449818 and by the DOE OJI program under grant DE-FG02-06ER41417.
\section{Introduction}
We say a projective planar graph $G$ is \textit{separating projective planar} if for every embedding of $G$ into $\mathbb{R}P^2$, there is a disk bounding cycle $C$ in $G$ such that one vertex of $G$ is in one connected component of ${\mathbb{R}P}^2\setminus C$ and another vertex of $G$ is in the other connected component of $\mathbb{R}P^2\setminus C$. If there exists an embedding of $G$ that does not have this property, $G$ is \textit{nonseparating projective planar}. We say a projective planar graph $G$ is \textit{strongly nonseparating} if there exists an embedding of $G$ into $\mathbb{R}P^2$, such that for every pair $(u, v)$ of vertices of $G$, there is a path in $\mathbb{R}P^2$ from $u$ to $v$ that intersects $G$ only at its endpoints. If a graph is not strongly nonseparating projective planar, this means it is \textit{weakly separating}. If a graph $G$ is separating, this implies it is weakly separating.
Dehkordi and Farr characterized the set of nonseparating planar graphs by identifying the forbidden minors for this property. They proved that a graph $G$ is a nonseparating planar graph if and only if it does not contain $K_1 \dot\cup K_4$, $K_1 \dot\cup K_{2,3}$, or $K_{1,1,3}$ as a minor \cite{Dehkordi}. Their work showed that separating graphs are built from nonouterplanar graphs. Our work extends these concepts from the plane to the projective plane. There are 32 minor-minimal nonouter-projective-planar graphs \cite{Archdeacon}. One of the main results of this paper is the classification of these graphs as separating or nonseparating. We show that every such graph is either separating, or its disjoint union with $K_1$ is weakly separating, which is a necessary condition for separating projective planar graphs. The disjoint union of $K_1$ with a graph that is both nonplanar and nonouter-projective-planar is also separating projective planar. The set of minor-minimal separating projective planar graphs must be finite by Robertson and Seymour's result on graph minors \cite{robertson}. Though we have not yet characterized the complete set of minor-minimal separating projective planar graphs, we have identified many members. These graphs can be used to relate separating projective planar graphs and links in graphs embedded in projective space \cite{REU2021IPL}.\\
Define $S^k$ to be the $k$-sphere. A \textit{projective planar 3-link} is a disjoint collection of $3-m$ 1-spheres and $m$ 0-spheres, embedded into the projective plane, where $m\in\{1,2\}$. If there are two $S^1$'s, this is a \textit{type I 3-link}. A graph $G$ is \textit{intrinsically projective planar type I 3-linked (IPPI3L)} if every embedding of $G$ in the projective plane has a nonsplit type I 3-link. If there are two $S^0$'s, this is a \textit{type II 3-link}. A graph $G$ is \textit{intrinsically projective planar type II 3-linked (IPPII3L)} if every embedding of $G$ in the projective plane has a nonsplit type II 3-link.
Burkhart et al.\ examined 3-links in graphs embedded in $S^2$. They proved that the graphs $K_4 \dot\cup K_4$, $K_4 \dot\cup K_{3,2}$, and $K_{3,2} \dot\cup K_{3,2}$ are minor-minimal with respect to being intrinsically type I 3-linked, and conjectured that these form the complete set of such minor-minimal graphs \cite{Burkhart}. Our research builds on this foundation, identifying many types of projective planar graphs that are minor-minimal IPPI3L. We have proven that there are no minor-minimal IPPI3L graphs with four or more components, but there is a set with three components, each the disjoint union of $K_4$ and $K_{3,2}$ components. A drawing of a graph $G$ in the projective plane is a \textit{closed cell embedding} if every face of the graph is bounded by a 0-homologous cycle. A graph $G$ is \textit{closed nonseparating} if every nonseparating embedding of $G$ in the projective plane is a closed cell embedding. We proved that the disjoint union of two closed nonseparating graphs is also minor-minimal IPPI3L. Additionally, we prove that separating or closed nonseparating graphs glued at a vertex can be IPPI3L under specified conditions. Finally, we prove that graphs that are the disjoint union of two components, where one component is nonplanar and nonouter-projective-planar and the other is nonouterplanar, are IPPI3L, and can be minor-minimal in that regard under certain conditions.\\
\section{Definitions and notation}
Let $\mathbb{R}P^2$ denote the real projective plane. We represent $\mathbb{R}P^2$ as the unit disk in $\mathbb{R}^2$ with antipodal points identified. Let $\mathbb{R}P^3$ denote real projective space, which can be defined as the unit ball in $\mathbb{R}^3$ with antipodal points identified.
All of our graphs will be embedded piecewise linearly. Thus, for every graph embedded in ${\mathbb R}P^2$, we may assume every cycle intersects the boundary at a finite number of points. A cycle embedded in $\mathbb{R}P^2$ or $\mathbb{R}P^3$ is \textit{0-homologous} if and only if it bounds a disk. This is also called a \textit{null cycle} \cite{Glover}. A cycle embedded in $\mathbb{R}P^2$ or $\mathbb{R}P^3$ is \textit{1-homologous} if and only if it does not bound a disk. This is also called an \textit{essential cycle} \cite{Glover}. An embedding of graph in $\mathbb{R}P^2$ or $\mathbb{R}P^3$ is an \textit{affine embedding} if the graph embedding does not intersect the boundary of the ball used to define the projective plane.
A graph that can be obtained from a graph $G$ by a series of edge deletions, vertex deletions and edge contractions is called a \textit{minor} of $G$. The graph $G$ is \textit{minor-minimal} if, whenever $G$ has property $P$ and $H$ is a minor of $G$, then $H$ does not have property $P$. A property $P$ is \textit{minor closed} if, whenever a graph $G$ has property $P$ and $H$ is a minor of $G$, then $H$ also has property $P$. If $P$ is a minor closed property and the graph $G$ does not have property $P$, then $G$ is a \textit{forbidden graph} for $P$. Robertson and Seymour's Minor Theorem states if $P$ is a minor-closed graph property, then the minor-minimal forbidden graphs for $P$ form a finite set \cite{robertson}.
An \textit{outer-projective-planar graph} is one that can be embedded in the projective plane with all vertices in the same face; this is a minor closed property. Other minor closed properties include having a nonseparating projective planar embedding and having a type I 3-linkless embedding in the projective plane.
A \textit{complete graph} is a graph with an edge between all possible pairs of vertices in the graph. We represent a complete graph as $K_m$, where $m$ is the order of the graph. A \textit{complete bipartite graph} is a graph whose vertices can be divided into two disjoint sets, where no two vertices in the same set are adjacent and every vertex of the first set is adjacent to every vertex of the second set. We represent a complete bipartite graph as $K_{n,m}$, where the cardinalities of the two sets are $n$ and $m$. A \textit{complete $k$-partite graph} is a graph whose vertices can be divided into $k$ disjoint sets, where no two vertices in the same set are adjacent and every vertex of a given set is adjacent to every vertex in every other set. We represent a complete $k$-partite graph as $K_{n_1, \dots, n_k}$, where the cardinalities of the sets are $n_1, \dots, n_k$.
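These families are the building blocks used throughout the paper; as an aside, the three Dehkordi--Farr obstructions for nonseparating planar graphs mentioned in the introduction can be assembled from them in a few lines (a sketch of ours, assuming the networkx library, which the authors do not use):

```python
import networkx as nx

# forbidden minors for nonseparating planar graphs (Dehkordi and Farr)
k1_k4 = nx.disjoint_union(nx.complete_graph(1), nx.complete_graph(4))
k1_k23 = nx.disjoint_union(nx.complete_graph(1),
                           nx.complete_bipartite_graph(2, 3))
k113 = nx.complete_multipartite_graph(1, 1, 3)

# (vertices, edges) for each obstruction
sizes = {
    "K1 u K4": (k1_k4.number_of_nodes(), k1_k4.number_of_edges()),
    "K1 u K23": (k1_k23.number_of_nodes(), k1_k23.number_of_edges()),
    "K113": (k113.number_of_nodes(), k113.number_of_edges()),
}
```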
\section{Results on cycles in the projective plane}
In this section, we prove some results about the characteristics of 0-homologous and 1-homologous cycles in the projective plane. These theorems are utilized in our proofs in the following sections. They lay the foundation for understanding how cycles function and interact in the projective plane.\\
The following result is well-known in graph theory. See, for example, Diestel's text \cite{Diestel}.
\begin{lem}\label{lemma1}
A graph is 2-connected if and only if it can be constructed from a cycle by successively adding H-paths to graphs H already constructed.
\end{lem}
\begin{theo}\label{Theorem1}
Let $G$ be a 2-connected planar graph embedded in the projective plane with all cycles 0-homologous. Then the embedding of $G$ can be isotoped to an affine embedding.
\end{theo}
\begin{proof}
Consider an arbitrary embedding of $G$ in the projective plane, with all its cycles 0-homologous. Pick the `first' cycle as in Lemma \ref{lemma1}. Since this cycle is 0-homologous and bounds a disk $D$, it can be isotoped (along with the rest of $G$, which may not all be affine yet) to an affine embedding with no point arbitrarily close to the boundary.
Consider adding an $H$-path $H_{1}$ to the cycle. It intersects the cycle at two vertices $v_{1}$ and $v_{2}$, which divide the cycle into two paths $P_{1}$ and $P_{2}$. Let $L_{1}$ be the cycle consisting of $H_{1}$ and $P_{1}$, and let $L_{2}$ be the cycle consisting of $H_{1}$ and $P_{2}$. By assumption, both $L_{1}$ and $L_{2}$ are 0-homologous; let $L_{1}$ bound the disk $D_{1}$ and $L_{2}$ bound the disk $D_{2}$. Then $D$ is contained in exactly one of $D_{1}$, $D_{2}$. Without loss of generality, assume $D \subseteq D_{1}$, $\interior D \cap \interior D_{2} = \varnothing$, and $D \cup D_{2} = D_{1}$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{Evan/5.1.2.png}
\caption{Adding an $H$-path $H_{1}$ to the cycle}
\end{center}
\end{figure}
Isotope $L_{2}$ (and the rest of $G$) to an affine embedding by deforming $D_{2}$ towards $D$; then every cycle in this embedding is affine. Isotope $L_{1}$ so that no point is arbitrarily close to the boundary. Inductively add $H$-paths until the original 2-connected graph $G$ is obtained in an affine embedding, equivalent to the original embedding.
\end{proof}
We make the following two definitions: the \textit{standard affine cycle} is the affine 0-homologous cycle $\{(x,y) \mid x^2 + y^2 = 1/2\}$, and the \textit{standard vertical cycle} is the 1-homologous cycle $\{(x,y) \mid x=0\}$, which intersects the boundary at exactly one point.
In order to prove the second theorem, that every 0-homologous cycle is isotopic to the standard affine cycle and every 1-homologous cycle is isotopic to the standard vertical cycle in the projective plane, the following lemma is needed.
\begin{lem}
Given a cycle in $\mathbb{R}P^2$ crossing the boundary transversely at exactly the points $p_{1}$, $p_{2}$, \dots, $p_{2k}$, labeled clockwise, where $p_{i}$ and $p_{j}$ represent the same point if and only if $i \equiv j \pmod{k}$, there exists an $i$ with $1\leq i \leq 2k-1$ such that some path that is a subgraph of the cycle connects $p_{i}$ and $p_{i+1}$ and intersects the boundary only at $p_{i}$ and $p_{i+1}$.
\end{lem}
\begin{proof}
Consider the path connecting $p_{1}$ to some $p_{n_{1}}$, $2 \leq n_{1} \leq 2k$, which intersects the boundary at only $p_{1}$ and $p_{n_{1}}$. We call such a path an \textit{interior path}. If $n_{1}=2$, this is the desired path. Otherwise, $p_{2}$ must connect to some $p_{n_{2}}$, $3 \leq n_{2} \leq n_{1}-1$, via an interior path that intersects the boundary only at $p_{2}$ and $p_{n_{2}}$. Inductively consider interior paths connecting $p_{i}$ with $p_{n_{i}}$, $1 \leq i \leq k$.
Set $m_{i}=n_{i}-i$; then the $m_{i}$ are positive and strictly decreasing. Hence $m_{a} = 1$ or $m_{a} = 2$ for some $a$: whenever $m_{b} \geq 3$, there is a further interior path connecting $p_{b+1}$ with $p_{n_{b+1}}$ such that $m_{b+1} < m_{b}$, so the sequence cannot remain at least 3 indefinitely.
If $m_{a} = 1$, then the interior path connecting $p_{a}$ with $p_{n_{a}}$ is the desired path. If $m_{a} = 2$, then the interior path starting at $p_{a+1}$ must intersect the path connecting $p_{a}$ with $p_{n_{a}}$, contradicting the fact that the cycle is embedded.
\end{proof}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.52]{Evan/5.2.1.png}
\caption{Existence of a path connecting $p_{i}$, $p_{i+1}$ intersecting the boundary only at $p_{i}$, $p_{i+1}$}
\end{center}
\end{figure}
A path connecting $p_{i}$, $p_{i+1}$ that intersects the boundary only at points $p_{i}$, $p_{i+1}$ is called a \textit{half-moon path}.
\begin{theo}\label{Theorem2}
Every 0-homologous cycle in the projective plane is isotopic to the standard affine cycle. Every 1-homologous cycle in the projective plane is isotopic to the standard vertical cycle.
\end{theo}
\begin{proof}
Let $C$ be a cycle crossing the boundary at $n$ points; without loss of generality, apply an ambient isotopy so that every intersection of $C$ with the boundary is transverse. If $n=0$, $C$ bounds a disk and is isotopic to the standard affine cycle. If $n=1$, $C$ does not bound a disk and is isotopic to the standard vertical cycle.
Otherwise, consider a half-moon path $L$, which exists by the preceding lemma. Without loss of generality, apply an ambient isotopy so that $L$ does not touch the other paths. Then take a sufficiently small neighbourhood $U$ that contains only $p_{i}$, $p_{i+1}$, $L$, and the connected pieces of the paths through $p_{i}$, $p_{i+1}$. Isotope this small neighbourhood over the boundary. Note that the isotopy preserves whether the cycle bounds a disk, and the cycle after the isotopy crosses the boundary at $n-2$ points.
Inductively applying this argument, we obtain a cycle $C'$ that crosses the boundary at either 0 points or 1 point. If $C'$ crosses the boundary at 0 points, it bounds a disk and is isotopic to the standard affine cycle; since $C$ is isotopic to $C'$, every 0-homologous cycle is isotopic to the standard affine cycle. If $C'$ crosses the boundary at 1 point, it does not bound a disk and is isotopic to the standard vertical cycle; since $C$ is isotopic to $C'$, every 1-homologous cycle is isotopic to the standard vertical cycle.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{Evan/5.2.2.png}
\caption{Reduced number of crossing with the boundary}
\end{center}
\end{figure}
\end{proof}
\begin{col}
A 1-homologous cycle crosses the boundary an odd number of times. A 0-homologous cycle crosses the boundary an even number of times.
\end{col}
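One way to see this: each half-moon isotopy in the proof of Theorem \ref{Theorem2} changes the number of boundary crossings by exactly two, so the parity of the number of crossings is an invariant of the reduction:
\[
n \;\equiv\; n-2 \;\equiv\; \cdots \;\equiv\;
\begin{cases}
0 \pmod{2} & \text{if the cycle is 0-homologous},\\
1 \pmod{2} & \text{if the cycle is 1-homologous}.
\end{cases}
\]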
Next we generalize the following result:
\begin{lem}[Glover et al.\ \cite{Glover}] \label{gloverlemma}
Any two 1-homologous cycles $C_1$ and $C_2$ in the projective plane intersect each other.
\end{lem}
\begin{theo}\label{Theorem3}
Given two different 1-homologous cycles $C_{1}$ and $C_{2}$ in the projective plane that intersect only transversely, the number of crossings must be odd.
\end{theo}
\begin{proof}
Suppose we have two different 1-homologous cycles $C_{1}$ and $C_{2}$ that intersect only transversely. By Theorem \ref{Theorem2}, we may isotope so that one of the cycles, $C_{1}$, is the standard vertical cycle. Without loss of generality, isotope the cycles so that they do not intersect on the boundary. We may assume that $C_{2}$ intersects the boundary at points $p_{1}, \dots, p_{4k+2}$, where $p_{i}$ and $p_{j}$ represent the same point if and only if $i \equiv j \pmod{2k+1}$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.39]{Sherry/5.3Sherry.png}
\caption{Reduced number of crossings of $C_{2}$ with the boundary by applying isotopy to $L$}
\end{center}
\end{figure}
If $k=0$, then $C_{2}$ crosses the boundary exactly once, and since $C_{1}$ and $C_{2}$ intersect only transversely, the number of crossings is odd. Otherwise, consider a half-moon interior path $L$ connecting $p_{i}$ and $p_{i+1}$, which exists by the lemma preceding Theorem \ref{Theorem2}. Take a sufficiently small neighbourhood $U$ that contains only $p_{i}$, $p_{i+1}$, $L$, and the connected pieces of the paths through $p_{i}$, $p_{i+1}$. Isotope this small neighbourhood over the boundary.
Note that the number of crossings of $C_{1}$ with $C_{2}$ is preserved mod 2, while the resulting curve intersects the boundary at $4k-2 = 4(k-1)+2$ points. By induction on $k$, $C_{1}$ and $C_{2}$ can be isotoped to two cycles each intersecting the boundary only once. Thus $C_1$ and $C_2$ intersect an odd number of times.
\end{proof}
\section{Separating projective planar graphs}
The property of being outer-projective-planar is a minor closed property. By Robertson and Seymour's result \cite{robertson}, the set of minor-minimal nonouter-projective-planar graphs is finite. All 32 such graphs were characterized and divided into nine families \cite{Archdeacon}. The family members are related to each other by $\Delta-Y$ exchanges.
Since outer-projective-planar graphs are nonseparating, nonouter-projective-planar graphs are good candidates for producing the beginning of the set of minor-minimal separating projective planar graphs. For each of the 32 graphs, we examined whether it is separating. If a graph was nonseparating, we examined the graph with a vertex or edge added, or with a vertex split, to find a separating graph. In this way, we hope to have characterized much of the set of minor-minimal separating projective planar graphs, but we have not yet characterized the complete set.
\subsection{$\alpha$ family}
The $\alpha$ family of minor-minimal nonouter-projective-planar graphs consists of $\alpha_1$, $\alpha_2$, and $\alpha_3$. The graph $\alpha_1$ is two disjoint copies of $K_4$. The graph $\alpha_2$ is two disjoint components: $K_4$ and $K_{3,2}$. The graph $\alpha_3$ is two disjoint copies of $K_{3,2}$.
\begin{proposition}
Every member of the $\alpha$ family is minor-minimal separating projective planar.
\end{proposition}
\noindent The proof of this proposition relies on the following theorem:
\begin{theo}[Halin \cite{Halin}, Chartrand and Harary \cite{CH}]\label{halin}
A graph $G$ is outerplanar if and only if $G$ contains neither $K_4$ nor $K_{3,2}$ as a minor.
\end{theo}
\begin{proof}
Each member of the $\alpha$ family has two components, and each component contains a separating cycle whenever it is affine embedded together with the other component. If both components were embedded in the projective plane with 1-homologous cycles, those cycles would intersect by Lemma \ref{gloverlemma}. Hence, in any projective planar drawing, at least one component has all of its cycles 0-homologous.
Since the embedding of the component with all 0-homologous cycles is equivalent to an affine embedding of that component by Theorem \ref{Theorem1}, this embedding also contains a separating cycle. Since at least one component in every embedding of a member of the $\alpha$ family has all 0-homologous cycles, every embedding contains a separating cycle. So each member of the $\alpha$ family is separating projective planar.
The graphs $K_4$ and $K_{3,2}$ are minor-minimal nonouterplanar graphs. So, for any proper minor of a graph in the $\alpha$ family, one component can be affine embedded without a separating cycle, and the other component can be embedded with a 1-homologous cycle, making the two components together nonseparating. Therefore, the members of the $\alpha$ family are minor-minimal separating projective planar.
\end{proof}
\subsection{$\beta$ family}
The $\beta$ family of minor-minimal nonouter-projective-planar graphs is pictured in Figures \ref{fig:minipage1}--\ref{fig:minipage6}. The $\beta$ family consists of copies of $K_4$ and $K_{3,2}$ glued at a vertex.
\begin{proposition}
Every member of the $\beta$ family is nonseparating projective planar.
\end{proposition}
\begin{proof}
Every member of the $\beta$ family has a nonseparating embedding in the projective plane, as illustrated in Figures \ref{fig:minipage1} through \ref{fig:minipage6}.
\end{proof}
\begin{figure}[H]
\centering
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[scale=0.4]{Ángel/B1.PNG}
\caption{$\beta_1$}
\label{fig:minipage1}
\end{minipage}
\quad
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[scale=0.4]{Ángel/B2.PNG}
\caption{$\beta_2$}
\label{fig:minipage2}
\end{minipage}
\end{figure}
\begin{figure}[H]
\centering
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[scale=0.4]{Ángel/B3.PNG}
\caption{$\beta_3$}
\label{fig:minipage3}
\end{minipage}
\quad
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[scale=0.4]{Ángel/B4.PNG}
\caption{$\beta_4$}
\label{fig:minipage4}
\end{minipage}
\end{figure}
\begin{figure}[H]
\centering
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[scale=0.4]{Ángel/B5.PNG}
\caption{$\beta_5$}
\label{fig:minipage5}
\end{minipage}
\quad
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[scale=0.4]{Ángel/B6.PNG}
\caption{$\beta_6$}
\label{fig:minipage6}
\end{minipage}
\end{figure}
Note that each embedding in Figure \ref{fig:minipage1} to Figure \ref{fig:minipage6} is weakly separating. In Figure \ref{fig:minipage1}, no topological path connects vertices 2 and 5 without its interior intersecting the graph; in Figure \ref{fig:minipage2}, no path connects vertices 2 and 6; in Figure \ref{fig:minipage3}, no path connects vertices 2 and 7; in Figure \ref{fig:minipage4}, no path connects vertices 2 and 8; in Figure \ref{fig:minipage5}, no path connects vertices 2 and 8; in Figure \ref{fig:minipage6}, no path connects vertices 1 and 8. We conjecture that every member in the $\beta$ family is weakly separating.
\subsection{$\epsilon$ family}
The $\epsilon$ family of minor-minimal nonouter-projective-planar graphs are pictured in Figure \ref{fig:FE1} and \ref{fig:FE2}.
\begin{proposition}
Every member of the $\epsilon$ family is strongly nonseparating, and hence nonseparating projective planar.
\end{proposition}
\begin{proof}
For $\epsilon_1$, $\epsilon_2$, $\epsilon_3$ and $\epsilon_5$, consider the embeddings in Figure \ref{fig:FE1}. Since all vertices can be connected by paths that intersect the graph only at their endpoints, these embeddings are strongly nonseparating. For $\epsilon_4$ and $\epsilon_6$, consider the embeddings in Figure \ref{fig:FE2}; they are also strongly nonseparating. Therefore all members of the $\epsilon$ family are nonseparating projective planar.
\end{proof}
\begin{figure}[H]
\centering
\includegraphics[scale=0.23]{Lucy/FE2.png}
\caption{Strongly nonseparating projective planar drawings of $\epsilon_4$ and $\epsilon_6$}
\label{fig:FE2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{Uriel/FE1.png}
\caption{Strongly nonseparating projective planar drawings of $\epsilon_1$, $\epsilon_2$, $\epsilon_3$ and $\epsilon_5$}
\label{fig:FE1}
\end{figure}
\subsection{$\delta$ family}
The $\delta$ family of minor-minimal nonouter-projective-planar graphs has two graphs shown in Figure \ref{D1D2}.
\begin{center}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.18]{Evan/D1D2.png}
\caption{The $\delta$ Family includes $\delta_1$ (left) and $\delta_2$ (right)}
\label{D1D2}
\end{center}
\end{figure}
\end{center}
\begin{lem}\label{CycleAddion}
For any two cycles $C_1$ and $C_2$ intersecting along an arc $D$, define the sum of $C_1$ and $C_2$ to be $(C_1\cup C_2)\setminus D$. The sum of two 0-homologous cycles is 0-homologous, the sum of two 1-homologous cycles is 0-homologous, and the sum of a 1-homologous cycle and a 0-homologous cycle is 1-homologous.
\end{lem}
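In the language of $\mathbb{Z}_2$-homology classes, the lemma may be summarized by the rule
\[
[C_1 + C_2] \;=\; [C_1] + [C_2] \in H_1(\mathbb{R}P^2;\mathbb{Z}_2),
\]
so that $0+0=0$, $1+1=0$, and $0+1=1$, matching the three statements of the lemma.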
\begin{proposition}
Both graphs in the $\delta$ family are separating.
\end{proposition}
\begin{proof}
By Theorem \ref{Theorem1}, a 2-connected planar graph embedded in the projective plane with all cycles 0-homologous can be isotoped to an affine embedding. Hence, if a 2-connected subgraph $H$ of $\delta_1$ or $\delta_2$ contains either $K_4\dot\cup K_1$ or $K_{3,2}\dot\cup K_1$ as a minor, then any embedding of $H$ with all cycles 0-homologous is separating, because such an embedding can be isotoped to an affine embedding containing either $K_4\dot\cup K_1$ or $K_{3,2}\dot\cup K_1$ \cite{Dehkordi}.
Since the cycle bounding any face $F$ of $\delta_1$ or $\delta_2$ can be viewed as the sum of the cycles bounding all of the other faces, every embedding of $\delta_1$ and $\delta_2$ has an even number of 1-homologous face-bounding cycles. Thus, since all 1-homologous cycles intersect each other and both $\delta_1$ and $\delta_2$ have reflection symmetry, the number of cases we must consider is small.
All projective planar embeddings of $\delta_1$ and $\delta_2$, up to symmetry and the homology class of each cycle, are represented in Figure \ref{AllEmbeddings}, where a face is shaded if and only if its bounding cycle is 1-homologous in the corresponding embedding. The highlighted subgraphs are 2-connected with all cycles 0-homologous and contain either $K_4\dot\cup K_1$ or $K_{3,2}\dot\cup K_1$ as a minor; since such a subgraph exists for every embedding of $\delta_1$ and $\delta_2$, both graphs are separating.
\end{proof}
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{Evan/AllD1.png}
\includegraphics[scale=0.4]{Evan/AllD2.png}
\caption{Projective planar embeddings of $\delta_1$ and $\delta_2$}
\label{AllEmbeddings}
\end{figure}
\begin{comment}
\begin{lem}\ref{Dekhordi}\label{DLem}
$K_{3, 2}\dot\cup K_1$ is a separating planar graph.
\end{lem}
\begin{ske} Let $G=(V, E)$ be a graph. Let $\le$ be a total ordering of $V\cup E$ such that for each $u, v\in V$ and each $e\in E$, if $e=(u, v)$
, then $u<e$ and $v<e$. Let $D$ be an embedding of a subgraph of $G$.
Let $D'$ be a drawing of $G$ that contains $D$ as a subdrawing. If $D' = D$ then $D$ is a drawing of $G$, and $D$ is the only drawing of $G$ that contains $D$. Suppose $D'\ne D$. Let $\{x_1, x_2, \ldots, x_n\}$ be the ordered set of vertices and edges that differ between $D'$ and $D$. Then $\big\{D, D'\setminus\{x_2, x_3, \ldots, x_n\}, \ldots, D'\setminus \{x_n\}, D'\big\}$ is a set of drawings obtained from adding the remaining vertices and edges to $D$ in the order granted by $\le$. Thus, $D'$ is a drawing of $G$ obtained from adding the remaining vertices and edges to $D$ in the order granted by $\le$.
\end{ske}
We first show that $\delta_1$ is separating.
\begin{ske}
Observe that if the following subgraphs of $\delta_1$ are affine embedded, there is already a separating cycle. Each has $K_{3,2} \dot\cup K_1$ as a minor no matter how the remaining part is embedded.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.3]{Evan/D1Subs.png}
\caption{Separating planar subgraphs of $\delta_1$}
\label{separatingsubgraph}
\end{center}
\end{figure}
Let $D$ be a projective planar drawing of a graph that contains $K_3, 2$ as a minor. Suppose every cycle in $D$ is homologous to zero. By Theorem \ref{Theorem1}, $D$ is equivalent to an affine embedding. Hence, $D$ is separating. Thus, if there is a drawing of $\delta_1$ such that every cycle in at least one graph in Figure \ref{separatingsubgraph} is 0-homologous, then that drawing has a separating cycle. We therefore need only to check the embeddings of $\delta_1$ for which this is not the case.
By Lemma \ref{DLem}, the order we draw the pieces of each embedding in does not matter. We may thus begin by drawing some cycles as homologous to one and find each embedding containing those one homologous cycles by adding remaining vertices and edges in every possible position until the drawing is complete. The graphs in the delta family each have just one projective planar embedding for each combination of one-homologous cycles.
\begin{center}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{Evan/D1A.png}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{Evan/D1B.png}
\caption{Embeddings of $\delta_1$, choosing each cycle bounding a pink face to be one homologous}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{Evan/D1C.png}
\caption{Embeddings of $\delta_1$, choosing each cycle bounding a pink face to be one homologous.}
\end{center}
\end{figure}
\end{center}
We show that $\delta_2$ is separating in a similar way, as is demonstrated in Figures \ref{delta2_1} and \ref{delta2_2}.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{Evan/D2Subs.png}
\caption{Separating planar subgraphs of $\delta_2$}
\label{delta2_1}
\end{center}
\end{figure}
\begin{center}
\begin{figure}[H]
\includegraphics[scale=0.3]{Evan/D2A.png}
\includegraphics[scale=0.3]{Evan/D2B.png}
\caption{Embeddings of $\delta_2$, choosing each cycle bounding a pink face to be one homologous.}
\label{delta2_2}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{Evan/D2C.png}
\caption{Embeddings of $\delta_2$, choosing each cycle bounding a pink face to be one homologous.}
\end{center}
\end{figure}
\end{center}
\end{ske}
\end{comment}
\subsection{$\gamma$ family}
The $\gamma$ family of minor-minimal nonouter-projective-planar graphs are pictured in Figure \ref{gammafamily}.
\begin{proposition}
Every graph in the $\gamma$ family, excluding $\gamma_{6}$, is strongly nonseparating and thus nonseparating projective planar.
\end{proposition}
\begin{proof}
Every graph in the $\gamma$ family, excluding $\gamma_{6}$, has a strongly nonseparating embedding in the projective plane as illustrated in Figure \ref{gammafamily}.
\end{proof}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.62]{Evan/GammaFamily.png}
\caption{Every graph in the $\gamma$ family, excluding $\gamma_{6}$, is strongly nonseparating and thus nonseparating}
\label{gammafamily}
\end{center}
\end{figure}
\subsubsection{$\gamma_{6}$}
The graph $\gamma_{6}$ is obtained from $\gamma_{1}$ via two $\Delta - Y$ exchanges.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.45]{Evan/Five_path_new.png}
\caption{The graph $\gamma_{6}$ with vertices labelled}\label{Gamma}
\end{center}
\end{figure}
In a given embedding, a \textit{0-homologous path} is a path that crosses the boundary an even number of times. Similarly, in a given embedding, a \textit{1-homologous path} is a path that crosses the boundary an odd number of times.
\begin{proposition}
The graph $\gamma_6$ is separating projective planar.
\end{proposition}
\begin{proof}
Label the vertices of $\gamma_6$ as in Figure~\ref{Gamma}. Note that the planar embedding of $\gamma_6$ is separating and that there are 5 paths connecting vertex 1 with vertex 2, including $P_{1} = (1, 3, 2)$, $P_{2} = (1,4,2)$, $P_{3} = (1,5,2)$, $P_{4} = (1,5,7,6,2)$, and $P_{5} = (1,5,8,6,2)$. Embed $\gamma_6$ in the projective plane. By the pigeonhole principle, either at least three of these paths are 0-homologous or at least three are 1-homologous. If at least three paths are 1-homologous, isotope a sufficiently small neighbourhood containing vertex $1$ and no other vertex over the boundary. Then every 0-homologous path becomes a 1-homologous path and every 1-homologous path becomes a 0-homologous path. In the resulting embedding, there are at least three 0-homologous paths.
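The pigeonhole count here is immediate: with five paths distributed between two homology classes,
\[
\left\lceil \tfrac{5}{2} \right\rceil = 3,
\]
so at least three of the paths $P_1, \dots, P_5$ lie in the same homology class.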
Suppose at least three paths are 0-homologous, then the other two paths may be both 0-homologous, one 0-homologous and one 1-homologous, or both 1-homologous.
\begin{itemize}
\item If both of the two other paths are 0-homologous, then the paths $(1,5)$, $(5,2)$, $(5,7,6,2)$, $(5,8,6,2)$ must either be all 0-homologous or all 1-homologous. Therefore every cycle in the embedding is 0-homologous. The embedding can be isotoped into an affine embedding and is therefore separating.
\item If both of the two other paths are 1-homologous, separately consider cases according to where these two paths lie. Note that not every pair of paths can be the two 1-homologous paths: suppose $P_{1}$, $P_{5}$ are the two 1-homologous paths, then the cycles $(1,4,3,2)$ and $(5,8,6,7)$ are two 1-homologous cycles with no intersection, contradicting Lemma \ref{gloverlemma}.
By symmetry there are only two cases, with the two 1-homologous paths being $P_{1}$, $P_{2}$ or $P_{1}$, $P_{3}$. In each of the two cases, the three 0-homologous paths together contain a minor equivalent to $K_{3,2}$, which is nonouterplanar. Therefore a vertex must be on one side of the 0-homologous cycle formed by the other two paths, and the two 1-homologous paths must lie on the other side. In particular, at least one of the two sides of the cycle contains a vertex. Therefore the cycle is separating.
\item If exactly one of the two other paths is 1-homologous, separately consider cases according to where the 1-homologous path lies. If the 1-homologous path is $P_{1}$ or $P_{2}$, consider the minor obtained by deleting $(1,3,2)$ or $(1,4,2)$. If the 1-homologous path is $P_{3}$, consider the minor obtained by deleting $(5,2)$. If the 1-homologous path is $P_{4}$ or $P_{5}$, consider the minor obtained by deleting $(5,7,6)$ or $(5,8,6)$. In each case, the resulting minor has only 0-homologous cycles and can be isotped to an affine embeddings. In addition, in each case the resulting minor has a subgraph of $K_1 \dot\cup K_{3,2}$. Therefore the graph has a separating cycle.
\begin{figure}[ht]
\centering
\begin{minipage}[b]{0.4\linewidth}
\centering
\includegraphics[scale=0.6]{Evan/Case_3_a.png}
\caption{$P_{1}$ is the 1-homologous path}
\label{fig:C1is1homologous}
\end{minipage}
\quad
\begin{minipage}[b]{0.4\linewidth}
\centering
\includegraphics[scale=0.6]{Evan/Case_3_b.png}
\caption{$P_{3}$ is the 1-homologous path}
\label{fig:C3is1homologous}
\end{minipage}
\end{figure}
\end{itemize}
Therefore the graph $\gamma_{6}$ is separating.
\end{proof}
\subsection{$\zeta$ family}
The members of the $\zeta$ family of minor-minimal nonouter-projective-planar graphs are shown in Figures \ref{ZetaNot3} and \ref{Zeta3}.
\begin{theo}
Besides $\zeta_3$, every member of the $\zeta$ family is strongly nonseparating.
\end{theo}
\begin{proof}
To show that $\zeta_1, \zeta_2, \zeta_4, \zeta_5,$ and $\zeta_6$ are strongly nonseparating, it suffices to show a single strongly nonseparating embedding of each graph. See Figure \ref{ZetaNot3}.
\begin{figure}[H]
\centering
\subfigure[]{\includegraphics[width=0.3\textwidth]{Evan/Zeta_6.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{Evan/Zeta_12_2.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{Evan/Zeta_45.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{Evan/Zeta_45_2.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{Evan/Zeta6.png}}
\caption{Strongly nonseparating embeddings of $\zeta_1, \zeta_2, \zeta_4, \zeta_5$, and $\zeta_6$}
\label{ZetaNot3}
\end{figure}
We now show that $\zeta_3$, the vertices and edges of a cube, is weakly separating, and indeed separating.
Label the faces in the planar embedding of $\zeta_3$ as in Figure \ref{Zeta3}.
\begin{center}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.17]{Evan/ABCDEFCube.png}
\caption{A planar embedding of $\zeta_3$ with faces labelled}
\label{Zeta3}
\end{center}
\end{figure}
\end{center}
It is well known that for any cycle $X$ in any drawing of $\zeta_3$ in ${\mathbb{R}P}^2$, the homology class of $X$ is equal to the modulo 2 sum of the homology classes of the boundaries of the faces on one side of $X$ in the above drawing. Let $D^*$ be a drawing of $\zeta_3$.
The cube has $24$ symmetries, and in particular we may first choose any of the $6$ faces to be in the place of $A$ and then choose any of the $4$ faces adjacent to the first to be in the place of $B$. There are two cases to consider:
\begin{enumerate}
\item Suppose every cycle in $D^*$ is homologous to zero. Then, by Theorem \ref{Theorem1}, $D^*$ may be isotoped to an affine embedding. Since $\zeta_3$ contains $K_4\dot\cup K_1$ as a minor, any affine embedding of $\zeta_3$ is separating. Thus, $D^*$ is separating.
\item Let $f$ map the planar embedding of $\zeta_3$ to $\mathbb{R}P^2$. Without loss of generality, suppose $f(Bd(A))$ is homologous to one. Then $f(Bd(D))$ cannot be homologous to one, since all one homologous cycles in $D^*$ intersect; and since $Bd(A)$ bounds the rest of the graph in the above embedding, an odd number of the boundaries of $B$, $C$, $E$, and $F$ must also be homologous to one in $\mathbb{R}P^2$. Without loss of generality, suppose $f(Bd(B))$ is homologous to one. Then $f(Bd(F))$ is not homologous to one because $f(Bd(F))$ does not intersect $f(Bd(B))$. Thus, since $f(Bd(C))$ and $f(Bd(E))$ cannot both be homologous to one, neither is. This means that $f(Bd(A))$ and $f(Bd(B))$ are the only one homologous face-bounding cycles. The drawing of $f(Bd(A))$ and $f(Bd(B))$ as one homologous cycles separates $\mathbb{R}P^2$ into two connected regions and contains six of the eight vertices of $\zeta_3$. Since the remaining two vertices are connected by an edge, they must be in the same region. This leaves one option for $D^*$, shown in Figure \ref{Zeta3Sep}, where the boundary of $C\cup D\cup F$ is a separating cycle.
\end{enumerate}
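The parity bookkeeping in the second case may be recorded as follows: since $Bd(A)$ bounds the union of the remaining faces in the planar drawing of Figure \ref{Zeta3}, we have
\[
[f(Bd(A))] \;=\; [f(Bd(B))]+[f(Bd(C))]+[f(Bd(D))]+[f(Bd(E))]+[f(Bd(F))] \pmod{2},
\]
so $[f(Bd(A))]=1$ together with $[f(Bd(D))]=0$ forces an odd number of the classes $[f(Bd(B))]$, $[f(Bd(C))]$, $[f(Bd(E))]$, $[f(Bd(F))]$ to equal $1$.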
\begin{figure}[H]
\centering
\includegraphics[scale=.35]{Evan/Zeta3Separating.png}
\caption{Up to symmetry, the only projective planar embedding of $\zeta_3$ with a one-homologous cycle}
\label{Zeta3Sep}
\end{figure}
\end{proof}
\subsection{$\eta$ family}
The $\eta$ family of minor-minimal nonouter-projective-planar graphs contains only one member, pictured in Figure \ref{etaaffine}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.27]{Lucy/planar_eta.png}
\caption{An affine embedding of $\eta_1$}
\label{etaaffine}
\end{figure}
A \textit{subdivision} of a graph $G$ is a graph resulting from subdividing edges of $G$. Subdividing an edge $e$ with endpoints $v_1$ and $v_2$ creates a graph identical to $G$ except that a new vertex $v_3$ is added and the edge $e$ is replaced by the two edges $(v_1,v_3)$ and $(v_3, v_2)$. A \textit{0-homologous region} is a face of an embedded graph that is bounded by a 0-homologous cycle.
\begin{proposition}
The graph $\eta_1$ is separating projective planar.
\end{proposition}
\begin{proof}
Consider the affine embedding of $\eta_1$ shown in Figure
\ref{etaaffine}, with regions labelled $A$ through $F$. It is unique, as $\eta_1$ is a subdivision of a 3-connected graph \cite{Whitney}.
The affine embedding has a separating cycle, as is evident from the drawing in Figure \ref{etaaffine}. Observe that in an embedding of $\eta_1$, if the subgraphs of $\eta_1$ in Figure \ref{sepsubgraph} are affine embedded, there is already a separating cycle in the embedding, since there is an affine embedded $K_{3,2} \dot\cup K_{1}$ or a subdivision of it.
\begin{figure}[H]
\centering
\includegraphics[scale=0.28]{Lucy/eta_subgraphs.png}
\caption{Separating subgraphs of $\eta_1$}
\label{sepsubgraph}
\end{figure}
We know from Theorem \ref{Theorem1} that embeddings in which every cycle is 0-homologous are equivalent to affine embeddings. This means we should consider the possible combinations of regions that can be embedded with a 0-homologous cycle boundary, without creating these subgraphs.
We now show that there is no projective planar embedding in which no region is bounded by a 0-homologous cycle. Suppose the region boundaries of $A$, $B$, $C$, and $D$ are embedded as 1-homologous cycles, as seen in the far left graph of Figure \ref{fig:eta}. Thus far, no region is bounded by a 0-homologous cycle. However, to complete the graph, we must connect vertex 8 to vertex 4, and in doing so, $E$ and $F$ are embedded with boundaries that are 0-homologous cycles. Thus, there is no way to embed all region boundaries as 1-homologous cycles.
\begin{figure}[H]
\centering
\includegraphics[scale=0.28]{Lucy/eta.png}
\caption{As seen in the top left graph, if we embed region boundaries of $A$, $B$, $C$, and $D$ as 1-homologous cycles, three regions remain for the vertex 8 and its adjacent edges to be embedded into. The three other graphs show the three cases of when vertex 8 and its adjacent edges are embedded in these three regions. This will produce the regions $E$ and $F$, $D$ and $F$, or $D$ and $E$ with region boundaries that are 0-homologous cycles.}
\label{fig:eta}
\end{figure}
Every region is equivalent to every other. Without loss of generality, choose region $A$ to be embedded with a 0-homologous region boundary. This means we cannot also have the region boundary of $D$ embedded as a 0-homologous cycle, because that creates one of the separating subgraphs. Likewise, we cannot have the region boundaries of both $B$ and $F$ also be 0-homologous, nor those of both $C$ and $E$, because this would create the other separating subgraphs. We also cannot have $A$, $B$, and $C$ all with 0-homologous region boundaries: since we have supposed the region boundary of $A$ is 0-homologous, we cannot have the region boundaries of both $B$ and $C$ also be 0-homologous.
There is no embedding in which only the region boundary of $A$ is a 0-homologous cycle: in homology, $[A+B+C]=[D+E+F]$ would then give $0=1$. For the same parity reason, there are no embeddings with precisely three or five region boundaries that are 0-homologous cycles. If the region boundaries of $A$ and $B$ are 0-homologous and the region boundary of $C$ is 1-homologous, then either two or zero of the boundaries of regions $D$, $E$, and $F$ are 0-homologous. There cannot be four regions with 0-homologous boundaries, because that would create a separating subgraph, so the only remaining case is that exactly two regions have 0-homologous boundaries.
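This case analysis is finite and can be checked by brute force. The following sketch (assuming, as argued above, the parity constraint coming from $[A+B+C]=[D+E+F]$, the forbidden pairs $\{A,D\}$, $\{B,F\}$, $\{C,E\}$, and the forbidden triples $\{A,B,C\}$, $\{D,E,F\}$) enumerates the possible sets of regions with 0-homologous boundaries:

```python
from itertools import combinations

regions = "ABCDEF"
# Forbidden configurations taken from the separating-subgraph argument above.
bad_pairs = [{"A", "D"}, {"B", "F"}, {"C", "E"}]
bad_triples = [{"A", "B", "C"}, {"D", "E", "F"}]

def allowed(zero_set):
    """zero_set = set of regions whose boundaries are 0-homologous."""
    # [A+B+C] = [D+E+F] forces the counts of 0-homologous boundaries
    # in {A,B,C} and in {D,E,F} to have the same parity.
    abc = len(zero_set & {"A", "B", "C"})
    defs = len(zero_set & {"D", "E", "F"})
    if abc % 2 != defs % 2:
        return False
    if any(p <= zero_set for p in bad_pairs):
        return False
    if any(t <= zero_set for t in bad_triples):
        return False
    return True

survivors = [set(c) for k in range(7) for c in combinations(regions, k)
             if allowed(set(c))]
# Nonempty survivors containing A, matching the WLOG choice above:
with_A = sorted("".join(sorted(s)) for s in survivors if "A" in s)
print(with_A)
```

Running this prints `['AB', 'AC', 'AE', 'AF']`, matching the four pairs considered in the text.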
The possible pairs of regions with 0-homologous boundaries are $AB$, $AC$, $AE$, and $AF$. The pairs $AB$ and $AC$ are equivalent, as are $AE$ and $AF$. Without loss of generality, we embed the region boundaries of $A$ and $B$ as 0-homologous cycles. We will show there is only one such embedding, pictured in Figure \ref{AB affine}. The embedding is separating.
Now, we will show this embedding is unique up to equivalence. First, affine embed regions $A$ and $B$. Next, embed vertex 6; we must connect it to vertices 4 and 5. If neither of these edges intersects the boundary of the projective plane, this creates either region $C$ with a 0-homologous boundary or a region bounded by the 0-homologous cycle $\{5,6,4,3,1,7\}$. We cannot have the boundary of region $C$ be 0-homologous, and if the cycle $\{5,6,4,3,1,7\}$ is 0-homologous, we cannot connect vertex 8 to its neighbours without an edge crossing. Thus, the cycle $\{2,4,6,5\}$ must be embedded as a 1-homologous cycle. Finally, we must connect vertex 8 to vertices 1, 4, and 5, and there is only one way to do so. Thus, we conclude the embedding is unique.
\begin{comment}
This embedding is unique, because we cannot use any of the switching moves detailed in Mohar et al \cite{Mohar}. The types of switching moves are the Whitney 2-switching, the 3-switching, the cross-cap switching, and the operation shown in Figure 12 of Mohar et al \cite{Mohar}. First, we cannot perform a Whitney 2-switching, because $\eta_1$ is a subdivision of a 3-connected graph. Second, we cannot perform the 3-switching move as there are no 3-patches. Third, we cannot perform the cross-cap switching move, as the embedding intersects the boundary of the projective plane more than once. Finally, we cannot perform the operation that is shown in Figure 12 of \cite{Mohar}, as there is no vertex with a large enough degree inside of a 0-homologous cycle. Thus, we can conclude that this embedding is unique.
\end{comment}
\begin{figure}[H]
\centering
\includegraphics[scale=0.28]{Lucy/eta_AB_affine.png}
\caption{Only regions $A$ and $B$ are 0-homologous. The vertices 3 and 7 are separated.}
\label{AB affine}
\end{figure}
The only embedding of the graph in which the region boundaries of $A$ and $E$ are 0-homologous cycles is shown in Figure \ref{AE affine}. The embedding is separating.
As in Figure \ref{AE affine}, consider the embedding where the region boundary of $E$ is an affine embedded cycle and the region boundary of $A$ is a 0-homologous cycle formed by two 1-homologous paths. To construct it, embed the cycle $\{5, 8, 1, 2\}$ as a standard vertical cycle, and then embed the cycle $\{7,1,8,5\}$ as a 1-homologous cycle as well. Now connect vertex 2 to vertex 4 and vertex 4 to vertex 8 with edges; there is only one way to do this. This creates a face bounded by the cycle $\{8,4,2,1\}$. Now we must embed vertex 3. If we embed it inside the face bounded by the cycle $\{8,4,2,1\}$ and connect vertex 3 to vertices 1 and 4, the region boundaries of $B$ and $F$ become 0-homologous. Thus, we must embed vertex 3 outside of that face, as well as outside of regions $A$ and $E$, and connect it to vertices 1 and 4 there. This is the only possible embedding.
\begin{comment}
This embedding is unique, because we cannot use any of the switching moves detailed in Mohar et al \cite{Mohar}. First, as before, we cannot perform a Whitney 2-switching. Second, we cannot perform the 3-switching move as there are no 3-patches. Third, we cannot perform the cross-cap switching move, as the embedding intersects the boundary of the projective plane more than once. Finally, we cannot perform the operation that is shown in Figure 12 \cite{Mohar}, as there is no vertex with a large enough degree inside of a 0-homologous cycle. Thus, we can conclude that this embedding is unique.
\end{comment}
\begin{figure}[H]
\centering
\includegraphics[scale=0.26]{Lucy/eta_AE_affine.png}
\caption{Only regions $A$ and $E$ are 0-homologous. Region $A$ is formed by two 1-homologous paths. The vertices 2 and 6 are separated.}
\label{AE affine}
\end{figure}
We can conclude that every embedding of $\eta_1$ is separating.
\end{proof}
\subsection{$\theta$ family}
The $\theta$ family of minor-minimal nonouter-projective-planar graphs has only one member, $\theta_{1}$, which is $K_{5,2}$. Take the vertex set of $\theta_{1} = K_{5,2}$ to be $V \cup U$, where $V = \{ v_{1}, v_{2} \}$ and $U = \{ u_{1}, u_{2}, u_{3}, u_{4}, u_{5} \}$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.45]{Evan/Theta1.png}
\caption{$\theta_{1}$ and two embeddings in the projective plane}
\end{center}
\end{figure}
\begin{proposition}
The graph $\theta_{1}$ is separating projective planar.
\end{proposition}
\begin{proof}
Embed $\theta_{1}$ in the projective plane. Consider the five paths connecting $v_{1}$ and $v_{2}$, and note that every cycle consists of exactly two of the five paths. If the five paths are all 0-homologous or all 1-homologous, then every cycle is 0-homologous and the embedding can be isotoped into an affine embedding. Since $K_{5,2}$ contains $K_{3,2} \dot\cup K_{1}$ as a subgraph, the affine embedding has a separating cycle.
Otherwise, by pigeonhole, at least three of the paths connecting $v_{1}$ and $v_{2}$ are 0-homologous or at least three are 1-homologous. If at least three paths are 1-homologous, isotope a sufficiently small disk containing $v_{1}$ and no other vertex over the boundary. Then every 0-homologous path becomes a 1-homologous path and every 1-homologous path becomes a 0-homologous path. In the resulting embedding, there are at least three 0-homologous paths.
The graph formed by three of the 0-homologous paths is equivalent to an embedding of $K_{3,2}$ in which one vertex $w_{0}$ lies in the disk bounded by the cycle $C$ formed by the two other paths. Since the classes are not all equal, there is at least one 1-homologous path connecting $v_{1}$ and $v_{2}$; let $w_{1}$ be a vertex on this path. Since this 1-homologous path intersects the three 0-homologous paths only at $v_{1}$ and $v_{2}$, the vertex $w_{1}$ does not lie in the affine disk bounded by $C$. Therefore $C$ separates $w_{0}$ and $w_{1}$.
\end{proof}
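Two counting facts in this proof are purely mechanical: every cycle of $K_{5,2}$ is 0-homologous exactly when all five paths carry the same $\mathbb{Z}/2$ class, and some class is always carried by at least three paths. A minimal sketch verifying both over all 32 class assignments:

```python
from itertools import product, combinations

# Each of the five v1-v2 paths gets a Z/2 homology class: 0 or 1.
# A cycle is the union of two paths; its class is their mod 2 sum.
for classes in product((0, 1), repeat=5):
    every_cycle_trivial = all((a + b) % 2 == 0
                              for a, b in combinations(classes, 2))
    # All cycles are 0-homologous exactly when all five paths
    # carry the same class (the affine case in the proof).
    assert every_cycle_trivial == (len(set(classes)) == 1)
    # Pigeonhole: at least three paths share a class.
    assert max(classes.count(0), classes.count(1)) >= 3
print("checked all 32 assignments")
```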
\begin{proposition}
The graph $\theta_{1}$ is minor-minimal separating projective planar.
\end{proposition}
\begin{proof}
The figure below illustrates embeddings of key minors of $\theta_1$, all of which are nonseparating. From left to right, these are $\theta_{1}-e$, $\theta_{1}$ with an edge contracted, $\theta_{1}-u_{i}$, and $\theta_{1}-v_{i}$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{Evan/theta_1_minor.png}
\caption{Key minors of $\theta_{1}$}
\end{center}
\end{figure}
\end{proof}
\subsection{$\kappa$ family}
The $\kappa$ family of minor-minimal nonouter-projective-planar graphs has only one member, $\kappa_{1}$.
\begin{proposition}
The graph $\kappa_{1}$ is strongly nonseparating.
\end{proposition}
\begin{proof}
Figure \ref{Kappa} illustrates a strongly nonseparating embedding of $\kappa_{1}$ in the projective plane, where antipodal points are identified.
\end{proof}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.25]{Sherry/Kappa.png}
\caption{A nonseparating embedding of $\kappa_1$ \cite{Archdeacon}.}
\label{Kappa}
\end{center}
\end{figure}
\subsection{Weakly separating graphs}
\begin{theo}\label{notvertexconnected}
Let $G$ be a nonouter-projective-planar graph. Then $G\dot\cup K_1$ is weakly separating.
\end{theo}
\begin{proof} Suppose $K_1 = \{w\}$.
Let $G$ be a nonouter-projective-planar graph and let $D$ be a projective planar drawing of $G\dot\cup w$. Then $w\in F$ for some face $F$ of $D$. Since $G$ is nonouter-projective-planar, there is a vertex $v$ of $D$ such that $v\not\in Bd(F)$ and $v\not\in F$. Thus, the component of $v$ in $[\mathbb{R}P^2\setminus D]\cup\{w, v\}$ is a subset of $\mathbb{R}P^2\setminus F$. Since $F$ is a face, $F$ is the component of $w$. Since the path component of any point is a subset of its component, the path components of $w$ and $v$ are disjoint in $\mathbb{R}P^2\setminus D$. Thus, every path from $w$ to $v$ intersects $D$, not just at its endpoints. Hence, $D$ is weakly separating. Since $D$ was arbitrary, $G\dot\cup w$ is weakly separating.
\end{proof}
Here we have an example of a weakly separating projective planar graph that is nonetheless nonseparating. For example, $\beta_4$ is a nonouter-projective-planar graph, so if we add a disjoint vertex to $\beta_4$, the resulting graph is weakly separating; however, the following embedding has no separating cycles:
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{Ángel/B4v.PNG}
\caption{$\beta_4 \dot{\cup} K_1$}
\end{center}
\end{figure}
\begin{theo}\label{stronglynonsep&nonouterpp}
If $G$ is a strongly nonseparating and minor-minimally nonouter-projective-planar graph, then $G\dot\cup K_1$ is minor-minimally weakly separating.
\end{theo}
\begin{proof} Let $G$ be a minor-minimally nonouter-projective-planar strongly nonseparating graph. By Theorem \ref{notvertexconnected}, $G\dot\cup K_1$ is weakly separating. Let $H$ be a proper minor of $G\dot\cup K_1$. Then $H=G'\dot\cup K'$, where $G'$ is a minor of $G$ and $K'$ is a minor of $K_1$. Since $H$ is a proper minor, either $G'$ or $K'$ is a proper minor.
\begin{enumerate}
\item If $G'$ is a proper minor of $G$, then since $G$ is minor-minimally nonouter-projective-planar, there is a drawing of $G'$ with every vertex on the boundary of the same face. Thus, there is a drawing $D$ of $H$ with $K'$ on the interior of one such face of $G'$. Hence, there is a face $F$ of $H$ with every vertex of $H$ on its boundary. Hence, $H$ is strongly nonseparating.
\item If $K'$ is a proper minor of $K_1$, then $K'$ is the empty graph. Thus, $H=G'$. Since $G'$ is a minor of $G$ and strongly nonseparating is a minor-closed property, $H$ is strongly nonseparating.
\end{enumerate}
In both cases, $H$ is strongly nonseparating. Since $H$ was arbitrary, every proper minor of $G\dot\cup K_1$ is strongly nonseparating. Therefore, $G\dot\cup K_1$ is minor-minimally weakly separating.
\end{proof}
As an example of Theorem \ref{stronglynonsep&nonouterpp}, take $G$ to be the graph $\kappa_1$ from Section 4.9; then $\kappa_1 \dot\cup K_1$ is minor-minimally weakly separating.
\subsection{Nonplanar and nonouter-projective-planar graphs}
In this section, we show that if a graph is nonplanar, projective planar, and nonouter-projective-planar, then the disjoint union of that graph with a vertex must be separating. We also give conditions for such a graph to be minor-minimal with respect to being both nonplanar and nonouter-projective-planar.
Recall that a drawing of graph $G$ in the projective plane is a \textit{closed cell embedding} if every face of the graph is bounded by a 0-homologous cycle.
\begin{proposition}\label{addavertex}
Let $G$ be a graph that is nonplanar projective planar and nonouter-projective-planar, then $G \dot\cup K_1$ is a separating projective planar graph.
\end{proposition}
\begin{proof}
By Corollary \ref{closed cell}, all the projective planar embeddings of $G$ are closed cell embeddings, so every face of $G$ is bounded by a 0-homologous cycle. If we add a vertex to one of the faces, then because the graph is nonouter-projective-planar, there will be a vertex inside and a vertex outside of the bounding 0-homologous cycle.
\end{proof}
In Section 5.5 of this paper, we identify some members of the set of minor-minimal nonplanar and nonouter-projective-planar graphs. From this, we can conclude that $(K_6-2e) \dot\cup K_1$, $\epsilon_4 \dot\cup K_1$, $\epsilon_6 \dot\cup K_1$, $\kappa_1 \dot\cup K_1$, $LU_1 \dot\cup K_1$, $LU_2 \dot\cup K_1$, and $LU_3 \dot\cup K_1$ are separating projective planar graphs. The graphs $LU_i$ will be defined later, in Section 5.5.
\begin{proposition}
The graphs $\kappa_1 \dot\cup K_1$, $\epsilon_4 \dot\cup K_1$, and $\epsilon_6 \dot\cup K_1$ are minor-minimal separating projective planar graphs.
\end{proposition}
\begin{proof}
By Proposition \ref{addavertex}, these graphs must be separating projective planar. Now, we will show they are minor-minimal in this regard. Without loss of generality, consider $\kappa_1 \dot\cup K_1$, and let $H$ be a proper minor of this graph.
There are two cases: either $H$ results from deleting the $K_1$, or $H$ results from taking a proper minor of $\kappa_1$. First, suppose we have deleted $K_1$, so $H$ is a minor of $\kappa_1$. Since $\kappa_1$ is strongly nonseparating and this property is minor-closed, $H$ is not separating. Now suppose $H = G' \dot\cup K_1$, where $G'$ is a proper minor of $\kappa_1$. The graph $\kappa_1$ is minor-minimal nonouter-projective-planar, so $G'$ is outer-projective-planar. Thus, if we embed $G'$ as its outer-projective-planar drawing and embed $K_1$ in the face whose boundary contains all the vertices, we obtain a nonseparating drawing. Thus, we can conclude that $\kappa_1 \dot\cup K_1$ is minor-minimal separating projective planar, and by the same argument, so are $\epsilon_4 \dot\cup K_1$ and $\epsilon_6 \dot\cup K_1$.
\end{proof}
\subsection{Multipartite graphs}
In this section, we explore which complete multipartite graphs are nonseparating.
\begin{proposition}
The nonseparating complete multipartite graphs are exactly $K_{2,2}$, $K_{2,3}$, $K_{2,4}$, $K_{3,3}$, $K_{3,4}$, and $K_{1,n}$ where $n$ is a positive integer.
\end{proposition}
\begin{proof}
Consider $K_{2,n}$. By Proposition 4.9, it is nonseparating if and only if $n \leq 4$. The graphs $K_{3,3}$ and $K_{3,4}$ are nonseparating, as seen in Figures \ref{k33} and \ref{k34}, and every star $K_{1,n}$ is outerplanar and hence nonseparating. The graph $K_{4,4}$ is not projective planar, so neither is any $K_{m,n}$ with $m, n \geq 4$; similarly, $K_{3,5}$ is not projective planar, so neither is any $K_{3,n}$ with $n \geq 5$. Finally, every $K_{m,n}$ with $m \geq 5$ and $n \geq 2$ contains $K_{5,2}$, which is separating.
\end{proof}
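The nonembeddability claims here reduce to arithmetic. Since the projective plane has Euler characteristic 1 and a simple bipartite graph has girth 4, any projective planar embedding of $K_{m,n}$ satisfies $E \le 2(V-1)$. A minimal sketch of this check (the bound is necessary but not sufficient for embeddability):

```python
def passes_projective_bound(m, n):
    """Necessary condition for K_{m,n} (bipartite, girth 4) to embed in RP^2.

    From V - E + F = 1 and 2E >= 4F, we get E <= 2(V - 1).
    """
    v, e = m + n, m * n
    return e <= 2 * (v - 1)

# K_{4,4} and K_{3,5} violate the bound, so neither is projective planar.
assert not passes_projective_bound(4, 4)   # 16 > 14
assert not passes_projective_bound(3, 5)   # 15 > 14
# The graphs discussed above at least satisfy the necessary bound.
assert passes_projective_bound(3, 4)
assert passes_projective_bound(5, 2)
assert passes_projective_bound(2, 4)
print("Euler bound checks pass")
```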
\begin{figure}[H]
\centering
\begin{minipage}[b]{0.4\linewidth}
\centering
\includegraphics[scale=0.35]{Evan/K3,3.png}
\caption{This graph is $K_{3,3}$}
\label{k33}
\end{minipage}
\quad
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[scale=0.15]{Lucy/k34.png}
\caption{This graph is $K_{3,4}$}
\label{k34}
\end{minipage}
\end{figure}
\subsection{Conclusion}
The following table summarizes our results on the 32 minor-minimal nonouter-projective-planar graphs:
\begin{table}[H]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
Family & SPP? & SNSPP? & If not SPP, what related graph is weakly SPP?\\
\hline
$\alpha$ & Yes & No & - \\
\hline
$\beta$ & No & No & $\beta \dot\cup K_1$ \\
\hline
$\epsilon$ & No & Yes & $\epsilon \dot\cup K_1$ \\
\hline
$\delta$ & Yes & No & - \\
\hline
$\gamma$ & No, except $\gamma_6$ & Yes, except $\gamma_6$ & $\gamma \dot\cup K_1$\\
\hline
$\zeta$ & No, except $\zeta_3$ & Yes, except $\zeta_3$ & $\zeta \dot\cup K_1$\\
\hline
$\eta_1$ & Yes & No & - \\
\hline
$\theta_1$ & Yes & No & -\\
\hline
$\kappa_1$ & No & Yes & $\kappa_1 \dot\cup K_1$\\
\hline
\end{tabular}
}
\caption{Status of minor-minimal nonouter-projective-planar graphs, with respect to being separating projective planar (SPP) or strongly nonseparating projective planar (SNSPP), as well as weakly SPP.}
\label{tab:my_label}
\end{table}
If a minor-minimal nonouter-projective-planar graph is nonseparating, then some graph containing it as a minor is minor-minimal separating projective planar. For example, $\kappa_1 \dot\cup K_1$, $\epsilon_4 \dot\cup K_1$, and $\epsilon_6 \dot\cup K_1$ are minor-minimal separating projective planar graphs. By Proposition \ref{addavertex} and Section 5.5, we also know $(K_6-2e) \dot\cup K_1$, $LU_1 \dot\cup K_1$, $LU_2 \dot\cup K_1$, and $LU_3 \dot\cup K_1$ are separating, though we do not know whether these are minor-minimal.
\begin{theo}
The following are minor-minimal separating projective planar graphs: graphs in the $\alpha$ family, graphs in the $\delta$ family, $\gamma_6$, $\zeta_3$, $\eta_1$, and $\theta_1$.
\end{theo}
\begin{theo}
The following are minor-minimal weakly separating graphs: graphs in the $\beta$ family, $\epsilon_1 \dot\cup K_1$, $\epsilon_2 \dot\cup K_1$, $\epsilon_3 \dot\cup K_1$, $\epsilon_4 \dot\cup K_1$, $\epsilon_5 \dot\cup K_1$, $\epsilon_6 \dot\cup K_1$, $\gamma_1 \dot\cup K_1$, $\gamma_2 \dot\cup K_1$, $\gamma_3 \dot\cup K_1$, $\gamma_4 \dot\cup K_1$, $\gamma_5 \dot\cup K_1$, $\zeta_1 \dot\cup K_1$, $\zeta_2 \dot\cup K_1$, $\zeta_4 \dot\cup K_1$, $\zeta_5 \dot\cup K_1$, $\zeta_6 \dot\cup K_1$, and $\kappa_1 \dot\cup K_1$.
\end{theo}
Dehkordi and Farr \cite{Dehkordi} characterized the set of nonseparating planar graphs as the graphs that are outerplanar, subgraphs of wheel graphs, or subgraphs of elongated triangular prism graphs. We have extended this research to the projective plane. In the plane, every nonseparating graph is also strongly nonseparating, so Dehkordi and Farr needed only one theorem on the topic. For the projective plane, we have two theorems, though at this point we are only able to characterize some such graphs.
\begin{theo}
The set of nonseparating projective planar graphs includes the following:
\begin{itemize}
\item outer-projective-planar graphs
\item subgraphs of wheel graphs
\item subgraphs of elongated prism graphs
\item the $\beta$ family, the $\epsilon$ family, the $\gamma$ family except $\gamma_6$, the $\zeta$ family except $\zeta_3$, and $\kappa_1$
\end{itemize}
\end{theo}
\begin{theo}
The set of strongly nonseparating projective planar graphs includes the following:
\begin{itemize}
\item outer-projective-planar graphs
\item subgraphs of wheel graphs
\item subgraphs of elongated prism graphs
\item the $\epsilon$ family, the $\gamma$ family except $\gamma_6$, the $\zeta$ family except $\zeta_3$, and $\kappa_1$
\end{itemize}
\end{theo}
\section{3-linked graphs}
A \textit{split 3-link} is a 3-link embedded in the plane such that two pieces of the 3-link are contained within an embedded $S^1$ and the third piece lies on the other side of the $S^1$. If there exists no such $S^1$, then the link is a \textit{nonsplit 3-link}. A graph $G$ is \textit{intrinsically type I 3-linked (II3L)} if every embedding of $G$ in the plane contains a nonsplit type I 3-link. Burkhart et al.\ \cite{Burkhart} found three minor-minimal graphs in this set.
\begin{proposition}[Burkhart et al \cite{Burkhart}]\label{burkhart}
The graphs $K_4 \dot\cup K_4$, $K_4 \dot\cup K_{3,2}$, and $K_{3,2} \dot\cup K_{3,2}$ are II3L.
\end{proposition}
They conjectured that this is the complete minor-minimal set. We have used their research as a foundation to explore the set of minor-minimal 3-linked graphs in the projective plane.
A \textit{projective planar 3-link} is a disjoint collection of $3-m$ $S^1$'s and $m$ $S^0$'s embedded in the projective plane, where $m\in\{1,2\}$. If $m=1$, this is a type I 3-link; if $m=2$, this is a type II 3-link. A \textit{split projective planar 3-link} is a 3-link embedded in the projective plane such that two pieces of the 3-link are contained within an embedded $S^1$ and the third piece lies on the other side of the $S^1$. A \textit{nonsplit projective planar 3-link} is one for which no such $S^1$ exists. In the figures below, there are only two cases of type I nonsplit 3-links, which are labeled type Ia and type Ib.
\begin{figure}[H]
\centering
\includegraphics[scale=0.23]{3Links/Type_1a_3-links.png}
\caption{A type I 3-link with two $S^1$ and an $S^0$. The embedding on the far left is nonsplit, and the rest are split. These embeddings are type Ia, as neither $S^1$ lies in a disk bounded by the other.}
\label{fig:typeIa}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.31]{3Links/Type_1b_3-links.png}
\caption{A type I 3-link with two $S^1$ and an $S^0$. The embedding on the far left is nonsplit, and the rest are split. These embeddings are type Ib, as one $S^1$ lies in a disk bounded by the other.}
\label{fig:typeIb}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.27]{3Links/type_2_3-links.png}
\caption{A type II 3-link with two $S^0$ and an $S^1$. The embedding on the left is nonsplit, and the embedding in the middle and on the right are split.}
\label{fig:1S1-2S0}
\end{figure}
A projective planar graph, $G$, is \textit{intrinsically projective planar type I 3-linked (IPPI3L)} if every embedding of $G$ in the projective plane has a type I projective planar 3-link. A graph, $G$, is \textit{intrinsically projective planar type II 3-linked (IPPII3L)} if every embedding of $G$ in the projective plane has a type II projective planar 3-link.
Suppose $G$ is an embedded graph with a 0-homologous cycle. That cycle is a \textit{weak separating cycle} if every vertex of the graph that is not in the cycle is in the interior of the cycle.
Recall that a drawing of a graph $G$ in the projective plane is a \textit{closed cell embedding} if every face of the graph is bounded by a 0-homologous cycle. A graph $G$ is \textit{closed nonseparating} if every nonseparating embedding of $G$ in the projective plane is a closed cell embedding. An immediate result of these definitions is the following.
\begin{proposition}
If $G$ is closed nonseparating and nonouter-projective-planar, then $G\dot\cup K_1$ is separating.
\end{proposition}
\begin{proof}
Embed $G \dot\cup K_1$ in the projective plane and call this drawing $D$. Remove the $K_1$, creating a drawing $D_1$ of $G$. There are two cases. If $D_1$ is separating, then $D$ is also separating. If $D_1$ is not separating, then $D_1$ is a closed cell embedding. Since $G$ is nonouter-projective-planar, no matter which face we embed $K_1$ into, there exists a cycle $C$ bounding that face and a vertex $v$ outside of $C$. Thus, $D$ must also be separating in this case.
\end{proof}
\subsection{IPPI3L graphs with three components}
\begin{proposition}\label{threecomponent}
There are four minor-minimal IPPI3L graphs made of three components. These graphs are $K_4\dot\cup K_4 \dot\cup K_4$, $K_4\dot\cup K_{4} \dot\cup K_{3,2}$, $K_4 \dot\cup K_{3,2} \dot\cup K_{3,2}$, and $K_{3,2} \dot\cup K_{3,2} \dot\cup K_{3,2}$.
\end{proposition}
\begin{proof}
First, consider the graph $K_4\dot\cup K_4 \dot\cup K_4$ and embed its components in the projective plane. If two $K_4$ components were embedded with 1-homologous cycles, those cycles would intersect. So there are two cases for the projective planar embedding: either all three $K_4$ components are embedded with all cycles 0-homologous, or exactly one component is embedded with a 1-homologous cycle. When all three $K_4$ components are embedded with all 0-homologous cycles, by Theorem \ref{Theorem1}, this is equivalent to all three being affine embedded. Similarly, if one of the $K_4$ components is embedded with a 1-homologous cycle, the other two components can be deformed into affine embedded graphs. By Proposition \ref{burkhart}, $K_4 \dot\cup K_4$ is II3L. Thus, the embedding has a nonsplit type I 3-link, which means $K_4\dot\cup K_4 \dot\cup K_4$ is intrinsically projective planar type I 3-linked.
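The step ruling out two components both carrying 1-homologous cycles can be justified with the $\mathbb{Z}/2$ intersection pairing on $H_1(\mathbb{R}P^2;\mathbb{Z}/2)\cong\mathbb{Z}/2$ (a standard fact, sketched here): if $C_1$ and $C_2$ are transverse simple closed curves that are both 1-homologous, then
\[
|C_1 \cap C_2| \equiv [C_1]\cdot[C_2] = 1 \cdot 1 = 1 \pmod{2},
\]
so the curves intersect in an odd, in particular nonzero, number of points and therefore cannot lie in disjoint components.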
Now, we will verify that $K_4\dot\cup K_4 \dot\cup K_4$ is minor-minimal with respect to being an intrinsically projective planar type I 3-linked graph. Without loss of generality, take a minor that consists of two $K_4$ components and a component that is a proper minor of $K_4$; the latter is outerplanar. Embed one $K_4$ with a 1-homologous cycle; up to equivalence, there is only one such embedding. Now affine embed the other two components. Since the third component is outerplanar, it can be embedded so that no cycle contains a vertex in its interior. The resulting embedding does not contain a nonsplit type I 3-link. Therefore, $K_4\dot\cup K_4 \dot\cup K_4$ is minor-minimal with respect to being an intrinsically projective planar type I 3-linked graph.
Next, consider the graphs $K_4\dot\cup K_4 \dot\cup K_{3,2}$, $K_4 \dot\cup K_{3,2} \dot\cup K_{3,2}$, and $K_{3,2} \dot\cup K_{3,2} \dot\cup K_{3,2}$. These cases follow the same logic as the first case, because $K_{3,2}$ and $K_4$ are both minor-minimal nonouter-projective-planar graphs.
\end{proof}
\begin{proposition}
There is no minor-minimal IPPI3L graph with four or more components.
\end{proposition}
\begin{proof}
Suppose $G$ is a minor-minimal IPPI3L graph with exactly four components. Since Proposition \ref{threecomponent} describes minor-minimal graphs on three components, we know that at most two of the four components of $G$ have $K_4$ or $K_{3,2}$ as a minor. By Theorem \ref{halin}, we know that at least two of the components of $G$ will be outerplanar. Suppose the four components of $G$ are labelled $G_1$, $G_2$, $G_3$, and $G_4$. Without loss of generality, suppose $G_3$ and $G_4$ are outerplanar.
Let $D$ be an arbitrary drawing of $G_1 \dot\cup G_2$. Embed $G_3$ and $G_4$ into any face of $D$ as affine outer-projective-planar drawings, so that neither $G_3$ lies in a face of $G_4$ or vice versa. Call this drawing $D_1$. Since $G$ is IPPI3L, there exists a projective planar type I 3-link in $D_1$. If every vertex or edge of the 3-link is not in $G_3$ and $G_4$, then $G_1 \dot\cup G_2$ is IPPI3L. Therefore, we can delete $G_3 \dot\cup G_4$, which means $G$ is not minor-minimal.
If there is a vertex or edge of the 3-link in $G_3 \dot\cup G_4$, it can only be a vertex of the $S^0$: these components are outerplanar, and by the way they were embedded, none of their cycles bounds a vertex. These two components contain only one vertex of the $S^0$, and $G_1$ or $G_2$ contains the second vertex.
First, suppose we have a type Ia nonsplit link and $G_3$ or $G_4$ contains a vertex of the $S^0$ within an $S^1$. The graph can be re-embedded so that the components $G_3$ and $G_4$ lie within the cycle that contains the other $S^0$ vertex.
Now suppose we have a type Ib link. Without loss of generality, suppose $G_3$ or $G_4$ contains the external vertex. Then the graph can be re-embedded so that the components $G_3$ and $G_4$ lie within the disks bounded by the cycles of both $S^1$ pieces.
In either case, the graph does not have a nonsplit type I 3-link, a contradiction; hence $G_3$ and $G_4$ must be disjoint from the 3-link. Thus, $G_1 \dot\cup G_2$ is IPPI3L. Therefore, we can delete $G_3 \dot\cup G_4$, which means $G$ is not minor-minimal. Therefore, there is no minor-minimal IPPI3L graph with four components.
The argument is similar for every graph with $n \geq 4$ components. We can conclude that there is no minor-minimal IPPI3L graph with four or more components.
\end{proof}
\subsection{Planar separating graphs}
The following generalizes part of Proposition \ref{threecomponent}.
\begin{proposition}\label{separatingIPPI3L}
If $G$ and $H$ are separating projective planar graphs, and $G$ is a planar graph, then $G\dot{\cup}H$ is IPPI3L.
\end{proposition}
\begin{proof}
Consider an arbitrary embedding of $G\dot{\cup}H$, creating drawing $D_1$. Since $G$ and $H$ are separating projective planar graphs, there exist two disjoint 0-homologous cycles $C_1$ and $C_2$ containing vertices $v_1$ and $v_2$, respectively, in their interiors. There are two cases: one cycle is in the interior of the other, or neither cycle contains the other. Consider the first case; without loss of generality, $C_1$ is in the interior of $C_2$. Because $C_2$ is separating, there is a vertex in the exterior of $C_2$, which creates a nonsplit type Ib 3-link. In the second case, neither cycle is in the interior of the other, so we have two disjoint cycles, each with a vertex inside of it; this is a nonsplit type Ia 3-link. Since the embedding of $G\dot{\cup}H$ was arbitrary, we conclude $G\dot{\cup}H$ is IPPI3L.
\end{proof}
\begin{comment}
\begin{proposition}
Let $G$ be a separating projective planar graph. All drawings of $G$ that have only one separating cycle must contain at least one 1-homologous cycle.
\end{proposition}
\begin{proof}
Suppose graph $G$ is separating on the projective plane. Embed $G$ in the projective plane such that it has only one separating cycle and all of its cycles are 0-homologous. $G$ can be isotoped to an affine embedded. Call this drawing $D_1$. Choose the separating cycle to be embedded as a 1-homologous cycle - we know this is possible because $D_1$ is a planar drawing, as well as the fact that $D_1$ has only one separating cycle, which means that the separating cycle cannot be deep within the graph. This embedding does not contain a separating cycle - this is a contradiction. We can conclude that all drawings of $G$ that have only one separating cycle must contain at least one 1-homologous cycle.
\end{proof}
\begin{proposition}\label{novertexininterior}
If $G$ has a nonseparating planar drawing, there is a way to embed $G$ in the projective plane without a 0-homologous cycle that contains a vertex in its interior.
\end{proposition}
\begin{proof}
Suppose graph $G$ is a nonseparating graph that has been embedded into the projective plane with all 0-homologous cycles. Choose a 0-homologous cycle. The first case is there are only vertices along this cycle, without any vertices in its interior or exterior. In this case, we do not have to change the graph in any way, as we already have projective planar drawing without a 0-homologous cycle that contains a vertex in its interior.
Now consider the second case, where there are vertices that lie in the interior of the chosen 0-homologous cycle. Since the graph is nonseparating, this means there are no vertices on the exterior of the 0-homologous cycle. Now, we embed this graph with that 0-homologous cycle as a 1-homologous cycle, as seen in Figure \ref{fig:non-sep}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.33]{3Links/affine_embedded_nonseparating graph2.png}
\caption{A nonseparating graph is shown on the left. The middle and right images show how to embed the graph with a 1-homologous cycle.}
\label{fig:non-sep}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{3Links/affine_embedded_nonseparating_graph_with_cycle2.png}
\caption{These graphs have a 1-homologous cycle and a 0-homologous cycle that contains a vertex in its interior. These graphs are separating.}
\label{fig:non-sep with cycle}
\end{figure}
Now, suppose there is still a 0-homologous cycle that contains a vertex in its interior in this graph after re-embedding it with the 1-homologous cycle. Examples of this can be seen in Figure \ref{fig:non-sep with cycle}. This would make the graph separating, which is a contradiction. Therefore, we can conclude that if $G$ is a nonseparating graph that has a planar embedding, there is a way to embed $G$ in the projective plane without a 0-homologous cycle that contains a vertex in its interior.
\end{proof}
\begin{proposition}\label{notclosedcell}
If graph $G$ has a nonseparating planar drawing, there exists a nonseparating drawing in the projective plane that has a face that is not the interior of a 0-homologous cycle, and also does not have a 0-homologous cycle that contains a vertex in its interior.
\end{proposition}
\begin{proof}
Suppose graph $G$ is a nonseparating planar graph. By Proposition \ref{novertexininterior}, this graph has an embedding that does not have a 0-homologous cycle that contains a vertex in its interior. This embedding would resemble the right drawing of Figure \ref{fig:non-sep}. Observe there is a 1-homologous cycle drawn in black that can be isotoped to a vertical line, by Theorem \ref{Theorem2}. One side of that black 1-homologous cycle will be empty since there is only one intersection. Without loss of generality, suppose the left side of the cycle is an empty region. The left side is contained in a face, which includes all the points on the black 1-homologous cycle. The face also includes points on the right side near the boundary. This means that we must include points from the left and the right side to bound the face, which means the cycle must go through the intersection of the projective plane. However, since there is only one transverse intersection with the projective plane boundary, the cycle only intersects the projective plane once, which means the cycle is a 1-homologous cycle. Thus, the left region is not the interior of a 0-homologous cycle.
\end{proof}
\begin{con}\label{noweaksepcycle}
Suppose graph $G$ is nonseparating, and all its nonseparating drawings are nonplanar. Also suppose that the nonseparating drawing has a weak separating cycle and is not a closed cell embedding. There exists a drawing of the graph that does not have a weak separating cycle and also is not a closed cell embedding.
\end{con}
First, consider the set of graphs that are nonseparating, but all of whose nonseparating drawings are nonplanar. Also suppose that the nonseparating drawing has a weak separating cycle. We will examine these graphs by categorizing them by the degree, $n$, of the vertex within the weak separating cycle. First, if $n = 0$, then that isolated vertex can be re-embedded outside of the weak separating cycle. Since it is not a closed cell embedding, this new embedding will not have a weak separating cycle.
Next, suppose $n=1$. This graph can also easily be re-embedded with the vertices on the exterior of the cycle instead of the interior, flipping them outside.
Next, suppose $n=2$. This case is more difficult, as there is more variation. There are some limits on the graph. There can only be two 1-homologous cycles that go through the boundary of the projective plane, and they cannot share vertices. They are the first two graphs of Figure \ref{fig:LB2}.
Next, suppose $n=3$. In this case, there is one more restriction: there cannot be any vertices added as a subdivision between where the 1-homologous cycles attach to the center wheel, as this would create a separating cycle. There is also a limit on how many vertices of degree 3 are allowed in the center: the maximum is two. These two limitations also hold when $n>3$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{3Links/LB1.png}
\caption{These are graphs where $n=2$ or 3}
\label{fig:LB1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{3Links/LB2.png}
\caption{These are graphs where $n=3$ or 4}
\label{fig:LB2}
\end{figure}
\end{comment}
\begin{con}\label{IPPI3l-planars}
Suppose $G$ and $H$ are minor-minimal separating projective planar graphs that have planar embeddings. Also, suppose every minor of $G$ and every minor of $H$ is not closed nonseparating. Then $G\dot{\cup}H$ is minor-minimal IPPI3L if both $G$ and $H$ are not II3L.
\end{con}
\begin{comment}
\begin{proof}
Suppose $G$ and $H$ are minor-minimal separating projective planar graphs that have a planar embedding. Also, suppose that the graphs $G$ and $H$ each have an affine embedding that does not contain two disjoint separating cycles. We have proven in Proposition \ref{separatingIPPI3L} that $G\dot{\cup} H$ is IPPI3L. Now, we will prove that $G\dot{\cup} H$ is minor-minimal under this property. Without loss of generality, let $L$ be a proper minor of $H$. Since $H$ is a minor-minimal separating projective planar graph, then $L$ is nonseparating.
Suppose $L$ has a nonseparating planar drawing. Then by Proposition \ref{novertexininterior}, there is a drawing $D$ such that $L$ is embedded without a 0-homologous cycle that contains a vertex in its interior. Next, we will create a new drawing called $D_1$. We affine embed $G$ into any face of the drawing $D$ that is not the interior of a 0-homologous cycle. By Proposition \ref{notclosedcell}, we know that such a face must exist for this drawing $D$. The affine drawing of $G$ does not contain two disjoint separating cycles. It can at most contain one $S^1$ and one piece of $S^0$. We have embedded $L$ such that it does not contain a cycle bounding a vertex. Thus, $D_1$ does not contain a nonsplit type I 3-link, as it has at most one cycle bounding a vertex. This means $G\dot{\cup}L$ is not IPPI3L.
Now, suppose $L$ does not have a nonseparating planar drawing. Graph $L$ has a nonseparating projective planar embedding that is not a closed cell embedding. Therefore, there exists a face $F$ which is not bounded by a 0-homologous cycle. Embed $G$ in $F$. Since $G$ is not II3L, this produces only one cycle bounding a vertex.
We must prove that if $L$ has a nonseparating drawing but not an affine one, then it has an embedding that does not contain a weak separating cycle and is not closed cell. \textbf{This has not been completed. There are many cases, as seen in Figures \ref{fig:LB1} and \ref{fig:LB2}. We have examined quite a few, but there are many more. It might be hard to complete them all. To finish this proof, we must prove Conjecture \ref{noweaksepcycle}.}
\end{proof}
\end{comment}
An example of this is seen in Figure \ref{fig:thetagamma}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.27]{3Links/minorminimalIPPT13L-2_components.png}
\caption{Graph $\gamma_6 \dot\cup \theta_1$ is IPPI3L, as both components are separating projective planar graphs.}
\label{fig:thetagamma}
\end{figure}
\subsection{Closed nonseparating graphs}
Throughout this section, we assume $G$ and $H$ are projective planar graphs.
\begin{proposition}\label{closed-nonseparating}
Suppose $G$ and $H$ are nonouter-projective-planar. If $G$ and $H$ are closed nonseparating, then $G\dot\cup H$ is IPPI3L.
\end{proposition}
\begin{proof}
If we embed both $G$ and $H$ with affine embeddings, this will create a nonsplit type I 3-link, as both are nonouterplanar. So, without loss of generality, consider a drawing $D_1$ of $G$ with at least one 1-homologous cycle. If $D_1$ is a separating drawing, it has a separating cycle $C$. If we embed $H$ outside of $C$, this creates two disjoint separating cycles, which is a type 1a 3-link. If we embed $H$ inside the cycle $C$, this creates a type 1b 3-link. Now, suppose we have a drawing $D_2$ of $G$ that is nonseparating. Since $G$ is closed nonseparating, $D_2$ is a closed cell embedding. No matter which face we embed $H$ into, it will produce a type 1b 3-link: $G$ is nonouterplanar, so there will be a vertex outside of the face that $H$ is embedded into.
\end{proof}
\begin{proposition}\label{minorminimalclosed}
Suppose $G$ and $H$ are minor-minimal nonouter-projective-planar, where both of them are planar. Also, $G$ and $H$ are not II3L. If $G$ and $H$ are closed nonseparating, then $G\dot\cup H$ is minor-minimally IPPI3L.
\end{proposition}
\begin{proof}
Suppose $G$ and $H$ are minor-minimal nonouter-projective-planar and closed nonseparating. Also, suppose $G$ and $H$ are not II3L. We have proven in Proposition \ref{closed-nonseparating} that $G\dot{\cup} H$ is IPPI3L. Now, we will prove that $G\dot{\cup} H$ is minor-minimal under this property. Without loss of generality, suppose $L$ is a proper minor of $H$. This means $L$ is outer-projective-planar. Embed $L$ with an outer-projective-planar drawing, called $D_1$. All vertices of $L$ are contained in the boundary of one face, $F$. If we embed $G$ into $F$, there is no 3-link, since $G$ is not II3L.
\end{proof}
\begin{con}\label{ep,zeta}
The graph $\epsilon_1$ and all members of the $\zeta$ family except $\zeta_3$ are closed nonseparating.
\end{con}
By Proposition \ref{minorminimalclosed} and Conjecture \ref{ep,zeta}, there are many graphs that we conjecture to be minor-minimal IPPI3L. An example of this would be $\epsilon_1 \dot\cup \epsilon_1$, as shown in Figure \ref{fig:epsilon}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.28]{3Links/epsilon.png}
\caption{This graph is $\epsilon_1 \dot\cup \epsilon_1$.}
\label{fig:epsilon}
\end{figure}
\subsection{Graphs glued at a vertex}
Suppose $D$ is a planar drawing of a graph $G$. A vertex of $G$ is a \textit{separating planar vertex in $D$} if it is contained in all separating cycles of $D$ and we will say that a vertex of $G$ is a \textit{separating planar vertex} if it is contained in all separating cycles for all planar drawings of $G$. The graphs $\eta_1$ and $\zeta_3$ have no separating planar vertices. In the graph $\theta_1$, the two separating planar vertices are the degree 5 vertices. For $\epsilon_1$, $\gamma_6$, $\delta_1$, and $\delta_2$ the separating planar vertices are the degree 4 vertices.
\begin{proposition}\label{gluedatzeta}
Suppose $G$ is a separating projective planar graph that is planar and is not II3L. Then $G$ glued at any vertex called $v$ to $\zeta_3$ is IPPI3L.
\end{proposition}
\begin{proof}
The graph $\zeta_3$ has only two unique projective planar embeddings, as seen in Figure \ref{Zeta3Sep}. No matter how $\zeta_3$ is embedded, there is a separating cycle that does not go through $v$. Consider an arbitrary embedding of $\zeta_3$. Since every vertex is equivalent, without loss of generality, we pick vertex $v$ to glue to $G$. Now we must embed $G$ into the projective plane as well. Since $G$ is separating, it will contain a separating cycle, no matter how it is embedded. Even if the separating cycle of $G$ goes through vertex $v$, there exists a disjoint cycle in $\zeta_3$ that does not go through vertex $v$. Thus, it has a type 1a 3-link. If $\zeta_3$ is affine and embedded within a separating cycle of $G$, this creates a type 1b 3-link. We can conclude that every embedding of $G$ and $\zeta_3$ glued at a vertex will be IPPI3L.
\end{proof}
\begin{proposition} \label{minorminimalzetaglued}
Suppose $G$ is a minor-minimal nonouter-projective-planar graph. Also suppose $G$ is connected, separating projective planar, and planar. The graph $G$ glued at any vertex to $\zeta_3$ is minor-minimal IPPI3L.
\end{proposition}
\begin{proof}
In Proposition \ref{gluedatzeta}, we proved $G$ glued to $\zeta_3$ at any vertex is IPPI3L. Now, we will prove that it is minor-minimal in regards to this property.
Suppose $\zeta_3$ and $G$ are glued by a vertex $v$. Consider a minor of this resulting graph, called $H$.
First, consider the case where $H$ does not contain vertex $v$ because $v$ has been deleted. Graph $H$ has at least two components. Consider the part of $H$ that is a proper minor of either $G$ or $\zeta_3$ and is outer-projective-planar, and call it $H_1$. Let $D_1$ be an outer-projective-planar drawing of $H_1$. Then there exists a face $F_1$ of $D_1$ such that all the vertices of $H_1$ are in its boundary. Embed the other pieces of $H$ in $F_1$. This drawing is not projective-planar type I 3-linked.
Second, consider the case where $H$ does contain vertex $v$. Consider the part of $H$ that is a proper minor of either $G$ or $\zeta_3$ that is outer-projective-planar, and call it $H_2$. Let $D_2$ be an outer-projective-planar drawing of $H_2$. Then there exists a face $F_2$ of $D_2$ such that all the vertices of $H_2$ are in its boundary. Embed the other pieces of $H$ in $F_2$. This drawing is not projective-planar type I 3-linked.
We can conclude that $G$ glued to $\zeta_3$ at any vertex is minor-minimal IPPI3L.
\end{proof}
Graph $G$ in this proposition could be any member of the $\delta$ family, $\gamma_6$, $\zeta_3$, $\eta_1$, or $\theta_1$.
An example of this type of graph would be $\zeta_3$ glued with $\gamma_6$ at any vertex, as shown in Figure \ref{fig:zeta3gamma6}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.22]{3Links/glued_edge.png}
\caption{The graph $\zeta_3$ glued to $\gamma_6$ at vertex 8. The graph is type 1a linked: the two $S^1$'s are highlighted in blue, and the $S^0$ is highlighted in red.}
\label{fig:zeta3gamma6}
\end{figure}
\begin{comment}
We have a conjecture that $\epsilon_1$ is closed nonseparating. If that is true, then the following results stand.
\begin{con}\label{epsilongluedatvertex}
Suppose the graph $G$ is obtained by gluing a vertex of $\epsilon_1$ that is not a separating planar vertex to a vertex of an $n$-cycle, called $C_n$, at a vertex $v$. Then suppose there is an embedding in which both $C_n$ and $\epsilon_1$ contain 1-homologous cycles. In this embedding, $\epsilon_1$ will always contain a separating cycle that does not go through $v$.
\end{con}
\begin{proof}
Consider $\epsilon_1$ with vertices labeled as is shown in Figure
\ref{fig:epsilonlabeled}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{3Links/E1-label.png}
\caption{The graph $\epsilon_1$ with labeled vertices.}
\label{fig:epsilonlabeled}
\end{figure}
First, we are only considering the vertices that are not separating planar vertices. In the labeled graph, these would be vertices 2, 3, 5, and 7. For one of those vertices called $v$, there exists a separating cycle $C_v$ that does not contain $v$. If $v$ is vertex 2, 5, or 7, then $C_v$ is $\{1,4,6\}$. If $v$ is vertex 3, then $C_v$ is $\{1,2,7,6,4\}$.
Now we will prove that for all drawings of $G$ with two 1-homologous cycles, where one of the cycles is $C_n$ and the other is a cycle contained in $\epsilon_1$ which passes through $v$, then $C_v$ will be a separating cycle that does not pass through $v$. We will use $C_l$ to refer to the 1-homologous cycle that is contained in $\epsilon_1$ which passes through $v$. Suppose $G$ is embedded such that given a $v$, $C_v$ has at least one edge as part of a 1-homologous cycle.
Vertices 2 and 7 are equivalent, so without loss of generality we consider the case where $v$ is vertex 2. If $C_l$ does not intersect $C_2$, then we are done, because we have an affine embedding of $C_2$. Then, we must consider the cases where $C_l$ intersects $C_2$. There are three cases of equivalence where $C_l$ intersects $C_2$. These three cases are shown in Figure \ref{fig:vertex2}, which is found in the appendix. In the figure, the $C_l$'s are highlighted in blue, and in green is the $C_n$ as a 1-homologous cycle. In each of these graphs, the separating cycle that does not go through vertex 2 is $C_2$.
Now, consider the case where $v$ is vertex 5. Similar to the previous case, if $C_l$ does not intersect $C_5$ then we are done, because we have an affine embedding of $C_5$. Now we must consider the cases where $C_l$ intersects $C_5$. Similar to the previous case, there are also three cases of equivalence where $C_l$ intersects $C_5$. The graphs are shown in Figure \ref{fig:vertex5}, which is found in the appendix. In the figure, the $C_l$'s are highlighted in blue, and in green is the $C_n$ as a 1-homologous cycle. In each of these graphs, the separating cycle that does not go through vertex 5 is $C_5$.
Now, consider the case where $v$ is vertex 3. Similar to the previous case, if $C_l$ does not intersect $C_3$ then we are done, because we have an affine embedding of $C_3$. Now we must consider the cases where $C_l$ intersects $C_3$. There are four cases of equivalence where $C_l$ intersects $C_3$. The graphs are shown in Figure \ref{fig:vertex3}, which is found in the appendix. In the figure, the $C_l$'s are highlighted in blue, and in green is the $C_n$ as a 1-homologous cycle. In each of these graphs, the separating cycle that does not go through vertex 3 is $C_3$.
\end{proof}
\begin{con}
Let the graph $G$ be separating projective planar and planar. Suppose the vertices $v_1$ of $\epsilon_1$ and $v_2$ of $G$ are glued together at vertex $v$, where $v_1$ and $v_2$ are not separating planar vertices. The resulting graph will be IPPI3L.
\end{con}
\begin{proof}
Take an arbitrary fixed drawing $D$ of the graph of $\epsilon_1$ glued to $G$. Remove the edges and vertices that only belong to $G$ to isolate $\epsilon_1$. This drawing is $D_1$. There are two cases - if $D_1$ is a separating drawing or not.
First suppose it is not separating. This means $D_1$ is a closed cell embedding of $\epsilon_1$. Now redraw the edges and vertices that were previously removed, creating drawing $D$. Since $D_1$ was a closed cell embedding, then $G$ can only be affine embedded within a face $F$ of $D_1$. Since $v_2$ is not a separating planar vertex, then there exists a separating cycle $C_1$ in $G$ which does not contain $v$. Since $D_1$ is a closed cell embedding, there exists a 0-homologous cycle that bounds $F$, called $C_2$. The cycles $C_1$ and $C_2$ must be disjoint. Finally, since $\epsilon_1$ is nonouter-projective-planar, there exists a vertex outside of $F$. Therefore, $D$ contains a nonsplit type Ib 3-link.
Secondly, suppose that $D_1$ is a separating drawing.
The first case is if both $G$ and $\epsilon_1$ have a 1-homologous cycle which intersects in $v$. Since $v_1$ is not a separating planar vertex, then $D_1$ contains a separating cycle that does not contain $v$, as shown in Conjecture \ref{epsilongluedatvertex}. Label the separating cycle $C_1$. Now, redraw the edges and vertices that were previously removed, creating drawing $D$. Since $G$ is separating, there exists a separating cycle $C_2$ - even if that separating cycle goes through $v$, $C_1$ and $C_2$ will still be disjoint. This forms a nonsplit type Ia 3-link.
The second case is if only $G$ is affine embedded. Since $D_1$ is separating, then there exists a separating cycle $C_1$. Also, because $v_2$ is not a separating planar vertex, there exists a separating cycle $C_2$ that does not go through $v$. If $C_2$ is contained within $C_1$, this creates a type Ib 3-link. If $C_2$ is not contained within $C_1$, this creates a type Ia 3-link. Either way, the graph contains a nonsplit 3-link.
The third case is if only $\epsilon_1$ is affine embedded. Graph $G$ is a separating graph, so there exists a separating cycle $C_1$. Also, there exists a cycle $C_2$ in $\epsilon_1$ that does not contain $v$. Similar to the last case, the graph contains a nonsplit 3-link.
The fourth and final case is if $G$ and $\epsilon_1$ are both affine embedded. This drawing also contains a nonsplit 3-link.
We can conclude that $\epsilon_1$ glued with $G$ at a vertex will be IPPI3L.
\end{proof}
\end{comment}
\begin{con}
Let the graph $G$ be two copies of $\epsilon_1$ glued together at a vertex, where the two glued vertices, $v_1$ and $v_2$, are not separating planar vertices. Then $G$ is minor-minimal IPPI3L.
\end{con}
\begin{con}
Let the graphs $G$ and $H$ be closed nonseparating, nonouter-projective-planar, and planar. Suppose the vertex $v_1$ of $G$ and $v_2$ of $H$ are glued together at vertex $v$, where $v_1$ and $v_2$ are not separating planar vertices. The resulting graph will be IPPI3L.
\end{con}
\begin{con}
Let the graphs $G$ and $H$ be closed nonseparating, minor-minimal nonouter-projective-planar, and planar. Suppose the vertex $v_1$ of $G$ and $v_2$ of $H$ are glued together at vertex $v$, where $v_1$ and $v_2$ are not separating planar vertices. The resulting graph will be minor-minimal IPPI3L.
\end{con}
\subsection{Nonplanar and nonouter-projective-planar graphs}
Throughout this section, assume $G$ is a projective planar graph. In this section, we examine graphs that are the disjoint union of two components: one component is nonplanar and nonouter-projective-planar, and the other is nonouterplanar. We prove that the union is an IPPI3L graph. We also show that the property of being planar or outer-projective-planar is a minor closed property. Following that, we give conditions under which such graphs are minor-minimal with regard to that property. Finally, we characterize graphs that are minor-minimal nonplanar and nonouter-projective-planar.
\begin{proposition}
Suppose a graph $G$ has the property that it has a planar embedding or an outer-projective-planar embedding. This property is a minor closed property.
\end{proposition}
\begin{proof}
This follows since being planar is a minor closed property, as is being outer-projective-planar.
\end{proof}
For a graph $G$ to be nonplanar and nonouter-projective-planar, it must have as minors a minor-minimal nonplanar graph and a minor-minimal nonouter-projective-planar graph.
\begin{col}
The set of minor-minimal nonplanar and nonouter-projective-planar graphs is finite, by Robertson and Seymour's Graph Minor Theorem \cite{robertson}.
\end{col}
A \textit{closed cell embedding} is an embedding where every face is bounded by a 0-homologous cycle.
\begin{lem}\label{k33k5}
All projective planar embeddings of $K_5$ and $K_{3,3}$ are closed cell embeddings.
\end{lem}
\begin{figure}[H]
\centering
\includegraphics[scale=0.18]{3Links/K33K5.png}
\caption{Up to symmetry, these are the only possible embeddings of $K_5$ and $K_{3,3}$ \cite{Maharry}.}
\label{fig:my_label}
\end{figure}
\begin{proof}
\noindent First consider $K_5$. It follows from Maharry et al.\ that there are exactly two classes of embeddings of (unlabelled) $K_5$ (see Figure 43 of \cite{Maharry}). Next, consider $K_{3,3}$. Maharry et al.\ show every embedding of labelled $K_{3,3}$ in Figure 40 of \cite{Maharry}. It appears as if there are many embeddings, but they are all equivalent once the labels are ignored. Thus, there is only one embedding of $K_{3,3}$.
\end{proof}
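As a consistency check (this count is ours, not part of the argument in \cite{Maharry}), Euler's formula for cellular embeddings in the projective plane, $V-E+F=1$, determines the number of faces in each case:
\[
K_5:\quad F = 1 - V + E = 1 - 5 + 10 = 6,
\qquad
K_{3,3}:\quad F = 1 - V + E = 1 - 6 + 9 = 4.
\]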
Given a graph $G$ with vertex $v$ in $G$, define a \textit{vertex splitting} of $v$ to be a graph $G'$ with vertex set $(V(G)-\{v\}) \cup\{v_1,v_2\}$ whose edge set contains the edge $(v_1,v_2)$, such that $G$ is the result of contracting $G'$ along $(v_1,v_2)$.
\begin{lem} \label{lemma4lemma}
Let $D$ be a closed cell projective planar drawing of graph $G$. If we split a vertex in that drawing so that the new drawing $D_1$ remains projective planar, then $D_1$ is also closed cell.
\end{lem}
\begin{proof}
Let $v$ be the vertex of $D$ that we are splitting, and let $D_1$ be the drawing that results from the splitting.
\begin{enumerate}
\item If $v$ has degree 0, the splitting creates a disjoint edge. This does not affect the faces, so the same 0-homologous cycles that bound the faces of $D$ also bound the faces of $D_1$. Thus, $D_1$ is a closed cell embedding.
\item If $v$ has degree 1, the splitting also does not affect the faces, so the same 0-homologous cycles that bound the faces of $D$ also bound the faces of $D_1$. Thus, $D_1$ is a closed cell embedding.
\item If $v$ has degree 2 or higher, we split the vertex arbitrarily. An example is seen in Figure \ref{Lemma4Lemmapic}. The splitting affects the face boundary cycles of at most two faces of the drawing; the rest remain the same. The affected faces remain bounded by 0-homologous cycles, which are one vertex longer.
\end{enumerate}
\begin{figure}[H]
\centering
\includegraphics[scale=0.34]{3Links/Lemma4Lemma.png} \caption{Splitting a vertex}
\label{Lemma4Lemmapic}
\end{figure}
\end{proof}
Let $G$ be a planar graph and let $C$ be a cycle in $G$. Then we will say that we have a path $P$ \textit{glued in $C$} when we attach an $n$-path whose endpoints are glued to two different vertices of $C$, and the path does not intersect $C$ anywhere else.
\begin{lem}\label{closed cell - majors}
Suppose $H$ is a graph all of whose projective planar embeddings are closed cell embeddings. Suppose the graph $G$ is a connected projective planar graph that is created by splitting vertices of $H$ and adding paths glued in cycles of $H$. Then all projective planar embeddings of $G$ are closed cell embeddings.
\end{lem}
\begin{proof}
Suppose $H$ is a graph all of whose projective planar embeddings are closed cell embeddings. For case one, suppose we add a path glued in a cycle $C$ of $H$, creating $G_1$. Arbitrarily choose a projective planar drawing $D_1$ of $G_1$. Note that $D_1$ is a drawing of $H$ with a path added. Since $H$ has only closed cell embeddings, no matter which face the path lies in, it divides a face that is bounded by a 0-homologous cycle. This results in two faces, each bounded by a 0-homologous cycle. Thus, $D_1$ is a closed cell embedding, which means every projective planar drawing of $G_1$ is a closed cell embedding. For case two, suppose we split a vertex in $H$ to create $G_2$. By Lemma \ref{lemma4lemma}, every projective planar drawing of $G_2$ is a closed cell embedding. Therefore, for every $G$ created by gluing paths in cycles of $H$ or splitting vertices in $H$, all the projective planar embeddings of $G$ are closed cell embeddings.
\end{proof}
\begin{col}\label{closed cell}
Suppose $G$ is a nonplanar graph obtained by either splitting vertices or gluing paths in the cycles of $K_{3,3}$ or $K_5$. All projective planar embeddings of $G$ are closed cell embeddings.
\end{col}
\begin{proof}
Suppose the nonplanar graph $G$ is obtained as above and has a projective planar drawing $D$. Since $G$ is nonplanar, it contains $K_{3,3}$ or $K_5$ as a minor. By Lemma \ref{k33k5}, all the projective planar embeddings of $K_{3,3}$ and $K_5$ are closed cell embeddings. By Lemma \ref{closed cell - majors}, all the projective planar embeddings of $G$ are also closed cell embeddings. We can conclude that all projective planar embeddings of such a nonplanar graph $G$ are closed cell embeddings.
\end{proof}
\begin{proposition}\label{nonplanar}
Let $G$ and $H$ be projective planar. Suppose $G$ is nonplanar and nonouter-projective-planar, and all its planar minors are not II3L. Also, suppose $G$ is obtained by either splitting vertices or gluing paths in the cycles of $K_{3,3}$ or $K_5$. Also suppose $H$ is planar and nonouterplanar. Then $G \dot\cup H$ is IPPI3L.
\end{proposition}
\begin{proof}
Suppose graph $G$ is nonplanar projective planar and nonouter-projective-planar, and $H$ is nonouterplanar. First, embed $G$ into the projective plane. Call the drawing $D$. Then, embed $H$ in any face $F$ of $D$. By Corollary \ref{closed cell}, $D$ must be a closed cell embedding, which means $F$ is bounded by a 0-homologous cycle, $C_1$. Since $G$ is nonouter-projective-planar, then there is a vertex that is not in the closure of $F$. Label this vertex $v_1$. Because $H$ is nonouterplanar, it has $K_4$ or $K_{3,2}$ as a minor. Without loss of generality, suppose it has a $K_4$ subdivision. The vertex that is within the outer cycle of the $K_4$ subdivision and $v_1$ constitute the $S^0$ piece. The cycle $C_1$ and the cycle that bounds $K_4$ are the two $S^1$'s. This means this graph is IPPI3L.
\end{proof}
An example of a graph that would be IPPI3L by Proposition \ref{nonplanar} would be $K_6 \dot\cup K_{3,2}$, because $K_6$ is nonplanar and nonouter-projective-planar, and $K_{3,2}$ is planar and nonouterplanar.
We define a \textit{separating cycle} in a drawing of a graph to be a 0-homologous cycle that contains at least one vertex in its interior and at least one vertex in its exterior. The next proof is about graphs that do not have two disjoint separating cycles in any drawing of the graph.
\begin{proposition}\label{minorminIPPI3L}
Let $G$ and $H$ be projective planar. Suppose $G$ is minor-minimal nonplanar and nonouter-projective-planar, and all its planar minors are not II3L. Also, suppose $G$ is obtained by either splitting vertices or gluing paths in the cycles of $K_{3,3}$ or $K_5$. Also suppose $H$ is minor-minimal nonouterplanar. This implies that $G \dot\cup H$ is minor-minimal IPPI3L.
\end{proposition}
\begin{proof}
Suppose $G$ is projective planar, minor-minimal nonplanar, and nonouter-projective-planar. Since we have assumed all of $G$'s planar minors are not II3L, it does not have two disjoint separating cycles. Suppose $H$ is projective planar and minor-minimal nonouterplanar.
In Proposition \ref{nonplanar}, we proved that $G\dot{\cup} H$ is IPPI3L. Now, we will prove that $G\dot{\cup} H$ is minor-minimal under this property.
Suppose $L$ is a proper minor of $G$. Since $G$ is minor-minimal, $L$ must be planar or outer-projective-planar. First, suppose $L$ is outer-projective-planar. Then embed $L$ in the projective plane with an outer-projective-planar drawing $D_1$, and embed $H$ in $D_1$ inside the face that contains all the vertices in its closure. This drawing is 3-linkless. Second, suppose $L$ is planar. Affine embed $L$ in the projective plane, and embed $H$ with a 1-homologous cycle. We know $H$ will not have a cycle bounding a vertex, because $H$ is either $K_4$ or $K_{3,2}$. Also, $L$ can have at most one cycle bounding a vertex, because it is not II3L. Thus, the minor is not IPPI3L.
Next, suppose $J$ is a proper minor of $H$; then $J$ is outerplanar. Embed $G$ in the projective plane and call the drawing $D_2$. Now embed $J$ in $D_2$. The embedding of $J$ will not have a cycle bounding a vertex, and $G$ can have at most one. Thus, the minor is not IPPI3L. We can conclude that $G\dot{\cup} H$ is minor-minimal IPPI3L.
\end{proof}
Now, let us examine the set of graphs that are minor-minimal nonplanar and nonouter-projective-planar, that do not have two disjoint separating cycles. The graphs $\epsilon_4$, $\epsilon_6$, and $\kappa_1$ have these properties. Now let us consider $K_6$. It is nonplanar and nonouter-projective-planar, but it is not minor-minimal.
\begin{proposition}\label{5.13}
The graph $K_6$ minus two edges, where the edges may or may not be adjacent, is minor-minimal nonplanar and nonouter-projective-planar.
\end{proposition}
\begin{proof}
The two graphs - $K_6 -2e$ where the edges are adjacent or nonadjacent - are both nonouter-projective-planar because they have $\gamma_1$ as a minor. They are both nonplanar because they contain $K_{5}$ as a minor.
Now, consider deleting a third edge.
\begin{itemize}
\item If we delete three edges incident to the same vertex, the resulting graph is outer-projective-planar.
\item If we delete two edges incident to the same vertex and a third edge disjoint from them, the resulting graph is outer-projective-planar.
\item If we delete three edges that form a path, the resulting graph is planar.
\item If we delete three edges that form a 3-cycle, the resulting graph is outer-projective-planar.
\item If we delete three disjoint edges, the resulting graph is planar.
\end{itemize}
Thus, we cannot delete a third edge and keep the desired properties.
Because we cannot delete a third edge, we also cannot delete a vertex. We also cannot contract an edge, as this would result in a graph on only five vertices; every such graph is a minor of $K_5$, which has an outer-projective-planar embedding, as shown in Figure \ref{fig:my_label}. Thus, $K_6 -2e$ is minor-minimal nonplanar and nonouter-projective-planar.
\end{proof}
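As a side remark (a standard count, independent of the minors exhibited above), the nonplanarity of $K_6-2e$ also follows from the edge bound $E \le 3V-6$ for simple planar graphs:
\[
V = 6, \qquad E = \binom{6}{2} - 2 = 13 > 3V - 6 = 12.
\]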
\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{3Links/k6-2e_with_k32.png}
\caption{This graph is $(K_6-2e) \dot\cup K_4$. The blue cycles are the two $S^1$'s, and the red vertices are the two pieces of the $S^0$.}
\end{figure}
Next, consider the other graphs in the Petersen family. The Petersen family is the set of seven graphs that can be obtained from the Petersen graph by repeated $\Delta - Y$ and $Y- \Delta$ exchanges. The family also includes $K_6$. All other members of the family have $\kappa_1$ as a minor. Therefore, they would not be minor-minimal nonplanar and nonouter-projective-planar.
The graph $\kappa_1$ has $K_{3,3}$ as a minor. Exploring graphs that have $K_5$ as a minor may yield more graphs in the set.
\begin{figure}[H]
\centering
\includegraphics[scale=0.45]{3Links/LU1.png}
\vskip -.4in
\caption{This graph has both $K_5$ and $\gamma_3$ as minors, so it is nonplanar and nonouter-projective-planar.}
\label{fig:LU1}
\end{figure}
\begin{proposition}\label{5.14}
The graph $LU_1$, pictured in Figure \ref{fig:LU1}, is minor-minimal nonplanar and nonouter-projective-planar.
\end{proposition}
\begin{proof}
The graph $LU_1$ has both $K_5$ and $\gamma_3$ as minors, so it is nonplanar and nonouter-projective-planar. Now we want to prove this graph is minor-minimal with regard to being nonplanar and nonouter-projective-planar. There are three types of edges, given different colors in Figure \ref{fig:LU1_color}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{3Links/LU1-color.png}
\caption{Different types of edges of $LU_1$}
\label{fig:LU1_color}
\end{figure}
The green edge connects a degree 2 vertex and a degree 5 vertex. If we delete the green edge, the resulting minor is outer-projective-planar. The red edge connects two degree 4 vertices. If we delete the red edge, the resulting minor is planar. The yellow edge connects a degree 4 vertex and a degree 5 vertex. If we delete the yellow edge, the resulting minor is planar. This means that we also cannot delete vertices, because that would delete edges. Now consider edge contractions. If we contract the green edge, the resulting minor is outer-projective-planar. If we contract the yellow or red edges, the resulting minor is planar. Thus we can conclude that this graph is minor-minimal with regard to being nonplanar and nonouter-projective-planar.
\end{proof}
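The nonplanarity appeals in these proofs ultimately rest on the classical minors $K_5$ and $K_{3,3}$. As a quick sanity check, the standard edge-count obstructions from Euler's formula already certify that both are nonplanar; a minimal sketch (helper names are ours):

```python
# Edge-count obstructions from Euler's formula:
# a simple planar graph on v >= 3 vertices satisfies e <= 3v - 6,
# and a bipartite planar graph satisfies e <= 2v - 4.

def complete_graph_edges(n):
    """Number of edges of the complete graph K_n."""
    return n * (n - 1) // 2

def complete_bipartite_edges(m, n):
    """Number of edges of the complete bipartite graph K_{m,n}."""
    return m * n

# K_5 has 10 edges but 3*5 - 6 = 9, so K_5 is nonplanar.
k5_excess = complete_graph_edges(5) - (3 * 5 - 6)

# K_{3,3} has 9 edges but 2*6 - 4 = 8, so K_{3,3} is nonplanar.
k33_excess = complete_bipartite_edges(3, 3) - (2 * 6 - 4)
```

Both excesses are positive, so neither graph admits a planar embedding; by Kuratowski's theorem these two minors are exactly the obstructions used throughout this section.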
\begin{figure}[H]
\centering
\includegraphics[scale=0.32]{3Links/LU2.png}
\vskip -.4in
\caption{This graph has both $K_5$ and $\epsilon_1$ as minors, so it is nonplanar and nonouter-projective-planar.}
\label{fig:LU2}
\end{figure}
\begin{proposition}\label{5.15}
The graph $LU_2$, pictured in Figure \ref{fig:LU2}, is minor-minimal nonplanar and nonouter-projective-planar.
\end{proposition}
\begin{proof}
We now show that $LU_2$ is minor-minimal with respect to being nonplanar and nonouter-projective-planar. There are four types of edges, distinguished by color in Figure \ref{fig:LU2colored}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.32]{3Links/LU2-color.png}
\vskip -.25in
\caption{Different types of edges of $LU_2$.}
\label{fig:LU2colored}
\end{figure}
The green edge connects two degree 3 vertices; if we delete it, the resulting minor is outer-projective-planar. The red edge connects two degree 4 vertices; if we delete it, the resulting minor is planar. The yellow edge connects a degree 3 vertex and a degree 4 vertex; if we delete it, the resulting minor is outer-projective-planar. The blue edge also connects a degree 3 vertex and a degree 4 vertex, where the degree 4 vertex is adjacent to the other degree 3 vertex; if we delete it, the resulting minor is outer-projective-planar. Since deleting a vertex also deletes its incident edges, no vertex can be deleted either. If we contract the green, yellow, or blue edge, the resulting minor is outer-projective-planar; if we contract the red edge, the resulting minor is planar. We conclude that $LU_2$ is minor-minimal nonplanar and nonouter-projective-planar.
\end{proof}
\begin{figure}[H]
\centering
\includegraphics[scale=0.63]{3Links/LU3.png}
\caption{A graph that contains both $K_5$ and $\gamma_1$ as minors, so it is nonplanar and nonouter-projective-planar.}
\label{fig:LU3}
\end{figure}
\begin{proposition}\label{5.16}
The graph $LU_3$, pictured in Figure \ref{fig:LU3}, is minor-minimal nonplanar and nonouter-projective-planar.
\end{proposition}
\begin{proof}
We now show that $LU_3$ is minor-minimal with respect to being nonplanar and nonouter-projective-planar. There are three types of edges, distinguished by color in Figure \ref{fig:LU3colored}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.63]{3Links/LU3-color.png}
\caption{Different types of edges of $LU_3$.}
\label{fig:LU3colored}
\end{figure}
The green edge connects a degree 3 vertex and a degree 4 vertex; if we delete it, the resulting minor is planar. The red edge connects two degree 4 vertices; if we delete it, the resulting minor is outer-projective-planar. The blue edge connects two degree 3 vertices; if we delete it, the resulting minor is outer-projective-planar. Since deleting a vertex also deletes its incident edges, no vertex can be deleted either.
If we contract the green edge, the resulting graph will be outer-projective-planar. If we contract the red edge, the resulting minor is planar. If we contract the blue edge, the resulting minor is planar. Thus, we can conclude this graph is minor-minimal nonplanar and nonouter-projective-planar.
\end{proof}
\begin{proposition} Any graph of the form $G \dot\cup H$, where $G$ is minor-minimal nonouter-projective-planar and nonplanar, $G$ is not II3L, and $H$ is minor-minimal nonouterplanar, is minor-minimal IPPI3L. That is, $H$ is either $K_4$ or $K_{3,2}$, and $G$ is either $K_6-2e$, $\epsilon_4$, $\epsilon_6$, $\kappa_1$, $LU_1$, $LU_2$, $LU_3$, or some other minor-minimal nonouter-projective-planar and nonplanar graph that is not II3L.
\end{proposition}
\begin{proof}
This directly follows from Propositions \ref{minorminIPPI3L}, \ref{5.13}, \ref{5.14}, \ref{5.15}, and \ref{5.16}.
\end{proof}
\subsection{Summary and Open Questions}
\begin{proposition}
The following graphs are minor-minimal IPPI3L:
\begin{itemize}
\item $K_4\dot\cup K_4 \dot\cup K_4$, $K_4\dot\cup K_{4} \dot\cup K_{3,2}$, $K_4 \dot\cup K_{3,2} \dot\cup K_{3,2}$ or $K_{3,2} \dot\cup K_{3,2} \dot\cup K_{3,2}$ (Proposition \ref{threecomponent}).
\item $G \dot\cup H$, where $G$ and $H$ are minor-minimal nonouter-projective-planar, not II3L, and closed nonseparating. (Proposition \ref{minorminimalclosed}).
\item $G$ glued at any vertex to $\zeta_3$, where $G$ is a minor-minimal nonouter-projective-planar, separating projective planar, and planar (Proposition \ref{minorminimalzetaglued}).
\item $G\dot{\cup} H$, where $G$ is minor-minimal nonplanar and nonouter-projective-planar, and not II3L, and $H$ is minor-minimal nonouterplanar. This includes $(K_6-2e) \dot\cup H$, $\epsilon_4 \dot\cup H$, $\epsilon_6 \dot\cup H$, $\kappa_1 \dot\cup H$, $LU_1 \dot\cup H$, $LU_2 \dot\cup H$, and $LU_3 \dot\cup H$, where $H$ is either $K_4$ or $K_{3,2}$ (Proposition 5.17).
\end{itemize}
\end{proposition}
Several open questions remain. First, what is the complete set of minor-minimal IPPI3L graphs? We have found many such graphs, but the set has not been completely determined. One potential direction would be to examine minor-minimal separating graphs that are glued at an edge rather than at a vertex. Second, what characterizes IPPII3L graphs, especially minor-minimal ones? Also, what is the complete set of graphs that are minor-minimal nonplanar and nonouter-projective-planar? Finally, what is the complete set of graphs that are closed nonseparating?
\section{Acknowledgements}
This material is based upon work conducted by the research group at the 2021 Research Experience for Undergraduates Program at SUNY Potsdam and Clarkson University, advised by Joel Foisy and supported by the National Science Foundation under Grant No. H98230-21-1-0336; St. Catharine's College, University of Cambridge; and Universidad Autónoma del Estado de Hidalgo.
\chapter*{Preface}
This textbook is intended for use by students of physics, physical
chemistry, and theoretical chemistry. The reader is presumed to have a
basic knowledge of atomic and quantum physics at the level provided, for
example, by the first few chapters in our book {\it The Physics of Atoms
and Quanta}. The student of physics will find here material which should
be included in the basic education of every physicist. This book should
furthermore allow students to acquire an appreciation of the breadth and
variety within the field of molecular physics and its future as a
fascinating area of research.
\vspace{1cm}
\begin{flushright}\noindent
June 2016\hfill Walter Olthoff\\
Program Chair\\
ECOOP 2016
\end{flushright}
\chapter*{Organization}
ECOOP 2016 is organized by the Department of Computer Science, University
of \AA rhus and AITO (Association Internationale pour les Technologies
Objets) in cooperation with ACM/SIGPLAN.
\section*{Executive Committee}
\begin{tabular}{@{}p{5cm}@{}p{7.2cm}@{}}
Conference Chair:&Ole Lehrmann Madsen (\AA rhus University, DK)\\
Program Chair: &Walter Olthoff (DFKI GmbH, Germany)\\
Organizing Chair:&J\o rgen Lindskov Knudsen (\AA rhus University, DK)\\
Tutorials:&Birger M\o ller-Pedersen\hfil\break
(Norwegian Computing Center, Norway)\\
Workshops:&Eric Jul (University of Kopenhagen, Denmark)\\
Panels:&Boris Magnusson (Lund University, Sweden)\\
Exhibition:&Elmer Sandvad (\AA rhus University, DK)\\
Demonstrations:&Kurt N\o rdmark (\AA rhus University, DK)
\end{tabular}
\section*{Program Committee}
\begin{tabular}{@{}p{5cm}@{}p{7.2cm}@{}}
Conference Chair:&Ole Lehrmann Madsen (\AA rhus University, DK)\\
Program Chair: &Walter Olthoff (DFKI GmbH, Germany)\\
Organizing Chair:&J\o rgen Lindskov Knudsen (\AA rhus University, DK)\\
Tutorials:&Birger M\o ller-Pedersen\hfil\break
(Norwegian Computing Center, Norway)\\
Workshops:&Eric Jul (University of Kopenhagen, Denmark)\\
Panels:&Boris Magnusson (Lund University, Sweden)\\
Exhibition:&Elmer Sandvad (\AA rhus University, DK)\\
Demonstrations:&Kurt N\o rdmark (\AA rhus University, DK)
\end{tabular}
\begin{multicols}{3}[\section*{Referees}]
V.~Andreev\\
B\"arwolff\\
E.~Barrelet\\
H.P.~Beck\\
G.~Bernardi\\
E.~Binder\\
P.C.~Bosetti\\
Braunschweig\\
F.W.~B\"usser\\
T.~Carli\\
A.B.~Clegg\\
G.~Cozzika\\
S.~Dagoret\\
Del~Buono\\
P.~Dingus\\
H.~Duhm\\
J.~Ebert\\
S.~Eichenberger\\
R.J.~Ellison\\
Feltesse\\
W.~Flauger\\
A.~Fomenko\\
G.~Franke\\
J.~Garvey\\
M.~Gennis\\
L.~Goerlich\\
P.~Goritchev\\
H.~Greif\\
E.M.~Hanlon\\
R.~Haydar\\
R.C.W.~Henderso\\
P.~Hill\\
H.~Hufnagel\\
A.~Jacholkowska\\
Johannsen\\
S.~Kasarian\\
I.R.~Kenyon\\
C.~Kleinwort\\
T.~K\"ohler\\
S.D.~Kolya\\
P.~Kostka\\
U.~Kr\"uger\\
J.~Kurzh\"ofer\\
M.P.J.~Landon\\
A.~Lebedev\\
Ch.~Ley\\
F.~Linsel\\
H.~Lohmand\\
Martin\\
S.~Masson\\
K.~Meier\\
C.A.~Meyer\\
S.~Mikocki\\
J.V.~Morris\\
B.~Naroska\\
Nguyen\\
U.~Obrock\\
G.D.~Patel\\
Ch.~Pichler\\
S.~Prell\\
F.~Raupach\\
V.~Riech\\
P.~Robmann\\
N.~Sahlmann\\
P.~Schleper\\
Sch\"oning\\
B.~Schwab\\
A.~Semenov\\
G.~Siegmon\\
J.R.~Smith\\
M.~Steenbock\\
U.~Straumann\\
C.~Thiebaux\\
P.~Van~Esch\\
L.R.~West\\
G.-G.~Winter\\
T.P.~Yiou\\
M.~Zimmer\end{multicols}
\section*{Sponsoring Institutions}
V. Meyer Inc., Reading, MA, USA\\
The Hofmann-International Company, San Louis Obispo, CA, USA\\
Kramer Industries, Heidelberg, Germany
\tableofcontents
\mainmatter
\part{Hamiltonian Mechanics}
\title{Hamiltonian Mechanics with Special Consideration of
Higher Educational Institutions}
\titlerunning{Hamiltonian Mechanics}
\author{Ivar Ekeland\inst{1} \and Roger Temam\inst{2}
Jeffrey Dean \and David Grove \and Craig Chambers \and Kim~B.~Bruce \and
Elisa Bertino}
\authorrunning{Ivar Ekeland et al.}
\tocauthor{Ivar Ekeland, Roger Temam, Jeffrey Dean, David Grove,
Craig Chambers, Kim B. Bruce, and Elisa Bertino}
\index{Ekeland, I.}
\index{Temam, R.}
\index{Dean, J.}
\index{Grove, D.}
\index{Chambers, C.}
\index{Bruce, K.~B.}
\index{Bertino, E.}
\institute{Princeton University, Princeton NJ 08544, USA,\\
\email{I.Ekeland@princeton.edu},\\ WWW home page:
\texttt{http://users/\homedir iekeland/web/welcome.html}
\and
Universit\'{e} de Paris-Sud,
Laboratoire d'Analyse Num\'{e}rique, B\^{a}timent 425,\\
F-91405 Orsay Cedex, France}
\maketitle
\begin{abstract}
The abstract should summarize the contents of the paper
using at least 70 and at most 150 words. It will be set in 9-point
font size and be inset 1.0 cm from the right and left margins.
There will be two blank lines before and after the Abstract. \dots
\keywords{computational geometry, graph theory, Hamilton cycles}
\end{abstract}
\section{Fixed-Period Problems: The Sublinear Case}
With this chapter, the preliminaries are over, and we begin the search
for periodic solutions to Hamiltonian systems. All this will be done in
the convex case; that is, we shall study the boundary-value problem
\begin{eqnarray*}
\dot{x}&=&JH' (t,x)\\
x(0) &=& x(T)
\end{eqnarray*}
with $H(t,\cdot)$ a convex function of $x$, going to $+\infty$ when
$\left\|x\right\| \to \infty$.
\subsection{Autonomous Systems}
In this section, we will consider the case when the Hamiltonian $H(x)$
is autonomous. For the sake of simplicity, we shall also assume that it
is $C^{1}$.
We shall first consider the question of nontriviality, within the
general framework of
$\left(A_{\infty},B_{\infty}\right)$-subquadratic Hamiltonians. In
the second subsection, we shall look into the special case when $H$ is
$\left(0,b_{\infty}\right)$-subquadratic,
and we shall try to derive additional information.
\subsubsection{The General Case: Nontriviality.}
We assume that $H$ is
$\left(A_{\infty},B_{\infty}\right)$-sub\-qua\-dra\-tic at infinity,
for some constant symmetric matrices $A_{\infty}$ and $B_{\infty}$,
with $B_{\infty}-A_{\infty}$ positive definite. Set:
\begin{eqnarray}
\gamma :&=&{\rm smallest\ eigenvalue\ of}\ \ B_{\infty} - A_{\infty} \\
\lambda : &=& {\rm largest\ negative\ eigenvalue\ of}\ \
J \frac{d}{dt} +A_{\infty}\ .
\end{eqnarray}
Theorem~\ref{ghou:pre} tells us that if $\lambda +\gamma < 0$, the
boundary-value problem:
\begin{equation}
\begin{array}{rcl}
\dot{x}&=&JH' (x)\\
x(0)&=&x (T)
\end{array}
\end{equation}
has at least one solution
$\overline{x}$, which is found by minimizing the dual
action functional:
\begin{equation}
\psi (u) = \int_{o}^{T} \left[\frac{1}{2}
\left(\Lambda_{o}^{-1} u,u\right) + N^{\ast} (-u)\right] dt
\end{equation}
on the range of $\Lambda$, which is a subspace $R (\Lambda)_{L}^{2}$
with finite codimension. Here
\begin{equation}
N(x) := H(x) - \frac{1}{2} \left(A_{\infty} x,x\right)
\end{equation}
is a convex function, and
\begin{equation}
N(x) \le \frac{1}{2}
\left(\left(B_{\infty} - A_{\infty}\right) x,x\right)
+ c\ \ \ \forall x\ .
\end{equation}
\begin{proposition}
Assume $H'(0)=0$ and $ H(0)=0$. Set:
\begin{equation}
\delta := \liminf_{x\to 0} 2 N (x) \left\|x\right\|^{-2}\ .
\label{eq:one}
\end{equation}
If $\gamma < - \lambda < \delta$,
the solution $\overline{u}$ is non-zero:
\begin{equation}
\overline{x} (t) \ne 0\ \ \ \forall t\ .
\end{equation}
\end{proposition}
\begin{proof}
Condition (\ref{eq:one}) means that, for every
$\delta ' > \delta$, there is some $\varepsilon > 0$ such that
\begin{equation}
\left\|x\right\| \le \varepsilon \Rightarrow N (x) \le
\frac{\delta '}{2} \left\|x\right\|^{2}\ .
\end{equation}
It is an exercise in convex analysis, into which we shall not go, to
show that this implies that there is an $\eta > 0$ such that
\begin{equation}
\left\|y\right\| \le \eta
\Rightarrow N^{\ast} (y) \le \frac{1}{2\delta '}
\left\|y\right\|^{2}\ .
\label{eq:two}
\end{equation}
\begin{figure}
\vspace{2.5cm}
\caption{This is the caption of the figure displaying a white eagle and
a white horse on a snow field}
\end{figure}
Since $u_{1}$ is a smooth function, we will have
$\left\|hu_{1}\right\|_\infty \le \eta$
for $h$ small enough, and inequality (\ref{eq:two}) will hold,
yielding thereby:
\begin{equation}
\psi (hu_{1}) \le \frac{h^{2}}{2}
\frac{1}{\lambda} \left\|u_{1} \right\|_{2}^{2} + \frac{h^{2}}{2}
\frac{1}{\delta '} \left\|u_{1}\right\|^{2}\ .
\end{equation}
If we choose $\delta '$ close enough to $\delta$, the quantity
$\left(\frac{1}{\lambda} + \frac{1}{\delta '}\right)$
will be negative, and we end up with
\begin{equation}
\psi (hu_{1}) < 0\ \ \ \ \ {\rm for}\ \ h\ne 0\ \ {\rm small}\ .
\end{equation}
On the other hand, we check directly that $\psi (0) = 0$. This shows
that 0 cannot be a minimizer of $\psi$, not even a local one.
So $\overline{u} \ne 0$ and
$\overline{u} \ne \Lambda_{o}^{-1} (0) = 0$. \qed
\end{proof}
\begin{corollary}
Assume $H$ is $C^{2}$ and
$\left(a_{\infty},b_{\infty}\right)$-subquadratic at infinity. Let
$\xi_{1},\allowbreak\dots,\allowbreak\xi_{N}$ be the
equilibria, that is, the solutions of $H' (\xi ) = 0$.
Denote by $\omega_{k}$
the smallest eigenvalue of $H'' \left(\xi_{k}\right)$, and set:
\begin{equation}
\omega : = {\rm Min\,} \left\{\omega_{1},\dots,\omega_{N}\right\}\ .
\end{equation}
If:
\begin{equation}
\frac{T}{2\pi} b_{\infty} <
- E \left[- \frac{T}{2\pi}a_{\infty}\right] <
\frac{T}{2\pi}\omega
\label{eq:three}
\end{equation}
then minimization of $\psi$ yields a non-constant $T$-periodic solution
$\overline{x}$.
\end{corollary}
We recall once more that by the integer part $E [\alpha ]$ of
$\alpha \in \bbbr$, we mean the $a\in \bbbz$
such that $a< \alpha \le a+1$. For instance,
if we take $a_{\infty} = 0$, Corollary 2 tells
us that $\overline{x}$ exists and is
non-constant provided that:
\begin{equation}
\frac{T}{2\pi} b_{\infty} < 1 < \frac{T}{2\pi}\omega
\end{equation}
or
\begin{equation}
T\in \left(\frac{2\pi}{\omega},\frac{2\pi}{b_{\infty}}\right)\ .
\label{eq:four}
\end{equation}
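For concreteness, condition (\ref{eq:four}) can be instantiated numerically; the values $b_{\infty}=1$ and $\omega=2$ below are illustrative choices of ours, not taken from the text:

```latex
% Illustrative instance of the nontriviality condition, with
% a_infty = 0, b_infty = 1, omega = 2 (values chosen for illustration):
\begin{equation*}
\frac{T}{2\pi}\,b_{\infty} < 1 < \frac{T}{2\pi}\,\omega
\quad\Longleftrightarrow\quad
\frac{T}{2\pi} < 1 < \frac{T}{\pi}
\quad\Longleftrightarrow\quad
T \in (\pi ,\, 2\pi ) = \left(\frac{2\pi}{\omega},\frac{2\pi}{b_{\infty}}\right).
\end{equation*}
```

In this instance a non-constant $T$-periodic solution is obtained exactly for periods strictly between $\pi$ and $2\pi$.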
\begin{proof}
The spectrum of $\Lambda$ is $\frac{2\pi}{T} \bbbz +a_{\infty}$. The
largest negative eigenvalue $\lambda$ is given by
$\frac{2\pi}{T}k_{o} +a_{\infty}$,
where
\begin{equation}
\frac{2\pi}{T}k_{o} + a_{\infty} < 0
\le \frac{2\pi}{T} (k_{o} +1) + a_{\infty}\ .
\end{equation}
Hence:
\begin{equation}
k_{o} = E \left[- \frac{T}{2\pi} a_{\infty}\right] \ .
\end{equation}
The condition $\gamma < -\lambda < \delta$ now becomes:
\begin{equation}
b_{\infty} - a_{\infty} <
- \frac{2\pi}{T} k_{o} -a_{\infty} < \omega -a_{\infty}
\end{equation}
which is precisely condition (\ref{eq:three}).\qed
\end{proof}
\begin{lemma}
Assume that $H$ is $C^{2}$ on $\bbbr^{2n} \setminus \{ 0\}$ and
that $H'' (x)$ is non-de\-gen\-er\-ate for any $x\ne 0$. Then any local
minimizer $\widetilde{x}$ of $\psi$ has minimal period $T$.
\end{lemma}
\begin{proof}
We know that $\widetilde{x}$, or
$\widetilde{x} + \xi$ for some constant $\xi
\in \bbbr^{2n}$, is a $T$-periodic solution of the Hamiltonian system:
\begin{equation}
\dot{x} = JH' (x)\ .
\end{equation}
There is no loss of generality in taking $\xi = 0$. So
$\psi (x) \ge \psi (\widetilde{x} )$
for all $\widetilde{x}$ in some neighbourhood of $x$ in
$W^{1,2} \left(\bbbr / T\bbbz ; \bbbr^{2n}\right)$.
But this index is precisely the index
$i_{T} (\widetilde{x} )$ of the $T$-periodic
solution $\widetilde{x}$ over the interval
$(0,T)$, as defined in Sect.~2.6. So
\begin{equation}
i_{T} (\widetilde{x} ) = 0\ .
\label{eq:five}
\end{equation}
Now if $\widetilde{x}$ has a lower period, $T/k$ say,
we would have, by Corollary 31:
\begin{equation}
i_{T} (\widetilde{x} ) =
i_{kT/k}(\widetilde{x} ) \ge
ki_{T/k} (\widetilde{x} ) + k-1 \ge k-1 \ge 1\ .
\end{equation}
This would contradict (\ref{eq:five}), and thus cannot happen.\qed
\end{proof}
\paragraph{Notes and Comments.}
The results in this section are a
refined version of \cite{smit:wat};
the minimality result of Proposition
14 was the first of its kind.
To understand the nontriviality conditions, such as the one in formula
(\ref{eq:four}), one may think of a one-parameter family
$x_{T}$, $T\in \left(2\pi\omega^{-1}, 2\pi b_{\infty}^{-1}\right)$
of periodic solutions, $x_{T} (0) = x_{T} (T)$,
with $x_{T}$ going away to infinity when $T\to 2\pi \omega^{-1}$,
which is the period of the linearized system at 0.
\begin{table}
\caption{This is the example table taken out of {\it The
\TeX{}book,} p.\,246}
\begin{center}
\begin{tabular}{r@{\quad}rl}
\hline
\multicolumn{1}{l}{\rule{0pt}{12pt}
Year}&\multicolumn{2}{l}{World population}\\[2pt]
\hline\rule{0pt}{12pt}
8000 B.C. & 5,000,000& \\
50 A.D. & 200,000,000& \\
1650 A.D. & 500,000,000& \\
1945 A.D. & 2,300,000,000& \\
1980 A.D. & 4,400,000,000& \\[2pt]
\hline
\end{tabular}
\end{center}
\end{table}
\begin{theorem} [Ghoussoub-Preiss]\label{ghou:pre}
Assume $H(t,x)$ is
$(0,\varepsilon )$-subquadratic at
infinity for all $\varepsilon > 0$, and $T$-periodic in $t$
\begin{equation}
H (t,\cdot )\ \ \ \ \ {\rm is\ convex}\ \ \forall t
\end{equation}
\begin{equation}
H (\cdot ,x)\ \ \ \ \ {\rm is}\ \ T{\rm -periodic}\ \ \forall x
\end{equation}
\begin{equation}
H (t,x)\ge n\left(\left\|x\right\|\right)\ \ \ \ \
{\rm with}\ \ n (s)s^{-1}\to \infty\ \ {\rm as}\ \ s\to \infty
\end{equation}
\begin{equation}
\forall \varepsilon > 0\ ,\ \ \ \exists c\ :\
H(t,x) \le \frac{\varepsilon}{2}\left\|x\right\|^{2} + c\ .
\end{equation}
Assume also that $H$ is $C^{2}$, and $H'' (t,x)$ is positive definite
everywhere. Then there is a sequence $x_{k}$, $k\in \bbbn$, of
$kT$-periodic solutions of the system
\begin{equation}
\dot{x} = JH' (t,x)
\end{equation}
such that, for every $k\in \bbbn$, there is some $p_{o}\in\bbbn$ with:
\begin{equation}
p\ge p_{o}\Rightarrow x_{pk} \ne x_{k}\ .
\end{equation}
\qed
\end{theorem}
\begin{example} [{{\rm External forcing}}]
Consider the system:
\begin{equation}
\dot{x} = JH' (x) + f(t)
\end{equation}
where the Hamiltonian $H$ is
$\left(0,b_{\infty}\right)$-subquadratic, and the
forcing term is a distribution on the circle:
\begin{equation}
f = \frac{d}{dt} F + f_{o}\ \ \ \ \
{\rm with}\ \ F\in L^{2} \left(\bbbr / T\bbbz; \bbbr^{2n}\right)\ ,
\end{equation}
where $f_{o} : = T^{-1}\int_{o}^{T} f (t) dt$. For instance,
\begin{equation}
f (t) = \sum_{k\in \bbbn} \delta_{k} \xi\ ,
\end{equation}
where $\delta_{k}$ is the Dirac mass at $t= k$ and
$\xi \in \bbbr^{2n}$ is a
constant, fits the prescription. This means that the system
$\dot{x} = JH' (x)$ is being excited by a
series of identical shocks at interval $T$.
\end{example}
\begin{definition}
Let $A_{\infty} (t)$ and $B_{\infty} (t)$ be symmetric
operators in $\bbbr^{2n}$, depending continuously on
$t\in [0,T]$, such that
$A_{\infty} (t) \le B_{\infty} (t)$ for all $t$.
A Borelian function
$H: [0,T]\times \bbbr^{2n} \to \bbbr$
is called
$\left(A_{\infty} ,B_{\infty}\right)$-{\it subquadratic at infinity}
if there exists a function $N(t,x)$ such that:
\begin{equation}
H (t,x) = \frac{1}{2} \left(A_{\infty} (t) x,x\right) + N(t,x)
\end{equation}
\begin{equation}
\forall t\ ,\ \ \ N(t,x)\ \ \ \ \
{\rm is\ convex\ with\ respect\ to}\ \ x
\end{equation}
\begin{equation}
N(t,x) \ge n\left(\left\|x\right\|\right)\ \ \ \ \
{\rm with}\ \ n(s)s^{-1}\to +\infty\ \ {\rm as}\ \ s\to +\infty
\end{equation}
\begin{equation}
\exists c\in \bbbr\ :\ \ \ H (t,x) \le
\frac{1}{2} \left(B_{\infty} (t) x,x\right) + c\ \ \ \forall x\ .
\end{equation}
If $A_{\infty} (t) = a_{\infty} I$ and
$B_{\infty} (t) = b_{\infty} I$, with
$a_{\infty} \le b_{\infty} \in \bbbr$,
we shall say that $H$ is
$\left(a_{\infty},b_{\infty}\right)$-subquadratic
at infinity. As an example, the function
$\left\|x\right\|^{\alpha}$, with
$1\le \alpha < 2$, is $(0,\varepsilon )$-subquadratic at infinity
for every $\varepsilon > 0$. Similarly, the Hamiltonian
\begin{equation}
H (t,x) = \frac{1}{2} k \left\|x\right\|^{2} +\left\|x\right\|^{\alpha}
\end{equation}
is $(k,k+\varepsilon )$-subquadratic for every $\varepsilon > 0$.
Note that, if $k<0$, it is not convex.
\end{definition}
\paragraph{Notes and Comments.}
The first results on subharmonics were
obtained by Rabinowitz in \cite{fo:kes:nic:tue}, who showed the existence of
infinitely many subharmonics both in the subquadratic and superquadratic
case, with suitable growth conditions on $H'$. Again the duality
approach enabled Clarke and Ekeland in \cite{may:ehr:stein} to treat the
same problem in the convex-subquadratic case, with growth conditions on
$H$ only.
Recently, Michalek and Tarantello (see \cite{fost:kes} and \cite{czaj:fitz})
have obtained lower bounds on the number of subharmonics of period $kT$,
based on symmetry considerations and on pinching estimates, as in
Sect.~5.2 of this article.
\section{Introduction}
In Reinforcement Learning (RL), the goal of an agent acting in a Markov Decision Process is to maximize the expected cumulative reward: the higher the expected reward, the better the agent's policy. In Imitation Learning, by contrast, the agent does not have access to a reward signal from the environment. Instead, it either has access to an expert who can be queried online for the best action in a given state, or a set of trajectories generated by the expert is available. Imitation Learning has proven particularly successful in domains where a demonstration by an expert is easier to obtain than a suitable reward function is to construct \cite{arora2021survey}. There are two main approaches to Imitation Learning: Behavioral Cloning (BC) \cite{pomerleau1991efficient} and Inverse Reinforcement Learning (IRL) \cite{fu2017learning,arora2021survey,abbeel2004apprenticeship}. In BC, the agent learns via supervised learning to produce the same actions that the expert would have produced. The advantage of this approach is that no further rollouts in the environment are necessary. However, it suffers greatly from compounding error, i.e., the gradual drift of the agent away from the states visited by the expert \cite{ross2011reduction}. In the second approach, IRL, a reward function is learned under which the expert is uniquely optimal. A policy can then be learned using classical Reinforcement Learning and this reconstructed reward function. The drawback of this approach is that it usually requires many rollouts in the environment, as it often includes RL as a subroutine.
GAIL \cite{ho2016generative} is another approach to Imitation Learning. It builds on the ideas of Generative Adversarial Networks. In this approach, a policy and a discriminator are learned. The goal of the discriminator is to be able to distinguish state-action pairs of the expert from state-action pairs of the agent, while the goal of the policy is to fool the discriminator. GAIL requires expert actions, but there is an extension, named GAIfO, which does not \cite{torabi2018generative}. While GAIL discriminates between state-action pairs produced by the agent or the expert respectively, GAIfO does so with state transitions. In this paper we consider, as GAIfO does, the setting where no expert actions are available to the agent. This setting is also called Learning from Observation (LfO) or Imitation from Observation (IfO) \cite{yang2019imitation,torabi2018generative}.
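The discriminator at the heart of GAIL and GAIfO is trained with a standard binary cross-entropy objective over expert and agent samples. The following sketch is schematic only: the names are ours, and a real implementation would use a neural discriminator trained on minibatches rather than raw scores.

```python
import math

def bce_discriminator_loss(expert_scores, agent_scores):
    """Binary cross-entropy for a GAIfO-style discriminator.

    `expert_scores` / `agent_scores` are the discriminator's sigmoid
    outputs D(s, s') in (0, 1) on expert and agent state transitions.
    The discriminator is trained to push expert scores toward 1 and
    agent scores toward 0, while the policy tries to fool it.
    """
    loss = 0.0
    for d in expert_scores:
        loss += -math.log(d)          # label 1: expert transition
    for d in agent_scores:
        loss += -math.log(1.0 - d)    # label 0: agent transition
    return loss / (len(expert_scores) + len(agent_scores))
```

A discriminator that cannot tell the two apart (all scores 0.5) sits at loss $\log 2$; the adversarial game drives the policy's transition distribution toward the expert's.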
\ \\
Providing expert trajectories is often very expensive and time-consuming, especially if the expert is a human. The goal must therefore be to create algorithms which require as little expert data as possible.
The aim of this paper is to present an algorithm that imitates the higher-level strategy of the expert rather than just imitating the expert at the action level.
Our motivation is the hypothesis that less expert data is needed to learn the higher-level strategy than to imitate the expert at the action level. We also hypothesize that this makes training more stable, with less ``forgetting'' of what has already been learned. As a prior for the higher-level strategy, we assume that it consists of reaching an unknown target state area, which we hypothesize is a valid prior for many domains in Reinforcement Learning.
We present an algorithm that learns such higher-level strategies from expert trajectories. To demonstrate that the algorithm does not imitate the expert at the action level, we consider a special setting of Imitation Learning characterized by incomplete expert trajectories. Here, the agent does not see every state of the expert trajectory but, for example, only every fifth. Thus, it cannot imitate the expert at the action level.
\ \\
The idea behind machine learning is to derive general rules from a large amount of data, which can then be applied to new, unknown scenarios. This induction-based learning principle differs from Case-Based problem solving. In Case-Based Reasoning, a set of problems solved in the past is stored in a database. If a new, unknown problem is to be solved, the problem most similar to the current situation is retrieved from the database and used to solve the current problem. Applications of Case-Based Reasoning range from explaining neural network outputs \cite{li2018deep,keane2019case} over financial risk detection \cite{li2021data} to medical diagnosis \cite{choudhury2016survey}. In our algorithm we build on ideas from Case-Based Reasoning as well as on the idea of Temporal Coherence.
Temporal Coherence \cite{goroshin2015unsupervised,mobahi2009deep,zou2011unsupervised} originates from Video Representation Learning, where the idea is that two images, which occur shortly after each other in a video, are very likely to show the same object or objects. The two images should therefore have a similar representation. On the other hand, distant images should have different representations. The combination of this convergence and divergence, also called contrastive learning, can be used as a self-supervised training signal to learn semantically meaningful representations \cite{knights2021temporally}.
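The retrieval step of Case-Based Reasoning can be sketched as a nearest-neighbour lookup over stored cases; the case encoding and the distance function below are illustrative assumptions on our part, not taken from the cited works.

```python
def retrieve_most_similar(case_base, query, distance=None):
    """Return the stored case closest to `query` and its solution.

    `case_base` maps problem descriptions (tuples of numbers) to their
    stored solutions; `distance` defaults to squared Euclidean distance.
    """
    if distance is None:
        distance = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(case_base, key=lambda problem: distance(problem, query))
    return best, case_base[best]
```

The retrieved solution is then adapted to the new problem; in our algorithm, an analogous similarity lookup is performed against stored expert states rather than solved problems.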
\ \\
Our contribution in this paper is twofold. First, we propose the setting with incomplete expert trajectories without expert actions as a way to prove that the agent really learns the expert's strategy and does not imitate the expert on action level. The prior we are using for the higher-level strategy is to reach an unknown target state area. Second, we present an algorithm that can learn such higher-level strategies and we test it on four typical domains of RL. The results show that our approach can still learn a near-optimal policy in settings with very little expert data, where IRL algorithms that try to imitate the expert at the action level can no longer do so.
\section{Background}
In this section we provide a brief introduction to Markov Decision Processes (MDPs) \cite{arulkumaran2017brief}. An MDP is a tuple $(S,A,T,R,\gamma)$. $S$ is a set of states, combined with a distribution over starting states $p(s_0)$. $A$ is a set of actions the agent can perform. $T$ is the transition function of the environment, which gives the next state $s_{t+1}$ given a state $s_t$ at time $t$ and an action $a_t$: $T(s_{t+1}|s_t,a_t)$. The property that $s_{t+1}$ depends only on the last state $s_t$ and not on $s_{\tau}$ for $\tau < t$ is called the Markov property, hence the name Markov Decision Process. $r_t=R(s_t,a_t,s_{t+1})$ is a reward function and $\gamma \in [0,1]$ is a discount factor; if $\gamma < 1$, immediate rewards are preferred over later ones. An agent acts in an MDP using its policy $\pi$, a function that outputs an action $a$ given a state $s$: $\pi(s)=a$. MDPs are often episodic, meaning that the agent acts for $T$ steps (by a slight abuse of notation, $T$ also denotes the episode length), after which the environment is reset to a starting state. The goal of the agent in an MDP is to maximize the expected return by finding the policy
\begin{equation}
{\pi}^* = \underset{\pi}{\mathrm{argmax}}\ E[R|\pi]
\end{equation}
where the return $R$ is computed as:
\begin{equation}
R=\sum_{t=0}^{T-1}{\gamma^tr_{t+1}}
\end{equation}
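The return above can be computed directly from a recorded episode; a minimal sketch (names are ours):

```python
def discounted_return(rewards, gamma):
    """Discounted return R = sum_{t=0}^{T-1} gamma^t * r_{t+1}.

    `rewards` lists r_1, ..., r_T in temporal order;
    `gamma` is the discount factor in [0, 1].
    """
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# With gamma < 1, earlier rewards dominate:
# discounted_return([1, 1, 1], 0.5) = 1 + 0.5 + 0.25 = 1.75
```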
\section{Related Work}
\textbf{Combination of Case-Based Reasoning (CBR) and Reinforcement Learning (RL)}: In \cite{bianchi2009improving} the authors use CBR in the setting of Heuristic Accelerated Reinforcement Learning, where a heuristic function assists in action selection to accelerate exploration of the state-action space.
In \cite{auslander2008recognizing}, Case-Based Reasoning is used to efficiently switch between previously stored policies learned with classical RL. A similar approach is taken by \cite{wender2014combining}.
Most Imitation Learning algorithms try to imitate the expert skill step-by-step. In \cite{lee2019follow}, a hierarchical algorithm is presented where this goal is mitigated. Instead, a policy is learned that reaches sub-goals, which in turn are sampled by a meta-policy from the expert demonstrations.
\ \\
\textbf{Temporal Coherence in Reinforcement Learning}: Some papers have already investigated the use of Temporal Coherence in the context of Reinforcement Learning. For example, in \cite{florensa2019self} it was proposed to learn an embedding for the inputs of the Markov Decision Process, such that the Euclidean distance in the embedding space is proportional to the number of actions the current agent needs to get from one state to the other. A byproduct of this is a policy that can theoretically reach any previously seen state on demand. A similar idea is followed in the context of goal-conditioned RL: in \cite{lee2021generalizable} a proximity function $f(s,g)$ is learned that outputs a scalar proportional to the distance of the state $s$ to the goal $g$. The distance then serves as a dense reward signal for a classical RL agent. This is especially beneficial when the environment's reward function is sparse.
In \cite{sermanet2018time}, a special setting is considered where multiple observations are available simultaneously, showing the same state from different perspectives. An embedding is then learned so that contemporaneous observations have the same embedding and temporally distant observations have different embeddings. Thus, a perspective-invariant representation is learned, which contains semantic information. That paper also considers the case where only one perspective is available. In this case, the embeddings of two nearby inputs should be as similar as possible and temporally distant inputs should be as dissimilar as possible. We build on this idea of Temporal Coherence, although we do not learn an embedding. \cite{dwibedir2018self} extends the idea of \cite{sermanet2018time} to input sequences to contrast movements.
In \cite{savinov2018episodic}, the concept of Reachability Networks was introduced: a network that classifies whether two states can occur in short succession in a trajectory. This network is then used as a curiosity signal to guide exploration in sparse reward domains. We build on this concept, but use it differently. While in \cite{savinov2018episodic} the agent searches for dissimilar states, the goal of the agent in our approach is to reach similar states (compared to expert states).
\ \\
\textbf{Curriculum via Expert Demonstrations}: As we will see in the next section, the reconstructed reward function in our approach can be interpreted as an implicit curriculum. A related approach, which creates an explicit curriculum using expert demonstrations, is \cite{dai2021automatic}. In that paper the expert trajectory is divided into several sections and state resetting to expert states is used to increase the difficulty of reaching the goal state. The sector from which expert states are sampled for resetting is gradually pushed away from the goal as the curriculum progresses. A similar approach is \cite{hermann2020adaptive}, which again uses resetting to starting states of varying difficulty.
\ \\
\textbf{Unsupervised Perceptual Rewards for Imitation Learning}: The closest work to ours is \cite{sermanet2016unsupervised}. In that paper the authors examine how to use pre-trained vision models to reconstruct a reward function from a few human video demonstrations. They do so by first splitting the human demo videos into segments, then selecting features of a pre-trained model which best discriminate between the segments, and then using a reward function based on these selected features to learn a policy via standard RL algorithms. The biggest difference to our algorithm is that \cite{sermanet2016unsupervised} reconstructs the reward function entirely before training in the RL domain. In contrast, we learn the reward function and the policy at the same time.
\section{Case-Based Inverse Reinforcement Learning (CB-IRL)}
In this work, we consider a special setting of Imitation Learning that is characterized by two main features.
First, there are no expert actions available to the agent and, second, the expert trajectories are incomplete, i.e., from the original sequence of MDP states of the expert $[s_0, s_1, s_2, ..., s_T]$, the agent only sees, for example, every fifth state: $[s_0, s_5, s_{10}, ...]$. This makes it impossible for the agent to imitate the expert at the action level. Given such a setting, we now propose the algorithm Case-Based Inverse Reinforcement Learning (CB-IRL). The architecture of CB-IRL consists of the Case Base ($C$) and two neural networks, the Equality Net ($E$) and the Policy ($\pi$), see Figure 1. $C$ is filled with the expert trajectories.
\ \\
The basic idea is that the agent should not act in every step exactly as the expert would, but instead imitate the higher-level strategy of the expert. We chose the task of reaching a target state area as the prior for the higher-level strategy. For example, for the OpenAI Gym domain `MountainCar' the target state area consists of the states where the car is on top of the mountain. For the Atari game `Pong' the target state area would be the states where the agent has 21 points. The agent does not know the target state area, but since the expert has demonstrated how to reach it, CB-IRL trains the agent to reach states similar to those of the expert.
Two states are ``similar'' in the context of Reinforcement Learning if it takes only a few steps (actions) to get from one state to the other. Other approaches \cite{florensa2019self,sermanet2018time,dwibedir2018self} try to learn a state embedding such that the Euclidean distance between the representations is proportional to the number of steps needed to get from one state to the other. We take a different approach and instead train a neural network that accepts two states $s_1$ and $s_2$ as input and outputs a scalar $E(s_1,s_2) = d$; $E:S\times S \rightarrow [0;1]$, to classify whether $s_2$ can be reached within $\textit{windowFrame}$ steps from $s_1$. Thus, this is a classification and not a regression. We believe a classification is easier and more stable to learn than a regression, since it suffers less from the ``moving target'' problem: if we predicted the number of steps required to go from one state to the other, the target of this supervised learning task would depend heavily on the current performance of the agent. In contrast, for near/far classification it does not matter whether the states are, for example, 30 or 40 steps apart if $\textit{windowFrame}=10$; in both cases the state pair gets the target 0 for supervised learning, since it shall be classified as dissimilar.
\SetKwComment{Comment}{/* }{ */}
\RestyleAlgo{ruled}
\begin{algorithm}
\caption{CB-IRL}\label{alg:two}
\KwData{Case-Base $C$ (containing expert trajectories)}
\KwResult{Policy $\pi$, Equality Net $E$}
\While{training}{
$s \gets$ sample start state\\
$r_{pre} \gets $Reward$(s)$\\
$trajectory \gets [s]$\\
\While{episode is not finished}{
$a \gets \pi(s)$\\
$s' \gets$ execute $a$\\
$trajectory.append(s')$\\
$r_{post} \gets $Reward$(s')$\\
$r \gets r_{post} - \alpha \cdot r_{pre}$\\
use $(s,a,r,s')$ for training $\pi$\\
$s \gets s'$\\
$r_{pre} \gets r_{post}$\\
}
append $trajectory$ to the Replay Buffer of $E$\\
train $E$ using the Replay Buffer, $C$ and the hyperparameters $\textit{windowFrame}$ and $\nu$\\
}
\ \\
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{:}{}
\SetKwFunction{FMain}{Reward}
\Fn{\FMain{$s$}}{
$mostSimilar = \mu$\;
$similarity = \tau$\;
\ForEach{trajectory $\in C$}{%
\ForEach{$o_e^{(i)} \in $ trajectory}{
\If{$E(s, o_e^{(i)}) > similarity$}{
$mostSimilar = i$\;
$similarity = E(s, o_e^{(i)})$\;
}
}
}
\KwRet $mostSimilar$\;
}
\end{algorithm}
\begin{figure}[bt]
\centering
\includegraphics[scale=0.37]{images/Bild2.png}
\caption{This figure shows the usual cycle of Reinforcement Learning, with a small adjustment. The Equality Net ($E$) is interposed between the environment and the agent. The agent performs an action $a$, which is executed in the environment. $E$ receives the next observation $o'$ from the environment, then calculates the reward $r$ using the case database and forwards both to the agent.}
\end{figure}
A second advantage of Reachability Networks in contrast to embeddings is that they are suitable for asymmetric state-action spaces. For example, it may be easy to reach $s_2$ from $s_1$, but difficult or impossible to reach $s_1$ from $s_2$.
\ \\
The policy $\pi$ is learned via Inverse Reinforcement Learning using the case database $C$ and the Equality Net $E$. If the agent is in state $o$, it executes the action $a = \pi(o)$ with its current policy $\pi$ and receives the next observation $o'$ from the environment. Using $E$, all expert observations $o_e^{(i)}$ from $C$ are now checked to see if they are similar to $o'$, where the similarity must be above a threshold $\tau$. If there is a similar expert state $o_e^{(j)}$ (if more than one, choose the most similar), the reward is given by the position number $j$. Thus, the further back the similar expert state is in the expert trajectory, the higher the reward the agent receives. If there is no similar expert state, the agent receives a penalty $\mu$ (a negative reward). Figure 2 shows the idea schematically. The complete algorithm is summarized in Algorithm 1.
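A minimal Python sketch of this reward computation (our own illustration, mirroring the Reward function in Algorithm 1; any callable returning a similarity in [0, 1] can stand in for the Equality Net):

```python
def reward(o_next, case_base, E, tau, mu):
    """Sketch of the Reward function from Algorithm 1.

    case_base: list of expert trajectories, each a list of expert states.
    E(s1, s2): similarity score in [0, 1] (the Equality Net).
    tau: minimum similarity threshold; mu: penalty (negative) if no match.
    Returns the index of the most similar expert state, or mu."""
    most_similar, similarity = mu, tau
    for trajectory in case_base:
        for i, o_e in enumerate(trajectory):
            score = E(o_next, o_e)
            if score > similarity:
                most_similar, similarity = i, score
    return most_similar
```

The further back the matched expert state lies in its trajectory, the larger the returned index, and hence the larger the reward.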
\ \\
The algorithm contains several hyperparameters, whose task and influence we discuss in the following: $\tau \in [0;1]$ is the threshold that determines the minimum similarity of an expert state $o_e^{(i)}$ to the current state $o$ of the agent, so that the agent receives a positive reward. If no expert state has a similarity higher than $\tau$, the agent receives a penalty (a negative reward $\mu$). The hyperparameter $\alpha \in [0;1]$ controls whether the actual reward for the agent is always the reward difference ($\alpha=1$) or whether the agent always receives the full reward ($\alpha=0$). For $\alpha=0$, the agent tends to achieve large rewards as quickly as possible, but maybe not reliably, whereas for $\alpha=1$, the agent tries to achieve a large reward as reliably as possible by the end of the episode.
The hyperparameters $\textit{windowFrame}$ and $\nu$ are used to train $E$. They model, on the one hand, the threshold that determines whether two states are considered similar or dissimilar and, on the other hand, the number of explicit divergences enforced between states of the agent and states of the expert.
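To make the role of $\alpha$ concrete, here is a one-line sketch (ours) of the reward actually passed to the agent in Algorithm 1:

```python
def shaped_reward(r_pre, r_post, alpha):
    """Actual training reward from Algorithm 1: r = r_post - alpha * r_pre."""
    return r_post - alpha * r_pre

# alpha = 0: the agent receives the full reward of the new state;
# alpha = 1: only the improvement over the previous state counts.
assert shaped_reward(3.0, 5.0, 0.0) == 5.0
assert shaped_reward(3.0, 5.0, 1.0) == 2.0
```

For $\alpha=1$ this is a difference of position-based rewards, so the agent is paid for making progress along the expert trajectory rather than for merely sitting in a high-reward state.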
\begin{figure}[bt]
\centering
\includegraphics[scale=0.2]{images/Bild1.png}
\caption{This figure shows schematically how the algorithm works. Assume the rectangle is a two-dimensional state space. The light and dark gray boxes represent the trajectory of expert states but only the dark states are visible to the agent. Using its own rollouts the agent now learns the Equality Net, which classifies whether two states can occur close to each other in a trajectory. The outputs of the Equality Net for the expert states are represented in the image by the circles around them. During inference, the agent checks whether the current state is similar to an expert state or not. For example, if the agent is in the yellow state, it is similar to expert state $e_3$ and therefore receives reward 3. If the agent is in the blue state, it is not similar to any expert state and receives a negative reward $\mu$.}
\end{figure}
\ \\
\textbf{Training of the Equality Net}: The task of the Equality Net $E$ is to classify whether two inputs can occur in short succession in a trajectory and are thus ``similar''. To train $E$, we use the Replay Buffer that contains the trajectories sampled by the agent. $E$ is trained using supervised learning.
The training set consists of similar and dissimilar state pairs. For the similar state pairs, two states are selected from the same trajectory of the Replay Buffer which are no further apart than $\textit{windowFrame}$ steps. For the dissimilar state pairs, two states are sampled from two different trajectories. For the similar state pairs, the network $E$ is trained to output the value $1$, for dissimilar state pairs it is trained to output $0$. The structure of $E$ is graphically visualized in Figure 3.
\ \\
In addition, training can also be performed in an analogous manner on the expert trajectories. The hyperparameter $\nu$ models the number of explicit divergences between agent and expert states. That is, there are $\nu$ state pairs where one state is sampled from the Replay Buffer and the other state is sampled from $C$. The target for these state pairs during supervised learning is $0$, since they shall be classified as dissimilar.
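The construction of the training set for $E$ can be sketched as follows (a simplified illustration under our own naming; uniform sampling and at least two agent trajectories in the Replay Buffer are assumed):

```python
import random

def sample_training_pairs(replay_buffer, case_base, window_frame, nu, n_pairs):
    """Build (state1, state2, label) pairs for supervised training of E.

    replay_buffer and case_base are lists of trajectories (lists of states).
    Label 1: two states from one agent trajectory, at most window_frame apart.
    Label 0: states from two different trajectories, plus nu explicit
    agent-vs-expert pairs, which shall be classified as dissimilar."""
    pairs = []
    for _ in range(n_pairs):
        # similar pair: two states from the same trajectory, close in time
        traj = random.choice(replay_buffer)
        i = random.randrange(len(traj))
        j = min(len(traj) - 1, i + random.randint(0, window_frame))
        pairs.append((traj[i], traj[j], 1))
        # dissimilar pair: states from two different trajectories
        t1, t2 = random.sample(replay_buffer, 2)
        pairs.append((random.choice(t1), random.choice(t2), 0))
    # nu explicit agent-vs-expert divergences, labeled dissimilar
    for _ in range(nu):
        agent_traj = random.choice(replay_buffer)
        expert_traj = random.choice(case_base)
        pairs.append((random.choice(agent_traj), random.choice(expert_traj), 0))
    return pairs
```

$E$ is then fitted on these pairs with a standard binary classification loss.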
\ \\
The output of the Equality Net can be understood as a (lossy) binary distance measure. The distance measure is binary because it only distinguishes between similar (1) and dissimilar (0).
\begin{figure}[bt]
\centering
\includegraphics[scale=0.58]{images/Bild3.png}
\caption{The Equality Net $E$ accepts as input two states and classifies whether they are similar in the sense that they can appear close to each other in a trajectory. During inference, $E$ receives as input the current state $o$ of the agent and compares it to all expert states $o_e^{(i)}$. $E$ is trained using supervised learning on the trajectories produced by the agent, which are stored in the Replay Buffer.}
\end{figure}
\section{Experiments}
We tested our algorithm in four OpenAI Gym domains \cite{brockman2016openai}: Acrobot, Mountain Car, Lunar Lander, and Half Cheetah. For Half Cheetah, we created a modified version called Half Cheetah Discrete. Details can be found in Appendix A.
As justified in \cite{christiano2017deep}, only domains in which the episodes are always of the same length should be used for the evaluation of IRL algorithms. This is because the early ending of an episode may contain implicit information about the reward. For example, in the `Mountain Car' domain, the episode ends when the car has successfully driven up the hill. For this reason, we adjusted all domains so that episodes are always of the same length, with the agent receiving the last observation repeatedly until the end if the episode ended early.
\ \\
We first trained an expert for each domain using the reward function of the environment. We then used these experts to create exactly one trajectory per domain, consisting only of the expert states and not the expert actions. We then used these trajectories to train CB-IRL and GAIFO. GAIFO had access to \underline{all} expert states, while CB-IRL only had access to \underline{every tenth} expert state. For example, for Lunar Lander, the expert trajectory was about 150 steps long, so the training set for GAIFO consisted of these 150 expert states, while the training set for CB-IRL consisted of only 15 expert states.
For the hyperparameter search, we tested five hyperparameter sets for each algorithm and domain and selected the best one. Using these hyperparameters, we then trained CB-IRL and GAIFO three times with three different seeds. During training we generated 20 episodes every 10,000 steps for each seed and algorithm (for a total of 60 episodes per algorithm every 10,000 steps). For each episode, we calculated the total return using the environment's reward function. The returns were then scaled using the performance of a random agent (representing value 0) and the expert (representing value 1). We then calculated the 0.25, 0.5 (median), and 0.75 quantiles for these 60 return values. For both algorithms, the solid lines represent the median and the shaded areas enclose the 0.25 and 0.75 quantiles.
As can be seen in Figure 4, CB-IRL mostly performed better than GAIFO in the experiments, even though it had access to only one tenth of GAIFO's training set. Furthermore, CB-IRL showed a more stable learning behavior. The difference was particularly clear in the Half Cheetah Discrete domain. Here, the advantage of CB-IRL became apparent, where the agent did not learn to behave exactly like the expert in every state, but to reach similar states as the expert. CB-IRL has learnt the high-level strategy to ``run as far as possible''.
A Python implementation of CB-IRL and the code used to create the experiments are available on GitHub [\url{https://github.com/JonasNuesslein/CB-IRL}]. For GAIFO we used the implementation of tf2rl \cite{ota2020tf2rl}. The chosen hyperparameters for the experiments can also be found on GitHub in the file \textit{config.py}.
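The scaling of returns used in the evaluation (random agent mapped to 0, expert to 1) is a simple linear normalization, sketched here with hypothetical numbers:

```python
def scale_return(ret, random_return, expert_return):
    """Linearly rescale a raw return so that the random agent's average
    return maps to 0 and the expert's return maps to 1."""
    return (ret - random_return) / (expert_return - random_return)

# E.g., raw return 50 with random baseline -100 and expert score 200:
print(scale_return(50, -100, 200))  # halfway between random and expert
```

Scaled values above 1 would mean the policy outperforms the expert; values below 0 mean it is worse than random.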
\begin{figure}[bt]
\centering
\includegraphics[scale=0.42]{images/Experiments.png}
\caption{Scaled performance of CB-IRL and GAIFO on four different domains trained using one expert trajectory, where GAIFO had access to all expert states and CB-IRL had access to only one in ten.}
\end{figure}
\section{Discussion of the Approach}
In this section we discuss the advantages and disadvantages of CB-IRL. Turning first to the disadvantages: the computation of the reward is more computationally expensive than in many other IRL algorithms, because in each step the current state must be compared against all expert states in the Case Base $C$. The run-time complexity is thus linear in the size of $C$. This can be serious for larger case bases; however, the target application areas of CB-IRL are precisely the settings where very little expert data is available. Moreover, the computational cost can be reduced by calculating a reward only in every $k$-th step, rather than in every step.
The second drawback of our approach is the specialization of CB-IRL to state-reaching in contrast to state-keeping domains. By state-reaching domains, we mean domains in which certain variables of the state vector have to be changed. An example of this is the OpenAI Gym domain `Mountain Car' \cite{brockman2016openai}, in which the goal is to maximize the x-position of the car. Another example is the Atari game `Pong' \cite{mnih2015human}, in which the goal of the agent is to reach 21 points. By state-keeping domains, we mean domains in which the goal is to leave certain variables of the state vector unchanged. An example of this would be `Cart-Pole' \cite{brockman2016openai}, where the goal is to keep the angle of the pole at 90° if possible, or `HalfCheetah' \cite{brockman2016openai}, where the goal is to keep a high velocity. Due to the structure of CB-IRL, it is predominantly suitable for state-reaching domains, as the algorithm encourages the agent to reach states from the later part of the expert trajectory. \par
\ \\
The advantages of CB-IRL are that it does not require a reward function, expert actions, or complete expert trajectories.
Since the agent can learn from incomplete expert trajectories, this demonstrates that it imitates the higher-level strategy of the expert and does not imitate the expert at action level.
This allows the agent to learn a near-optimal strategy from an amount of data that would be insufficient to imitate the expert at action level (as can be seen in the Half Cheetah Discrete domain). The learning behavior also shows a more stable pattern, with less ``forgetting'' of what has already been learned.
A second possible advantage, which we leave as future work to verify, is that the Equality Net is not task-specific and can be reused for other tasks in the same domain, which can enable fast transfer learning.
A third possible advantage also left for future work is that the ability to learn from incomplete trajectories may be beneficial in real-world applications, where state observations may be noisy or delayed.
\section{Conclusion}
In this paper, we have shown that when very little expert data is available, it is advantageous to imitate the higher-level strategy of the expert rather than imitating the expert at action level.
To prove that the agent really imitated the strategy and not the expert actions, we considered a special setting of Imitation Learning characterized by incomplete expert trajectories. Moreover, no expert actions were available to the agent (Learning from Observations). The chosen prior for the higher-level strategy was to reach an unknown target state area. But since the expert has demonstrated how to reach it, the agent tries to reach states similar to those of the expert.
The presented algorithm Case-Based Inverse Reinforcement Learning (CB-IRL) builds on the idea of Temporal Coherence and Case-Based Reasoning. The algorithm trains a neural network to predict whether a state $s_2$ can be reached from a state $s_1$ within $\textit{windowFrame}$ time steps (actions). If so, the states can be considered ``similar''. During inference, the agent uses this network to compare its current state $o$ against expert states $o_e^{(i)}$ from a Case Base. If a similar expert state $o_e^{(j)}$ exists, the position $j$ of this expert state in the expert trajectory serves as a (positive) reward signal for the agent. If no similar expert state exists, the agent receives a penalty. Thus, the agent is trained to reach similar states to the expert states. We tested our approach on four typical domains of Reinforcement Learning, where in every case only one tenth of an expert trajectory was available to the agent. The results show that CB-IRL was able to learn a near-optimal policy, often better than GAIfO, which had access to the full expert trajectory and was trying to imitate the expert at action level.
\newpage
\bibliographystyle{alpha}
\section{Introduction}
The revised edition of a monograph devoted to the two-band theory of superconductivity \cite{1}, published to honor the $80^\text{th}$ birthday
of Professor Vsevolod Moskalenko (September 26, 2008), provides us with the opportunity to make some comments concerning two classical papers,
published some 50 years ago, devoted to the generalization of the BCS theory to metals with overlapping energy bands at the Fermi level. The first one
was submitted to a Russian journal, \textit{Fizica Metallov i Metallovedenia}, and was published on October 15, 1959 \cite{2}; its English
translation, in \textit{Physics of Metals and Metallography} \textbf{8}, 25 \cite{3}, \cite{4}, was issued in June 1961. The second one, by H. Suhl,
B.T. Matthias and L.R. Walker, was published in \textit{Physical Review Letters} \textbf{3}, 552, on December 15, 1959 \cite{5}. So, these papers were
written not only independently, but also in two countries separated by the Iron Curtain, in the years of the Cold War: one of them in the USSR, the
other one in the USA.
The main goal of this article, which contains two parts, is to present "the untold story" of Moskalenko's paper. The first part provides a short
analysis of the scientific content of the two papers. The second part is devoted to some of Moskalenko's recollections describing the scientific life
in the Soviet Union in the '50s.
\section{The beginning of the two-band theory of superconductivity}
The two-band theory of superconductivity was created independently in two papers, authored by Soviet and American physicists, as already said
in the Introduction. We shall give here a short description of these papers.
The starting point of Moskalenko's paper \cite{2} is the remark that the metals which display a superconducting transition have overlapping energy
bands, a fact that had been taken into consideration by neither the BCS nor the Bogoliubov theory. So, the Hamiltonian describing the metal has a
free term, with contributions from electrons belonging to two bands (the chemical potential being renormalized by the interaction), and an interaction
term. This last term contains a phonon-mediated intra-band, as well as inter-band, electron-electron interaction. The fermionic operators entering
this Hamiltonian undergo a Bogoliubov transformation, and, for the new Hamiltonian, a diagrammatic method due to Bloch and de Dominicis is used in order
to calculate the thermodynamic potential, to first order in the interaction potential. A compensation theorem, similar to that given by Zubarev
and Tserkovnikov, is used.
The quasi-particle energy and the gaps in the energy bands are obtained, and also the critical temperature. If the inter-band interaction is taken into
account, both subsystems of electrons (belonging to the two bands) undergo the superconducting transition at the same temperature. If the inter-band
interaction is neglected, each subsystem has its own critical temperature; so, on lowering the temperature, the subsystem with the highest critical
temperature undergoes the transition first; after further cooling, the second subsystem undergoes the transition too, at a lower critical
temperature.
The jump in specific heat, the critical magnetic field and the loss in energy due to the superconducting transition are calculated.
The band with the highest density of states near the Fermi surface plays the dominant role in determining the parameters of the superconducting state.
The conditions to be fulfilled for the occurrence of the superconducting state are less rigid than in the BCS case. The physics of the problem is
obtained at any temperature between 0 and $T_c$.
The basic ideas of this paper have been developed by Moskalenko and his co-workers at the Institute of Applied Physics of the Academy of Sciences of
the Moldovan SSR (after 1991, the Academy of Sciences of the Republic of Moldova); for a review see the aforementioned monograph.
A new English translation of Moskalenko's paper is available \cite{4}. This paper is interesting both for historical reasons and for its recent
applications. It (and its developments \cite{1}) describes the superconductivity of the heavy-fermion compounds CeTIn$_5$ (T = Ir, Rh, Co), which become
superconductors at high pressure; this effect was discovered in 2000 \cite{6}, \cite{7}. Another interesting situation is provided by a simple
binary compound, MgB$_2$. The discovery of superconductivity in MgB$_2$ at $T_c = 39$ K, almost twice the critical temperature of other simple
intermetallic compounds, was an "unexpected gift" for low-temperature physics. It is, clearly, a two-band superconductor, the ratio of the two gaps
being 2.6 \cite{9}. Its behavior is very well described by Moskalenko's model; the compound is intensively studied, both in 3D- \cite{10}, \cite{11},
\cite{12}, \cite{13} and 2D \cite{14}, \cite{15}, \cite{16}.
In the paper of Suhl et al. \cite{5}, the electronic Hamiltonian is essentially the same; the authors perform a similar Bogoliubov transformation and,
from the diagonalization conditions, obtain the critical temperature and the values of the energy gaps. No other physical results (jump in specific heat,
critical magnetic field, calculation of the thermodynamic potential) are given. However, due to the delay in the publication of Moskalenko's paper, the
Western scientific community perceived the paper of Suhl et al. as the first one proposing the two-band theory of superconductivity.
\section{Recollections of a former Bogoliubov's PhD student:\\ excerpts from a discussion with Acad. Vsevolod A.
Moskalenko}
\vspace{2mm}
\textit{How did you become one of Bogoliubov's PhD students?}
\vspace{2mm}
\textbf{Acad. Vsevolod Moskalenko:} In 1950, the paper \textit{The adiabatic theory of interaction of an elementary particle with a quantum field},
by Bogoliubov, devoted to the polaron problem, was published in the \textit{Ukrainian Journal of Mathematics}. At that time, this was a very modern
problem. It was initiated by Landau and developed by Pekar. Using Landau's ideas, Pekar had shown that the elementary particle, I mean the electron,
polarizes the atoms around it and, in this way, "is digging its well", producing an attractive potential. So, the electrons become dressed
particles, with a fluctuating movement through the crystal. However, the problem was put incorrectly, in the sense that the translational and
fluctuational movements were not properly treated. Bogoliubov formulated a new perturbational approach. Being a man of high mathematical culture, he
developed a very elegant method, which impressed me very much. After having read the article, I decided that I absolutely had to find this man.
\vspace{2mm}
\textit{What were you doing at that time?}
\vspace{2mm}
\textbf{AVM:} I was a student in the last grade, the $5^\text{th}$, of the Faculty of Physics at Chi\c{s}in\u{a}u University. This was the first
series of students.
\vspace{2mm}
\textit{So, you started the faculty in ....}
\vspace{2mm}
\textbf{AVM}: \ldots in 1946.
\vspace{2mm}
\emph{It was a very hard time \ldots}
\vspace{2mm}
\textbf{AVM}: It was, indeed. There was a terrible famine, and I was officially registered as dystrophic, with the right to receive a bowl of cereal
soup daily. I was extremely weakened and not able to walk by myself; it
was a colleague of mine who was helping me to go to the refectory. The studies were free, in general, but my brother and I had to pay the
taxes. This was because we had lived "under the bourgeois regime" and "under fascist occupation" (in Romania; a part of the present-day Republic of
Moldova belonged to Romania between 1918-1940 and 1941-1944, editor's note). Our father had been deported in 1940, and he died in the Gulag; so, we were
the sons of an "enemy of the people". This is why my brother and I were obliged to work during our studies, as laboratory assistants at the University.
We could not attend the courses, we could not properly prepare for the exams .... it was a very hard time, indeed.
\vspace{2mm}
\textit{So, how did you reach Bogoliubov?}
\vspace{2mm}
\textbf{AVM:} In 1951, I have decided to go to Moscow, to find him. I went to "Steklov Institute", but I could not find Bogoliubov; however I met his
coworker, Tyablikov, who was a very kind and helpful person.
\vspace{2mm}
\textit{You did not write to him, to announce your visit?}
\vspace{2mm}
\textbf{AVM:} No .... I was not daring .... I went to "Steklov" every year, but only in 1956 could I find Bogoliubov. There happened to be a seminar;
I attended it, and afterwards Bogoliubov asked me: "Please, what would you like to do?" I responded that I wanted to study Feynman's methods; I
said: "I know how to tackle products of two operators
(bilinear Hamiltonians, editor's note), but here (in "Steklov", editor's note), you should know how to tackle products
of four operators, too .... and I would like to learn this issue ...." In the room were also Tyablikov, Zubarev,
Tserkovnikov, Vladimirov, .... When I finished, they remained silent. "So, what shall we do with him?" - asked
Bogoliubov; they remained silent again. Then, Bogoliubov took a sheet of paper and wrote down: "Accepted for PhD,
Bogoliubov".
\vspace{2mm}
\textit{Did you remain in Moscow?}
\vspace{2mm}
\textbf{AVM:} No, I had to come back to Chi\c{s}in\u{a}u. I tried to prepare the papers in order to get a PhD scholarship. But it
was impossible to speak to the rector; his secretary did not allow me to enter, she was treating us like zeros, like no one. However, when they found
out that I had been accepted by Bogoliubov, the situation changed. I was enrolled as a PhD student, but only for one year. My colleagues without
"political problems" were accepted for three years.
\vspace{2mm}
\textit{When did you start the PhD preparation?}
\vspace{2mm}
\textbf{AVM:} In September 1957. I met Bogoliubov in his office at "Steklov", and I told him: "Nikolai Nikolaevich, I arrived." And he responded:
"Very well. From now onwards, you will speak to me in the language of diagrams." And I did not understand what this could mean ....
\vspace{2mm}
\textit{How did you start?}
\vspace{2mm}
\textbf{AVM:} At the beginning, I was finishing my papers, which I had started at Chi\c{s}in\u{a}u. I was in a tight competition with Haken, who was
studying the same problem - the exciton. My first paper in JETF was devoted to the theory of excitons (JETF, \textbf{30}, 959 (1956), editor's note);
this paper was included by Haken in his review on excitons. I was afraid that Haken would finish the work before me, and every morning, when I arrived
at the
library, I would check whether Haken's paper had appeared or not ... Finally, I finished the article and I wanted to show it to Bogoliubov. But he was
surrounded by a lot of people, like a polaron, and I did not dare to disturb him \ldots
\vspace{2mm}
\textit{So, you did not show him your article?}
\vspace{2mm}
\textbf{AVM:} I did, but a little bit later \ldots the institute has a large staircase, with windows, and I could see him when he was descending, to go
home. So, I waited for him on the stairs and showed him the paper. He took the paper, read it quickly and told me, showing an equation: "Look,
here the dead dog is buried." And nothing more. And I was thinking and working again, and after three months I found the error \ldots
\vspace{2mm}
\textit{So, he did not use to explain much to you \ldots}
\vspace{2mm}
\textbf{AVM:} Not much ... when some colleagues tried to ask him how to proceed, he responded: "What benefit would you have if I told you what
to do?" However, for me, the mere fact that I could stay around such a man was a great happiness. I avoided, as much as possible, consuming his
time. This is why I used to wait for him on the stairs and to have short discussions there. But he was a democrat. I shall give you an example.
During my PhD, we, the
youngest researchers, were working in the same room. And Bogoliubov, when he was leaving the Institute to go home, used to come to each of us, to
shake hands and to say: "I am greeting you, I am greeting you ..." Of course, I was in the most distant place ... and once it happened that he
passed near all my colleagues, and was about to get out, but he noticed that he had not shaken hands with me; then, he came back, went to my desk
and said to me "I am greeting you". I shall keep in my heart forever this "I am greeting you". Especially when you compare with the atmosphere in
Chi\c{s}in\u{a}u, where if you go to the rector's office, nobody pays any attention to you ... Bogoliubov was a very direct man, sometimes he was kidding,
but very seldom. He was very cultured, knowing very well Russian literature; he used to say, by heart, long quotations.
He was speaking very wittily, and often I had to ask somebody to explain to me what he really meant.
So, in my life, I had the chance of knowing a giant, Bogoliubov, and also a man of highest moral attitude - Tyablikov.
Tyablikov used to say the truth right to your face; this was very rare. He was elegant, handsome, slim. He was very kind, very helpful, but he was helping
people discreetly, without speaking about this; he was trying to help you, and not to
let you know about this. I learned a lot of things from him.
\vspace{2mm}
\textit{But coming back to the previous question, how did you start to work in superconductivity?}
\vspace{2mm}
\textbf{AVM:} I shall tell you. In the autumn of '57, everybody was discussing superconductivity. Before that, every year - in '55, in '56 - a
theory of superconductivity was published, and all of them were wrong. The theories of Fr\"{o}hlich, of Schaffroth, of Blatt.... when the preprint of
the BCS article arrived, the people said "look, a new wrong
theory", and they threw it away .... Of course, Cooper's paper was well known, and I personally heard Landau saying that he had never seen anything
more incomprehensible than this paper. It was a great excitement, with a lot of discussions and seminars: at Lomonosov University; at
Steklov, Bogoliubov's seminar; at the Institute of Physics of the Academy of Sciences, Ginzburg's seminar; of course Landau's seminar and the best
rated officially, Kapitza's seminar. People like Landau, Fock, Pomeranchuk, Gorkov, Abrikosov were attending such seminars. Bogoliubov presented his
theory, including the Fr\"{o}hlich interaction. BCS theory took into consideration only the electron gas, the interaction with
phonons being replaced with an effective interaction between electrons; but Bogoliubov considered the electron gas, the phonon gas, and the
Fr\"{o}hlich interaction. In this atmosphere, I decided to change the domain of my
research and to study superconductivity. My colleagues told me, "What are you doing? You are here for only one year, and you drop your familiar
themes and start something absolutely new?" It was not a wise decision, but it was impossible for me to act otherwise.
\vspace{2mm}
\textit{So, how did you really start?}
\vspace{2mm}
\textbf{AVM:} It was in May '58. Bogoliubov and Tyablikov were discussing near the blackboard, and Bogoliubov was saying: "You see, these theories
(BCS and Bogoliubov's, editor's note) are made for some ideal gases; you cannot see the dependence on real metals, on their parameters; we should do a
theory for real metals." I was sitting somewhere in
the room, listening. It was the time when Tyablikov was preparing himself for holidays. He used to go in June, alone,
to Siberia, just with a gun, in those frightening forests ...
\vspace{2mm}
\textit{In the taiga?}
\vspace{2mm}
\textbf{AVM:} In the taiga, where the villages are 100 km away from each other, and the bears smell your traces and, if you
see a man, you must ask yourself whether he will shoot you, or you should shoot him ... And Tyablikov tells me, "Vsevolod Anatolievich, come with me to the
taiga, I shall pay the ticket for you, we shall go together to Kamenaya - Tonguska", where the meteorite had fallen ... "Thank you so much, Sergei
Vladimirovich, but I am here for only one year, so how could I go for two months to the Siberian forests? All the more so since my wife and my two girls
will arrive, and I must be here to welcome them." "Very well," said Tyablikov, "you will take up this problem" (the superconductivity of real metals,
editor's note).
\vspace{2mm}
\textit{You did not regret not to join Tyablikov?}
\vspace{2mm}
\textbf{AVM:} You know, science was the priority. So, I remained in Moscow; it was very hot, and I did not know German, and I had to read the
theory of metals, of Sommerfeld, and "Zeitschrift", and "Annalen der Physik"...
\vspace{2mm}
\textit{They were not translated into Russian?}
\vspace{2mm}
\textbf{AVM:} No ... and no textbooks, which are now so abundant. I was working hard, it was very hot, and I was not understanding ... I was trying
to use the Umklapp processes, but I could not achieve anything.... and September was coming; on September 15 Tyablikov would be back, and how could I
meet him with empty hands, when he told me: "take up the problem"? You know, such an imperative is terrible. I was in a very tense state.... and I put
together all my forces, and something more, and I said to myself: I must do something..... And in this way, the two-band theory appeared. I got the
idea, and in two weeks I wrote the paper. When Tyablikov came back, the paper was ready.
\vspace{2mm}
\textit{Where did you submit the paper?}
\vspace{2mm}
\textbf{AVM:} I submitted it to JETF (Journal of Experimental and Theoretical Phy\-sics, the most prestigious Soviet journal of physics, editor's
note); between September and October, it was rejected.
\vspace{2mm}
\textit{Why?}
\vspace{2mm}
\textbf{AVM:} "It does not present interest".
\vspace{2mm}
\textit{Who said this?}
\vspace{2mm}
\textbf{AVM:} The referee was secret, of course.
\vspace{2mm}
\textit{It was not possible to ask for another referee? Even if the article was considered to be very good, by people like Tyablikov and Bogoliubov?}
\vspace{2mm}
\textbf{AVM:} No way. So we decided to send the paper to Sverdlovsk, at "Physics of metals and metallography", whose chief editor was Vonsovskii (at
JETF, the chief editor was Kapitza, and the deputy was Evgueni Lifshitz, editor's note). The paper was submitted in October 1958, and published in
October 1959. But Suhl, Matthias and Walker submitted their paper to Physical Review Letters on November 15, 1959, and it was published on December 15,
1959.
\vspace{5mm}
\textit{As already mentioned, the English translation of that volume of the "Physics of metals and metallography" appeared in June 1961, having a
quite limited circulation. This is why, for Western scholars, the paper proposing the two-band theory was the paper of Suhl et al., and Moskalenko's
paper passed largely unnoticed. (editor's note).}
\section{Introduction}
While laboratory experiments, solar system tests and cosmological
observations have all been in complete agreement with General Relativity
for now almost a century, these bounds do not eliminate the
possibility for the graviton to bear a small hard mass $m\lesssim
6\times 10^{-32}\,$eV, \cite{Goldhaber:2008xy}. Indeed,
the main obstacle to giving the graviton a mass lies in the
theoretical constraints rather than the observational ones, as
explicit non-linear realizations of massive gravity are
hard to construct.
The Dvali-Gabadadze-Porrati (DGP) model is the first realization of soft massive gravity, where
the graviton can be thought of as a resonance, or a superposition of massive
modes \cite{DGP}. This model was then extended to higher
dimensions, \cite{Gabadadze:2003ck,cascade}, where gravity becomes
even weaker at large distances, and could exhibit a ``degravitation"
mechanism, by which the cosmological constant could be large but
gravitate weakly on the geometry \cite{degravitation}.
Such a degravitation mechanism is also ``expected" to be present if
the graviton bears a hard mass. An explicit
realization of a theory of a hard mass gravity was proposed in \cite{Gabadadze:2009ja}, which appeared
while this work was in progress, and relies on the same mechanism.
This framework is based on the presence of a ``spurious" compact extra dimension on which
we impose a Dirichlet boundary condition at one end and a Neumann (Isra\"el) condition at the
other, where our 4d world stands.
The techniques used throughout this study, in particular the introduction of a St\"uckelberg field to restore 4d gauge invariance, are in no way original to this work. However, the introduction of the spurious extra dimension provides a geometrical interpretation of massive gravity, for which non-linearities can be tracked down explicitly. Furthermore, this model is of high interest for the study of degravitation, providing a framework where explicit solutions with a cosmological constant can be understood and more general cosmological solutions can be studied numerically.
We also show that when diffeomorphism is broken along the extra
dimension, one recovers an effective 4d theory of
gravity where the graviton has a constant mass.
Moreover, this class of model can
also accommodate a fully 5d diffeomorphism invariant theory for
which the 4d effective graviton has a soft mass and is free of any
ghost-like instability at the non-linear level.
We proceed as follows: We first show in section \ref{sec:Scalar} how our mechanism works for a
scalar field toy-model before presenting the full spin-2 analogue in section \ref{sec:Gravity}.
We then recover the expected gravitational exchange amplitude
between two conserved sources for a theory of massive gravity in section \ref{sec:EffAction} and
derive the decoupling limit for a specific class of models where higher extrinsic curvature terms are
present in section \ref{sec:Decoupling}, while the decoupling limit in the more general case is
deferred to later studies. We also discuss the number of physical degrees of freedom and comment on the stability (presence of ghosts) in this class of models.
We then present in section \ref{sec:FlatSol} solutions capable of ``hiding" a 4d
cosmological constant by curving the extra dimension and keeping the
3-brane flat, which is of high importance for the degravitation mechanism.
Finally, we discuss soft massive gravity in the appendix \ref{sec:Appendix}, which is obtained when restoring 5d gauge invariance along the extra dimension.
\section{Scalar Field Toy model}
\label{sec:Scalar}
Before diving into the technical subtleties of the
gravitational case, we first focus on the core of the idea using
a scalar field toy-model.
Let $\varphi(x^\mu,\omega)$ be a massless scalar field living in a 5d space-time $(x^\mu, \omega)$ where
the coordinates $x^\mu$, $\mu=0,\cdots,3$ describe our four transverse dimensions, while the fifth
coordinate $\omega$ is compact, $0\le\omega\le \bar \omega$, and we choose the $\omega$ coordinate to be dimensionless.
We explicitly break the 5d Lorentz invariance by omitting the kinetic term along the transverse direction in the bulk
\begin{eqnarray}
S= \int_0^{\bar \omega}\mathrm{d} \omega \, \mathrm{d}^4x \(\frac{M_5^4}{2} (\partial_\omega\varphi)^2 +\delta(\bar \omega-\omega)\mathcal{L}_4 \) \,,
\end{eqnarray}
while these kinetic terms are present on the brane:
\begin{eqnarray}
\mathcal{L}_4=-\frac{M_4^2}{2} \varphi \Box \varphi+\varphi J(x)\,,
\end{eqnarray}
where $\Box$ is the 4d d'Alembertian and
$J$ the source localized on the 3-brane.
A shift in the brane position $\bar \omega$ is equivalent to rescaling the 5d scale $M_5$ and without loss of generality,
we set $\bar \omega\equiv1$ and $M_5^4=M_4^2 m^2$, where $M_4$ is the 4d Planck scale and $m$ is a mass parameter.
The boundary condition on the brane at $\omega=1$ is set using the standard Neumann or Isra\"el Matching
Conditions, while at $\omega=0$, we choose to impose the Dirichlet boundary condition:
\begin{eqnarray}
\varphi(x,\omega)\big|_{\,\omega=0}&=&0\\
-M_4^2 m^2 \partial_\omega \varphi \big|_{\, \omega=1}&=&-M_4^2 \Box \varphi+J\,.
\end{eqnarray}
Solving the bulk equation of motion with the previous boundary condition, the field profile is therefore
$ \varphi(x,\omega)=\bar \varphi(x) \omega $, where $\bar \varphi$ is the induced value of the field on the brane, satisfying the 4d effective
equation of motion on the brane,
\begin{eqnarray}
M_4^2(\Box-m^2)\bar \varphi=J(x)
\end{eqnarray}
and hence behaving as a massive scalar field from a 4d
point of view. Needless to say, this is a very convoluted way to
obtain a massive scalar field theory, but for gravity, it would be
extremely difficult to do so otherwise.
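For completeness, the profile quoted above follows in two lines from the bulk variation (a short sketch filling in the intermediate steps; nothing beyond the action and boundary conditions above is used). The bulk equation of motion is $\partial_\omega^2\varphi=0$, so
\begin{eqnarray}
\varphi(x,\omega)=A(x)+B(x)\,\omega\,,
\end{eqnarray}
and the Dirichlet condition at $\omega=0$ sets $A\equiv 0$, leaving $\varphi=\bar \varphi(x)\,\omega$ with $\partial_\omega \varphi\big|_{\,\omega=1}=\bar \varphi$. Substituting into the Isra\"el condition at $\omega=1$,
\begin{eqnarray}
-M_4^2 m^2\, \bar \varphi=-M_4^2 \Box \bar \varphi+J
\quad\Longleftrightarrow\quad
M_4^2(\Box-m^2)\bar \varphi=J\,,
\end{eqnarray}
which reproduces the 4d massive equation of motion quoted above.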
\section{Massive Gravity}
\label{sec:Gravity}
The extension of this model to a spin-2
field is straightforward. We consider a 4d metric $q_{\mu \nu}(x^\mu,\omega)$ living
in the previous 5d space-time. 5d diffeomorphism is here again explicitly broken,
but 4d gauge invariance is preserved using the standard trick of introducing a St\"uckelberg
field $M^\mu$ with $N^\mu(x,\omega)=\partial_\omega M^\mu(x,\omega)$,
which shifts under a 4d gauge transformation $x^\mu \to \tilde x^\mu(x,\omega)$ as
\begin{eqnarray}
q_{\mu \nu} &\to& \tilde q_{\mu \nu}=q_{\alpha \beta} \frac{\partial x^\alpha}{\partial \tilde x^\mu} \frac{\partial x^\beta}{\partial \tilde x^\nu}\,,\\
N^\mu &\to& \tilde N^\mu=N^\alpha\frac{\partial \tilde x^\mu}{\partial x^\alpha}+\partial_\omega \tilde x^\mu
\end{eqnarray}
so that the ``extrinsic curvature"
\begin{eqnarray}
K_{\mu \nu}=\frac 12 \mathcal{L}_n q_{\mu \nu}=\frac 12 \(\partial_\omega q_{\mu \nu}-D_{(\mu}N_{\nu)}\)
\end{eqnarray}
transforms as a 4d tensor. Hereafter, the 4d metric $q_{\mu \nu}$ is used to express the covariant derivative $D_\mu$ as well as
to raise and lower the indices.
Similarly as for the scalar field toy-model, we then construct the 5d bulk action by
considering the equivalent of the
``5d curvature" $R_{5}[q,M]=R_4[q]+K^2-K^\mu_{\, \nu} K^\nu_{\, \mu}$
but omitting the contribution from the 4d kinetic term $R_4$
\begin{eqnarray}
\label{toymodel1}
S_K= \frac{M_4^2 m^2}{2}\int_0^1 \mathrm{d} \omega \, \mathrm{d}^4x\sqrt{-q} \(K^2-K^\mu_{\, \nu} K^\nu_{\, \mu}\)\,.
\end{eqnarray}
Notice that the specific combination $K^2-K^\mu_{\, \nu} K^\nu_{\, \mu}$ that appears when expressing the 5d curvature in terms of the 4d one is precisely what gives rise to the specific Fierz-Pauli combination, which is the only ghost-free linear realization of massive gravity that respects 4d diffeomorphism invariance.
The 4d curvature is nevertheless present on the brane at $\omega=1$, which carries the action
\begin{eqnarray}
S_4= \int \mathrm{d}^4x\sqrt{-q} \(\frac{M_4^2}{2}R_4-\mathcal{L}_4\)\,,
\end{eqnarray}
where $\mathcal{L}_4$ is the Lagrangian for matter fields confined to the 3-brane.
Working in terms of the two dynamical variables $q_{\mu \nu}$ and $M^\mu$,
the Isra\"el matching conditions
are used to determine the boundary condition on the brane at $\omega=1$, while we impose Dirichlet boundary condition at $\omega=0$:
\begin{eqnarray}
q_{\mu \nu}(x^\alpha,\omega)\big|_{\, \omega=0}\equiv \eta_{\mu \nu}\ \ {\rm and}\ \ M^\mu\big|_{\, \omega=0}\equiv 0\,.
\end{eqnarray}
Notice that if we had restricted ourselves to theories that only
have the restricted gauge symmetry $x^\mu \to \tilde x^\mu(x)$,
the action \eqref{toymodel1} would be
gauge invariant without the need of the St\"uckelberg field, but the
Dirichlet boundary condition would break 4d gauge invariance.
The extended symmetry $x^\mu \to \tilde x^\mu(x,\omega)$ and the St\"uckelberg
field therefore play a crucial role.
Differentiating the bulk action with respect to the St\"uckelberg field yields the ``Codacci" equation
\begin{eqnarray}
D_\mu K^\mu_{\, \nu}-\partial_\nu K=0\,,
\end{eqnarray}
while differentiating the action with respect to the metric leads to the modified ``Gauss" equation:
\begin{eqnarray}
&&\hspace{-15pt}M_4^2
m^2\left\{\hspace{-2pt}\mathcal{L}_n\hspace{-2pt}\(K^\mu_{\, \nu}-K\delta^\mu_{\, \nu}\)+K K^\mu_{\, \nu}
-\frac 12 \(K^2+K^\alpha_{\, \beta}K^\beta_{\, \alpha}\)\hspace{-2pt}\delta^\mu_{\, \nu}\right\}\nonumber\\
&&\hspace{10pt}=\delta(\omega-1)\(T^\mu_{\, \nu}-M_4^2 G^{(4)}{}^\mu_{\, \nu}\)
\end{eqnarray}
where the Lie derivative of a $(1,1)$-tensor is
\begin{eqnarray}
\mathcal{L}_nF^\mu_{\, \nu}=\(\partial_\omega-N^\alpha\partial_\alpha\)F^\mu_{\, \nu}+F^\alpha_{\, \nu} \partial_\alpha N^\mu-F^\mu_{\, \alpha}\partial_\nu N^\alpha\,.\
\end{eqnarray}
In the absence of any gravitational source $\mathcal{L}_4$, the field $M^\mu$ vanishes
and the 4d metric is flat $q_{\mu \nu}=\eta_{\mu \nu}$ as in standard general relativity.
In what follows, we show that this theory behaves
as a theory of massive gravity at the linear level.
\section{Effective Boundary Action}
\label{sec:EffAction}
We derive in this section the effective action on the 3-brane for small perturbations around the vacuum solution,
$q_{\mu \nu}=\eta_{\mu \nu}+h_{\mu \nu}(x,\omega)$, sourced by a 4d stress-energy tensor $T_{\mu \nu}$ localized on the brane at $\omega=1$. We follow the same approach as that used in \cite{Luty:2003vm}.
In terms of the variable $H_{\mu \nu}$,
\begin{eqnarray}
H_{\mu \nu}= h_{\mu \nu}-\partial_{(\mu}M_{\nu)}=h_{\mu \nu}-(\partial_\mu M_\nu+\partial_\nu M_\mu)\,,
\end{eqnarray}
the bulk action is then of the form
\begin{eqnarray}
\mathcal{L}_K=-\frac{M_4^2 m^2}{8} \ \partial_\omega H^{\mu \nu}\, \partial_\omega
(H_{\mu \nu}-H_4\eta_{\mu \nu})\,,
\end{eqnarray}
where $H_4=H^\alpha_\alpha$.
The field $H_{\mu \nu}$ is hence linear in the fifth variable $\omega$, and the Dirichlet boundary condition at $\omega=0$ sets
\begin{eqnarray}
H_{\mu \nu} (x^\mu,\omega)= \bar H_{\mu \nu}(x^\mu)\, \omega\,,
\end{eqnarray}
where hereafter barred quantities represent the induced value of the fields on the brane. Using this expression in $\mathcal L_K$ leads, after integration by parts, to the 4d boundary term at $\omega=1$:
\begin{eqnarray}
\mathcal{L}^{\rm bdy}_{K}=-\frac{M_4^2 m^2}{8} \ \bar H^{\mu \nu}\, (\bar H_{\mu \nu}-\bar H_4 \eta_{\mu \nu})\,,
\end{eqnarray}
which is precisely the mass term of a standard Fierz-Pauli massive theory of gravity at the linearized level.
To this induced boundary action, we add the brane Einstein-Hilbert term
\begin{eqnarray}
\mathcal{L}^{\rm bdy}_{R_4}\hspace{-3pt}=\hspace{-3pt}\frac{M_4^2}{8} \Big[\bar h^{\mu \nu}\Box (\bar h_{\mu \nu}-\bar h_4\eta_{\mu \nu})
+2(\partial_\mu \bar h^\mu_{\, \nu})^2+\bar h_4 \partial_\mu\partial_\nu \bar h^{\mu\nu} \Big]\nonumber \,,
\end{eqnarray}
which provides the kinetic term for the massive Fierz-Pauli graviton.
Since both boundary actions are invariant under the gauge transformation $x^\mu\to x^\mu+\xi^\mu(x) \omega$, one can fix this gauge freedom by adding a gauge fixing term similarly as in \cite{Luty:2003vm},
\begin{eqnarray}
\label{gf}
\mathcal{L}^{\rm bdy}_{\rm gf}=-\frac{M_4^2}{4} \,
\big(\partial_\alpha\bar h^\alpha_{\, \mu}-\frac 12 \partial_\mu \bar h_4- m^2 \bar M_\mu\big)^2 \,.
\end{eqnarray}
The resulting boundary action is then
\begin{eqnarray}
&&\hspace{-18pt}\mathcal{L}^{\rm bdy}_{\rm eff}=\frac{M_4^2}{8}\Big[
\bar h^{\mu\nu}(\Box-m^2)\bar h_{\mu \nu}-\frac 12 \bar h_4 (\Box-2m^2)\bar h_4\ \ \\
&&\hspace{-5pt}+m^2\(F_{\mu \nu}^2+ \bar h_4 \partial_\mu \bar M^{\mu}
+2m^2 \bar M_\mu^2\)\!
\Big]+\frac 12 \bar h_{\mu \nu} T^{\mu \nu}\nonumber\,,
\end{eqnarray}
with $F_{\mu \nu}=\partial_{[\mu} \bar M_{\nu]}$ the field strength of $\bar M_\mu$, and the second line corresponds to
the action of a Proca field coupled to $\bar h_{\mu \nu}$. Notice that in the absence of this coupling, the Proca or St\"uckelberg field would be irrelevant.
When coupling these fields to conserved matter, only the scalar mode in the St\"uckelberg field is excited, and the
resulting gravitational exchange amplitude between two sources is then
\begin{eqnarray}
\mathcal A\sim-\frac{2}{M_4^2}\int \mathrm{d} ^4 x \, T'^{\mu\nu}\frac{1}{\Box-m^2}\(T_{\mu \nu}-\frac 13 T\eta_{\mu \nu}\)\,,
\end{eqnarray}
corresponding to the expected gravitational exchange amplitude due to a massive graviton.
In particular, we notice the standard factor $1/3\, T$ instead of $1/2\, T$ which
appears in massive gravity and signals the presence of an extra
helicity-0 mode hidden in the St\"uckelberg field.
As observed by van Dam-Veltman and Zakharov (vDVZ), this factor remains $1/3$
even in the massless limit and is at the origin of the well-known
vDVZ discontinuity, \cite{vanDam:1970vg}. The
resolution to this puzzle lies in the observation that close enough to any source,
the extra scalar mode is strongly
coupled, \cite{Vainshtein:1972sx}. Non-linearities dominate over the linear term and
effectively freeze the field. This is most easily understood by
studying the decoupling limit.
\section{Decoupling limit}
\label{sec:Decoupling}
Following the same prescription as in \cite{Luty:2003vm,ArkaniHamed:2002sp}, we
work from now on in the high energy limit $\Box\gg m^2$, and focus on the scalar
mode, $\bar M_\mu = -\partial_\mu \pi$. The helicity-0 mode then decouples when changing variables to $h'_{\mu \nu}=\bar h_{\mu \nu}+m^2 \pi \eta_{\mu \nu}$, and the effective boundary action simplifies to
\begin{eqnarray}
\label{effactiondec}
\mathcal{L}^{\rm bdy}\simeq \frac{M_4^2}{4}\Big[\frac 12 h'^{\mu\nu} \Box ( h'_{\mu \nu}-\frac 12 h'_4 \eta_{\mu \nu})
+3 m^4 \pi \Box \pi\Big]\,.
\end{eqnarray}
The small kinetic term of $\pi$ is precisely what resolves the vDVZ
discontinuity, similarly as in DGP, \cite{Luty:2003vm}.
In the small mass limit, higher order interactions in $\pi$ dominate over the quadratic term and effectively freeze the extra excitations out.
To see the strong coupling at work, let us
find out the most important interaction present beyond this quadratic action. To do so, we work in terms of the canonically normalized variables $\hat h_{\mu \nu}=M_4 h'_{\mu \nu}$ and $\hat \pi=m^2 M_4\pi$. A general bulk interaction between $\pi$ and $h'_{\mu \nu}$ will give rise to a boundary term of the form
\begin{eqnarray}
\label{interactions}
\mathcal{L}_{{\rm bdy}}^{(p,q)}\sim M_4^2 m^2 \(\frac{\partial^2 \hat \pi}{m^2 M_4}\)^q\(\frac{\hat h _{\mu \nu}}{M_4}\)^p\,.
\end{eqnarray}
We immediately see that interactions with the helicity-2 mode $h'_{\mu \nu}$ come with a large coupling scale and will hence be suppressed. Setting $p=0$, the strong coupling scale for this kind of interaction is
\begin{eqnarray}
\Lambda_{q}\sim M_4 \(\frac{m}{M_4}\)^{\frac{2-2q}{4-3q}}\,.
\end{eqnarray}
The lowest interaction scale therefore occurs for cubic interactions $q=3$, as expected from \cite{ArkaniHamed:2002sp},
giving rise to the strong coupling scale
\begin{eqnarray}
\Lambda_5=\(m^4 M_4\)^{1/5}\,.
\end{eqnarray}
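The scale $\Lambda_{q}$ quoted above follows from simple power counting (a short sketch filling in the dimensional analysis). Writing the $p=0$ interaction as
\begin{eqnarray}
M_4^2 m^2 \(\frac{\partial^2 \hat \pi}{m^2 M_4}\)^q=\frac{(\partial^2 \hat \pi)^q}{m^{2q-2}\,M_4^{q-2}}\equiv \frac{(\partial^2 \hat \pi)^q}{\Lambda_q^{3q-4}}\,,
\end{eqnarray}
where we used that $(\partial^2\hat \pi)^q$ has mass dimension $3q$, we read off $\Lambda_q=\(m^{2q-2}M_4^{q-2}\)^{1/(3q-4)}=M_4\(m/M_4\)^{\frac{2-2q}{4-3q}}$. For $m\ll M_4$, the exponent $\frac{2q-2}{3q-4}$ decreases monotonically from $4/5$ at $q=3$ towards $2/3$ as $q\to\infty$, so the lowest scale indeed arises for the cubic interaction, with $\Lambda_{q=3}=\(m^4 M_4\)^{1/5}=\Lambda_5$.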
We can quickly convince ourselves that such cubic
interactions generically exist in a theory of massive gravity, although they are absent in the specific theory at
hand, as the St\"uckelberg field $M_\mu=-\partial_\mu \hat \pi /(m^2 M_4)$ only comes in
at quadratic order in the action. For the cubic interactions with
scale $\Lambda_5$ to be present, the action should include cubic
terms in the St\"uckelberg field such as $(\partial_\mu M_\nu)^3$ not present in
the model considered thus far. However such terms will typically be
present if higher order terms in the extrinsic curvature are
present.
\subsection{In the presence of $K^3$ terms}
Generically, we expect higher orders in the extrinsic curvature to be generated by quantum
corrections, without modifying the linearized arguments provided so far.
Such interactions are typically of the form
\begin{eqnarray}
\label{toymodel2}
\tilde{\mathcal{L}}_{K^3}= \frac{M_4^2 m^2}{2}
\(\alpha K^3-(\alpha+\beta)K K_{\mu \nu}^2+\beta K_{\mu \nu}^3\)\,,
\end{eqnarray}
with $\alpha$ and $\beta$ arbitrary dimensionless
parameters.
We focus on the scalar mode $h_{\mu \nu}=m^2 \Pi
\eta_{\mu \nu}$ and $M_\mu=-\partial_\mu \Pi$, which extends in the bulk as $\Pi=\pi(x)\,
\omega$.
At high energy, these terms contribute to the boundary action with
the following cubic interactions
\begin{eqnarray}
\label{L3}
\hspace{-15pt}\tilde{\mathcal{L}}_{\rm K^3}^{(3)}=\frac{1}{2\Lambda_5^5} \Big(
\alpha (\Box \hat \pi)^3-(\alpha+\beta) (\Box \hat \pi)(\partial_\mu\partial_\nu \hat \pi)^2+\beta(\partial_\mu\partial_\nu \hat \pi)^3\Big)
\end{eqnarray}
which dominate over the quadratic term at the energy scale $\Lambda_5$.
These cubic interactions are precisely the ones expected for a
typical theory of massive gravity in \cite{ArkaniHamed:2002sp,degravitation}, and are the ones responsible for
the Vainshtein mechanism, \cite{Vainshtein:1972sx}.
At least in the decoupling limit, this theory exhibits the
degravitation behavior \cite{degravitation} and therefore represents
a powerful tool to study this mechanism further in a fully non-linear
scenario.
\subsection{General $K^n$ terms}
In more generality, one may expect the extrinsic curvature interactions to come in at the order $n\ge2$. They will then generate interactions of the form
\begin{eqnarray}
\label{toymodeln}
\tilde{\mathcal{L}}_{K^n}\sim M_4^2 m^2
\(\frac{\Box \hat{\pi}}{M_4 m^2}\)^n\sim \frac{1}{\Lambda_{\star}^{3n-4}}(\Box \hat{\pi})^n\,,
\end{eqnarray}
with the strong coupling scale
\begin{eqnarray}
\Lambda_\star=\(m^{\frac{2(n-1)}{n-2}}M_4\)^{\frac{n-2}{3n-4}}\,,
\end{eqnarray}
In particular, we recover the strong coupling scale $\Lambda_\star=\Lambda_5=\(m^4M_4\)^{1/5}$ when extrinsic curvature interactions are included already at cubic order ($n=3$), whereas if the theory is free of such interactions, or $n\to \infty$, the strong coupling scale is $\Lambda_\star=\Lambda_3=\(m^2M_4\)^{1/3}$.
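As a side consistency check (a hypothetical verification script, not part of the derivation; it assumes the sympy library is available), the two limiting cases quoted above can be confirmed symbolically from the exponents of $m$ and $M_4$ in $\Lambda_\star$:

```python
# Hypothetical check (not from the paper): verify the exponents of m and M4 in
# Lambda_star = (m^{2(n-1)/(n-2)} M4)^{(n-2)/(3n-4)} at n = 3 and as n -> infinity.
import sympy as sp

n = sp.symbols('n', positive=True)

# Exponents of m and M4 once the outer power (n-2)/(3n-4) is distributed.
exp_m = sp.simplify(2 * (n - 1) / (n - 2) * (n - 2) / (3 * n - 4))  # -> 2(n-1)/(3n-4)
exp_M4 = (n - 2) / (3 * n - 4)

# n = 3 reproduces Lambda_5 = (m^4 M4)^(1/5).
assert exp_m.subs(n, 3) == sp.Rational(4, 5)
assert exp_M4.subs(n, 3) == sp.Rational(1, 5)

# n -> infinity reproduces Lambda_3 = (m^2 M4)^(1/3).
assert sp.limit(exp_m, n, sp.oo) == sp.Rational(2, 3)
assert sp.limit(exp_M4, n, sp.oo) == sp.Rational(1, 3)
```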
\subsection{Ghosts and physical degrees of freedom}
As soon as higher extrinsic curvature terms are present, they result in interactions that are relevant at the scale
$\Lambda_\star$. In that case, the theory inexorably manifests a ghost at the non-linear
order. This can be seen from the presence of the higher order
derivative operators of the form ``$\Box^2 \pi$" that appear in
the equation of motion of \eqref{L3} or \eqref{toymodeln}.
Such a ghost is expected in a theory
of hard mass gravity, and is usually referred to as the Boulware-Deser ghost, \cite{Boulware:1973my, Creminelli:2005qk}.
This theory has $10$ degrees of freedom in the metric and $4$ in the St\"uckelberg
field, but the gauge invariance makes only 6 of them physical, as in a usual theory of massive gravity around a general background. The St\"uckelberg field contributes 4 additional degrees of freedom, compared to the only 2 present in a theory of massless gravity.
However, when perturbing to first order around flat space-time,
only 5 degrees of freedom are excited, ($M_\mu$ plays the role of a
Proca field, with only 3 degrees of freedom, one of them being the helicity-0 mode $\pi$, while the two helicity-1 modes decouple when considering conserved sources), as expected from a usual Fierz-Pauli massive theory of gravity.
At the non-linear level, the 6th mode is typically excited and propagates a ghost, at least when higher extrinsic curvature terms are present, or in other words when the strong coupling scale is below $\Lambda_3$.
\subsection{In the absence of higher extrinsic curvature terms}
We emphasize however that when the theory is exempt from any higher extrinsic curvature term $K^n$ (with $n>2$),
all interactions with coupling scale below $\Lambda_3=(m^2 M_4)^{1/3}$ disappear.
Indeed, in that case interactions of the form \eqref{interactions} are only possible with $0\le q\le2$, since the St\"uckelberg field only comes in at quadratic order in the action. The associated strong coupling scale is then $\Lambda_3=(m^2 M_4)^{1/3}$, and interactions becoming important at that scale can be of any order.
The situation is then far more subtle. In particular, it has been shown in
\cite{Gabadadze:2009ja} that the Hamiltonian density remains
positive definite for appropriate choice of boundary conditions when
these $K^n$ terms are absent.
Furthermore, the strong coupling scale in this case is the same as in the DGP model \cite{DGP} (or its extension in the appendix), for which no ghost-like instability is manifest non-linearly.
Understanding whether the theory \eqref{toymodel1} has an underlying symmetry that
keeps only 5 physical degrees of freedom non-linearly, or in other words whether
or not the Boulware-Deser ghost manifests itself in that case,
and if so at which scale, therefore deserves more attention and will be
presented in later work, \cite{next}.
Before concluding, we show that this model
of massive gravity can be of great interest for cosmology as it can
accommodate flat solutions in the presence of a cosmological constant on the brane.
\section{Flat Solutions with Tension}
\label{sec:FlatSol}
We show here that such models present solutions which are very
similar to the codimension-2 ``deficit-angle" configurations, which carry a
tension but keep the 4d geometry flat. Indeed, including a
cosmological constant $\lambda_4$ on the brane gives rise
to the following metric profile: $q_{\mu \nu}=a^2(\omega)\eta_{\mu \nu}$ with
\begin{eqnarray}
a^2(\omega)=\frac{\omega+\omega_0}{\omega_0}\,,
\end{eqnarray}
where $\omega_0$ is a positive constant, related to $\lambda_4$ via the
Isra\"el Matching condition:
\begin{eqnarray}
\lambda_4=\frac32 \frac{m^2 M_4^2}{1+\omega_0}\,.
\end{eqnarray}
The 4d geometry on the brane is flat, and the cosmological constant
on the brane is carried by the bulk profile of the metric. Notice
however, that similarly to the deficit-angle case for codimension-2
branes, such solutions only exist when the tension is smaller than a maximal value $\lambda_{max}=\frac 32 m^2
M_4^2$. When higher order terms in the extrinsic curvature are
included, this bound can be increased slightly but not by a significant
order of magnitude.
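The origin of the maximal tension can be read off directly from the matching condition (a one-line check, using only the expressions above):
\begin{eqnarray}
\lambda_4=\frac32 \frac{m^2 M_4^2}{1+\omega_0}<\frac 32 m^2 M_4^2=\lambda_{max}
\quad {\rm for\ all\ }\omega_0>0\,,
\end{eqnarray}
with the bound approached only as $\omega_0\to 0$, where the bulk profile $a^2(\omega)=(\omega+\omega_0)/\omega_0$ degenerates.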
\section{Discussion}
\label{sec:Discussion}
In this paper, we constructed a class of models, first presented in \cite{Gabadadze:2009ja},
giving rise to theories of massive gravity, with hard or soft
masses depending on the details of the setup. This model relies on
the presence of a spurious compactified extra dimension on which we
impose half-Neumann, half-Dirichlet boundary conditions. For
definiteness, we have focused most of this paper on a theory of
massive gravity with a constant mass, for which 5d diffeomorphism is
broken, and refer to the appendix for other kinds of solutions. In
the case of a hard mass, we recover the usual decoupling limit with strong coupling scale $(m^4 M_4)^{1/5}\le \Lambda_\star<(m^2 M_4)^{1/3}$
and show the presence of the Boulware-Deser ghost {\it when higher order terms in the extrinsic curvature are
considered}. When such terms are absent, all
the interactions with coupling scale $\Lambda_\star<(m^2 M_4)^{1/3}$ disappear and the
decoupling limit is more subtle.
In particular, using a Hamiltonian approach, it has been shown in
\cite{Gabadadze:2009ja} that the energy is positive for appropriate
choices of boundary conditions when these $K^n$ terms are absent. It
will therefore be interesting to understand this result in the decoupling limit \cite{next}.
In parallel, this model allows us to understand in more depth several
aspects of massive gravity and degravitation.
In particular, this model would allow us to understand how strong coupling explicitly
works in the case of a spherically symmetric source both when the
higher order interaction terms are present and the expected decoupling
limit is recovered, as in \cite{Babichev:2009jt},
as well as in the more interesting case where these higher order extrinsic curvature terms are
absent.
Furthermore, regardless of the presence of ghosts, this framework will allow us to
understand whether a theory of massive gravity
continues to exhibit the degravitation behavior at the fully non-linear
level and whether it can represent a successful tool to tackle the
cosmological constant problem.
In particular, we have shown the presence of static solutions in the
presence of a cosmological constant, and one should understand
whether or not such solutions are late-time attractors. Finally,
such solutions can only carry a maximal tension, and one
should understand whether this framework can be extended to
accommodate larger tensions, as in \cite{deRham:2009wb}.
\section*{Acknowledgments} I am extremely grateful to G.~Gabadadze, J.~Khoury and A.~J.~Tolley for
very fruitful discussions. This work was supported in part at the Perimeter
Institute by NSERC and Ontario's MRI.
\section*{Appendix: Soft Massive gravity}
\label{sec:Appendix}
To finish, we show in this appendix that when 5d Lorentz invariance is restored, the graviton acquires a soft mass, as in DGP or Cascading Gravity, and is free of Boulware-Deser ghost instabilities.
Indeed, consider the 5d diffeomorphism invariant action
\begin{eqnarray}
S_5=\frac{M_5^2}{2}\int_0^{\bar y} \mathrm{d} y\mathrm{d}^4x\sqrt{-g_5} R_5\,,
\end{eqnarray}
where we now work in terms of the dimensionful direction $y$, which remains compactified. Imposing the Dirichlet
boundary condition at $y=0$ and the Neumann one at $y=\bar y$, the metric perturbations satisfy the following bulk profile in 5d de Donder gauge,
\begin{eqnarray}
h_{AB}(x,y)=\frac{\sinh (y \nabla)}{\sinh (\bar y \nabla)} \bar h_{AB}(x)
\end{eqnarray}
where $\nabla=\sqrt{-\Box}$. In terms of the graviton mass $m(\Box)$,
the gravitational exchange amplitude between two conserved sources at $\bar y$ is
\begin{eqnarray}
\mathcal A\sim-\frac{2}{M_4^2}\int \mathrm{d} ^4 x \, T'^{\mu\nu}\frac{1}{\Box-m^2(\Box)}\(T_{\mu \nu}-\frac 13 T\eta_{\mu \nu}\)\,,
\end{eqnarray}
where the graviton mass is
\begin{eqnarray}
m^2(\Box)=m_5\nabla \coth (\bar y \nabla)\,,
\end{eqnarray}
with $m_5=M_5^3/M_4^2$. In particular, we recover the standard DGP behavior for large $\bar
y$, while the opposite limit gives rise to a constant mass, similar to Cascading Gravity \cite{cascade},
\begin{eqnarray}
\bar y \nabla \gg 1,\ m^2 \to m_5\nabla \hspace{10pt}{\rm and}\hspace{10pt}
\bar y \nabla \ll 1,\ m^2 \to \frac{m_5}{\bar
y}\,.
\end{eqnarray}
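Both regimes follow directly from the asymptotics of $\coth$; as a quick check,

```latex
% For x \gg 1, \coth x \to 1, while for x \ll 1, \coth x \simeq 1/x, so
m^2(\Box)=m_5\nabla\coth (\bar y \nabla)\ \longrightarrow\
\begin{cases}
m_5 \nabla\,, & \bar y \nabla \gg 1 \quad \text{(DGP-like soft mass)}\,,\\
m_5/\bar y\,, & \bar y \nabla \ll 1 \quad \text{(constant mass)}\,.
\end{cases}
```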
Notice that the Dirichlet boundary condition at $y=0$ has projected
out the zero mode, and we do not recover 4d gravity in the infrared
limit, despite having a compactified extra dimension. Had we imposed
the Neumann boundary conditions $\partial_y h_{AB}|_0=0$ or periodic
boundary conditions, $h_{AB}(0)=h_{AB}(\bar y)$, the zero mode
would survive and would be the dominant one in the infrared.
In this case, the decoupling limit arises precisely in the same way
as in DGP \cite{Luty:2003vm}. The main difference from the model
presented in \eqref{toymodel1} is the presence of the lapse, which
plays a crucial role. The $\pi$ mode decouples at the strong coupling
scale $\Lambda_3=(m_5^2 M_4)^{1/3}$ and its equation of motion
is then
\begin{eqnarray}
3 \Box \hat \pi+\frac{1}{\Lambda_3^3}\((\Box \hat \pi)^2-(\partial_\mu \partial_\nu \hat
\pi)^2\)=-\frac{T}{M_4}\,.
\end{eqnarray}
As already hinted, in this limit, where the 5d diffeomorphism is
restored, the theory is free of any ghost-like instability when
working around the standard branch. However, similarly to the DGP model, this will not provide a satisfactory framework for degravitation, since it cannot accommodate stable static solutions in the presence of a tension.
We can check this statement explicitly, by deriving the effective Friedmann equation on the
brane. For that, we consider the bulk metric
\begin{eqnarray}
\mathrm{d} s^2=\mathrm{d} y^2-\frac{1}{1+\kappa y}\, \mathrm{d} t^2+(1+\kappa y)\, \delta_{ij}\mathrm{d} x^i \mathrm{d} x^j\,,
\end{eqnarray}
where $\kappa$ is a free parameter, analogous to the spatial curvature,
which can be scaled to 1. If the brane is located at $y=\bar y(t)$,
the induced extrinsic curvature on the brane is then
\begin{eqnarray}
K_{ij}=\frac{\kappa}{2\sqrt{1-a^2(t)\, \bar y'{}^2}}\
\delta_{ij}\,,
\end{eqnarray}
where $a^2(t)=(1+\kappa\bar y(t))$, and the resulting Friedmann equation in the presence of a fluid
with energy density $\rho$ is
\begin{eqnarray}
M_4^2\(3H^2+\frac{m_5}{2}\sqrt{4H^2+\frac{\kappa}{a^4}}\, \)=\rho\,.
\end{eqnarray}
When $\kappa/a^4\ll H^2$, we recover the intermediate regime analogous to
DGP,
\begin{eqnarray}
M_4^2(3H^2+m_5 H)=\rho\,,
\end{eqnarray}
while in the opposite limit, the
corrections just play the role of a spatial curvature term.
\section{Introduction}\label{sect:intro}
The Veblen functions on ordinals are well-known and commonly used in
proof theory. Proof theorists know that these functions have an
interesting and complex behavior that allows them to build ordinals
that are large enough to calibrate the consistency strength of
different logical systems beyond Peano Arithmetic. The goal of this
paper is to investigate this behavior from a computability viewpoint.
The well-known ordinal $\varepsilon_0$ is defined to be the first fixed point
of the function $\alpha\mapsto\omega^\alpha$, or equivalently $\varepsilon_0 = \sup\{\omega,
\omega^\omega, \omega^{\omega^{\omega}},\dots\}$. In 1936, Gentzen \cite{Gen36} used
transfinite induction on primitive recursive predicates along $\varepsilon_0$,
together with finitary methods, to give a proof of the consistency of
Peano Arithmetic. This, combined with G\"{o}del's Second Incompleteness
Theorem, implies that Peano Arithmetic does not prove that $\varepsilon_0$ is
a well-ordering. On the other hand, transfinite induction up to any
smaller ordinal can be proved within Peano Arithmetic. This makes
$\varepsilon_0$ the {\em proof-theoretic ordinal} of Peano Arithmetic.
This result kicked off a whole area of proof theory, called ordinal
analysis, where the complexity of logical systems is measured in terms
of (among other things) how much transfinite induction is needed to
prove their consistency. (We refer the reader to \cite{Rat06} for an
exposition of the general ideas behind ordinal analysis.) The
proof-theoretic ordinals of many logical systems have been calculated.
An example that is relevant to this paper is the system \system{ACA}\ensuremath{^+_0}\ (see
Section \ref{ssect:subsystems} below), whose proof-theoretic ordinal is
$\varphi_2(0)=\sup\{\varepsilon_0, \varepsilon_{\varepsilon_0},
\varepsilon_{\varepsilon_{\varepsilon_0}},\dots\}$, the first fixed point of the {\em
epsilon function} \cite[Thm.\ 3.5]{Rat91}. The epsilon function
is the one that, given $\gamma$, returns $\varepsilon_\gamma$, the $\gamma$th fixed point
of the function $\alpha\mapsto\omega^\alpha$, counting from $\gamma=0$.
The Veblen functions, introduced in 1908 \cite{Veb08}, are functions
on ordinals that are commonly used in proof theory to obtain the
proof-theoretic ordinals of predicative theories beyond Peano
Arithmetic.
\begin{itemize}
\item $\varphi_0(\alpha)=\omega^\alpha$.
\item $\varphi_{\beta+1}(\alpha)$ is the $\alpha$th fixed point of $\varphi_\beta$ starting with $\alpha=0$.
\item when $\lambda$ is a limit ordinal, $\varphi_{\lambda}(\alpha)$ is
the $\alpha$th simultaneous fixed point of all the $\varphi_\beta$ for
$\beta<\lambda$, also starting with $\alpha=0$.
\end{itemize}
Note that $\varphi_1$ is the epsilon function.
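To make the hierarchy concrete, the first few values are

```latex
\varphi_0(0)=\omega^0=1\,,\qquad
\varphi_1(0)=\varepsilon_0\,,\qquad
\varphi_1(1)=\varepsilon_1\,,\qquad
\varphi_2(0)=\sup\{\varepsilon_0, \varepsilon_{\varepsilon_0},
\varepsilon_{\varepsilon_{\varepsilon_0}},\dots\}\,,
```

where $\varphi_2(0)$ is the least ordinal $\gamma$ with $\varepsilon_\gamma=\gamma$.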
The Feferman-Sch\"{u}tte ordinal $\Gamma_0$ is defined to be the least
ordinal closed under the binary Veblen function
$\varphi(\beta,\alpha)=\varphi_\beta(\alpha)$, or equivalently
\[
\Gamma_0=\sup\{\varphi_0(0), \varphi_{\varphi_0(0)}(0),
\varphi_{\varphi_{\varphi_0(0)}(0)}(0),\dots\}.
\]
$\Gamma_0$ is the proof-theoretic ordinal of Feferman's Predicative
Analysis \cite{Fef64,Sch77}, and of \system{ATR}\ensuremath{_0}\ \cite{FMcS}\footnote{For
the definition of \system{ATR}\ensuremath{_0}\ and of other subsystems of second order
arithmetic mentioned in this introduction see Section
\ref{ssect:subsystems} below.}. Again, this means that the
consistency of \system{ATR}\ensuremath{_0}\ can be proved by finitary methods together with
transfinite induction up to $\Gamma_0$, and that \system{ATR}\ensuremath{_0}\ proves the
well-foundedness of any ordinal below $\Gamma_0$.
Sentences stating that a certain linear ordering is well-ordered are
\PI11. So, even if they are strong enough to prove the consistency of
some theory, they have no set-existence implications. However, a
sentence stating that an operator on linear orderings preserves
well-orderedness is \PI12, and hence gives rise to a natural reverse
mathematics question. The following theorems answer two questions of
this kind.
\begin{theorem}[Girard, {\cite[p.\ 299]{Girard}}]\label{Girard}
Over \system{RCA}\ensuremath{_0}, the statement \lq\lq if {\ensuremath{\mathcal{X}}}\ is a well-ordering then
${\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$ is also a well-ordering\rq\rq\ is equivalent to \system{ACA}\ensuremath{_0}.
\end{theorem}
\begin{theorem}[H. Friedman, unpublished]\label{Friedman}
Over \system{RCA}\ensuremath{_0}, the statement \lq\lq if {\ensuremath{\mathcal{X}}}\ is a well-ordering then
${\boldsymbol \varphi}({\ensuremath{\mathcal{X}}},0)$ is a well-ordering\rq\rq\ is equivalent to \system{ATR}\ensuremath{_0}.
\end{theorem}
Let ${\mathbf F}$ be an operator on linear orderings. We consider the statement
\[
{\textsf{WOP}}({\mathbf F}): \quad \forall {\ensuremath{\mathcal{X}}}\ ({\ensuremath{\mathcal{X}}}\text{ is a well-ordering} \implies {\mathbf F}({\ensuremath{\mathcal{X}}})\text{ is a well-ordering}).
\]
We study the behavior of ${\mathbf F}$ by analyzing the computational
complexity of the proof of ${\textsf{WOP}}({\mathbf F})$ as follows. The statement
${\textsf{WOP}}({\mathbf F})$ can be restated as ``if ${\mathbf F}({\ensuremath{\mathcal{X}}})$ has a descending
sequence, then ${\ensuremath{\mathcal{X}}}$ has a descending sequence to begin with''. Given
${\mathbf F}$, the question we ask is:
\begin{quote}
Given a linear ordering ${\ensuremath{\mathcal{X}}}$ and a descending sequence in ${\mathbf F}({\ensuremath{\mathcal{X}}})$,
how difficult is it to build a descending sequence in ${\ensuremath{\mathcal{X}}}$?
\end{quote}
From Hirst's proof of Girard's result \cite{Hirst94}, we can extract
the following answer for ${\mathbf F}({\ensuremath{\mathcal{X}}})={\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$.
\begin{theorem}\label{Hirst}
If ${\ensuremath{\mathcal{X}}}$ is a computable linear ordering, and ${\boldsymbol \omega}^{{\ensuremath{\mathcal{X}}}}$ has a
computable descending sequence, then $0'$ computes a descending
sequence in ${\ensuremath{\mathcal{X}}}$. Furthermore, there exists a computable linear
ordering ${\ensuremath{\mathcal{X}}}$ with a computable descending sequence in ${\boldsymbol \omega}^{{\ensuremath{\mathcal{X}}}}$
such that every descending sequence in ${\ensuremath{\mathcal{X}}}$ computes $0'$.
\end{theorem}
The first statement of the theorem follows from the results of Section
\ref{sect:forward}, which includes the upper bounds of the
computability-theoretic results and the \lq\lq forward
directions\rq\rq\ of the reverse mathematics results. We include a
proof of the second statement in Section \ref{sect:exp}, where we
modify Hirst's idea so that we can apply it to our other results later.
In doing so, we give a new definition of the Turing jump which,
although computationally equivalent to the usual jump, is
combinatorially easier to manage. This allows us to define computable
approximations to the Turing jump, and we can also define a computable
operation on trees that produces trees whose paths are the Turing jumps
of the input tree. Furthermore, our definition of the Turing jump
behaves nicely when we take iterations.
In Section \ref{sect:eps} we use these features of our proof of
Theorem \ref{Hirst}. First, in Section \ref{ssect:finite} we consider
finite iterations of the Turing jump and of ordinal exponentiation.
(We write $\iexpop n{\ensuremath{\mathcal{X}}}$ for the $n$th iterate of the operation
${\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$; see Definition \ref{iexpop}.) In Theorem \ref{0^n}, we
prove:
\begin{theorem}
Fix $n\in\ensuremath{\mathbb N}$. If ${\ensuremath{\mathcal{X}}}$ is a computable linear ordering, and $\iexpop
n{\ensuremath{\mathcal{X}}}$ has a computable descending sequence, then $0^{(n)}$ computes a
descending sequence in ${\ensuremath{\mathcal{X}}}$. Conversely, there exists a computable
linear ordering ${\ensuremath{\mathcal{X}}}$ with a computable descending sequence in $\iexpop
n{\ensuremath{\mathcal{X}}}$ such that the jump of every descending sequence in ${\ensuremath{\mathcal{X}}}$ computes
$0^{(n)}$.
\end{theorem}
From this, in Section \ref{ssect:rmepsilon}, we obtain the following
reverse mathematics result.
\begin{theorem}\label{thm:ACApr}
Over \system{RCA}\ensuremath{_0}, $\forall n\, {\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\iexpop n{\ensuremath{\mathcal{X}}})$ is equivalent to
\system{ACA}\ensuremath{'_0}.
\end{theorem}
The first main new result of this paper is obtained in Section
\ref{ssect:eps} and analyzes the complexity behind the epsilon
function.
\begin{theorem}
If ${\ensuremath{\mathcal{X}}}$ is a computable linear ordering, and $\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ has a
computable descending sequence, then $0^{(\omega)}$ can compute a
descending sequence in ${\ensuremath{\mathcal{X}}}$. Conversely, there is a computable linear
ordering ${\ensuremath{\mathcal{X}}}$ with a computable descending sequence in $\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ such
that the jump of every descending sequence in ${\ensuremath{\mathcal{X}}}$ computes
$0^{(\omega)}$.
\end{theorem}
We prove this result in Theorems \ref{thm:epsilon forward} and
\ref{omega}. Then, as a corollary of the proof, we obtain the following
result in Section \ref{ssect:rmepsilon}.
\begin{theorem}\label{thm: espilon vs ACApl}
Over \system{RCA}\ensuremath{_0}, ${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}})$ is equivalent to \system{ACA}\ensuremath{^+_0}.
\end{theorem}
Our proof is purely computability-theoretic and plays with the
combinatorics of the $\omega$-jump and the epsilon function. By
generalizing the previous ideas, we obtain a new definition of the
$\omega$-Turing jump, which we can also approximate by a computable
function on finite strings and by a computable operator on trees. An
important property of our $\omega$-Turing jump operator is that it is
essentially a fixed point of the jump operator: for every real $Z$, the
$\omega$-Turing jump of $Z$ is equal to the $\omega$-Turing jump of the jump
of $Z$, except for the first bit (we mean equal as sequences of
numbers, not only Turing equivalent). Notice the analogy with the
$\boldsymbol\varepsilon$ and ${\boldsymbol \omega}$ operators.
After a draft of the proof of Theorem \ref{thm: espilon vs ACApl} was
circulated, Afshari and Rathjen \cite{AR} gave a completely different
proof using only proof-theoretic methods like cut-elimination, coded
$\omega$-models and Sch\"{u}tte deduction chains. They prove that
${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}})$ implies the existence of countable coded
$\omega$-models of \system{ACA}\ensuremath{_0}\ containing any given set, and that this in turn
is equivalent to \system{ACA}\ensuremath{^+_0}. To this end they prove a completeness-type
result: given a set $Z$, they can either build an $\omega$-model of \system{ACA}\ensuremath{_0}\
containing $Z$ as wanted, or obtain a proof tree of `0=1' in a suitable
logical system with formulas of rank at most $\omega$. The latter case
leads to a contradiction as follows. The logical system where we get
the proof tree admits cut elimination, at the cost of increasing the rank of the proof
tree by an application of the $\boldsymbol\varepsilon$ operator. Using
${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}})$, ${\ensuremath{\mathcal{X}}}$ being the Kleene-Brouwer ordering on
the proof tree of `0=1', they obtain a well-founded cut-free proof tree
of `0=1'.
In Section \ref {sect:General case}, we move towards studying the
computational complexity of the Veblen functions. Given a computable
ordinal $\alpha$, we calibrate the complexity of
${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$ with the following result, obtained by
extending our definitions to $\omega^\alpha$-Turing jumps.
\begin{theorem}\label{thm: veblen vs jumps}
Let $\alpha$ be a computable ordinal. If ${\ensuremath{\mathcal{X}}}$ is a computable linear
ordering, and ${\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}})$ has a computable descending sequence,
then $0^{(\omega^\alpha)}$ computes a descending sequence in ${\ensuremath{\mathcal{X}}}$. Conversely,
there is a computable linear ordering ${\ensuremath{\mathcal{X}}}$ such that ${\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}})$
has a computable descending sequence but every descending sequence in
${\ensuremath{\mathcal{X}}}$ computes $0^{(\omega^\alpha)}$.
\end{theorem}
This result will follow from Theorem \ref{thm:veblen forward} and
Theorem \ref{alpha}. In Section \ref{ssect:rmVeblen}, as a corollary,
we get the following result.
\begin{theorem}\label{thm: veblen vs PiAlphaCA}
Let $\alpha$ be a computable ordinal. Over \system{RCA}\ensuremath{_0},
${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$ is equivalent to $\Pi^0_{\omega^\alpha}$-\system{CA}\ensuremath{_0}.
\end{theorem}
Exploiting the uniformity in the proof of Theorem \ref{thm: veblen vs
jumps}, we also obtain a new purely computability-theoretic proof of
Friedman's result (Theorem \ref{Friedman}). Before our proof, Rathjen
and Weiermann \cite{RW} found a new, fully proof-theoretic proof of
Friedman's result. They use a technique similar to the proof of
Afshari and Rathjen mentioned above. Friedman's original proof has
two parts, one computability-theoretic and one proof-theoretic.
The table below shows the systems studied in this paper (with the
exception of \system{ACA}\ensuremath{'_0}). The second column gives the proof-theoretic
ordinal of the system; these ordinals were calculated by Gentzen, Rathjen,
Feferman, and Sch\"{u}tte. The third column gives the operator ${\mathbf F}$
on linear orderings such that ${\textsf{WOP}}({\mathbf F})$ is equivalent to the given
system. The last column gives references for the different proofs of
these equivalences in historical order ([MM] refers to this paper).
\begin{center}
\begin{tabular}{|c|c|c|>{\Small}l|}
\hline
System & p.t.o. & ${\mathbf F}({\ensuremath{\mathcal{X}}})$ & references \\
\hline
\system{ACA}\ensuremath{_0} & $\varepsilon_0$ & ${\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$ & Girard \cite{Girard}; Hirst \cite{Hirst94}\\
\system{ACA}\ensuremath{^+_0} & $\varphi_2(0)$ & $\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ & [MM]; Afshari-Rathjen \cite{AR}\\
$\Pi^0_{\omega^\alpha}$-\system{CA}\ensuremath{_0} & $\varphi_{\alpha+1}(0)$ & ${\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})$ & [MM]\\
\system{ATR}\ensuremath{_0} & $\Gamma_0$ & ${\boldsymbol \varphi}({\ensuremath{\mathcal{X}}},0)$ &Friedman \cite{FMW}; Rathjen-Weiermann \cite{RW}; [MM]\\
\hline
\end{tabular}
\end{center}
Notice that in every case, the proof-theoretic ordinal equals
\[
\sup\{{\mathbf F}(0), {\mathbf F}({\mathbf F}(0)), {\mathbf F}({\mathbf F}({\mathbf F}(0))),\dots\}.
\]
\section{Background and definitions}\label{sect:background}
\subsection{Veblen operators and ordinal notation}\label{ssect:Veblen operators}
We already know what the $\omega$, $\varepsilon$ and $\varphi$ functions do on
ordinals. In this section we define operators ${\boldsymbol \omega}$, $\boldsymbol\varepsilon$ and
${\boldsymbol \varphi}$, that work on all linear orderings. These operators are
computable, and when they are applied to a well-ordering, they coincide
with the $\omega$, $\varepsilon$ and $\varphi$ functions on ordinals.
To motivate the definition of ${\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$ we use the following
observation due to Cantor \cite{Can97}. Every ordinal below $\omega^\alpha$
can be written in a unique way as a sum
\[
\omega^{\beta_0}+\omega^{\beta_1}+\dots+\omega^{\beta_{k-1}},
\]
where $\alpha>\beta_0\geq \beta_1\geq \dots\geq \beta_{k-1}$.
\begin{definition}
Given a linear ordering ${\ensuremath{\mathcal{X}}}$, ${\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$ is defined as the set of
finite strings $\langle x_0, x_1,\dots, x_{k-1}\rangle \in {\ensuremath{\mathcal{X}}}^{<\omega}$
(including the empty string) where $x_0\geq_{\ensuremath{\mathcal{X}}} x_1\geq_{\ensuremath{\mathcal{X}}} \dots \geq_{\ensuremath{\mathcal{X}}}
x_{k-1}$. We think of $\langle x_0, x_1,\dots, x_{k-1}\rangle \in {\boldsymbol \omega}^{\ensuremath{\mathcal{X}}} $ as
$\omega^{x_0}+\omega^{x_1}+\dots+\omega^{x_{k-1}}$. The ordering on ${\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$
is the lexicographic one: $\langle x_0, x_1,\dots,
x_{k-1}\rangle\leq_{{\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}}\langle y_0, y_1,\dots, y_{l-1}\rangle$ if either
$k\leq l$ and $x_i=y_i$ for every $i<k$, or for the least $i$ such that
$x_i\neq y_i$ we have $x_i<_{\ensuremath{\mathcal{X}}} y_i$.
\end{definition}
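Since ${\boldsymbol \omega}$ is a computable operator, the definition translates directly into code. The following Python sketch is illustrative: it takes ${\ensuremath{\mathcal{X}}}$ to be $\ensuremath{\mathbb N}$ with its usual order (so ${\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$ represents the ordinals below $\omega^\omega$), and implements membership in ${\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$ together with the lexicographic comparison.

```python
# Sketch: the omega^X operator for X = (N, <=), an illustrative instance.
# Terms are nonincreasing tuples <x_0,...,x_{k-1}>, read as w^x_0 + ... + w^x_{k-1}.

def is_term(xs):
    """Membership in omega^X: the exponents must be nonincreasing."""
    return all(xs[i] >= xs[i + 1] for i in range(len(xs) - 1))

def leq(xs, ys):
    """The lexicographic ordering of the definition."""
    for a, b in zip(xs, ys):
        if a != b:                 # the first difference decides
            return a < b
    return len(xs) <= len(ys)      # one term is a prefix of the other: shorter is smaller
```

For instance, `leq((1, 1), (2,))` holds, reflecting $\omega+\omega\leq\omega^2$.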
We use the following notation for the iteration of the ${\boldsymbol \omega}$
operator.
\begin{definition}\label{iexpop}
Given a linear ordering {\ensuremath{\mathcal{X}}}, let $\iexpop 0{\ensuremath{\mathcal{X}}} = {\ensuremath{\mathcal{X}}}$ and $\iexpop {n+1}{\ensuremath{\mathcal{X}}}
= {\boldsymbol \omega}^{\iexpop n{\ensuremath{\mathcal{X}}}}$.
\end{definition}
To motivate the definition of the $\boldsymbol\varepsilon$ operator we start with the
following observations. On the ordinals, the closure of the set $\{0\}$
under the operations $+$ and $t\mapsto \omega^t$, is the set of the
ordinals strictly below $\varepsilon_0$. The closure of $\{0,\varepsilon_0\}$ under
the same operations, is the set of the ordinals strictly below
$\varepsilon_1$. In general, if we take the closure of $\{0\} \cup
\set{\varepsilon_\beta}{\beta<\alpha}$ we obtain all ordinals strictly below $\varepsilon_\alpha$.
\begin{definition}
Let ${\ensuremath{\mathcal{X}}}$ be a linear ordering. We define $\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ to be the set of
formal terms defined as follows:
\begin{itemize}
\item $0$ and $\varepsilon_x$, for $x\in{\ensuremath{\mathcal{X}}}$, belong to $\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$, and
are called \lq\lq constants\rq\rq,
\item if $t_1,t_2\in \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$, then $t_1+t_2\in\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$,
\item if $t\in \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$, then $\omega^t\in \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$.
\end{itemize}
Many of the terms we defined represent the same element, so we need to
find normal forms for the elements of $\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$. The definition of
the ordering on $\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ is what one should expect when ${\ensuremath{\mathcal{X}}}$ is an
ordinal. We define the normal form of a term and the relation
$\leq_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}}$ simultaneously by induction on terms.
We say that a term $t=t_0+\dots+t_k$ is in {\em normal form} if either
$t=0$ (i.e.\ $k=0$ and $t_0=0$), or the following holds: (a)
$t_0\geq_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}} t_1\geq_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}}\dots\geq_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}} t_k> 0$,
and (b) each $t_i$ is either a constant or of the form $\omega^{s_i}$,
where $s_i$ is in normal form and $s_i\neq\varepsilon_x$ for any $x$.
Every $t\in\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ can be written in normal form by applying the
following rules:
\begin{itemize}
\item $+$ is associative,
\item $s+0=0+s=s$,
\item if $s<_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}} r$, then $\omega^s+\omega^r=\omega^r$,
\item $\omega^{\varepsilon_x}=\varepsilon_x$.
\end{itemize}
Given $t=t_0+\dots+t_k$ and $s=s_0+\dots+s_l$ in normal form, we let
$t\leq_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}} s$ if one of the following conditions applies:
\begin{itemize}
\item $t=0$,
\item $t=\varepsilon_x$ and, for some $y\geq_{\ensuremath{\mathcal{X}}} x$, $\varepsilon_y$ occurs in
$s$,
\item $t=\omega^{t'}$, $s_0=\varepsilon_y$ and $t'\leq_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}}\varepsilon_y$,
\item $t=\omega^{t'}$, $s_0=\omega^{s'}$ and $t'\leq_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}}s'$,
\item $k>0$ and $t_0<_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}}s_0$,
\item $k>0$, $t_0=s_0$, $l>0$ and $t_1+\dots+t_k \leq_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}}
s_1+\dots+s_l$.
\end{itemize}
\end{definition}
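The case analysis above is computable. The following Python sketch is illustrative: it takes ${\ensuremath{\mathcal{X}}}=\ensuremath{\mathbb N}$ with its usual order, assumes both inputs are already in normal form, and uses a nested-tuple encoding of terms that is ours, not the paper's.

```python
# Sketch: comparing normal-form terms of eps_X for X = (N, <=).
# A term is a tuple of atoms; () stands for 0.  Atoms are ('eps', x) or
# ('w', t), where t is again a term.  Inputs are assumed in normal form,
# so distinct terms denote distinct elements.

def occurs_geq(x, s):
    """Does some constant eps_y with y >= x occur (at any depth) in s?"""
    for a in s:
        if a[0] == 'eps':
            if a[1] >= x:
                return True
        elif occurs_geq(x, a[1]):   # a == ('w', t): search the exponent
            return True
    return False

def leq(t, s):
    """t <=_{eps_X} s, following the case analysis of the definition."""
    if t == ():                     # t = 0
        return True
    if s == ():                     # t nonzero, s = 0
        return False
    if len(t) == 1:
        a = t[0]
        if a[0] == 'eps':           # t = eps_x: some eps_y, y >= x, must occur in s
            return occurs_geq(a[1], s)
        b = s[0]
        if b[0] == 'eps':           # t = w^{t'}, s_0 = eps_y: compare t' with eps_y
            return leq(a[1], (b,))
        return leq(a[1], b[1])      # t = w^{t'}, s_0 = w^{s'}: compare exponents
    # k > 0: t_0 < s_0, or equal heads and comparable tails
    if leq(t[:1], s[:1]) and t[0] != s[0]:
        return True
    return t[0] == s[0] and len(s) > 1 and leq(t[1:], s[1:])
```

For example, with `eps0 = (('eps', 0),)` and `t = (('w', (('eps', 0), ('w', ()))),)` (i.e.\ $\omega^{\varepsilon_0+1}$), the sketch confirms $\varepsilon_0<\omega^{\varepsilon_0+1}<\varepsilon_1$.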
The observation we made before the definition shows how the $\boldsymbol\varepsilon$
operator coincides with the $\varepsilon$-function when ${\ensuremath{\mathcal{X}}}$ is an ordinal
(this includes the case ${\ensuremath{\mathcal{X}}}=\emptyset$, when $0$ is the only constant and we
obtain $\varepsilon_0$ as expected).
\begin{definition}\label{iexp}
In analogy with Definition \ref{iexpop}, for $t \in \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ we use
$\iexp nt$ to denote the term in $\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ obtained by applying the
$\omega$ function symbol $n$ times to $t$.
\end{definition}
\begin{definition}
If ${\ensuremath{\mathcal{X}}}$ is a linear ordering and $x \in {\ensuremath{\mathcal{X}}}$, let ${\ensuremath{\mathcal{X}}}\!\restriction\! x$ be the
linear ordering with domain $\set{y\in{\ensuremath{\mathcal{X}}}}{y<_{\ensuremath{\mathcal{X}}} x}$.
\end{definition}
The following lemma expresses the compatibility of the ${\boldsymbol \omega}$ and
$\boldsymbol\varepsilon$ operators.
\begin{lemma}\label{lemma:om eps}
If ${\ensuremath{\mathcal{X}}}$ is a linear ordering, then for every $t\in \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ and $n
\in \ensuremath{\mathbb N}$
\[
\iexpop n{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}\!\restriction\! t} \cong \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}} \!\restriction\! \iexp nt
\]
via a computable isomorphism. In particular, ${\boldsymbol \omega}^{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}\!\restriction\! t}
\cong \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}} \!\restriction\! \omega^t$.
\end{lemma}
\begin{proof}
The proof is by induction on $n$. When $n=0$ the identity is the
required isomorphism. If $\psi: \iexpop n{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}\!\restriction\! t} \to
\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}} \!\restriction\! \iexp nt$ is an isomorphism, then the function mapping
the empty string to $0$ and $\langle t_0,\dots,t_k\rangle$ to
$\omega^{\psi(t_0)}+\dots+\omega^{\psi(t_k)}$ witnesses $\iexpop
{n+1}{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}\!\restriction\! t} \cong \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}} \!\restriction\! \iexp {n+1}t$.
\end{proof}
To define the ${\boldsymbol \varphi}$ operator we start with the following
observations. If we take the closure of the set $\{0\}$ under the
operations $+$, $t\mapsto \omega^t$ and $t\mapsto \varepsilon_t$, we get all the
ordinals up to $\varphi_2(0)$. If we take the closure of
$\{0\}\cup\set{\varphi_2(\beta)}{\beta<\alpha}$ we get all the ordinals below
$\varphi_2(\alpha)$. In general, we obtain $\varphi_\gamma(\alpha)$ as the closure
of $\{0\} \cup \set{\varphi_\gamma(\beta)}{\beta<\alpha}$ under the operations $+$,
and $t\mapsto \varphi_\delta(t)$, for all $\delta<\gamma$.
\begin{definition}\label{def:phiop}
Let ${\ensuremath{\mathcal{X}}}$ and ${\mathcal Y}$ be linear orderings. We define ${\boldsymbol \varphi}({\mathcal Y},{\ensuremath{\mathcal{X}}})$ to
be the set of formal terms defined as follows:
\begin{itemize}
\item $0$ and $\varphi_{{\mathcal Y},x}$, for $x\in{\ensuremath{\mathcal{X}}}$, belong to
${\boldsymbol \varphi}({\mathcal Y},{\ensuremath{\mathcal{X}}})$, and are called \lq\lq constants\rq\rq,
\item if $t_1,t_2\in {\boldsymbol \varphi}({\mathcal Y},{\ensuremath{\mathcal{X}}})$, then $t_1+t_2\in{\boldsymbol \varphi}({\mathcal Y},{\ensuremath{\mathcal{X}}})$,
\item if $t\in {\boldsymbol \varphi}({\mathcal Y},{\ensuremath{\mathcal{X}}})$ and $\delta\in{\mathcal Y}$, then $\varphi_\delta(t)
\in {\boldsymbol \varphi}({\mathcal Y},{\ensuremath{\mathcal{X}}})$.
\end{itemize}
We define the normal form of a term and the relation $\leq_{{\boldsymbol \varphi}({\mathcal Y},{\ensuremath{\mathcal{X}}})}$
simultaneously by induction on terms. We write $\leq_\varphi$ instead
of $\leq_{{\boldsymbol \varphi}({\mathcal Y},{\ensuremath{\mathcal{X}}})}$ to simplify the notation.
We say that a term $t=t_0+\dots+t_k$ is in {\em normal form} if either
$t=0$, or the following holds: (a) $t_0\geq_\varphi
t_1\geq_\varphi\dots\geq_\varphi t_k> 0$, and (b) each $t_i$ is either
a constant or of the form $\varphi_\delta(s_i)$, where $s_i$ is in normal
form and $s_i\neq\varphi_{\delta'}(s_i')$ for $\delta'>\delta$.
Every $t\in{\boldsymbol \varphi}({\mathcal Y},{\ensuremath{\mathcal{X}}})$ can be written in normal form by applying the
following rules:
\begin{itemize}
\item $+$ is associative,
\item $s+0=0+s=s$,
\item if $\varphi_{\delta'}(s)<_\varphi \varphi_\delta(r)$, then $\varphi_{\delta'}(s)+\varphi_\delta(r)=\varphi_\delta(r)$,
\item if $\delta ' >\delta$, then $\varphi_\delta( \varphi_{\delta'}(r))=\varphi_{\delta'}(r)$,
\item if $\delta\in{\mathcal Y}$, then $\varphi_\delta(\varphi_{{\mathcal Y},r})=
\varphi_{{\mathcal Y},r}$.
\end{itemize}
The motivation for the last two items is that if $\delta ' >\delta$, anything
in the image of $\varphi_{\delta'}$ is a fixed point of $\varphi_\delta$.
Given $t=t_0+\dots+t_k$ and $s=s_0+\dots+s_l$ in normal form, we let
$t\leq_\varphi s$ if one of the following conditions applies:
\begin{itemize}
\item $t=0$,
\item $t=\varphi_{{\mathcal Y},x}$ and, for some $y\geq_{\ensuremath{\mathcal{X}}} x$,
$\varphi_{{\mathcal Y},y}$ occurs in $s$,
\item $t=\varphi_\delta(t')$, $s_0=\varphi_{\delta'}(s')$ and
$\begin{cases}
\delta<\delta' \text{ and } t' \leq_\varphi \varphi_{\delta'}(s'), \text{ or } \\
\delta=\delta' \text{ and } t' \leq_\varphi s', \text{ or } \\
\delta>\delta' \text{ and } \varphi_\delta(t') \leq_\varphi s',
\end{cases}$
\item $k>0$ and $t_0<_\varphi s_0$,
\item $k>0$, $t_0=s_0$, $l>0$ and $t_1+\dots+t_k \leq_\varphi
s_1+\dots+s_l$.
\end{itemize}
\end{definition}
\subsection{Notation for strings and trees}\label{ssect:notation}
Here we fix our notation for sequences (or strings) of natural numbers.
The \emph{Baire space \ensuremath{\N^\N}} is the set of all infinite sequences of
natural numbers. As usual, an element of \ensuremath{\N^\N}\ is also called a
\emph{real}. If $X \in \ensuremath{\N^\N}$ and $n \in \ensuremath{\mathbb N}$, $X(n)$ is the $(n+1)$-st
element of $X$. {\ensuremath{\N^{<\N}}}\ is the set of all finite strings of natural
numbers. When $\sigma \in {\ensuremath{\N^{<\N}}}$ we use $|\sigma|$ to denote its \emph{length}
and, for $i<|\sigma|$, $\sigma(i)$ to denote its $(i+1)$-st element. We write
$\emptyset$ for the \emph{empty string} (i.e.\ the only string of length
$0$), and $\langle n \rangle$ for the string of length $1$ whose only element
is $n$. When $\sigma, \tau \in {\ensuremath{\N^{<\N}}}$, $\sigma \subseteq \tau$ means that
$\sigma$ is an \emph{initial segment} of $\tau$, i.e.\ $|\sigma| \leq |\tau|$
and $\sigma(i) = \tau(i)$ for each $i<|\sigma|$. We use $\sigma \subset \tau$ to
mean $\sigma \subseteq \tau$ and $\sigma \neq \tau$. If $X \in \ensuremath{\N^\N}$ we write
$\sigma \subset X$ if $\sigma(i) = X(i)$ for each $i<|\sigma|$. We use $\sigma
^\smallfrown \tau$ to denote the \emph{concatenation} of $\sigma$ and $\tau$,
that is the string $\rho$ such that $|\rho| = |\sigma|+|\tau|$, $\rho(i)=
\sigma(i)$ when $i<|\sigma|$, and $\rho(|\sigma|+i)=\tau(i)$ when $i<|\tau|$.
If $X \in \ensuremath{\N^\N}$, $\sigma \in {\ensuremath{\N^{<\N}}}$ and $t \in \ensuremath{\mathbb N}$, $X \!\restriction\! t$ is the
initial segment of $X$ of length $t$, while $\sigma \!\restriction\! t$ is the
initial segment of $\sigma$ of length $t$ if $t\leq|\sigma|$, and $\sigma$
otherwise.
We fix an enumeration of {\ensuremath{\N^{<\N}}}, so that each finite string is also a
natural number, and hence can be an element of another string. This
enumeration is such that all the operations and relations discussed in
the previous paragraph are computable. Moreover we can assume that $\sigma
\subset \tau$ (as strings) implies $\sigma<\tau$ (as natural numbers). For
an enumeration with these properties see e.g.\ \cite[\S II.2]{Sim99}.
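For concreteness, one injective coding with these properties (not
necessarily the one used in \cite{Sim99}) is given by prime powers: set
$\#(\emptyset)=1$ and $\#(\sigma)=\prod_{i<|\sigma|}p_i^{\sigma(i)+1}$, where $p_i$ is
the $(i+1)$-st prime. All the operations and relations discussed above
are computable from these codes, and if $\sigma\subset\tau$ then $\#(\tau)$ is a
proper multiple of $\#(\sigma)$, so that $\#(\sigma)<\#(\tau)$.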
The following operation on strings will be useful.
\begin{definition}\label{def:ell}
If $\sigma\in{\ensuremath{\N^{<\N}}}$ is nonempty let $\ell(\sigma) = \langle \sigma(|\sigma|-1) \rangle$, the
string of length one whose only entry is the last entry of $\sigma$.
\end{definition}
\begin{definition}
A \emph{tree} is a set $T \subseteq {\ensuremath{\N^{<\N}}}$ such that $\sigma \!\restriction\! t \in T$
whenever $\sigma \in T$ and $t<|\sigma|$. If $T$ is a tree, $X \in \ensuremath{\N^\N}$ is
\emph{a path through $T$} if $X \!\restriction\! t \in T$ for all $t$. We let
$[T]$ be the set of all paths through $T$.
\end{definition}
\begin{definition}\label{def:Tsigma}
If $T$ is a tree and $\sigma \in {\ensuremath{\N^{<\N}}}$ we let $T_\sigma = \set{\rho \in
T}{\rho \subseteq \sigma \lor \sigma \subseteq \rho}$.
\end{definition}
\begin{definition}
\ensuremath{\leq_{\mathrm{KB}}}\ is the usual \emph{Kleene-Brouwer ordering} of ${\ensuremath{\N^{<\N}}}$: if $\sigma,
\tau \in {\ensuremath{\N^{<\N}}}$, we let $\sigma \ensuremath{\leq_{\mathrm{KB}}} \tau$ if either $\sigma \supseteq \tau$ or
there is some $i$ such that $\sigma \!\restriction\! i = \tau \!\restriction\! i$ and $\sigma(i) <
\tau (i)$.
\end{definition}
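For example, $\langle 2,0 \rangle \ensuremath{\leq_{\mathrm{KB}}} \langle 2 \rangle$ because $\langle 2,0 \rangle \supseteq
\langle 2 \rangle$, while $\langle 1,7 \rangle \ensuremath{\leq_{\mathrm{KB}}} \langle 2 \rangle$ because the two strings
first differ at $i=0$ and $1<2$. In particular, every string is
\ensuremath{\leq_{\mathrm{KB}}}-below its proper initial segments, so $\emptyset$ is the \ensuremath{\leq_{\mathrm{KB}}}-maximum
element of {\ensuremath{\N^{<\N}}}.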
The following is well-known (see e.g.\ \cite[Lemma V.1.3]{Sim99}).
\begin{lemma}\label{lemma:KB}
Let $T \subseteq {\ensuremath{\N^{<\N}}}$ be a tree: $T$ is well-founded (i.e.\ $[T]=\emptyset$)
if and only if the linear ordering $(T, {\ensuremath{\leq_{\mathrm{KB}}}})$ is well-ordered.
Moreover, if $f\colon \ensuremath{\mathbb N} \to T$ is a descending sequence with respect
to \ensuremath{\leq_{\mathrm{KB}}}, there exists $Y \in [T]$ such that $Y \leq_T f'$.
\end{lemma}
We will need some terminology to describe functions between partial
orderings.
\begin{definition}
Let $f\colon P \to Q$ be a function, $\leq_P$ and $\leq_Q$ be partial
orderings of $P$ and $Q$ respectively, with $<_P$ and $<_Q$ the
corresponding strict orderings. We say that $f$ is \emph{$({<_P},
{<_Q})$-monotone} if for every $x, y \in P$ such that $x <_P y$ we have
$f(x) <_Q f(y)$.
\end{definition}
\subsection{Computability theory notation}\label{ssect:ctnotation}
We use standard notation from computability theory. In particular, for
a string $\sigma\in \ensuremath{\mathbb N}^{\leq\ensuremath{\mathbb N}}$, $\{e\}^\sigma (n)$ denotes the output of
the $e$th Turing machine on input $n$, run with oracle $\sigma$, for at
most $|\sigma|$ steps (where $|\sigma| = \infty$ when $\sigma \in \ensuremath{\N^\N}$). If
this computation does not halt in less than $|\sigma|$ steps we write
$\{e\}^\sigma (n) \mathord\uparrow$, otherwise we write $\{e\}^\sigma (n)
\mathord\downarrow$. We write $\{e\}^\sigma_t (n) \mathord\downarrow$ if the computation
halts in less than $\min(|\sigma|, t)$ steps.
Given $X,Y\subseteq\ensuremath{\mathbb N}$, the predicate $X=Y'$ is defined as usual:
\[
X=Y'\iff \forall e(e\in X\leftrightarrow \{e\}^Y(e) \mathord\downarrow).
\]
\begin{definition}
Given an ordinal $\beta$ (or actually any presentation of a linear
ordering with first element 0), we say that $X=Y^{(\beta)}$ if
\[
X^{[0]}=Y \ \ , \ \ \forall\gamma\ (0<\gamma<\beta\implies X^{[\gamma]}={X^{[<\gamma]}}')\ \text{ and } X=X^{[<\beta]},
\]
where $X^{[\gamma]}=\set{y}{\langle \gamma,y\rangle\in X}$ and $X^{[<\gamma]} = \set{\langle
\delta,y \rangle}{\delta<\gamma\ \&\ \langle \delta,y \rangle \in X}$.
\end{definition}
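For example, when $\beta=\omega$, the columns of $X=Y^{(\omega)}$ satisfy
$X^{[0]}=Y$, $X^{[1]}=({X^{[<1]}})'\equiv_T Y'$ and, more generally,
$X^{[n]}\equiv_T Y^{(n)}$, since $X^{[<n]}$ is Turing equivalent to the
join of the first $n$ columns; thus $X$ uniformly codes the whole
sequence of finite jumps of $Y$.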
\subsection{Subsystems of second order arithmetic}\label{ssect:subsystems}
We refer the reader to \cite{Sim99} for background information on
subsystems of second order arithmetic. All subsystems we consider
extend \system{RCA}\ensuremath{_0}, which consists of the axioms of an ordered semiring, plus
$\Delta^0_1$-comprehension and $\Sigma^0_1$-induction. Adding
set-existence axioms to \system{RCA}\ensuremath{_0}\ we obtain \system{WKL}\ensuremath{_0}, \system{ACA}\ensuremath{_0}, \system{ATR}\ensuremath{_0},\ and \ensuremath{\Pi^1_1}-\CA,
completing the so-called \lq\lq big five\rq\rq\ of reverse mathematics.
In this paper we are interested in \system{ACA}\ensuremath{_0}, \system{ATR}\ensuremath{_0}, and some theories which
lie between these two. All these theories can be presented in terms of
\lq\lq jump-existence axioms\rq\rq, as follows:
\begin{description}
\item[\system{ACA}\ensuremath{_0}] \system{RCA}\ensuremath{_0} + $\forall Y\exists X\ (X=Y')$
\item[\system{ACA}\ensuremath{'_0}] \system{RCA}\ensuremath{_0} + $\forall Y \forall n \exists X\ (X=Y^{(n)})$
\item[\system{ACA}\ensuremath{^+_0}] \system{RCA}\ensuremath{_0} + $\forall Y\exists X\ (X=Y^{(\omega)})$
\item[$\Pi^0_\beta$-\system{CA}\ensuremath{_0}] \system{RCA}\ensuremath{_0} + $\beta \text{ well-ordered} \land \forall Y\exists X\ (X=Y^{(\beta)})$,\\
where $\beta$ is a presentation of a computable
ordinal\footnote{The system $\Pi^0_\beta$-\system{CA}\ensuremath{_0}\ is sometimes
denoted by $(\Pi^0_1\text{-\system{CA}\ensuremath{_0}})_\beta$ in the literature.}
\item[\system{ATR}\ensuremath{_0}] \system{RCA}\ensuremath{_0} + $\forall \alpha(\alpha\text{ well-ordered}\implies
\forall Y\exists X\ (X=Y^{(\alpha)}))$
\end{description}
Notice that $\Pi^0_1$-\system{CA}\ensuremath{_0}\ is \system{ACA}\ensuremath{_0}\ and $\Pi^0_\omega$-\system{CA}\ensuremath{_0}\ is \system{ACA}\ensuremath{^+_0}.
$\Pi^0_\beta$-\system{CA}\ensuremath{_0}\ is strictly stronger than $\Pi^0_\gamma$-\system{CA}\ensuremath{_0}\ if and only
if $\beta\geq\gamma\cdot\omega$. In fact the $\omega$-model $\bigcup_{\alpha<\gamma\cdot\omega}
\set{X}{X \leq_T 0^{(\alpha)}}$ satisfies $\Pi^0_\alpha$-\system{CA}\ensuremath{_0}\ for all
$\alpha<\gamma\cdot\omega$, but not $\Pi^0_{\gamma\cdot\omega}$-\system{CA}\ensuremath{_0}. Each theory in the
above list is strictly stronger than the preceding ones if we assume
$\beta \geq \omega^2$.
\system{ACA}\ensuremath{_0}\ and \system{ATR}\ensuremath{_0}\ are well-known and widely studied: \cite{Sim99}
includes a chapter devoted to each of them and their equivalents.
(The axiomatization of \system{ATR}\ensuremath{_0}\ given above is equivalent to the usual
one by \cite[Theorem VIII.3.15]{Sim99}.) \system{ACA}\ensuremath{^+_0}\ was introduced in
\cite{BHS}, where it was shown that it proves Hindman's Theorem in
combinatorics (to this day it is unknown whether \system{ACA}\ensuremath{^+_0}\ and
Hindman's Theorem are equivalent). \system{ACA}\ensuremath{^+_0}\ has also been used in
\cite{Shore06} (where it is proved that \system{ACA}\ensuremath{^+_0}\ is equivalent to
statements asserting the existence of invariants for Boolean
algebras) and in \cite{MM} (where \system{ACA}\ensuremath{^+_0}\ is used to prove a
restricted version of Fra\"{\i}ss\'{e}'s conjecture on linear orders). \system{ACA}\ensuremath{'_0}\
is also featured in \cite{MM}. The computation of its proof-theoretic
ordinal, which turns out to be $\varepsilon_\omega$, is due to J\"{a}ger
(unpublished notes, a proof appears in \cite{McA}, and a different
proof is included in \cite{Af-thesis}). The theories $\Pi^0_\beta$-\system{CA}\ensuremath{_0}\
are natural generalizations of \system{ACA}\ensuremath{^+_0}.
\section{Forward direction}\label{sect:forward}
In this section we prove the \lq\lq forward direction\rq\rq of Theorems
\ref{Girard}, \ref{thm:ACApr}, \ref{thm: espilon vs ACApl}, \ref{thm:
veblen vs PiAlphaCA}, and \ref{Friedman}. The results in this section
are already known (though often written in different settings) but we
include them as our proofs illustrate how the iterates of the Turing
jump relate with the epsilon and Veblen functions.
The following theorem is essentially contained in Hirst's proof
\cite{Hirst94} of the closure of well-orderings under exponentiation
in \system{ACA}\ensuremath{_0}.
\begin{theorem}\label{thm:om forward}
If ${\ensuremath{\mathcal{X}}}$ is a $Z$-computable linear ordering, and ${\boldsymbol \omega}^{{\ensuremath{\mathcal{X}}}}$ has a
$Z$-computable descending sequence, then $Z'$ can compute a descending
sequence in ${\ensuremath{\mathcal{X}}}$.
\end{theorem}
\begin{proof}
Let $(a_k:k\in\ensuremath{\mathbb N})$ be a $Z$-computable descending sequence in
${\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$. We can write $a_k$ in the form $\omega^{x_{k,0}} \cdot
m_{k,0} + \omega^{x_{k,1}} \cdot m_{k,1}+ \dots+ \omega^{x_{k,l_k}} \cdot
m_{k,l_k}$ where each $m_{k,i}\in\ensuremath{\mathbb N}$ is positive and $x_{k,i} >_{\ensuremath{\mathcal{X}}}
x_{k,i+1}$ for all $i<l_k$.
Using $Z'$, we recursively define a function $f: \ensuremath{\mathbb N} \to {\ensuremath{\mathcal{X}}} \times \omega$
which is decreasing with respect to the lexicographic ordering $<_{{\ensuremath{\mathcal{X}}}
\times \omega}$. (We use $x\cdot m$ to denote $\langle x,m\rangle\in{\ensuremath{\mathcal{X}}} \times
\omega$.) Each $f(n)$ is of the form $x_{k,i} \cdot m_{k,i}$ for some $k$
and $i\leq l_k$. At the following step, when we define $f(n+1)$,
either we increase $k$ and leave $i$ unchanged, or, if this is not
possible, we keep $k$ unchanged and increase $i$ by one. We will have
that if $f(n)$ is of the form $x_{k,i} \cdot m_{k,i}$, then $x_{h,j}
\cdot m_{h,j} = x_{k,j} \cdot m_{k,j}$ for all $h>k$ and $j<i$.
Let $f(0) = x_{0,0} \cdot m_{0,0}$. Assuming we already defined $f(n) =
x_{k,i} \cdot m_{k,i}$, we need to define $f(n+1)$. If there exist
$h>k$ such that $x_{h,i} \cdot m_{h,i} <_{{\ensuremath{\mathcal{X}}} \times \omega} x_{k,i} \cdot
m_{k,i}$, then let $f(n+1)=x_{h,i} \cdot m_{h,i}$ for the least such
$h$. If $x_{h,i} \cdot m_{h,i} \geq_{{\ensuremath{\mathcal{X}}} \times \omega} x_{k,i} \cdot
m_{k,i}$ for all $h>k$ then we must have $i<l_k$ (otherwise
$a_k>_{{\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}} a_{k+1}$ cannot hold) and we can let $f(n+1)=x_{k,i+1}
\cdot m_{k,i+1}$.
It is then straightforward
to obtain an $f$-computable, and hence $Z'$-computable, descending
sequence in ${\ensuremath{\mathcal{X}}}$.
\end{proof}
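To illustrate the construction, suppose (in a purely hypothetical
scenario) that $a_k=\omega^{x}\cdot 3+\omega^{y_k}$ for every $k$, where
$y_0>_{\ensuremath{\mathcal{X}}} y_1>_{\ensuremath{\mathcal{X}}}\dots$ and each $y_k<_{\ensuremath{\mathcal{X}}} x$. Then $f(0)=x\cdot 3$ and,
since no $h>0$ satisfies $x_{h,0}\cdot m_{h,0}<_{{\ensuremath{\mathcal{X}}}\times\omega}x\cdot 3$ (a
$\Pi^0_1$ fact that the oracle $Z'$ can verify), the construction moves
to the second term: $f(1)=y_0\cdot 1$, then $f(2)=y_1\cdot 1$,
$f(3)=y_2\cdot 1$, and so on, recovering the descending sequence
$y_0>_{\ensuremath{\mathcal{X}}} y_1>_{\ensuremath{\mathcal{X}}}\dots$ in ${\ensuremath{\mathcal{X}}}$.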
The proof above produces an index for a
$Z'$-computable descending sequence in ${\ensuremath{\mathcal{X}}}$, uniformly in ${\ensuremath{\mathcal{X}}}$ and
the $Z$-computable descending sequence in ${\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$.
\begin{corollary}
\system{ACA}\ensuremath{_0}$\vdash{\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto{\boldsymbol \omega}^{\ensuremath{\mathcal{X}}})$.
\end{corollary}
\begin{proof}
The previous proof can be formalized within \system{ACA}\ensuremath{_0}.
\end{proof}
\begin{corollary}\label{cor forward ACApr}
\system{ACA}\ensuremath{'_0}$\vdash\forall n\, {\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\iexpop n{\ensuremath{\mathcal{X}}})$.
\end{corollary}
\begin{proof}
Theorem \ref{thm:om forward} implies that, given $n$, if ${\ensuremath{\mathcal{X}}}$ is a
$Z$-computable linear ordering, and $\iexpop n{\ensuremath{\mathcal{X}}}$ has a $Z$-computable
descending sequence, then $Z^{(n)}$ can compute a descending sequence
in ${\ensuremath{\mathcal{X}}}$. This can be formalized within \system{ACA}\ensuremath{'_0}.
\end{proof}
The following two theorems are new in the form they are stated.
However, they can easily be obtained from the standard proofs that
every ordinal below $\varphi_2(0)$ can be proved
well-founded in \system{ACA}\ensuremath{^+_0}, and that every ordinal below $\Gamma_0$ can
be proved well-ordered in Predicative Analysis \cite{Fef64, Sch77}.
\begin{theorem}\label{thm:epsilon forward}
If ${\ensuremath{\mathcal{X}}}$ is a $Z$-computable linear ordering, and $\boldsymbol\varepsilon_{{\ensuremath{\mathcal{X}}}}$ has a
$Z$-computable descending sequence, then $Z^{(\omega)}$ can compute a
descending sequence in ${\ensuremath{\mathcal{X}}}$.
\end{theorem}
\begin{proof}
Let $(a_k:k\in\ensuremath{\mathbb N})$ be a $Z$-computable descending sequence in
$\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$. If no constant term $\varepsilon_{x}$ appears in $a_0$, then
$a_0<\iexp{n_0}0$ for some $n_0$ so that we essentially have a
descending sequence in $\iexpop{n_0}0$. Then, applying Theorem
\ref{thm:om forward} $n_0$ times, we have that $Z^{(n_0)}$ computes a
descending sequence in $0$, a contradiction.
Thus we can let $x_0$ be the largest $x\in {\ensuremath{\mathcal{X}}}$ such that $\varepsilon_{x}$
appears in $a_0$. It is not hard to prove by induction on terms that
$\varepsilon_{x_0}\leq a_0<\iexp{n_0}{\varepsilon_{x_0}+1}$ for some $n_0\in\ensuremath{\mathbb N}$. By
Lemma \ref{lemma:om eps}, ${\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}}\!\restriction\! \iexp{n_0}{\varepsilon_{x_0}+1}$
is computably isomorphic to $\iexpop{n_0}{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}\!\restriction\!
(\varepsilon_{x_0}+1)}$ and we can view the $a_k$'s as elements of the latter.
Using Theorem \ref{thm:om forward} $n_0$ times, we obtain a
$Z^{(n_0)}$-computable descending sequence in $\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}\!\restriction\!
(\varepsilon_{x_0}+1)$. Noticing that the proof of Theorem \ref{thm:om
forward} is uniform, we can apply this process again to the sequence we
have obtained, and get an $x_1<_{\ensuremath{\mathcal{X}}} x_0$ and a descending sequence in
$\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}\!\restriction\! (\varepsilon_{x_1}+1)$ computable in $Z^{(n_0+n_1)}$ for some
$n_1\in \ensuremath{\mathbb N}$. Iterating this procedure we obtain a
$Z^{(\omega)}$-computable descending sequence $x_0>_{\ensuremath{\mathcal{X}}} x_1>_{\ensuremath{\mathcal{X}}} \dots$ in
${\ensuremath{\mathcal{X}}}$.
\end{proof}
\begin{corollary}\label{cor forward ACApl}
\system{ACA}\ensuremath{^+_0} $\vdash{\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}})$.
\end{corollary}
\begin{proof}
The previous proof can be formalized within \system{ACA}\ensuremath{^+_0}.
\end{proof}
\begin{theorem} \label{thm:veblen forward}
Let $\alpha$ be a $Z$-computable well-ordering. If ${\ensuremath{\mathcal{X}}}$ is a $Z$-computable
linear ordering, and ${\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}})$ has a $Z$-computable descending
sequence, then $Z^{(\omega^\alpha)}$ can compute a descending sequence in
${\ensuremath{\mathcal{X}}}$.
\end{theorem}
\begin{proof}
By $Z$-computable transfinite recursion on $\alpha$, we define a computable
procedure that, given a $Z$-computable index for a linear ordering ${\ensuremath{\mathcal{X}}}$
and one for a descending sequence in ${\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}})$, returns a
$Z^{(\omega^\alpha)}$-computable index for a descending sequence in ${\ensuremath{\mathcal{X}}}$. Let
$(a_k:k\in\ensuremath{\mathbb N})$ be a $Z$-computable descending sequence in
${\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}})$. Let $x_0$ be the largest $x\in {\ensuremath{\mathcal{X}}}$ such that the
constant term $\varphi_{\alpha,x}$ appears in $a_0$ (if no $\varphi_{\alpha,x}$
appears in $a_0$, just use $0$ in place of $\varphi_{\alpha,x_0}$ in the
argument below). It is not hard to prove by induction on terms that
$\varphi_{\alpha,x_0}\leq a_0<\varphi_{\beta_0}^{n_0}(\varphi_{\alpha,x_0}+1)$ for
some $\beta_0<\alpha$ and $n_0\in\ensuremath{\mathbb N}$ (where $\varphi^{n_0}_\beta(z)$ is obtained
by applying the $\varphi_\beta$ function symbol $n_0$ times to $z$). It is
also not hard to show that ${\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}}) \!\restriction\! {\varphi_{\beta_0}^{n_0}
(\varphi_{\alpha,x_0}+1)}$ is computably isomorphic to ${\boldsymbol \varphi}^{n_0}(\beta_0,
{\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}\!\restriction\! {x_0}})+1)$ (where ${\boldsymbol \varphi}^{n_0}(\beta,{\mathcal Z})$ is obtained
by applying the ${\boldsymbol \varphi}(\beta,\cdot)$-operator on linear orderings $n_0$
times to ${\mathcal Z}$). Using the induction hypothesis $n_0$ times, we obtain a
$Z^{(\omega^{\beta_0}\cdot n_0)}$-computable descending sequence in
${\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}\!\restriction\! {x_0}})+1$. Then, we apply this process again to
the sequence we have obtained, and get $x_1<_{\ensuremath{\mathcal{X}}} x_0$ and a descending
sequence in ${\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}\!\restriction\! {x_1}})+1$ computable in
$Z^{(\omega^{\beta_0} \cdot n_0+\omega^{\beta_1}\cdot n_1)}$ for some $\beta_1<\alpha$ and
$n_1\in \ensuremath{\mathbb N}$. Iterating this procedure we obtain a
$Z^{(\omega^\alpha)}$-computable descending sequence $x_0>_{\ensuremath{\mathcal{X}}} x_1>_{\ensuremath{\mathcal{X}}} \dots$ in ${\ensuremath{\mathcal{X}}}$.
\end{proof}
\begin{corollary}\label{cor forward Pi0alpha}
Let $\alpha$ be a computable ordinal. Then
$\Pi^0_{\omega^\alpha}$-\system{CA}\ensuremath{_0}$\vdash{\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$.
\end{corollary}
\begin{proof}
The previous proof can be formalized within $\Pi^0_{\omega^\alpha}$-\system{CA}\ensuremath{_0}\ for
a fixed computable $\alpha$.
\end{proof}
\begin{corollary}\label{cor forward ATR}
\system{ATR}\ensuremath{_0}$\vdash{\textsf{WOP}}({\ensuremath{\mathcal{X}}} \mapsto {\boldsymbol \varphi}({\ensuremath{\mathcal{X}}},0))$.
\end{corollary}
\begin{proof}
Let $\alpha$ be a well-ordering and assume, towards a contradiction, that
there exists a descending sequence in ${\boldsymbol \varphi}(\alpha,0)$. Let $Z$ be a real
such that both $\alpha$ and the descending sequence are $Z$-computable. By
Theorem \ref{thm:veblen forward} $Z^{(\omega^\alpha)}$ (which exists in \system{ATR}\ensuremath{_0})
computes a descending sequence in $0$, which is absurd.
\end{proof}
\section{Ordinal exponentiation and the Turing Jump}\label{sect:exp}
In this section we give a proof of the second part of Theorem
\ref{Hirst}. Our proof is a slight modification of Hirst's proof, and
prepares the ground for the generalizations in the following sections.
We start by defining a modification of the Turing jump operator with nicer combinatorial properties.
We will then define two computable approximations to this jump operator, one from strings to strings, and the other one from trees to trees.
\begin{definition}\label{def:J}
Given $Z \in \ensuremath{\N^\N}$, we define the sequence of {\em $Z$-true stages} as follows:
\[
t_{n} = \max\{ t_{n-1}+1, \mu t(\{n\}^Z_t(n)\mathord\downarrow)\},
\]
starting with $t_{-1}=1$ (so that $t_n\geq n+2$). If there is no $t$
such that $\{n\}^Z_t(n)\mathord\downarrow$, then the above definition gives
$t_n=t_{n-1}+1$. So, $t_n$ is a stage where $Z$ can correctly guess
$Z'\!\restriction\! n+1$ because $\forall m \leq n (m\in Z'\iff \{m\}^{Z\!\restriction\!
t_n}(m) \mathord\downarrow)$. With this in mind, we define the \emph{Jump
operator} to be the function $\mathcal{J}\colon \ensuremath{\N^\N} \to \ensuremath{\N^\N}$ such that for
every $Z \in \ensuremath{\N^\N}$ and $n \in \ensuremath{\mathbb N}$,
\[
\mathcal{J}(Z)(n) = Z\!\restriction\! t_n,
\]
or equivalently
\[
\mathcal{J}(Z)=\langle Z\!\restriction\! t_0, Z\!\restriction\! t_1, Z\!\restriction\! t_2, Z\!\restriction\! t_3, \dots\rangle
\]
\end{definition}
Here is a sample of this definition: {\Small
\begin{eqnarray*}
\phantom{Z=\langle Z(0), Z(1), Z(} t_0\phantom{), Z(3), Z(4), Z(5), Z(} t_1\phantom{),Z(} t_2\phantom{), Z(8), Z(9), Z(10), Z(11),Z(}t_3\phantom{),\cdots \rangle}\\
Z= \langle \underbrace{\underbrace{\underbrace{\underbrace{ Z(0), Z(1) }_{\mathcal{J}(Z)(0)}, Z(2), Z(3), Z(4), Z(5)}_{\mathcal{J}(Z)(1)}, Z(6)}_{\mathcal{J}(Z)(2)}, Z(7), Z(8), Z(9), Z(10), Z(11)}_{\mathcal{J}(Z)(3)}, Z(12),\cdots \rangle
\end{eqnarray*}
}
Of course, $\mathcal{J}(Z) \equiv_T Z'$ for every $Z$ as $n\in Z'\iff
\{n\}^{\mathcal{J}(Z)(n)}(n)\mathord\downarrow$. So, from a computability viewpoint,
there is no essential difference between $\mathcal{J}(Z)$ and the usual $Z'$.
\begin{definition}\label{def: J si}
The \emph{Jump function} is the mapping $J \colon {\ensuremath{\N^{<\N}}} \to {\ensuremath{\N^{<\N}}}$
defined as follows. For $\sigma \in {\ensuremath{\N^{<\N}}}$, define $t_{n} = \max\{
t_{n-1}+1, \mu t(\{n\}^{\sigma\!\restriction\! t}(n)\mathord\downarrow)\},$ starting with
$t_{-1}=1$ (so that $t_n\geq n+2$). Again, if there is no $t$ such that
$\{n\}^{\sigma\!\restriction\! t}(n)\mathord\downarrow$, then the above definition gives
$t_n=t_{n-1}+1$. Let $J(\sigma)=\langle\sigma\!\restriction\! t_0, \sigma\!\restriction\! t_1, \dots,
\sigma\!\restriction\! t_{k-1}\rangle$ where $k$ is least such that $t_k>|\sigma|$.
Given $\tau\in J({\ensuremath{\N^{<\N}}})$, we let $K(\tau)$ be the last entry of $\tau$
when $\tau\neq\emptyset$, and $K(\emptyset)=\emptyset$.
\end{definition}
\begin{remark}
Since we can computably decide whether $\{n\}^{\sigma\!\restriction\! t}(n)
\mathord\downarrow$, the Jump function is computable. The computability of $K$
is obvious.
\end{remark}
The following Lemma lists the key properties of $J$ and $K$. We will
refer to these properties as (\ref{P1}), \dots, (\ref{P6}).
\begin{lemma} \label{lemma:J K}
For every $\sigma, \tau'\in {\ensuremath{\N^{<\N}}}$ and $\tau\in J({\ensuremath{\N^{<\N}}})$,
\begin{enumerate} \renewcommand{\theenumi}{P\arabic{enumi}}
\item $J(\sigma)=\emptyset$ if and only if $|\sigma|\leq 1$. \label{P1}
\item $K(J(\sigma))= \sigma$ when $|\sigma|\geq 2$. \label{P2}
\item $J(K(\tau))=\tau$.\label{P3}
\item If $\sigma\neq\sigma'$ and at least one has length $\geq 2$, then
$J(\sigma)\neq J(\sigma')$. \label{P4}
\item $|J(\sigma)|<|\sigma|$ and $|K(\tau)|>|\tau|$ except when
$\tau=\emptyset$. \label{P5}
\item If $\tau'\subset\tau$ then $\tau'\in J({\ensuremath{\N^{<\N}}})$ and
$K(\tau')\subset K(\tau)$. \label{P6}
\end{enumerate}
\end{lemma}
\begin{proof}
(\ref{P1}) is obvious from the definition.
(\ref{P2}) follows from the fact that, when $|\sigma|\geq 2$, $t_{k-1} =
|\sigma|$ (using the notation of Definition \ref{def: J si}). In fact
$t_{k-1} \leq |\sigma|$ by definition of $k$, and if $t_{k-1}<|\sigma|$ then
we have either $\{k\}^{\sigma\!\restriction\! t_k}(k)\mathord\downarrow$ (and hence $t_k \leq
|\sigma|$) or $t_k = t_{k-1}+1 \leq |\sigma|$, against the definition of $k$.
(\ref{P3}) follows from (\ref{P2}) and $K(\emptyset)=\emptyset$.
(\ref{P4}) follows immediately from (\ref{P1}) and (\ref{P2}).
The first part of (\ref{P5}) follows from $t_n\geq n+2$. The second
part is a consequence of the first, (\ref{P1}) and (\ref{P2}).
(\ref{P6}) is obvious when $\tau'=\emptyset$, using the second part of
(\ref{P5}). Otherwise we have $\tau'=\langle \sigma\!\restriction\! t_0, \sigma\!\restriction\! t_1,
\dots ,\sigma\!\restriction\! t_j\rangle$ for some $j<k-1$, so that $K(\tau')=\sigma\!\restriction\!
t_j \subset \sigma\!\restriction\! t_{k-1}=K(\tau)$. It is easy to check that
$\tau'=J(\sigma\!\restriction\! t_j)$.
\end{proof}
The following Lemma explains how the Jump function approximates the
Jump operator.
\begin{lemma}\label{lemma:J approx J}
Given $Y,Z\in \ensuremath{\N^\N}$, the following are equivalent:
\begin{enumerate}
\item $Y = \mathcal{J}(Z)$;
\item for every $n$ there exists $\sigma_n \subset Z$ with $|\sigma_n|>n$
such that $Y\!\restriction\! n =J(\sigma_n)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose first that $Y = \mathcal{J}(Z)$. When $n=0$ let $\sigma_n=Z\upto1$, which
works by (\ref{P1}). When $n>0$ let $\sigma_n = K(Y\!\restriction\! n)
= \mathcal{J}(Z)(n-1) \subset Z$. If $\{0\}^Z (0)\mathord\downarrow$ then $Y(0) \subset
Z$ is such that $\{0\}^{Y(0)} (0)\mathord\downarrow$ and $Y(0) \subseteq \sigma_n$
so that also $\{0\}^{\sigma_n} (0)\mathord\downarrow$ and $J(\sigma_n)(0) = Y(0)$. If
$\{0\}^Z (0)\mathord\uparrow$ then $Y(0) = Z\upto2 = \sigma_n\upto2 =
J(\sigma_n)(0)$. This is the base step of an induction that, using the
same argument, shows that $Y(i)=J(\sigma_n)(i)$ for every $i<n$. Thus $Y
\!\restriction\! n \subseteq J(\sigma_n)$. By (\ref{P6}), we have $Y \!\restriction\! n \in
J({\ensuremath{\N^{<\N}}})$ and we can apply (\ref{P3}) and (\ref{P5}) to obtain $Y \!\restriction\!
n = J(\sigma_n)$ and $|\sigma_n|>n$.
Now assume that (2) holds, and suppose towards a contradiction that $Y
\neq \mathcal{J}(Z)$. Let $n$ be least such that $Y(n-1) \neq \mathcal{J}(Z)(n-1)$. If
$\sigma_n\subset Z$ is such that $Y\!\restriction\! n = J(\sigma_n)$ we have
$J(\sigma_n)(n-1) \neq \mathcal{J}(Z)(n-1)$. This can occur only if
$\{n-1\}^{\sigma_n}(n-1) \mathord\uparrow$ and $\{n-1\}^Z(n-1) \mathord\downarrow$, which
implies $n'>|\sigma_n|$, where $n'=|\mathcal{J}(Z)(n-1)|$. Notice that for any
$m>n'$ we have $J(Z\!\restriction\! m)(n-1) = \mathcal{J}(Z)(n-1)$ and hence $J(Z\!\restriction\!
m)(n-1) \neq Y(n-1)$. This contradicts the existence of
$\sigma_{n'}\subset Z$ with $|\sigma_{n'}|>n'$ such that $Y\!\restriction\! {n'} =
J(\sigma_{n'})$.
\end{proof}
The following corollary is obtained by iterating the Lemma.
\begin{corollary}\label{cor:Jm approx JM}
For every $m>0$, given $Y,Z\in \ensuremath{\N^\N}$, the following are equivalent:
\begin{enumerate}
\item $Y = \mathcal{J}^m(Z)$;
\item for every $n$ there exists $\sigma_n \subset Z$ with $|\sigma_n|
\geq n+m$ such that $Y\!\restriction\! n =J^m(\sigma_n)$.
\end{enumerate}
\end{corollary}
The Jump function leads to the definition of the Jump Tree.
\begin{definition}
Given a tree $T \subseteq {\ensuremath{\N^{<\N}}}$ we define the \emph{Jump Tree of
$T$} to be
\[
\mathcal{JT}(T)=\set{J(\sigma)}{\sigma\in T}.
\]
\end{definition}
The following lemmas summarize the main properties of the Jump Tree.
\begin{lemma}\label{lemma:JTcomp}
For every tree $T$, $\mathcal{JT}(T)$ is a tree computable in $T$.
\end{lemma}
\begin{proof}
$\mathcal{JT}(T)$ is a tree because if $\tau \subset J(\sigma)$ for $\sigma\in T$,
then $\tau=J(K(\tau))$ (by (\ref{P6}) and (\ref{P3})) and $K(\tau)\in
T$ (since by (\ref{P6}), (\ref{P2}) and (\ref{P1}), $K(\tau) \subset
K(J(\sigma)) \subseteq \sigma$).
$\mathcal{JT}(T)$ is computable in $T$ because $\tau\in \mathcal{JT}(T)$ if and only if
$\tau=J(K(\tau))$ (which is equivalent to $\tau \in J({\ensuremath{\N^{<\N}}})$ by
(\ref{P3})) and $K(\tau)\in T$.
\end{proof}
\begin{lemma}\label{lemma:JT paths}
For every tree $T$, $[\mathcal{JT}(T)] = \set{\mathcal{J}(Z)}{Z \in [T]}$.
\end{lemma}
\begin{proof}
First let $Z \in [T]$. By Lemma \ref{lemma:J approx J}, for every
$n \in \ensuremath{\mathbb N}$ we have $\mathcal{J}(Z) \!\restriction\! n = J(\sigma)$ for some $\sigma \subset Z$, so
$\mathcal{J}(Z) \!\restriction\! n \in \mathcal{JT}(T)$. This implies $\set{\mathcal{J}(Z)}{Z \in [T]}
\subseteq [\mathcal{JT}(T)]$.
To prove the other inclusion, fix $Y \in [\mathcal{JT}(T)]$, notice that $Y(n)
\subset Y(n+1)\in{\ensuremath{\N^{<\N}}}$ for every $n$, and let $Z
=\bigcup_{n\in\ensuremath{\mathbb N}}Y(n)\in \ensuremath{\N^\N}$. Observe that, again by Lemma
\ref{lemma:J approx J}, $Y = \mathcal{J}(Z)$ and $Z\in [T]$.
\end{proof}
We can now define the $Z$-computable linear ordering of Theorem
\ref{Hirst}: let ${\ensuremath{\mathcal{X}}}_Z = \langle \mathcal{JT}(T_Z), {\ensuremath{\leq_{\mathrm{KB}}}} \rangle$ where $T_Z = \set{Z
\!\restriction\! n}{n \in \ensuremath{\mathbb N}}$. Note that ${\ensuremath{\mathcal{X}}}_Z$ is indeed a linear ordering and,
by Lemma \ref{lemma:JTcomp}, it is $Z$-computable. Since $Z$ is the
unique path in $T_Z$, by Lemma \ref{lemma:JT paths} $\mathcal{J}(Z)$ is the
unique path in $\mathcal{JT}(T_Z)$. Moreover, for every $\tau = J(\sigma) \in
\mathcal{JT}(T_Z)$ we have that either $\tau \subset \mathcal{J}(Z)$ or there is some $i$
such that $\tau \!\restriction\! i = \mathcal{J}(Z) \!\restriction\! i$ and $\tau(i) \neq \mathcal{J}(Z)(i)$.
This can only happen if $\{i\}^\sigma(i) \mathord\uparrow$ and $\{i\}^Z(i)
\mathord\downarrow$, so that $\tau(i) \subset \mathcal{J}(Z)(i)$. By our assumption on
the coding of strings, we have $\tau(i) < \mathcal{J}(Z)(i)$ and hence $\tau
\ensuremath{<_{\mathrm{KB}}} \mathcal{J}(Z) \!\restriction\! |\tau|$.
Let $\langle \tau_n \rangle_{n \in \ensuremath{\mathbb N}}$ be an infinite \ensuremath{<_{\mathrm{KB}}}-descending
sequence in $\mathcal{JT}(T_Z)$. If $\tau_n \not\subset \mathcal{J}(Z)$ for some $n$ then
$\tau_m \ensuremath{<_{\mathrm{KB}}} \tau_n \ensuremath{<_{\mathrm{KB}}} \mathcal{J}(Z)\!\restriction\!|\tau_m|$ for all $m>n$, which by
Lemma \ref{lemma:KB} implies the existence of a path in $\mathcal{JT}(T_Z)$
different from $\mathcal{J}(Z)$, a contradiction. Therefore any infinite
descending sequence in ${\ensuremath{\mathcal{X}}}_Z$ consists only of initial segments of
$\mathcal{J}(Z)$ and hence computes $\mathcal{J}(Z) \equiv_T Z'$.
We still need to prove the existence of a $Z$-computable descending
sequence in ${\boldsymbol \omega}^{{\ensuremath{\mathcal{X}}}_Z}$. To this end we make use of the following
function.
\begin{definition}
Let $T$ be a tree and order $\mathcal{JT}(T)$ by \ensuremath{\leq_{\mathrm{KB}}}. Define $h\colon T \to
{\boldsymbol \omega}^{\mathcal{JT}(T)}$ by
\[
h(\sigma) = \left( \sum_{\substack{i<|J(\sigma)|\\ \{i\}^\sigma(i)\mathord\uparrow}}
\omega^{J(\sigma) \!\restriction\! i} \right) + \omega^{J(\sigma)} \cdot 2
\]
for $\sigma\neq \emptyset$ and $h(\emptyset) = \omega^{\emptyset}\cdot 3$.
\end{definition}
The sum above is written in \ensuremath{\leq_{\mathrm{KB}}}-decreasing order, so that indeed
$h(\sigma) \in {\boldsymbol \omega}^{\mathcal{JT}(T)}$.
Since $J$ is computable, $h$ is computable as well.
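As a hypothetical example, let $|\sigma|=6$, suppose that $4$ is the least
$t$ such that $\{1\}^{\sigma\!\restriction\! t}(1)\mathord\downarrow$, and that
$\{i\}^{\sigma\!\restriction\! t}(i)\mathord\uparrow$ for all $i\neq 1$ and all $t$. Then
$t_0=2$, $t_1=4$, $t_2=5$, $t_3=6$ and $t_4=7>|\sigma|$, so
$J(\sigma)=\langle \sigma\!\restriction\! 2,\sigma\!\restriction\! 4,\sigma\!\restriction\! 5,\sigma\!\restriction\! 6\rangle$ and
\[
h(\sigma)=\omega^{\emptyset}+\omega^{\langle \sigma\restriction 2,\sigma\restriction 4\rangle}+\omega^{\langle \sigma\restriction 2,\sigma\restriction 4,\sigma\restriction 5\rangle}+\omega^{J(\sigma)}\cdot 2,
\]
where the three single terms correspond to $i=0,2,3$ (the $i<|J(\sigma)|$
with $\{i\}^\sigma(i)\mathord\uparrow$) and the exponents $J(\sigma)\!\restriction\! i$ indeed
appear in \ensuremath{\leq_{\mathrm{KB}}}-decreasing order.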
The proof below should help the reader understand the motivation for the definition above.
\begin{lemma}\label{h:monotone}
$h$ is $({\supset}, {<_{{\boldsymbol \omega}^{\mathcal{JT}(T)}}})$-monotone.
\end{lemma}
\begin{proof}
Suppose $\rho, \sigma\in T$ are such that $\rho \supset \sigma$; we want to
show that $h(\rho) <_{{\boldsymbol \omega}^{\mathcal{JT}(T)}} h(\sigma)$.
If $\sigma=\emptyset$ then $\omega^\emptyset$ occurs with coefficient $3$ in $h(\sigma)$
and with coefficient at most $2$ in $h(\rho)$. Since $\emptyset$ is the
\ensuremath{\leq_{\mathrm{KB}}}-maximum element in {\ensuremath{\N^{<\N}}} (and hence also in $\mathcal{JT}(T)$), this implies
$h(\rho) <_{{\boldsymbol \omega}^{\mathcal{JT}(T)}} h(\sigma)$.
If $\sigma\neq\emptyset$ then $J(\sigma)\neq J(\rho)$ by (\ref{P4}). Since
$\sigma\subset\rho$, if $\{i\}^\rho(i) \mathord\uparrow$ then $\{i\}^\sigma(i)
\mathord\uparrow$ as well. Thus there are two possibilities. If for all
$i<|J(\sigma)|$, $\{i\}^\rho(i) \mathord\uparrow$ whenever $\{i\}^\sigma(i)
\mathord\uparrow$ then $J(\sigma) \subset J(\rho)$ and the first difference
between $h(\sigma)$ and $h(\rho)$ is the coefficient of $\omega^{J(\sigma)}$,
which in $h(\sigma)$ is $2$ and in $h(\rho)$ is either $1$ or $0$
(depending on whether $\{|J(\sigma)|\}^\rho(|J(\sigma)|) \mathord\uparrow$ or not).
In any case, $h(\rho) <_{{\boldsymbol \omega}^{\mathcal{JT}(T)}} h(\sigma)$. If instead for some
$i<|J(\sigma)|$, $\{i\}^\sigma(i) \mathord\uparrow$ and $\{i\}^\rho(i) \mathord\downarrow$
let $i_0$ be the least such $i$. Then the first difference between
$h(\sigma)$ and $h(\rho)$ occurs at $\omega^{J(\sigma \!\restriction\! i_0)}$, which
appears in $h(\sigma)$ but not in $h(\rho)$. Again, we have $h(\rho)
<_{{\boldsymbol \omega}^{\mathcal{JT}(T)}} h(\sigma)$.
\end{proof}
We can now finish off the proof of the second part of Theorem
\ref{Hirst}. The sequence $\langle h (Z \!\restriction\! n) \rangle_{n \in \ensuremath{\mathbb N}}$ is
$Z$-computable and strictly decreasing in ${\boldsymbol \omega}^{{\ensuremath{\mathcal{X}}}_Z}$ by Lemma
\ref{h:monotone}.
Obviously our proof yields the following generalization of the second
part of Theorem \ref{Hirst}.
\begin{theorem}\label{Hirst:gen}
For every real $Z$ there exists a $Z$-computable linear ordering ${\ensuremath{\mathcal{X}}}$
with a $Z$-computable descending sequence in ${\boldsymbol \omega}^{{\ensuremath{\mathcal{X}}}}$ such that
every descending sequence in ${\ensuremath{\mathcal{X}}}$ computes $Z'$.
\end{theorem}
\section{The $\varepsilon$ function and the $\omega$-Jump}\label{sect:eps}
In this section we extend the construction of Section \ref{sect:exp}.
Iterating the construction, even only a finite number of times, requires
generalizing the definition of $h$. Then we tackle the issue of
extending the definition at limit ordinals by considering the
$\omega$-Jump.
\subsection{Finite iterations of exponentiation and Turing Jump}\label{ssect:finite}
We start by defining a version of the function $h$ used in the previous
section that we can iterate.
\begin{definition}\label{def:hgJ}
Let {\ensuremath{\mathcal{X}}}\ be a linear ordering, $T$ a tree and
\[
g\colon \mathcal{JT}(T) \to {\ensuremath{\mathcal{X}}}
\]
a
function. Define
\[
h_g \colon T \to {\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}
\]
by
\[
h_g(\sigma) = \left( \sum_{\substack{i<|J(\sigma)|\\ \{i\}^\sigma(i)\mathord\uparrow}}
\omega^{g(J(\sigma) \!\restriction\! i)} \right) + \omega^{g(J(\sigma))} \cdot 2
\]
for $\sigma\neq \emptyset$ and $h_g(\emptyset) = \omega^{g(\emptyset)}\cdot 3$.
\end{definition}
Note that when $g$ is the identity, $h_g$ is the function $h$ of the previous
section. Also, $h_g$ is $g$-computable.
\begin{lemma}\label{lemma:monotone}
If $g$ is $({\supset}, {<_{\ensuremath{\mathcal{X}}}})$-monotone, then $h_g$ is $({\supset},
{<_{{\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}}})$-monotone.
\end{lemma}
\begin{proof}
Notice that $g$ $({\supset}, {<_{\ensuremath{\mathcal{X}}}})$-monotone implies that the sum in
the definition of $h_g(\sigma)$ is written in decreasing order. The proof
is the same as the one for Lemma \ref{h:monotone}.
\end{proof}
We can now prove the analogue of Theorem \ref{Hirst} for iterations of
the exponential (recall the notation $\iexpop n{\ensuremath{\mathcal{X}}}$ introduced in Definition
\ref{iexpop}).
\begin{theorem}\label{0^n}
For every $n \in \ensuremath{\mathbb N}$ and $Z \in \ensuremath{\N^\N}$, there is a $Z$-computable linear
ordering ${\ensuremath{\mathcal{X}}}_Z^n$ such that the jump of every descending sequence in
${\ensuremath{\mathcal{X}}}_Z^n$ computes $Z^{(n)}$, but there is a $Z$-computable descending
sequence in $\iexpop n{{\ensuremath{\mathcal{X}}}_Z^n}$.
\end{theorem}
\begin{proof}
Letting again $T_Z = \set{Z \!\restriction\! n}{n \in \ensuremath{\mathbb N}}$, we define a sequence
$\langle T_i \rangle_{i \leq n}$ of trees as follows: let $T_0 = T_Z$ and
$T_{i+1} = \mathcal{JT}(T_i)$ for every $i<n$. By induction on $i$, using Lemmas
\ref{lemma:JTcomp} and \ref{lemma:JT paths}, we have that each $T_i$ is
a $Z$-computable tree and that the only path through $T_i$ is $\mathcal{J}^i
(Z)$ (i.e.\ the result of applying $i$ times $\mathcal{J}$ starting with $Z$).
We let ${\ensuremath{\mathcal{X}}}_Z^n = \langle T_n, {\ensuremath{\leq_{\mathrm{KB}}}} \rangle$, which is a $Z$-computable linear
ordering. By Lemma \ref{lemma:KB} if $f$ is a descending sequence in
${\ensuremath{\mathcal{X}}}_Z^n$ then $\mathcal{J}^n (Z) \leq_T f'$. Since $Z^{(n)} \equiv_T \mathcal{J}^n (Z)$, the
first property of ${\ensuremath{\mathcal{X}}}_Z^n$ is proved.
To show that there is a $Z$-computable descending sequence in $\iexpop
n{{\ensuremath{\mathcal{X}}}_Z^n}$ we define by recursion on $m \leq n$ functions $g_m\colon
T_{n-m} \to \iexpop m{{\ensuremath{\mathcal{X}}}_Z^n}$. Let $g_0\colon T_n \to {\ensuremath{\mathcal{X}}}_Z^n$ be the
identity function ($T_n$ is indeed the domain of ${\ensuremath{\mathcal{X}}}_Z^n$). We define
$g_{m+1} \colon T_{n-m-1} \to \iexpop {m+1}{{\ensuremath{\mathcal{X}}}_Z^n}$ by $g_{m+1} =
h_{g_m}$ as in Definition \ref{def:hgJ}. By induction on $m \leq n$,
using Lemma \ref{lemma:monotone}, it is immediate that each $g_m$ is
$({\supset}, {<_{\iexpop m{{\ensuremath{\mathcal{X}}}_Z^n}}})$-monotone and computable. Hence the
sequence $\langle g_n (Z \!\restriction\! j) \rangle_{j \in \ensuremath{\mathbb N}}$ in $\iexpop n{{\ensuremath{\mathcal{X}}}_Z^n}$ is
$Z$-computable and descending.
\end{proof}
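In the simplest case $n=1$, the recursion unwinds to the construction of
Section \ref{sect:exp}: $g_0$ is the identity on ${\ensuremath{\mathcal{X}}}_Z^1 = \langle \mathcal{JT}(T_Z),
{\ensuremath{\leq_{\mathrm{KB}}}} \rangle$, hence
\[
g_1 = h_{g_0} = h,
\]
and the $Z$-computable descending sequence $\langle g_1 (Z \!\restriction\! j) \rangle_{j \in
\ensuremath{\mathbb N}}$ is exactly the sequence $\langle h (Z \!\restriction\! j) \rangle_{j \in \ensuremath{\mathbb N}}$ used to
prove Theorem \ref{Hirst}.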
\subsection{The $\omega$-Jump}\label{ssect:eps}
Now we define the iteration of the Jump operator at the first limit
ordinal $\omega$. Again, our definition is slightly different from the usual
one, so that it has nicer combinatorial properties: instead of pasting
all the $\mathcal{J}^i(Z)$ together as columns, we take only the first value of
each. Later we will show that this is enough.
We will also define two computable approximations to this $\omega$-Jump
operator, one from strings to strings and the other from trees to
trees, as well as a computable inverse function.
\begin{definition}
We define the \emph{$\omega$-Jump operator} to be the function
$\mathcal{J}^\omega\colon \ensuremath{\N^\N} \to \ensuremath{\N^\N}$ such that for every $Z \in \ensuremath{\N^\N}$
\[
\mathcal{J}^\omega(Z) = \langle \mathcal{J}(Z)(0),\ \mathcal{J}^2(Z)(0),\ \mathcal{J}^3(Z)(0),\dots\rangle,
\]
or, in other words, $\mathcal{J}^\omega(Z)(n)=\mathcal{J}^{n+1}(Z)(0)$.
\end{definition}
Notice that $\mathcal{J}^\omega(\mathcal{J}(Z))$ equals $\mathcal{J}^\omega(Z)$ with the first element
removed. Before showing that $\mathcal{J}^\omega(Z) \equiv_T Z^{(\omega)}$ it is
convenient to define the inverse of $\mathcal{J}^\omega$.
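The observation that $\mathcal{J}^\omega(\mathcal{J}(Z))$ is the shift of $\mathcal{J}^\omega(Z)$ is a
direct coordinatewise computation: for every $n$,
\[
\mathcal{J}^\omega(\mathcal{J}(Z))(n) = \mathcal{J}^{n+1}(\mathcal{J}(Z))(0) = \mathcal{J}^{n+2}(Z)(0) = \mathcal{J}^\omega(Z)(n+1).
\]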
\begin{definition}
Given $Y\in \mathcal{J}^\omega(\ensuremath{\N^\N})$ we define
\[
\mathcal{K}^{\omega}(Y) = \bigcup_{n} K^{n}(Y(n)).
\]
\end{definition}
We need to show that the union above makes sense. Assume $Y=\mathcal{J}^\omega(Z)$.
It might help to look at Figure \ref{fig: Y=Jom Z}. Notice that for
each $n$, $Y(n) \subset \mathcal{J}^n(Z)$ because for every $X$, $\mathcal{J}(X)(0)
\subset X$ and $Y(n)=\mathcal{J}(\mathcal{J}^n(Z))(0)$. We also know that if $\sigma \subset
\mathcal{J}(X)$, then $K(\sigma)\subset X$, so that $K^{n}(Y(n))\subset Z$. It
follows that $\bigcup_{n} K^{n}(Y(n))\subseteq Z$. Applying (\ref{P5})
$n$ times we get that $|K^{n}(Y(n))| > n$, and therefore the union
above does actually produce $Z$. We have just proved the following
lemma.
\begin{figure}
{\Small
\xymatrix@C=12pt@R=12pt{
\mathcal{J}^\omega(Z) \ar@{}|(.6){=}[r] \ar@{}|(.4){\shortparallel}[d] \ar@{}|(.8){\frown}[d] & Y\\
\vdots & \vdots \\
\mathcal{J}^4(Z)(0) \ar@{}|(.6){=}[r] & Y(3) \ar@{}|{\subset}[r] & \cdots & & & &\vdots \\
\mathcal{J}^3(Z)(0) \ar@{}|(.6){=}[r] & Y(2) \ar@{}|(.4){\subset}[r] & K(Y(3)) \ar_{J}[ul] \ar@{}|{\subset}[r] & \cdots& \cdots& \cdots \ar@{}|(.3){\subset}[r] & \mathcal{J}^2(Z) \\
\mathcal{J}^2(Z)(0) \ar@{}|(.6){=}[r] & Y(1) \ar@{}|(.4){\subset}[r] & K(Y(2)) \ar_{J}[ul] \ar@{}|{\subset}[r] & K^2(Y(3)) \ar_{J}[ul] \ar@{}|{\subset}[r] & \cdots& \cdots \ar@{}|(.3){\subset}[r] & \mathcal{J}(Z) \ar_{\mathcal{J}}[u]\\
\mathcal{J}(Z)(0) \ar@{}|(.6){=}[r] & Y(0) \ar@{}|(.4){\subset}[r] & K(Y(1)) \ar_{J}[ul] \ar@{}|{\subset}[r] & K^2(Y(2)) \ar_{J}[ul]\ar@{}|{\subset}[r]& K^3(Y(3)) \ar_{J}[ul] \ar@{}|(.7){\subset}[r] & \cdots\ar@{}|(.3){\subset}[r] & Z \ar_{\mathcal{J}}[u] \\
\ar@{}|{\smile}[u] & K^\omega(Y\!\restriction\! 1) \ar@{}|{\subset}[r]\ar@{}|{\shortparallel}[u] & K^\omega(Y\!\restriction\! 2) \ar@{}|{\subset}[r]\ar@{}|{\shortparallel}[u] & K^\omega(Y\!\restriction\! 3) \ar@{}|{\subset}[r]\ar@{}|{\shortparallel}[u] & K^\omega(Y\!\restriction\! 4) \ar@{}|(.7){\subset}[r]\ar@{}|{\shortparallel}[u] & \cdots\ar@{}|(.3){\subset}[r] &\mathcal{K}^\omega(Y) \ar@{}|{\shortparallel}[u] \\
}}
\caption{Assuming $Y=\mathcal{J}^\omega(Z)$.} \label{fig: Y=Jom Z}
\end{figure}
\begin{lemma}\label{lemma:Kom Bai}
For every $Z\in \ensuremath{\N^\N}$, $\mathcal{K}^\omega(\mathcal{J}^\omega(Z))=Z$.
\end{lemma}
\begin{lemma} \label{lemma: Z om jump}
For every $Z\in \ensuremath{\N^\N}$, $\mathcal{J}^\omega(Z) \equiv_T Z^{(\omega)}$.
\end{lemma}
\begin{proof}
We already know that $Z^{(n)}\equiv_T \mathcal{J}^n(Z)$ uniformly in $n$ and hence
that $Z^{(\omega)} = \bigoplus_{n\in\ensuremath{\mathbb N}}Z^{(n)} \equiv_T \bigoplus_{n\in\ensuremath{\mathbb N}}
\mathcal{J}^n(Z)$. It immediately follows that $\mathcal{J}^\omega(Z) \leq_T Z^{(\omega)}$.
For the other direction we need to uniformly compute all the reals
$\mathcal{J}^n(Z)$ from $\mathcal{J}^\omega(Z)$. We do this as follows. Given $Y\in\ensuremath{\N^\N}$,
let $Y^{-n}$ be $Y$ with its first $n$ elements removed. Then,
$\mathcal{J}^\omega(\mathcal{J}^n(Z))=\mathcal{J}^\omega(Z)^{-n}$. By the lemma above we get that
$\mathcal{J}^n(Z)=\mathcal{K}^\omega(\mathcal{J}^\omega(Z)^{-n})$, which we can compute uniformly from
$\mathcal{J}^\omega(Z)$.
\end{proof}
As in Section \ref{sect:exp}, where we computably approximated the jump
operator, we will now approximate the $\omega$-Jump operator with a
computable operation on finite strings.
\begin{definition}\label{def:Jom si}
The \emph{$\omega$-Jump function} is the map $J^\omega \colon {\ensuremath{\N^{<\N}}} \to {\ensuremath{\N^{<\N}}}$
defined as follows. Given $\sigma\in {\ensuremath{\N^{<\N}}}$, let
\[
J^\omega(\sigma) = \langle J(\sigma)(0),\ J^2(\sigma)(0),\dots, J^{n-1}(\sigma)(0)\rangle,
\]
where $n$ is the least number such that $J^n(\sigma)=\emptyset$ (there is always such
an $n$, because, by (\ref{P5}), $|J^i(\sigma)|\leq|\sigma|-i$ for $i\leq |\sigma|$). Note that then, by (\ref{P1}),
$|J^{n-1}(\sigma)|=1$.
\end{definition}
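As a hypothetical illustration, suppose $|\sigma|=4$ and the iterated Jumps
have lengths $|J(\sigma)|=3$, $|J^2(\sigma)|=1$ and $J^3(\sigma)=\emptyset$ (consistent
with (\ref{P5})). Then $n=3$ and
\[
J^\omega(\sigma) = \langle J(\sigma)(0),\ J^{2}(\sigma)(0)\rangle,
\]
a string of length $n-1=2$ whose last entry comes from the length-one
string $J^{2}(\sigma)$.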
$J^\omega$ is computable (because $J$ is computable) and we will now
define its computable partial inverse $K^\omega$.
\begin{definition}\label{def:Kom si}
Given $\tau\in J^\omega({\ensuremath{\N^{<\N}}})$, let $K^\omega(\tau)= K^{|\tau|}(\ell(\tau))$
(recall that $\ell(\tau)=\langle \tau(|\tau|-1)\rangle$) when $\tau\neq\emptyset$, and
$K^\omega(\emptyset)=\emptyset$.
\end{definition}
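To see how $K^\omega$ undoes $J^\omega$, consider hypothetically $\tau =
J^\omega(\sigma)$ with $|\tau|=2$. Then $|J^2(\sigma)|=1$ and $J^3(\sigma)=\emptyset$, so
\[
K^\omega(\tau) = K^{2}(\langle \tau(1)\rangle) = K^{2}(\langle J^{2}(\sigma)(0)\rangle) = K^{2}(J^{2}(\sigma)) = \sigma
\]
by (\ref{P2}). This computation is carried out in general in
(\ref{Pom2}) below.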
The following properties are the analogues of those of Lemma
\ref{lemma:J K} for the $\omega$-Jump function and its inverse. We will
refer to them as (\ref{Pom1}), \dots, (\ref{Pom7}).
\begin{lemma} \label{lemma:Jom Kom}
For $\sigma,\sigma', \tau'\in {\ensuremath{\N^{<\N}}}$, $\tau\in J^\omega({\ensuremath{\N^{<\N}}})$,
\begin{enumerate}\renewcommand{\theenumi}{P$^\omega$\arabic{enumi}}
\item $J^\omega(\sigma)=\emptyset$ if and only if $|\sigma|\leq 1$. \label{Pom1}
\item $K^\omega(J^\omega(\sigma))= \sigma$ for $|\sigma|\geq 2$. \label{Pom2}
\item $J^\omega(K^\omega(\tau))=\tau$. \label{Pom3}
\item If $\sigma\neq\sigma'$ and at least one has length $\geq 2$, then
$J^\omega(\sigma)\neq J^\omega(\sigma')$. \label{Pom4}
\item $|J^\omega(\sigma)|<|\sigma|$ and $|K^\omega(\tau)|>|\tau|$ except when
$\tau=\emptyset$. \label{Pom5}
\item If $\tau'\subset\tau$ then $\tau'\in J^\omega({\ensuremath{\N^{<\N}}})$ and
$K^\omega(\tau')\subset K^\omega(\tau)$. \label{Pom6}
\item If $J^\omega(\sigma')\subseteq J^\omega(\sigma)$ then, for every $m$, $J^m(\sigma')\subseteq J^m(\sigma)$. \label{Pom7}
\end{enumerate}
\end{lemma}
\begin{proof}
(\ref{Pom1}) follows from (\ref{P1}) and the fact that $J^\omega(\sigma)=\emptyset$
is equivalent to $J(\sigma)=\emptyset$.
To prove (\ref{Pom2}) let $|J^\omega(\sigma)|=n>0$. Then $\ell(J^\omega(\sigma)) =
\langle J^n(\sigma)(0) \rangle = J^n(\sigma)$ because $|J^n(\sigma)|=1$ as noticed in
the definition of $J^\omega$. Thus $K^\omega(J^\omega(\sigma)) = K^n(J^n(\sigma)) =
\sigma$ by (\ref{P2}).
(\ref{Pom3}) follows from (\ref{Pom2}) when $\tau\neq\emptyset$, and from $K^\omega(\emptyset)=\emptyset$ and (\ref{Pom1}) when $\tau=\emptyset$. (\ref{Pom4})
follows immediately from (\ref{Pom1}) and (\ref{Pom2}). The first part
of (\ref{Pom5}) is immediate because by (\ref{P5}) we have
$J^{|\sigma|}(\sigma)=\emptyset$. The second part of (\ref{Pom5}) is a consequence
of the first, (\ref{Pom1}) and (\ref{Pom2}).
For (\ref{Pom6}) look at Figure \ref {fig tau = Jom si}.
\begin{figure}
{\Small
\[
\xymatrix@=10pt{
\tau \ar@{}|{=}[r] \ar@{}|(.4){\shortparallel}[dd]& J^{\omega}(\sigma) \ar@{}|(.4){\shortparallel}[dd] \\
\ar@{}|{\frown}[d] & \ar@{}|{\frown}[d] &\ar@{-}[dddddd] && \emptyset \\
\tau(|\tau|-1)\ar@{}|{=}[r] &J^{|\tau|}(\sigma)(0)& &\langle \tau(|\tau|-1)\rangle\ar@{}|(.6){=}[r]& J^{|\tau|}(\sigma) \ar_J[u] \\
\vdots & \vdots &&&& \ddots \ar_J[ul] \\
\tau(|\tau'|-1)\ar@{}|{=}[r] & J^{|\tau'|}(\sigma)(0) && \langle \tau(|\tau'|-1)\rangle\ar@{}|(.6){=}[r] &J^{|\tau'|}(\sigma') \ar@{}|(.8){\subset}[r] \ar_K[dr] & \cdots \ar@{}|(.3){\subset}[r] & J^{|\tau'|}(\sigma) \ar_J[ul] \\
\vdots& \vdots &&&&\ddots \ar_K[dr]& &\ddots \ar_J[ul]\\
\tau(0)\ar@{}|{=}[r]& J(\sigma)(0) &&&&& J(\sigma') \ar_K[dr] \ar@{}|(.6){\subset}[r] & \cdots \ar@{}|(.3){\subset}[r] & J(\sigma) \ar_J[ul]\\
\ar@{}|{\smile}[u] & \ar@{}|{\smile}[u]&&&&& &\sigma' \ar@{}|(.6){\subset}[r] & \cdots \ar@{}|(.3){\subset}[r] & \sigma \ar_J[ul]\\
}
\]}
\caption{Assuming $\tau=J^\omega(\sigma)$ and $\tau'\subset\tau$.}\label{fig tau = Jom si}
\end{figure}
(\ref{Pom6}) is obvious when $\tau'=\emptyset$, using the second part of
(\ref{Pom5}). Otherwise, let $\sigma$ be such that $\tau=J^\omega(\sigma)$. The
idea is to define $\sigma'\subset \sigma$ as in the picture and then show
that $\tau'=J^\omega(\sigma')$. Notice that $|\sigma|>|\tau|\geq2$ by
(\ref{Pom5}), and that $\sigma=K^\omega(\tau)$ by (\ref{Pom2}). Notice also
that
\[
\ell(\tau') = \langle \tau(|\tau'|-1)\rangle = \langle J^{|\tau'|}(\sigma)(0) \rangle \subset J^{|\tau'|}(\sigma)
\]
because $|\tau'|<|\tau|$ and hence $|J^{|\tau'|}(\sigma)|>1$. Let $\sigma' =
K^{|\tau'|}(\ell(\tau'))$. Using (\ref{P6}) $|\tau'|$ times we know
that $\sigma' \subset \sigma$ and $J^{|\tau'|}(\sigma')=\ell(\tau')$. Now, we
need to show that $J^{\omega}(\sigma')=\tau'$. First notice that
$|J^\omega(\sigma')| = |\tau'|$ because $|J^{|\tau'|} (\sigma')| = |\ell(\tau')|
=1$. By induction on $i \leq |\tau'|$ we can show, using (\ref{P6}) and
(\ref{P2}), that
\begin{equation}\label{aa}
J^{|\tau'|-i}(\sigma') = K^i(\ell(\tau')) \subset K^i(J^{|\tau'|}(\sigma)) = J^{|\tau'|-i}(\sigma).
\end{equation}
Now for $j<|\tau'|$, $J^\omega(\sigma')(j) = J^{j+1}(\sigma')(0) = J^{j+1}(\sigma)(0) = \tau(j) = \tau'(j)$.
For (\ref{Pom7}) let $\tau'=J^\omega(\sigma')$. Then, if $i=|\tau'|-m$,
equation (\ref{aa}) shows that $J^m(\sigma') \subseteq J^m(\sigma)$.
\end{proof}
As we did in Section \ref{sect:exp}, we now explain how the $\omega$-Jump
function approximates the $\omega$-Jump operator.
\begin{lemma}\label{lemma:Jom approx Jom}
Given $Y,Z\in \ensuremath{\N^\N}$, the following are equivalent:
\begin{enumerate}
\item $Y = \mathcal{J}^\omega(Z)$;
\item for every $n$ there exists $\sigma_n\subset Z$ with $|\sigma_n|>n$ such that $Y\!\restriction\! n =J^\omega(\sigma_n)$.
\end{enumerate}\end{lemma}
\begin{proof}
First assume $Y = \mathcal{J}^\omega(Z)$. When $n=0$ let $\sigma_0=Z\!\restriction\! 1$, which
works by (\ref{Pom1}). When $n>0$ let $\sigma_n = K^\omega(Y \!\restriction\! n)$. We
recommend that the reader look at Figure \ref{fig: Y=Jom Z} again. By
(\ref{Pom3}) we have $Y\!\restriction\! n =J^\omega(\sigma_n)$. Since $\sigma_n = K^n(\langle
Y(n-1)\rangle) = K^{n-1}(Y(n-1))$, $\sigma_n$ is one of the strings occurring
in the definition of $\mathcal{K}^\omega(Y)$ and hence $\sigma_n \subset \mathcal{K}^\omega(Y)=Z$
by Lemma \ref{lemma:Kom Bai}. We get that $|\sigma_n|>n$ by (\ref{P5})
applied $n$ times to $\sigma_n=K^n(\langle Y(n-1)\rangle)$.
Suppose now that (2) holds. By (\ref{Pom7}), we get that for all $m$
and $n$, $J^m(\sigma_n)\subseteq J^m(\sigma_{n+1})$. Using Corollary
\ref{cor:Jm approx JM}, it is straightforward to show that for every
$m$, $\bigcup_n J^m(\sigma_n) = \mathcal{J}^m(Z)$ and hence
$J^m(\sigma_n)(0)=\mathcal{J}^m(Z)(0)$ whenever $n>m$. It follows that for every $m$ and $n>m$
\[
\mathcal{J}^\omega(Z)(m)=\mathcal{J}^{m+1}(Z)(0)=J^{m+1}(\sigma_{n})(0) = J^\omega(\sigma_{n})(m)= Y(m).\qedhere
\]
\end{proof}
Again as in Section \ref{sect:exp}, the $\omega$-Jump function leads to
the definition of the $\omega$-Jump Tree.
\begin{definition}
Given a tree $T \subseteq {\ensuremath{\N^{<\N}}}$ the \emph{$\omega$-Jump Tree of $T$}
is
\[
\mathcal{JT}^\omega(T) = \set{J^\omega(\sigma)}{\sigma\in T}.
\]
\end{definition}
\begin{lemma}\label{lemma:JTom comp}
For every tree $T$, $\mathcal{JT}^\omega(T)$ is a tree computable in $T$.
\end{lemma}
\begin{proof}
The proof is identical to the proof of Lemma \ref{lemma:JTcomp}, using
Lemma \ref{lemma:Jom Kom} in place of Lemma \ref{lemma:J K}.
\end{proof}
\begin{lemma}\label{lemma:JTom paths}
For every tree $T$, $[\mathcal{JT}^\omega(T)] = \set{\mathcal{J}^\omega(Z)}{Z \in [T]}$.
\end{lemma}
\begin{proof}
To prove $\set{\mathcal{J}^\omega(Z)}{Z \in [T]} \subseteq [\mathcal{JT}^\omega(T)]$ we can
argue as in the proof of Lemma \ref{lemma:JT paths}, using Lemma
\ref{lemma:Jom approx Jom} in place of Lemma \ref{lemma:J approx J}.
To prove the other inclusion, fix $Y \in [\mathcal{JT}^\omega(T)]$. For each $n$,
let $\sigma_n=K^\omega(Y\!\restriction\! n)\in T$. Since $J^\omega(\sigma_n) = Y \!\restriction\! n$ by
(\ref{Pom3}), we have $\sigma_n \subset \sigma_{n+1}$ for each $n$ by
(\ref{Pom6}). Let $Z=\bigcup_{n\in \ensuremath{\mathbb N}}\sigma_n \in [T]$. Then, by Lemma
\ref {lemma:Jom approx Jom} we get $Y=\mathcal{J}^\omega(Z)$.
\end{proof}
\subsection{$\omega$-Jumps versus the epsilon function}\label{ssect:omJ
eps}
Our goal now is to generalize Definition \ref{def:hgJ} with an operator
that uses $\boldsymbol\varepsilon$ rather than ${\boldsymbol \omega}$. We thus wish to define an
operator $h^\omega$ that, given an order preserving function $g\colon
\mathcal{JT}^\omega (T) \to {\ensuremath{\mathcal{X}}}$ (where {\ensuremath{\mathcal{X}}}\ is a linear order), returns an order
preserving function $h^\omega_g \colon T \to \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$. To do so we will
iterate the $h$ operator of Definition \ref{def:hgJ} along the elements
of $\mathcal{JT}^\omega(T)$.
Let us give the rough motivation behind the definition of the operator
$h^\omega$ below. Suppose we are given an order preserving function
$g\colon\mathcal{JT}^\omega (T) \to {\ensuremath{\mathcal{X}}}$. For each $i$, we would like to define a
monotone function $f_i\colon \mathcal{JT}^i(T)\to \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ such that
$f_i=h_{f_{i+1}}$, where $h_{f_{i+1}}$ is as in Definition
\ref{def:hgJ}. Notice that the range of this function is correct, using
the fact that ${\boldsymbol \omega}^{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}}$ is computably isomorphic to
$\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$. However, we do not have a place to start as to define such
$f_i$ we would need $f_{i+1}$, and this recursion goes the wrong way.
Note that if $\tau=J^\omega(\sigma)\in \mathcal{JT}^\omega (T)$, then $\langle\tau(i)\rangle \in
\mathcal{JT}^i(T)$, and we could use $g$ to define $f_i$ at least on the strings
of length 1, of the form $\langle\tau(i)\rangle$. (This is not exactly what we
are going to do, but it should help picture the construction.) The good
news is that to calculate $f_{i-1} = h_{f_i}$ on strings of length at
most 2, we only need to know the values of $f_i$ on strings of length
at most 1. Inductively, this would allow us to calculate $f_0\colon
T\to {\ensuremath{\mathcal{X}}}$ on strings of length at most $i$. Since this would work for
all $i$, we get $f_0$ defined on all $T$. We now give the precise
definition.
First, we need to iterate the Jump Tree operator along any finite
string.
\begin{definition}
If $T$ is a tree we define
\[
\mathcal{JT}^\omega_\tau(T) = \set{J^{|\tau|+1}(\sigma)}{\sigma\in T \land \tau\subseteq J^{\omega}(\sigma)}.
\]
\end{definition}
Notice that $\mathcal{JT}^\omega_\tau(T) \subseteq \mathcal{JT}^{|\tau|+1}(T)$ and that
$\mathcal{JT}^\omega_\tau(T)$ is empty when $\tau \notin \mathcal{JT}^\omega(T)$. The following
lemma provides an alternative, inductive definition of
$\mathcal{JT}^\omega_\tau(T)$.
\begin{lemma}\label{lemma: JT tau recursive}
Given a tree $T \subseteq {\ensuremath{\N^{<\N}}}$,
\begin{align*}
\mathcal{JT}^\omega_\emptyset(T) &= \mathcal{JT}(T) \\
\mathcal{JT}^\omega_{\tau^\smallfrown\langle c\rangle} (T) &= \mathcal{JT}(\mathcal{JT}^\omega_\tau(T)_{\langle c\rangle}).
\end{align*}
($T_{\langle c\rangle}$ was defined in \ref{def:Tsigma} as $\set{\rho\in T}{\langle
c\rangle \subseteq \rho \lor \rho=\emptyset}$.)
\end{lemma}
\begin{proof}
Straightforward induction on $|\tau|$.
\end{proof}
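For instance, unwinding the recursion of Lemma \ref{lemma: JT tau
recursive} for a string of length two gives
\[
\mathcal{JT}^\omega_{\langle c_0, c_1\rangle}(T) = \mathcal{JT}\bigl(\mathcal{JT}\bigl(\mathcal{JT}(T)_{\langle c_0\rangle}\bigr)_{\langle c_1\rangle}\bigr),
\]
so $\mathcal{JT}^\omega_\tau(T)$ alternates applications of the Jump Tree operator
with restrictions to the successive entries of $\tau$.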
The next lemma links $\mathcal{JT}^\omega_\tau(T)$ to $\mathcal{JT}^\omega(T)$.
\begin{lemma}
Given a tree $T \subseteq {\ensuremath{\N^{<\N}}}$, $\tau\in {\ensuremath{\N^{<\N}}}$, and $c \in \ensuremath{\mathbb N}$,
\[
\tau^\smallfrown \langle c\rangle \in \mathcal{JT}^\omega(T) \iff \langle c \rangle \in \mathcal{JT}^\omega_{\tau}(T).
\]
\end{lemma}
\begin{proof}
This follows immediately from the definitions of $\mathcal{JT}^\omega(T)$ and
$\mathcal{JT}^\omega_{\tau}(T)$.
\end{proof}
\begin{definition}\label{def:fgtau}
Let {\ensuremath{\mathcal{X}}}\ be a linear ordering and $g\colon \mathcal{JT}^\omega (T) \to {\ensuremath{\mathcal{X}}}$ be a
function. We define simultaneously for each $\tau \in \mathcal{JT}^\omega(T)$ a
function
\[
f_\tau \colon \mathcal{JT}^\omega_\tau (T) \to \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}
\]
by recursion on $|\sigma|$:
\[
f_\tau (\sigma) =
\begin{cases}
\varepsilon_{g(\tau)} & \text{if $\sigma = \emptyset$;}\\
h_{f_{\tau ^\smallfrown \langle \sigma(0) \rangle}} (\sigma) & \text{if $\sigma \neq \emptyset$.}
\end{cases}
\]
Here $h_{f_{\tau ^\smallfrown \langle \sigma(0) \rangle}}$ is defined according to
Definition \ref{def:hgJ}. We then define
\[
h^\omega_g = h_{f_\emptyset}\colon T \to \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}.
\]
\end{definition}
\begin{remark}
First of all notice that we are really doing a recursion on $|\sigma|$. In
fact, to compute $h_{f_{\tau ^\smallfrown \langle \sigma(0) \rangle}} (\sigma)$ when $\sigma
\neq \emptyset$ we use $f_{\tau ^\smallfrown \langle \sigma(0) \rangle}$ on strings of the form
$J(\sigma) \!\restriction\! i$, which have length $\leq |J(\sigma)|<|\sigma|$ by
(\ref{P5}).
Let us check that the functions have the right domains and ranges. The
proof is done simultaneously for all $\tau\in \mathcal{JT}^\omega(T)$ by induction
on $|\sigma|$. Take $\sigma\in \mathcal{JT}^\omega_\tau(T)$ with $|\sigma|=n$. Suppose that
for all $\tau'\in \mathcal{JT}^\omega(T)$ and all $\sigma'\in \mathcal{JT}^\omega_{\tau'}(T)$ with
$|\sigma'|<n$ we have that $f_{\tau'}(\sigma')$ is defined and
$f_{\tau'}(\sigma') \in \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$.
If $\sigma=\emptyset$, then $f_\tau(\sigma) = \varepsilon_{g(\tau)} \in \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$.
Suppose $\sigma\neq\emptyset$ and let $\tau'=\tau^\smallfrown\langle\sigma(0)\rangle$. Then
$f_\tau(\sigma)=h_{f_{\tau'}}(\sigma)$. When computing $h_{f_{\tau'}}(\sigma)$,
we only apply $f_{\tau'}$ to strings of the form $J(\sigma) \!\restriction\! i$.
These strings have length less than $n$ and, by Lemma
\ref{lemma:JTcomp}, belong to $\mathcal{JT}(\mathcal{JT}^\omega_\tau(T)_{\langle\sigma(0)\rangle})=
\mathcal{JT}^\omega_{\tau'}(T)$ (by Lemma \ref{lemma: JT tau recursive}). By the
induction hypothesis we have that $f_{\tau'}$ is defined on these
strings and takes values in $\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$. Therefore,
$h_{f_{\tau'}}(\sigma)$ is defined and $h_{f_{\tau'}}(\sigma) \in
{\boldsymbol \omega}^{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}}$. Using that ${\boldsymbol \omega}^{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}}= \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$, we get
that $f_\tau \colon \mathcal{JT}^\omega_\tau (T) \to \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$.
Finally, since $f_\emptyset \colon \mathcal{JT}(T) \to \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$, we get that
$h^\omega_g \colon T \to \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$.
\end{remark}
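It may also help to compute $f_\tau$ explicitly on strings of length
one. If $\langle c\rangle \in \mathcal{JT}^\omega_\tau(T)$, then $J(\langle c\rangle)=\emptyset$ by
(\ref{P1}), so the sum in Definition \ref{def:hgJ} is empty and
\[
f_\tau(\langle c\rangle) = h_{f_{\tau^\smallfrown\langle c\rangle}}(\langle c\rangle) = \omega^{f_{\tau^\smallfrown\langle c\rangle}(\emptyset)}\cdot 2 = \omega^{\varepsilon_{g(\tau^\smallfrown\langle c\rangle)}}\cdot 2 = \varepsilon_{g(\tau^\smallfrown\langle c\rangle)}\cdot 2,
\]
using ${\boldsymbol \omega}^{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}}= \boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ for the last equality. This is where
the values of $g$ enter the construction.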
\begin{lemma}\label{lemma:om monotone}
If $g\colon \mathcal{JT}^\omega(T)\to{\ensuremath{\mathcal{X}}}$ is $(\supset,<_{\ensuremath{\mathcal{X}}})$-monotone, then
$h_g^\omega\colon T\to\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}$ is $(\supset,<_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}})$-monotone.
\end{lemma}
\begin{proof}
First, we note that by Lemma \ref{lemma:monotone}, it suffices to show
that $f_\emptyset$ is $(\supset,<_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}})$-monotone. We will actually
show that for every $\tau\in \mathcal{JT}^\omega(T)$, $f_\tau$ is
$(\supset,<_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}})$-monotone.
The proof is again done simultaneously for all $\tau\in \mathcal{JT}^\omega(T)$ by
induction on the length of the strings. Suppose that on strings of
length less than $n$, for every $\tau'$, $f_{\tau'}$ is
$(\supset,<_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}})$-monotone. Let $\sigma' \subset \sigma \in
\mathcal{JT}^\omega_\tau (T)$ with $|\sigma|=n$. Let $\tau'=\tau^\smallfrown\langle\sigma(0)\rangle$.
Consider first the case when $\sigma'=\emptyset$. Then $f_\tau(\sigma') =
\varepsilon_{g(\tau)}$ while $f_\tau(\sigma)$ is a finite sum of terms of the
form $\omega^{f_{\tau'} (J(\sigma)\!\restriction\! i)}$. By the induction hypothesis,
the exponent of each such term is less than or equal to $f_{\tau'}(\emptyset)
= \varepsilon_{g(\tau')} <_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}} \varepsilon_{g(\tau)}$. So, the whole sum is
less than $\varepsilon_{g(\tau)} = f_\tau(\sigma')$. Suppose now that
$\sigma'\neq\emptyset$. Since the proof of Lemma \ref{lemma:monotone} (based on
the proof of Lemma \ref{h:monotone}) uses the monotonicity of
$f_{\tau'}$ only for strings shorter than $\sigma$ (by (\ref{P5})), we get
that $h_{f_{\tau'}}(\sigma') >_{\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}}} h_{f_{\tau'}}(\sigma)$.
\end{proof}
\begin{theorem}\label{omega}
For every $Z \in \ensuremath{\N^\N}$, there is a $Z$-computable linear ordering ${\ensuremath{\mathcal{X}}}$
such that the jump of every descending sequence in ${\ensuremath{\mathcal{X}}}$ computes
$Z^{(\omega)}$, but there is a $Z$-computable descending sequence in
$\boldsymbol\varepsilon_{{\ensuremath{\mathcal{X}}}}$.
\end{theorem}
\begin{proof}
Let ${\ensuremath{\mathcal{X}}} = \langle \mathcal{JT}^\omega (T_Z), {\ensuremath{\leq_{\mathrm{KB}}}} \rangle$ where again $T_Z = \set{Z\!\restriction\!
n}{n\in \ensuremath{\mathbb N}}$. By Lemma \ref{lemma:JTom comp}, ${\ensuremath{\mathcal{X}}}$ is $Z$-computable.
By Lemma \ref{lemma:JTom paths}, $\mathcal{J}^\omega(Z)$ is the unique path in
$\mathcal{JT}^\omega (T_Z)$. Therefore, by Lemma \ref{lemma:KB}, the jump of every
descending sequence in ${\ensuremath{\mathcal{X}}}$ computes $\mathcal{J}^\omega(Z) \equiv_T Z^{(\omega)}$.
Let $g$ be the identity on ${\ensuremath{\mathcal{X}}}$, which is obviously
$(\supset,<_{{\ensuremath{\mathcal{X}}}})$-monotone. By Lemma \ref{lemma:om monotone},
$h_g^\omega$ is $(\supset,<_{\boldsymbol\varepsilon_{{\ensuremath{\mathcal{X}}}}})$-monotone. Since $h_g^\omega$ is
computable, $\set{h_g^\omega(Z\!\restriction\! n)}{n\in\ensuremath{\mathbb N}}$ is a $Z$-computable
descending sequence in $\boldsymbol\varepsilon_{{\ensuremath{\mathcal{X}}}}$.
\end{proof}
\subsection{Reverse mathematics results}\label{ssect:rmepsilon}
In this section, we work in the weak system \system{RCA}\ensuremath{_0}. Therefore, we do not
have an operation that given $Z\in{\ensuremath{\N^{<\N}}}$, returns $\mathcal{J}(Z)$, let alone
$\mathcal{J}^{\omega}(Z)$. However, the predicates with two variables $Z$ and $Y$
that say $Y=\mathcal{J}(Z)$ and $Y=\mathcal{J}^\omega(Z)$ are arithmetic as witnessed by
Lemmas \ref {lemma:J approx J} and \ref {lemma:Jom approx Jom}. Notice
that if condition (2) of Lemma \ref {lemma:J approx J} holds, then
\system{RCA}\ensuremath{_0}\ can recover the sequence of $t_i$'s in the definition of $\mathcal{J}(Z)$
and show that $\mathcal{J}(Z)$ is as defined in \ref {def:J}. Furthermore, \system{RCA}\ensuremath{_0}\
can show that $\mathcal{J}(Z)\equiv_T Z'$ and hence that \system{ACA}\ensuremath{_0}\ is equivalent to
\system{RCA}\ensuremath{_0}+$\forall Z\exists Y(Y=\mathcal{J}(Z))$, and \system{ACA}\ensuremath{'_0}\ is equivalent to
\system{RCA}\ensuremath{_0}+$\forall Z\forall n\exists Y(Y=\mathcal{J}^n(Z))$.
Also, if condition (2) of Lemma \ref {lemma:Jom approx Jom} holds, then
as in the proof of that lemma, in \system{RCA}\ensuremath{_0}\ we can uniformly build
$\mathcal{J}^m(Z)$ as $\bigcup_{n}J^m(\sigma_n)$, and show that $\mathcal{J}^\omega(Z)$ is as
defined in Definition \ref {def:Jom si}. Furthermore, we can prove
Lemma \ref {lemma: Z om jump} in \system{RCA}\ensuremath{_0}: if $Y=\mathcal{J}^\omega(Z)$, then $Y$ can
compute $Z^{(\omega)}$, and if $X=Z^{(\omega)}$, then $X$ can compute a real
$Y$ such that $Y=\mathcal{J}^\omega(Z)$. Therefore, we get that \system{ACA}\ensuremath{^+_0}\ is
equivalent to \system{RCA}\ensuremath{_0}+$\forall Z\exists Y(Y=\mathcal{J}^{\omega}(Z))$.
We already know, from Girard's result (Theorem \ref{Girard}), that over
\system{RCA}\ensuremath{_0}, the statement \lq\lq if {\ensuremath{\mathcal{X}}}\ is a well-ordering then ${\boldsymbol \omega}^{\ensuremath{\mathcal{X}}}$ is
a well-ordering\rq\rq\ is equivalent to \system{ACA}\ensuremath{_0}. We now start climbing up
the ladder.
\begin{theorem}\label{ACApr}
Over \system{RCA}\ensuremath{_0}, $\forall n\, {\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\iexpop n{\ensuremath{\mathcal{X}}})$ is equivalent to
\system{ACA}\ensuremath{'_0}.
\end{theorem}
\begin{proof}
We showed, in Corollary \ref {cor forward ACApr}, that
\system{ACA}\ensuremath{'_0}$\vdash\forall n\, {\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\iexpop n{\ensuremath{\mathcal{X}}})$.
Suppose now that $\forall n\, {\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\iexpop n{\ensuremath{\mathcal{X}}})$ holds.
Consider $Z\in\ensuremath{\N^\N}$ and $n\in\ensuremath{\mathbb N}$; we want to show that $\mathcal{J}^n(Z)$
exists. By Girard's theorem we can assume \system{ACA}\ensuremath{_0}. Let ${\ensuremath{\mathcal{X}}}_Z^n = \langle T_n,
{\ensuremath{\leq_{\mathrm{KB}}}} \rangle$, where $T_n = \mathcal{JT}^n(T_Z)$ as in the proof of Theorem
\ref{0^n}. The proof that there is a $Z$-computable descending sequence
in $\iexpop n{{\ensuremath{\mathcal{X}}}_Z^n}$ is finitary and goes through in \system{RCA}\ensuremath{_0}. So, by
${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\iexpop n{\ensuremath{\mathcal{X}}})$ we get a descending sequence in ${\ensuremath{\mathcal{X}}}_Z^n$.
By Lemma \ref {lemma:KB}, using \system{ACA}\ensuremath{_0}, we get $Y_n \in [T_n]$. For each
$i\leq n$, let $Y_i=\mathcal{K}^{n-i}(Y_n)$. Lemma \ref {lemma:JT paths} shows
that for each $i$, $Y_i\in [T_i]$ and $Y_i=\mathcal{J}(Y_{i-1})$. Since $Z$ is
the only path through $T_Z$, we get that $Y_0=Z$, and so $Y_n=\mathcal{J}^n(Z)$.
\end{proof}
\begin{theorem}\label{rev:ACApl}
Over \system{RCA}\ensuremath{_0}, ${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}})$ is equivalent to \system{ACA}\ensuremath{^+_0}.
\end{theorem}
\begin{proof}
We already showed that \system{ACA}\ensuremath{^+_0}\ proves ${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}})$ in
Corollary \ref{cor forward ACApl}.
Assume \system{RCA}\ensuremath{_0}+${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}})$. Let $Z\in \ensuremath{\N^\N}$; we want to show
that there exists $Y$ with $Y=\mathcal{J}^{\omega}(Z)$. Build ${\ensuremath{\mathcal{X}}} = \langle \mathcal{JT}^\omega
(T_Z), {\ensuremath{\leq_{\mathrm{KB}}}} \rangle$ as in Theorem \ref {omega}. The proof that
$\boldsymbol\varepsilon_{{\ensuremath{\mathcal{X}}}}$ has a $Z$-computable descending sequence is completely
finitary and can be carried out in \system{RCA}\ensuremath{_0}. By ${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto\boldsymbol\varepsilon_{\ensuremath{\mathcal{X}}})$,
we get that ${\ensuremath{\mathcal{X}}}$ has a descending sequence. Since we have \system{ACA}\ensuremath{_0}\ we can
use this descending sequence to get a path $Y$ through $\mathcal{JT}^\omega (T_Z)$.
Now, the proof of Lemma \ref {lemma:JTom paths} translates into a proof
in \system{RCA}\ensuremath{_0}\ that $Y$ is $\mathcal{J}^\omega$ of some path through $T_Z$. Since $Z$ is
the only path through $T_Z$, we get $Y=\mathcal{J}^\omega(Z)$ as wanted.
\end{proof}
\section{General Case}\label{sect:General case}
In this section we define the $\omega^\alpha$-Jump operator, the $\omega^\alpha$-Jump
function, and the $\omega^\alpha$-Jump Tree, for all computable ordinals $\alpha$.
The constructions of Sections \ref{sect:exp} and \ref{sect:eps}, where
we considered $\alpha=0$ and $\alpha=1$ respectively, are thus the simplest
cases of what we will be doing here.
The whole construction is by transfinite recursion, and the base case
was covered in Section \ref{sect:exp}. If $\alpha>0$ is a computable
ordinal, we assume that we have a fixed non-decreasing computable
sequence of ordinals $\set{\alpha_i}{i\in \ensuremath{\mathbb N}}$ such that
$\alpha=\sup_{i\in\ensuremath{\mathbb N}}(\alpha_i+1)$. (So, if $\alpha=\gamma+1$, we can take $\alpha_i=\gamma$
for all $i$.) Notice that we have $\sum_{i\in\ensuremath{\mathbb N}}\omega^{\alpha_i}=\omega^\alpha$. In
defining the $\omega^\alpha$-Jump operator, the $\omega^\alpha$-Jump function, and
the $\omega^\alpha$-Jump Tree we make use of the $\omega^{\alpha_i}$-Jump operator, the
$\omega^{\alpha_i}$-Jump function, and the $\omega^{\alpha_i}$-Jump Tree for each
$i$.
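For instance (an illustration only, not used in the construction), for $\alpha=1$ and $\alpha=\omega$ the canonical sequences can be taken to be:

```latex
\[
\alpha=1:\ \alpha_i=0,\quad \sum_{i\in\mathbb{N}}\omega^{0}=\omega=\omega^{1};
\qquad
\alpha=\omega:\ \alpha_i=i,\quad \sum_{i\in\mathbb{N}}\omega^{i}=\omega^{\omega}.
\]
```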
\subsection{The iteration of the jump}\label{ssect:iterating jump}
Our presentation here is different from the one of previous sections,
where we defined the operator first. Here we start from the
$\omega^\alpha$-Jump function, prove its basic properties, then use it to
define the $\omega^\alpha$-Jump Tree, and eventually introduce the
$\omega^\alpha$-Jump operator.
Let $\alpha>0$ be a computable ordinal and $\set{\alpha_i}{i\in \ensuremath{\mathbb N}}$ be its
canonical sequence as described above. To simplify the notation in the
definition of the $\omega^\alpha$-Jump function, assume we already defined
$\Jom{\alpha_i}$ and $\Kom{\alpha_i}$ for all $i$, and let $\Jom\alpha_n\colon {\ensuremath{\N^{<\N}}}
\to {\ensuremath{\N^{<\N}}}$ and $\Kom\alpha_n\colon {\ensuremath{\N^{<\N}}} \to {\ensuremath{\N^{<\N}}}$ be defined recursively by
\begin{alignat*}{2}
\Jom\alpha_0 & = id; \qquad & \Jom\alpha_{n+1} & = \Jom{\alpha_n} \circ \Jom\alpha_n;\\
\Kom\alpha_0 & = id; \qquad & \Kom\alpha_{n+1} & = \Kom\alpha_n \circ \Kom{\alpha_n}.
\end{alignat*}
In other words:
\begin{align*}
\Jom\alpha_n &= \Jom{\alpha_{n-1}}\circ \Jom{\alpha_{n-2}} \circ \cdots \circ \Jom{\alpha_0}, \\
\Kom\alpha_n &= \Kom{\alpha_0}\circ \Kom{\alpha_{1}} \circ \cdots \circ \Kom{\alpha_{n-1}}.
\end{align*}
\begin{definition}
The \emph{$\omega^\alpha$-Jump function} is the map $\Jom\alpha \colon {\ensuremath{\N^{<\N}}} \to
{\ensuremath{\N^{<\N}}}$ defined by
\[
\Jom\alpha(\sigma) = \langle \Jom\alpha_1(\sigma)(0), \Jom\alpha_2(\sigma)(0), \dots,\Jom\alpha_{n-1}(\sigma)(0)\rangle,
\]
where $n$ is least such that $\Jom\alpha_n(\sigma)=\emptyset$. In this case, since
$\Jom\alpha_n(\sigma)=\Jom{\alpha_{n-1}}(\Jom\alpha_{n-1}(\sigma))$, by (\ref{Pa1}) below
applied to $\alpha_{n-1}$, we have $|\Jom\alpha_{n-1}(\sigma)| =1$.
Given $\tau\in \Jom\alpha({\ensuremath{\N^{<\N}}})$, let
\[
\Kom\alpha(\tau) = \Kom\alpha_{|\tau|}(\ell(\tau)).
\]
In particular $\Kom\alpha(\emptyset)=\emptyset$, since $\Kom\alpha_0$ is the identity
function.
\end{definition}
Since for $\alpha=1$ we have $\alpha_i=0$ for every $i$, the definitions we
just gave match exactly Definitions \ref{def:Jom si} and \ref{def:Kom
si}, where we introduced $J^\omega$ and $K^\omega$. We will not mention this
explicitly again, but the reader should keep in mind that the case
$\alpha=1$ of Section \ref{sect:eps} is the blueprint for the work of this
section.
Notice that, by transfinite induction, $\Jom\alpha$ and $\Kom\alpha$ are
computable.
The following properties generalize those of Lemmas \ref{lemma:J K} and
\ref{lemma:Jom Kom}. We will refer to them, as usual, as (\ref{Pa1}),
\dots, (\ref{Pa7}).
\begin{lemma} \label{lemma:J om al K}
For $\sigma, \tau'\in {\ensuremath{\N^{<\N}}}$, $\tau\in \Jom\alpha({\ensuremath{\N^{<\N}}})$,
\begin{enumerate}\renewcommand{\theenumi}{P$^{\omega^\alpha}\!$\arabic{enumi}}
\item $\Jom\alpha(\sigma)=\emptyset$ if and only if $|\sigma|\leq 1$. \label{Pa1}
\item $\Kom\alpha(\Jom\alpha(\sigma))= \sigma$ for $|\sigma|\geq 2$. \label{Pa2}
\item $\Jom\alpha(\Kom\alpha(\tau))=\tau$. \label{Pa3}
\item If $\sigma\neq\sigma'$ and at least one has length $\geq 2$, then
$\Jom\alpha(\sigma)\neq \Jom\alpha(\sigma')$. \label{Pa4}
\item $|\Jom\alpha(\sigma)|<|\sigma|$ and $|\Kom\alpha(\tau)|>|\tau|$ except when
$\tau=\emptyset$. \label{Pa5}
\item If $\tau'\subset\tau$ then $\tau'\in \Jom\alpha({\ensuremath{\N^{<\N}}})$ and
$\Kom\alpha(\tau')\subset \Kom\alpha(\tau)$. \label{Pa6}
\item If $\Jom\alpha(\sigma')\subseteq \Jom\alpha(\sigma)$ and $\alpha>0$ then for
every $m$, $\Jom\alpha_m(\sigma')\subseteq \Jom\alpha_m(\sigma)$. \label{Pa7}
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is by transfinite induction on $\alpha$. The case $\alpha=0$ is Lemma
\ref{lemma:J K}.
Since $\Jom\alpha(\sigma)=\emptyset$ if and only if $\Jom{\alpha_0}(\sigma)=\emptyset$,
(\ref{Pa1}) follows from the same property for $\alpha_0$.
To prove (\ref{Pa2}) let $|\Jom\alpha(\sigma)|=n-1>0$. Then $\ell(\Jom\alpha(\sigma))
= \langle \Jom\alpha_{n-1}(\sigma)(0) \rangle = \Jom\alpha_{n-1}(\sigma)$ because
$|\Jom\alpha_{n-1}(\sigma)|=1$ as noticed above. Since $\Kom\alpha(\Jom\alpha(\sigma)) =
\Kom\alpha_{n-1} (\Jom\alpha_{n-1}(\sigma))$, $\Kom\alpha(\Jom\alpha(\sigma)) = \sigma$ follows
from (\ref{Pa2}) for $\alpha_{n-2}, \alpha_{n-3}, \dots, \alpha_0$.
As in the proof of the case $\alpha=1$ in Lemma \ref{lemma:Jom Kom},
(\ref{Pa3}), (\ref{Pa4}) and (\ref{Pa5}) follow from the properties we
already proved.
The proof of (\ref{Pa6}) is also basically the same as the proof of
(\ref{Pom6}). We recommend that the reader keep Figure \ref{fig tau =
Jomalpha si} in mind while reading the proof. The nontrivial case is
when $\tau' \neq \emptyset$. Let $\sigma$ be such that $\tau=\Jom\alpha(\sigma)$. The
idea is to define $\sigma'\subset \sigma$ as in the picture and then show
that $\tau'=\Jom\alpha(\sigma')$. Notice that $|\sigma|>|\tau|\geq2$ by
(\ref{Pa5}), and that $\sigma=\Kom\alpha(\tau)$ by (\ref{Pa2}). Notice also
that $\ell(\tau') = \langle \tau(|\tau'|-1)\rangle = \langle
\Jom\alpha_{|\tau'|}(\sigma)(0) \rangle \subset \Jom\alpha_{|\tau'|}(\sigma)$, where the
strict inclusion is because $|\tau'|<|\tau|$ and hence
$|\Jom\alpha_{|\tau'|}(\sigma)|>1$. By induction on $i \leq |\tau'|$ we can
show, using (\ref{Pa6}) and (\ref{Pa2}) for $\alpha_{|\tau'|-1}, \dots,
\alpha_1, \alpha_0$, that
\begin{align*}
(\Kom{\alpha_{|\tau'|-i}} \circ \dots \circ \Kom{\alpha_{|\tau'|-1}})(\ell(\tau'))
& \subset (\Kom{\alpha_{|\tau'|-i}} \circ \dots \circ \Kom{\alpha_{|\tau'|-1}})(\Jom\alpha_{|\tau'|}(\sigma))\\
& = \Jom\alpha_{|\tau'|-i}(\sigma)
\end{align*}
and $(\Kom{\alpha_{|\tau'|-i}} \circ \dots \circ \Kom{\alpha_{|\tau'|-1}})
(\ell(\tau')) \in \Jom\alpha_{|\tau'|-i}({\ensuremath{\N^{<\N}}})$. In particular, when
$i=|\tau'|$, if we set $\sigma' =\Kom\alpha_{|\tau'|}(\ell(\tau'))$, we obtain
$\sigma' \subset \sigma$. Furthermore, by (\ref{Pa2}) applied to
$\alpha_{0},\dots,\alpha_{|\tau'|-i-1}$, we also get
\begin{equation}\label{bb}
\Jom\alpha_{|\tau'|-i}(\sigma') = (\Kom{\alpha_{|\tau'|-i}} \circ \dots \circ \Kom{\alpha_{|\tau'|-1}})(\ell(\tau')) \subset \Jom\alpha_{|\tau'|-i}(\sigma).
\end{equation}
Therefore, for every $j<|\tau'|$
\[
\Jom\alpha(\sigma')(j)=\Jom\alpha_{j+1}(\sigma')(0)=\Jom\alpha_{j+1}(\sigma)(0)=\tau(j)=\tau'(j).
\]
Since $\Jom\alpha_{|\tau'|}(\sigma') = \ell(\tau')$ which has length 1, we
get that $\Jom\alpha(\sigma')$ has length $|\tau'|$ as wanted.
For (\ref{Pa7}) let $\tau'=\Jom\alpha(\sigma')$. Then, if $i=|\tau'|-m$,
equation (\ref{bb}) shows that $\Jom\alpha_m(\sigma')\subseteq \Jom\alpha_m(\sigma)$.
\end{proof}
\begin{figure}
{\Small
\[
\xymatrix@=10pt{
\tau \ar@{}|{=}[r] \ar@{}|(.4){\shortparallel}[dd]& \JJom\alpha(\sigma) \ar@{}|(.4){\shortparallel}[dd] \\
\ar@{}|{\frown}[d] & \ar@{}|{\frown}[d] &\ar@{-}[dddddd] && \emptyset \\
\tau(|\tau|-1)\ar@{}|{=}[r] &\Jom\alpha_{|\tau|}(\sigma)(0)& &\langle \tau(|\tau|-1)\rangle\ar@{}|(.6){=}[r]& \Jom\alpha_{|\tau|}(\sigma) \ar_ {\Jom{\alpha_{|\tau|}}}[u] \\
\vdots & \vdots &&&& \ddots \ar_ {\Jom{\alpha_{|\tau|-1}}}[ul] \\
\tau(|\tau'|-1)\ar@{}|{=}[r] & \Jom\alpha_{|\tau'|}(\sigma)(0) && \langle \tau(|\tau'|-1)\rangle\ar@{}|(.6){=}[r] &\Jom\alpha_{|\tau'|}(\sigma') \ar@{}|(.8){\subset}[r] \ar_ {\Kom{\alpha_{|\tau'|-1}}}[dr] & \cdots \ar@{}|(.3){\subset}[r] & \Jom\alpha_{|\tau'|}(\sigma) \ar_ {\Jom{\alpha_{|\tau'|}}}[ul] \\
\vdots& \vdots &&&&\ddots \ar_ {\Kom{\alpha_1}}[dr]& &\ddots \ar_{\Jom{\alpha_{|\tau'|-1}}}[ul]\\
\tau(0)\ar@{}|{=}[r]& \Jom\alpha_1(\sigma)(0) &&&&& \Jom\alpha_1(\sigma') \ar_ {\Kom{\alpha_0}}[dr] \ar@{}|(.6){\subset}[r] & \cdots \ar@{}|(.3){\subset}[r] & \Jom\alpha_1(\sigma) \ar_{\Jom{\alpha_1}}[ul]\\
\ar@{}|{\smile}[u] & \ar@{}|{\smile}[u]&&&&& &\sigma' \ar@{}|(.6){\subset}[r] & \cdots \ar@{}|(.3){\subset}[r] & \sigma \ar_{\Jom{\alpha_0}}[ul]\\
}
\]}
\caption{Assuming $\tau=\Jom\alpha(\sigma)$ and $\tau'\subset\tau$.}\label{fig tau = Jomalpha si}
\end{figure}
We can now introduce the $\omega^\alpha$-Jump Tree and prove its
computability.
\begin{definition}
Given a tree $T \subseteq {\ensuremath{\N^{<\N}}}$ the \emph{$\omega^\alpha$-Jump Tree of $T$} is
\[
\JTom{\alpha}(T) = \set{\Jom\alpha(\sigma)}{\sigma\in T}.
\]
\end{definition}
\begin{lemma}\label{lemma:JTomegacomp al}
For every tree $T$, $\JTom{\alpha}(T)$ is a tree computable in $T$.
\end{lemma}
\begin{proof}
The proof is again the same as the one of Lemma \ref{lemma:JTcomp},
using Lemma \ref{lemma:J om al K} in place of Lemma \ref{lemma:J K}.
\end{proof}
We now define the $\omega^\alpha$-Jump operator $\JJom\alpha\colon\ensuremath{\N^\N}\to\ensuremath{\N^\N}$ by
transfinite induction: the base case is the Jump operator $\mathcal{J}$
(Definition \ref{def:J}). Given $\alpha$ we assume that $\JJom{\alpha_n}$ has
been defined for all $n$. To simplify the notation let us define
$\JJom{\alpha}_n$ recursively by $\JJom\alpha_0 = id$, $\JJom\alpha_{n+1} =
\JJom{\alpha_n} \circ \JJom\alpha_n$, so that
\[
\JJom\alpha_n = \JJom{\alpha_{n-1}} \circ \JJom{\alpha_{n-2}} \circ \cdots \circ \JJom{\alpha_0}.
\]
\begin{definition}\label{def: Ja Z}
Given the computable ordinal $\alpha$ we define the \emph{$\omega^\alpha$-Jump
operator} $\JJom\alpha\colon \ensuremath{\N^\N}\to\ensuremath{\N^\N}$ and its inverse $\KKom\alpha$ by
\[
\JJom\alpha(Z)(n) = \JJom{\alpha}_{n+1}(Z)(0) \quad \text{ and } \quad
\KKom\alpha(Y) = \bigcup_n \Kom\alpha(Y\!\restriction\! n).
\]
\end{definition}
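As a sanity check (purely illustrative), when $\alpha=1$ every $\alpha_n$ equals $0$, so each $\JJom{\alpha_n}$ is the plain Jump operator $\mathcal{J}$ and the finite iterates collapse to powers of $\mathcal{J}$:

```latex
\[
\JJom{1}_n = \underbrace{\mathcal{J}\circ\cdots\circ\mathcal{J}}_{n} = \mathcal{J}^n,
\qquad
\JJom{1}(Z)(n) = \JJom{1}_{n+1}(Z)(0) = \mathcal{J}^{n+1}(Z)(0),
\]
```

which matches the $\omega$-Jump operator $\mathcal{J}^\omega$ of Section \ref{sect:eps}.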
We first show that $\KKom\alpha$ is indeed the inverse of $\JJom\alpha$.
\begin{lemma}\label{lemma: Y equal Koma Z}
If $Y=\JJom\alpha(Z)$ then $Z=\KKom\alpha(Y)$.
\end{lemma}
\begin{proof}
The proof of the lemma is by transfinite induction. Let
$\set{\alpha_i}{i\in \ensuremath{\mathbb N}}$ be the fixed canonical sequence for $\alpha$. Recall
from the definition of $\Kom\alpha$ that $\Kom\alpha(Y\!\restriction\! n) = \Kom\alpha_n(\langle
Y(n-1)\rangle)$. Since $\langle Y(n-1)\rangle=\langle \JJom{\alpha}_{n}(Z)(0)\rangle \subseteq
\JJom{\alpha}_{n}(Z)$, by the induction hypothesis applied to
$\alpha_{n-1},\dots,\alpha_0$, we get that $\Kom\alpha_n(\langle Y(n-1)\rangle) \subseteq
Z$. So $Z \supseteq \KKom\alpha(Y)$. By (\ref{Pa5}) applied to
$\alpha_0,\dots,\alpha_{n-1}$ we get that $|\Kom\alpha_n(\langle Y(n-1)\rangle)|> n+1$ and
hence $Z= \KKom\alpha(Y)$.
\end{proof}
\begin{lemma}\label{lemma: Ja Z vs Z to the om alpha}
For every $Z\in \ensuremath{\N^\N}$ and computable ordinal $\alpha$, $\JJom\alpha(Z)\equiv_T Z^{(\omega^\alpha)}$.
\end{lemma}
\begin{proof}
This is again proved by transfinite induction. Assuming that
$\JJom{\alpha_i}(Z)\equiv_T Z^{(\omega^{\alpha_i})}$ for every $i$, and uniformly in
$i$, we immediately obtain $\JJom\alpha_n(Z) \equiv_T Z^{(\beta_n)}$, where $\beta_n
= \sum_{i=0}^{n-1} \omega^{\alpha_i}$, for every $n$. Since $\beta_n<\omega^\alpha$,
$\JJom\alpha(Z)\leq_T Z^{(\omega^\alpha)}$ is immediate.
For the other reduction we need to uniformly compute $\JJom\alpha_n(Z)$
from $\JJom\alpha(Z)$. In the same way as we compute $Z$ from $\JJom\alpha(Z)$
by applying $\KKom\alpha$, we can compute $\JJom\alpha_n(Z)$ by forgetting about
$\alpha_0,\dots,\alpha_{n-1}$. In other words, by the same proof as Lemma \ref
{lemma: Y equal Koma Z} we can show that for every $m$
\[
\JJom\alpha_m(Z)= \bigcup_{n> m} \Kom{\alpha_{m}}(\Kom{\alpha_{m+1}}(\dots(\Kom{\alpha_{n-1}}(\langle Y(n-1)\rangle))\dots))
\]
using $\Kom{\alpha_{m}} \circ \Kom{\alpha_{m+1}} \circ\cdots\circ \Kom{\alpha_{n-1}}$ instead of $\Kom\alpha_n$.
\end{proof}
We can now prove that $\Jom\alpha$ approximates $\JJom\alpha$, extending Lemma
\ref{lemma:Jom approx Jom}.
\begin{lemma}\label{lemma:Jomal approx Jomal}
Given $Y,Z\in \ensuremath{\N^\N}$, the following are equivalent:
\begin{enumerate}
\item $Y = \JJom\alpha(Z)$;
\item for every $n$ there exists $\sigma_n\subset Z$ with $|\sigma_n|>n$ such
that $Y\!\restriction\! n =\Jom\alpha(\sigma_n)$.
\end{enumerate}\end{lemma}
\begin{proof}
We first prove (1) $\implies$ (2). When $n=0$ let $\sigma_0=Z\!\restriction\! 1$, which
works by (\ref{Pa1}).
For $n>0$, let $\sigma_n=\Kom\alpha(Y \!\restriction\! n)$.
Then $\sigma_n \subseteq \KKom\alpha(Y)= Z$, and $Y\!\restriction\! n =\Jom\alpha(\sigma_n)$.
We get that $|\sigma_n|>n$ by applying (\ref{Pa5}) $n$ times to $\sigma_n=\Kom\alpha_n(\langle Y(n-1)\rangle)$.
The proof of (2) $\implies$ (1) is similar to the proof of Lemma
\ref{lemma:Jom approx Jom} but uses transfinite induction. By
(\ref{Pa7}), for all $m$ and $n$, $\Jom\alpha_m(\sigma_n)\subseteq
\Jom\alpha_m(\sigma_{n+1})$, and hence we can consider $\bigcup_n
\Jom\alpha_m(\sigma_n) \in \ensuremath{\N^\N}$. Then, by the induction hypothesis,
$\bigcup_n \Jom\alpha_m(\sigma_n) = \JJom\alpha_m(Z)$, and hence
$\Jom\alpha_m(\sigma_n)(0) =\JJom\alpha_m(Z)(0)$ for all $m<n$. It follows that
for every $m$ and $n>m$
\[
\JJom\alpha(Z)(m)=\JJom\alpha_{m+1}(Z)(0)=\Jom\alpha_{m+1}(\sigma_{n})(0) = \Jom\alpha(\sigma_{n})(m)= Y(m).\qedhere
\]
\end{proof}
We are now able to show the intended connection between the
$\omega^\alpha$-Jump Tree and the $\omega^\alpha$-Jump operator.
\begin{lemma}\label{lemma:JTomal paths}
For every tree $T$, $[\JTom\alpha(T)] = \set{\JJom\alpha(Z)}{Z \in [T]}$.
\end{lemma}
\begin{proof}
To prove $\set{\JJom\alpha(Z)}{Z \in [T]} \subseteq [\JTom\alpha(T)]$ we can
argue as in the proof of Lemma \ref{lemma:JT paths}, using Lemma
\ref{lemma:Jomal approx Jomal} in place of Lemma \ref{lemma:J approx
J}.
To prove the other inclusion, fix $Y \in [\JTom\alpha(T)]$. Arguing as in
the proof of Lemma \ref{lemma:JTom paths}, we first let
$\sigma_n=\Kom\alpha(Y\!\restriction\! n)\in T$. Let $Z=\KKom\alpha(Y)=\bigcup_{n\in \ensuremath{\mathbb N}}\sigma_n
\in [T]$. We get that $Y=\JJom\alpha(Z)$ from Lemma \ref{lemma:Jomal approx
Jomal}.
\end{proof}
\subsection{Jumps versus Veblen}\label{ssect:JvsV}
First, we need to iterate the Jump Tree operator along a finite string.
\begin{definition}
If $T$ is a tree and $\tau \in \JTom{\alpha}(T)$ we define
\[
\JTom{\alpha}_\tau(T) = \set{\Jom\alpha_{|\tau|+1}(\sigma)}
{\sigma\in T \land \tau\subseteq\Jom\alpha(\sigma)}.
\]
\end{definition}
\begin{lemma}\label{lemma: JT tau gen recursive}
For $\tau\in \JTom{\alpha}(T)$,
\begin{align*}
\JTom{\alpha}_\emptyset(T) &= \JTom{\alpha_0}(T) \\
\JTom{\alpha}_{\tau^\smallfrown\langle c\rangle} (T) &= \JTom{\alpha_{|\tau|+1}}(\JTom{\alpha}_\tau(T)_{\langle c\rangle}).
\end{align*}
($T_{\langle c\rangle}$ was defined in Definition \ref{def:Tsigma}.)
\end{lemma}
\begin{proof}
Straightforward induction on $|\tau|$.
\end{proof}
\begin{lemma}
Given a tree $T \subseteq {\ensuremath{\N^{<\N}}}$, $\tau\in {\ensuremath{\N^{<\N}}}$, and $c\in\ensuremath{\mathbb N}$
\[
\tau^\smallfrown \langle c\rangle \in \JTom{\alpha}(T) \iff \langle c \rangle \in \JTom{\alpha}_{\tau}(T).
\]
\end{lemma}
\begin{proof}
Follows from the definitions of $\JTom{\alpha}(T)$ and $\JTom{\alpha}_{\tau}(T)$.
\end{proof}
We now generalize the construction of Definition \ref{def:fgtau}, by
defining an operator that converts a function with domain
$\JTom{\alpha}(T)$ and values in ${\ensuremath{\mathcal{X}}}$ into a function with domain $T$ and
values in ${\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})$. We will show in Lemma \ref{lemma:halphag
monotone} that this operator preserves monotonicity.
\begin{definition}\label{def:halphag}
By transfinite recursion, we build, for each computable ordinal $\alpha$,
an operator $\hom{\alpha}$ such that given a linear ordering ${\ensuremath{\mathcal{X}}}$ and a
function
\[
g\colon \JTom{\alpha}(T) \to {\ensuremath{\mathcal{X}}},
\]
it returns
\[
\hom{\alpha}_g\colon T\to {\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}).
\]
For $\alpha=0$, we let $\hom{\alpha} = h$ of Definition \ref{def:hgJ}. For
$\alpha>0$ we first define simultaneously for each $\tau \in \JTom{\alpha}(T)$
a function
\[
f_\tau \colon \JTom{\alpha}_\tau (T) \to {\boldsymbol \varphi}(\alpha, {\ensuremath{\mathcal{X}}})
\]
by recursion on $|\sigma|$:
\[
f_\tau (\sigma) =
\begin{cases}
\varphi_{\alpha, g(\tau)} & \text{if $\sigma = \emptyset$;}\\
\\
\hom{\alpha_{n}}_{f_{\tau'}} (\sigma) & \text{if $\sigma \neq \emptyset$, where $\tau'= \tau ^\smallfrown \langle \sigma(0) \rangle$ and $n=|\tau'|$.}
\end{cases}
\]
We then define
\[
\hom{\alpha}_g = \hom{\alpha_0}_{f_\emptyset}\colon T \to {\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}).
\]
\end{definition}
\begin{lemma}\label{lemma:halphag monotone}
If $g\colon \JTom{\alpha} (T) \to {\ensuremath{\mathcal{X}}}$ is total and $(\supset,
<_{\ensuremath{\mathcal{X}}})$-monotone, then $\hom{\alpha}_g\colon T\to {\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})$ is also
total and $(\supset, <_{{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})})$-monotone. Moreover,
$\hom{\alpha}_g$ is computable in $g$.
\end{lemma}
\begin{proof}
We say that a partial function $e$ on a tree $T$ is {\em
$(n,T,{\ensuremath{\mathcal{X}}})$-good} if $e$ is defined on all strings of length less than
or equal to $n$, it takes values in ${\ensuremath{\mathcal{X}}}$, and is
$(\supset,<_{\ensuremath{\mathcal{X}}})$-monotone on strings of length less than or equal to
$n$.
By transfinite induction on $\alpha$ we will show that for every $n\in\ensuremath{\mathbb N}$
and every $(n,\JTom{\alpha}(T),{\ensuremath{\mathcal{X}}})$-good partial function $g$, we have
that $\hom{\alpha}_g$ is $(n+1,T,{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$-good.
For $\alpha=0$, this follows from the proof of Lemma \ref{lemma:monotone}:
recall that $h_g(\sigma)$ is a finite sum of terms of the form
$\omega^{g(J(\sigma)\!\restriction\! i)}$, and $|J(\sigma)|<|\sigma|$ by (\ref{P5}). Thus to
compute and compare $h_g$ on strings of length $\leq n+1$, we need only
$g$ to be defined and $(\supset,<_{\ensuremath{\mathcal{X}}})$-monotone on strings of length
$\leq n$.
Now fix $\alpha>0$ and suppose that $g$ is $(n,\JTom{\alpha}(T),{\ensuremath{\mathcal{X}}})$-good.
Since $\hom{\alpha}_g = \hom{\alpha_0}_{f_\emptyset}$, by the induction hypothesis it
is enough to show that $f_\emptyset$ is $(n,\JTom{\alpha}_\emptyset(T),
{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$-good. Notice that if $f_\emptyset$ takes values in
${\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})$, then $\hom{\alpha}_g$ takes values in
${\boldsymbol \varphi}(\alpha_0,{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})) = {\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})$ (by Definition
\ref{def:phiop}). We will prove by induction on $m\leq n$ that for
every $\tau\in \JTom\alpha(T)$ of length $n-m$, $f_\tau$ is
$(m,\JTom{\alpha}_\tau(T), {\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$-good. When $m=0$, all we need to
observe is that $f_\tau(\emptyset)=\varphi_{\alpha,g(\tau)}\in {\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})$,
and $g(\tau)$ is defined because $|\tau|=n$. Consider now $\tau\in
\JTom\alpha(T)$ of length $n-(m+1)$. If $\sigma=\emptyset$, then $f_\tau(\emptyset)$ is
correctly defined as in the case $m=0$. For $\sigma \in \JTom\alpha_\tau(T)$
with $0<|\sigma| \leq m+1$, let $\tau'=\tau^\smallfrown\langle\sigma(0)\rangle$. We first
need to check that $f_\tau(\sigma)= \hom{\alpha_{n-m}}_{f_{\tau'}}(\sigma)$ is
defined. By the subsidiary induction hypothesis $f_{\tau'}$ is
$(m,\JTom\alpha_{\tau'}(T),{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$-good. By Lemma \ref{lemma: JT
tau gen recursive}, $\JTom\alpha_{\tau'}(T) = \JTom{\alpha_{n-m}}
(\JTom\alpha_\tau(T)_{\langle\sigma(0)\rangle})$. By the transfinite induction
hypothesis (since $\alpha_{n-m}<\alpha$) $\hom{\alpha_{n-m}}_{f_{\tau'}}$ is
$(m+1,\JTom\alpha_\tau(T)_{\langle\sigma(0)\rangle},{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$-good. Therefore
$f_\tau(\sigma)$ is defined.
Now we need to show $f_\tau$ is
$(\supset,<_{{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})})$-monotone on strings of length less than
or equal to $m+1$. Take $\sigma'\subset \sigma \in \JTom{\alpha}_{\tau}(T)$
with $|\sigma|\leq m+1$. Again let $\tau'=\tau^\smallfrown\langle\sigma(0)\rangle$. By the
transfinite induction hypothesis, we know that
$\hom{\alpha_{n-m}}_{f_{\tau'}}$ is $(m+1,\JTom{\alpha}_{\tau}(T),
{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$-good. Furthermore $f_{\tau'}$ is $(\supset,
<_{{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})})$-monotone and takes values in ${\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}) \!\restriction\!
(\varphi_{\alpha,g(\tau')}+1)$, because $f_{\tau'}(\emptyset)=
\varphi_{\alpha,g(\tau')}$. Therefore $\hom{\alpha_{n-m}}_{f_{\tau'}}$ takes
values below $\varphi_{\alpha_{n-m}}(\varphi_{\alpha,g(\tau')}+1)$. When
$\sigma'=\emptyset$, $f_\tau(\sigma')=\varphi_{\alpha,g(\tau)}>
\varphi_{\alpha_{n-m}}(\varphi_{\alpha,g(\tau')}+1)> f_\tau(\sigma)$. When
$\sigma'\neq\emptyset$, we use the monotonicity of
$\hom{\alpha_{n-m}}_{f_{\tau'}}$.
\end{proof}
\begin{theorem}\label{alpha}
For every computable ordinal $\alpha$ and $Z \in \ensuremath{\N^\N}$, there exists a
$Z$-computable linear ordering ${\ensuremath{\mathcal{X}}}$ such that the jump of every
descending sequence in ${\ensuremath{\mathcal{X}}}$ computes $Z^{(\omega^\alpha)}$, but there is a
$Z$-computable descending sequence in ${\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}})$.
\end{theorem}
\begin{proof}
Let ${\ensuremath{\mathcal{X}}} = \langle \JTom{\alpha} (T_Z), {\ensuremath{\leq_{\mathrm{KB}}}} \rangle$ where $T_Z$ is the tree
$\set{Z\!\restriction\! n}{n\in \ensuremath{\mathbb N}}$. By Lemma \ref{lemma:JTomegacomp al}, ${\ensuremath{\mathcal{X}}}$ is
$Z$-computable. By Lemma \ref{lemma:JTomal paths}, $\JJom\alpha(Z)$ is the
unique path in $\JTom{\alpha} (T_Z)$. Therefore, by Lemma \ref{lemma:KB},
the jump of every descending sequence in ${\ensuremath{\mathcal{X}}}$ computes $\JJom\alpha(Z)$ and
hence, by Lemma \ref{lemma: Ja Z vs Z to the om alpha}, computes
$Z^{(\omega^\alpha)}$.
Let $g$ be the identity on ${\ensuremath{\mathcal{X}}}$, which is $(\supset,<_{{\ensuremath{\mathcal{X}}}})$-monotone.
By Lemma \ref{lemma:halphag monotone}, $h^\alpha_g$ is $(\supset,
<_{{\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}})})$-monotone and computable. Thus $\set{h^\alpha_g(Z\!\restriction\!
n)}{n\in\ensuremath{\mathbb N}}$ is a $Z$-computable descending sequence in
${\boldsymbol \varphi}(\alpha,{{\ensuremath{\mathcal{X}}}})$.
\end{proof}
\subsection{Reverse mathematics results}\label{ssect:rmVeblen}
In this section, we work in the weak system \system{RCA}\ensuremath{_0}. Therefore, again, we
do not have an operation that, given $Z\in\ensuremath{\N^\N}$, returns $\JJom\alpha(Z)$,
but the predicate with three variables $Z$, $Y$ and $\alpha$ saying that
$Y=\JJom\alpha(Z)$ is arithmetic as witnessed by Lemma \ref {lemma:Jomal
approx Jomal}. Notice that if condition (2) of Lemma
\ref {lemma:Jomal approx Jomal} holds, then \system{RCA}\ensuremath{_0}\ can recover all the
$\JJom\alpha_m(Z)$ and show that $\JJom\alpha(Z)$ is as defined in Definition
\ref {def: Ja Z}. We can then prove Lemma \ref {lemma: Ja Z vs Z to the
om alpha} in \system{RCA}\ensuremath{_0}: if $Y=\JJom\alpha(Z)$, then $Y$ can compute
$Z^{(\omega^\alpha)}$, and $Z^{(\omega^\alpha)}$ can compute a real $Y$ such that
$Y=\JJom\alpha(Z)$. Therefore, we get that $\Pi^0_{\omega^\alpha}$-\system{CA}\ensuremath{_0}\ is
equivalent to \system{RCA}\ensuremath{_0}+$\forall Z\exists Y(Y=\JJom\alpha(Z))$ and that \system{ATR}\ensuremath{_0}\ is
equivalent to \system{RCA}\ensuremath{_0}+$\forall \alpha \forall Z\exists Y(Y=\JJom\alpha(Z))$.
\begin{theorem}
Let $\alpha$ be a computable ordinal. Over \system{RCA}\ensuremath{_0},
${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$ is equivalent to $\Pi^0_{\omega^\alpha}$-\system{CA}\ensuremath{_0}.
\end{theorem}
\begin{proof}
We already showed that
$\Pi^0_{\omega^\alpha}$-\system{CA}\ensuremath{_0}$\vdash{\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$ in Corollary
\ref {cor forward Pi0alpha}.
The proof of the other direction is just the formalization of Theorem \ref{alpha}, exactly as in the proof of Theorem \ref{rev:ACApl}.
\end{proof}
We now give a new, purely computability-theoretic, proof of Friedman's
theorem.
\begin{theorem}
Over \system{RCA}\ensuremath{_0}, ${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto{\boldsymbol \varphi}({\ensuremath{\mathcal{X}}},0))$ is equivalent to \system{ATR}\ensuremath{_0}.
\end{theorem}
\begin{proof}
We already showed that \system{ATR}\ensuremath{_0}\ proves ${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto{\boldsymbol \varphi}({\ensuremath{\mathcal{X}}},0))$ in
Corollary \ref{cor forward ATR}.
For the reversal, we argue within \system{RCA}\ensuremath{_0}. Let $\alpha$ be any ordinal. Notice
that, relative to the presentation of $\alpha$, all the constructions of
this section can be carried out as if $\alpha$ were a computable ordinal.
Therefore, by the previous theorem it is enough to show that
${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$ holds. Let ${\ensuremath{\mathcal{X}}}$ be a well-ordering. We
now claim that ${\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})$ embeds in ${\boldsymbol \varphi}(\alpha+{\ensuremath{\mathcal{X}}},0)$, which would
imply that ${\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})$ is well-ordered too as needed to show
${\textsf{WOP}}({\ensuremath{\mathcal{X}}}\mapsto{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}}))$.
Define $f\colon {\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})\to{\boldsymbol \varphi}(\alpha+{\ensuremath{\mathcal{X}}},0)$ by induction on the
terms of ${\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})$, setting
\begin{itemize}
\item $f(0)=0$,
\item $f(\varphi_{\alpha,x}) = \varphi_{\alpha+x}(0)$,
\item $f(t_1+t_2)=f(t_1)+f(t_2)$,
\item $f(\varphi_a(t))=\varphi_a(f(t))$.
\end{itemize}
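For instance (an illustrative term only), the clauses give

```latex
\[
f\bigl(\varphi_{\alpha,x}+\varphi_a(\varphi_{\alpha,y})\bigr)
= \varphi_{\alpha+x}(0) + \varphi_a\bigl(\varphi_{\alpha+y}(0)\bigr),
\]
```

applying first the clause for sums and then the clauses for $\varphi_{\alpha,x}$ and $\varphi_a(t)$.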
The proof that $f$ is an embedding is by induction on terms. Consider
$t,s\in {\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})$. We want to show that $t\leq_{{\boldsymbol \varphi}(\alpha,{\ensuremath{\mathcal{X}}})}
s\iff f(t)\leq_{{\boldsymbol \varphi}(\alpha+{\ensuremath{\mathcal{X}}},0)} f(s)$. By induction hypothesis, assume
this is true for pairs of terms shorter than $t+s$. Suppose that $t\leq
s$. Using the induction hypothesis, it is not hard to show that
$f(t)\leq f(s)$. Suppose now that $t\not\leq s$. Then, none of the
conditions of Definition \ref{def:phiop} hold. If $t=0$, $t=t_1+t_2$, or
$t= \varphi_a(t_1)$, then we can apply the induction hypothesis again
and get that none of the conditions of Definition \ref{def:phiop} hold
for $f(t)$ and $f(s)$ either and hence $f(t)\not\leq f(s)$. The case
$t=\varphi_{\alpha,x}$ is the only one that deserves attention. In this
case we have that for no $y\geq x$, $\varphi_{\alpha,y}$ appears in $s$. It
then follows that $f(s)\in {\boldsymbol \varphi}(\alpha+{\ensuremath{\mathcal{X}}}\!\restriction\! x,0)$. Since
$f(t)=\varphi_{\alpha+x}(0)$ is greater than all the elements of
${\boldsymbol \varphi}(\alpha+{\ensuremath{\mathcal{X}}}\!\restriction\! x,0)$ we get $f(t)\not\leq f(s)$ as wanted.
\end{proof}
1107.2451
\section{Introduction} \label{sec:int}
Recently, developments in both hardware and software
have made it possible to simulate complex
physical processes numerically, and such computer simulations
have become increasingly important from an industrial viewpoint.
In particular,
the computation of incompressible multi-phase fluid dynamics
plays a crucial role in evaluating the behavior of several
devices and materials in a micro-region,
{\it {e.g.}}, ink-jet printers, solved toners and so on.
For these practical reasons, it is strongly required that
fluid interfaces with multiple junctions be
computed stably and naturally.
In this article,
in order to handle fluid interfaces with multiple junctions
in a three-dimensional micro-region,
we investigate the surface tension of an incompressible multi-phase
flow with multiple junctions as a numerical computational method,
under the assumption that the Reynolds number is not too large.
In this investigation, we encounter many interesting mathematical
objects and results,
which are associated with
low-dimensional interface geometry having singularities, and
with the infinite-dimensional geometry of incompressible fluid dynamics.
Further, since even in a macroscopic theory
we introduce artificial intermediate regions at the material
interfaces among different fluids or between a solid and fluids,
these regions give a resolution of the singularities in the interfaces
and naturally lead to extended Euler equations.
Thus, even though we consider the multi-phase fluid model as
a computational model,
we believe that it must be connected with the mathematical nature of
real fluid phenomena as a description of them.
We describe the background, the motivation and the strategy of
this study more precisely as follows.
Over the past couple of decades,
computational schemes representing physical processes with
multi-phase fluid interfaces have been studied extensively.
These schemes are mainly classified into two types.
The first type is based on the level-set method \cite{S}
discovered by H-K. Zhao, T. Chan, B. Merriman, S. Osher
and L. Wang \cite{ZCMO,ZMOW}.
The second one
is based on the phase-field theory, which was found by
J. U. Brackbill, D. B. Kothe and C. Zemach \cite{BKZ},
and B. Lafaurie, C. Nardone, R. Scardovelli, S. Zaleski, and G. Zanetti
\cite{LZZ}.
The authors in Reference \cite{LZZ} called the scheme SURFER.
Following them, there are many studies on the SURFER scheme,
{\it{e.g.}}, \cite{AMS,Cab,Jac} and references therein.
The level-set method is a computational method in which we describe a
(hyper-)surface in terms of zeros of the level-set function, {\it{i.e.}}, a
real function whose value is a signed distance from the surface,
such as $q(x)$ in Section \ref{sec:two-one}.
Using the scheme based upon the level-set method
in the three dimensional Euclidean space,
we can deal well with topology changes,
geometrical objects with singularities, {\it{e.g.}}, cusps,
the multiple junctions of materials, and so on.
However, in the computation we need to deal with
constraint conditions even for two-phase fluids
\cite{ZCMO,ZMOW}.
A dynamical problem with constraint conditions
is fundamentally complicated, and it is sometimes difficult
to find its solution, since the constraint conditions may
generate an ill-posed optimization problem.
In the numerical computation of an incompressible fluid,
we must check the consistency between
the incompressibility condition and the constraint conditions.
This check generally requires a complicated implementation
of the algorithm and increases the computational cost.
Its failure sometimes makes the computation unstable, especially
when we add other physical conditions.
Since instability disturbs the evaluation of a complex
system as a model of a real device, it must be avoided.
On the other hand,
using the SURFER scheme \cite{LZZ},
we can easily compute effects of the surface tension of a two-phase fluid
in the Navier-Stokes equation.
The phase-field model represents materials in terms
of the supports of smooth functions, which roughly correspond to
a partition of unity in pure mathematics \cite[I p.272]{KN},
as will be mentioned in Sections \ref{sec:four} and \ref{sec:five}.
We call these functions
{\lq\lq}color functions{\rq\rq} or {\lq\lq}phase fields{\rq\rq}.
The phase fields have artificial intermediate regions which
represent their interfaces approximately.
In the SURFER scheme \cite{LZZ},
the surface tension is given as a kind of stress force, or volume force
due to the intermediate region.
Hence the scheme makes the numerical computations of the surface tension
stable.
However,
it is not known how to treat a multi-phase ($N$-phase, $N\ge2$) flow
in their scheme.
In Reference \cite{BKZ}, the authors propose a method as an
extension of the SURFER scheme \cite{LZZ}
to the contact angle problem
by imposing a constraint to fix its angle.
In this article, we will generalize
the SURFER scheme to multi-phase flow
without any constraints.
Nature imposes no constraints even at such a triple junction,
which is governed by a physical principle. If the system is
Hamiltonian, its determination must obey the minimum
principle, or the variational principle.
We wish to find a theoretical framework
in which we can consistently
handle the incompressible flows with interfaces including
the surface tensions and the multiple junctions without any constraints.
Since multiple junctions must be treated as singularities in
a mathematical framework, and such singularities are very difficult to handle
in general,
it is hard to extend the
mathematical approaches to fluid interface problems without
multiple junctions
\cite{BG,SZ} to a theory of problems with multiple junctions.
The purpose of this article is to find such a theoretical
framework, one that enables us to solve fluid interface problems with
multiple junctions numerically
as an extension of the SURFER scheme.
For the purpose, we employ the phase field model.
The thickness of the actual intermediate region
in the interface between a solid and a fluid or between
two fluids is of atomic order and
is basically negligible in the macroscopic theory.
However, the difference
between zero and {\lq\lq}the limit to zero{\rq\rq}
sometimes brings a crucial difference in physics and
mathematics; for example, in the Sato hyperfunction theory,
the delta function is regarded as a boundary value
of holomorphic functions \cite{KKK,II}, {\it{i.e.}},
$\displaystyle{\delta(x) =
\lim_{\epsilon \to 0}\frac{1}{2\pi \ii}
\left(\frac{1}{x - \ii \epsilon}
-\frac{1}{x + \ii \epsilon}\right)}$
$\displaystyle{
\equiv \lim_{\epsilon \to 0}\frac{1}{\pi }
\frac{\epsilon}{x^2 + \epsilon^2}}$.
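The Lorentzian regularization above is easy to check numerically. The following sketch (our own illustration in Python, not part of the original scheme) verifies that $\epsilon/(\pi(x^2+\epsilon^2))$ has unit mass and acts like $\delta$ on a smooth test function as $\epsilon \to 0$:

```python
import numpy as np

def lorentzian(x, eps):
    """Regularized delta: (1/pi) * eps / (x**2 + eps**2)."""
    return eps / (np.pi * (x**2 + eps**2))

# Riemann sum on a large symmetric interval
x, dx = np.linspace(-50.0, 50.0, 400001, retstep=True)
for eps in (1.0, 0.1, 0.01):
    mass = lorentzian(x, eps).sum() * dx                   # -> 1 as eps -> 0
    moment = (lorentzian(x, eps) * np.cos(x)).sum() * dx   # -> cos(0) = 1
    print(eps, mass, moment)
```

The truncated tails contribute $O(\epsilon)$, so both columns approach $1$ as $\epsilon$ decreases.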
As mentioned above,
the phase field model has the artificial intermediate region which is
controlled by a small parameter $\epsilon$
and appears explicitly even as a macroscopic
theory. We regard it as representing the effects coming from
the actual intermediate region of materials.
Namely, we regard the stress-force expression in
the SURFER scheme
as caused by the artificial intermediate region of the phase fields,
and as representing well the surface effect
coming from that of real materials.
In order to extend the stress force expression of the two-phase
flow to that of the multi-phase ($N$-phase, $N\ge2$) flow,
we will first reformulate
the SURFER scheme in the framework of the variational theory.
In Reference \cite{Jac}, a similar attempt was reported,
but unfortunately no precise derivations were given. Our
investigations in Section \ref{sec:four}
show that the surface tension expression of
the SURFER scheme
is derived as a momentum conservation in
Noether's theorem \cite{BGG,IZ} and its derivation
requires a generalization of the Laplace equation \cite{LL} as
the Euler-Lagrange equation \cite{AM,BGG}, which is not trivial
even for a static case.
In order to deal with this problem in a dynamics case consistently,
we should also consider the Euler equation in the framework
of the variational principle.
It is well-known that the incompressible fluid dynamics is
geometrically interpreted as
a variational problem of an infinite
dimensional Lie group, related to diffeomorphism, due to
V. I. Arnold \cite{Ar,AK}, D. Ebin and J. Marsden \cite{EM}, H. Omori \cite{O}
and so on. Following them, there are so many related works
\cite{AF,B,K,NHK,Schm,Shk,Shn,V}.
In reformulating the SURFER scheme \cite{LZZ} for the dynamical case,
we introduce an action integral including
the kinetic energy of the incompressible fluid and the surface energy.
The variational method reproduces the governing equation of
the SURFER scheme.
We then extend the surface energy to that of multi-phase fields
and add the energy term to the action integral.
The variational principle of the action integral
leads us to a novel expression of the surface tension
and the extended Euler equation which we require.
Using the extended Euler equation,
we can deal with the surface tensions of
multi-phase flows, the multiple junctions
of phase fields including singularities, the topology changes and so on.
We can also naturally compute wall effects and the contact
angle problem.
The computation of the governing equation is free from any
constraints, except the incompressibility condition.
In other words, in this article, we completely unify
the theory of the multi-phase ($N$-phase, $N\ge2$) field
and the theory of the incompressible fluid dynamics of Euler equation
as an infinite dimensional geometrical problem.
The contents are as follows:
Section \ref{sec:two} is devoted to the preliminaries
of the theory of surfaces
in our Euclidean space from a low-dimensional
differential geometrical viewpoint
\cite{M,GMO,FW} and Noether's theorem in the
classical field theory \cite{AM,BGG,IZ}.
Section \ref{sec:three} reviews the derivation of the Euler equation
for incompressible fluid dynamics, following the variational method
for an infinite-dimensional Lie algebra based upon Reference \cite{EM}.
In Section \ref{sec:four}, we reformulate the SURFER scheme \cite{LZZ}.
There the Laplace equation for the surface tension
and the Euler equation in Reference \cite{LZZ}
are naturally obtained by the variational method
in Propositions \ref{prop:4-4} and \ref{prop:4-6}.
Section \ref{sec:five} is our main section in which we extend
the theory in Reference \cite{LZZ} to that for a multi-phase flow
and obtain the Euler equation with the surface tension
of the multi-phase field in Theorem \ref{th:5-2}.
The extended Euler equation for the multi-phase flow
is derived from the variational
principle of the action integral in Theorem \ref{th:5-1}.
As a special case, we also derive the
Euler equation for a two-phase field with wall effects
in Theorem \ref{th:5-3}.
In Section 6,
using these methods of computational fluid dynamics \cite{Ch,H,HN},
we consider numerical computations
of the contact angle problem for a two-phase field,
because the contact angle problem
for a two-phase field circumscribed by a wall
is the simplest non-trivial triple junction problem.
By means of our scheme, for given surface tension coefficients,
we show two examples of the numerical computations in which
the contact angles automatically appeared
without any geometrical constraints
and any difficulties for the singularities at triple junctions.
The computations were very stable.
Precisely speaking, as far as we computed,
the computations did not collapse for any boundary conditions
or initial conditions.
\section{Mathematical Preliminaries} \label{sec:two}
\subsection{Preliminary of surface theory} \label{sec:two-one}
In this subsection, we review the theory of surfaces
from the viewpoint of low-dimensional differential geometry.
Interface problems have also been studied for the last three decades
in pure mathematics,
where they are considered a revision of
classical differential geometry \cite{E} from a modern point of view
\cite{ES,FW,GMO,M,T},
{\it{ e.g.}}, generalizations of the Weierstrass-Enneper
theory of minimal surfaces, isothermal surfaces,
constant curvature surfaces, constant mean curvature surfaces,
Willmore surfaces and so on.
They are also closely connected with
the harmonic map theory
and the theory of the variational principle \cite{FW,GMO}.
We consider a smooth surface $S$ embedded in
three dimensional Euclidean space $\EE^3$.
Let $x = (x^1,x^2,x^3)$ denote a point of $\EE^3$
in the Cartesian coordinate system, and
let the surface $S$ be locally expressed by local parameters $(s^1,s^2)$.
We assume that the surface $S$ is expressed by
zeros of a real valued smooth function $q$ over $\EE^3$, {\it{i.e.}},
$$
q(x)=0,
$$
such that in the region where $|q|$ is sufficiently small
($|q| < \epsT$ for a positive number $\epsT>0$),
$|dq|$ agrees with the infinitesimal length in the Euclidean space.
Then $dq$ means the normal co-vector field (one-form), {\it{ i.e.}},
for the tangent vector field
$e_\alpha:=\partial_\alpha:=\partial/\partial s^\alpha$
($\alpha = 1, 2$) of $S$,
\begin{equation}
\langle\partial_\alpha, dq\rangle=0 \quad \mbox{ over }\quad
S = \{ x \in \EE^3 \ | q(x) = 0\}.
\label{eq:ddq}
\end{equation}
Here $\langle, \rangle$ means the pointwise pairing
between the cotangent bundle and the tangent bundle of $\EE^3$.
The function $q$
can be locally regarded as the so-called level-set function
\cite{S,ZMOW}.
We could redefine the domain of $q$ such that it is restricted to
a tubular neighborhood $T_S$ of $S$,
$$
T_S:=\{x \in \EE^3 \ | \ |q(x)|<\epsT \}.
$$
Over $T_S$, $q$
agrees with the level-set function of $S$.
There we can naturally define a projection map
$\pi: T_S \to S$ and then we can regard $T_S$ as a fiber bundle over
$S$, which is homeomorphic to the normal bundle $N_{S}\to S$.
However, the level-set function is defined as a signed
distance function, which is a global continuous function over $\EE^3$
\cite{S}, and thus it has no natural
projective structure in general;
for example,
the level-set function $L$ of a sphere with radius $a$ is given by
$$
L(x^1,x^2,x^3) = \sqrt{(x^1)^2 +(x^2)^2 + (x^3)^2} - a,
$$
which induces the natural projective (fiber) structure except at
the origin $(0, 0, 0)$.
The level-set function has no projective structure
at $(0, 0, 0)$ in this case, and
we cannot define its differential there.
In other words,
the level-set function is not a global function over $\EE^3$
as a smooth function in general.
When we use the strategy of the
fiber bundle and its connection, we restrict ourselves to
consider the function $q$ in $T_S$.
Then the relation (\ref{eq:ddq}) and the parameters
$(s^1, s^2)$ are naturally lifted
to $T_S$ as an inverse image of $\pi$.
Further for $e_q:=\partial_q:=\partial/\partial q$, we have
$$
\partial_\alpha (e_q) = \sum_\beta \Gamma^\beta_{\ \alpha q} e_\beta
\mbox{ over } S.
$$
Here
$(\Gamma^\beta_{\ \alpha q})$ is the Weingarten map, which is
a kind of a point-wise $2\times2$-matrix
$((\Gamma^\beta_{\alpha q})_{\alpha\beta})$ \cite[Chapter VII]{KN}.
The eigenvalues of
$(\Gamma^\beta_{\alpha q})$ are the principal
curvatures, half of its trace
$\mbox{tr}(\Gamma^\beta_{\alpha q})/2$ is known as the
mean curvature, and its determinant
$\mbox{det}(\Gamma^\beta_{\alpha q})$
is the Gauss curvature \cite[Chapter VII]{KN}.
Noting the relation
$\langle e_\beta, d s^\alpha\rangle = \delta_\beta^\alpha$
for $\alpha, \beta = 1, 2$,
twice the mean curvature, $\kappa$, is given by
$$
\sum_{\alpha} \partial_\alpha (e_q) d s^\alpha = \kappa
\quad \mbox{over}\quad S.
$$
Further noting the relation $\partial_q e_q d q =0$, we obtain
$$
\sum_{\alpha} \partial_\alpha (e_q) d s^\alpha
+ \partial_q(e_q) d q = \kappa
\quad \mbox{over}\quad S.
$$
Due to the flatness of the Euclidean space, we identify $e_q$ with
$\nabla q /|\nabla q|$ and then we have the following proposition.
\begin{proposition} \label{prop:2-1}
The following relation holds at a point over $S$,
$$
\mathrm{div}\left( \frac{\nabla q }{|\nabla q|}\right) = \kappa.
$$
\end{proposition}
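Proposition \ref{prop:2-1} can be illustrated numerically: for the signed distance function $q(x)=|x|-a$ of a sphere of radius $a$, evaluating $\mathrm{div}(\nabla q/|\nabla q|)$ at a point on the sphere should return $\kappa = 2/a$. The following finite-difference sketch in Python is our own illustration, not part of the original text:

```python
import numpy as np

a = 1.0
def q(p):
    """Signed distance to the sphere of radius a (the level-set function L)."""
    return np.linalg.norm(p) - a

def unit_grad(p, h=1e-5):
    """Central-difference gradient of q, normalized: nabla q / |nabla q|."""
    g = np.array([(q(p + h*e) - q(p - h*e)) / (2*h) for e in np.eye(3)])
    return g / np.linalg.norm(g)

def div_unit_grad(p, h=1e-4):
    """Central-difference divergence of nabla q / |nabla q|."""
    return sum((unit_grad(p + h*e)[i] - unit_grad(p - h*e)[i]) / (2*h)
               for i, e in enumerate(np.eye(3)))

p = np.array([a, 0.0, 0.0])    # a point on the sphere S
print(div_unit_grad(p))        # close to kappa = 2/a = 2
```

Since $\mathrm{div}(x/|x|) = 2/|x|$, the same code evaluated off the surface returns $2/r$, consistent with the lift of the formula to the tubular neighborhood $T_S$.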
For the case $|\nabla q| = 1$,
using the Hodge star operator \cite{AM,N} and the exterior
derivative $d$, we also have
an alternative expression $*d * dq =\kappa$ over the surface $S$.
Here the Hodge star operator is $*: \Lambda^p(T_S) \to \Lambda^{3-p}(T_S)$
and the exterior derivative $d: \Lambda^p(T_S) \to \Lambda^{p+1}(T_S)$
$(d \omega = \sum_{i=1}^3\partial_i \omega d x^i)$,
where $\Lambda^p(T_S)$ is the set of smooth $p$-forms over
$T_S$ \cite{N}.
Noting that the left-hand side of the formula in Proposition \ref{prop:2-1}
can be lifted to $T_S$,
we see that the formula plays an important role in References \cite{BKZ,LZZ,ZCMO}
and in this article.
\subsection{Preliminary of Noether's theorem} \label{sec:two-two}
In this subsection,
we review Noether's theorem in the variational method which
appears in a computation of the energy-momentum tensor-field
in the classical field theory \cite{AM,BGG,IZ}.
Let the set of $\ell$ smooth real-valued functions
over $n$-dimensional Euclidean space $\EE^n$
be denoted by $\cC^{\infty}(\EE^n)^{\otimes\ell}$,
where $n$ is mainly three.
Let $x = (x^1,x^2,\ldots,x^n)$ denote the Cartesian coordinates
of $\EE^n$.
We consider the functional
$I: \cC^{\infty}(\EE^n)^{\otimes\ell} \to \RR$,
\begin{equation}
I = \int_{\EE^n} d^n x{ \cF}(\phi_a(x),\partial_i \phi_a(x)),
\label{eq:IinEL}
\end{equation}
where
$\cF$ is a local functional,
$\cF: \cC^{\infty}(\EE^n)^{\otimes\ell}|_x \to \Lambda^n(\EE^n)|_x$,
\begin{equation*}
\begin{split}
\cF: (\phi_a)_{a=1,\ldots,\ell}|_x \mapsto &
{\cF}(\phi_a(x),\partial_i \phi_a(x))d^n x
\equiv{ \cF}(\phi_a(x),\partial_1 \phi_a(x),
\ldots, \partial_n \phi_a(x))d^n x \\
&
\equiv{ \cF}(\phi_1(x), \ldots, \phi_\ell(x),
\partial_1 \phi_1(x), \ldots, \partial_n \phi_\ell(x))d^n x \\
\end{split}
\end{equation*}
and $\partial_i := \partial/ \partial x^i$, $(i=1, \cdots, n)$.
Then we obviously have the following
proposition.
\begin{proposition}\label{prop:A-1}
For the functional $I$ in (\ref{eq:IinEL}) over
$\cC^{\infty}(\EE^n)^{\otimes\ell}$,
the Euler-Lagrange equation coming from the variation
with respect to $\phi_a$ of $(\phi_b)_{b=1, \ldots, \ell}
\in \cC^{\infty}(\EE^n)^{\otimes\ell}$,
{\it{i.e.}},
$\frac{\delta I}{\delta \phi_a(x)} = 0$,
is given by
\begin{equation}
\frac{\delta \cF}{\delta \phi_a(x)}
-\sum_{i=1}^n
\partial_i \frac{\delta \cF}{\delta \partial_i \phi_a(x)} = 0.
\label{eq:AppeA1}
\end{equation}
\end{proposition}
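Proposition \ref{prop:A-1} can be checked symbolically. As a sketch (our own illustration; the Dirichlet density $\cF=\frac{1}{2}|\nabla\phi|^2$ is a standard textbook example, not one taken from this article), SymPy's `euler_equations` reproduces (\ref{eq:AppeA1}), here yielding the Laplace equation:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, y = sp.symbols('x y')
phi = sp.Function('phi')

# Dirichlet density F = (1/2)|grad phi|^2 (standard example, for illustration)
F = sp.Rational(1, 2)*(sp.Derivative(phi(x, y), x)**2
                       + sp.Derivative(phi(x, y), y)**2)

# Euler-Lagrange equation: dF/dphi - sum_i d_i (dF/d(d_i phi)) = 0
eqs = euler_equations(F, phi(x, y), [x, y])
print(eqs)   # the Laplace equation: -phi_xx - phi_yy = 0
```

For a general density $\cF(\phi_a,\partial_i\phi_a)$, the same call produces exactly the left-hand side of (\ref{eq:AppeA1}) for each field $\phi_a$.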
Using equation (\ref{eq:AppeA1}),
we consider the effect of a small translation
$x \to x'= x + \delta x$ on the functional $I$.
The following proposition is known as Noether's theorem
which plays crucial roles in this article.
\begin{proposition}\label{prop:A-2}
The functional derivative of $I$ with respect to $\delta x_i$ is
given by
\begin{equation}
\frac{\delta I}{\delta x^i}
= \sum_{j=1}^n\partial_j\left[ \sum_{a=1}^\ell
\frac{\delta \cF}{\delta\partial_j \phi_a}\partial_i \phi_a \right]
- \partial_i \left[ { \cF}\right].
\label{eq:AppeA2}
\end{equation}
If $I$ is invariant for the translation,
(\ref{eq:AppeA2}) gives the conservation of the momentum.
\end{proposition}
\begin{proof}
For the variation $x'= x + \delta x$, the scalar function becomes
\begin{equation*}
\phi_a(x') = \phi_a(x) +\sum_{i=1}^n \partial_i \phi_a(x)
\delta x^i + O(\delta x^2).
\end{equation*}
From the relations on the Jacobian and each component,
\begin{equation*}
\frac{\partial x'}{\partial x}
= 1 +\sum_{i=1}^n \partial_i \delta x^i + O(\delta x^2),
\quad
\frac{\partial x^k}{\partial x'{}^i}=\delta^k_i
-\partial_i \delta x^k + O(\delta x^2),
\end{equation*}
we have
\begin{equation*}
\begin{split}
\frac{\partial \phi_a(x')}{\partial x'{}^i}
&=\sum_{k=1}^n\frac{\partial \bigl(\phi_a(x) +
\sum_{j=1}^n \partial_j \phi_a(x) \delta x^j\bigr)}{\partial x^k}
\frac{\partial x^k}{\partial x'{}^i} + O(\delta x^2)\\
&=\partial_i \phi_a+
\sum_{j=1}^n (\partial_i\partial_j \phi_a) \delta x^j + O(\delta x^2).
\end{split}
\end{equation*}
Then up to $\delta x^2$, we obtain
\begin{equation*}
\begin{split}
& \int_{\EE^n} d^n x' {\cF}(\phi_a(x'),\partial_i' \phi_a(x'))
- \int_{\EE^n} d^n x {\cF}(\phi_a(x),\partial_i \phi_a(x))\\
&= \int_{\EE^n}\Bigr[
\sum_{i=1}^n\sum_{a=1}^\ell
\frac{\delta {\cF}}{\delta \phi_a} \partial_i \phi_a(x)
\delta x^i +\sum_{i,j = 1}^n \sum_{a=1}^\ell
\frac{\delta \cF}{\delta\partial_j \phi_a}
\partial_i \partial_j\phi_a(x) \delta x^i
+\sum_{i=1}^n{\cF} \partial_i \delta x^i\Bigr] d^n x\\
&=\int_{\EE^n}\sum_{i=1}^n\left(
\sum_{j=1}^n \partial_j\left[ \sum_{a=1}^\ell
\frac{\delta \cF}{\delta\partial_j \phi_a}\partial_i \phi_a \right]
-\partial_i { \cF} \right) \delta x^i \, d^n x.\\
\end{split}
\end{equation*}
Here we use the Euler-Lagrange equation (\ref{eq:AppeA1}) and then
we have (\ref{eq:AppeA2}).
If we assume that $I$ is invariant under the variation, this expression vanishes.
\qed
\end{proof}
\section{Variational principle for incompressible fluid dynamics}
\label{sec:three}
As we will derive the governing equation of an incompressible
multi-phase flow with interfaces as a variational equation
using the variational method,
let us review the variational theory of
incompressible fluids to obtain the Euler equation,
following References \cite{Ar,AK,EM,K,Ko,MW,NHK}.
Let $\Omega$ be a smooth domain in $\EE^3$.
The incompressible fluid dynamics can be interpreted
as a geometrical problem associated with an infinite dimensional Lie group
\cite{AK,EM,O}.
It is related to the volume-preserving diffeomorphism
group $\SDiff(\Omega)$
as a subgroup of the diffeomorphism group
$\Diff(\Omega)$. The diffeomorphism group $\Diff(\Omega)$
is generated by a smooth coordinate transformation of $\Omega$.
The Lie algebras $\sdiff(\Omega) \equiv T_e\SDiff(\Omega)$ of $\SDiff(\Omega)$
and $\diff(\Omega) \equiv T_e\Diff(\Omega)$ of $\Diff(\Omega)$ are
infinite-dimensional real vector spaces, and
$\sdiff(\Omega)$ is a linear subspace of $\diff(\Omega)$.
Following Ebin and Marsden \cite{EM}, we consider
the geometrical meaning of the action integral of
an incompressible fluid,
\begin{equation}
\int_T dt \int_{\Omega} d^3 x
\left(\frac{1}{2}\rho |u|^2\right).
\label{eq:Eue}
\end{equation}
Here
$T:=(0, T_0)$ is a subset of the set of real numbers $\RR$,
$(x, t)$ is the Cartesian coordinate of the space-time
$\Omega\times T$, $\rho$ is the density of the fluid
which is constant in this section, and $u=(u^1, u^2, u^3)$ is
the velocity field of the fluid.
Geometrically speaking,
a flow obeying the incompressible fluid dynamics is
considered as a section of
a principal bundle
$\IFluid(\Omega\times T)$ over the absolute time axis $T \subset \RR$
as its base space,
\begin{equation}
\begin{CD}
\SDiff(\Omega) @>>> \IFluid(\Omega\times T) \\
@. @V{\varpi}VV\\
@. T.\\
\end{CD}
\label{eq:PBIFluid}
\end{equation}
The projection $\varpi$ is induced from the
trivial fiber structure $\varpi_\Omega: \Omega \times T
\to T$, $((x, t) \to t)$. In classical (non-relativistic) mechanics,
every point of space-time has a unique absolute time $t\in \RR$,
in contrast to the relativistic theory.
Due to the Weierstrass polynomial approximation theorem \cite{Y},
we can locally approximate
a smooth function by a regular function.
Let the set of smooth functions
over $\Omega$ be denoted by $\cC^\infty(\Omega)$,
and the set of regular real functions,
whose elements can be expressed by Taylor expansions
in terms of local coordinates,
by $\cC^\omega(\Omega)$.
The action of $\Diff(\Omega)$ on
$\cC^\omega(\Omega) \subset\cC^\infty(\Omega)$
is given by
$$
\ee^{s u^i \partial_i} f(x) = f(x + s u),
$$
for an element $f \in \cC^\omega(\Omega)$, and small $s>0$,
where $\partial_i := \partial / \partial x^i$ and
we use the Einstein convention; when an index $i$ appears twice,
we sum over the index $i$.
Thus the action $\ee^{s u^i \partial_i}$
is regarded as an element of $\Diff(\Omega)$.
As a frame bundle of the principal bundle $\IFluid(\Omega\times T)$, we
consider a vector bundle $\Coor(\Omega\times T)$ with infinite rank,
$$
\begin{CD}
\cC^\infty(\Omega) @>>> \Coor(\Omega\times T) \\
@. @V{\varpi'}VV\\
@. T.\\
\end{CD}
$$
Since $\cC^\infty(\Omega)$ is regarded as an uncountably infinite
dimensional linear space over $\RR$,
we should regard
$\Diff(\Omega)$ and $\SDiff(\Omega)$ as subgroups of
an infinite-dimensional general linear group, if it is defined.
More rigorously, we should
consider the ILH space (inverse limit of Hilbert spaces)
(or ILB space (inverse limit of Banach spaces)) introduced in
Reference \cite{O} by adding a certain
topology to (a subspace of) $\cC^\infty(\Omega\times T)$,
and then we should also regard $\Diff$ and $\SDiff$
as ILH Lie groups.
However, our purpose is to obtain an extended Euler equation from a more
practical viewpoint.
Thus we formulate the theory primitively, even though
we give up considering a general solution for a general initial
condition.
We consider smooth sections of $\Coor(\Omega\times T)$ and
$\IFluid(\Omega\times T)$.
Smooth sections of $\Coor(\Omega\times T)$ can be
realized as $\cC^\infty(\Omega \times T)$.
In the meaning of the Weierstrass polynomial approximation theorem \cite{Y},
an appropriate topology in $\cC^\infty(\Omega\times T)$ makes
$\cC^\omega(\Omega\times T)$ dense in $\cC^\infty(\Omega\times T)$
by restricting the region $\Omega \times T$ appropriately.
Under the assumption, we also deal with a smooth section of
$\IFluid(\Omega\times T)$.
Let us consider a coordinate function
$(\gamma^i(x, t))_{i=1,2,3} \in \cC^\omega(\Omega \times T)$
such that
$$
\frac{d}{d t} \gamma^i(x, t) = u^i(x, t), \quad
\gamma^i(x, t) = x^i \ \ \mbox{at } \ t \in T,
$$
which means
$$
\gamma^i(x, t + \delta t) = x^i + u^{i}(x,t) \delta t + O(\delta t^2),
$$
for a small $\delta t$.
Here the addition is given as a Euclidean move in $\EE^3$.
As an inverse function of $\gamma=\gamma(u,t)$,
we could regard $u$ as a function of $\gamma$ and $t$,
$$
u(x,t) =u(\gamma(x,t),t).
$$
Further we introduce a small quantity modeled on $\delta t\cdot u^i$,
\begin{equation}
\tgamma^i(x,t) := \gamma^i(x,t) - x^i.
\label{eq:tgamma}
\end{equation}
Then a section $g$
of $\IFluid(\Omega\times T)$ at $t \in T$ can be written as
\begin{equation}
g(t) = \ee^{\tgamma^i \partial_i} \in \IFluid(\Omega\times T)\Bigr|_{t}
\approx \SDiff(\Omega) \subset \Diff(\Omega).
\label{eq:etgamma}
\end{equation}
Here we consider $g$ as an element of $\SDiff(\Omega)$, and thus
it satisfies the volume-preserving
condition, which appears as the constraint that the Jacobian,
$$
\frac{\partial \gamma}{\partial x}
:=
\det\left(\frac{\partial \gamma^i}{\partial x^j} \right)
= (1 +
\tr(\partial_j u^i) \delta t) + O(\delta t^2),
$$
must preserve $1$, {\it{i.e.}}, the well-known condition
that $\tr (\partial_j u^i) = \mathrm{div}(u)$ must vanish,
or $\displaystyle{\frac{d}{dt}\frac{\partial \gamma}{\partial x} = 0}$.
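The statement that the Jacobian equals $1 + \tr(\partial_j u^i)\,\delta t + O(\delta t^2)$, and hence is volume preserving exactly when $\mathrm{div}(u)=0$, can be verified symbolically. The divergence-free field below is our own hypothetical example:

```python
import sympy as sp

x1, x2, x3, dt = sp.symbols('x1 x2 x3 dt')
X = sp.Matrix([x1, x2, x3])

# a divergence-free velocity field (hypothetical example)
u = sp.Matrix([x2*x3, -x1*x3, sp.Integer(0)])
div_u = sum(sp.diff(u[i], X[i]) for i in range(3))   # = 0

# flow map gamma = x + dt*u and its Jacobian determinant
J = (X + dt*u).jacobian(X).det()
print(div_u, sp.expand(J))   # the dt-linear term of J is exactly div u, here 0
```

Replacing $u$ by a field with $\mathrm{div}(u)\ne 0$ makes the $\delta t$-linear coefficient of $J$ equal to $\mathrm{div}(u)$, as in the constraint above.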
Following Reference \cite{EM}, we reformulate the
action integral (\ref{eq:Eue})
as {\lq\lq}the energy functional{\rq\rq}
in the framework of the harmonic
map theory.
In the harmonic map theory \cite{GMO},
considering a smooth map $h : M \to G$
for an $n$-dimensional smooth base manifold $M$ and its target group manifold $G$,
{\lq\lq}the energy functional{\rq\rq} is given by
\begin{equation}
E = \frac{1}{2}\int_M \tr \left((h^{-1} d h )*(h^{-1} d h )\right).
\label{eq:B-1}
\end{equation}
Here $*$ means the Hodge star operator, which is
for $*:TG\otimes\Lambda^p(M) \to TG\otimes\Lambda^{n-p}(M)$ where
$\Lambda^p(M)$ is the set of the smooth $p$-forms over $M$ \cite{N},
and
$TG\otimes\Lambda^p(M)$ is the set of the tangent bundle $TG$ valued
smooth $p$-forms over $M$ \cite{N}.
The term {\lq\lq}energy functional{\rq\rq} in the harmonic map theory
means that it is an invariant of the system, and thus it sometimes
differs from an actual energy in physics.
Since in (\ref{eq:PBIFluid}), the base space $T$ is one-dimensional and
the target space $\IFluid(\Omega\times T)|_{t}$ at $t \in T$ is
the infinite dimensional space,
{\lq\lq}the energy functional{\rq\rq} (\ref{eq:B-1})
in the harmonic map theory corresponds to
the action integral $\cS_{\free} [\gamma]$ which is defined by
$$
\cS_{\free} [\gamma]=
\frac{1}{2} \int _T \int_\Omega
\frac{\partial \gamma}{\partial x}
\rho d^3 x \cdot dx^i \otimes dx^i \left(
\left( \ee^{-\tgamma^k \partial_k}
dt \frac{d}{dt}\ee^{\tgamma^\ell \partial_\ell} \right)
\left( \ee^{-\tgamma^j \partial_j}
\frac{d}{dt}\ee^{\tgamma^n \partial_n} \right)
\right).
$$
Here $d x^i (\partial_j ) :=
\langle\partial_j , d x^i \rangle
= \delta^i_{\ j}$ is the natural pairing between
$T \Omega$ and $T^* \Omega$.
The trace in (\ref{eq:B-1}) corresponds to the
integral over $\Omega$ with
$ \frac{\partial \gamma}{\partial x}
\rho d^3 x \cdot dx^i \otimes dx^i$.
The Hodge $*$ operator acts on the element such as
$*\left( \ee^{-\tgamma^k \partial_k}
dt \frac{d}{dt}\ee^{\tgamma^\ell \partial_\ell} \right)
= \left( \ee^{-\tgamma^k \partial_k}
\frac{d}{dt}\ee^{\tgamma^\ell \partial_\ell} \right)$
as the natural map from
$\diff(\Omega)$ valued 1-form to 0-form.
Further we assume that $\rho$ is a constant function in this section.
Then the action integral $\cS_{\free} [\gamma]$
obviously agrees with (\ref{eq:Eue}).
We investigate the functional derivative and the
variational principle of this $\cS_\free[\gamma]$.
Let us consider the variation,
$$
\gamma^j(x,t') \to \gamma^j(x, t') + \delta \gamma^j(x,t'), \quad
\mbox{and} \quad
\tgamma^j(x,t') \to \tgamma^j(x, t') + \delta \gamma^j(x,t'),
$$
where we implicitly assume that $\delta \gamma^j$ is proportional
to the Dirac $\delta$ function,
$\delta(t' - t)$, for some $t$ and $\delta \gamma^j$ vanishes at
$\partial \Omega$.
As we are concerned only with local effects, or differential
equations, we implicitly assume that
we can neglect the boundary effects arising from $\partial \Omega$ in
the variational equation.
If one needs the boundary effects, one may follow the study of
Shkoller \cite{Shk}.
Further one could use the language of the sheaf theory to
describe the local effects \cite{KKK}.
As we are concerned only with differential equations, and thus
our theory is completely local except for Section \ref{sec:VOF}, we could
deal with germs of related bundles \cite{AGLV} as in Reference \cite{M},
which is also naturally connected with
a computational method of fluid dynamics \cite{M2}.
Let us consider the extremal point of the action integral (\ref{eq:Eue})
following the variational principle.
Noting that $\partial \gamma /\partial x=1$,
the above Jacobian becomes
$$
\frac{\partial (\gamma + \delta \gamma)}{\partial x}
=
\frac{\partial \gamma}{\partial x}
(1 + \partial_k \delta \gamma^k) + O((\delta \gamma)^2).
$$
Since we employ the projection method, we first
consider a variation
in $\diff(\Omega)$ rather than in $\sdiff(\Omega)$.
For the variation,
the action integral $\cS_{\free} [\gamma]$ with (\ref{eq:etgamma})
becomes
\begin{equation}
\begin{split}
\cS_\free [\gamma+\delta \gamma] -&
\cS_\free [\gamma] =\\
&-\int_T dt \int_\Omega
\frac{\partial \gamma}{\partial x}
d^3 x \cdot dx^i \otimes dx^i
\left(
\delta \gamma^k
\frac{d}{dt}\left( \rho
g^{-1} \frac{d}{dt} g \right)
+ \delta \gamma^k \partial_k \frac{1}{2}\rho |u|^2
\right).\\
\end{split}
\nonumber
\end{equation}
Now we have the following proposition.
\begin{proposition} \label{prop:3-1}
Using the above definitions, the variational principle
in $\SDiff(\Omega)$,
$$
\frac{\delta \cS_\free[\gamma]}{\delta \gamma(x,t)}
\Bigr|_{\SDiff(\Omega)|_t} = 0,
$$
is reduced to the Euler equation,
\begin{equation}
\frac{\partial}{\partial t} \rho u^i + u^j \partial_j \rho u^i
+ \partial_i p = 0,
\label{eq:EulerEq}
\end{equation}
where $p$ comes from the projection from $T\Diff(\Omega)|_{\SDiff(\Omega)}
\to T\SDiff(\Omega)$.
\end{proposition}
\begin{proof}
Basically we leave the rigorous proof
and especially the derivation of $p$ to \cite{AK,EM}.
The existence of $p$ was investigated well in
Appendix of Reference \cite{EM}
as the Hodge decomposition \cite{AM,N}. (See also
the following Remark \ref{rmk:3-2}.)
Except the derivation of $p$,
we use the above relations and the following relations,
\begin{equation*}
\begin{split}
\frac{d}{dt}
\left(\rho \ee^{-\tgamma^j \partial_j}
\frac{d}{dt}\ee^{\tgamma^n \partial_n} \right)
&=
\frac{d}{dt} \left(\rho u^i(\gamma(t),t) \partial_i \right) \\
&= \left( \frac{\partial}{\partial t}\rho u^i|_{x=\gamma}
+ (\frac{d}{dt}\tgamma^j) \partial_j \rho u^i \right) \partial_i \\
&= \left( \frac{\partial}{\partial t}\rho u^i
+ u^j \partial_j\rho u^i \right) \partial_i \\
&=:\left(\frac{D}{Dt} \rho u^i \right) \partial_i. \\
\end{split}
\end{equation*}
Then we obtain the Euler equation.
\qed
\end{proof}
\begin{myremark}\label{rmk:3-2}
{\rm{
The Euler equation was obtained by the simple variational principle.
Physically speaking, the conservation of
the momentum in the sense of Noether's theorem \cite{BGG,IZ} led to
the Euler equation.
However, we could introduce the pressure $p_L$ term as the
Lagrange multiplier of the constraint of the volume preserving.
In the case, instead of $\cS_\free$, we deal with
$$
\cS_{\free,p} = \cS_\free
+ \int_T dt \int_\Omega p_L(x,t) \frac{\partial \gamma}{\partial x}
d^3 x.
$$
Then noting the term coming from the Jacobian, the relation,
$$
\frac{\delta \cS_{\free,p}
[\gamma]}{\delta\gamma(x,t)}\Bigr|_{\SDiff(\Omega)|_t} = 0,
$$
is reduced to the Euler equation,
$$
\frac{\partial}{\partial t} \rho u^i + u^j \partial_j \rho u^i
+ \partial_i (p_L + \frac{1}{2}\rho |u|^2) = 0.
$$
As the pressure is determined by the (divergence free) condition of $u$,
we renormalize \cite[(25)]{Ko},
$$
p := p_L + \frac{1}{2}\rho |u|^2.
$$
More rigorous arguments are left to References \cite{EM,O},
and physical interpretations can be found, {\it{e.g.}}, in References
\cite{AF,B,K,NHK,Schm,V}.
We give a comment on
the projection from $T\Diff(\Omega)|_{\SDiff(\Omega)}
\to T\SDiff(\Omega)$ in (\ref{eq:EulerEq}),
which is known as the projection method.
First we note that
the divergence free condition $\mathrm{div} (u) =0$
simplifies the Euler equation (\ref{eq:EulerEq}),
$$
\rho \frac{D u}{D t} + \nabla p = 0, \quad
\frac{\partial u^i}{\partial t} + u^j \partial_j u^i
+\frac{1}{\rho}\partial_i p=0.
$$
As mentioned in Section \ref{sec:VOF}, in the difference equation
we have a natural interpretation of the projection method \cite{Cho}.
We, thus, regard $D u / Dt$ in $T\Diff(\Omega)|_{\SDiff(\Omega)}$ as
$\lim_{\delta t \to 0}\frac{u(t + \delta t)-u(t)}{\delta t}$
for $u(t+\delta t):=u(t+\delta t,\gamma(t+\delta t)) \in \diff(\Omega)$
and $u(t):=u(t,\gamma(t)) \in \sdiff(\Omega)$, {\it{i.e.}},
$\mathrm{div} \left(u(t)\right) = 0$ by considering
$T\Diff(\Omega)$ at the unit $e$ of $\Diff(\Omega)$
up to $\delta x^2$, as we did in (\ref{eq:tgamma}) and (\ref{eq:etgamma}).
In order to find the deformation
$u^\parallel(t + \delta t)$ in $\sdiff(\Omega)$ by a natural projection
from $\diff(\Omega)$ to $\sdiff(\Omega)$ \cite[p.36]{CM},
we decompose $u(t + \delta t)$ into $u^\parallel(t + \delta t)$ and
$u^\perp(t + \delta t)$ such that
$\partial_i u^{\perp i}(t + \delta t) := \partial_i u^i(t + \delta t)$.
Then $u^\parallel(t + \delta t):= u(t + \delta t)-$
$u^\perp(t + \delta t)$ belongs to $\sdiff(\Omega)$.
Thus the pressure $p$ is determined by \cite{CM}
\begin{equation}
\partial_i u^i(t + \delta t)
+ \delta t\partial_i \frac{1}{\rho}\partial_i p =0.\quad
\label{eq:ProjM}
\end{equation}
In other words, since $u^{\parallel i}(t + \delta t)\equiv$
$u^i(t + \delta t) + \delta t \frac{1}{\rho}\partial_i p$ belongs
to $\sdiff(\Omega)$,
the deformation $u^{\parallel i}(t + \delta t)- u^i(t)$, which gives
$D u^\parallel/D t$ and the Euler equation (\ref{eq:EulerEq}),
is a deformation in $\IFluid(\Omega \times T)$.
After taking the continuous limit $\delta t \to 0$,
the equation for the pressure
(\ref{eq:ProjM}) can be written as \cite{Cho},
$$
(\partial_i u^j) (\partial_j u^i)
+\partial_i\frac{1}{\rho}\partial_i p=0,
$$
by noting the relations $[\partial_t, \partial_i]=0$
and $\mathrm{div}(u(t))=0$, {\it{i.e.}},
$\partial_i u^i(t + \delta t) = \partial_i[ u^i(t) $
$+ \frac{\partial}{\partial t} u^i(t) \delta t$
$+ u^j(t)\partial_j u^i(t)\delta t] + O(\delta t^2)$.
The Poisson equation with (\ref{eq:EulerEq})
guarantees the divergence free condition.
Hence the pressure $p$ in the incompressible fluid
is determined geometrically.
}}
\end{myremark}
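As a concrete illustration of the projection method described above, the following Python sketch removes the divergent part of a velocity field on a two-dimensional periodic grid by solving the pressure Poisson equation (\ref{eq:ProjM}) spectrally. The grid size, time step, and constant density are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Minimal sketch of the projection step: on a 2-D periodic grid, solve
#   laplacian(p) = -(rho/dt) div(u)
# and correct u^par = u + (dt/rho) grad(p), following the sign
# convention of the text.
n, dt, rho = 64, 0.1, 1.0
h = 2 * np.pi / n                          # grid spacing on [0, 2*pi)^2
k = 2 * np.pi * np.fft.fftfreq(n, d=h)     # spectral wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                             # avoid 0/0 for the mean mode

def project(ux, uy):
    """Return the divergence-free part u^parallel of (ux, uy)."""
    div_hat = 1j * kx * np.fft.fft2(ux) + 1j * ky * np.fft.fft2(uy)
    p_hat = (rho / dt) * div_hat / k2      # solves the Poisson equation
    p_hat[0, 0] = 0.0                      # pressure is fixed up to a constant
    px = np.real(np.fft.ifft2(1j * kx * p_hat))
    py = np.real(np.fft.ifft2(1j * ky * p_hat))
    return ux + dt / rho * px, uy + dt / rho * py
```

Applying `project` to a smooth periodic field returns a field whose discrete divergence vanishes to machine precision, the discrete counterpart of the decomposition into $u^\parallel$ and $u^\perp$.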
\section{Reformulation of Surface tension as a minimal surface energy}
\label{sec:four}
In this section we reformulate
the SURFER scheme \cite{LZZ} following the
variational principle and the arguments of previous sections.
\subsection{Analytic expression of surface area}\label{subsec:fourOne}
We should first note that, in general, a higher-dimensional
generalized function like the Dirac delta function involves some
difficulties in its definition \cite{Y}.
Because of these difficulties,
in the theory of Sato hyperfunctions \cite{KKK},
sheaf theory and cohomology theory are
necessary to describe higher-dimensional generalized functions,
and these tools are too abstract to be applied to a problem with an arbitrary
geometrical setting.
Even for generalized functions in the framework of
Schwartz distribution theory, we should pay attention to their treatment.
However, since the surface $S$ in this article is
a hypersurface of codimension one,
the situation becomes much easier.
We assume that the smooth surface $S$ is orientable and compact, so
that we can
define its inner and outer sides. In other words, there
is a three-dimensional subspace (a manifold with boundary) $B$ such that
its boundary $\partial B$ agrees with $S$ and $B$
equals the inner side of $S$ together with $S$ itself.
Then we consider a generalized function $\theta$ over
$\Omega \subset \EE^3$ such that
it vanishes over the complement $B^c = \Omega \setminus B$
and is unity for the interior $B^\circ:=
B \setminus \partial B$;
$\theta$ is known as a characteristic function of $B$.
We consider the global function
$\theta(x)$ and its derivative $d\theta(x)$ in the sense of
generalized functions, which is given by
\begin{equation*}
d \theta(x) = \sum_{i}\partial_i \theta(x) d x^i =\partial_q \theta(x) d q.
\end{equation*}
Here we use the notations in Section \ref{sec:two-one}.
Using the nabla symbol $\nabla \theta=(\partial_i \theta(x) )_{i=1,2,3}$,
$|\nabla \theta| d^3 x$ is interpreted as
\begin{equation*}
|\nabla \theta|d^3 x=|(* d \theta)\wedge dq|. \quad
\end{equation*}
Here due to the Hodge star operation
$*: \Lambda^p(\Omega) \to \Lambda^{3-p}(\Omega)$,
$* d \theta = \tilde e \partial_q \theta d s^1\wedge d s^2$
where $\tilde e$ is the Jacobian between the coordinate systems
$(d s^1, d s^2, d q)$ and $(d x^1, d x^2, d x^3)$.
Then we have the following proposition:
\begin{proposition}
If the integral,
\begin{equation*}
\cA:= \int_{\Omega} |\nabla \theta| d^3 x \equiv \int_{\Omega}
|(* d \theta)\wedge dq|,
\end{equation*}
is finite, $\cA$ agrees with the area of the surface $S$.
\end{proposition}
It should be noted that, due to the codimension of
$S \subset \Omega$,
we have used the fact that the Dirac $\delta$ function
along $q \in T_S$ is an integrable
function whose integral is the Heaviside function.
This fact is a key to this approach.
\subsection{Quasi-characteristic function for surface area}
For later convenience, we introduce the support of
a function over $\Omega$, which
is denoted by {\lq\lq}supp{\rq\rq}, {\it{i.e.}},
for a function $g$ over $\Omega$, its support is defined by
$$
\supp(g) =\overline{\{x \in \Omega \ | \ g(x) \neq 0\}},
$$
where {\lq\lq}$\ \bar{\ } \ ${\rq\rq}
means the closure as the topological space $\Omega$.
One of our purposes is to express the surface $S$ approximately by means of
numerical methods.
Since it is difficult to deal with the generalized function $\theta$
in a discrete system
such as a structured lattice \cite{Ch},
we introduce a smooth function $\xi$ over $\Omega$
as a quasi-characteristic function
which approximates the function
$\theta$
\cite{BKZ,LZZ},
\begin{equation}
\xi(x) =\left\{
\begin{matrix}
0 & \mbox{for } x \in B^c \bigcap \{x \in \Omega \ | \
|q(x)| < \epsX/2\}^c, \\
1 & \mbox{for } x \in B \bigcap \{x \in \Omega\ | \
|q(x)| < \epsX/2\}^c, \\
\mbox{monotonically decreasing in } q(x)& \ \mbox{otherwise}.
\end{matrix}
\right.
\label{eq:epsX}
\end{equation}
We note that along the line of $d q$ for $q \in (-\epsX/2, \epsX/2)$,
$\xi$ is a monotonically decreasing function of $q$ which interpolates
between $1$ and $0$.
We now implicitly assume that $\epsX$ is much smaller than
$\epsT$ defined in Section \ref{sec:two-one}, so that
the support of $|\nabla \xi|$ lies in the tubular neighborhood $T_S$.
However, after formulating
the theory, we will extend the geometrical setting of Section \ref{sec:two-one}
to more general ones which include singularities; there
$\epsT$ may lose its mathematical meaning, but $\epsX$
survives as a control parameter which governs the system.
For example, as in Reference \cite{LZZ},
we can also deal with topology changes.
By letting $\xi^c (x) := 1 - \xi(x)$, $\xi^c$ and $\xi$ are
regarded as a partition of unity \cite[I p.272]{KN}, or
$$
\xi(x) + \xi^c(x) \equiv 1.
$$
We call these $\xi$ and $\xi^c$ {\lq\lq}color functions{\rq\rq} or
{\lq\lq}phase fields{\rq\rq} in the following sections.
We have an approximation
of the area of the surface $S$ by the following proposition.
\begin{proposition}
Depending upon $\epsX$, we define the integral,
\begin{equation*}
\cA_{\xi}:=\int_{\Omega} |\nabla \xi| d^3 x,
\end{equation*}
and then the following inequality holds,
\begin{equation*}
\quad |\cA_\xi -\cA | < \epsX \cdot \cA.
\end{equation*}
\end{proposition}
Here we note that
$\cA_\xi$ is regarded as an approximation of the area $\cA$ of $S$,
controlled by $\epsX$. In other words,
we use $\epsX$ as the parameter which controls the
difference between the characteristic function $\theta$ and
the quasi-characteristic function $\xi$
in the phase field model \cite{BKZ,LZZ}.
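The bound $|\cA_\xi - \cA| < \epsX \cdot \cA$ can be checked numerically. The following Python sketch is a two-dimensional analogue, where $\cA_\xi$ approximates the perimeter of a circle; the grid size, radius, and the tanh profile chosen for $\xi$ are illustrative assumptions rather than ingredients of the text.

```python
import numpy as np

# 2-D analogue of A_xi = int |grad xi| d^2x for a circle of radius R:
# the integral should approximate the perimeter 2*pi*R.
n, R, eps = 256, 1.0, 0.1
h = 4.0 / n
x = np.linspace(-2.0, 2.0, n, endpoint=False) + h / 2
X, Y = np.meshgrid(x, x, indexing="ij")
q = np.sqrt(X**2 + Y**2) - R              # signed distance to the circle S
xi = 0.5 * (1.0 - np.tanh(q / eps))       # quasi-characteristic function:
                                          # ~1 inside B, ~0 outside, width ~eps
gx, gy = np.gradient(xi, h)               # discrete nabla xi
area_xi = np.sum(np.sqrt(gx**2 + gy**2)) * h**2
```

With these parameters `area_xi` differs from $2\pi R$ by well under one percent, consistent with the role of $\epsX$ as the control parameter of the approximation.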
Let us consider its extremal point following the
variational principle in a purely geometrical sense.
\begin{proposition} \label{prop:4-3}
For sufficiently small $\epsX$, we have
\begin{equation*}
\begin{split}
\frac{\delta}{\delta \xi(x)} \cA_\xi
&=-\partial_i\frac{\partial_i \xi }{|\nabla \xi| }(x)
\\
&=\kappa(x),
\end{split}
\end{equation*}
where $x \in S$ or $q=0$.
\end{proposition}
\begin{proof}
Noting the facts that $\partial \xi/\partial q <0$ at $q=0$ and
\begin{equation*}
|\nabla \xi| = \sqrt{\nabla \xi\cdot \nabla \xi},
\end{equation*}
Proposition \ref{prop:A-1} and
the equality in Proposition \ref{prop:2-1} show the relation.
\qed
\end{proof}
In the vicinity of $S$, the coordinate $q$ of Section \ref{sec:two-one}
can
be identified with the level-set function, and
the authors of References \cite{ZCMO,ZMOW} also used this relation.
Since all geometrical quantities on $S$ are
lifted to $T_S$ as the inverse image under $\pi$,
the relation in Proposition \ref{prop:4-3} is also
defined over $(\supp(|\nabla \xi|))^\circ \subset T_S$,
and from here on we redefine $\kappa$ by this relation.
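The relation $\kappa = -\partial_i(\partial_i \xi / |\nabla \xi|)$ of Proposition \ref{prop:4-3} can also be checked numerically. The following Python sketch is a two-dimensional analogue, so that $\kappa$ is the single curvature $1/R$ of a circle rather than a sum of principal curvatures; the grid parameters and the tanh profile are illustrative assumptions.

```python
import numpy as np

# 2-D check of kappa = -div( grad(xi) / |grad(xi)| ) for a circle of
# radius R, where the exact curvature is 1/R.
n, R, eps = 256, 1.0, 0.1
h = 4.0 / n
x = np.linspace(-2.0, 2.0, n, endpoint=False) + h / 2
X, Y = np.meshgrid(x, x, indexing="ij")
q = np.sqrt(X**2 + Y**2) - R
xi = 0.5 * (1.0 - np.tanh(q / eps))         # decreasing in q, as in the text
gx, gy = np.gradient(xi, h)
g = np.sqrt(gx**2 + gy**2) + 1e-12          # regularized |grad xi|
kappa = -(np.gradient(gx / g, h, axis=0) + np.gradient(gy / g, h, axis=1))
band = np.abs(q) < h                         # cells straddling the interface q = 0
kappa_mean = np.mean(kappa[band])            # should be close to 1/R
```

Only cells near $q=0$ are sampled, since away from the interface $|\nabla\xi|$ is exponentially small and the normalized gradient carries no geometric information.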
\bigskip
\subsection{Statics}
Having finished the geometrical setting, let us consider physical
problems.
Before considering the dynamics of the phase-field flow,
we first consider a static surface problem.
Let $\sigma$ be the surface tension coefficient between the two fluids
corresponding to $\xi$ and $\xi^c$.
Recall that we call $\xi$ and $\xi^c$ {\lq\lq}color functions{\rq\rq} or
{\lq\lq}phase fields{\rq\rq};
more precisely, we say that a color function equipped with individual
physical parameters is a phase field.
The surface energy $\cE:=\sigma \cA$ is, then, approximately given by
\begin{equation}
\cE_\two := \sigma \cA_\xi = \sigma\int_{\Omega} |\nabla \xi| d^3 x.
\end{equation}
As a statics problem, we apply
the variational method to this system following Section \ref{sec:two-two}.
Since a static surface phenomenon is caused by the pressure
difference between the two materials, we consider the free energy functional
\cite{MM},
\begin{equation}
\cF_\two :=\int_{\Omega}
\left(\sigma|\nabla \xi| - (p_1 \xi + p_2 \xi^c)\right) d^3 x,
\label{eq:ELs0}
\end{equation}
where $p_a$ ($a = 1, 2$) is the proper pressure of each material.
\begin{proposition}\label{prop:4-4}
The variational problem with respect to $\xi$, $\delta \cF_\two/\delta \xi =0$,
reproduces the Laplace equation {\rm{\cite[Chap.7]{LL}}},
\begin{equation}
(p_1 - p_2) - \sigma \kappa(x) = 0, \quad x\in
(\supp(|\nabla \xi|))^\circ.
\label{eq:ELs}
\end{equation}
\end{proposition}
\begin{proof}
As in Proposition \ref{prop:A-1}, direct computations give the relation.
\qed
\end{proof}
This proposition implies that the functional $\cF_\two$ is natural. The
solutions of (\ref{eq:ELs}) are given by the constant mean curvature
surfaces studied in References \cite{FW,GMO,T}.
Furthermore we also have another static equation,
whose relation to the Laplace equation (\ref{eq:ELs})
is written in Remark
\ref{rmk:4-2}.
\begin{proposition}\label{prop:4-5}
For every point $x \in \Omega$,
the variational principle,
$\delta \cF_\two/\delta x^i=0$, gives
\begin{equation}
\sigma\left( \sum_j \partial_i\frac{\partial_j \xi
\partial_j \xi }{|\nabla \xi| }
- \sum_j\partial_j\frac{\partial_j \xi \partial_i \xi }{|\nabla \xi| }
\right)
- (p_1 - p_2) \partial_i \xi=0,
\label{eq:Amini0}
\end{equation}
or
\begin{equation}
\partial_j \tau_{ij}(x) - (p_1 - p_2) \partial_i \xi(x)=0,
\label{eq:Amini}
\end{equation}
where
$$
\tau(x) := \sigma\left(I - \frac{\nabla \xi}{|\nabla \xi|}
\otimes
\frac{\nabla \xi}{|\nabla \xi|}\right)
|\nabla \xi|(x).
$$
\end{proposition}
\begin{proof}
We are now concerned with the variation
$x \to x + \delta x$ for every point $x \in \Omega$.
We apply Proposition \ref{prop:A-2} to this case, {\it{i.e.}},
\begin{equation*}
\begin{split}
\frac{\delta \cF_\two}{\delta x^i}
&= -\sigma \left[\partial_i |\nabla \xi|- \partial_j\left(
\partial_i \xi(x) \cdot\frac{\delta}{\delta\partial_j \xi(x)}
|\nabla \xi| \right) \right](x)
+ (p_1 - p_2) \partial_i \xi(x),
\end{split}
\end{equation*}
by using (\ref{eq:ELs}) as its Euler-Lagrange equation
(\ref{eq:AppeA1}). Further for $x\not\in (\supp(|\nabla \xi|))^\circ$,
its Euler-Lagrange equation (\ref{eq:AppeA1}) gives a trivial
relation, {\it{i.e.}}, {\lq\lq}$0=0${\rq\rq}.
Then we have (\ref{eq:Amini}).
\qed
\end{proof}
\begin{myremark}\label{rmk:4-0}
{\rm{
It is worth noting that (\ref{eq:Amini0}) and (\ref{eq:Amini})
are defined over $\Omega$
rather than $(\supp(|\nabla \xi|))^\circ$: due to the relation,
$$
| \partial_i \xi | \le |\nabla \xi|,
$$
even at points where the denominators in the first term of (\ref{eq:Amini0})
vanish,
the first term is well-defined and vanishes.
Hence (\ref{eq:Amini0}) and (\ref{eq:Amini})
can be regarded as extending the domain of definition
of (\ref{eq:ELs}) to $\Omega$, and thus
(\ref{eq:Amini0}) and (\ref{eq:Amini})
have an advantage over (\ref{eq:ELs}).
This extension makes the handling of the surface tension much easier.
}}
\end{myremark}
\begin{myremark}\label{rmk:4-1}
{\rm{
In statics, there appears a force $\partial_i \tau_{ij}$,
which agrees with the one in
(33) and (34) of Reference \cite{LZZ}
and (2.11) of Reference \cite{Jac}.
We should note that Reference \cite{Jac}
also stated that
this term is derived from momentum conservation,
but did not give its derivation in detail.
The derivation of the above $\tau$ requires the
Euler-Lagrange equation (\ref{eq:AppeA1}), which corresponds to
the Laplace equation (\ref{eq:ELs}) in this case,
when we apply Proposition \ref{prop:A-2} to
this system, though these objects did not appear in Reference \cite{Jac}.
}}
\end{myremark}
\begin{myremark}\label{rmk:4-2}
{\rm{
In this remark, we comment on the identity between
(\ref{eq:ELs}) and (\ref{eq:Amini}).
Comparing these, we have the identity,
$$
\partial_i \tau_{ij} = \sigma \partial_j \xi \cdot \kappa,
$$
which can, of course, also be obtained by direct computation.
It implies that (\ref{eq:Amini}) can be derived from the
Laplace equation (\ref{eq:ELs}) together with this relation.
However, it is worth noting that in this article both come from the
variational principle.
In fact, when we handle multiple junctions,
we need a generalization of the Laplace equation there,
like (\ref{eq:ELM3}), which is not easily obtained by the direct
approach.
Furthermore, the derivations from the variational principle
show their geometrical meaning in the sense of References \cite{Ar,AM,BGG}.
}}
\end{myremark}
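A simple numerical sanity check of the capillary stress $\tau$ is possible in the flat case: for a planar interface the curvature $\kappa$ vanishes, so the force $\partial_j \tau_{ij}$ must vanish identically. The following Python sketch verifies this; the grid parameters and the tanh profile for $\xi$ are illustrative assumptions.

```python
import numpy as np

# Flat-interface check: xi depends on x only, so kappa = 0 and the
# capillary force f_i = d_j tau_ij must vanish.
n, eps, sigma = 128, 0.1, 1.0
h = 4.0 / n
x = np.linspace(-2.0, 2.0, n, endpoint=False) + h / 2
X, Y = np.meshgrid(x, x, indexing="ij")
xi = 0.5 * (1.0 - np.tanh(X / eps))        # planar interface at x = 0

gx, gy = np.gradient(xi, h)
g = np.sqrt(gx**2 + gy**2) + 1e-12
# tau = sigma * (I - n (x) n) * |grad xi|, with n = grad xi / |grad xi|
txx = sigma * (1.0 - gx * gx / g**2) * g
txy = sigma * (-gx * gy / g**2) * g
tyy = sigma * (1.0 - gy * gy / g**2) * g
fx = np.gradient(txx, h, axis=0) + np.gradient(txy, h, axis=1)
fy = np.gradient(txy, h, axis=0) + np.gradient(tyy, h, axis=1)
```

For a curved interface the same arrays reproduce, up to discretization error, the identity $\partial_i \tau_{ij} = \sigma\, \partial_j \xi \cdot \kappa$ of the remark above.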
\bigskip
\subsection{Dynamics}
Now we investigate the dynamics of the two-phase field.
There are two different liquids which are expressed by
phase fields $\xi$ and $\xi^c$ respectively.
We assume that they obey the incompressible fluid
dynamics.
As in the previous section,
we consider the action of the volume-preserving diffeomorphism
group $\SDiff(\Omega)$
on the color functions $\xi$ and $\xi^c$.
We extend the domain of $\xi$ and $\xi^c$ to $\Omega \times T$
and they are smooth sections of $\Coor(\Omega\times T)$.
For a given $t$, we regard
$\xi$ and $\xi^c$ as functions of $\gamma^i$ as in the previous section,
{\it{i.e.},} $\xi=\xi(\gamma(x,t))$.
For example, the density of the fluid is expressed by the relation,
$$
\rho = \rho_1 \xi^c + \rho_2 \xi
$$
for constant proper densities $\rho_1$ and $\rho_2$ of the individual
liquids.
The density $\rho$, now, differs from a constant function
over $\Omega\times T$ in general.
We consider the action integral $\cS_{\two}$
including the surface energy,
\begin{equation}
\cS_{\two}[\gamma]
= \int_T d t\int_{\Omega}\left( \frac{1}{2}\rho |u|^2 -
\sigma|\nabla \xi| + (p_1\xi + p_2 \xi^c) \right) d^3 x.
\label{eq:AI2d}
\end{equation}
The ratio between $\rho$ and $\sigma$
determines the ratio between the contributions
of the kinematic part and the potential (or surface energy) part
in the dynamics of the fluid.
Since the integrand in
(\ref{eq:AI2d}) contains no $\partial \xi/\partial t$ term,
we obtain the same terms in the variational calculations
from the second and the third term in (\ref{eq:AI2d})
as those in (\ref{eq:ELs}) and (\ref{eq:Amini}) in
the static case even if we regard $n$ as $4$
and $x^4$ as $t$ in Section \ref{sec:two-two}.
By applying Proposition \ref{prop:A-1} to this system, we obtain
the following lemma as the Euler-Lagrange equation for $\xi$.
\begin{lemma}
The functional derivative of $\cS_{\two}$ with respect to $\xi$ gives
\begin{equation}
\frac{1}{2}(\rho_1 - \rho_2) |u(x,t)|^2
+ (p_1 - p_2) - \sigma \kappa (x,t)= 0,
\quad x\in
(\supp(|\nabla \xi|))^\circ,
\label{eq:EL2d}
\end{equation}
up to the volume preserving condition.
\end{lemma}
This could be interpreted as a generalization of the
Laplace equation (\ref{eq:ELs}) as in the following remark.
\begin{myremark}{\rm{
Here we give some comments on the
generalized Laplace equation (\ref{eq:EL2d})
up to the volume preserving condition.
This relation (\ref{eq:EL2d})
does not look invariant under
the Galilean transformation, $u \to u + u_0$, for a constant velocity
$u_0$.
However, for the simplest case of a Galilean boost,
{\it{i.e.}}, a static state in a frame moving with constant velocity $u_0$,
the equation (\ref{eq:EL2d}) gives
\begin{equation}
\frac{1}{2}(\rho_1 - \rho_2) |u_0|^2
+ (p_1 - p_2) - \sigma \kappa (x,t)= 0,
\quad x\in
(\supp(|\nabla \xi|))^\circ,
\end{equation}
which might differ from the Laplace equation (\ref{eq:ELs}).
However for the boost, we should transform $p_a$ into
\begin{equation}
\tilde p_a := p_a + \frac{1}{2} \rho_a |u_0|^2.
\label{eq:GalileoB}
\end{equation}
Then the above equation in terms of $\tilde p_a$ agrees with the static one
(\ref{eq:ELs}). In other words, (\ref{eq:GalileoB}) makes
our theory invariant under the Galilean transformation.
For a more general case, we should regard $p_a$ as a function over
$\Omega \times T$ rather than a constant, due to
the volume-preserving condition.
These values are contained in the pressure as mentioned in
(\ref{eq:p_PLTwo}).
The statement {\lq\lq}up to the volume preserving condition{\rq\rq}
is to be understood in this sense.
In fact, in numerical computations,
the individual pressures $p_a$ are not so important,
as we see in Remark \ref{rmk:4-11}.
Due to the incompressibility
(volume-preserving) constraint, the pressure $p$ is determined
as mentioned in Remark \ref{rmk:3-2}.
There is no contradiction with the
Galilean transformation and the $\SDiff(\Omega)$-action.
}}
\end{myremark}
We consider the infinitesimal action of
$\SDiff(\Omega)$ around its identity.
As we did in Section \ref{sec:three},
we apply the variational method to this
system in order to obtain the Euler equation with the surface tension.
\begin{proposition}\label{prop:4-6}
For every $(x,t) \in \Omega \times T$,
the variational principle,
$\delta \cS_\two/\delta \gamma^i(x,t) = 0$, gives the equation of motion,
or the Euler equation with the surface tension,
\begin{equation}
\begin{split}
\frac{D \rho u^i}{D t} +
\sigma \left( \sum_j \partial_i\frac{\partial_j \xi
\partial_j \xi }{|\nabla \xi| }
- \sum_j \partial_j\frac{\partial_j \xi \partial_i \xi }{|\nabla \xi| }
\right)
+ \partial_i p = 0.
\label{eq:Eulxi}
\end{split}
\end{equation}
Here $p$ is again the pressure coming from the
volume-preserving constraint.
\end{proposition}
\begin{proof}
The measure $d^3 x$ is regarded as
$\displaystyle{\frac{\partial \gamma}{\partial x} d^3 x}$ with
$\displaystyle{\frac{\partial \gamma}{\partial x} = 1}$.
Noting $\displaystyle{\frac{d}{dt}\frac{\partial \gamma}{\partial x} = 0}$,
the proof of Proposition \ref{prop:3-1} and Remark \ref{rmk:3-2}
provide the kinematic part with the pressure term, and
Proposition \ref{prop:4-5} gives the remainder.
In this proof, the total pressure $p$ is defined in Remark \ref{rmk:4-11}.
\qed
\end{proof}
\begin{myremark} \label{rmk:4-11}
{\rm{
More rigorously speaking, as we did in Remark \ref{rmk:3-2},
we also renormalize the pressure,
\begin{equation}
\begin{split}
p &= p_L + \frac{1}{2}\rho |u|^2 + p_1 \xi + p_2 \xi^c \\
&= p_L + \frac{1}{2}(\rho_1 - \rho_2) \xi |u|^2
+ (p_1 - p_2) \xi + \frac{1}{2}\rho_2 |u|^2 + p_2.
\end{split}
\label{eq:p_PLTwo}
\end{equation}
As in Section \ref{sec:two-two}, the third term in
(\ref{eq:Eulxi}) includes the effects from $p_a$'s via the
generalized Laplace equation (\ref{eq:EL2d}) as the Euler-Lagrange
equation (\ref{eq:AppeA1}).
}}
\end{myremark}
\begin{myremark} \label{rmk:7} {\rm{
\begin{enumerate}
\item
The equation of motion (\ref{eq:Eulxi}) is
essentially the same as (24) in Reference \cite{LZZ}.
We emphasize that it is
reproduced by the variational principle.
\item
As in Reference \cite{LZZ}, in our framework we can deal with the
topology
changes and the singularities, which are controlled by the parameter
$\epsX$. The above dynamics is well-defined as a field
equation provided that $\epsX$ is finite. If needed,
one can evaluate
its extrapolation as $\epsX$ vanishes.
\item
In general, $\epsX$ is not constant under the time
development; it changes due to the equation of motion.
At least in numerical computations, numerical
diffusion generally makes the intermediate region wider.
However, as long as it remains a small parameter
as time passes,
the approximation is justified.
\item
Since, by Remark \ref{rmk:4-0}, the surface tension is defined over
$\Omega$, the Euler equation is defined over $\Omega$ without any
additional assumptions.
\item
It should be noted that the surface force is
not difficult to compute, as in Reference \cite{LZZ}, but
so-called parasitic current problems sometimes appear in the
computations, although we do not touch on this problem
in this article.
\end{enumerate}
}}
\end{myremark}
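To make the scheme concrete, the following Python sketch performs a single explicit step of (\ref{eq:Eulxi}) for a circular bubble. All parameters are illustrative assumptions; advection is omitted and the flow starts from rest, so only the surface-tension force and the pressure projection of Remark \ref{rmk:3-2} appear.

```python
import numpy as np

# One explicit step of the Euler equation with surface tension for a
# fluid at rest: u <- u - (dt/rho) * div(tau), then project onto the
# divergence-free fields (the pressure step).
n, dt, rho, sigma, eps = 64, 0.01, 1.0, 0.07, 0.15
h = 4.0 / n
x = np.linspace(-2.0, 2.0, n, endpoint=False) + h / 2
X, Y = np.meshgrid(x, x, indexing="ij")
xi = 0.5 * (1.0 - np.tanh((np.sqrt(X**2 + Y**2) - 1.0) / eps))

# capillary stress tau = sigma (I - n (x) n) |grad xi|
gx, gy = np.gradient(xi, h)
g = np.sqrt(gx**2 + gy**2) + 1e-12
txx = sigma * (1.0 - gx * gx / g**2) * g
txy = sigma * (-gx * gy / g**2) * g
tyy = sigma * (1.0 - gy * gy / g**2) * g
fx = np.gradient(txx, h, axis=0) + np.gradient(txy, h, axis=1)
fy = np.gradient(txy, h, axis=0) + np.gradient(tyy, h, axis=1)
ux, uy = -dt / rho * fx, -dt / rho * fy    # explicit step from rest

# pressure projection (spectral, periodic boundary conditions)
k = 2 * np.pi * np.fft.fftfreq(n, d=h)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0
div_hat = 1j * kx * np.fft.fft2(ux) + 1j * ky * np.fft.fft2(uy)
p_hat = (rho / dt) * div_hat / k2
p_hat[0, 0] = 0.0
ux += dt / rho * np.real(np.fft.ifft2(1j * kx * p_hat))
uy += dt / rho * np.real(np.fft.ifft2(1j * ky * p_hat))
```

After the projection the velocity field is discretely divergence-free, which is the $\SDiff(\Omega)$ constraint of the text; a full scheme would additionally advect $\xi$ with this velocity.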
\section{Multi-phase flow with multiple junctions}
\label{sec:five}
In this section,
we extend the SURFER scheme \cite{LZZ}
of two-phase flow to multi-phase ($N$-phase, $N\ge2$) flow.
\subsection{Geometry of color functions}
In order to extend the geometry of the color functions in the previous
section, we introduce several geometrical tools.
First let us define a geometrical object similar to smooth $d$-manifold
with boundary.
Here we note that $d$-manifold means $d$-dimensional manifold,
and $d$-manifold with boundary means that
its interior is a $d$-manifold and its boundary is
a $(d-1)$-dimensional manifold. We distinguish
a smooth (differentiable) manifold from a topological manifold here.
When we consider multiple junctions in $\EE^3$,
we encounter
a geometrical object with smooth {\lq\lq}boundaries{\rq\rq} whose dimensions
are two, one and zero
even though it is regarded as a topological $3$-manifold with boundary.
\begin{definition} \label{def:5-1}
We say that a path-connected topological $d$-manifold with boundary $V$ is
{\it{a path-connected interior smooth $d$-manifold}}
if $V$ satisfies the following:
\begin{enumerate}
\item
The interior $V^\circ$ is a path-connected smooth $d$-manifold, and
\item
$V$ has finite path-connected subspaces
$V_\alpha$, $(\alpha = 1, \cdots, \ell)$
such that
\begin{enumerate}
\item $V\setminus V^\circ$ is decomposed by $V_\alpha$,
{\it{i.e.}},
$$
V\setminus V^\circ = \coprod_{\alpha = 1}^\ell V_\alpha,
$$
\item Each $V_\alpha$ is a path-connected
smooth $k$-manifold in $\Omega$ $(k<d)$.
\end{enumerate}
\end{enumerate}
We say that $V_\alpha$ is a {\it{singular-boundary}} of $V$, and
we denote their union by
$\partial_\sing V := V\setminus V^\circ$.
\end{definition}
Here the disjoint union is denoted by $\coprod$, {\it{i.e.}},
for subsets $A$ and $B$ of $\Omega$,
$A \coprod B := A \bigcup B$
if $A \bigcap B = \emptyset$.
By letting $V^{(n)}:=V$ and
$V^{[k]}:=\{V_\alpha \subset V\ | \ \dim V_\alpha \le k\}$,
and by picking up an appropriate path-connected part $V^{(k)}\subset V^{[k]}$
for each $k$,
we can find a natural stratified structure,
$$
V^{(n)} \supset V^{(n-1)} \supset \cdots \supset
V^{(2)} \supset V^{(1)} \supset V^{(0)},
$$
which is known as a stratified submanifold in the
singularity theory \cite{AGLV}.
In terms of
path-connected interior smooth $d$-manifolds, we
express subregions corresponding to materials in a region
$\Omega \subset \EE^3$ as extensions of
$B$ and $B^c$ in Section \ref{subsec:fourOne}.
\begin{definition} \label{def:5-2}
For a smooth domain $\Omega \subset \EE^3$,
we say that $N$ path-connected interior smooth $3$-manifolds
$\{ B_a\}_{a=0, \cdots, N-1}$
are a {\it{colored decomposition of $\Omega$}}
if $\{ B_a\}_{a=0, \cdots, N-1}$ satisfy the following:
\begin{enumerate}
\item every $B_a$ is a closed subset in $\Omega$,
\item $\Omega = \bigcup_{a=0, \cdots, N-1} B_a$, and
\item $\Omega \setminus (
\bigcup_{a <b } B_a \cap B_b) =
\coprod_{a=0, \cdots, N-1} B_a^\circ$.
\end{enumerate}
\end{definition}
Roughly speaking, each $B_a$ corresponds to a material in $\Omega$;
condition 1 of Definition \ref{def:5-2}
means that $B_a$ is surrounded by singular boundaries or the boundary
of $\Omega$, condition 2 implies that there is no {\lq\lq}vacuum{\rq\rq}
in $\Omega$, and condition 3 guarantees that the interiors of these materials
do not overlap.
\bigskip
In general,
for $ a \neq b$, $ B_a \cap B_b$ is a singular geometrical
object if it is not the empty set.
Singularities generally make the mathematical treatment difficult.
In order to avoid such difficulties, we introduce
color functions $\xi_a(x)$ $(a=0, 1, 2, \cdots, N-1)$
over the region $\Omega$, which are modeled on $\xi$ and $\xi^c$
of Section \ref{subsec:fourOne},
are controlled by
a small parameter $\epsX > 0$, and approximate
the characteristic functions of $B_a$.
\bigskip
To define color functions $\xi_a(x)$ $(a=0, 1, 2, \cdots, N-1)$,
we introduce another geometrical object,
{\it{$\epsilon$-tubular neighborhood}} in $\EE^3$:
\begin{definition} \label{def:5-3}
For a closed subspace $U \subset \Omega$ and a positive number $\epsilon$,
{\it{$\epsilon$-tubular neighborhood $T_{U, \epsilon}$ of $U$}} is
defined by
$$
T_{U, \epsilon} := \{ x \in \Omega \ |\ \dist(x, U) < \frac{\epsilon}{2}\},
$$
where $\dist(x, U)$ is the distance between $x$ and $U$ in $\EE^3$.
\end{definition}
We assume that each $T_{\partial_\sing B_a, \epsilon}$ has a
fiber structure over $\partial_\sing B_a$ as topological manifolds
as mentioned in Section \ref{sec:two-one}.
Using the $\epsilon$-tubular neighborhood, we define
$\epsX$-controlled color functions.
\begin{definition} \label{def:5-4}
We say that
$N$ smooth non-negative functions $\{ \xi_a\}_{a=0, \cdots, N-1}$
over $\Omega \subset \EE^3$ are
{\it{$\epsX$-controlled color functions associated with
a colored decomposition $\{ B_a\}_{a=0, \cdots, N-1}$ of $\Omega$}},
if they satisfy the following:
\begin{enumerate}
\item $\xi_a$ belongs to $\cC^\infty(\Omega)$ and for $x \in \Omega$,
$$
\sum_{a=0, 1 \cdots, N-1} \xi_a(x) \equiv 1.
$$
\item For every $ M_a := \supp(\xi_a)$ and
$L_a := \supp(1-\xi_a)$, $(a=0, 1, \cdots, N-1)$,
\begin{enumerate}
\item $B_a \varsubsetneqq M_a$,
\item $L_a^c \varsubsetneqq B_a$,
\item $(M_a \setminus L_a^c)^\circ \subset T_{\partial_\sing B_a, \epsX}$,
\item $(M_a \setminus L_a^c)^\circ \supset \partial_\sing B_a$.
\end{enumerate}
\item For $x \in (M_a \setminus L_a^c)$,
we define the smooth function $q_a$ by
$$
q_a(x) = \left\{ \begin{matrix}
\dist(x, \partial_\sing B_a), & x \in (M_a \setminus B_a), \\
-\dist(x, \partial_\sing B_a), &
\mbox{ otherwise. }
\end{matrix} \right.
$$
Then for the flow $\exp( - t \frac{\partial}{\partial q_a})$
on $\cC^\infty(\Omega) |_{ (M_a \setminus L_a^c)}$,
$\xi_a$ monotonically increases along $t \in U \subset \RR$ at
$x \in (M_a \setminus L_a^c)$.
\end{enumerate}
When $(M_a \setminus L_a^c)^\circ = T_{ \partial_\sing B_a, \epsX}$
for every $a$,
$\{ \xi_a\}_{a=0, \cdots, N-1}$ are called
{\it{proper $\epsX$-controlled color functions associated with
the colored decomposition of $\Omega \subset \EE^3$,
$\{ B_a\}_{a=0, \cdots, N-1}$}} or merely {\it{proper}}.
\end{definition}
\bigskip
The functions $\xi_a$ form, geometrically, a partition of unity
\cite[I p.272]{KN},
and each is, roughly speaking, a quasi-characteristic function which
equals $1$ deep inside $B_a$, vanishes far outside
$B_a$, and behaves monotonically in the artificial intermediate region.
Noting that the flow $\exp( - t \frac{\partial}{\partial q})$ corresponds
to the flow from the outer side to the inner side,
$\xi_a$ decreases from the inner side to the outer side.
\bigskip
From here, let us go on to use the notations $B_a$, $M_a$, $L_a$, and
$\xi_a$ in Definition \ref{def:5-4}.
Further for later convenience, we employ the following assumptions
which are not essential in our theory but make the arguments simpler.
\begin{assumption}\label{assump:one} {\rm{
We assume the following:
\begin{enumerate}
\item {\it{The colored decomposition $\{ B_a\}_{a=0, \cdots, N-1}$ of $\Omega$
and $\epsX$ satisfy the condition
that every $L_a^c$ is not the empty set.}}
This assumption means that the singularities that we consider can be resolved
by the above procedure. Since $\epsX$ can be taken small enough,
this assumption does not have a crucial effect on our theory.
\item {\it{The colored decomposition $\{ B_a\}_{a=0, \cdots, N-1}$ of $\Omega$
and $\epsX$ satisfy the relation,
$$
\partial \Omega \bigcap
\left(\bigcup_{a \neq b; a, b \neq 0} M_a\bigcap M_b\right)
= \emptyset,
$$
and every intersection $B_a \bigcap B_0$ perpendicularly intersects
with $\partial \Omega$.}}
This describes the asymptotic behavior of the materials.
For example, $M_0$ will be assigned to a wall in Section \ref{sec:VOF}.
This assumption is not essential in this
model either, but it simplifies the treatment of the boundary effect.
As mentioned in Section \ref{sec:three},
we neglect the boundary effect because we are concerned only with
a local theory, or differential equations.
If one wishes to remove this assumption, one can consider a smaller
region $\Omega'\subset \Omega$ after formulating the problems in
$\Omega$.
\item
{\it{ The volume of every $B_a$, the
area of every $\partial_\sing B_a$,
and the length defined over every one-dimensional object
in $\partial_\sing B_a$
are finite.}}
As our theory is basically local, this assumption is not essential, either.
\end{enumerate}
}}
\end{assumption}
Under the assumptions,
we fix colored decomposition $\{ B_a\}_{a=0, \cdots, N-1}$ and
$\epsX$-controlled color functions $\{ \xi_a\}_{a=0, \cdots, N-1}$.
As mentioned in the previous section, we have an approximate
description of the area of $\partial_\sing B_a$.
\begin{proposition} \label{prop:5-1}
By letting
the area of $\partial_\sing B_a$ be $\cA_a$, the integral
$$
\cA_{\xi_a}:=\int_{\Omega} |\nabla \xi_a| d^3 x,
$$
approximates $\cA_a$ by
$$
|\cA_{\xi_a} - \cA_a | < \epsX \cA_a.
$$
\end{proposition}
Here we notice that
$M_{ab}:=M_a \bigcap M_b$ $(a, b = 0, 1, 2, \cdots, N-1, a\neq b)$ denotes
the intermediate region, whose interior
is a $3$-manifold.
Similarly we define $M_{abc}:=M_a \bigcap M_b \bigcap M_c$
$(a, b, c = 0, 1, 2, \cdots, N-1; a\neq b, c; b\neq c)$ and so on.
Since the relation, $\bigcup M_a = \Omega$, holds,
we regard the intersections of the $M_a$ as approximations of
the intersections of the $B_a$, parameterized by $\epsX$.
Even though there exist some singular geometrical
objects in $\{B_a\}$ \cite{AGLV},
we can avoid the associated difficulties due to the finiteness of $\epsX$.
(We expect that the
computational result of a physical process
depends only weakly on $\epsX$
when it is small enough.
More precisely, the actual value is obtained by extrapolating
to $\epsX = 0$ a series of results for different values of $\epsX$
approaching $\epsX = 0$.)
\subsection{Surface energy}
Let us define the surface energy $\cE_\exact^{(N)}$
by
$$
\cE_\exact^{(N)} = \sum_{a > b} \sigma_{ab} \Area(B_a \bigcap B_b),
$$
where $\sigma_{ab}$ is the surface tension coefficient
$(\sigma_{ab}>0$, $\sigma_{ab} = \sigma_{b a})$ between
the materials corresponding to $B_a$ and $B_b$,
and $\Area(U)$ is the area of an interior smooth $2$-manifold $U$.
We have an approximation of the surface energy $\cE_\exact^{(N)}$
by the following proposition.
\begin{proposition} \label{prop:5-2}
The free energy,
\begin{equation}
\cE^{(N)} =
\sum_{a > b}
\sigma_{ab}\int_\Omega d^3x\
\sqrt{|\nabla \xi_a(x)| |\nabla \xi_b(x)|}
(\xi_a(x) + \xi_b(x)),
\label{eq:cEN}
\end{equation}
satisfies, for some positive number $M$,
$$
|\cE^{(N)} - \cE_\exact^{(N)}| < \epsX M.
$$
\end{proposition}
\begin{proof}
For $a\neq b$,
$B_a \bigcap B_b$ consists of a union of interior smooth
$2$-manifolds. Their singular-boundary parts
$\partial_\sing (B_a \bigcap B_b)
\equiv \{V_\alpha\}_{\alpha \in I_{ab}}$ are
unions of smooth $1$-manifolds and
smooth $0$-manifolds.
Thus $\{V_\alpha\}_{\alpha \in I_{ab}}$ has no
effect on the surface energy $\cE_\exact^{(N)}$
because it has measure zero.
Over the subspace,
\begin{equation}
M_{ab}^\prop := \{ x \in M_{ab} \ | \
\xi_a(x) + \xi_b(x) = 1 \}^\circ,
\label{eq:Mabprop}
\end{equation}
and for a positive number $\ell$, we have identities,
\begin{equation}
\begin{split}
|\nabla \xi_a(x)| (\xi_a(x) + \xi_b(x))^{\ell}
& = |\nabla \xi_b(x)| (\xi_a(x) + \xi_b(x))^{\ell}\\
& = \sqrt{|\nabla \xi_a(x)| |\nabla \xi_b(x)|}
(\xi_a(x) + \xi_b(x))^{\ell}.\\
\end{split}
\label{eq:sym}
\end{equation}
The sum of the integrals over $M_{ab}^\prop$ dominates $\cE^{(N)}$
if $\epsX$ is sufficiently small.
We evaluate the remainder.
For example, for different $a$, $b$ and $c$,
the part in $\cE^{(N)}$ coming from
\begin{equation}
M_{abc}^\prop := \{ x \in M_{abc} \ | \
\xi_a(x) + \xi_b(x) + \xi_c(x) = 1 \}^\circ
\label{eq:Mabcprop}
\end{equation}
is of order $\epsX^2$. Namely, we have
\begin{equation*}
\begin{split}
&\left|\int_{M_{abc}} d^3x\
\sqrt{|\nabla \xi_a(x)| |\nabla \xi_b(x)|}
(\xi_a(x) + \xi_b(x))
-\Length(B_a\cap B_b \cap B_c) \right|\\
&\qquad\qquad\qquad < \epsX^2 \Length(B_a\cap B_b \cap B_c),\\
\end{split}
\end{equation*}
where $\Length(C)$ is the length of a curve $C$.
Thus we find a number $M$ satisfying the inequality.
\qed
\end{proof}
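As an illustration of Proposition \ref{prop:5-2}, the following Python sketch is our own toy example; the one-dimensional setting, the tanh interface profiles, and the coefficients $\sigma_{ab}$ are assumptions, not part of the proposition. It evaluates the integrand of (\ref{eq:cEN}) for three phases on an interval, where each interface has unit {\lq\lq}area{\rq\rq}, and recovers $\sigma_{01}+\sigma_{12}$; the pair $(0,2)$ has no common interface and contributes only an exponentially small cross term.

```python
import numpy as np

# 1D sketch (assumed setup, not the paper's code) of formula (eq:cEN):
# three phases on [0, 3] with smeared interfaces near x = 1 and x = 2.
# In 1D each interface has unit "area", so the exact surface energy is
# sigma_01 + sigma_12.
n, eps = 1200, 0.04
x = np.linspace(0.0, 3.0, n)
h = x[1] - x[0]

step = lambda s: 0.5 * (1.0 + np.tanh(s / eps))   # smooth 0 -> 1 step
xi0 = 1.0 - step(x - 1.0)                          # left phase
xi2 = step(x - 2.0)                                # right phase
xi1 = 1.0 - xi0 - xi2                              # middle phase

sigma = {(0, 1): 2.0, (1, 2): 3.0, (0, 2): 5.0}    # assumed coefficients
xis = (xi0, xi1, xi2)
grads = {a: np.abs(np.gradient(xi, h)) for a, xi in enumerate(xis)}

E = 0.0
for (a, b), s_ab in sigma.items():
    E += s_ab * np.sum(np.sqrt(grads[a] * grads[b]) * (xis[a] + xis[b])) * h

E_exact = sigma[(0, 1)] + sigma[(1, 2)]            # = 5.0
print(E, E_exact)
```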
\begin{myremark}\label{rmk:5-1} {\rm{
\begin{enumerate}
\item
$M$ is bounded by
$$
M \le \max(\sigma_{ab})\left(
\sum_{a<b}\left(\Area(B_a\cap B_b)
+ \epsX
\Length\left(\partial_\sing(B_a\cap B_b)\right)\right) + K \epsX^2\right),
$$
where $K$ is the number of isolated points in all
of singular-boundary parts of $\{B_a\}$.
\item
It should be noted that $\cE^{(N)}$ becomes the surface
energy of the system exactly when $\epsX$ vanishes.
\item
Using the identities (\ref{eq:sym}), we can also approximate
$\cE^{(N)}$ by
\begin{equation*}
\sum_{a> b}
\sigma_{ab}\int_\Omega d^3x\
|\nabla \xi_a(x)| (\xi_a(x) + \xi_b(x))^\ell,
\end{equation*}
for a positive number $\ell$. In this way, there are many
variants that approximately represent the surface energy
in terms of the $\xi_a$'s.
\end{enumerate}
}}
\end{myremark}
\subsection{Statics}
Let us consider the statics of the multi-phase fields.
In the arguments above, we first gave the geometrical objects and then
defined the functions $\xi_a$, the energy functional
$\cE^{(N)}$, and so on.
In this subsection, on
the static mechanics of the multi-phase fields,
we consider the deformation of these geometrical objects
and determine a configuration that corresponds to an extremal point
of the functional, {\it{i.e.}}, $\cF_\mul$ in the following
proposition. In other words, we derive the Euler-Lagrange equation
which governs the extremal point of the functional
and characterizes the configuration of $M_a$, $L_a$ and,
approximately, $B_a$ for every $a = 0, \cdots, N-1$.
Let us introduce the proper pressure
\begin{equation}
p_P(x) := \sum p_a \xi_a(x),
\label{eq:proppress}
\end{equation}
where $p_a$ is a certain pressure of each material.
\begin{proposition}
\label{prop:5-8}
The Euler-Lagrange equation
of the static free energy integral,
\begin{equation*}
\cF_\mul= \int_{\Omega}\left(
\sum_{a> b}
\sigma_{ab}
\sqrt{|\nabla \xi_a(x)| |\nabla \xi_b(x)| }
(\xi_a(x) + \xi_b(x)) - p_P
\right) d^3 x,
\end{equation*}
with respect to $\xi_a$, {\it{i.e.}},
$\delta \cF_\mul/\delta \xi_a = 0$, is given as follows:
\begin{enumerate}
\item
For a point $ x \in {M_{ab}}^\prop$ of (\ref{eq:Mabprop}),
\begin{equation}
(p_a - p_b) - \sigma_{ab}\kappa_a (x)= 0,
\label{eq:ELM2}
\end{equation}
where
$$
\kappa_a := - \partial_i \frac{\partial_i \xi_a}{|\nabla\xi_a|}.
$$
\item
For a point $x \in M_{abc}^\prop$ of (\ref{eq:Mabcprop}),
\begin{equation}
(p_a - p_b - p_c) - \tilde\kappa_{abc} (x)= 0,
\label{eq:ELM3}
\end{equation}
where
\begin{equation}
\begin{split}
\tilde\kappa_{abc} &:=
\sigma_{bc}\sqrt{|\nabla \xi_b(x)| |\nabla \xi_c(x)| }
-\sigma_{ab}\sqrt{|\nabla \xi_a(x)| |\nabla \xi_b(x)| }
-\sigma_{ac}\sqrt{|\nabla \xi_a(x)| |\nabla \xi_c(x)| }\\
& + \partial_i
\left[\frac{\partial_i \xi_a}{\sqrt{|\nabla\xi_a|}^3}.
\left(\sigma_{ab} \sqrt{|\nabla\xi_b|}( \xi_a + \xi_b)+
\sigma_{ac} \sqrt{|\nabla\xi_c|}( \xi_a + \xi_c) \right) \right].
\end{split}
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
For a point $ x \in {M_{ab}}^\prop$ of (\ref{eq:Mabprop}),
we have $ \xi_a(x) + \xi_b(x) = 1 $, and thus the
Euler-Lagrange equation
(\ref{eq:AppeA1}) leads to (\ref{eq:ELM2}).
Similarly for a point $ x \in {M_{abc}}^\prop$ of (\ref{eq:Mabcprop}),
we have $ \xi_a(x) + \xi_b(x) + \xi_c(x)= 1 $, and thus the
concerned terms of the integrand in the energy functional are given by
\begin{equation}
\begin{split}
\cdots &
+\sqrt{|\nabla \xi_a(x)| |\nabla \xi_b(x)| }(\xi_a + \xi_b)
+\sqrt{|\nabla \xi_a(x)| |\nabla \xi_c(x)| }(\xi_a + \xi_c) \\
&+\sqrt{|\nabla \xi_b(x)| |\nabla \xi_c(x)| }(1 - \xi_a) +\cdots. \\
\end{split}
\end{equation}
The Euler-Lagrange equation
(\ref{eq:AppeA1})
gives (\ref{eq:ELM3}).
\qed
\end{proof}
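The curvature term $\kappa_a$ in (\ref{eq:ELM2}) can be evaluated by finite differences. The Python sketch below is our own two-dimensional illustration, with assumed radius and smearing width; it computes $\kappa = -\partial_i(\partial_i\xi/|\nabla\xi|)$ for a smeared disk and checks that it approximates $1/r$ on the interface.

```python
import numpy as np

# 2D illustration (assumed parameters) of kappa = -div(grad xi/|grad xi|):
# for a smeared disk of radius r, kappa on the interface is ~ 1/r.
n, r, eps = 240, 0.5, 0.08
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.sqrt(X**2 + Y**2)
xi = 0.5 * (1.0 - np.tanh((rho - r) / eps))       # ~1 inside the disk

gx, gy = np.gradient(xi, h)
norm = np.sqrt(gx**2 + gy**2) + 1e-12             # avoid division by zero
nx, ny = gx / norm, gy / norm                     # unit normal field
kappa = -(np.gradient(nx, h, axis=0) + np.gradient(ny, h, axis=1))

# sample kappa where the interface crosses the positive x-axis
j = np.argmin(np.abs(x))                          # column with y ~ 0
half = n // 2
i = half + np.argmin(np.abs(xi[half:, j] - 0.5))  # xi ~ 1/2, x > 0
print(kappa[i, j], 1.0 / r)
```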
\begin{myremark}
{\rm{
\begin{enumerate}
\item It is noticed that
(\ref{eq:ELM2}) agrees with the Laplace equation (\ref{eq:ELs}) and thus
we also reproduce the Laplace equation locally.
\item (\ref{eq:ELM3})
could be regarded as another generalization of the Laplace equation
though $M_{abc}^\prop$ does not contribute to the surface energy
when $\epsX$ vanishes and has a negligible effect even for
a finite $\epsX$ if $\epsX$ is sufficiently small.
Indeed, (\ref{eq:ELM3}) does not appear in the theory of surface tension
\cite{LL}. However,
(\ref{eq:ELM3}) is necessary: it plays a role
in guaranteeing the stability
of the numerical computations
and in preserving the consistency of the numerical approach with
finite intermediate regions for $\epsX \neq 0$.
\item Analogous equations hold
for higher intersection regions.
\end{enumerate}
}}
\end{myremark}
As a generalization of (\ref{eq:Amini0})
we immediately have the following.
\begin{proposition} \label{prop:5-9}
For every point $x \in \Omega$,
the variational principle, $\delta \cF_\mul/ \delta x^i = 0$,
gives
\begin{equation}
\begin{split}
& \partial_i p_P
-\sum_{a > b} \sigma_{ab}\Bigr[
\partial_i \left(
\sqrt{|\nabla \xi_a| |\nabla \xi_b|}
(\xi_a+\xi_b) \right) \\
&
- \partial_j\left(
\frac{\partial_i \xi_a \partial_j \xi_a }
{\sqrt{|\nabla \xi_a|}^{3}}
\sqrt{|\nabla \xi_b|}
(\xi_a+\xi_b) \right) \Bigr] = 0.
\end{split}
\label{eq:MSFe}
\end{equation}
\end{proposition}
\begin{proof}
It is the same as Proposition \ref{prop:4-5},
which essentially comes from
Proposition \ref{prop:A-2}.
\qed
\end{proof}
\begin{myremark}
{\rm{
In Proposition \ref{prop:5-9}, the equation can be applied without
any classification of the geometry such as (\ref{eq:Mabprop}) and (\ref{eq:Mabcprop}).
It is also noted that (\ref{eq:MSFe}) is globally defined over
$\Omega$ as mentioned in Remark \ref{rmk:4-0}.
}}
\end{myremark}
\subsection{Dynamics}
Using these equations, let us consider
the dynamics of the multi-phase flow.
We extend the colored-decomposition of $\Omega$ and
the $\epsX$-controlled color functions of $\{\xi_a\}_{a=0,\cdots,N-1}$
to those of $\Omega \times T$ and $\cC^\infty(\Omega \times T)$
using another fiber structure of $\Coor(\Omega\times T)$.
Mathematically speaking, since our space-time is a
trivial bundle $\Omega\times T$ and has the fiber structure
$\Omega\times (t_a, t_b) \to \Omega$ for a small interval $(t_a, t_b)$
due to the integrability,
we can consider the pull-back of the map $\xi_a : \Omega \to \RR$.
If we considered the global behavior of $\xi_a$ with respect to time $t$,
we would have to pay more attention to the Lagrange picture $\gamma(x,t)$
and the integrability.
However as our theory is local, we can regard
$(t_a, t_b)$ as $T$ with an infinitesimal interval.
Thus $\xi_a$ is redefined as $\xi_a:=\xi_a(\gamma(x,t))$
for $(x,t) \in \Omega \times T$ and it is denoted by
$\xi_a(x,t)$.
In the time development of $\xi_a$, the control parameter $\epsX$
need not be constant. However, in this article
we assume that $\epsX$ is sufficiently small for every $t \in T$.
Let the density of each $\xi_a$ be denoted by $\rho_a$.
We have the global density function $\rho(x,t)$ and pressure
$p_P(x,t)$ given by
$$
\rho(x,t) = \sum \rho_a \xi_a(x,t), \quad
p_P(x,t) = \sum p_a \xi_a(x,t).
$$
In contrast to the previous subsection, here
we investigate an initial-value problem. In other words,
every configuration of the geometrical objects,
$M_a$, $L_a$ and approximately $B_a$ ($a = 0, \cdots, N-1$),
with divergence free velocity $u$, ($\mathrm{div}(u)=0$)
can be an initial condition to the dynamics of the multi-phase fields.
The following equations, which we derive in this subsection,
govern the deformation of these
geometrical objects in their time development.
Further, it is noticed that
in this subsection
the proper pressure $p_P(x,t)$ has neither mathematical nor
physical meaning, because it becomes a part of the total pressure $p$,
which is determined by the divergence-free condition $\mathrm{div}(u) =0$
as mentioned in Remark \ref{rmk:3-2}.
\bigskip
We have the first theorem:
\begin{theorem} \label{th:5-1}
The action integral of the multi-phase fields, or
the $\epsX$-controlled color functions $\xi_a$
with physical parameters $\rho_a$, $\sigma_{ab}$, $p_a$
$(a, b = 0, 1, \cdots, N-1)$ defined above, is given by
\begin{equation}
\cS_\mul= \int_T d t\int_{\Omega}\left( \frac{1}{2}\rho |u|^2 -
\sum_{a> b}
\sigma_{ab}
\sqrt{|\nabla \xi_a| |\nabla \xi_b| }
(\xi_a + \xi_b)
+ p_P \right) d^3 x,
\label{eq:actionM}
\end{equation}
under the volume-preserving deformation.
\end{theorem}
\begin{proof}
The action integral is additive.
The first term exhibits the kinematic energy of the fluids.
The second term represents the surface energy up to $\epsX$ as in
Proposition \ref{prop:5-2}.
The proper pressure $p_P$ in (\ref{eq:proppress}) leads to the Laplace equations.
We can regard it as the action integral of
the multi-phase fields with these parameters.
\qed
\end{proof}
Then we have a further generalization of (\ref{eq:EL2d}) as follows:
\begin{lemma} \label{lemma:5-11}
Assume that every $M_a(t)$, $M_{ab}^\prop(t)$ and $M_{abc}^\prop(t)$ deforms
in its time development following a certain equation.
The Euler-Lagrange equation of the action integral
with respect to $\xi_a$, $\delta \cS_\mul/ \delta \xi_a =0$,
is given, up to the volume preserving condition, as follows:
\begin{enumerate}
\item
For a point $x \in M_{ab}^\prop$, we have
\begin{equation}
\frac{1}{2}(\rho_a - \rho_b) |u(x,t)|^2
+ (p_a - p_b) - \sigma_{ab}\kappa_a (x,t)= 0.
\label{eq:ELM2d}
\end{equation}
\item
For a point $x \in M_{abc}^\prop$, we have
\begin{equation}
\frac{1}{2}(\rho_a - \rho_b - \rho_c) |u(x,t)|^2
+ (p_a - p_b - p_c) - \tilde\kappa_{abc} (x,t)= 0.
\label{eq:ELM3d}
\end{equation}
\end{enumerate}
\end{lemma}
Analogous equations hold for higher intersection regions.
\begin{proof}
It is the same as the proof of Proposition
\ref{prop:5-8}.
\qed
\end{proof}
Using these equations, we have the second theorem,
which is our main theorem:
\begin{theorem} \label{th:5-2}
For every $(x, t) \in \Omega \times T$,
the variational principle, $\delta \cS_\mul/ \delta \gamma(x,t) = 0$,
provides the equation of motion,
\begin{equation}
\begin{split}
\frac{D \rho u^i}{D t} +
& \partial_i p
+\sum_{a> b} \sigma_{ab}\Bigr[
\partial_i \left(
\sqrt{|\nabla \xi_a| |\nabla \xi_b|}
(\xi_a+\xi_b) \right) \\
&
- \partial_j\left(
\frac{\partial_i \xi_a(x) \partial_j \xi_a(x) }
{\sqrt{|\nabla \xi_a|}^{3}}
\sqrt{|\nabla \xi_b|}
(\xi_a+\xi_b) \right) \Bigr] = 0.
\end{split}
\label{eq:EeqM}
\end{equation}
Here $p$ is the pressure coming from the effect of the
volume-preserving or incompressible condition, which
includes the proper pressure $p_P$ (\ref{eq:proppress}).
\end{theorem}
\begin{proof}
We naturally obtain it by using 1)
Proposition \ref{prop:3-1} and its proof,
2) Remark \ref{rmk:3-2},
3) Lemma \ref{lemma:5-11}
and 4) Proposition \ref{prop:A-2}.
\qed
\end{proof}
Here we note that
by expressing the low-dimensional geometry in terms
of the global smooth functions $\xi$'s with finite $\epsX$,
we have unified the infinite dimensional geometry
or the incompressible fluid dynamics
governed by $\IFluid(\Omega \times T)$, and the $\epsX$-parameterized
low dimensional geometry with singularities to obtain
the extended Euler equation (\ref{eq:EeqM}).
When $\epsX$ approaches zero, we must consider
hyperfunctions \cite{KKK,II} instead of $\cC^\infty(\Omega \times T)$,
but we conjecture that our results would be justified even in this limit;
the unification would then have a more rigorous meaning.
It should be noted that,
for this unification,
it is crucial that we express the low-dimensional geometry in terms
of the global smooth functions $\xi$'s as elements of
infinite-dimensional vector spaces.
The group $\SDiff(\Omega)$ naturally acts on the $\xi$'s, and thus
we can treat the low-dimensional geometry and the incompressible
fluid dynamics in the framework of infinite-dimensional
Lie groups \cite{AK,EM,O}.
This is in contrast to the level-set method.
As mentioned in Section \ref{sec:two-one},
the level-set function does not belong to $\cC^\infty(\Omega)$,
and thus we can neither consider the $\SDiff(\Omega)$ action nor
treat it in this framework.
\begin{myremark}\label{rmk:11}{\rm{
\begin{enumerate}
\item
(\ref{eq:EeqM}) is the Euler equation with surface tension
for multi-phase fields,
which gives the equation of motion of the multi-phase flow even with
multiple junctions.
As we will illustrate with examples in Section \ref{sec:VOF},
the dynamics with a triple junction
can be solved without any geometrical constraints.
It should also be noted that for a point in $M_{ab}^\prop$,
(\ref{eq:EeqM}) is reduced to the original Euler equation in
Reference \cite{LZZ}
or (\ref{eq:Eulxi}).
\item
The Euler equation (\ref{eq:EeqM}) appears as the momentum conservation in
the sense of Noether's theorem (Section \ref{sec:two-two}).
It implies that (\ref{eq:EeqM}) is natural from the geometrical viewpoint
\cite{Ar,AK,EM,K,Ko,MW,NHK}.
\item
Further, even though we set $\{\xi_a(\cdot, t)\}$ to be
proper $\epsX$-controlled
color functions as an initial state,
it is not guaranteed
that $\{\xi_a(\cdot, t)\}$, $(t>0)$,
remains proper $\epsX$-controlled under the time development.
In general $\epsX$ may grow during the time development,
at least numerically, due to numerical diffusion (see the examples in
Section \ref{sec:VOF}).
However, even for $t>0$,
we can find $\epsX(t)$ such that
$\{\xi_a(\cdot, t)\}$ are $\epsX(t)$-controlled color functions,
and if $\epsX(t)$ is sufficiently small, our approximation is
guaranteed by $\epsX(t)$.
\item
The surface tension is also defined over $\Omega \times T$ and thus
the Euler equation is defined over $\Omega \times T$ without any
assumptions due to Remark \ref{rmk:4-0}.
\item We may let $\epsX$ depend upon the individual
intermediate region between the fields,
letting $\epsilon_{ab}$ denote the parameter for
the pair $\xi_a$ and $\xi_b$, $a\neq b$. Then, if we take $\epsX$ to be
$\displaystyle{\max_{a, b =0}^{N-1}\epsilon_{ab}}$,
the above arguments remain applicable.
\item We defined the $\epsX$-controlled colored functions
using the $\epsT$-tubular neighborhood $T_{U, \epsT}$ and
the colored decomposition of $\Omega$
in Definition \ref{def:5-4} by letting $\epsT = \epsX$.
On the other hand,
as in Reference \cite{LZZ},
our formulation can describe a topology change following
the Euler equation (\ref{eq:EeqM}), such as the split of
one bubble into two bubbles in a liquid.
The $\epsX$-controlled color functions
can represent the geometry of such a topology change without any
difficulty. However, at the topology change,
the path-connected region and
the $\epsX$-tubular neighborhood
lose their mathematical meaning, and thus, more rigorously,
we should redefine the $\epsX$-controlled color functions.
Since
the $\epsX$-controlled color functions represent the geometry
analytically, it is not difficult to modify the definitions,
though the result is rather abstract.
In other words, we should first define the $\epsX$-controlled color
functions $\xi$'s without the base geometry, and then
characterize geometrical objects using
the functions $\xi$'s.
However, since such an approach is too abstract to reveal the
geometrical meaning, we avoided needless confusion in these
definitions and employed Definition \ref{def:5-4}.
\end{enumerate}
}}
\end{myremark}
\subsection{Equation of motion of triple-phase flow}
Let us concentrate on a triple-phase flow problem,
noting (\ref{eq:sym}).
From the symmetry of the triple phase,
we introduce {\lq\lq}proper{\rq\rq} surface tension coefficients,
$$
\sigma_0 = \frac{\sigma_{01} + \sigma_{02} - \sigma_{12}}{2}, \quad
\sigma_1 = \frac{\sigma_{01} + \sigma_{12} - \sigma_{02}}{2}, \quad
\sigma_2 = \frac{\sigma_{02} + \sigma_{12} - \sigma_{01}}{2},
$$
or $\sigma_{ab} = \sigma_a + \sigma_b$.
Here it should be noted that
the {\lq\lq}proper{\rq\rq} surface tension coefficients
are based upon the special structure of the triple phase and
have no physical meaning beyond the above definition.
\begin{lemma} \label{lemma:5-2}
For different $a, b,$ and $c$, we have the following
approximation,
\begin{equation}
\left|\int_\Omega\left( \sqrt{|\nabla \xi_a| |\nabla \xi_b|}
(\xi_a + \xi_b)
+ \sqrt{|\nabla \xi_a| |\nabla \xi_c|}
(\xi_a + \xi_c)
- |\nabla \xi_a|\right)d^3 x\right| < \epsX \cA_a.
\label{eq:Appr3}
\end{equation}
\end{lemma}
Using this relation,
the free energy (\ref{eq:cEN}) has a simpler expression up to
$\epsX$.
\begin{proposition} \label{prop:5-3}
By letting
\begin{equation*}
\cE^{(3)}_{\mathrm{sym}} := \sigma_{0}\int_\Omega d^3x\ |\nabla \xi_0(x)|
+ \sigma_{1}\int_\Omega d^3x\ |\nabla \xi_1(x)|
+ \sigma_{2}\int_\Omega d^3x\ |\nabla \xi_2(x)| ,
\end{equation*}
we have a certain number $M$ related to the areas of the surfaces $\{B_a\}$
such that
\begin{equation*}
|\cE^{(3)} - \cE^{(3)}_{\mathrm{sym}}| < \epsX M.
\end{equation*}
\end{proposition}
\begin{proof}
Due to Lemma \ref{lemma:5-2}, it is obvious.
\qed
\end{proof}
The action integral (\ref{eq:actionM}) also becomes
\begin{equation*}
\cS_\tri= \int_T d t\int_{\Omega}\left( \frac{1}{2}\rho |u|^2 -
\sum_a( \sigma_a |\nabla\xi_a| - p_a \xi_a )\right) d^3 x.
\end{equation*}
For a practical reason, we consider a simpler expression by
specifying the problem.
\subsection{Two-phase flow and wall with triple-junction}
\label{subsec:5-6}
More specifically, we consider the case
in which $\xi_0$ corresponds to a wall which does not move.
In this case, we can neglect the wall part of the equation because
it causes a mere energy shift of $\cE^{(3)}_{\mathrm{sym}}$.
Then the action integral and the Euler equation become simpler.
We have the following theorem as a corollary.
\begin{theorem} \label{th:5-3}
The action integral of two-phase flow with wall is given by
\begin{equation*}
\cS_\wall= \int_T d t\int_{\Omega}
\left( \frac{1}{2}\rho |u|^2 -
\sum_{a=1}^2( \sigma_a |\nabla\xi_a| - p_a \xi_a )\right) d^3 x,
\end{equation*}
and the equation of motion is given by
\begin{equation}
\frac{D \rho u^i}{D t}
+ \partial_i p
-\partial_j (\overline \tau_{ij}) = 0,
\label{eq:Eeq3}
\end{equation}
where
\begin{equation}
\overline \tau = \sum_{a=1}^2\sigma_a
\left(I -
\frac{\nabla \xi_a}{|\nabla \xi_a|}
\otimes
\frac{\nabla \xi_a}{|\nabla \xi_a|}\right)
|\nabla \xi_a|.
\label{eq:overtau}
\end{equation}
\end{theorem}
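The structure of $\overline\tau$ in (\ref{eq:overtau}) can be checked numerically. The following Python sketch is our own illustration with assumed values of $\sigma_a$, the radius, and the smearing width; it treats a single phase in two dimensions rather than the two phases of the theorem. It assembles $\overline\tau$ at an interface point and verifies that, by construction, it carries no stress along the interface normal, $\overline\tau\, n = 0$.

```python
import numpy as np

# Sketch (assumed parameters) of the capillary stress tensor (eq:overtau)
# for one phase in 2D: tau = sigma * (I - n (x) n) * |grad xi|.
n, r, eps, sigma = 160, 0.5, 0.08, 2.5
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
xi = 0.5 * (1.0 - np.tanh((np.sqrt(X**2 + Y**2) - r) / eps))

gx, gy = np.gradient(xi, h)
gnorm = np.sqrt(gx**2 + gy**2)

# pick the strongest interface point and assemble tau there
i, j = np.unravel_index(np.argmax(gnorm), gnorm.shape)
nvec = np.array([gx[i, j], gy[i, j]]) / gnorm[i, j]   # interface normal
tau = sigma * (np.eye(2) - np.outer(nvec, nvec)) * gnorm[i, j]

print(np.linalg.norm(tau @ nvec))   # ~ 0: no stress along the normal
```

The projection $I - n\otimes n$ makes $\overline\tau$ act purely tangentially, which is the discrete analogue of the surface tension acting within the interface.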
Practically, this Euler equation (\ref{eq:Eeq3})
is more convenient due to the proper surface tension coefficients.
Although it differs considerably from the original equations (\ref{eq:Amini0})
and (\ref{eq:Amini}) in Reference \cite{LZZ},
it completely governs the motion of two-phase flow with a wall.
\begin{myremark}\label{rmk:5-2}
{\rm{
Equation (\ref{eq:Eeq3})
is the Euler equation with the surface tension
for two-phase fields with a wall or triple junctions
in our theoretical framework.
We should note that
under the approximation (\ref{eq:Appr3}),
(\ref{eq:Eeq3}) is equivalent to (\ref{eq:EeqM}),
even though
(\ref{eq:Eeq3}) is far simpler than (\ref{eq:EeqM}).
From Remark \ref{rmk:4-0},
it should be noted that
$\overline \tau$
and the Euler equation (\ref{eq:Eeq3})
are defined over $\Omega \times T$. This property
of the governing equation is very important for the
computations to be stable, as
mentioned in the Introduction.
Since the non-trivial part of $\overline{\tau}$
is localized within $\Omega$ for each $t\in T$,
$\overline \tau$ vanishes and has no effect on the equation
elsewhere.
}}
\end{myremark}
We will show some numerical computational results
for this case in the following section,
where we also consider
the viscous stress forces and the wall shear stress.
\section{Numerical computations}
\label{sec:VOF}
In this section, we show some numerical
computations of two-phase flow surrounded by a wall obeying the extended Euler
equation in Theorem \ref{th:5-3}.
As in Theorem \ref{th:5-3},
the wall is expressed by the color function $\xi_0$ and
has the intermediate region $(M_0 \setminus L_0^c)^\circ$
where $\xi_0$ takes values in $(0,1)$.
For the dynamics of the incompressible two-phase flow with a static wall,
we numerically
solve the equations,
\begin{equation}
\begin{split}
&\mathrm{div}(u) = 0,
\\
&\frac{D \rho u^i}{D t} +
(\partial_i p - K_i) = 0 ,
\\
&\frac{D \rho}{D t} = 0.
\end{split}
\label{eq:NumEq}
\end{equation}
Here for the numerical computations,
we assume that the force $K$ consists of
the surface tension, the viscous stress forces,
and the wall shear stress,
\begin{equation}
K_j = \partial_i \bar \tau_{ij} + \partial_i \tau_{ij} + \hat \tau_j.
\label{eq:Ki}
\end{equation}
Here
$\bar \tau$ is given by (\ref{eq:overtau}),
$\tau$ is the viscous tensor,
$$
\tau_{ij} := 2 \eta \left(E_{ij} - \frac{1}{3}
\mathrm{div} (u)\, \delta_{ij}\right), \quad
E_{ij} :=
\frac{1}{2}\left(\frac{\partial u^i}{\partial x_j}
+\frac{\partial u^j}{\partial x_i}\right)
$$
with the viscous constant
$$
\eta(x) := \eta_1 \xi_1 + \eta_2 \xi_2,
$$
and $\hat \tau_j$ is the wall shear stress, which is localized at
the intermediate region $(M_0 \setminus L_0^c)^\circ$
where $\xi_0$ takes values in $(0, 1)$.
The boundary condition at the interface between the fluid $\xi_a$
$(a = 1, 2)$ and the wall $\xi_0$ is generated dynamically in this
case. In other words, in order that
the wall shear stress term suppress the slip over the intermediate region
$(M_0 \setminus L_0^c)^\circ$ asymptotically as $t \to \infty$ due to
damping, we let $\hat \tau_j$ be proportional to the
$j$-component of $\partial u^\parallel /\partial q_0$,
where $u^\parallel$ is the velocity parallel to the wall,
weighted by $(1 - \xi_0(x))$, and we make $u$ vanish over $L_0$.
Here $q_0$, $M_0$, and $L_0$ are as in Definition \ref{def:5-4}.
The viscous force cannot be dealt with in the framework of the
Hamiltonian system because it is dissipative.
However, from the conventional consideration of the
momentum balance \cite[Sec.13]{EM},
it is not difficult to evaluate it.
The viscosity basically makes the numerical computations stable.
In the numerical computations,
we consider the problem on the structured lattice $\cL$
given by $a \ZZ^3$,
where $\ZZ$ is the set of integers and $a$ is a positive number.
The lattice consists of cells and faces of each cell.
Let every cell be a cube with sides of length $a$.
We deal with a subspace $\Omega_\cL$
of the lattice as $\Omega_\cL :=\Omega \cap \cL \subset \EE^3$.
The fields $\xi$'s are defined over the cells as cellwise constant
functions and
the velocity field $u$ is defined over faces as
facewise constant functions \cite{Ch};
$\xi$ is a constant function in each cell and depends on the
position of the cell,
and similarly the components of the velocity field,
$u^1$, $u^2$, and $u^3$ are facewise constant functions
defined over $x^2x^3$-faces,
$x^3x^1$-faces, and $x^1x^2$-faces of each cell respectively.
As commented in item 5 of Remark \ref{rmk:11}, we make the parameter
$\epsX$ depend on the intermediate region in this section.
Let $\epsilon_{12}$ be the parameter
for the two-phase field or the liquids, and
$\epsilon_0:= \epsilon_{01}\equiv\epsilon_{02}$ be
the one for the intermediate region
$(M_0 \setminus L_0^c)^\circ$ between the liquids and the wall.
As mentioned in the Introduction,
we assume that $\epsilon_{12}$ for the two-phase field
in our method satisfies $\epsilon_{12} \ge a$
so that we could estimate the intermediate effect in our model
following References \cite{AMS,BKZ,Cab,Jac,LZZ},
even though the thickness of the intermediate region among real liquids
is of atomic order and is basically negligible in the macroscopic theory.
In computational fluid dynamics, the VOF (volume of fluid) method
developed by Hirt and his coauthors \cite{Ch,HN,H}
is well established for dealing with a fluid with a wall.
Since we handle triple-junction problems as in Section
\ref{subsec:5-6}, we reformulate our model in the VOF method.
That is,
we identify $1-\xi_0$ with the so-called $V$-function,
$V := 1 - \xi_0$,
because $V$ in the VOF method
denotes the volume fraction of the fluid and thus corresponds to $1-\xi_0$
in our formulation.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.50,angle=270]{Fig1.ps}
\caption{VOF with porous matter expression:
For the consistency between the color function method and
VOF-method,
we consider each cell as a fictitious
porous material whose volume ratio and
open fraction are a value in $[0,1]$ without
imposing any wall shear stress on fictitious
surface of the porous parts in each cell.
This expression represents purely geometrical effects.
}
\label{fig:VOF}
\end{center}
\end{figure}
Following the convention of Reference \cite{H}, $V$ is also defined as
a cellwise constant function.
In the following examples, we basically set
$\epsilon_0$ to be $a$, i.e., one unit cell.
However, we can also take $\epsilon_0 > a$, as for the two-phase field.
For the case $\epsilon_0 > a$,
we consider each cell as a fictitious
porous material with volume ratio $V \in [0,1]$,
without imposing any wall shear stress on the fictitious
surface of the porous parts themselves in each cell, as in Figure 1.
(As mentioned above, we set the wall shear stress $\hat \tau_j$ from the
physical wall $\xi_0$. The porous parts are purely fictitious.)
The region where $V$ is equal to 1 is the region where the fluid exists
freely, whereas
the region where $V$ vanishes is the region where the existence of fluid
is prohibited. The region with $V \in (0,1)$ is the intermediate
region $(M_0 \setminus L_0^c)^\circ$.
Here we emphasize that
the fictitious porous structure in each cell brings
purely geometrical effects to this model.
We can then
consider the problem consistently between
the VOF method and the $\xi_0$ function of the phase-field model.
Let functions $f_1\equiv f$ and $f_2$
over $\supp(V)$ be defined by the relations,
\begin{equation*}
\xi_1 = V f_1, \quad \xi_2 = V f_2, \quad f_1 + f_2 = 1.
\end{equation*}
Further, we also modify the open fraction $A$ of the VOF method,
which is defined over each face. We interpret $A$ as the open area of
the fictitious porous material on each face of each cell,
which also takes a value in $[0,1]$, as in Figure 1
\cite{H,HN}.
For a face belonging to a cell with $V=1$, $A$ is also
equal to 1.
Following the convention in discretization by Hirt \cite{H},
$A$ is regarded as an operator acting
on the face-valued functions
like
\begin{equation*}
A \circ u \equiv Au = (A_{1} u^1,
A_{2} u^2,A_{3} u^3),
\end{equation*}
\begin{equation}
(Au)^{1} = A_{1} u^1, \quad
(Au)^{2} = A_{2} u^2, \quad
(Au)^{3} = A_{3} u^3. \quad
\label{eq:Au}
\end{equation}
Here we note that $A_i a^2$, implicitly appearing in
(\ref{eq:Au}), can be interpreted as a
two-chain of a homological basis associated with a face of a cell.
For example, for a velocity field $\mu := u^i(x) d x^i$ defined
over a cell in the continuous theory
and a piece of the boundary element of the
cell $A_1 a^2$,
the discretized $u^1$ defined over the face is given by
$$
(A u)^1 := \frac{1}{a^2} \int_{A_1 a^2} * \mu = A_1 u^1,
$$
where $*$ is the Hodge star operator, {\it{i.e.}},
$*\mu := u^1(x) d x^2 d x^3 +
u^2(x) d x^3 d x^1 +
u^3(x) d x^1 d x^2$.
Thus the discretization (\ref{eq:Au}) is very natural even
from the point of view of modern differential geometry.
Hence $\mathrm{div} (u) \equiv \nabla u$ reads
$\nabla A u$ as the difference equation in the VOF method \cite{H},
and we employ this discretization.
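As an illustration of the discretization (\ref{eq:Au}), the following Python sketch is our own, with arbitrary grid sizes and random face data as assumptions. It applies face open fractions $A$ to a staggered velocity field and forms the discrete divergence $\nabla A u$; with fully open faces it reduces to the standard staggered-grid divergence.

```python
import numpy as np

# Sketch (illustrative only) of the discrete divergence "nabla A u" of
# (eq:Au) on a 2D staggered grid: u^1 lives on x-faces, u^2 on y-faces,
# and each face carries an open fraction A in [0, 1].
nx, ny, h = 8, 8, 0.1
rng = np.random.default_rng(1)
u1 = rng.standard_normal((nx + 1, ny))    # x-face velocities
u2 = rng.standard_normal((nx, ny + 1))    # y-face velocities
A1 = np.ones((nx + 1, ny))                # open fractions on x-faces
A2 = np.ones((nx, ny + 1))                # open fractions on y-faces

def div_A(u1, u2, A1, A2, h):
    """Cellwise divergence of the face-fraction-weighted velocity A u."""
    f1, f2 = A1 * u1, A2 * u2
    return (f1[1:, :] - f1[:-1, :]) / h + (f2[:, 1:] - f2[:, :-1]) / h

# with fully open faces (A = 1) this reduces to the standard MAC divergence
d_open = div_A(u1, u2, A1, A2, h)
d_mac = (u1[1:, :] - u1[:-1, :]) / h + (u2[:, 1:] - u2[:, :-1]) / h
print(np.max(np.abs(d_open - d_mac)))     # 0.0
```

Closing a face (setting its entry of $A$ below 1) simply scales the flux through that face, which is the purely geometrical porous-cell effect described above.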
We give our algorithm to compute (\ref{eq:NumEq})
precisely as follows.
As a convention, we label the quantities with {\lq\lq}old{\rq\rq}
and {\lq\lq}new{\rq\rq}, corresponding to the previous and next states
at each time step of the computation, respectively.
In other words, we give an algorithm that constructs the
next state from the previous data, regarding the current
state as an intermediate state in the time step.
We use the project-method \cite{Cho,Ch};
\begin{equation*}
\begin{split}
\mbox{I} &: \frac{\rho \tilde\uu - \rho \uu^\old }{\Delta t }
= -(\uu^\old\cdot \nabla ) \rho \uu^\old ,
\nonumber \\
\mbox{II} &:\frac{\uu^\new-\tilde\uu }{\Delta t }
= -\frac{1}{\rho}( \nabla p -\KK),
\nonumber \\
\mbox{III} &: \nabla \uu^\new =0. \\
\end{split}
\end{equation*}
Step I is the advection of the
velocity $\uu^\old$.
In step I we define an intermediate velocity
$\tilde\uu$, and then
we compute $\uu^\new$ and $p$ in steps II and III.
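Steps II and III can be sketched numerically. The following Python fragment is our own illustration under simplifying assumptions: constant density, a doubly periodic domain, random data in place of an advected field, and a spectral Poisson solve in place of the iterative solver used below. It carries out the projection and verifies that the resulting velocity is discretely divergence-free.

```python
import numpy as np

# Minimal projection step (our sketch, assumed setup): given a guessed
# velocity (u*, v*), solve lap p = (rho/dt) div u*, then subtract
# (dt/rho) grad p so that the new velocity is divergence-free.
n, rho, dt = 64, 1.0, 1e-3
h = 2.0 * np.pi / n
k = np.fft.fftfreq(n, d=h) * 2.0 * np.pi
k[n // 2] = 0.0      # drop the Nyquist mode: keeps d/dx of real fields exact
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2_safe = np.where(K2 == 0.0, 1.0, K2)

rng = np.random.default_rng(0)
u_star = rng.standard_normal((n, n))      # guessed velocity, x-component
v_star = rng.standard_normal((n, n))      # guessed velocity, y-component

def ddx(f, K):
    """Spectral derivative of a real periodic field along one axis."""
    return np.real(np.fft.ifft2(1j * K * np.fft.fft2(f)))

div_star = ddx(u_star, KX) + ddx(v_star, KY)
p_hat = -np.fft.fft2(div_star) * (rho / dt) / K2_safe
p_hat[K2 == 0.0] = 0.0                    # fix the undetermined modes
p = np.real(np.fft.ifft2(p_hat))
u_new = u_star - (dt / rho) * ddx(p, KX)
v_new = v_star - (dt / rho) * ddx(p, KY)

div_new = ddx(u_new, KX) + ddx(v_new, KY)
print(np.max(np.abs(div_new)))            # near machine precision
```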
The time-development of $\rho$ is given by the equation,
\begin{equation*}
f^\new = f^\old + \Delta t \nabla ((A \uu^\old) f^\old),
\end{equation*}
and
\begin{equation*}
\rho = V(\rho_{1} f + \rho_{2} (1-f))
\end{equation*}
for the proper densities $\rho_{a}$ of $\xi_a$ $(a = 1, 2)$.
In order to deal with multi-phase flow with a
large density difference, we evaluate the time development carefully.
Precisely speaking, when we evaluate $\tilde \uu$,
following the idea of Rudman \cite{R} we
employ the momentum advection $\tilde\uu$ of $\uu$,
$$
\tilde\uu:=
\frac{1}{\rho^\new}[\rho^\old\uu^\old
-\Delta t(\uu^\old\cdot \nabla ) \rho^\old \uu^\old].
$$
Our derivation of the Euler equation shows that Rudman's
method is quite natural.
Following the conventional notation, the
guessed value of the velocity is denoted by $\uu^*$;
it is the initial value for steps II and III.
Let us define
\begin{equation*}
\uu^* := \tilde\uu +
\Delta t\frac{1}{\rho^\new} \KK(\rho^\old,f^\old, \uu^\old).
\end{equation*}
In order to evaluate the guessed velocity,
we compute the force $\KK$ from
(\ref{eq:Ki}) noting that
$\mathrm{div} \tau$ and $\mathrm{div} \overline\tau$
read
$\nabla A \tau$ and $\nabla A \overline\tau$ respectively.
Following the SMAC (Simplified-Marker-and-Cell) method \cite{AH,Cho,Ch},
we numerically determine the new velocity $\uu^\new$
and the pressure $p$ in a certain boundary condition
using the preconditioned conjugate gradient method (PCGM):
\begin{equation*}
\begin{split}
\mbox{(IIa) Evaluate } p \mbox{ using the PCGM}
&:\frac{1}{\Delta t}\nabla( A\circ\uu^*)
= \nabla A \circ \frac{1}{\rho^\new} \nabla p, \\
\mbox{(IIb) By using }p \mbox{ determine } \uu^\new
&: \uu^\new = \uu^* - \Delta t \frac{1}{\rho^\new} \nabla p.\\
\end{split}
\end{equation*}
More precisely, (III) $\nabla(A\circ \uu^\new)=0$ means
that we numerically solve the Poisson equation,
$$
\nabla\left(A \Delta t \frac{1}{\rho^\new} \nabla p\right)
=\nabla(A\circ \uu^*).
$$
Then we obtain $\uu^\new$, which obviously
satisfies (III) $\nabla(A\circ \uu^\new)=0$;
this procedure is known as the Hodge decomposition method \cite{AH,Cho,CM},
as mentioned in Remark \ref{rmk:3-2}.
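The projection step can be sketched numerically. The following is a minimal illustration of our own, not the authors' code: it implements steps (IIa)--(IIb) on a small doubly periodic MAC-type staggered grid, with a damped Jacobi iteration standing in for the preconditioned conjugate gradient method, and with the weight operator $A$ and the force $\KK$ omitted (all cells are fluid, $\uu^*$ is taken as given).

```python
import numpy as np

def smac_projection(u, v, rho, dt, h, n_iter=3000):
    """Steps (IIa)-(IIb): solve div((dt/rho) grad p) = div(u*) on a
    periodic MAC grid, then correct u* so that div(u_new) = 0.
    Damped Jacobi stands in here for the PCGM of the text."""
    rx = lambda a, s: np.roll(a, s, axis=0)      # periodic shift in x
    ry = lambda a, s: np.roll(a, s, axis=1)      # periodic shift in y
    bx = 2.0 * dt / (rx(rho, 1) + rho)           # dt/rho at left x-faces
    by = 2.0 * dt / (ry(rho, 1) + rho)           # dt/rho at bottom y-faces
    div = (rx(u, -1) - u + ry(v, -1) - v) / h    # divergence of u*
    diag = rx(bx, -1) + bx + ry(by, -1) + by
    p, omega = np.zeros_like(rho), 2.0 / 3.0     # damping: plain Jacobi
    for _ in range(n_iter):                      # leaves the checkerboard
        num = (rx(bx, -1) * rx(p, -1) + bx * rx(p, 1)        # mode undamped
               + ry(by, -1) * ry(p, -1) + by * ry(p, 1) - h * h * div)
        p = (1.0 - omega) * p + omega * num / diag
    # (IIb): u_new = u* - dt (1/rho) grad p, evaluated on the faces
    return u - bx * (p - rx(p, 1)) / h, v - by * (p - ry(p, 1)) / h

rng = np.random.default_rng(0)
n, h, dt = 8, 0.125, 1e-3
rho = 1.0 + rng.random((n, n))                   # inhomogeneous density
u, v = rng.standard_normal((n, n)), rng.standard_normal((n, n))
u2, v2 = smac_projection(u, v, rho, dt, h)
div2 = (np.roll(u2, -1, 0) - u2 + np.roll(v2, -1, 1) - v2) / h
print(np.max(np.abs(div2)))                      # essentially zero
```

The discrete divergence of the corrected field equals the residual of the Poisson solve, so it vanishes to solver accuracy, which is the discrete analogue of (III).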
Following the algorithm, we computed the two-phase flow with
a wall and triple junctions.
We illustrate two examples of the numerical
solutions of the triple junction problems as follows.
\bigskip
\subsection{Example 1}
\bigskip
Here we show a computation of
a capillary problem, or the meniscus oscillation,
in Figure 2.
We set two liquids between parallel walls with the physical parameters:
$\eta_1 = \eta_2 = 0.1$[cp],
$\rho_1 = \rho_2 = 1.0$[pg/$\um^3$],
$\sigma_1 = 3.349$[pg/$\mu$sec${}^2$],
$\sigma_2 = 46.651$[pg/$\mu$sec${}^2$].
We used a $\cL:=12[\um] \times 0.5[\um]\times 16 [\um]$ lattice whose
unit length $a$ is $0.125[\um]$.
The first liquid occupies the lower side and the second liquid
the upper side of the region $10[\um] \times 0.5[\um]\times 15 [\um]$
surrounded by the wall and the boundaries.
As the boundary conditions,
at the height $15[\um]$ above the bottom of the wall,
we fix the pressure to the constant value $100$[kPa] and,
along the $x^2$-direction, we impose periodic boundary conditions.
We set $\epsilon_{12} = \epsilon_0 = 1$ mesh for the intermediate regions,
at least as the initial condition.
Each time step is 0.001 [$\mu$sec].
As the initial state, we start from a configuration in which
the fluid surface is flat as in Figure 2 (a)
and the first liquid occupies the box region
$10[\um] \times 0.5[\um] \times 7.0 [\um]$; this configuration
is not stable.
Due to the surface tension, the surface moves and starts
to oscillate, but due to the viscosity the oscillation decays.
Though we did not impose the contact angle as a geometrical constraint,
the dynamics of the contact angle was computed through a balance between
the kinetic energy and the potential energy, or the surface energy.
The oscillation converged to the stable shape with the proper contact
angle, which is given by
\begin{equation}
\cos \varphi = \frac{\sigma_2 - \sigma_1} {\sigma_2 + \sigma_1}
\equiv \frac{\sigma_{02} - \sigma_{01}} {\sigma_{12}}.
\label{eq:sigma_phi}
\end{equation}
The angle determined by the $\sigma$'s is designed to be $30$ [degree],
whereas the angle in the numerical experiment in Figure 2 is slightly
larger than $30$ [degree],
though it is very difficult to measure it precisely.
However, since we can tune the parameters $\sigma$ so as to obtain
the required state, our formulation is very practical.
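As a quick consistency check of our own, formula (\ref{eq:sigma_phi}) applied to the coefficients of this example indeed gives the designed angle:

```python
import math

# Designed contact angle of Example 1 from
# cos(phi) = (sigma_2 - sigma_1)/(sigma_2 + sigma_1)
sigma1, sigma2 = 3.349, 46.651       # [pg/musec^2]
phi = math.degrees(math.acos((sigma2 - sigma1) / (sigma2 + sigma1)))
print(round(phi, 2))                 # ~ 30 degrees by design
```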
Due to numerical diffusion and other effects,
the thickness of the intermediate regions changes
during the time development and also depends on the positions of
the interfaces, even though it is fixed to the same value in the
initial state. However, we consider that it is thin enough
to evaluate the physical system, since the contact angle
is reasonably estimated.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.50,angle=270]{Fig2.ps}
\caption{ The meniscus oscillation:
Each figure shows the time development.}
\label{fig:OS}
\end{center}
\end{figure}
\subsection{Example 2}
This example concerns the computation of the contact angles for different
surface tension coefficients, displayed in Figure 3.
In this case as well, in order to see the difference between
the designed contact angle and the computed one,
we continue to handle two-dimensional symmetric
problems, though we used three-dimensional computational software.
In other words, we set the
$x^2$-direction to be periodic.
The contact angle $\varphi$ in our convention is given by
the formula (\ref{eq:sigma_phi}).
By setting the $\sigma$'s so that
$$
\frac{\sigma_1}{\sigma_2} =
\frac{1- \cos\varphi}{1+ \cos\varphi},
$$
for a given contact angle $\varphi$, we computed
five triple junction problems without any geometrical constraints;
each $\sigma$ is given in the caption of Figure 3.
The other physical parameters are
given by $\eta_1 = \eta_2 = 0.1$[cp]
and $\rho_1 = \rho_2 = 1.0$[pg/$\um^3$].
In this computation we used a $240\times 4\times 112$ lattice whose
unit length $a$ is $0.125[\um]$; $\Omega= 30[\um] \times 0.5[\um]
\times 14[\um]$.
We set a flat layer of thickness $3[\um]$ from the bottom of $\Omega$
along the $z$-axis as a wall.
As the boundary conditions,
at the height $9[\um]$ above the bottom of the wall,
we fix the pressure to the constant value $100$[kPa].
As the initial state for each computation,
we set a semicylinder with radius $5[\mu m]$
on the flat wall as in Figure 3 (d).
We also set $\epsilon_{12} = \epsilon_0 = 1$ mesh for the intermediate regions.
Each time step also corresponds to 0.001 [$\mu$sec].
Due to the viscosity, after a sufficiently long time of about $50[\mu$sec],
static solutions were obtained as illustrated in Figure 3, which
recover the designed contact angles in good agreement
within our approximation.
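The $\sigma_2$ values quoted in the caption of Figure 3 follow directly from the design formula above; a small check of our own, fixing $\sigma_1 = 1$:

```python
import math

# sigma_2 for each designed angle of Figure 3 with sigma_1 = 1 fixed,
# from sigma_1/sigma_2 = (1 - cos phi)/(1 + cos phi)
sigma1 = 1.0
for deg in (30, 45, 60, 90, 120):
    c = math.cos(math.radians(deg))
    print(deg, round(sigma1 * (1 + c) / (1 - c), 4))
```

The printed values reproduce the caption: $13.9282$, $5.8284$, $3.0$, $1.0$ and $0.3333$.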
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.50,angle=270]{Fig3.ps}
\caption{ The different contact angles are illustrated due to
the different surface energy: By fixing $\sigma_1 =1.0000$[pg/$\mu$sec${}^3$],
(a): $\varphi = 30$ [degree], $\sigma_2 = 13.9282 $[pg/$\mu$sec${}^3$],
(b): $\varphi = 45$ [degree], $\sigma_2 = 5.8284 $[pg/$\mu$sec${}^3$],
(c): $\varphi = 60$ [degree], $\sigma_2 = 3.0000$[pg/$\mu$sec${}^3$],
(d): $\varphi = 90$ [degree], $\sigma_2 = 1.0000$[pg/$\mu$sec${}^3$],
and
(e): $\varphi = 120$ [degree], $\sigma_2 = 0.3333$[pg/$\mu$sec${}^3$].}
\label{fig:CA}
\end{center}
\end{figure}
\bigskip
\bigskip
\section{Summary}
By exploring an incompressible fluid with a phase-field
geometrically \cite{Ar,AK,EM,K,Ko,MW,NHK},
we reformulated the expression of the surface tension
for the two-phase flow found by
Lafaurie, Nardone, Scardovelli, Zaleski and Zanetti
\cite{LZZ} as a variational problem.
We reproduced the Euler equation of two-phase flow (\ref{eq:Eulxi})
following the variational principle of the action integral
(\ref{eq:AI2d}) in Proposition \ref{prop:4-6}.
The new formulation along the line of the variational principle
enabled us to extend (\ref{eq:Eulxi}) to that for the
multi-phase ($N$-phase, $N\ge2$) flow.
By extending (\ref{eq:Eulxi}), we obtained the novel Euler equation
(\ref{eq:EeqM}) with the surface tension of the multi-phase fields
in Theorem \ref{th:5-2} from the action integral of
Theorem \ref{th:5-1}
as the conservation of momentum in the sense of Noether's theorem.
The variational principle for the infinite dimensional system
in the sense of References \cite{Ar,AK,EM} gives the equation of motion
of multi-phase flow controlled by the small parameter $\epsX$
without any geometrical
constraints and any difficulties for the singularities at
multiple junctions.
For the static case, we gave governing equations
(\ref{eq:ELM2}), (\ref{eq:ELM3}) and (\ref{eq:MSFe})
which generate the
locally constant mean curvature surfaces with triple junctions
by controlling a parameter $\epsX$ to avoid these singularities.
Since the solutions of (\ref{eq:ELs}) have been studied well
as constant mean curvature surfaces
over the last two decades \cite{ES,FW,GMO,T}, our extended equations
(\ref{eq:ELM2}), (\ref{eq:ELM3}) and (\ref{eq:MSFe})
might shed new light on the treatment of singularities of their extended
surfaces, or a set of locally constant mean
curvature surfaces.
(Even though our scheme requires an interpretation,
it can be applied, for example,
to a soap film problem with triple junctions.)
This implies that our method might provide a method of resolution
of singularities in the framework of analytic geometry.
By specifying the problem of the multi-phase flow
to the contact angle problems
at triple junctions with a static wall,
we obtained the simpler Euler equation (\ref{eq:Eeq3})
in Theorem \ref{th:5-3}.
Using the VOF method \cite{H,HN},
we showed two examples of the numerical computations
in Section \ref{sec:VOF}.
In our computational method,
for given surface tension coefficients,
the contact angle is automatically generated
by the surface tension
without any geometrical constraints
and any difficulties for the singularities at
triple junctions. The computations were very stable:
they neither collapsed nor behaved wildly
for any of the initial and boundary
conditions.
In our theoretical framework,
we have unified the infinite-dimensional geometry
of incompressible fluid dynamics
governed by $\IFluid(\Omega \times T)$ with the $\epsX$-parameterized
low-dimensional geometry with singularities given by the multi-phase fields.
We obtained all of the equations following the same
variational principle.
We naturally reproduced the Laplace equations, (\ref{eq:ELs}) and
(\ref{eq:ELM2}), and obtained their generalizations
(\ref{eq:EL2d}),
(\ref{eq:ELM2}), (\ref{eq:ELM3}),
(\ref{eq:ELM3d}) and (\ref{eq:MSFe}),
and the Euler equations,
(\ref{eq:Eulxi}),
(\ref{eq:EeqM}), and (\ref{eq:Eeq3})
in Proposition \ref{prop:4-6} and Theorems \ref{th:5-2} and \ref{th:5-3}.
These equations are derived
from the same action integrals by choosing the physical parameters.
In the sense of References \cite{Ar,AM,BGG}, it implies that
we gave geometrical interpretations of the multi-phase flow.
Even though the phase-field model has
the artificial intermediate regions with unphysical thickness $\epsX$,
our theory supplies a model which shows how to evaluate their effects on
the surface tension forces, from geometrical viewpoints.
The key fact of the model
is that we express the low-dimensional geometry in terms
of the infinite-dimensional vector spaces, or
{\it{global functions $\xi$'s}}
which have natural $\Diff$ and $\SDiff$ actions.
Thus
we can treat them in the framework of infinite-dimensional
Lie groups \cite{AK,EM,O} to consider the Euler equation.
This is in contrast to the level-set method:
in analytic geometry and algebraic geometry, the zeros of a function
express a geometrical object, and thus the level-set method is
natural from that point of view. However, as mentioned in Section
\ref{sec:two-one}, the level-set function cannot be a global function
in $\cC^\infty(\Omega)$, and thus it is difficult to handle the method in the
framework of the infinite-dimensional Lie group $\SDiff(\Omega)$.
As our approach gives a resolution of the singularities
by a parameter $\epsX$,
in the future we will explore topology changes, geometrical objects
with singularities, and so on, more concretely in our theoretical framework.
When $\epsX$ approaches zero, we need more rigorous arguments in
terms of hyperfunctions \cite{KKK},
but we conjecture that our results remain correct in the
vanishing limit of $\epsX$, because the Heaviside function is expressed by
$\displaystyle{\theta(q) = \lim_{\epsX \to 0} \left[\frac{1}{2}
+\frac{1}{\pi}\tan^{-1}
\left(\frac{q}{\epsX}\right)\right]}$ in the Sato hyperfunction theory,
which could basically be identified with $\xi(q)$ at finite $\epsX$.
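The regularized step profile can be checked numerically. In the sketch below (ours, not from the paper) we include the additive constant $1/2$ so that the profile runs from $0$ to $1$, as a phase field $\xi$ does; conventions for the Heaviside representation differ by this constant.

```python
import math

# theta_eps(q) = 1/2 + (1/pi) arctan(q/eps) tends to the Heaviside
# step as eps -> 0; at finite eps it is a smooth 0-to-1 profile
# analogous to the phase field xi(q).
def theta(q, eps):
    return 0.5 + math.atan(q / eps) / math.pi

for eps in (1e-1, 1e-3, 1e-6):
    print(eps, round(theta(-1.0, eps), 6), round(theta(1.0, eps), 6))
```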
Since an application of the Sato hyperfunction theory to fluid
dynamics was reported by Imai on vortex layer and so on \cite{II},
we believe that this approach might give another collaboration
between pure mathematics and fluid mechanics.
\bigskip
\bigskip
Acknowledgements:
This article is written by the authors in memory
of their colleague, collaborator and leader Dr.~Akira Asai,
who led the development of this project.
The authors are also grateful to Mr. Katsuhiro Watanabe
for critical discussions and to the anonymous referee for
helpful and crucial comments.
\bigskip
\bigskip
\section{Introduction}
Much more than simple barriers, membranes play an active role in many biological phenomena, in particular when they bend and curve into a multitude of shapes~\cite{McMahon05,Zimmerberg06}. Indeed, each membrane shape is coupled to a specific function, and the diversity and dynamics of these shapes are vital for cell physiology~\cite{Farsad03,Shnyrova09}. The remarkable material properties of biological membranes, inherent in their nature of complex bi-dimensional visco-elastic films, play an important part in membrane deformations.
Giant unilamellar vesicles (GUVs) are model membranes made of amphiphilic lipid molecules that self-assemble into closed bilayers in water. Their sizes and curvatures are similar to those of living cells and their lipid bilayer membrane exhibits the basic properties of biological membranes. These features have made them very attractive objects to study the physics of a number of phenomena in cellular biology. Although lacking membrane proteins and a cytoskeleton, they have been used as a minimal cell model to mimic various biological processes such as membrane budding and endocytosis, fusion and fission, transport phenomena across the membrane, lipid domain formation, etc.~\cite{Cans03,Streicher09,Campelo08,Baumgart03,Yanagisawa07,Veatch02}. Notably, giant vesicle ``micromanipulation'' gives a unique opportunity to carry out controlled experiments on an individual vesicle, obtaining data \textit{directly} while exposing this vesicle to mechanical, biochemical, or chemical perturbations~\cite{Wick96,Angelova99,Borghi03,Khalifat08}. In this last case, a localized chemical gradient is created close to the membrane by microinjection. The response of the membrane to such a localized gradient can be significantly different from the one caused by a uniform perturbation of its environment. In particular, we will show in this paper that a change of the spontaneous curvature and a change of the preferred area per lipid lead to different dynamics in the case of a local perturbation. In contrast, when uniform perturbations are studied, observing the change of the equilibrium shape does not enable one to distinguish between these two types of changes of the membrane properties~\cite{Lee99}.
In a previous work~\cite{Fournier09}, we have reported both theoretically and experimentally a chemically-driven membrane shape instability observed when GUVs are submitted to a local pH increase. In this instability, a local chemical modification affecting some lipids in one monolayer of the membrane changes the preferred area per lipid, thereby inducing a transient local bilayer curvature. Since only one monolayer is affected, the lateral redistribution of the lipids is strongly slowed down by the intermonolayer friction. When such a chemical modification is applied to a small enough surface, it triggers the ejection of a tubule growing exponentially toward the chemical source.
In this paper, we describe in more detail the first phase of this local curvature instability, before the tubule ejection, and we present a more complete comparison of our experimental results with the predictions of our model. In the experiments, the chemical modification of the vesicle membrane has been achieved by locally delivering to the membrane outer leaflet a basic solution of NaOH with a micropipette. We have performed time-step chemical modifications (\textquotedblleft pulse\textquotedblright\ experiments) and we have measured the time evolution of the height of the deformation of the vesicle with respect to its initial shape. Our theoretical model has been completed by taking into account in our dynamical equations the spontaneous curvature change in addition to the equilibrium density change induced by the chemical modification. We have also compared these two effects. The comparison of the experimental results with the predictions of our model has been performed by fitting the measured deformation height with the solution of the dynamical equations of our model. Several ``pulse'' experiments, done on three different GUVs, have been analyzed. The agreement between the experiments and the model is quite good, and our fits enable us to estimate the intermonolayer friction coefficient, yielding values consistent with the literature.
\section{Materials and Methods}
\subsection{Membrane composition and giant vesicle preparation}
The following lipids were used without further purification: egg yolk L-$\alpha$-phosphatidylcholine (EYPC), Sigma, Lyon, France; Brain L-$\alpha$-phosphatidylserine (PS), Avanti Polar Lipids, Alabaster, AL. All other chemicals were of the highest purity grade: HEPES, Interchim, Montlu\c{c}on, France; EDTA, Sigma; NaOH, Sigma.
Giant vesicles were formed by the liposome electroformation method~\cite{Angelova86} in a thermostated chamber. The particular electroformation protocol used in this work was the following: lipid mixture solutions were prepared in chloroform/diethyl ether/methanol (2:7:1) with a total lipid concentration of 1 mg/ml. All the liposome preparations were made with a unique lipid mixture of EYPC and PS with EYPC/PS 90:10 mol/mol. A droplet of this lipid solution (1 $\mu$l) was deposited on each of the two parallel platinum wires constituting the electroformation electrodes, and dried under vacuum for 15 min. An AC electrical field, 10 Hz, 0.26 Vpp, was applied to the electrodes. Buffer solution (2 ml, pH 7.4, HEPES 0.5 mM, EDTA 0.5 mM, temperature $25^\circ\mathrm{C}$) was added to the working chamber (avoiding agitation). The voltage was gradually increased (for more than two hours) up to 1 Vpp and maintained for 15 more minutes, before switching the AC field off. The GUVs were then ready for further use. In each preparation at least 10 GUVs of diameter 50-80 $\mu$m were available.
\subsection{Microscopy imaging and micromanipulation}
We used a Zeiss Axiovert 200M microscope, equipped with a charged-coupled device camera (CoolSNAP HQ; Photometrics, Tucson, AZ). The experiments were computer-controlled using the Metamorph software (Molecular Devices, Downington, PA). The morphological transformations and the dynamics of the membrane were followed by phase contrast microscopy.
Tapered micropipettes for the local injection of NaOH were made from GDC-1 borosilicate capillaries (Narishige, Tokyo, Japan), pulled on a PC-10 pipette puller (Narishige, Tokyo, Japan). The inner diameter of the microcapillary used for performing the local injections onto a GUV was $0.3~\mu\mathrm{m}$. For these local injections, a microinjection system (Eppendorf femtojet) was used. The micropipettes were filled with a basic solution of NaOH ($1~\mathrm{M}$, pH~$13$). The injected volumes were on the order of picoliters, the injection lasted a few seconds, and the injection pressure was 100 hPa. The positioning of the micropipettes was controlled by a high-graduation micromanipulator (MWO-202; Narishige, Tokyo, Japan). The injections were performed at different distances from the GUV surface, taking care to avoid any contact with the lipid membrane. The injected solution covered about 10 \% of the GUV surface at the time of the injection. Estimations based on visualizing the flux from the micropipette and taking into account the dilution of NaOH in the GUV formation buffer (pH 7.4) yield, at the deformation onset, a pH ranging from 8 to 9 on the vesicle membrane.
\section{Experiments}
\label{section_exp}
Giant unilamellar vesicles in the fluid phase were formed by electroformation at $25^\circ\mathrm{C}$ in a buffer at pH~$7.4$ from a mixture of EYPC and PS with EYPC/PS 90:10 mol/mol.
The chemical modification of the membrane was achieved by locally delivering to the membrane outer leaflet a basic solution of NaOH ($1~\mathrm{M}$, pH~$13$). This local increase of the pH should affect the head groups of the phospholipids PS and PC forming the membrane.
Indeed, the amino group of the PS head group deprotonates at high pH, its intrinsic $\mathrm{pK_a}$ being about 9.8 in vesicles constituted of a PC/PS 90:10 mol/mol mixture~\cite{Tsui86}. Besides, the positively charged trimethylammonium group of the PC head group associates with hydroxide ions at high pH, the dissociation constant K of this equilibrium being such that $\mathrm{pK}=14-\mathrm{pK_a^{eff}}$, where $\mathrm{pK_a^{eff}}=11$~\cite{Lee99}. Both reactions increase the negative charge of the lipids, which entails a local change of the preferred area per lipid head group in the outer leaflet.
Figure~\ref{Fig_planche} shows a typical \textquotedblleft pulse\textquotedblright\ experiment. We try to perform a time-step chemical modification. For this, we quickly approach the pipette without injecting any solution, and then we inject the solution during a time $\Delta t\approx\!3~\mathrm{s}$ before quickly withdrawing the pipette.
\begin{figure}[h t b]
\centering
\includegraphics[width=0.75\textwidth]{figure1.eps}
\caption[]{Time-step chemical modification of the vesicle membrane. A local modulation of the pH at the level of a vesicle membrane induces a smooth deformation of the vesicle (frames 1.3 to 3.6 s). The deformation is completely reversible when the NaOH delivery is stopped (frames 4.3 s to the end). Scale bar: 50 $\mu\mathrm{m}$.
\label{Fig_planche}}
\end{figure}
One can see in Fig.~\ref{Fig_planche} the vesicle before any microinjection (frame 0 s), and the micropipette during its approach (frames 0.6 and 1.3 s). A two-step process occurs. First, a smooth deformation of the vesicle starts to develop toward the pipette, i.e., opposite to the flow (frames 0.6 to 3.6 s). At this point we stop the injection and we quickly withdraw the pipette, and the membrane deformation relaxes after reaching a maximum (frames 4.3 s to the end).
The following control experiments have been carried out: (i) We have checked that no deformation occurs if only buffer solution is injected. (ii) In order to verify that the observed effects were not simply due to charge screening and/or osmotic effects, a local injection of salt solution (NaCl instead of NaOH) has been performed. The typical smooth and reversible deformation has not been observed in these control experiments, which shows that the pH increase is crucial in our instability.
Fig.~\ref{Fig_H_d} shows the measured time evolution of the height $H(t)$ of the deformation of the vesicle with respect to its initial shape for the experiment presented in Fig.~\ref{Fig_planche}.
\begin{figure}[h t b]
\centering
\includegraphics[width=0.55\textwidth]{figure2.eps}
\caption[]{Typical example of the time evolution of the vesicle deformation, measured in front of the pipette, in a \textquotedblleft pulse\textquotedblright experiment. Dots: time evolution $H(t)$ of the deformation amplitude. Solid line: Time evolution $d(t)$ of the distance of the micropipette from the electrode supporting the GUV. \label{Fig_H_d}}
\end{figure}
One can easily see in this figure the two-step process described previously. For more clarity, the distance of the micropipette from the electrode supporting the GUV is also presented.
Several \textquotedblleft pulse\textquotedblright\ experiments were conducted on three different GUVs. The complete analysis of these experiments is presented in Sec.~\ref{Fit}.
\section{Theoretical model}
\subsection{Free energy of a bilayer membrane}
The free energy per unit area $f$ of a lipid membrane is well described on a large scale by
\begin{equation}
f=\sigma_0+\frac{\kappa}{2}c^2-\kappa c_0^b c\,.
\label{f}
\end{equation}
This free-energy density depends on the membrane curvature $c$, which is defined as the sum of the two principal curvatures on the bilayer midsurface. The constitutive constants $\sigma_0$, $\kappa$ and $c_0^b$ denote, respectively, the membrane tension, its bending elastic constant, and its spontaneous curvature.
The resulting membrane free energy
\begin{equation}
F=\int dA\,f\,
\label{intaire}
\end{equation}
is known as the Helfrich Hamiltonian~\cite{Helfrich73}. The integral in Eq.~(\ref{intaire}) is a surface integral over the area $A$ of the bilayer midsurface. Note that we do not include any Gaussian curvature term in this free energy. Indeed, the topology of the vesicle is not affected by the instability we wish to study, so that the integral of the Gaussian curvature, and thus its contribution to $F$, remains constant by virtue of the Gauss-Bonnet theorem~\cite{Buchin}.
To account for the bilayer structure of the membrane, its free-energy density $f$ can be written as the sum of the free-energy densities of the two monolayers, which will be noted $f^+$ and $f^-$. Since the curvature $c$ is defined on the bilayer midsurface, it is the same for both monolayers. Furthermore, we assume that the two monolayers have the same lipid composition before the onset of the instability, so that they have identical tensions and bending rigidities. They also have opposite spontaneous curvatures, noted $\pm c_0$, since the lipids in the two monolayers are oriented in opposite directions: their hydrophilic heads are oriented towards the exterior of the bilayer, while their hydrophobic tails are oriented towards the interior of the bilayer. To study our instability, it is necessary to take into account the inhomogeneities in the lipid mass densities $\rho^\pm$ defined on the bilayer midsurface in each monolayer. Defining the scaled densities $r^\pm=(\rho^\pm-\rho_0)/\rho_0$, where $\rho_0$ is a reference density, which is chosen identical for both monolayers, we may write~\cite{Seifert93}
\begin{equation}
f^\pm=\frac{\sigma_0}{2}+\frac{\kappa}{4}c^2\pm\frac{\kappa c_0}{2}c+\frac{k}{2}\left(r^\pm \pm ec\right)^2\,.
\label{fpm}
\end{equation}
In this formula, $k$ is the stretching elastic constant of a monolayer, which is the same for both monolayers as they are identical, and $e$ denotes the distance between the neutral surfaces of the monolayers~\cite{Safran} and the midsurface of the bilayer. Indeed, the scaled densities in each monolayer at a distance $e$ from the bilayer midsurface are $r_n^\pm=r^\pm \pm ec$ at first order in the small variable $ec$, so that if $f^\pm$ is written in terms of the curvature and $r_n^\pm=(\rho_n^\pm-\rho_0)/\rho_0$, these two variables are decoupled. Such a decoupling between deformations where only the curvature is modified (bending) and deformations in which only the density is affected (stretching) is characteristic of the neutral surface~\cite{Safran}. We choose the sign convention for the curvature in such a way that a spherical vesicle has $c<0$. Then the monolayer denoted ``$+$" in Eq.~(\ref{fpm}) is the outer monolayer.
The expression (\ref{fpm}) of the free-energy density of a monolayer is a general second-order expansion around a reference state characterized by a flat shape ($c=0$) and a uniform density $\rho^\pm=\rho_0$ \cite{Futur}. It is valid for small deformations around this reference state: $r^\pm=\mathcal{O}(\epsilon)$ and $ec=\mathcal{O}(\epsilon)$, where $\epsilon$ is a small nondimensional parameter characterizing the deformation.
If the densities on the neutral surfaces, $\rho_n^\pm$, are both equal to the reference value $\rho_0$, summing the monolayer free-energy densities $f^\pm$ in (\ref{fpm}) gives back the standard free-energy density (\ref{f}) of the bilayer: $f=f^+ +f^-$. Note that we find $c_0^b=0$ for the spontaneous curvature of the bilayer, since we have considered two identical monolayers.
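This cancellation can be verified directly. The following spot-check (ours, with arbitrary parameter values) substitutes $r^\pm=\mp ec$, i.e. $r_n^\pm=0$, into Eq.~(\ref{fpm}) and recovers Eq.~(\ref{f}) with $c_0^b=0$:

```python
import random

# With r^+ = -e c and r^- = +e c (both neutral-surface densities
# r_n^pm = r^pm +- e c equal to zero), the sum f^+ + f^- of (fpm)
# reduces to sigma0 + (kappa/2) c^2: the bilayer free energy (f)
# with vanishing spontaneous curvature c_0^b.
random.seed(1)
sigma0, kappa, c0, k, e = [random.uniform(0.5, 2.0) for _ in range(5)]
for c in (-0.3, 0.0, 0.7):
    fp = sigma0/2 + kappa/4*c*c + kappa*c0/2*c + k/2*(-e*c + e*c)**2
    fm = sigma0/2 + kappa/4*c*c - kappa*c0/2*c + k/2*(+e*c - e*c)**2
    assert abs((fp + fm) - (sigma0 + kappa/2*c*c)) < 1e-9
print("f+ + f- = sigma0 + (kappa/2) c^2  (c_0^b = 0)")
```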
\subsection{Modification of the free energy due to a local pH change}
\label{modif}
In the experiment, when the pipette expels the NaOH solution close to the GUV, it induces a local increase of the pH, which affects the head groups of the phospholipids forming the membrane, as explained in Sec.~\ref{section_exp}. As some of these head groups become more negatively charged during this modification, the preferred area per lipid head group increases locally. Pursuing the analysis of Ref.~\cite{Fournier09}, we call $\phi(\mathbf{r},t)$ the fraction of the lipids of the external monolayer that are chemically modified by the local pH increase in the experiment. Since the pH on the membrane should never exceed 9, this fraction should remain very small (see the $\mathrm{pK_a}$ values in Sec.~\ref{section_exp}), so we may assume $\phi=\mathcal{O}(\epsilon)$.
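A rough order-of-magnitude estimate of our own supports $\phi=\mathcal{O}(\epsilon)$: applying the Henderson--Hasselbalch relation with the $\mathrm{pK}$ values quoted above and the 90:10 PC/PS composition gives a modified fraction of at most a few percent at pH 8--9. The treatment of the PC--hydroxide association as a simple protonation equilibrium is crude, so the numbers are indicative only.

```python
import math

# Fraction of chemically modified lipids: Henderson-Hasselbalch for
# PS deprotonation (pKa ~ 9.8) weighted by the 10 mol% PS content,
# plus PC association (pKa_eff ~ 11) weighted by 90 mol%.
def frac_modified(pH, pKa):
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for pH in (8.0, 9.0):
    phi = 0.10 * frac_modified(pH, 9.8) + 0.90 * frac_modified(pH, 11.0)
    print(pH, round(phi, 4))   # a few 1e-3 to a few 1e-2
```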
Since the characteristic timescale of an acido-basic reaction is determined by the diffusion time of the reactants~\cite{Eigen64}, we can consider that $\phi(\mathbf{r},t)$ follows instantaneously the pH field outside the vesicle. The hydroxide ions diffuse in the buffer solution with a diffusion coefficient $D_{OH^-}\approx5\times10^3\,\mu\mathrm{m^2/s}$ \cite{Daniele99}. Thus, in the typical $5\,\mathrm{s}$ of relaxation of the membrane deformation, they diffuse on a length of approximately $1.5 \times 10^2\,\mu\mathrm{m}$. This length is larger than the width of the instability and comparable with the size of the GUV. Besides, the pH field is probably also affected by the advection created by the flux coming from the micropipette, then by its retraction, and finally by the movement of the membrane. However, the central zone of the instability remains in contact with the zone of highest pH. At present, we have no knowledge of this time-dependent pH field beyond these rough estimates. Hence, to simplify, we are going to assume that $\phi(\mathbf{r},t)$ does not evolve significantly during the time of the instability. More precisely, we make the simplifying hypothesis that $\phi(\mathbf{r})$ is time-independent after the end of the injection ($t=0$):
\begin{quote}
\emph{(SH$_1$)}\quad $\phi(\mathbf{r},t)=\phi(\mathbf{r})$ for $t\ge0$.
\end{quote}
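The diffusion length underlying this hypothesis corresponds to the scaling $\ell\sim\sqrt{Dt}$; a quick check of ours, with order-one prefactors ignored:

```python
import math

# Hydroxide diffusion length over the typical relaxation time
D = 5.0e3      # um^2/s
t = 5.0        # s
l = math.sqrt(D * t)
print(round(l))   # ~ 1.6e2 um, comparable with the GUV size
```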
We also assume that the inner monolayer is not affected by the pH increase outside the vesicle. The permeation coefficient of hydroxide ions through a lipid membrane is $P_{OH^-}\approx 10^{-3}-10^{-5}\,\mathrm{cm/s}$, which yields a negligible pH increase inside the vesicle on the timescale of our experiments \cite{Elamrani83, Seigneuret86}.
\emph{A priori}, the constitutive constants of monolayer ``$+$" are all affected by the chemical modification, i.e., they depend on $\phi(\mathbf{r})$. However, since we focus on small deformations around the flat shape and the uniform density $\rho^+=\rho_0$, we can write to second order in $\epsilon$:
\begin{eqnarray}
f^+&=&\frac{\sigma_0}{2}+\sigma_1\phi+\frac{\sigma_2}{2}\phi^2+\tilde\sigma\left(1+r^+\right)\phi\ln\phi
+\frac{\kappa}{4}c^2+\frac{\kappa}{2}\left(c_0+\tilde c_0\phi\right)c\nonumber\\&+&\frac{k}{2}\left(r^++ ec\right)^2\,.
\label{fmod}
\end{eqnarray}
With respect to expression~(\ref{fpm}), the monolayer tension and spontaneous curvature of monolayer ``$+$" have been modified according to
\begin{eqnarray}
\frac{\sigma_0}{2}&\to&\frac{\sigma_0}{2}+\sigma_1\phi+\frac{\sigma_2}{2}\phi^2+\tilde\sigma(1+r^+)\phi\ln\phi\,,\\
c_0&\to&c_0+\tilde c_0\phi\,.
\end{eqnarray}
The change of the other constitutive constants ($k$, $\kappa$ and $e$) is irrelevant as far as the free-energy density at second order is concerned. We have assumed that the constitutive constants were analytical functions of $\phi$, apart from the $\tilde\sigma(1+r^+)\phi\ln\phi$ term, which corresponds to a mixing entropy term per unit area \cite{Futur}. Aside from this additional term, expression~(\ref{fmod}) is the most general second-order expansion of the monolayer free-energy density (in the three small variables $ec$, $r^+$ and $\phi$) when the total masses of modified and non-modified lipids remain constant \cite{Futur}.
\subsection{Force density in the membrane}
The force density in each monolayer of a bilayer with lipid density and composition inhomogeneities described by the free-energy densities~(\ref{fpm})--(\ref{fmod}) has been derived in Ref.~\cite{Futur}. It reads to first order in $\epsilon$
\begin{eqnarray}
p_i^+&=&-k\,\partial_i\left(r^++ec-\frac{\sigma_1}{k}\phi\right)\,,\label{pip}\\
p_i^-&=&-k\,\partial_i\left(r^--ec\right)\,,\label{pim}\\
p_n&=&\sigma_0 c-\tilde{\kappa}\Delta c-k e\,\Delta \left(r^+-r^-\right) - \frac{\kappa \tilde{c}_0}{2}\Delta\phi\nonumber\\
&=&\sigma_0 c-\tilde{\kappa}\Delta c-k e\,\Delta \left(r^+-r^--\frac{\sigma_1}{k}\phi\right) - \frac{\kappa \bar{c}_0}{2}\Delta\phi\label{pn}\,,
\end{eqnarray}
where $p_i^\pm$ is the force density in monolayer ``$\pm$" acting in a direction $i$ tangential to the membrane, while $p_n=p_n^++p_n^-$ is the total normal force density in the membrane. In these formulas, we have defined the constants $\tilde\kappa=\kappa+2ke^2$ and $\bar c_0=\tilde c_0+2\sigma_1 e/\kappa$. The symbol $\partial_i$ denotes partial differentiation in the direction $i$, while $\Delta$ is the covariant Laplacian operator.
Let us discuss the terms depending on $\phi$ in the force densities. For a given composition $\phi(\mathbf{r})$, the equilibrium density of a plane monolayer with fixed total mass is obtained by minimizing the free energy per unit mass $f^\pm/\rho^\pm$ for $c=0$. Considering a fixed total mass is justified here since the timescales of our instability are much shorter than the flip-flop characteristic time, which is assumed not to be significantly modified by the local chemical modification we study.
For monolayer ``$+$", this gives at first order in $\epsilon$
\begin{equation}
r^+_{\mathrm{eq}}(\phi)\equiv \frac{\rho^+_\mathrm{eq}(\phi)-\rho_0}{\rho_0}=\frac{\sigma_0/2+\sigma_1\phi}{k}\,,
\label{req}
\end{equation}
where we have used the fact that $\sigma_0\ll k$. Thus, the chemical modification changes the scaled equilibrium density of the plane monolayer ``$+$" by the amount
\begin{equation}
\delta r^+_\mathrm{eq}=r^+_\mathrm{eq}(\phi)-r^+_\mathrm{eq}(0)=\frac{\sigma_1}{k}\phi\,.
\label{dreq}
\end{equation}
We observe that this change of the equilibrium density at $c=0$ appears in the force densities (\ref{pip}) and (\ref{pn}), which is in agreement with the simpler theory of Ref.~\cite{Fournier09}.
The equilibrium density obtained by minimization at $c=0$, i.e. for a flat membrane, will be referred to as the ``plane-shape equilibrium density" in the following.
Let us now focus on the term $-\frac{1}{2}\kappa \bar{c}_0\Delta\phi$ in (\ref{pn}), which comes from the chemical modification too, but which is not related to the change of the plane-shape equilibrium density. The membrane is at mechanical equilibrium if the force density vanishes. It can be seen from (\ref{pip}) and (\ref{pn}) that the flat shape $c=0$ is a solution to $\bm{p}=\mathbf{0}$ at a given inhomogeneous $\phi(\mathbf{r})$ only if $\bar c_0=0$. Thus, $\bar c_0\phi$ (and not $\tilde c_0\phi$) represents the actual change of the spontaneous curvature of monolayer ``$+$" caused by the chemical modification.
\subsection{Change of the spontaneous curvature and of the equilibrium density}
\label{Mod_subsec}
Changing the local spontaneous curvature of a bilayer membrane may result in shape or budding instabilities~\cite{Tsafrir01,Tsafrir03,Staneva05}. Alternatively, affecting locally the plane-shape equilibrium density non-symmetrically in each monolayer of a bilayer can also yield shape or budding instabilities~\cite{Sens04, Fournier09}. However, until now, no study has considered an asymmetric modification of both the spontaneous curvature and the plane-shape equilibrium density in the two monolayers. As can be seen from Eq.~(\ref{pn}), changing the plane-shape equilibrium density of the lipids produces a destabilizing normal force per membrane unit area $\delta p_n^{(1)}=e\sigma_1\Delta\phi$, while changing the spontaneous curvature yields $\delta p_n^{(2)}=-\frac{1}{2}\kappa \bar c_0\Delta\phi$. Thus, both of these changes should induce a shape or budding instability.
In the case where only the equilibrium density is affected, which corresponds to $\sigma_1\ne0$ and $\bar c_0=0$, the budding should vanish as soon as the lipid density has relaxed to its new equilibrium value, even if the modified lipids remain in place. Indeed, if $r^-$ is homogeneous, and if $r^+(\mathbf{r})$ reaches $r_\mathrm{eq}^+(\mathbf{r})$ (defined in Eq.~(\ref{req})), the equilibrium condition $p_n=p_i^\pm=0$ is satisfied for $c=0$, which means that the flat shape is an equilibrium shape. In contrast, in the case where only the spontaneous curvature is modified, i.e., assuming $\sigma_1=0$ and $\bar c_0\ne0$, the budding persists as long as the modified lipids remain in place. Indeed, the equilibrium condition $p_n=p_i^\pm=0$ can be satisfied only if $c(\mathbf{r})$ satisfies $\sigma_0 c(\mathbf{r})-\kappa\Delta c(\mathbf{r})=\frac{1}{2}\kappa\bar c_0\Delta\phi(\mathbf{r})$, which implies that $c(\mathbf{r})\ne0$ if $\Delta\phi(\mathbf{r})\ne0$, so that the plane shape is not an equilibrium shape for a generic inhomogeneous $\phi(\mathbf{r})$. Thus, although both mechanisms should lead to a shape or budding instability, they are not physically equivalent. We will study the differences between them in more detail in Sec.~\ref{comp}.
It is interesting to determine the relative importance of the two above-mentioned mechanisms in a budding instability. For this, let us first compare the relative variation of the spontaneous curvature $\delta c_0/c_0$ and the relative variation of the plane-shape equilibrium density $\delta\rho_\mathrm{eq}^+/\rho_\mathrm{eq}^+$ induced by the chemical modification in monolayer ``$+$". As explained in Sec.~\ref{section_exp}, this chemical modification affects the head groups of some lipids, which become more negatively charged, so that the preferred area of the head group of these molecules increases. This means that our chemical modification may be viewed roughly as an increase of the preferred area per lipid head group. It is now necessary to resort to a microscopic model to express the variation of the spontaneous curvature and of the plane-shape equilibrium density as a function of the variation of the preferred area per lipid head group.
The simplest way of doing this is to take a mere geometrical model in which each lipid consists of a head group with area $a_\mathrm{h0}=l_\mathrm{h0}^2$ and of a chain group with a smaller area $a_\mathrm{c0}=l_\mathrm{c0}^2$, situated at a fixed distance $s$ from each other (see Fig.~\ref{Figgeom}). Both the head and the chain are supposed to be incompressible, and to favor close-packing. In this crude model, the plane-shape equilibrium density is equal to $m/a_\mathrm{h0}$, where $m$ is the mass of a lipid (see Fig.~\ref{Figgeom} (a)). The spontaneous curvature is obtained when both the heads and the chains are close-packed, i.e. when the area per chain is equal to $a_\mathrm{c0}$ and the area per head is equal to $a_\mathrm{h0}$ (see Fig.~\ref{Figgeom} (b)): geometry yields $c_0=2(l_\mathrm{h0}-l_\mathrm{c0})/(l_\mathrm{c0} s)$. If the preferred area per lipid head group $a_\mathrm{h0}$, or equivalently the corresponding characteristic length $l_\mathrm{h0}$, is modified, this yields $\delta\rho_\mathrm{eq}/\rho_\mathrm{eq}=-2 \delta l_\mathrm{h0}/l_\mathrm{h0}$ and $\delta c_0/c_0=2 \delta l_\mathrm{h0}/(l_\mathrm{c0} c_0 s)$. Thus, given that $c_0 s\ll1$ (since lipids spontaneously organize into planar membranes and not micelles), we obtain
\begin{equation}
\frac{\delta c_0}{c_0}\approx -\frac{1}{c_0 s} \frac{\delta\rho_\mathrm{eq}}{\rho_\mathrm{eq}}\,.
\label{geom}
\end{equation}
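As a numerical sanity check of relation~(\ref{geom}), not part of the derivation itself, one can evaluate both sides for hypothetical cone dimensions; the values of $l_\mathrm{h0}$, $l_\mathrm{c0}$ and $s$ below are illustrative assumptions, not measured quantities:

```python
# Sanity check of Eq. (geom); all numerical values are illustrative assumptions.
l_h0 = 0.80e-9   # preferred head-group size (m), hypothetical
l_c0 = 0.75e-9   # chain size (m), hypothetical
s = 1.0e-9       # head-chain distance (m), hypothetical

c0 = 2 * (l_h0 - l_c0) / (l_c0 * s)   # spontaneous curvature of the cone model
rho = lambda lh: 1.0 / lh**2          # plane-shape equilibrium density ~ m/a_h0 (m dropped)

dl = 1e-12                            # small increase of the preferred head size
drho_rel = (rho(l_h0 + dl) - rho(l_h0)) / rho(l_h0)   # ~ -2 dl/l_h0
dc0_rel = 2 * dl / (l_c0 * c0 * s)                    # relative change of c0

pred = -drho_rel / (c0 * s)           # right-hand side of Eq. (geom)
print(c0 * s, dc0_rel, pred)
```

With these numbers $c_0 s\approx0.13\ll1$, and the two relative variations agree up to the $l_\mathrm{h0}/l_\mathrm{c0}$ correction neglected in Eq.~(\ref{geom}).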
\begin{figure}[h t b]
\centering
\includegraphics[width=0.3\textwidth]{figure3a_3b.eps}
\caption[]{Two-dimensional illustration of our simple geometrical model: the lipid head groups have a characteristic size $l_\mathrm{h0}$, and the chains have a smaller characteristic size $l_\mathrm{c0}$. The lipids are viewed as solid cones favoring close-packing. (a) Plane shape: the area per lipid is given by $a_\mathrm{h0}=l_\mathrm{h0}^2$. (b) The spontaneous curvature $c_0$ of the monolayer is obtained when both the heads and the chains are close-packed. The conical shape of the lipids has been exaggerated for clarity: in reality, $c_0 s\ll 1$.\label{Figgeom}}
\end{figure}
A less crude microscopic model that we may use to express the variation of $c_0$ and $\rho_\mathrm{eq}$ caused by a change of the preferred area per head group $a_\mathrm{h0}$ was presented in Ref.~\cite{Miao94}. In this model, the head and chain of a lipid have respective preferred areas $a_\mathrm{h0}$ and $a_\mathrm{c0}$, while in the monolayer, the actual areas of these groups are $a_\mathrm{h}$ and $a_\mathrm{c}$. The elastic energy per molecule is written as
\begin{equation}
g(a_\mathrm{h},a_\mathrm{c})=\frac{K_\mathrm{h}}{2}a_\mathrm{h0}\left(\frac{a_\mathrm{h}}{a_\mathrm{h0}}-1\right)^2+\frac{K_\mathrm{c}}{2}a_\mathrm{c0}\left(\frac{a_\mathrm{c}}{a_\mathrm{c0}}-1\right)^2\,,
\label{modeleMiao}
\end{equation}
where $K_\mathrm{h}$ and $K_\mathrm{c}$ are stretching elastic constants for each group. In Ref.~\cite{Miao94}, it has been shown that this free energy per molecule can be used to derive the area-difference elasticity model of a membrane. Following the same lines, it is possible to show that Eq.~(\ref{modeleMiao}) yields the following free energy per unit area of a monolayer:
\begin{equation}
f^\pm=\frac{\kappa}{4}c^2\pm \frac{\kappa c_0}{2}c+\frac{k}{2}\left (r^\pm \pm ec\right)^2\,,
\end{equation}
which corresponds to our free-energy density Eq.~(\ref{fpm}) in the case where the reference density $\rho_0$ is taken equal to $\rho_\mathrm{eq}$ (which implies $\sigma_0=0$ \cite{Futur}). The calculations in Ref.~\cite{Miao94} provide the following expressions of $c_0$ and $\rho_\mathrm{eq}$ from microscopic constants:
\begin{eqnarray}
c_0&=&\frac{(K_\mathrm{h}a_\mathrm{c0}+K_\mathrm{c}a_\mathrm{h0})(a_\mathrm{h0}-a_\mathrm{c0})}{s(K_\mathrm{h}+K_\mathrm{c})a_\mathrm{h0}a_\mathrm{c0}}\,,\\
\rho_\mathrm{eq}&=&\frac{m}{a_0}=m\frac{K_\mathrm{h}a_\mathrm{c0}+K_\mathrm{c}a_\mathrm{h0}}{(K_\mathrm{h}+K_\mathrm{c})a_\mathrm{h0}a_\mathrm{c0}}\,,
\end{eqnarray}
where $a_0$ is the equilibrium area per lipid on the neutral surface of the monolayer. It is straightforward to express the relative variations $\delta c_0/c_0$ and $\delta\rho_\mathrm{eq}/\rho_\mathrm{eq}$ as a function of the relative variation of the preferred area per head group $\delta a_\mathrm{h0}/a_\mathrm{h0}$, which yields
\begin{equation}
\frac{\delta c_0}{c_0}=-\frac{K_\mathrm{h}a_\mathrm{c0}^2+K_\mathrm{c}a_\mathrm{h0}^2}{K_\mathrm{h}a_\mathrm{c0}(a_\mathrm{h0}-a_\mathrm{c0})}\,\frac{\delta\rho_\mathrm{eq}}{\rho_\mathrm{eq}}\approx-\frac{K_\mathrm{h}+K_\mathrm{c}}{K_\mathrm{h}}\,\frac{1}{c_0 s}\,\frac{\delta\rho_\mathrm{eq}}{\rho_\mathrm{eq}}\,,
\label{Miao}
\end{equation}
where we have used the fact that $c_0 s=(a_\mathrm{h0}-a_\mathrm{c0})/a_0\ll1$. The values of the elastic constants $K_\mathrm{h}$ and $K_\mathrm{c}$ should be of the same order, and one may expect $K_\mathrm{c}<K_\mathrm{h}$ since the chains can reorganize spatially. The result (\ref{geom}) from the crude geometrical model is recovered when $K_\mathrm{h}\to \infty$ while $K_\mathrm{c}$ is finite, as it should.
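The approximation made in Eq.~(\ref{Miao}) can also be checked numerically; the sketch below uses arbitrary illustrative values of $K_\mathrm{h}$, $K_\mathrm{c}$, $a_\mathrm{h0}$ and $a_\mathrm{c0}$ chosen so that $a_\mathrm{h0}-a_\mathrm{c0}\ll a_0$:

```python
# Compare the exact prefactor of Eq. (Miao) with its approximate form;
# all numerical values are illustrative assumptions.
K_h, K_c = 1.0, 0.5        # head/chain stretching constants (arbitrary units)
a_h0, a_c0 = 0.62, 0.60    # preferred areas (nm^2), nearly equal

# Equilibrium area per lipid and c0*s, from the expressions quoted in the text
a0 = (K_h + K_c) * a_h0 * a_c0 / (K_h * a_c0 + K_c * a_h0)
c0s = (a_h0 - a_c0) / a0

exact = -(K_h * a_c0**2 + K_c * a_h0**2) / (K_h * a_c0 * (a_h0 - a_c0))
approx = -(K_h + K_c) / (K_h * c0s)
print(exact, approx)
```

For these nearly equal areas the exact and approximate prefactors agree to better than one percent.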
Since $s\approx e$, the results (\ref{geom}) and (\ref{Miao}) coming from the two microscopic models can both be written in the form
\begin{equation}
\left|\frac{\delta c_0}{c_0}\right|\approx \frac{\alpha}{e c_0} \left|\frac{\delta\rho_\mathrm{eq}}{\rho_\mathrm{eq}}\right|\,,
\label{oom}
\end{equation}
where the order of magnitude of the numerical coefficient $\alpha$ is one.
We may now compare the destabilizing pressures $\delta p_n^{(1)}$ and $\delta p_n^{(2)}$ caused by each effect: $|\delta p_n^{(2)}|/|\delta p_n^{(1)}|=\kappa|\bar c_0|/(2e|\sigma_1|)$. The relative variation of the plane-shape equilibrium density in monolayer ``$+$'' induced by the chemical modification is $\delta\rho_\mathrm{eq}^+/\rho_0= \delta r_\mathrm{eq}^+=\sigma_1\phi/k$ (see Eq.~(\ref{dreq})). Moreover, the relative variation of the spontaneous curvature caused by the same chemical modification is $\delta c_0/c_0=\bar c_0\phi/c_0$. Using Eq.~(\ref{oom}) and the former expressions yields $|\bar c_0|/|\sigma_1|\approx\alpha/(ek)$. Therefore, we obtain $|\delta p_n^{(2)}|/|\delta p_n^{(1)}|=\alpha\kappa/(2ke^2)$. Using the well-known orders of magnitude $\kappa\approx10^{-19}\,\mathrm{J}$, $e\approx1\,\mathrm{nm}$ and $k\approx0.1\,\mathrm{J/m^2}$~\cite{Safran}, we find that the two destabilizing pressures should have the same order of magnitude.
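Plugging in the quoted orders of magnitude (with $\alpha=1$) makes this estimate explicit; this is plain arithmetic, not new physics:

```python
# Order-of-magnitude estimate of |delta p_n^(2)| / |delta p_n^(1)| = alpha*kappa/(2*k*e^2),
# with the typical values quoted in the text and alpha = 1.
kappa = 1e-19   # bending rigidity (J)
e = 1e-9        # distance between membrane midsurface and monolayer neutral surface (m)
k = 0.1         # stretching modulus (J/m^2)
alpha = 1.0     # order-one coefficient of Eq. (oom)

ratio = alpha * kappa / (2 * k * e**2)
print(ratio)    # 0.5: the two destabilizing pressures are indeed comparable
```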
Thus, in the normal force density Eq.~(\ref{pn}), the destabilizing term $\delta p_n^{(1)}$ due to the change of plane-shape equilibrium density should usually be comparable to the other destabilizing term $\delta p_n^{(2)}$ which comes from the change of the spontaneous curvature. We are thus going to take both of these effects into account in our hydrodynamic description of the instability.
\subsection{Hydrodynamic equations}
\label{hydro}
As we focus on small deformations with respect to the plane shape, it is convenient to describe the membrane in the Monge gauge, i.e., by its height $z=h(x,y)$ with respect to a reference plane, $x$ and $y$ being Cartesian coordinates in the reference plane. Then, $c=\nabla^2 h+\mathcal{O}(\epsilon^2)$ for small deformations such that $\partial_i h=\mathcal{O}(\epsilon)$ and $\partial_i\partial_j h=\mathcal{O}(\epsilon)$, where $i,j\in\{x,y\}$. Let us denote by $\mathbf{v}^\pm(x,y,t)$ the two-dimensional velocities of the lipids within the monolayers. Recall that the fraction of chemically modified lipids, $\phi(x,y)$, is assumed to be time-independent, being fixed by the surrounding pH field, which is modelled as static (hypothesis \textit{(SH$_1$)}). Since we have assumed $\phi=\mathcal{O}(\epsilon)$, and since the flow is induced by the chemical modification, we also have $\mathbf{v}=\mathcal{O}(\epsilon)$. In the Monge gauge, the force densities (\ref{pip})--(\ref{pn}) become \cite{Futur}
\begin{eqnarray}
p_i^+(x,y)&=&-k\,\partial_i\left(r^++e\nabla^2 h-\frac{\sigma_1}{k}\phi\right)\,,\label{pip_b}\\
p_i^-(x,y)&=&-k\,\partial_i\left(r^--e\nabla^2 h\right)\,,\label{pim_b}\\
p_z(x,y)&=&\sigma_0 \nabla^2 h-\tilde{\kappa}\nabla^4 h-k e\,\nabla^2 \left(r^+-r^-\right)- \frac{\kappa \tilde{c}_0}{2}\nabla^2\phi\label{pn_b}\,.
\end{eqnarray}
The fraction of the chemically modified lipids may be expanded in Fourier modes: $\phi(x,y)=\sum_\mathbf{q}\phi_\mathbf{q}\,e^{i(q_x x+q_y y)}$. Since we work at first order in $\epsilon$, the hydrodynamic equations are going to be linear, which justifies the use of a Fourier expansion. Since the region where the lipids are modified has a well-defined width $\sim\!q^{-1}$ (see Sec.~\ref{Fit}), we are going to identify $\phi(x,y)$ with its dominant Fourier mode.
This approximation will be discussed in Sec.~\ref{Sec_discussion}.
Furthermore, we will consider a wavevector $\mathbf{q}$ parallel to the $x$ axis (this can be done without any loss of generality by choosing the orientation of the axes appropriately). We thus make a second (and last) simplifying hypothesis:
\begin{quote}
\emph{(SH$_2$)}\quad
One-mode approximation: $\phi(x,y)\approx\phi_q\,e^{iqx}$.
\end{quote}
Following this hypothesis, we may also write $r^\pm(x,y,t)=r^\pm_q(t)\,e^{iqx}$, $v_x^\pm(x,y,t)= v_q^\pm(t)\,e^{iqx}$ and $h(x,y,t)= h_q(t)\,e^{iqx}$ at linear order in $\epsilon$.
The dynamical equations describing the joint evolution of the membrane shape and of the lipid densities have been developed in another context in Refs.~\cite{Seifert93,Evans94}. They have been applied to the study of the instability caused by a local density change in the case where $\bar c_0=0$ in Ref.~\cite{Fournier09}. In terms of the above Fourier components, they read
\begin{eqnarray}
&&\hspace{-1cm}
-\eta_2q^2v_q^+
-ikq\left(r_q^+ - e q^2 h_q - \frac{\sigma_1}{k}\phi_q\right)
-2\eta q v_q^+
-b\left(v_q^+ - v_q^-\right)=0\,,
\label{un}\\
&&\hspace{-1cm}
-\eta_2q^2v_q^-
-ikq\left(r_q^- + e q^2 h_q\right)
-2\eta q v_q^-
+b\left(v_q^+ - v_q^-\right)=0\,,
\label{deux}\\
&&\hspace{-1cm}
-\left(\sigma_0 q^2+\tilde\kappa q^4\right)h_q
+k e q^2\left(r_q^+-r_q^-\right)
+ \frac{\kappa \tilde{c}_0}{2}q^2\phi_q
-4\eta q\frac{\partial h_q}{\partial t}=0\,,
\label{trois}\\
&&\hspace{-1cm}
\frac{\partial r_q^\pm}{\partial t} + iq v_q^\pm=0\,.
\label{quatre}
\end{eqnarray}
Equations~(\ref{un}) and (\ref{deux}) are generalized Stokes equations describing the balance of the forces per unit area acting tangentially in monolayer ``$+$" and in monolayer ``$-$", respectively. The first term in each of these equations corresponds to the viscous force density due to the two-dimensional flow of the lipids, $\eta_2$ being the two-dimensional viscosity of the lipids. The second term is the density of elastic forces given by Eqs.~(\ref{pip_b}) and (\ref{pim_b}). The third term corresponds to the viscous stress exerted by the flow of the surrounding fluid, which is set in motion by the two-dimensional flow of the lipids in the membrane. The viscosity of the external fluid is denoted by $\eta$. The last term is the stress originating from the intermonolayer friction~\cite{Evans94}, $b$ being the intermonolayer friction coefficient. Equation~(\ref{trois}) describes the balance of the forces per unit area acting normally to the membrane. Its first three terms represent the elastic force density given by Eq.~(\ref{pn_b}). Its last term is the normal viscous stress exerted by the flow of the surrounding fluid, the normal velocity of which matches $\partial h/\partial t$ on the membrane. Finally, Equation~(\ref{quatre}) expresses the conservation of mass at first order in $\epsilon$. Typical values of the dynamical parameters, used throughout, are $\eta_2\approx10^{-9}\,\mathrm{J\,s/m^2}$, $\eta\approx10^{-3}\,\mathrm{J\,s/m^3}$, and $b\approx10^8-10^9\,\mathrm{J\,s/m^4}$~\cite{Pott02,Shkulipa06}.
Comparing these dynamical equations with the ones of Ref.~\cite{Fournier09} shows that $-\sigma_1 \phi_q/k$ corresponds to the scalar field $\epsilon(\mathbf{r},t)$ introduced in Ref.~\cite{Fournier09} to describe the change of the equilibrium density, which is in agreement with Eq.~(\ref{dreq}). Ref.~\cite{Fournier09} focused on the effect of the variation of the plane-shape equilibrium density, which corresponds to taking $\bar c_0=0$ here. Here, both the change of the equilibrium density and the change of the spontaneous curvature are taken into account.
\subsection{Resolution of the hydrodynamic equations}
\label{resol}
Let us define the symmetric combination of the scaled densities, $\bar r_q(t)=r_q^++r_q^-$, and the antisymmetric one, $\hat r_q(t)=r_q^+-r_q^-$. Eliminating $v_q^\pm$ from
Eqs.~(\ref{un})--(\ref{deux}) using Eq.~(\ref{quatre}) and adding them together gives
\begin{equation}
\frac{\partial\bar r_q}{\partial t}=-\frac{kq}{\eta_2q+2\eta}
\left(\bar r_q-\displaystyle\frac{\sigma_1}{k}\phi_q\right)\,,
\label{dynabar}
\end{equation}
while subtracting them yields
\begin{equation}
\hspace{-1.2cm}
\frac{\partial}{\partial t}\left(\begin{array}{c}
q h_q\\\\
\hat r_q
\end{array}\right)=-
\left(\begin{array}{cc}
\displaystyle\frac{\sigma_0 q+\tilde\kappa q^3}{4\eta}
&-\displaystyle\frac{keq^2}{4\eta}\\\\
-\displaystyle\frac{keq^3}{b}
&\displaystyle\frac{kq^2}{2b}
\end{array}\right)
\left(\begin{array}{c}
\displaystyle q h_q\\\\
\hat r_q
\end{array}\right)
+\left(\begin{array}{c}
\displaystyle\frac{\kappa \tilde c_0 q^2}{8\eta}\phi_q\\\\
\displaystyle\frac{\sigma_1 q^2}{2 b}\phi_q
\end{array}\right),
\label{relax}
\end{equation}
where we have assumed $\eta_2q^2\ll b$ and $\eta q\ll b$, which is well verified for $\pi/q\approx40\,\mathrm{\mu m}$ corresponding to the width of the deformation observed in the experiments.
Equation~(\ref{dynabar}) shows that $\bar r_q$ relaxes to its equilibrium value $\sigma_1\phi_q/k$ with a very short timescale $\tau=(\eta_2q+2\eta)/(kq)$. Indeed, with typically $\pi/q\approx40\,\mathrm{\mu m}$, and with the parameters given above, we obtain $\tau\approx0.3\,\mathrm{\mu s}$, which can be considered instantaneous in our experiment.
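The quoted value of $\tau$ follows directly from the typical parameter values given above; a minimal numerical check:

```python
from math import pi

# Relaxation time tau = (eta_2 q + 2 eta)/(k q) of the symmetric density mode,
# with the typical parameter values quoted in the text.
eta2 = 1e-9     # two-dimensional lipid viscosity (J s/m^2)
eta = 1e-3      # viscosity of the surrounding water (J s/m^3)
k = 0.1         # stretching modulus (J/m^2)
q = pi / 40e-6  # wavevector of a deformation of width ~40 micrometers (1/m)

tau = (eta2 * q + 2 * eta) / (k * q)
print(tau)      # ~2.6e-7 s, i.e. about 0.3 microseconds
```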
Equation~(\ref{relax}) can be solved by diagonalizing the square matrix involved. In our experiments, we have $q\ll\sqrt{\sigma_0/\tilde\kappa}$. Indeed, the tension of a vesicle is larger than about $10^{-7} \,\mathrm{J/m^{2}}$, so $\sqrt{\sigma_0/\tilde\kappa}\geq 10^6 \,\mathrm{m^{-1}}$. In this regime, the eigenvalues of the square matrix in Eq.~(\ref{relax}) are
\begin{equation}
\gamma_1\approx \frac{kq^2}{2b}\mathrm{\,\,and\,\,}\gamma_2\approx\frac{\sigma_0 q}{4\eta}\,.
\label{eigenvalues}
\end{equation}
This result can be checked rapidly by noting that, in the regime where $q\ll\sqrt{\sigma_0/\tilde\kappa}$, the coefficient $keq^3/b$ is much smaller than all the other coefficients in the matrix, so that the square matrix in Eq.~(\ref{relax}) may be approximated by an upper triangular matrix. The deformation thus evolves according to
\begin{equation}
h_q(t)=A e^{-\gamma_1t}+Be^{-\gamma_2t}+\frac{\kappa\bar c_0}{2\sigma_0}\phi_q\,,
\label{evol_h_gen}
\end{equation}
where the constants $A$ and $B$ can be determined from the initial conditions on $h_q$ and $\hat r_q$. The term $\kappa\bar c_0 \phi_q/(2\sigma_0)$ comes from the constant solution of the inhomogeneous equation (\ref{relax}), which corresponds physically to the residual deformation at equilibrium. We find that this residual deformation vanishes if $\bar c_0=0$, i.e., in the case where only the plane-shape equilibrium density (and not the spontaneous curvature) is modified, which is consistent with the previous discussions. The characteristic times $\gamma_1^{-1}$ and $\gamma_2^{-1}$ are both much longer than $\tau$, the longest one being $\gamma_1^{-1}$, which gives a total relaxation time of a few seconds. This slow relaxation originates from the strong intermonolayer friction, which is involved in the antisymmetric changes of the monolayer densities, as the lipids of one monolayer move relative to the other monolayer.
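The approximate eigenvalues~(\ref{eigenvalues}) and the residual deformation $\kappa\bar c_0\phi_q/(2\sigma_0)$ in Eq.~(\ref{evol_h_gen}) can be verified directly on the matrix of Eq.~(\ref{relax}). The sketch below does so numerically; the values of $\sigma_0$, $\sigma_1$, $\tilde c_0$ and $\phi_q$ are hypothetical illustration values, the others are the typical values quoted in the text:

```python
import numpy as np
from math import pi

# Check Eq. (eigenvalues) and the residual deformation of Eq. (evol_h_gen)
# against the matrix of Eq. (relax). sigma0, sigma1, c0_tilde and phi_q are
# hypothetical illustration values; the rest are typical values from the text.
kappa, k, e, b, eta = 1e-19, 0.1, 1e-9, 1e8, 1e-3
sigma0, sigma1, c0_tilde, phi_q = 1e-6, 1e-3, 1e6, 0.1
kappa_t = kappa + 2 * k * e**2              # tilde kappa
c0_bar = c0_tilde + 2 * sigma1 * e / kappa  # bar c_0
q = pi / 40e-6                              # q << sqrt(sigma0/kappa_t) holds here

# Square matrix and source term of Eq. (relax), acting on (q h_q, r_hat_q)
M = np.array([[(sigma0 * q + kappa_t * q**3) / (4 * eta), -k * e * q**2 / (4 * eta)],
              [-k * e * q**3 / b,                          k * q**2 / (2 * b)]])
S = np.array([kappa * c0_tilde * q**2 * phi_q / (8 * eta),
              sigma1 * q**2 * phi_q / (2 * b)])

g1, g2 = np.sort(np.linalg.eigvals(M).real)  # exact relaxation rates
h_res = np.linalg.solve(M, S)[0] / q         # stationary (residual) deformation
print(g1, g2, h_res)
```

One finds $\gamma_1\approx kq^2/(2b)$ and $\gamma_2\approx\sigma_0 q/(4\eta)$ to within a few percent, and a residual deformation close to $\kappa\bar c_0\phi_q/(2\sigma_0)$, consistent with the text.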
\subsection{Qualitative picture of the dynamics in two limiting cases}
\label{comp}
In Sec.~\ref{Mod_subsec}, we have shown that it is physically different to change locally the plane-shape equilibrium density and to change locally the spontaneous curvature. Now that we have studied the dynamics of the instability, we are going to analyze the difference between these two effects. Although both of them occur in the generic case (see Sec.~\ref{Mod_subsec}), we are going to study the two limiting cases when only one of these changes occurs.
Let us first introduce a schematic representation of lipids that takes into account both the preferred area per lipid on the neutral surface, which is (by definition) independent of the membrane curvature, and the preferred curvature (see Fig.~\ref{presentation}). The preferred shape of a lipid is represented by a cone, superimposed on the lipid. The area per lipid on the neutral surface (within the monolayer) is symbolized by the upper surface of this cone, while its preferred curvature is symbolized by the angle of this cone.
\begin{figure}[h t b]
\begin{center}
\includegraphics[width=.45\textwidth]{figure4.eps}
\caption{First lipid (light grey): non-modified lipid. The length $\ell_n$ represents the preferred diameter per lipid on the neutral surface, while the angle $\alpha$ quantifies the preferred curvature. Second and third lipids (dark grey or brown): modified lipids. For the second lipid, the modification affects only the preferred density on the neutral surface, but not the preferred curvature. It is the contrary for the third lipid.}
\label{presentation}
\end{center}
\end{figure}
\subsubsection{Dynamics in the case where $\bar c_0=0$}
We are first going to study the dynamics of the relaxation in the case where $\bar c_0=0$. Studying this limiting case shows how modifying the plane-shape equilibrium density, and not the spontaneous curvature, can induce a transient shape instability.
\begin{figure}[h t b]
\begin{center}
\includegraphics[width=.3\textwidth]{figure5a_5e.eps}
\caption{Qualitative description of the dynamics of the instability if $\bar c_0=0$. The lipids that become chemically modified at $t=0$ are represented in dark grey (brown). The cones are dark (red) for the lipids experiencing compression or dilation with respect to this preferred area and in light grey (green) otherwise. The intensity of the grey (red) colour represents the degree of the compression or dilation. Here, the modification affects the preferred density on the neutral surface, but not the preferred curvature.}
\label{dessin}
\end{center}
\end{figure}
Let us consider simple initial conditions at $t=0$: $\bar r_q(0)=\hat r_q(0)=h_q(0)=0$. This corresponds to the idealized case where the injection of the NaOH solution is local in time: then, following \textit{(SH$_1$)}, we have $\phi(\mathbf{r},t)=0$ for $t<0$ and $\phi(\mathbf{r},t)=\phi(\mathbf{r})$ for $t\ge0$. At $t=0$, the preferred area per lipid suddenly increases for the lipids of the external monolayer that are chemically modified, since we expect their extra negative charge to increase repulsion. Thus, these modified lipids are effectively compressed (see Fig.~\ref{dessin}(b)).
Since the coupled dynamics of $\hat r_q$ and $h_q$ is much slower than the dynamics of $\bar r_q$, we may consider that at $t=0^+$, the equilibrium state $\bar r_q(0^+)=\sigma_1\phi_q/k$ has been reached, while $\hat r_q(0^+)=0$ still holds, so $r_q^\pm(0^+)=\frac{1}{2}\sigma_1\phi_q/k$. This means that after an infinitesimal time, half the compression of the lipids of the external monolayer is relaxed. However, the flow of the lipids of the external monolayer has dragged the lipids of the inner monolayer, because of the intermonolayer friction, thus inducing a dilation equal to the compression in the external monolayer (see Fig.~\ref{dessin}(c)).
After this very fast first step, the time evolution of the membrane deformation is given by
\begin{equation}
h_q(t)=\phi_q\frac{eq\sigma_1}{4\eta}\frac{e^{-\gamma_2t}-e^{-\gamma_1t}}{\gamma_2-\gamma_1}\,,
\label{ci_zero}
\end{equation}
which corresponds to Eq.~(\ref{evol_h_gen}) in the case where $\bar c_0=0$ and with the above initial conditions at $t=0^+$. The deformation appears and increases until a time $t_\mathrm{max}$ when it reaches a maximum \cite{Fournier09}, before decaying exponentially with timescale $\gamma_1^{-1}$. At the same time, the differential density $\hat r_q(t)=r_q^+-r_q^-$ decreases with timescale $\gamma_1^{-1}$ (see Fig.~\ref{dessin}(d)). The membrane curving and one monolayer sliding with respect to the other are two distinct responses to the discrepancy between the equilibrium densities of the two monolayers. Here, both of these phenomena occur in the transient regime, but the final state ($t\rightarrow\infty$) corresponds to a non-deformed membrane (since $\bar c_0=0$): the discrepancy is finally resolved by the relative sliding of the monolayers, which is the slowest process (see Fig.~\ref{dessin}(e)).
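Setting the time derivative of Eq.~(\ref{ci_zero}) to zero gives $t_\mathrm{max}=\ln(\gamma_1/\gamma_2)/(\gamma_1-\gamma_2)$. This can be checked numerically with the rates~(\ref{eigenvalues}); the prefactor $eq\sigma_1\phi_q/(4\eta)$ is set to one below, so only the time dependence is probed, and $\sigma_0=10^{-6}\,\mathrm{J/m^2}$ is an assumed illustrative tension:

```python
from math import pi, log, exp

# Time dependence of Eq. (ci_zero) with typical parameters of the text
# (sigma0 is an assumed illustrative tension).
k, b, eta, sigma0 = 0.1, 1e8, 1e-3, 1e-6
q = pi / 40e-6
g1 = k * q**2 / (2 * b)        # slow rate, Eq. (eigenvalues)
g2 = sigma0 * q / (4 * eta)    # fast rate, Eq. (eigenvalues)

f = lambda t: (exp(-g2 * t) - exp(-g1 * t)) / (g2 - g1)

t_max = log(g1 / g2) / (g1 - g2)   # where f'(t_max) = 0
print(t_max)                       # the transient peaks here, then decays as exp(-g1 t)
```

The transient vanishes at $t=0$ and at long times, and $|f|$ indeed peaks at $t_\mathrm{max}$.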
\subsubsection{Dynamics in the case where $\sigma_1=0$}
Let us now discuss the opposite case, where only the spontaneous curvature is modified. We take the same initial conditions as in the previous section: $\bar r_q(0)=\hat r_q(0)=h_q(0)=0$. At $t=0$, the preferred curvature per lipid suddenly increases for the lipids of the external monolayer that are chemically modified (see Fig.~\ref{dessin2}(b)). Here, solving Eq.~(\ref{dynabar}) shows that $\bar r_q$ remains equal to zero. If the usual approximation~(\ref{eigenvalues}) is used for the eigenvalues $\gamma_1$ and $\gamma_2$, the time evolution of the membrane deformation is given by
\begin{equation}
h_q(t)=\phi_q\frac{\kappa\bar c_0}{2\sigma_0}\left(1-e^{-\gamma_2t}\right)\,,
\label{ci_zero_2}
\end{equation}
which corresponds to Eq.~(\ref{evol_h_gen}) in the case where $\sigma_1=0$ and with the above initial conditions. The deformation increases exponentially with timescale $\gamma_2^{-1}$ towards a deformed final state, where $h_q=\kappa\bar c_0\phi_q/(2\sigma_0)$ (see Fig.~\ref{dessin2}(c)).
\begin{figure}[h t b]
\begin{center}
\includegraphics[width=.3\textwidth]{figure6a_6c.eps}
\caption{Qualitative description of the dynamics of the instability if $\sigma_1=0$. Here, the modification affects the preferred curvature, but not the preferred density on the neutral surface.}
\label{dessin2}
\end{center}
\end{figure}
Note that the long timescale $\gamma_1^{-1}$ involving the intermonolayer friction does not appear in Eq.~(\ref{ci_zero_2}). Indeed, as the plane-shape equilibrium density is not modified here, the lipids of one monolayer should not have to slide with respect to those of the other. In fact, since the membrane is curved in the final state, a small sliding is necessary because the equilibrium density on the membrane midsurface is curvature-dependent, contrary to the equilibrium density on the neutral surface of each monolayer. This small sliding is visible in Fig.~\ref{dessin2}(c). This effect can be accounted for by solving our full dynamical equations (\ref{relax}), without using the approximation~(\ref{eigenvalues}) for the eigenvalues. Indeed, solving these equations for typical values of the parameters yields a small increase of the differential density $\hat r_q(t)=r_q^+-r_q^-$ with timescale $\gamma_1^{-1}$, and a small contribution to the deformation $h_q$ with timescale $\gamma_1^{-1}$. We have checked that this contribution to the deformation is negligible.
Note that in real experimental conditions, \textit{(SH$_1$)} cannot hold indefinitely. In our experiment, this is mainly due to the diffusion of the hydroxide ions (see Sec.~\ref{modif}). Even if the modification of the lipids were irreversible, the modified lipids would diffuse in the monolayer, so the deformation would relax anyway. However, the corresponding relaxation timescale would be much longer than the previous ones, given the small diffusion coefficient of the lipids in a membrane, $D_\mathrm{lip}\approx1\,\,\mu\mathrm{m^2/s}$ \cite{Schmidt96}: the modified lipids would take about $L^2/D_\mathrm{lip}\approx10^4 \,\,\mathrm{s}$ to diffuse over a length $L\approx10^2\,\,\mu\mathrm{m}$.
\subsubsection{Comparison of the two types of changes of the membrane properties}
Solving the dynamical equations in the two limiting cases has confirmed that a local change of the plane-shape equilibrium density and a local change of the spontaneous curvature yield different dynamics. Indeed, for the length scales we consider, when only the plane-shape equilibrium density is modified, the deformation increases rapidly before relaxing towards a non-deformed shape with a long timescale $\gamma_1^{-1}$ (typically of several seconds). When only the spontaneous curvature is affected, the deformation increases towards a stationary curved state with a short timescale $\gamma_2^{-1}$ (typically shorter than 1 s) before relaxing slowly with a timescale of about one hour.
In contrast, when a global modification of the environment of a vesicle is considered, studying the change of its equilibrium shape does not make it possible to distinguish between a change of the spontaneous curvature and a change of the preferred area per lipid \cite{Lee99}. As a matter of fact, the equilibrium shape of a vesicle with fixed volume is fully determined within the area-difference elasticity (ADE) model by the value of the combined quantity
\begin{equation}
\overline{\Delta a_0}=\Delta a_0+\frac{2}{\alpha}c_0^b\,,
\end{equation}
where $\Delta a_0$ is the nondimensionalized preferred area difference between the two monolayers, $\alpha$ a nondimensional number involving the elastic constants of the membrane and $c_0^b$ denotes the spontaneous curvature of the bilayer \cite{Miao94}. The shape variations observed when such global modifications are performed have been interpreted as coming from a change of the spontaneous curvature $c_0^b$, under the assumption that the preferred area per lipid was not modified \cite{Lee99,Dobereiner99,Petrov99}.
Thus, the dynamical study of our instability induced by a local chemical modification of the lipids should make it possible to distinguish two types of changes of the membrane properties, which cannot be distinguished for a static global modification. Since both of them should be involved generically (see Sec.~\ref{Mod_subsec}), studying the dynamics may enable us to determine their relative importance. However, this is difficult to do precisely in the present experiments because of the diffusion of the hydroxide ions (see Sec.~\ref{modif}).
\section{Comparison between the experiments and the model}
\label{Fit}
\subsection{Fits of the experimental results}
\label{Fits_subsec}
In the previous section, we have presented a theoretical model for the curvature instability observed in our experiments. In order to compare the experimental results to the predictions of this model, we are going to fit the deformation height measured during a ``pulse'' experiment with Eq.~(\ref{evol_h_gen}). This equation corresponds to the general solution of the dynamical equations of our model, and it should describe the time evolution of the height $H(t)$ of the deformation of the vesicle with respect to its initial shape.
We present the analysis of several ``pulse'' experiments conducted on three different GUVs (numbered 1, 2 and 3 in the following). For each experiment, we take as an initial condition the last point where the pipette is present. If we call $H_0$ the height of the deformation at this time, which will be referred to as $t=0$ as in our theoretical analysis, this initial condition can be written $H(0)=H_0$. The experimental data is thus fitted with the formula
\begin{equation}
H(t)=(H_0-C-B)e^{-\gamma_1 t}+B e^{-\gamma_2 t}+C\,,
\label{Formule_fit}
\end{equation}
which corresponds to Eq.~(\ref{evol_h_gen}) with the above initial condition. There are thus four free parameters in our fits: $B$, $C$, $\gamma_1$ and $\gamma_2$. Besides, since we expect that the diffusion of the hydroxide ions is no longer negligible after a few seconds (see Sec.~\ref{modif}), we have carried out the fits on identical intervals of duration $5\,\mathrm{s}$ for all the experiments, starting from the end of the injection ($t=0$). Thus, we eliminate the longer-term evolution which should no longer be well-described by our model.
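As a quick sanity check on this functional form, the following sketch (illustrative only, not the authors' analysis code; the function name and all parameter values are hypothetical, with orders of magnitude taken from Table~\ref{Tab}, i.e., $\gamma_1\sim0.5\,\mathrm{s^{-1}}$ and $\gamma_2\sim5\,\mathrm{s^{-1}}$) evaluates Eq.~(\ref{Formule_fit}) and verifies its limiting behavior:

```python
# Minimal sketch (not the authors' code): evaluate the fitting formula
# H(t) = (H0 - C - B) e^(-gamma1 t) + B e^(-gamma2 t) + C
# and check its limiting behavior. All parameter values are hypothetical.
import math

def deformation_height(t, H0, B, C, gamma1, gamma2):
    """Height of the deformation at time t after the end of the injection."""
    return (H0 - C - B) * math.exp(-gamma1 * t) + B * math.exp(-gamma2 * t) + C

H0, B, C = 4.0, -3.0, 0.2        # micrometres; B < 0 lets H grow before relaxing
gamma1, gamma2 = 0.5, 5.0        # s^-1, orders of magnitude from Table 1

heights = [deformation_height(0.1 * k, H0, B, C, gamma1, gamma2) for k in range(51)]
print(round(heights[0], 6))      # the initial condition H(0) = H0 is recovered
print(max(heights) > H0)         # the deformation first grows, then relaxes towards C
```

In practice the four parameters $B$, $C$, $\gamma_1$ and $\gamma_2$ would be obtained by a nonlinear least-squares fit of the measured $H(t)$ to this expression, as described above.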
The best fits of the experimental data to Eq.~(\ref{Formule_fit}) are shown in Fig.~\ref{Exs_fits} for one typical ``pulse'' experiment for each of the three vesicles studied.
\begin{figure}[h t b]
\centering
\includegraphics[width=0.5\textwidth]{figure7a.eps}\\
\includegraphics[width=0.5\textwidth]{figure7b.eps}\\
\includegraphics[width=0.5\textwidth]{figure7c.eps}
\caption[]{Typical examples of fits of the experimental results. Dots: experimental data; line: best fit to Eq.~(\ref{Formule_fit}). The dots surrounded at $t=0$ correspond to the initial conditions. The lines at $t=5\,\mathrm{s}$ represent the upper bound of the time interval where the fit is carried out. The insets are superposed pictures of the initial shape of the GUV and of its most deformed shape in each experiment. The definition of $H(t)$ is recalled in these insets. (a) GUV 1 - Experiment number 5. (b) GUV 2 - Experiment number 8. (c) GUV 3 - Experiment number 15.\label{Exs_fits}}
\end{figure}
Fig.~\ref{Exs_fits} shows a good agreement between the experimental results and their best fit to Eq.~(\ref{Formule_fit}). Thus, this formula describes the experimental results well, which supports our theoretical model.
As can be seen on Fig.~\ref{Exs_fits}, the residual deformation at equilibrium is often close to zero. This residual deformation, which corresponds to the constant term $\kappa\bar c_0 \phi_q/(2\sigma_0)$ in Eq.~(\ref{evol_h_gen}), is due to the change of the spontaneous curvature induced by the chemical modification (see Sec.~\ref{Mod_subsec}).
Thus, if the diffusion of the hydroxide ions was much slower than the timescales of our instability, observing a negligible residual deformation would suggest that the change of the equilibrium density is predominant over the change of the spontaneous curvature. However, one cannot use this argument here, because the modified lipids may recover their initial state by reacquiring their protons as the hydroxide ions diffuse away in the solution (see Sec.~\ref{modif}).
The best fits of the experimental results to Eq.~(\ref{evol_h_gen}) provide us with an estimate of the time constants $\gamma_1^{-1}$ and $\gamma_2^{-1}$, from which we may extract the values of the vesicle constitutive constants $b$ and $\sigma_0$, using Eq.~(\ref{eigenvalues}).
However, the wavevector $q$, which comes from our single-mode description of the instability (see Sec.~\ref{hydro}), is involved in the expressions of $\gamma_1$ and $\gamma_2$ (see Eq.~(\ref{eigenvalues})). To calculate the characteristic wavevector $q$ involved in each experiment, we will measure the width $W$ of the instability, from which we will deduce an estimate of $q$ through $q\approx\pi/W$.
\subsection{Estimation of the width of the instability}
\label{Width_subsec}
In order to estimate the characteristic width of the instability, we have measured the width at mid-height of the deformed zone of the vesicle. In practice, what we call ``deformed zone'' is the zone where the height of the vesicle is larger in the deformed state than in the initial state.
\begin{figure}[h t b]
\centering
\includegraphics[width=0.45\textwidth]{figure8a_8b.eps}\\
\vspace{.2cm}
\includegraphics[width=0.45\textwidth]{figure8c.eps}
\caption[]{(a) Superposed pictures of a GUV before the deformation and in the most deformed state (GUV 2, experiment number 6). (b) Polynomial fits of the two vesicle shapes in (a). (c) Plot of the difference between the deformed and the initial shape of the GUV. This difference is calculated by subtracting the two fits in (b). It enables one to determine the width at mid-height of the deformed zone, denoted $W$.\label{width_fig}}
\end{figure}
We have digitized the profile of the vesicle before the approach of the pipette and in its most deformed state, i.e., when $H$ is maximal, which occurs just after the pipette is withdrawn. Fig.~\ref{width_fig}(a) shows the superposed pictures of a GUV in these two states during a typical ``pulse'' experiment. The digitized profiles at these two times have then been fitted to polynomials $z=P(x)$ (see Fig.~\ref{width_fig}(b)). The two fitting polynomials have then been subtracted to get the deformation $\Delta z (x)$, from which it is straightforward to measure the width at mid-height of the deformed zone $W$ (see Fig.~\ref{width_fig}(c)).
This estimation of $W$ has been carried out on all of our ``pulse'' experiments. We thus have a specific estimate of the characteristic wavevector $q\approx\pi/W$ for each of these experiments. This value can be used to extract an estimate of the vesicle physical constants from the best fit of each experiment.
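The extraction of $W$ and $q$ can be sketched as follows (illustrative only, not the authors' analysis code; the Gaussian bump and the value of \texttt{sigma} are hypothetical stand-ins for the measured $\Delta z(x)$, and the polynomial-fitting step applied to the digitized contours is omitted):

```python
# Illustrative sketch of the width estimate: the "deformation" dz(x) here is a
# synthetic Gaussian bump with a hypothetical width chosen so that q lands in
# the range of Table 1; in the paper, dz(x) is obtained by subtracting
# polynomial fits of the digitized vesicle contours.
import math

def width_at_mid_height(xs, dz):
    """Width of the zone where the deformation exceeds half of its maximum."""
    half = max(dz) / 2.0
    above = [x for x, z in zip(xs, dz) if z >= half]
    return above[-1] - above[0]

sigma = 17e-6                                        # m (hypothetical)
xs = [i * 1e-6 for i in range(-100, 101)]            # 1 um sampling
dz = [math.exp(-x ** 2 / (2.0 * sigma ** 2)) for x in xs]

W = width_at_mid_height(xs, dz)   # FWHM of a Gaussian: 2 sqrt(2 ln 2) sigma ~ 40 um
q = math.pi / W                   # characteristic wavevector q ~ pi / W
print(f"W = {W * 1e6:.0f} um, q = {q:.2e} m^-1")
```

With $W\approx40\,\mathrm{\mu m}$, this gives $q\approx8\times10^{4}\,\mathrm{m^{-1}}$, comparable to the values listed in Table~\ref{Tab}.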
\subsection{Measurement of the intermonolayer friction coefficient $b$ and of the vesicle tension $\sigma_0$}
For each ``pulse'' experiment, the best fit of the experimental data to Eq.~(\ref{Formule_fit}), carried out following the method of Sec.~\ref{Fits_subsec}, provides an estimate of $\gamma_1$ and $\gamma_2$ (see Table~\ref{Tab}). Using the value of $q$ obtained for this experiment as explained in Sec.~\ref{Width_subsec} (see Table~\ref{Tab}), we may deduce the intermonolayer friction coefficient $b$ and the vesicle tension $\sigma_0$ from $\gamma_1$ and $\gamma_2$ thanks to Eq.~(\ref{eigenvalues}).
\setlength{\tabcolsep}{0.15cm}
\begin{table}[h t b]
\footnotesize
\centering
\begin{tabular}{|l|lll|lll|lll|}
\hline
Exp.&\multicolumn{3}{c|}{$\gamma_1$ (s$^{-1}$)}&\multicolumn{3}{c|}{$\gamma_2$ (s$^{-1}$)}&\multicolumn{3}{c|}{$q$ ($\times 10^4$ m$^{-1}$)}\\
\hline
\hline
1&0.65&$\pm$&0.1&6.0&$\pm$&3&5.23&$\pm$&0.15\\
\hline
2&0.27&$\pm$&0.12&1.3&$\pm$&1.1&4.33&$\pm$&0.17\\
\hline
3&0.65&$\pm$&0.2&10&$\pm$&8&5.15&$\pm$&0.21\\
\hline
4&0.43&$\pm$&0.06&7.9&$\pm$&5&5.18&$\pm$&0.27\\
\hline
5&0.54&$\pm$&0.07&2.9&$\pm$&1.5&5.64&$\pm$&0.23\\
\hline
\hline
6&0.68&$\pm$&0.05&5.1&$\pm$&0.8&8.62&$\pm$&0.13\\
\hline
7&0.48&$\pm$&0.05&6.3&$\pm$&2&8.80&$\pm$&0.16\\
\hline
8&0.52&$\pm$&0.05&8.0&$\pm$&3&8.15&$\pm$&0.10\\
\hline
9&0.47&$\pm$&0.21&2.7&$\pm$&2&7.78&$\pm$&0.20\\
\hline
10&0.36&$\pm$&0.07&7.9&$\pm$&3&7.54&$\pm$&0.05\\
\hline
11&0.51&$\pm$&0.1&2.8&$\pm$&1.5&7.37&$\pm$&0.02\\
\hline
\hline
12&0.47&$\pm$&0.1&11&$\pm$&9&7.86&$\pm$&0.32\\
\hline
13&0.57&$\pm$&0.1&3.9&$\pm$&2&7.07&$\pm$&0.05\\
\hline
14&0.71&$\pm$&0.2&12&$\pm$&6&7.02&$\pm$&0.15\\
\hline
15&0.76&$\pm$&0.25&3.2&$\pm$&1.5&7.94&$\pm$&0.08\\
\hline
16&1.0&$\pm$&0.5&4.9&$\pm$&2&7.07&$\pm$&0.15\\
\hline
\end{tabular}
\normalsize
\caption[]{Values of the fitting parameters $\gamma_1$ and $\gamma_2$, and of the wavevector $q$ estimated as explained in Sec.~\ref{Width_subsec}, for each ``pulse'' experiment. The experiment numbers are the same as on Fig.~\ref{Fig_b}, and the thick horizontal lines indicate a change of vesicle. The intermonolayer friction coefficient $b$ and the vesicle tension $\sigma_0$ can be deduced from these values (see Fig.~\ref{Fig_b} for $b$). \label{Tab}}
\end{table}
Fig.~\ref{Fig_b} shows the estimates of $b$ obtained from 16 different ``pulse'' experiments carried out on three different GUVs. The error bars correspond to the uncertainty on $b$ assuming that our model describes the experiment well. This uncertainty has several different origins. First, recall that we fit our experimental data on a $5\,\mathrm{s}$ time interval (see Sec.~\ref{Fits_subsec}), which is a somewhat arbitrary choice. In practice, the best fit, and therefore the estimated values of $\gamma_1$ (and $\gamma_2$), depend on the time interval over which the fit is carried out. This fact can be explained by the diffusion of the hydroxide ions, by the change in the equilibrium state which is sometimes observed (see Sec.~\ref{Fits_subsec}), and by the fact that the width of the deformation is in fact time-dependent (see Sec.~\ref{Sec_discussion}). We have thus varied the upper bound of this time interval from about 3 to 7 seconds, and taken the extremal values thus obtained for $\gamma_1$ (and $\gamma_2$) as the bounds of the uncertainty interval over $\gamma_1$ (and $\gamma_2$). This uncertainty interval is generally larger than the one estimated by the fitting software for a given time interval. Besides, estimating $b$ (and $\sigma_0$) also relies on our measurement of $W$. To determine the uncertainty regarding $W$, the measurement of $W$ described in Sec.~\ref{Width_subsec} has been carried out twice for each experiment, using the two most deformed states. All these factors have been taken into account in the error bars in Fig.~\ref{Fig_b}, the dominating one coming from the choice of the time interval.
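The sensitivity of the fitted rate to the choice of the fitting window can be illustrated on synthetic data (not the authors' code; the linear drift term mimicking the longer-term evolution due to hydroxide-ion diffusion, and all numerical values, are hypothetical):

```python
# Illustrative sketch of the window-sensitivity check: refit the slow
# relaxation rate gamma1 while varying the upper bound of the fitting interval
# between 3 and 7 s, and take the extremal estimates as the uncertainty.
import math

def slow_rate(points, C):
    """Log-linear least-squares estimate of gamma1 from H(t) ~ A e^(-gamma1 t) + C."""
    pts = [(t, math.log(h - C)) for t, h in points if h > C]
    n = len(pts)
    mt = sum(t for t, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((t - mt) * (y - my) for t, y in pts)
    den = sum((t - mt) ** 2 for t, _ in pts)
    return -num / den

# Synthetic "pulse" data: two exponentials plus a weak drift, sampled every
# 0.3 s as in the experiments (all numbers hypothetical).
ts = [0.3 * k for k in range(24)]          # 0 ... 6.9 s
hs = [6.8 * math.exp(-0.5 * t) - 3.0 * math.exp(-5.0 * t) + 0.2 + 0.05 * t
      for t in ts]

rates = []
for t_max in (3.0, 4.0, 5.0, 6.0, 7.0):
    window = [(t, h) for t, h in zip(ts, hs) if 1.0 <= t <= t_max]
    rates.append(slow_rate(window, 0.2))
g_lo, g_hi = min(rates), max(rates)
print(f"gamma1 between {g_lo:.2f} and {g_hi:.2f} s^-1 depending on the window")
```

Even with noiseless data, the slow drift makes the recovered rate depend appreciably on the window, which is why the spread over windows dominates the error bars.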
\begin{figure}[h t b]
\centering
\includegraphics[width=0.55\textwidth]{figure9.eps}
\caption[]{Intermonolayer friction coefficient estimated from the best fit of the experimental data to Eq.~(\ref{Formule_fit}), for each ``pulse'' experiment.\label{Fig_b}}
\end{figure}
The values we find for $b$, i.e., $b=2-8\times10^8\,\mathrm{J.s/m^4}$ (see Fig.~\ref{Fig_b}), are in good agreement with the literature \cite{Pott02, Shkulipa06}. Besides, we can see on Fig.~\ref{Fig_b} that the values of $b$ extracted from the different ``pulse'' experiments for each one of the three GUVs studied are compatible given the error bars. However, the uncertainty on $b$ is quite large, as shown by the error bars in Fig.~\ref{Fig_b}. Besides, our results seem to indicate that GUV 1 has a somewhat smaller intermonolayer friction coefficient $b$ than GUV 2, which was not expected, since all of our vesicles should have the same lipid composition, and $b$ should only depend on this composition. However, the existence of small composition variations in the experiments cannot be totally excluded.
While the intermonolayer friction coefficient $b$ has been deduced from $\gamma_1$, which gives the longest timescale of the relaxation (a few seconds), the vesicle tension $\sigma_0$ has to be extracted from $\gamma_2$, which corresponds to a shorter timescale. The fits give values of $\gamma_2$ in the range $2-10 \,\mathrm{s^{-1}}$, so that the timescale $\gamma_2^{-1}$ is about $0.1$ to $0.5\,\mathrm{s}$ (see Table~\ref{Tab}). This is very short given that an experimental point is measured every $0.3\,\mathrm{s}$. Besides, studying Eq.~(\ref{ci_zero}) as a function of time shows that the term $e^{-\gamma_2 t}$ is of significant importance only before the maximum of $H(t)$ is reached, which confirms that data at very small times after the end of the injection would be needed to determine $\gamma_2$ well. Thus, it is impossible to extract precise values of $\sigma_0$ from our experiments. Even if more data points were available at short times, one might argue that the determination of $\sigma_0$ would still be imprecise because at short times, the pipette is still being withdrawn, and the resulting flow might affect the deformation.
We have nevertheless calculated estimates of $\sigma_0$ from each of our fits in the same way as $b$ was determined. We have obtained tensions in the range $0.6-6\times10^{-7}\,\mathrm{J/m^2}$, without notable difference between the three vesicles (the dispersion among different experiments on a single vesicle is similar to the one among all the experiments). The order of magnitude of $\sigma_0$ is correct, although no precise values can be extracted as mentioned above. It corresponds to floppy vesicles, which is in agreement with their qualitative observation.
Thus, fitting our experimental results provides a satisfying order of magnitude for $b$ and $\sigma_0$, which is an argument in favor of our model. The measurements of $b$ are more precise than those of $\sigma_0$, due to the short timescale $\sigma_0$ is involved in. Besides, the observation of a relaxation with a timescale of a few seconds which is well-described by a term in $e^{-\gamma_1 t}$ shows that the change of the plane-shape equilibrium density is important in our instability (see Sec.~\ref{comp}).
\section{Discussion}
\label{Sec_discussion}
Let us now discuss the possible improvements of our work. First of all, given that the timescale of the diffusion of the hydroxide ions is comparable to the timescale of the relaxation of the deformation (see Sec.~\ref{modif}), it would be very interesting to visualize the time-dependent pH field during the experiment, and even better to visualize directly the modified lipids thanks to specific fluorescent markers. This would settle whether \textit{(SH$_1$)} is verified or not, and for how long. Besides, the model could be improved by taking into account the diffusion of the hydroxide ions, so that \textit{(SH$_1$)} would no longer be necessary. However, it would be necessary to make strong assumptions on the way the hydroxide ions are injected and diffuse.
Another point of our model that could be improved lies in \textit{(SH$_2$)}: in practice, the deformation of the vesicle is not single-mode, so it would be an improvement to study the evolution of an initial deformation with a complete Fourier expansion. This is work in progress, but given the uncertainty on the spatio-temporal dependence of $\phi$ in the present experiments, it is difficult to gain further insight, and at the moment, the one-mode approximation is the best we can do. Besides, we have used the initial value of the width of the deformation $W$ to calculate the characteristic wavevector $q$. However, in reality, this width can change in time during the instability, due to dispersion, which does not occur for our theoretical single-mode deformation. In order to have an idea of the importance of this phenomenon, we have digitized the vesicle shape for all the experimental points in one ``pulse'' experiment, and calculated the width $W$ for each of these points, using the method in Sec.~\ref{Width_subsec}. The corresponding results are presented in Fig.~\ref{time_evol}.
\begin{figure}[h t b]
\centering
\includegraphics[width=0.45\textwidth]{figure10a.eps}\\ \vspace{0.2cm}
\includegraphics[width=0.45\textwidth]{figure10b.eps}
\caption[]{(a) Time evolution of the vesicle shape during a typical ``pulse'' experiment (GUV 2, experiment number 6) after the pipette is withdrawn (bold dashed line: initial shape). (b) Width $W$ of the deformation as a function of time, extracted from the data in (a). \label{time_evol}}
\end{figure}
We can see that $W$ is not constant during the instability. Its variation is about 90\% during the five-second interval used for our fits. Nevertheless, it remains nearly constant just after the end of the injection before increasing: using the initial value of $W$ is thus the most sensible choice within a description where $W$ is constant.
In order to determine $\sigma_0$ from these experiments, it would be useful to have more points at short times after the end of the injection. Another further experimental improvement would be to use larger vesicles, and to make the injection even more local than in the present work, so as to be closer to the assumptions of the model.
\newpage
\section{Conclusion}
In this paper, we have reported experiments in which a local curvature instability is observed when GUVs are submitted to a local pH increase.
A theoretical model of the dynamics of the instability has been described. We have shown that the chemical modification of the lipids resulted in a change of the spontaneous curvature and in a change of the plane-shape equilibrium density. Both of these effects have been taken into account and compared. In our description of the dynamics of the instability, the intermonolayer friction plays a crucial part: it gives the longest timescale of the instability, as the monolayers are unable to slide with respect to each other on short timescales.
Our model has been compared to the experiments by fitting the experimental data to the theoretical formula describing the height of the deformation as a function of time. The agreement between the experiments and the model is quite good. The intermonolayer friction coefficient can be extracted from these fits, yielding values consistent with the literature. Finally, we have discussed possible further developments of our work.
Studying the dynamics of our instability caused by a local chemical modification of the lipids enables one to distinguish between a change of the plane-shape equilibrium density and a change of the spontaneous curvature. This is impossible in the case of a global modification of the environment of a vesicle. The fact that we observe a relaxation dynamics which is well-described by a mechanism involving the intermonolayer friction indicates that the change of the plane-shape equilibrium density is important in our instability.
\vspace{.5cm}
\bibliographystyle{unsrt}
\section{Introduction}
An analysis of the two oldest Hamiltonian formulations of the second-order
Einstein-Hilbert (EH) action for metric General Relativity (i.e. Pirani,
Schild, and Skinner (PSS) \cite{PSS}; and Dirac \cite{Dirac}), was completed
in \cite{KKRV, Myths}. Using the approach of Castellani \cite{Castellani}, it
was demonstrated that first-class constraints produce a\ generator for the
diffeomorphism invariance\footnote{We understand diffeomorphism invariance
(\textit{diff}) as \textquotedblleft active\textquotedblright\ \cite{Rovelli}
(p. 62) when \textquotedblleft coordinates play no role\textquotedblright,
i.e. transformations of fields written in the same coordinate system.} - the
known gauge symmetry of the Einstein-Hilbert (EH) action. This outcome
contradicts the result of the Arnowitt, Deser, and Misner formulation (ADM, or
geometrodynamics) \cite{ADM, goldies} where the constraints lead to a
different symmetry, one which is known by many names: \textquotedblleft
spatial diffeomorphism\textquotedblright,\ \textquotedblleft special induced
diffeomorphism\textquotedblright,\ \textquotedblleft field-dependent
diffeomorphism\textquotedblright,\ \textquotedblleft foliation preserving
diffeomorphism\textquotedblright, \textquotedblleft one-to-one
correspondence\textquotedblright, \textquotedblleft one-to-one
mapping\textquotedblright\ (see \cite{Myths} and references therein). It was
shown \cite{FKK, Myths} that the PSS and Dirac Hamiltonians are related by a
canonical transformation of the phase-space variables; while the
transformation from the Dirac to ADM Hamiltonian is not a canonical change of
variables. Canonical transformations must preserve all properties of the
Hamiltonian. Because gauge symmetry is an important characteristic of a
constrained system, a difference between the symmetries of the PSS (or Dirac)
and ADM Hamiltonian formulations indicates that a non-canonical relationship
exists between the two formulations. This truism was explicitly confirmed in
\cite{FKK, Myths} by the calculation of Poisson brackets (PBs) among the
phase-space variables. Waxing poetic, the term \textquotedblleft the
non-canonicity puzzle\textquotedblright\ was coined in \cite{CLM} to describe
the results of \cite{FKK, Myths}; but in \cite{ShestakovaCQG}, using more
direct language, it is called \textquotedblleft the contradiction that again
witnesses about the incompleteness of the theoretical
foundations\textquotedblright. The source of the \textquotedblleft
puzzle\textquotedblright\ or \textquotedblleft contradiction\textquotedblright\ lies in finding how to reconcile the non-equivalence of the two Hamiltonian
formulations with their corresponding Lagrangian formulations when
\textquotedblleft it is supposed\textquotedblright\ \cite{ShestakovaGandC} or
\textquotedblleft it is believed\textquotedblright(as in \cite{ShestakovaCQG})
\textquotedblleft that each of them is equivalent to the Einstein (Lagrangian)
formulation\textquotedblright\ \cite{ShestakovaGandC, ShestakovaCQG}.
This belief might lead one to conclude that Dirac's Hamiltonian formulation of
constrained systems is incomplete \cite{ShestakovaCQG}, and proposals ought to
follow on how to redefine the primary constraints, on how to use boundary
terms, on how to have \textquotedblleft two non-canonical transformations that
compensate each other\textquotedblright\ \cite{CLM}, and on how to modify the
PBs through the extension of phase space\footnote{Such an extension is
obtained by replacing the classical EH action by \textquotedblleft the
effective action including gauge and ghost sectors\textquotedblright\ \cite{ShestakovaGandC, ShestakovaCQG}.} \cite{ShestakovaGandC,
ShestakovaCQG}. But such approaches immediately give rise to a general
question: why are such manipulations needed for one formulation (i.e. ADM),
but not for the others (i.e. PSS and Dirac)? Hamiltonian solutions of the
\textquotedblleft puzzle\textquotedblright\ proposed in \cite{CLM,
ShestakovaGandC, ShestakovaCQG}\ deserve a more detailed discussion; but in
this article we shall limit ourselves to the consideration of the Lagrangian
formulation and the symmetries of the EH\footnote{In PSS \cite{PSS} the
gamma-gamma part of the EH action is considered; and Dirac in \cite{Dirac}
made some additional manipulations in this Lagrangian. In spite of these
modifications, both formulations lead to exactly the same equations of motion
as the original EH action; and the metric is an independent field-variable in
all of them.} and ADM Lagrangians. We note that the literature on the
Hamiltonian formulation of constrained systems contains various treatments
with claims that the variables, which appear in the original Lagrangian, have
different properties (i.e. they might be dynamical and non-dynamical; or some
variables can be treated as Lagrange multipliers; or some variables are
canonical, but some not; or some have conjugate momenta, but others do not
need them; or that secondary first-class constraints can be \textquotedblleft
promoted\textquotedblright\ to primary first-class constraints, et cetera (see
\cite{Myths} and references therein)). This plethora of treatments through
which different results may be obtained, depending on the ingenuity of an
investigator, leave an impression that the Hamiltonian method for constrained
systems\textit{ }is an art, not a defined and unambiguous procedure (or, as
suggested in \cite{CLM, ShestakovaGandC, ShestakovaCQG}, that it is not yet a procedure).
Instead of relying upon the first-class constraints, as in the Dirac
procedure, one may use the Lagrangian formulation of a singular system to
derive symmetries from the Noether identities. The differential relationships
among the Euler-Lagrange derivatives are linked to the gauge transformations;
thus the treatment of the Lagrangian is free from the artistic approaches that
have been applied to the Hamiltonian. Lagrangian symmetries describe a
transformation in which \textit{all} fields are treated on the same footing,
irrespective of the name assigned to them (e.g. \textquotedblleft
dynamical\textquotedblright\ and \textquotedblleft
non-dynamical\textquotedblright, et cetera). The differential identities (DIs)
involve\textit{ }the Euler-Lagrange derivatives with respect to \textit{all}
fields. We must emphasize that the Lagrangian and Hamiltonian methods should
have the same mathematical rigor; but the main reason for us to consider
Lagrangian symmetries is to aid in developing a criterion for the equivalence
of two Lagrangians in light of this \textquotedblleft puzzle\textquotedblright.
In \cite{Myths}, an analysis of the Hamiltonian formulations of Dirac and ADM
was performed; it was concluded that if two Hamiltonian formulations are not
related by a canonical transformation and if they have different symmetries
(i.e. they are not equivalent), then the corresponding Lagrangians are also
not equivalent, contrary to the \textquotedblleft belief\textquotedblright\ which forms the basis of the \textquotedblleft puzzle\textquotedblright. If
two Lagrangians are not equivalent, then the results of \cite{FKK, Myths} are
fully consistent, there is no puzzle, and the theoretical foundations are
sound. The conclusion drawn that PSS and Dirac are not equivalent to the ADM
formulation was based on the belief of the authors of \cite{Myths} that
Dirac's Hamiltonian method for constrained systems is an unambiguous
procedure, applicable to any theory, that leads to a unique symmetry that
corresponds exactly to the symmetry present in the Lagrangian.
The change of variables used by ADM \cite{ADM, goldies} to go from the metric
tensor $g_{\mu\nu}$ of the EH action (used in PSS and Dirac) to the ADM
variables (lapse $N$, shift $N^{i}$ and space-space components of the metric
tensor $\gamma_{km}$)\footnote{We employ Greek letters for space-time indices:
$\mu=0,1,2,3$. Latin letters for space indices: $k=1,2,3$, and
\textquotedblleft$0$\textquotedblright\ for the time index.} is
\begin{equation}
N=\left( -g^{00}\right)^{-1/2}\,,\qquad N^{i}=-\frac{g^{0i}}{g^{00}}\,,\qquad\gamma_{km}=g_{km}\,,\label{eqnL1}
\end{equation}
which is invertible, but not covariant. It is this condition of invertibility
that some view as sufficient for the EH and ADM Lagrangians to be equivalent.
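For illustration, the inverse of the map (\ref{eqnL1}) can be written down explicitly (a standard result of the ADM decomposition, quoted here for completeness):
\begin{equation}
g_{00}=-N^{2}+\gamma_{km}N^{k}N^{m}\,,\qquad g_{0i}=\gamma_{ik}N^{k}\,,\qquad g_{ij}=\gamma_{ij}\,,
\end{equation}
so that the components of $g_{\mu\nu}$ are recovered uniquely from $\left( N,N^{i},\gamma_{km}\right) $; the expressions, however, treat space and time indices on a different footing, which reflects the non-covariance of the change of variables.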
But this conclusion is not an obvious one to make when dealing with singular
Lagrangians. In the Hamiltonian formulation of a singular and covariant
Lagrangian, gauge symmetries are derived from the first-class constraints; at
the Lagrangian level, gauge symmetries are associated with the existence of
the Noether identities \cite{Noether}\footnote{For an English translation of
Noether's paper see \cite{Noether-eng}.}. The invertibility of the redefinition
(\ref{eqnL1}) allows one, starting from known transformations of one set of
variables (e.g. $\delta_{\mathit{diff}}\left\{ g_{\mu\nu}\right\} $), to
find the transformations for another set (i.e. $\delta_{\mathit{diff}}\left\{
N,N^{i},\gamma_{km}\right\} $), and \textit{vice versa} (i.e. from
$\delta_{\mathit{ADM}}\left\{ N,N^{i},\gamma_{km}\right\} $ one finds
$\delta_{\mathit{ADM}}\left\{ g_{\mu\nu}\right\} $). For example, one can
also check whether $\delta_{\mathit{ADM}}$, which is a symmetry of the ADM
Lagrangian, is also a symmetry of the EH Lagrangian (i.e. whether the EH
Lagrangian is invariant under transformations given by $\delta_{\mathit{ADM}}\left\{ g_{\mu\nu}\right\} $). Direct calculation is difficult; but by the
converse of Noether's theorem \cite{Noether} (i.e. if an action is invariant
under some symmetry, then there exists the corresponding DI) and by assuming
that $\delta_{\mathit{ADM}}\left\{ g_{\mu\nu}\right\} $ is an invariance,
one can find the corresponding DI\footnote{Such constructions were described
by Schwinger \cite{Schwinger}, see also \cite{Trans}.} and directly check it.
A shorter approach is to connect a new DI to a known DI of the EH action; in
addition, any DIs that are independent linear combinations of known DIs also
describe symmetries. In such a way it is not difficult to demonstrate that
$\delta_{\mathit{ADM}}\left\{ g_{\mu\nu}\right\} $ is indeed a symmetry of
the EH action. Conversely, starting from $\delta_{\mathit{diff}}\left\{ g_{\mu\nu}\right\} $ one can find $\delta_{\mathit{diff}}\left\{ N,N^{i},\gamma_{km}\right\} $ and check that it is a symmetry of the ADM Lagrangian
by\ using its DI (e.g. see p. 17 of \cite{BGMR}). In a similar way, linear
combinations of the DIs can be used to construct other transformations that
will be symmetries of both the EH and ADM Lagrangians. But do these
relationships prove the equivalence of the EH and ADM Lagrangians? One may
also ask why it is that when using the Hamiltonian method, one particular
symmetry follows from the constraint structure, but in the Lagrangian, we
apparently have an infinity of equally good symmetries? Such a non-uniqueness
should be a warning sign\footnote{Of course, these questions can be avoided
assuming that \textquotedblleft Hamiltonian dynamics is not completely
equivalent to Lagrangian formulation of the original theory. In Hamiltonian
formalism the constraints generate transformations of phase-space variables;
however, the group of these transformations does not have to be equivalent to
the group of gauge transformations of Lagrangian theory\textquotedblright\ \cite{Shestakova}. Such an assumption eliminates a \textquotedblleft
non-canonicity puzzle\textquotedblright\ or, alternatively, it provides the
solution: the EH and ADM Lagrangians are equivalent, but the corresponding
Hamiltonians just happen to pick different symmetries.}.
The key concept that allows one to distinguish among the numerous symmetries
due to the Lagrangian approach may be found in \cite{ShestakovaCQG}, where it
is suggested that Dirac's method is incomplete. According to
\cite{ShestakovaCQG}, \textquotedblleft the difference in the \textit{groups}
of transformations is the first indication to the inconsistency of the
theory\textquotedblright\textit{(italic is ours)}. This key concept can be
used to answer the question about how to classify all symmetries that can be
constructed for \textit{one} Lagrangian: which of the symmetries have group
properties and thus constitute the \textquotedblleft basic\textquotedblright,
\textquotedblleft true\textquotedblright,\ or \textquotedblleft
canonical\textquotedblright\ symmetries? This also allows one to compare
\textit{two} Lagrangian formulations by matching those symmetries that have
group properties for each. A related question is: if only one symmetry has the
property to form a group, is it the symmetry that the Hamiltonian formulation
produces (or should produce)? At least for the EH Lagrangian, where
diffeomorphism is a gauge symmetry with a group property \cite{Bergmann}, its
Hamiltonian formulation \cite{PSS, Dirac} leads exactly to this symmetry
\cite{KKRV, Myths} without any extension of Dirac's procedure. Now consider
going from one Lagrangian to another by performing some invertible change of
variables. If the symmetry that had a group property in the original
formulation ceases to have a group property in a new formulation, but another
symmetry that did not have group properties in the original formulation
\textquotedblleft develops\textquotedblright\ group properties in a new
formulation, is this \textquotedblleft the first indication to the
inconsistency of the theory\textquotedblright\ \cite{ShestakovaCQG} or is it
proof that the two Lagrangian formulations are not equivalent? Perhaps a
change of variables that creates such a result should be called
\textquotedblleft non-canonical\textquotedblright,\ in analogy with
Hamiltonian terminology where a non-canonical change of variables also causes
a change of symmetries \cite{FKK, Myths}.
The investigation of the symmetries of the two Lagrangians (EH and ADM), which
are related to each other by the change of variables (\ref{eqnL1}), is a less
cumbersome calculation to perform compared with the Hamiltonian method; the
same is true of the study of whether a symmetry with a group property of one
Lagrangian is also a symmetry with a group property of the other Lagrangian.
But for non-covariant changes such an investigation is complicated. In
particular, to identify a symmetry with a group property, the commutators of
two transformations must be considered, and in the case of field-dependent
structure functions, higher, nested commutators are needed. For
non-covariant variables these calculations must be performed separately for
different fields, and the consistency of different commutators must be
checked. In this article we discuss the simple parts of the calculation that
one may perform in a quasi-covariant form, and consider two symmetries of the
EH Lagrangian: transformations of the metric tensor $g^{\mu\nu}$ under
\textit{diff} ($\delta_{\mathit{diff}}g^{\mu\nu}$) and under ADM
transformations ($\delta_{\mathit{ADM}}g^{\mu\nu}$). We also compare their
group properties. In the next Section we briefly review some results relevant
to the \textit{diff} invariance of the EH action with an emphasis placed on
the role of DIs, their direct connection to the form of the transformations,
and the possible construction of additional symmetries by using combinations
of DIs (i.e. the results that will be needed for a discussion of
$\delta_{\mathit{ADM}}g^{\mu\nu}$). In Section III we demonstrate the
invariance of the EH action under the ADM transformations $\delta
_{\mathit{ADM}}g^{\mu\nu}$, and show that unlike \textit{diff}, the
$\delta_{\mathit{ADM}}g^{\mu\nu}$ do not constitute a group. More cumbersome
calculations of the group properties of the same transformations of the ADM
variables, $\delta_{\mathit{diff}}\left\{ N,N^{i},\gamma_{km}\right\} $ and
$\delta_{\mathit{ADM}}\left\{ N,N^{i},\gamma_{km}\right\} $, for the ADM
Lagrangian are in progress and will be reported elsewhere. In the Conclusion,
we summarize our results and discuss the consequences of
$\delta_{\mathit{diff}}\left\{ N,N^{i},\gamma_{km}\right\} $ and
$\delta_{\mathit{ADM}}\left\{ N,N^{i},\gamma_{km}\right\} $
either having or not having group properties, in all possible combinations.
Finally we comment on the role of covariance.
\section{Symmetries in the Lagrangian approach of the Einstein-Hilbert action}
There are statements in the literature such as: \textquotedblleft...one of the
advantages of the Hamiltonian formulation is that one does not have to specify
the gauge symmetries \textit{a priori}. Instead, the structure of the
Hamiltonian constraints provides an essentially algorithmic way in which the
correct gauge symmetry structure is determined automatically\textquotedblright\ \cite{Horava}. We note that this is not a special or exclusive property of
the Hamiltonian method\footnote{It has to be admitted that applications of
Hamiltonian methods can lead to very long and cumbersome calculations, which
in some cases, are not straightforward.}. The Lagrangian approach also
provides an algorithm for finding gauge symmetries, due to Noether's second
theorem \cite{Noether, Noether-eng}, which connects these symmetries with the
DIs - combinations of Euler-Lagrange derivatives that are identically
equal to zero (off-shell). The Hamiltonian method provides an algorithm for
finding and classifying constraints (all first-class constraints are needed to
find a symmetry); in the Lagrangian approach, DIs can be built using an
iterative procedure. For the Einstein-Cartan (EC) action, which has richer
symmetry properties than EH, such a construction was performed in
\cite{Trans}. In the same way, DIs can be built for the EH action
\cite{Myths}. The relative simplicity of such calculations is due to the
covariance of the theories considered. It would be a much more complicated
procedure to try to find DIs in non-covariant theories, or non-covariant DIs
for covariant theories. Of course, for any theory for which there is no
\textit{a priori} knowledge of the existence of gauge symmetries, it is
unproductive to search for identities without preliminary analysis. The first
step is to determine if a Lagrangian is singular, by evaluating its Hessian
\begin{equation}
H^{\alpha\beta}=\frac{\delta^{2}L}{\delta\dot{Q}_{\alpha}~\delta\dot{Q}_{\beta}}~,\label{eqnL3}
\end{equation}
where $\dot{Q}_{\alpha}$ are the time derivatives of $Q_{\alpha}$ - the
independent fields of the Lagrangian. If the determinant of the Hessian is
zero, then the Lagrangian is singular; the rank of the Hessian is related to
the number of independent DIs that can be found. It should be noted that
singularity of the Lagrangian is a necessary condition to have a gauge
symmetry, but not sufficient (the simplest example is the massive vector
(Proca) field where the Lagrangian is singular, but has no gauge symmetry).
The rank of the Hessian provides only an upper bound on the maximum number of
independent gauge symmetries. The Hessian is often written for velocities,
even for the Lagrangian of covariant theories; but time is not special for
covariant theories and singling it out is unnecessary.
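As a quick illustration of this criterion (our addition, not part of the original argument), the singularity of the Proca Lagrangian mentioned above can be verified symbolically with the \texttt{sympy} computer-algebra package; the velocity and spatial-derivative symbols below are introduced only for this check.

```python
# Hessian of the Proca Lagrangian with respect to the velocities \dot A_mu:
# it is singular (rank 3 out of 4), yet Proca has no gauge symmetry for m != 0.
import sympy as sp

m = sp.symbols('m')
eta = sp.diag(-1, 1, 1, 1)                 # Minkowski metric (-,+,+,+)
A = list(sp.symbols('A0:4'))               # field values A_mu
v = list(sp.symbols('v0:4'))               # velocities v_mu = d_0 A_mu
# first derivatives d_mu A_nu: the time row holds the velocities,
# the spatial rows are generic symbols
dA = sp.Matrix(4, 4, lambda mu, nu: v[nu] if mu == 0 else sp.Symbol(f'dA_{mu}{nu}'))
F = sp.Matrix(4, 4, lambda mu, nu: dA[mu, nu] - dA[nu, mu])   # field strength F_{mu nu}
Fup = eta*F*eta                                               # F^{mu nu} (eta is diagonal)

# Proca Lagrangian (up to an overall sign convention in the mass term)
L = (-sp.Rational(1, 4)*sum(F[mu, nu]*Fup[mu, nu] for mu in range(4) for nu in range(4))
     + sp.Rational(1, 2)*m**2*sum(eta[mu, nu]*A[mu]*A[nu] for mu in range(4) for nu in range(4)))

# Hessian with respect to the velocities
H = sp.Matrix(4, 4, lambda mu, nu: sp.diff(L, v[mu], v[nu]))
print(H)          # diag(0, 1, 1, 1): singular, rank 3
```

Because $v_{0}$ drops out of $F_{\mu\nu}$, the Hessian is $\mathrm{diag}(0,1,1,1)$: the Lagrangian is singular, so singularity alone cannot guarantee a gauge symmetry.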
In the Hamiltonian approach, knowledge of the first-class constraints is
sufficient to restore gauge invariance: for example, by using the Castellani
procedure \cite{Castellani}\textbf{.} Although there are some modifications of
the Castellani procedure, they must be used with care (see
\cite{affine-metric}). Similarly for Lagrangians, if the DIs are known, then
transformations can be found using the explicit connections of the DIs and the
transformations \cite{Noether, Schwinger}. The approach described for finding
\textit{a priori} unknown Lagrangian symmetries is general; but for the EH
action, a well-known covariant DI had already appeared, along with the EH
action\footnote{Though it is not easy to recognize, due to Hilbert's
presentation and also complications with coupling to Mie's electrodynamics. In
addition, this identity was known before any connection was made to
Euler-Lagrange derivatives of the EH action - this is simply the contracted
Bianchi identity \cite{Bianchi}.} itself, in Hilbert's paper \cite{Hilbert}\footnote{For an English translation see \cite{Hilbert-eng}.}.
We shall briefly illustrate the application of this general procedure to the
EH action. These results will be needed, used, and compared in the next
Section, where the ADM symmetry is discussed. The Einstein-Hilbert action is
\cite{Landau, Carmeli}
\begin{equation}
S_{EH}=\int L~d^{4}x=\int\sqrt{-g}R~d^{4}x~, \label{eqnL5}
\end{equation}
where $g=\det g_{\mu\nu}$, $L$ is the scalar density (Lagrangian density) and
the Ricci scalar $R$, Ricci tensor $R_{\mu\nu}$, and Christoffel symbol
$\Gamma_{\mu\nu}^{\alpha}$ are
\begin{equation}
R=g^{\mu\nu}R_{\mu\nu},\text{ \ \ \ \ }R_{\mu\nu}=\Gamma_{\mu\nu,\alpha}^{\alpha}-\Gamma_{\mu\alpha,\nu}^{\alpha}+\Gamma_{\mu\nu}^{\alpha}\Gamma_{\alpha\beta}^{\beta}-\Gamma_{\mu\beta}^{\alpha}\Gamma_{\alpha\nu}^{\beta}~, \label{eqnL6}
\end{equation}
\begin{equation}
\Gamma_{\mu\nu}^{\alpha}=\frac{1}{2}g^{\alpha\beta}\left( g_{\mu\beta,\nu}+g_{\nu\beta,\mu}-g_{\mu\nu,\beta}\right) . \label{eqnL7}
\end{equation}
The variational, Euler-Lagrange derivative (ELD) of the EH action is
\begin{equation}
E^{\alpha\beta}=\frac{\delta L_{EH}}{\delta g_{\alpha\beta}}=\sqrt{-g}\left( \frac{1}{2}g^{\alpha\beta}R-R^{\alpha\beta}\right) =-\sqrt{-g}G^{\alpha\beta}, \label{eqnL9}
\end{equation}
where $G^{\alpha\beta}=R^{\alpha\beta}-\frac{1}{2}g^{\alpha\beta}R$ is the
Einstein tensor.
It is not difficult to find the DI by using a general construction similar to
that performed for the Einstein-Cartan action \cite{Trans}. Under the
reasonable assumption that a covariant theory should also have covariant
identities, and given the rank of the Hessian for the EH action, the DI
follows almost immediately. The rank of the Hessian is six; and because the
second-rank metric tensor has ten independent components, there should be four
independent DIs. The four covariant identities that one can build from the
ELDs, which are covariant symmetric second-rank tensor densities, consist of
either four scalars or one vector. It is impossible to construct four scalars
from the ELDs; but to find a true vector, one may take a covariant derivative
of a second-rank tensor to yield
\begin{equation}
I^{\mu}=E_{;\nu}^{\mu\nu}=E_{,\nu}^{\mu\nu}+\Gamma_{\alpha\beta}^{\mu}E^{\alpha\beta}\equiv0.\label{eqnL15}
\end{equation}
By direct substitution, one can easily confirm that this combination is
identically zero.
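The identity (\ref{eqnL15}) can also be confirmed independently for a concrete metric. The following sketch (ours; the spatially flat FRW metric and the symbol names are chosen only for illustration) evaluates $I^{\mu}$ with \texttt{sympy}, using the conventions (\ref{eqnL6}), (\ref{eqnL7}), and (\ref{eqnL9}), and obtains zero for every component.

```python
# Symbolic check of I^mu = E^{mu nu}_{,nu} + Gamma^mu_{ab} E^{ab} = 0 (eqnL15)
# for a sample metric: g_{mu nu} = diag(-1, a^2, a^2, a^2) with a = a(t) > 0.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a')(t)
X = [t, x, y, z]
N = 4
g = sp.diag(-1, a**2, a**2, a**2)          # covariant metric g_{mu nu}
gi = g.inv()                               # contravariant metric g^{mu nu}

# Christoffel symbols (eqnL7): Gam[l][mu][nu] = Gamma^l_{mu nu}
Gam = [[[sp.simplify(sp.Rational(1, 2)*sum(
            gi[l, s]*(sp.diff(g[s, mu], X[nu]) + sp.diff(g[s, nu], X[mu])
                      - sp.diff(g[mu, nu], X[s])) for s in range(N)))
         for nu in range(N)] for mu in range(N)] for l in range(N)]

# Ricci tensor with the convention of eqnL6
Ric = sp.Matrix(N, N, lambda mu, nu: sp.simplify(
    sum(sp.diff(Gam[l][mu][nu], X[l]) - sp.diff(Gam[l][mu][l], X[nu]) for l in range(N))
    + sum(Gam[l][mu][nu]*Gam[s][l][s] - Gam[l][mu][s]*Gam[s][l][nu]
          for l in range(N) for s in range(N))))
R = sp.simplify(sum(gi[p, q]*Ric[p, q] for p in range(N) for q in range(N)))

# E^{mu nu} = -sqrt(-g) G^{mu nu} (eqnL9); here sqrt(-g) = a^3 (taking a > 0)
Gup = sp.Matrix(N, N, lambda mu, nu: sp.simplify(
    sum(gi[mu, p]*gi[nu, q]*Ric[p, q] for p in range(N) for q in range(N))
    - sp.Rational(1, 2)*gi[mu, nu]*R))
E = -a**3*Gup

# the identity I^mu of eqnL15, component by component
I = [sp.simplify(sum(sp.diff(E[mu, nu], X[nu]) for nu in range(N))
                 + sum(Gam[mu][p][q]*E[p, q] for p in range(N) for q in range(N)))
     for mu in range(N)]
print(I)     # every component simplifies to zero
```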
Schwinger's paper \cite{Schwinger} contains a description of how to construct
the DIs from known gauge transformations; and in correspondence to Noether's
theorem, this process also applies in inverse order, through\ the converse
relationship between DIs and transformations. One forms a scalar from a vector
DI (\ref{eqnL15}) by using the gauge parameters of appropriate tensorial
dimension, followed by equating the scalar to variations of the action. One writes
\begin{equation}
\delta S_{EH}=\int\delta g_{\mu\nu}E^{\mu\nu}d^{4}x=\int\xi_{\mu}I^{\mu}d^{4}x~, \label{eqnL17}
\end{equation}
and performing integration by parts yields
\begin{equation}
\int\xi_{\mu}I^{\mu}d^{4}x=\int\xi_{\mu}\left( E_{,\nu}^{\mu\nu}+\Gamma_{\alpha\beta}^{\mu}E^{\alpha\beta}\right) d^{4}x=\int\left( -\frac{1}{2}\xi_{\mu,\nu}-\frac{1}{2}\xi_{\nu,\mu}+\Gamma_{\mu\nu}^{\alpha}\xi_{\alpha}\right) E^{\mu\nu}d^{4}x~, \label{eqnL18}
\end{equation}
from which one obtains
\begin{equation}
\delta_{\mathit{diff}}g_{\mu\nu}=-\frac{1}{2}\xi_{\mu,\nu}-\frac{1}{2}\xi_{\nu,\mu}+\Gamma_{\mu\nu}^{\alpha}\xi_{\alpha}~.\label{eqnL19}
\end{equation}
Note that the coefficient $\frac{1}{2}$ also appears when symmetries are
restored in the Hamiltonian approach \cite{KKRV, Myths} but this result is
usually presented in a different form. The constant $\frac{1}{2}$ can be
incorporated into a gauge parameter without any effect on the results; and we
will use the shorter form
\begin{equation}
\delta_{\mathit{diff}}g_{\mu\nu}=-\xi_{\mu,\nu}-\xi_{\nu,\mu}+2\Gamma_{\mu\nu}^{\alpha}\xi_{\alpha}=-\xi_{\mu;\nu}-\xi_{\nu;\mu}~, \label{eqnL20}
\end{equation}
which is a manifestly covariant expression, a consequence of using a covariant
DI. Knowledge of this transformation allows one to also find transformations
for any combination built from the metric, for example
\begin{equation}
\delta_{\mathit{diff}}\Gamma_{\mu\nu}^{\alpha}=-\xi^{\beta}\Gamma_{\mu\nu,\beta}^{\alpha}+\Gamma_{\mu\nu}^{\beta}\xi_{,\beta}^{\alpha}-\Gamma_{\mu\beta}^{\alpha}\xi_{,\nu}^{\beta}-\Gamma_{\nu\beta}^{\alpha}\xi_{,\mu}^{\beta}-\xi_{,\mu\nu}^{\alpha}~, \label{eqnL25}
\end{equation}
\begin{equation}
\delta_{\mathit{diff}}R_{\mu\nu}=-\xi^{\rho}R_{\mu\nu,\rho}-\xi_{,\mu}^{\rho}R_{\nu\rho}-\xi_{,\nu}^{\rho}R_{\mu\rho}~,\qquad\delta_{\mathit{diff}}R=-\xi^{\rho}R_{,\rho}~, \label{eqnL26}
\end{equation}
and
\begin{equation}
\delta_{\mathit{diff}}G_{\mu\nu}=-\xi^{\rho}G_{\mu\nu,\rho}-\xi_{,\mu}^{\rho}G_{\nu\rho}-\xi_{,\nu}^{\rho}G_{\mu\rho}~.\label{eqnL27}
\end{equation}
We note that gauge parameters in general, and the $\xi_{\mu}$ of the EH action
in particular, are field-independent, as was explicitly stated by Hilbert
\cite{Hilbert}, Noether \cite{Noether}, and others, and by Rosenfeld, in the
first discussion on the Hamiltonian formulation for a singular Lagrangian
\cite{Rosenfeld}\footnote{For an English translation see \cite{Preprint}.}.
Further, the methods used to restore gauge symmetries, such as the Castellani
approach \cite{Castellani}, are also based on the condition that the gauge
parameters be field-independent.
Our goal is to compare the \textit{diff }transformations with the ADM
transformations of the same EH action; therefore, transformations of the same
variables (i.e. the metric tensor) under ADM are needed. For this purpose we
find them from the transformations of ADM variables by using their connection
to the metric (\ref{eqnL1}). The lapse and shift are expressed in terms of
contravariant components, so it is easier to find the transformations of
contravariant components of the metric from the ADM transformations. Of
course, transformations of covariant and contravariant components of the
metric have a simple relationship due to $g_{\mu\nu}g^{\nu\alpha}=\delta_{\mu
}^{\alpha}$; but for our discussion, it is preferable to know the DIs that
lead directly to $\delta_{\mathit{diff}}g^{\mu\nu}$. There are a few ways to
find a DI that is expressed in terms of a covariant ELD; we might use a DI
that is already known, and consider
\begin{equation}
I_{\alpha}=g_{\alpha\mu}I^{\mu}~, \label{eqnL30}
\end{equation}
which is also an identity. After performing some simple rearrangement, we obtain
\begin{equation}
I_{\alpha}=g_{\alpha\mu}I^{\mu}=-2\left( g^{\mu\nu}E_{\mu\alpha}\right) _{,\nu}-g_{,\alpha}^{\mu\nu}E_{\mu\nu}~. \label{eqnL31}
\end{equation}
Repeating steps (\ref{eqnL17}) - (\ref{eqnL20}) for a covariant DI,
$I_{\alpha}$, and using
\begin{equation}
\delta S_{EH}=\int\delta g^{\mu\nu}E_{\mu\nu}d^{4}x=\int\xi^{\alpha}I_{\alpha}d^{4}x~, \label{eqnL32}
\end{equation}
we obtain
\begin{equation}
\delta_{\mathit{diff}}g^{\mu\nu}=\xi_{,\alpha}^{\nu}g^{\mu\alpha}+\xi_{,\alpha}^{\mu}g^{\nu\alpha}-g_{,\alpha}^{\mu\nu}\xi^{\alpha} \label{eqnL33}
\end{equation}
(here we also incorporated the constant $\frac{1}{2}$ into the gauge parameter).
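As a short consistency check (ours, not part of the original derivation), (\ref{eqnL33}) also follows directly from (\ref{eqnL20}). Varying the relation $g_{\mu\nu}g^{\nu\alpha}=\delta_{\mu}^{\alpha}$ gives $\delta g^{\mu\nu}=-g^{\mu\alpha}g^{\nu\beta}\delta g_{\alpha\beta}$, so that
\[
\delta_{\mathit{diff}}g^{\mu\nu}=g^{\mu\alpha}g^{\nu\beta}\left( \xi_{\alpha;\beta}+\xi_{\beta;\alpha}\right) =\xi^{\mu;\nu}+\xi^{\nu;\mu}~;
\]
expanding the covariant derivatives and using metric compatibility, $g_{,\alpha}^{\mu\nu}=-\Gamma_{\alpha\beta}^{\mu}g^{\beta\nu}-\Gamma_{\alpha\beta}^{\nu}g^{\mu\beta}$, reproduces (\ref{eqnL33}), confirming that the DIs (\ref{eqnL15}) and (\ref{eqnL31}) generate the same symmetry.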
But do these transformations form a group? To answer this question, the
commutator of two transformations is needed (i.e. $\left[ \delta_{2},\delta_{1}\right] $) for which it should be possible to present the result
in the form of a single transformation, but with a new parameter
$\xi_{\left[ 1,2\right] }^{\alpha}$,
\begin{equation}
\left[ \delta_{2},\delta_{1}\right] g^{\mu\nu}=\left( \delta_{2}\delta_{1}-\delta_{1}\delta_{2}\right) g^{\mu\nu}=\delta_{\left[ 1,2\right] }g^{\mu\nu}~. \label{eqnL40}
\end{equation}
This result was found by Bergmann and Komar \cite{Bergmann} in the following form:
\begin{equation}
\xi_{\left[ 1,2\right] }^{\alpha}=\xi_{2}^{\beta}\xi_{1,\beta}^{\alpha}-\xi_{1}^{\beta}\xi_{2,\beta}^{\alpha}~.\label{eqnL41}
\end{equation}
To shorten the notation, we henceforth eliminate the subscript \textit{diff}.
Because of the antisymmetry of this expression, this combination is
equivalent to one with covariant derivatives
\begin{equation}
\xi_{\left[ 1,2\right] }^{\alpha}=\xi_{2}^{\beta}\xi_{1;\beta}^{\alpha}-\xi_{1}^{\beta}\xi_{2;\beta}^{\alpha}~; \label{eqnL42}
\end{equation}
this form explicitly shows that the new parameter, $\xi_{\left[ 1,2\right]
}^{\alpha}$, preserves its vector form.
Because there are no fields in the parameter redefinition (\ref{eqnL41}), it
remains unaltered when we consider a double commutator, i.e.
\begin{equation}
\xi_{\left[ \left[ 1,2\right] ,3\right] }^{\alpha}=\xi_{3}^{\beta}\xi_{\left[ 1,2\right] ,\beta}^{\alpha}-\xi_{\left[ 1,2\right] }^{\beta}\xi_{3,\beta}^{\alpha}~.\label{eqnL47}
\end{equation}
In general, the field-independence of a new parameter is a sufficient
condition to have a group, but not a necessary condition. Should fields
appear, additional calculations of the double commutators must be performed to
find a definite answer. From (\ref{eqnL42}) one may conclude that the
\textit{diff} transformations form a group. The Jacobi identity then
follows, as it does for any transformation with group properties; that is,
\begin{equation}
\left( \left[ \left[ \delta_{2},\delta_{1}\right] ,\delta_{3}\right] +\left[ \left[ \delta_{3},\delta_{2}\right] ,\delta_{1}\right] +\left[ \left[ \delta_{1},\delta_{3}\right] ,\delta_{2}\right] \right) g^{\mu\nu}\equiv0, \label{eqnL45}
\end{equation}
which is equivalent to a simple relationship for the gauge parameters of the
double commutators:
\begin{equation}
\xi_{\left[ \left[ 1,2\right] ,3\right] }^{\alpha}+\xi_{\left[ \left[ 3,1\right] ,2\right] }^{\alpha}+\xi_{\left[ \left[ 2,3\right] ,1\right] }^{\alpha}\equiv0. \label{eqnL46}
\end{equation}
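Both the closure rule (\ref{eqnL41}) and the Jacobi identity (\ref{eqnL46}) can be verified symbolically. The following sketch (ours; a two-dimensional \textquotedblleft spacetime\textquotedblright\ with generic metric functions is used only to keep the expressions small - the algebra is dimension-independent) checks them with \texttt{sympy} for the transformation rule (\ref{eqnL33}).

```python
import sympy as sp

t, x = sp.symbols('t x')
X = [t, x]
n = 2

# generic symmetric contravariant metric g^{mu nu}(t, x)
g00, g01, g11 = [sp.Function(f'g{ij}')(t, x) for ij in ('00', '01', '11')]
g = sp.Matrix([[g00, g01], [g01, g11]])

def vec(name):
    return [sp.Function(f'{name}{a}')(t, x) for a in range(n)]

xi1, xi2, xi3 = vec('xi1_'), vec('xi2_'), vec('xi3_')

def var(gm, xi):
    # delta_diff g^{mu nu} of eqnL33:
    # xi^nu_{,b} g^{mu b} + xi^mu_{,b} g^{nu b} - g^{mu nu}_{,b} xi^b
    return sp.Matrix(n, n, lambda mu, nu: sum(
        sp.diff(xi[nu], X[b])*gm[mu, b] + sp.diff(xi[mu], X[b])*gm[nu, b]
        - sp.diff(gm[mu, nu], X[b])*xi[b] for b in range(n)))

def br(p, q):
    # composite parameter of eqnL41: xi_{[p,q]}^a = q^b p^a_{,b} - p^b q^a_{,b}
    return [sum(q[b]*sp.diff(p[a], X[b]) - p[b]*sp.diff(q[a], X[b]) for b in range(n))
            for a in range(n)]

# closure (eqnL40)-(eqnL41): [delta_2, delta_1] g = delta_{[1,2]} g
# (var is linear in g, so delta_2 delta_1 g = var(var(g, xi2), xi1))
comm = var(var(g, xi2), xi1) - var(var(g, xi1), xi2)
closure = (comm - var(g, br(xi1, xi2))).applyfunc(sp.expand)
assert closure == sp.zeros(n, n)

# Jacobi identity for the parameters (eqnL46)
jac = [sp.expand(br(br(xi1, xi2), xi3)[a] + br(br(xi3, xi1), xi2)[a]
                 + br(br(xi2, xi3), xi1)[a]) for a in range(n)]
assert jac == [0, 0]
```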
Note that all of the above expressions (ELDs, DIs, transformations, and even
the redefinition of parameters that support group properties) are written in
manifestly covariant form; the expectation that the results must be covariant
in a covariant theory can be used to obtain some solutions to avoid direct
calculation (e.g. construction of a covariant DI). We can use these properties
to find symmetries by using the Lagrangian approach; but a different method
can be found in the literature that has the name \textquotedblleft the
Lagrangian approach\textquotedblright\ (see \cite{Samanta} and references
therein). It proceeds by singling out one coordinate, time, followed by a long
sequence of calculations to find symmetries. For the covariant theories
discussed in \cite{Samanta} this approach has unnecessary over-complications;
the Noether DI is used as an input in this method anyway. If the Noether DI is
known, then to find a transformation requires a one-line calculation (as
(\ref{eqnL17}) - (\ref{eqnL20})), not the pages of calculation as outlined in
\cite{Samanta}. This particular \textquotedblleft Lagrangian
approach\textquotedblright\ resembles the Hamiltonian one, and it may be of
use to those who want to trace down explicit connections between the
Lagrangian and Hamiltonian methods at different stages in the calculation.
We note that Lagrangian methods are general and unrestricted by the covariance
of an action; therefore, all of the results above can be obtained without
making reference to or taking guidance from covariance, although considerable
difficulties may arise. But for any Lagrangian with an \textit{a priori}
unknown gauge symmetry, one should be able to find a DI by using only ELDs of
a given action.
According to Noether's second theorem \cite{Noether}, there is a maximum
number of independent DIs, but apparently there are no additional
restrictions. Are the DIs (\ref{eqnL15}) and (\ref{eqnL31}) for the EH action
and $\delta_{\mathit{diff}}$ transformation (\ref{eqnL33}) unique? Keeping
covariance, there is little freedom to construct a new DI (see (\ref{eqnL33}))
and its corresponding transformation; and for the EH action the covariant DI
(\ref{eqnL15}) is unique. But for the EC action there is greater freedom to
construct covariant DIs (see \cite{Trans}). If the restriction of covariance
is lifted, then the number of new combinations of DIs and new gauge
transformations become unlimited; they can be found by using a very simple
manipulation, without the need for further calculation (of course this is true
if only the transformations are of interest; but it could be a complicated
task to find, for example, a commutator like (\ref{eqnL40}), or to calculate
the Jacobi identity). If the DIs are known (e.g. (\ref{eqnL31})), one can
start to build combinations of them that are obviously also DIs. And by using
the approach of \cite{Schwinger} one may obtain the corresponding
transformations. Despite considerations of simplicity and the manifest
covariance of the DIs and transformations, are all such transformations
equally good? According to Noether's theorem, which is a general result, the
existence of a maximum number of independent DIs is an important
characteristic of a singular Lagrangian. From the rank of the Hessian of the
EH action, we know that the maximum number of independent DIs is four. So one
can obtain four new DIs by using
\begin{equation}
\tilde{I}_{\left( \nu\right) }=F_{\left( \nu\right) }^{\mu}\left( g_{\alpha\beta}\right) I_{\mu}~, \label{eqnL50}
\end{equation}
where $I_{\mu}$ is a known DI, $\left( \nu\right) $ is not a covariant
index, just a numbering of the DI, and $F_{\left( \nu\right) }^{\mu}\left(
g_{\alpha\beta}\right) $ are some functionals of the metric that also need
not be covariant. The only restriction on (\ref{eqnL50}) is that the
combinations be linearly independent, that is,
\begin{equation}
\det\left\vert \frac{\partial\tilde{I}_{\left( \nu\right) }}{\partial I_{\mu}}\right\vert \neq0. \label{eqnL52}
\end{equation}
Using the approach of \cite{Schwinger}, one must consider combinations of
these four new DIs, $\tilde{I}_{\left( \nu\right) }$, with four gauge
functions, $\varepsilon^{\left( \nu\right) }$; after performing an
integration by parts, as in (\ref{eqnL17}) - (\ref{eqnL20}), one can easily
read-off the new transformations with four gauge parameters. One may equally
well perform the following rearrangement:
\begin{equation}
\varepsilon^{\left( \nu\right) }\tilde{I}_{\left( \nu\right) }=\varepsilon^{\left( \nu\right) }F_{\left( \nu\right) }^{\mu}\left( g_{\alpha\beta}\right) I_{\mu}\equiv\tilde{\xi}^{\mu}I_{\mu}~,\label{eqnL53}
\end{equation}
after which, transformations of the metric would have the same form as before,
but with a different, field-dependent, gauge parameter
\begin{equation}
\tilde{\xi}^{\mu}=\varepsilon^{\left( \nu\right) }F_{\left( \nu\right) }^{\mu}\left( g_{\alpha\beta}\right) .\label{eqnL54}
\end{equation}
Therefore (\ref{eqnL54}) is a different transformation from (\ref{eqnL33});
for example, even its form cannot be preserved in calculations of the
commutators (\ref{eqnL40}) of two such transformations. The independence of
gauge parameters, stated in \cite{Hilbert, Noether, Rosenfeld}, and
\cite{Castellani}, is not contradicted by (\ref{eqnL54}), because it is merely
a short form of presentation of the results. In a full expression, the
field-independent parameter ($\varepsilon^{\left( \nu\right) }$) would
appear. But the \textquotedblleft quasi-covariant\textquotedblright\ form
(\ref{eqnL54}) could be useful in performing calculations. This idea will be
explained in the next Section, where one particular case of
(\ref{eqnL50})-(\ref{eqnL54}) is discussed: the ADM transformations.
We note that Noether's theorem and the explicit connection between DIs and
gauge transformations (as in \cite{Schwinger}) considerably simplifies the
analysis of singular Lagrangians. For example, the construction of new
transformations, as outlined above, can also be used to check the validity of
some proposed or \textquotedblleft guessed\textquotedblright\ transformations.
Assuming that a transformation is correct, one would follow \cite{Schwinger}
to construct a corresponding DI candidate that can be checked by direct
substitution of the ELDs. If the candidate is a true DI, then by the converse
of Noether's theorem, it is a symmetry. By checking an identity one may manage
expressions of any complexity because terms of different types can be
considered separately; this property is very important for dealing with
non-covariant expressions (i.e. all terms with a particular derivative of a
particular field should be zero independently of the rest of an expression).
This method is simpler if some DIs are already known; in such a case, one may
express the new DIs as combinations of known identities, as in (\ref{eqnL53})
(e.g. see \cite{BGMR}), which is sufficient confirmation of the correctness of
the proposed transformations.
\section{ADM symmetry of the EH action}
The transformations of the ADM variables, $\delta_{\mathit{ADM}}\left\{
N,N^{i},\gamma_{km}\right\} $, that follow from the constraints of the ADM
Hamiltonian are well-known; and using (\ref{eqnL1}) allows one to find the
transformations of the metric tensor, $\delta_{\mathit{ADM}}\left\{ g^{\mu
\nu}\right\} $. They can be presented in the following form:
\begin{equation}
\delta_{ADM}g^{\mu\nu}=\tilde{\xi}_{,\alpha}^{\nu}g^{\mu\alpha}+\tilde{\xi}_{,\alpha}^{\mu}g^{\nu\alpha}-g_{,\alpha}^{\mu\nu}\tilde{\xi}^{\alpha} \label{eqnL60}
\end{equation}
with $\tilde{\xi}^{\alpha}$ given by
\begin{equation}
\tilde{\xi}^{\nu}=\delta_{0}^{\nu}\left( -g^{00}\right) ^{\frac{1}{2}}\varepsilon^{\perp}+\delta_{k}^{\nu}\left[ \varepsilon^{k}+\frac{g^{0k}}{g^{00}}\left( -g^{00}\right) ^{\frac{1}{2}}\varepsilon^{\perp}\right] \label{eqnL61}
\end{equation}
(e.g. see the appendix of \cite{Castellani}; for more detailed calculations see
\cite{Myths} and also \cite{Bergmann}).
This representation helps one to see the origin of some of the names of the
ADM transformations: \textquotedblleft specific metric-dependent
diffeomorphisms\textquotedblright\ \cite{Pons},\ or \textquotedblleft a
one-to-one correspondence between the diffeomorphisms and the gauge
variations\textquotedblright\ \cite{Mukherjee}, or \textquotedblleft
diffeomorphism-induced gauge symmetry\textquotedblright\ \cite{PonsSS}. From
the discussion at the end of the previous Section, one may see that
(\ref{eqnL61}) is one of many possible \textquotedblleft field-dependent
diffeomorphisms\textquotedblright\ and \textquotedblleft one-to-one
correspondence\textquotedblright. Using the transformations (\ref{eqnL60}),
the corresponding DIs can be restored. Because they are combinations of the
known covariant DIs and they are also the identities, the transformations
(\ref{eqnL60}) represent the gauge symmetry of the EH Lagrangian. So whatever
the field dependence of the transformations of the form shown in
(\ref{eqnL61}) might be, these transformations are guaranteed to be a symmetry
of the EH Lagrangian. We can also explicitly find separate identities that
correspond to each parameter ($\varepsilon^{\perp}$, $\varepsilon^{k}$) of the
ADM transformations
\begin{equation}
\tilde{\xi}^{\nu}I_{\nu}=\varepsilon^{\perp}\left[ \frac{g^{0k}}{g^{00}}\left( -g^{00}\right) ^{\frac{1}{2}}I_{k}+\left( -g^{00}\right) ^{\frac{1}{2}}I_{0}\right] +\varepsilon^{k}I_{k}~, \label{eqnL62}
\end{equation}
which, in turn, give two DIs that describe the ADM transformations
\[
\tilde{\xi}^{\nu}I_{\nu}=\varepsilon^{\perp}\tilde{I}_{\bot}+\varepsilon^{k}\tilde{I}_{k}~,
\]
with
\begin{equation}
\tilde{I}_{\bot}=\frac{g^{0k}}{g^{00}}\left( -g^{00}\right) ^{\frac{1}{2}}I_{k}+\left( -g^{00}\right) ^{\frac{1}{2}}I_{0}~, \label{eqnL63}
\end{equation}
and
\begin{equation}
\tilde{I}_{k}=I_{k}~. \label{eqnL64}
\end{equation}
These are obviously DIs since they are linear combinations of the components
of the covariant DI.
The names \textquotedblleft field-dependent diffeomorphism\textquotedblright\ and \textquotedblleft one-to-one correspondence\textquotedblright\ are
misleading. This transformation is different from a diffeomorphism, and even its
resemblance to \textit{diff} in \textquotedblleft form\textquotedblright\ disappears if one were to calculate the commutator of two such
transformations. The previous relation for \textit{diff} (\ref{eqnL41})
changes, and a simple substitution of (\ref{eqnL61}) into (\ref{eqnL41}) is not
equivalent to the direct calculation of the commutator
\[
\tilde{\xi}_{\left[ 1,2\right] }^{\alpha}\neq\tilde{\xi}_{2}^{\beta}\tilde{\xi}_{1,\beta}^{\alpha}-\tilde{\xi}_{1}^{\beta}\tilde{\xi}_{2,\beta}^{\alpha}~,
\]
in which extra contributions appear. This was noticed in \cite{PonsSS}:
\textquotedblleft It is impossible to get for $\xi_{3}$ [our $\tilde{\xi
}_{\left[ 1,2\right] }^{\alpha}$] the standard diffeomorphism
rule\textquotedblright\ and so the transformation with parameters
(\ref{eqnL61}) is not a field-dependent diffeomorphism, but a different
symmetry. Even the formal resemblance of \textit{diff} transformations does
not survive in the commutator.
Let us try to find the commutator of two ADM transformations. From now on we
shall eliminate the subscript ADM, and use $\delta_{ADM}g^{\mu\nu}=\tilde{\delta}g^{\mu\nu}$ to abbreviate the notation. The quasi-covariant
form of (\ref{eqnL61}) allows one to simplify the calculations by using some
of the results from the previous Section, and then to consider the
transformations of all of the components of a contravariant tensor at once;
this is impossible to do when the ADM Lagrangian is analyzed.
In performing the calculation of the commutator of the ADM transformations,
that is,
\[
\left( \tilde{\delta}_{2}\tilde{\delta}_{1}-\tilde{\delta}_{1}\tilde{\delta}_{2}\right) g^{\mu\nu}=\tilde{\delta}_{2}\left( \tilde{\xi}_{1,\alpha}^{\nu}g^{\mu\alpha}+\tilde{\xi}_{1,\alpha}^{\mu}g^{\nu\alpha}-g_{,\alpha}^{\mu\nu}\tilde{\xi}_{1}^{\alpha}\right) -\tilde{\delta}_{1}\left( \tilde{\xi}_{2,\alpha}^{\nu}g^{\mu\alpha}+\tilde{\xi}_{2,\alpha}^{\mu}g^{\nu\alpha}-g_{,\alpha}^{\mu\nu}\tilde{\xi}_{2}^{\alpha}\right) ,
\]
the result found differs from that obtained by calculating the commutator of
the diffeomorphism transformations; this difference exists because of the
presence of the fields in $\tilde{\xi}^{\alpha}$ (which is an abbreviated form
(\ref{eqnL61}), not a field-independent parameter). Consider
\[
\left( \tilde{\delta}_{2}\tilde{\delta}_{1}-\tilde{\delta}_{1}\tilde{\delta}_{2}\right) g^{\mu\nu}=\tilde{\xi}_{1,\alpha}^{\nu}\delta_{2}g^{\mu\alpha}+\tilde{\xi}_{1,\alpha}^{\mu}\delta_{2}g^{\nu\alpha}-\left( \delta_{2}g^{\mu\nu}\right) _{,\alpha}\tilde{\xi}_{1}^{\alpha}-\tilde{\xi}_{2,\alpha}^{\nu}\delta_{1}g^{\mu\alpha}-\tilde{\xi}_{2,\alpha}^{\mu}\delta_{1}g^{\nu\alpha}+\left( \delta_{1}g^{\mu\nu}\right) _{,\alpha}\tilde{\xi}_{2}^{\alpha}
\]
\[
+\left( \delta_{2}\tilde{\xi}_{1}^{\nu}\right) _{,\alpha}g^{\mu\alpha}+\left( \delta_{2}\tilde{\xi}_{1}^{\mu}\right) _{,\alpha}g^{\nu\alpha}-g_{,\alpha}^{\mu\nu}\delta_{2}\tilde{\xi}_{1}^{\alpha}-\left( \delta_{1}\tilde{\xi}_{2}^{\nu}\right) _{,\alpha}g^{\mu\alpha}-\left( \delta_{1}\tilde{\xi}_{2}^{\mu}\right) _{,\alpha}g^{\nu\alpha}+g_{,\alpha}^{\mu\nu}\delta_{1}\tilde{\xi}_{2}^{\alpha}~;
\]
the terms in the first line (no contributions with $\tilde{\delta}\tilde{\xi}^{\alpha}$) give the same result as that for the known \textit{diff} (with
$\tilde{\xi}^{\alpha}$); the second line produces additional contributions
that can be combined into the following form
\[
\left( \delta_{2}\tilde{\xi}_{1}^{\nu}-\delta_{1}\tilde{\xi}_{2}^{\nu}\right) _{,\alpha}g^{\mu\alpha}+\left( \delta_{2}\tilde{\xi}_{1}^{\mu}-\delta_{1}\tilde{\xi}_{2}^{\mu}\right) _{,\alpha}g^{\nu\alpha}-g_{,\alpha}^{\mu\nu}\left( \delta_{2}\tilde{\xi}_{1}^{\alpha}-\delta_{1}\tilde{\xi}_{2}^{\alpha}\right) .
\]
After making some simple rearrangement, we obtain a general expression
\begin{equation}
\tilde{\xi}_{\left[ 1,2\right] }^{\alpha}=\tilde{\xi}_{2}^{\beta}\tilde{\xi}_{1,\beta}^{\alpha}-\tilde{\xi}_{1}^{\beta}\tilde{\xi}_{2,\beta}^{\alpha}+\delta_{2}\tilde{\xi}_{1}^{\alpha}-\delta_{1}\tilde{\xi}_{2}^{\alpha} \label{eqnL72}
\end{equation}
with additional contributions that must be explicitly calculated for the
particular field dependence of the parameters.
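The general structure of this commutator can be tested independently of the particular ADM parameter (\ref{eqnL61}). In the following sketch (ours) a deliberately simple field-dependent parameter, $\tilde{\xi}^{\alpha}=g^{00}\varepsilon^{\alpha}$ with field-independent $\varepsilon^{\alpha}$, is used in two dimensions; the commutator computed by direct variation of the fields agrees with the composite parameter $\tilde{\xi}_{\left[ 1,2\right] }^{\alpha}=\tilde{\xi}_{2}^{\beta}\tilde{\xi}_{1,\beta}^{\alpha}-\tilde{\xi}_{1}^{\beta}\tilde{\xi}_{2,\beta}^{\alpha}+\delta_{2}\tilde{\xi}_{1}^{\alpha}-\delta_{1}\tilde{\xi}_{2}^{\alpha}$.

```python
import sympy as sp

t, x, e = sp.symbols('t x e')
X = [t, x]
n = 2

g00, g01, g11 = [sp.Function(f'g{ij}')(t, x) for ij in ('00', '01', '11')]
g = sp.Matrix([[g00, g01], [g01, g11]])
ep1 = [sp.Function(f'eps1_{a}')(t, x) for a in range(n)]   # field-independent parameters
ep2 = [sp.Function(f'eps2_{a}')(t, x) for a in range(n)]

def var(gm, xi):
    # transformation of the form (eqnL60):
    # xi^nu_{,b} g^{mu b} + xi^mu_{,b} g^{nu b} - g^{mu nu}_{,b} xi^b
    return sp.Matrix(n, n, lambda mu, nu: sum(
        sp.diff(xi[nu], X[b])*gm[mu, b] + sp.diff(xi[mu], X[b])*gm[nu, b]
        - sp.diff(gm[mu, nu], X[b])*xi[b] for b in range(n)))

def xi_t(gm, eps):
    # toy field-dependent parameter (an arbitrary choice for this check): xi~^a = g^{00} eps^a
    return [gm[0, 0]*eps[a] for a in range(n)]

def delta(gm, eps):
    # the full transformation with the field-dependent parameter
    return var(gm, xi_t(gm, eps))

def vary(expr_fn, eps_outer):
    # apply delta_{eps_outer} to a g-dependent matrix expression (fields vary, eps do not)
    geps = g + e*delta(g, eps_outer)
    return expr_fn(geps).applyfunc(lambda q: sp.diff(q, e).subs(e, 0))

# [delta~_2, delta~_1] g^{mu nu}
comm = vary(lambda gg: delta(gg, ep1), ep2) - vary(lambda gg: delta(gg, ep2), ep1)

xi1, xi2 = xi_t(g, ep1), xi_t(g, ep2)
# delta_2 xi~_1 and delta_1 xi~_2 (variation of the field-dependent parameter)
dxi1 = [sp.diff(xi_t(g + e*delta(g, ep2), ep1)[a], e).subs(e, 0) for a in range(n)]
dxi2 = [sp.diff(xi_t(g + e*delta(g, ep1), ep2)[a], e).subs(e, 0) for a in range(n)]
# the composite parameter: diff-like part plus the extra delta-xi contributions
xi12 = [sum(xi2[b]*sp.diff(xi1[a], X[b]) - xi1[b]*sp.diff(xi2[a], X[b]) for b in range(n))
        + dxi1[a] - dxi2[a] for a in range(n)]

res = (comm - var(g, xi12)).applyfunc(sp.expand)
assert res == sp.zeros(n, n)
```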
In the first two terms of (\ref{eqnL72}), we merely substitute (\ref{eqnL61});
and in the last two terms (which are zero if the parameters are \textquotedblleft
field-independent\textquotedblright), we have
\begin{equation}
\delta_{2}\tilde{\xi}_{1}^{\alpha}-\delta_{1}\tilde{\xi}_{2}^{\alpha}=\delta_{0}^{\alpha}\varepsilon_{1}^{\perp}\delta_{2}\left( -g^{00}\right) ^{\frac{1}{2}}+\delta_{k}^{\alpha}\varepsilon_{1}^{\perp}\delta_{2}\left[ \frac{g^{0k}}{g^{00}}\left( -g^{00}\right) ^{\frac{1}{2}}\right] -\delta_{0}^{\alpha}\varepsilon_{2}^{\perp}\delta_{1}\left( -g^{00}\right) ^{\frac{1}{2}}-\delta_{k}^{\alpha}\varepsilon_{2}^{\perp}\delta_{1}\left[ \frac{g^{0k}}{g^{00}}\left( -g^{00}\right) ^{\frac{1}{2}}\right] . \label{eqnL73}
\end{equation}
(Note that in (\ref{eqnL73}) $\varepsilon^{k}$ is absent, since it enters
(\ref{eqnL61}) without field-dependent coefficients.) The final result for
(\ref{eqnL72}) can be presented in the same form as (\ref{eqnL61}):
\[
\tilde{\xi}_{\left[ 1,2\right] }^{\alpha}=\delta_{0}^{\alpha}\left( -g^{00}\right) ^{\frac{1}{2}}\varepsilon_{\left[ 1,2\right] }^{\perp}+\delta_{k}^{\alpha}\left[ \varepsilon_{\left[ 1,2\right] }^{k}+\frac{g^{0k}}{g^{00}}\left( -g^{00}\right) ^{\frac{1}{2}}\varepsilon_{\left[ 1,2\right] }^{\perp}\right]
\]
where
\begin{equation}
\varepsilon_{\left[ 1,2\right] }^{\perp}=\varepsilon_{2}^{k}\varepsilon
_{1,k}^{\bot}-\varepsilon_{1}^{k}\varepsilon_{2,k}^{\bot} \label{eqnL80}
\end{equation}
and
\begin{equation}
\varepsilon_{\left[ 1,2\right] }^{k}=\varepsilon_{2}^{m}\varepsilon
_{1,m}^{k}-\varepsilon_{1}^{m}\varepsilon_{2,m}^{k}+\left( \varepsilon
_{1,m}^{\bot}\varepsilon_{2}^{\bot}-\varepsilon_{2,m}^{\bot}\varepsilon
_{1}^{\bot}\right) e^{mk}. \label{eqnL81}
\end{equation}
Here the combination $e^{mk}$, which appears in Dirac's Hamiltonian analysis of
the EH action, is formed as
\[
e^{mk}=g^{mk}-\frac{g^{0m}g^{0k}}{g^{00}},\qquad g_{nm}e^{mk}=\delta_{n}^{k}.
\]
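Because $e^{mk}$ is the inverse of the spatial block $g_{nm}$ of the covariant metric, the identity $g_{nm}e^{mk}=\delta_{n}^{k}$ can be verified numerically. The following sketch (ours, not from the text) checks it for a randomly generated symmetric invertible matrix standing in for the metric; the matrix values are invented and carry no physical meaning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric invertible 4x4 matrix standing in for g_{mu nu};
# the signature is irrelevant for this purely algebraic identity.
M = rng.normal(size=(4, 4))
g_lower = M + M.T + 8 * np.eye(4)   # well-conditioned symmetric matrix
g_upper = np.linalg.inv(g_lower)    # g^{mu nu}

# e^{mk} = g^{mk} - g^{0m} g^{0k} / g^{00}, spatial indices m, k = 1..3
e = (g_upper[1:, 1:]
     - np.outer(g_upper[0, 1:], g_upper[0, 1:]) / g_upper[0, 0])

# Check g_{nm} e^{mk} = delta_n^k on the spatial block of the lower metric
assert np.allclose(g_lower[1:, 1:] @ e, np.eye(3))
```

The check follows from $g_{n\mu}g^{\mu k}=\delta_{n}^{k}$ and $g_{n\mu}g^{\mu 0}=0$, which together eliminate the $g^{0m}g^{0k}/g^{00}$ piece.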
Due to the presence of fields in (\ref{eqnL81}), one might conclude that this
\textquotedblleft soft\ algebra\textquotedblright\ structure signifies that
the symmetry transformations no longer form a group. This is a possible
outcome when fields appear in the structure constants, but not always. The
field independence of the parameters in a commutator of two transformations is
a sufficient condition to have an algebra, but not a necessary one.
With the appearance of fields, such as in (\ref{eqnL81}), the double
commutator must be checked by direct calculation. Again, we can use the
general form of the results and consider the double commutator. We return to
(\ref{eqnL72}), which is a general expression whatever the field dependence of
the gauge parameters might be, and by making the changes $1\rightarrow\left[
1,2\right] $ and $2\rightarrow3$, we obtain
\begin{equation}
\tilde{\xi}_{\left[ \left[ 1,2\right] ,3\right] }^{\alpha}=\tilde{\xi}_{3}^{\beta}\tilde{\xi}_{\left[ 1,2\right] ,\beta}^{\alpha}-\tilde{\xi}_{\left[ 1,2\right] }^{\beta}\tilde{\xi}_{3,\beta}^{\alpha}+\delta_{3}\tilde{\xi}_{\left[ 1,2\right] }^{\alpha}-\delta_{\left[ 1,2\right] }\tilde{\xi}_{3}^{\alpha}. \label{eqnL85}
\end{equation}
The evaluation of the first two terms is straightforward; but the second pair,
because of the presence of $\xi_{\left[ 1,2\right] }^{\alpha}$ (with fields,
see (\ref{eqnL81})), produces an additional contribution as compared to the
simple change of indices ($1\rightarrow\left[ 1,2\right] $ and
$2\rightarrow3$) in (\ref{eqnL73})
\[
\delta_{3}\tilde{\xi}_{\left[ 1,2\right] }^{\alpha}-\delta_{\left[ 1,2\right] }\tilde{\xi}_{3}^{\alpha}=\delta_{0}^{\alpha}\varepsilon_{\left[ 1,2\right] }^{\perp}\delta_{3}\left( -g^{00}\right) ^{\frac{1}{2}}+\delta_{k}^{\alpha}\varepsilon_{\left[ 1,2\right] }^{\perp}\delta_{3}\left[ \frac{g^{0k}}{g^{00}}\left( -g^{00}\right) ^{\frac{1}{2}}\right] +\delta_{k}^{\alpha}\delta_{3}\varepsilon_{\left[ 1,2\right] }^{k}
\]
\begin{equation}
-\delta_{0}^{\alpha}\varepsilon_{3}^{\perp}\delta_{\left[ 1,2\right] }\left( -g^{00}\right) ^{\frac{1}{2}}-\delta_{k}^{\alpha}\varepsilon_{3}^{\perp}\delta_{\left[ 1,2\right] }\left[ \frac{g^{0k}}{g^{00}}\left( -g^{00}\right) ^{\frac{1}{2}}\right] . \label{eqnL86}
\end{equation}
The last contribution in the first line was absent from (\ref{eqnL73}) because
$\varepsilon^{k}$ is field-independent. This additional contribution is
\[
\hat{\varepsilon}_{\left[ \left[ 1,2\right] ,3\right] }^{k}=\delta
_{3}\varepsilon_{\left[ 1,2\right] }^{k}=\left( \varepsilon_{1,m}^{\bot
}\varepsilon_{2}^{\bot}-\varepsilon_{2,m}^{\bot}\varepsilon_{1}^{\bot}\right)
\delta_{3}e^{mk}.
\]
Performing the transformation $\delta_{3}e^{mk}$ leads to
\begin{equation}
\hat{\varepsilon}_{\left[ \left[ 1,2\right] ,3\right] }^{k}=\left(
\varepsilon_{1,m}^{\bot}\varepsilon_{2}^{\bot}-\varepsilon_{2,m}^{\bot
}\varepsilon_{1}^{\bot}\right) \left\{ \varepsilon_{3,n}^{m}e^{kn}+\varepsilon_{3,n}^{k}e^{mn}-e_{,n}^{km}\varepsilon_{3}^{n}\right. \label{eqnL87}
\end{equation}
\[
\left. +\left( -g^{00}\right) ^{\frac{1}{2}}\left[ \left( \frac{g^{0k}}{g^{00}}\right) _{,n}e^{mn}+\left( \frac{g^{0m}}{g^{00}}\right)
_{,n}e^{kn}-e_{,n}^{km}\frac{g^{0n}}{g^{00}}-e_{,0}^{km}\right]
\varepsilon_{3}^{\perp}\right\} .
\]
The remaining contributions (see (\ref{eqnL85}) and (\ref{eqnL86})) are the
same as those found in the previous calculations, so we can use (\ref{eqnL81})
with ($1\rightarrow\left[ 1,2\right] $ and $2\rightarrow3$), as before, to obtain
\begin{equation}
\varepsilon_{\left[ \left[ 1,2\right] ,3\right] }^{k}=-\varepsilon_{\left[ 1,2\right] }^{i}\varepsilon_{3,i}^{k}+\varepsilon_{3}^{i}\varepsilon_{\left[ 1,2\right] ,i}^{k}+\varepsilon_{\left[ 1,2\right] ,m}^{\bot}\varepsilon_{3}^{\bot}e^{mk}-\varepsilon_{3,m}^{\bot}\varepsilon_{\left[ 1,2\right] }^{\bot}e^{mk}. \label{eqnL88}
\end{equation}
After the substitution of $\varepsilon_{\left[ 1,2\right] }^{i}$, we have
\[
\varepsilon_{\left[ \left[ 1,2\right] ,3\right] }^{k}=-\left(
-\varepsilon_{1}^{m}\varepsilon_{2,m}^{i}+\varepsilon_{2}^{m}\varepsilon
_{1,m}^{i}+\varepsilon_{1,m}^{\bot}\varepsilon_{2}^{\bot}e^{mi}-\varepsilon
_{2,m}^{\bot}\varepsilon_{1}^{\bot}e^{mi}\right) \varepsilon_{3,i}^{k}
\]
\[
+\varepsilon_{3}^{i}\left( -\varepsilon_{1}^{m}\varepsilon_{2,m}^{k}+\varepsilon_{2}^{m}\varepsilon_{1,m}^{k}+\varepsilon_{1,m}^{\bot}\varepsilon_{2}^{\bot}e^{mk}-\varepsilon_{2,m}^{\bot}\varepsilon_{1}^{\bot}e^{mk}\right) _{,i}
\]
\begin{equation}
+\left( -\varepsilon_{1}^{n}\varepsilon_{2,n}^{\bot}+\varepsilon_{2}^{n}\varepsilon_{1,n}^{\bot}\right) _{,m}\varepsilon_{3}^{\bot}e^{mk}-\varepsilon_{3,m}^{\bot}\left( -\varepsilon_{1}^{n}\varepsilon_{2,n}^{\bot}+\varepsilon_{2}^{n}\varepsilon_{1,n}^{\bot}\right) e^{mk}. \label{eqnL89}
\end{equation}
Combining (\ref{eqnL87}) and (\ref{eqnL89}) leads to some simplification; but
the condition that must be satisfied for the Jacobi identity to hold is
violated:
\[
\varepsilon_{\left[ \left[ 1,2\right] ,3\right] }^{k}+\varepsilon_{\left[
\left[ 2,3\right] ,1\right] }^{k}+\varepsilon_{\left[ \left[ 3,1\right]
,2\right] }^{k}\neq0
\]
(one contribution that prevents cancellation is the term in (\ref{eqnL87})
proportional to $e_{,n}^{km}\varepsilon_{3}^{n}$).
The EH action is invariant under the ADM transformations, but unlike
\textit{diff}, $\delta_{\mathit{ADM}}\left\{ g^{\mu\nu}\right\} $ do not
form a group. \ This result illustrates that all possible symmetries, which
can be constructed easily from various combinations of DIs, are not equally
good. There is one transformation (in general, some restricted class of
transformations) that forms a group; and such transformations constitute the
\textquotedblleft basic\textquotedblright\ or \textquotedblleft
true\textquotedblright\ gauge symmetry of the Lagrangian. \ In analogy with
the Hamiltonian formulation, one might call a symmetry that can form a group a
\textquotedblleft canonical\textquotedblright\ symmetry of the Lagrangian.
\section{Conclusion}
The application of Dirac's method to derive the Hamiltonian formulations of
the EH Lagrangian, $L_{EH}\left( g^{\mu\nu}\right) $, and the ADM
Lagrangian, $L_{ADM}\left( N,N^{i},\gamma_{km}\right) $, leads to two
different gauge symmetries; because of this difference in symmetries, it is no
surprise that their Hamiltonian formulations are not related by a canonical
transformation \cite{FKK, Myths}. If the Hamiltonian method is considered to
be an algorithm that allows one to restore a gauge symmetry, and if the
Lagrangian and Hamiltonian methods are equivalent, then one might conclude
that the two Lagrangians are not equivalent \cite{Myths}. The expression
\textquotedblleft non-canonicity puzzle\textquotedblright\ was coined to
describe this result \cite{CLM}. But if
equivalence of two Lagrangians is assumed, then one might alternatively
conclude that the Hamiltonian method is not an algorithm (at least in its
currently known form or for this particular case); thus Dirac's method must be
modified \cite{ShestakovaGandC, ShestakovaCQG}.
In this paper we offer a preliminary answer to the question of how to compare
the symmetries of two Lagrangians which differ by an invertible change of
variables. Before such an undertaking is made, it is essential to understand
how to distinguish two symmetries for the same Lagrangian. Based on Noether's
theorem, we demonstrate that both symmetries (\textit{diff} and ADM) are
symmetries of the EH Lagrangian, when written for the same variables; we also
demonstrate that more symmetries can be constructed using the Lagrangian
method. But a study of their group properties reveals that only one symmetry,
\textit{diff,}\ has group properties; and neither ADM nor any other
symmetries, constructed by using a so-called field-dependent redefinition of
gauge parameters, have such a property. Therefore, for the EH Lagrangian, only
one distinct symmetry with a group property exists (canonical symmetry).
To call two Lagrangians equivalent, any and all canonical symmetries should be
presented in both formulations. The ADM symmetry, which follows from the
Hamiltonian formulation of the ADM action, is not a canonical symmetry of the
EH action. Of course, the question whether the ADM formulation possesses
canonical symmetry needs to be answered. Such calculations are
straightforward, but extremely cumbersome (the penalty for working with
non-covariant variables); and the relatively simple calculations presented in
this article, which use a quasi-covariant form to allow one to consider
transformations for all components of metric at once, are impossible in the
case of the ADM Lagrangian. The calculations must be performed separately for
all fields, and the redefinition of the gauge parameters must be the same for
all fields. The DIs are also much more complicated, especially for the
transformation of the ADM variables under diffeomorphism\footnote{In addition,
such DIs are not covariant, and because of this, cannot be true in all
coordinate systems.}; and such transformations must also be checked to
determine if they correspond to symmetries with a group property for the ADM Lagrangian.
From the analysis of the invariance of the EH Lagrangian performed in this
paper, it follows that $\delta_{\mathit{diff}}$ has a group property; but
$\delta_{\mathit{ADM}}$ does not. We are currently undertaking an
investigation of the properties of these two symmetries for the ADM
Lagrangian. There are four possible cases, all of which lead to contradictions
and further questions. For the ADM Lagrangian, these cases are:
(a) both transformations form groups;
(b) neither transformation forms a group;
(c) $\delta_{\mathit{ADM}}$ forms a group, but not $\delta_{\mathit{diff}}$;
(d) $\delta_{\mathit{diff}}$ forms a group, but not $\delta_{\mathit{ADM}}$.
The first three of these cases lead to the non-equivalence of the Lagrangians.
Cases (a) and (b) raise a question about the uniqueness of Dirac's procedure.
The two transformations both form groups (case (a)), or neither of them forms
a group (case (b)); but only one symmetry is chosen by the Hamiltonian
procedure. Case (c) is consistent with the uniqueness of the Hamiltonian
method, as for the EH action, it selects a symmetry with a group property; but
the Lagrangians (ADM and EH) cannot be equivalent.
Case (d) would imply an equivalence of the canonical symmetries of the ADM and
EH Lagrangians, and that \textit{diff} is a symmetry with group properties for
the ADM Lagrangian; but such a conclusion contradicts the widely quoted
statement of Isham and Kuchar \cite{Isham}: \textquotedblleft the full group
of spacetime diffeomorphisms has \textit{somehow} got lost in making the
transition from the Hilbert action to the Dirac-ADM action\textquotedblright
\ (italic is ours)\footnote{The name of Dirac is used incorrectly in this
statement because Dirac's Hamiltonian is not canonically related to the ADM
Hamiltonian and, in addition, Dirac's modification of the EH action is
performed in a way that preserves Einstein's equations. Moreover, if case (d)
is correct, then the diffeomorphism has not \textquotedblleft got
lost\textquotedblright\ for either the Dirac or the ADM action.}.\ Such
statements were based on the results of the Hamiltonian formulation of the ADM
Lagrangian with ADM gauge transformations. And one can often find claims that
only spatial \textit{diff} is a symmetry of the ADM formulation. (Such
statements are not compatible at all with equivalence of the EH and ADM
actions.) So, case (d) would inexorably lead one to conclude that Dirac's
Hamiltonian method does not work for ADM variables (for the metric formulation
of the EH action, it picks the symmetry with group properties, but for the ADM
action it fails to do so). This outcome would force one to reconsider the
\textquotedblleft theoretical foundations\textquotedblright;\ to be more
precise, to reconsider Dirac's method, as suggested in \cite{ShestakovaGandC,
ShestakovaCQG}, and to doubt its validity as an algorithm (at least in its
current form). An algorithm should work without an \textit{a priori} knowledge
of the gauge symmetry, and not demand modification of the method to adjust its
outcome to the results that are known, \textit{a priori,} for a particular
Lagrangian (e.g. ADM). Note also that such a formulation should be expected to
be connected by a canonical transformation to the Hamiltonians of PSS and
Dirac. We plan to continue this discussion after completing the analysis of
the group properties of two transformations for the ADM Lagrangian.
There is another solution to the \textquotedblleft puzzle\textquotedblright
,\ but it would probably not be well accepted or considered seriously in view
of the movement to devalue the importance of general covariance. This
historical change of views on covariance is expressed perfectly by Norton
\cite{Norton}: \textquotedblleft When Einstein formulated his General Theory
of Relativity, he presented it as the culmination of his search for a
generally covariant theory. That this was the signal achievement of the theory
rapidly became the orthodox conception. A dissident view, however, tracing
back at least to objections raised by Eric Kretschmann in 1917, holds that
there is no physical content in Einstein's demand for general covariance. That
dissident view has grown into the mainstream. Many accounts of general
relativity no longer even mention a principle or requirement of general
covariance.\textquotedblright\
Considering the EH action and its original variables (the metric tensor), the
Hamiltonian method (innately non-covariant) or combinations of DIs (which can
be chosen to be unrestricted by covariance) both single out the one unique,
covariant symmetry. Covariance is neither demanded nor encoded in either of
these methods; but when they are applied to covariant actions only covariant
results are \textit{\textquotedblleft somehow\textquotedblright} produced.
Many statements can be found in the literature that are similar to the recent
one in \cite{HKG2010}: \textquotedblleft one of the beauties of general
relativity is that it is difficult to deform it without running into
inconsistencies\textquotedblright. Maybe, the solution to the
\textquotedblleft puzzle\textquotedblright\ is simple: do not destroy
covariance - \textquotedblleft one of the beauties\textquotedblright\ of
Einstein's theory; and do not deform it by using non-covariant variables.
Heeding these caveats will prevent one from \textquotedblleft running into
inconsistencies\textquotedblright, finding contradictions, and facing such
\textquotedblleft puzzles\textquotedblright. Further, instead of being on the
horns of a dilemma, to choose \textquotedblleft canonical or
covariant\textquotedblright\ \cite{CLM},\ one might simply conclude: only
covariant results are canonical for General Relativity.
\section{Acknowledgment}
We would like to thank A. Frolov, L.A. Komorowski, D.G.C. McKeon, and A.V.
Zvelindovsky for discussions.
\section{Introduction}
The chemistry of the interstellar medium can be roughly divided into two types:
gas phase chemistry and grain surface chemistry. The two types of chemistry are
coupled by the adsorption and desorption processes.
Species adsorbed on the grain surface migrate in a random walk manner, and they
may react with each other upon encounter at the same site (a local potential
minimum). The products can be released back to the gas phase through certain
desorption mechanisms. In addition to the gas phase chemistry, grain chemistry
is important for the material and energy budget of the interstellar medium. For
example, besides H$_2$, molecules such as methanol are believed to be formed on
the grain surfaces \citep{Garrod07}, because its relatively high abundance
\citep[see, e.g.,][]{Menten1988a} cannot be reproduced by gas phase chemistry.
Several methods have been used to model the gas-grain chemistry. In the rate
equation (RE) approach \citep[see, e.g., ][]{Hasegawa92}, the surface processes
are treated the same way as the gas phase processes. This works fine when the
number of reactants on a single grain is large (under the assumption that the
system is well stirred; see \citet{Gillespie07}), but might not be accurate
enough when the average populations\footnote{Here by ``population'' we mean the
number of a species in a volume of interest, and by ``average'' we mean an
ensemble average (i.e. average over many different realizations of the same
system setup). Hence ``population'' can only take non-negative integer values,
while ``average population'' is a non-negative real number.} of some reactants
on a single grain is small. This failure of the rate equation is related to the
treatment of two-body reactions. For the REs to be applicable, the probability
of one reactant being present should be independent of that of another being
present. This is not always true, especially when the average populations of
both reactants are low, in which case they might be highly correlated. The
flaws in employing the RE for grain-surface chemistry were pointed out by
\cite{Charnley97} and \cite{Tielens97}.
To remedy this problem, modification schemes based on some empirical,
heuristic, and/or physical reasoning have been applied to the RE approach
\citep{Caselli98, Stantcheva01}; these are called the modified rate equation (MRE)
approach. The validity of this method has been questioned \citep{Rae03}. A
modification scheme developed by \citet{Garrod08} uses different functional
forms for different surface populations, taking various competition processes
and refinements into account. It has been shown to work very well, even for
very large reaction networks \citep{Garrod09}.
Mathematically, the gas-grain system should be viewed as a stochastic chemical
system \citep[see, e.g.,][]{McQuarrie67, Gillespie76, Charnley98a}, being
described by a probability distribution $P(\vec{x}, t)$, which is the
probability that the system has a population vector $\vec{x}$ at time $t$, with
$x_i$ being the number of the $i$th species in the system. The evolution
equation of $P(\vec{x}, t)$ is the so-called master equation, whose form is
determined by the reaction network.
Many sophisticated methods have been proposed (mainly outside the astro-chemical
community; see, e.g., the operator method described in \cite{Mattis98}, or the
variational approach used by \cite{Ohkubo08}) to solve the master equation.
However, these methods work fine only when either the chemical network is
small or some special assumptions are made in the derivation, thus their
validity in the general case should be questioned. It is unclear whether
these methods can be generalized to large complex networks.
The numerical solution of the master equation has also been performed
\citep{Biham01, Stantcheva02,Stantcheva04}. To limit the number of variables in
the set of differential equations and to separate the deterministic and
stochastic species, usually {\it a priori} knowledge of the system is required
in these studies. The steady state solution of the master equation
can also be obtained analytically in some very simple cases, such as the formation
of H$_2$ molecules on the grain surface \citep{Green01, Biham02}.
On the other hand, the master equation prescription can be ``realized'' through
a stochastic simulation algorithm (SSA), proposed by \cite{Gillespie76} (see
also \citet{Gillespie77} and \citet{Gillespie07}). In this approach, the
waiting time for the next reaction to occur, as well as which specific reaction
will occur are random variables that are completely determined by the master
equation, so this approach should be considered the most accurate. In principle,
multiple runs are needed to average out the random fluctuations, but in
practice this is unnecessary if one only cares about the abundant species. This
approach has been applied successfully to astrochemical problems
\citep{Charnley98a, Charnley01a, Vasyunin09}, even in the case of very large
networks \citep{Vasyunin09}. Besides providing results that are accurate, this
approach is very easy to implement. However, it requires a very long run time
for large networks if a long evolution track is to be followed, although some
approximate accelerated methods do exist \citep[e.g.][]{Gillespie00}.
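To make the algorithm concrete, here is a minimal sketch of Gillespie's direct method (ours, for illustration only) for a toy surface system with two channels: accretion $\mathrm{a}\rightarrow\mathrm{A}$ at constant rate $k_{\rm ad}$, and recombination $\mathrm{A}+\mathrm{A}\rightarrow\mathrm{E}$ with propensity $k_{\rm AA}A(A-1)$. The function name and rate values are invented:

```python
import random

def ssa(k_ad, k_aa, t_end, seed=42):
    """Gillespie direct method for: (1) accretion -> A, (2) A + A -> E."""
    rng = random.Random(seed)
    t, A, E = 0.0, 0, 0
    while True:
        a1 = k_ad                    # propensity of accretion
        a2 = k_aa * A * (A - 1)      # propensity of A + A -> E
        a_tot = a1 + a2
        # The waiting time to the next reaction is exponentially
        # distributed with parameter a_tot.
        t += rng.expovariate(a_tot)
        if t > t_end:
            return A, E
        # Choose which reaction fires, with probability a_i / a_tot.
        if rng.random() * a_tot < a1:
            A += 1
        else:
            A -= 2
            E += 1

A, E = ssa(k_ad=1.0, k_aa=0.1, t_end=100.0)
```

A single run yields one realization of the integer populations; averaging many runs with different seeds recovers the ensemble averages governed by the master equation.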
The SSA described above is somewhat different from a Monte Carlo (MC) approach
which has also been applied to astrochemistry \citep[e.g.][]{Tielens82};
however, this approach is not rigorously consistent with the master equation
\citep[see the comment by][]{Charnley05}, and can lead to a reaction
probability higher than 1 \citep{Awad05} in certain cases. The nomenclature of
these two approaches is not always consistent in the astrochemical
literature\footnote{For a discussion about the relations and differences
between ``stochastic simulation'' and ``Monte Carlo'', see \citet{Kalos08} and
\citet{Ripley08}.}. For example, the SSA used by \citet{Vasyunin09} is called
the Monte Carlo approach in their paper. Hereafter, we use the term ``Monte
Carlo'' when referring to the rigorous stochastic simulation approach of
Gillespie.
By taking various moments of the master equation, the so-called moment equation
(ME) is obtained \citep{Lipshtat03, Barzel07a, Barzel07b}. This set of
equations describes the evolution of both the average population of each
species and the average value of the products of the population of a group of
species, usually cut off at the second order moments. Its formulation is
similar to that of the RE, so it is relatively easy to implement. Furthermore,
in this approach the gas phase chemistry and grain surface chemistry can be
coupled together naturally. It has been tested on small surface networks.
In the present paper, we propose yet another approach to modeling gas-grain
chemistry, named the hybrid moment equation (HME) approach. The goal is to find
a systematic, automatic, and fast way to model gas-grain chemistry as
accurately as possible. Our method is based on the ME approach. Different
approximations are applied to the MEs at different times depending on the
overall populations at that specific time. It is hybrid in the sense that the
RE and the ME are combined together. The basic modification and competition
scheme presented in \cite{Garrod08} can be viewed as a semi-steady-state
approximation to our approach (by assuming that the time derivatives of certain
second order moments are equal to zero), while our approach can also be viewed
as a combination of the ME approach of \cite{Barzel07a} and the RE. In our
approach, the MEs are generated automatically with the generating function
technique, and in principle MEs up to any order can be obtained this way. We
benchmark our approach against the exact MC approach (i.e. the SSA of
Gillespie).
The remaining part of this paper is organized as follows. In section
\ref{sec:DesHyb}, we review the chemical master equation and ME, then describe
the main steps of the HME approach. In section \ref{sec:Benchmark}, we
benchmark the HME approach with a cutoff at the second order and the RE
approach against the MC approach with a large gas-grain network; we also test
the HME approach with a cutoff at the third order on a small network. In
section \ref{sec:Discussion}, we discuss the performance of the HME, and its
relation with previous approaches, as well as possibilities for additional
improvements. Our way of generating the MEs is described in Appendix
\ref{apdxA}. The surface chemical network we used for benchmarking
is listed in Appendix \ref{apdxKeane}.
\section{Description of the hybrid moment equation (HME) approach}
\label{sec:DesHyb}
In this section, we first review both the chemical master and moment
equations. Although this content can be found in many other papers
\citep[e.g.,][]{Charnley98a, Gillespie07}, we present them here as they are the
basis of our HME approach. We then describe the MEs and REs for a simple set of
reactions as an example, to demonstrate how the HME approach naturally arises
a combination of ME and RE. Finally we show the main steps of the HME
approach.
\subsection{The chemical master equation and the moment equation (ME)}
A chemical system at a given time $t$ can be described by a state vector
$\vec{x}$ which changes with time, with its $j$th component $x_j$ being the
number of the $j$th species in this system. As a chemical system is usually
stochastic, $\vec{x}$ should be viewed as a random variable, whose probability
distribution function $P(\vec{x}, t)$ evolves with time according to the master
equation \citep{Gillespie07}
\begin{equation} \partial_t P(\vec{x}, t) =
\sum_{i=1}^{M} [a_i(\vec{x}-\vec{\nu}_i) P(\vec{x}-\vec{\nu}_i, t) -
a_i(\vec{x}) P(\vec{x}, t)], \label{eqn:mastereq}
\end{equation}
where $a_i(\vec{x})$ is called the propensity function, $a_i(\vec{x})\Delta
t$ is the probability that given a current state vector $\vec{x}$ an $i$th
reaction will happen in the next infinitesimal time interval $\Delta t$, and
$\vec{\nu}_i$ is the stoichiometry vector of the $i$th reaction. The sum is
over all the reactions, and $M$ is the total number of reactions.
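For a network with only linear (single-body) propensities, the master equation can be integrated directly on a truncated state space, and the resulting mean coincides with the rate-equation solution. A minimal sketch (our own toy example, not from the paper): adsorption at constant rate $k_{\rm ad}$ and evaporation with propensity $k_{\rm evap}x$, with all numerical values invented and the state space truncated at a population that is effectively unreachable:

```python
import numpy as np

k_ad, k_evap = 2.0, 1.0
N = 60                        # truncation of the population state space
P = np.zeros(N); P[0] = 1.0   # start with an empty grain surface
dt, t_end = 1e-3, 10.0

# Forward-Euler integration of dP(x)/dt = gain - loss for each state x.
for _ in range(int(t_end / dt)):
    dP = np.zeros(N)
    for x in range(N):
        gain = (k_ad * P[x - 1] if x > 0 else 0.0) \
             + (k_evap * (x + 1) * P[x + 1] if x < N - 1 else 0.0)
        loss = (k_ad + k_evap * x) * P[x]
        dP[x] = gain - loss
    P += dt * dP

mean = np.dot(np.arange(N), P)
# RE solution: <x>(t) = (k_ad/k_evap)(1 - exp(-k_evap t)), i.e. -> 2 here.
```

The steady-state distribution of this linear system is Poissonian with mean $k_{\rm ad}/k_{\rm evap}$; once two-body reactions enter, the hierarchy no longer closes and such a direct integration requires the full multidimensional state space.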
The ME is derived by taking moments of the master equation.
For example, for the \emph{first order} moment $\langle x_j\rangle$,
which is simply the average number of species $j$, $\langle x_j\rangle \equiv
\sum_{\vec{x}} P(\vec{x}, t) x_j$, its evolution is determined by
\citep{Gillespie07}
\begin{align}
\partial_t\langle x_j\rangle=& \sum_{\vec{x}}\partial_t[P(\vec{x}, t)]x_j \nonumber\\
=& \sum_i^M\sum_{\vec{x}} x_j[a_i(\vec{x}-\vec{\nu}_i) P(\vec{x}-\vec{\nu}_i, t) - a_i(\vec{x}) P(\vec{x}, t)]\nonumber\\
=& \sum_i^M\sum_{\vec{x}} [(x_j+\nu_{ij})a_i(\vec{x}) P(\vec{x}, t) - x_j a_i(\vec{x}) P(\vec{x}, t)]\nonumber\\
=& \sum_i^M\sum_{\vec{x}} \nu_{ij}a_i(\vec{x}) P(\vec{x}, t)
= \sum_i^M \nu_{ij}\langle a_i(\vec{x})\rangle,
\end{align}
where $\nu_{ij}$ is the $j$th component of the stoichiometry vector of the
$i$th reaction, i.e. the number of the $j$th species produced (negative when
consumed) by the $i$th reaction; the third equality follows from shifting the
summation variable $\vec{x}\rightarrow\vec{x}+\vec{\nu}_i$ in the first term.
For higher order moments, the corresponding
evolution equations can be similarly derived, although the final form will be
more complex. In Appendix \ref{apdxA}, we present another method based on the
generating function technique to derive the MEs, which is more suitable for
programming.
For the simplest network, in which all the reactions are single-body reactions,
$a_i(\vec{x})$ is a linear function of $\vec{x}$. In this case the ME is closed
and can easily be solved. However, when two-body reactions are present, this is
no longer true, as $\langle a_i(\vec{x})\rangle$ might be of a form $\langle
x_k (x_k-1) \rangle$ or $\langle x_k x_l \rangle$, which is of order two and
cannot be determined in general by the lower order moments. Hence additional
equations governing their evolution should be included, i.e., they should be
taken to be independent variables. The evolution equation of these second order
moments may also involve moments of order three, and this process continues
without an end, thus the ME is actually an infinite set of
coupled equations (although in principle they are not completely independent if
the chemical system being considered is finite, which leads to a
finite-dimensional space of state vectors). The equation cannot be solved
without a compromise, e.g., a cutoff procedure, except for the simplest cases
in which an analytical solution is obtainable in the steady state.
\subsection{The MEs and REs for a set of reactions}
We take the following symbolic reactions as an illustrative example
\begin{align}
\mbox{Adsorption:}\quad & a \xrightarrow{k_{\rm ad}} A, \label{exareacaA} \\
\mbox{Evaporation:}\quad & A \xrightarrow{k_{\rm evap}} a, \\
\mbox{Surface reaction:}\quad & A + B \xrightarrow{k_{\rm AB}} C + D, \\
\mbox{Surface reaction:}\quad & A + A \xrightarrow{k_{\rm AA}} E, \label{exareacAA}
\end{align}
where the $k$s are the reaction rates of each reaction, A -- E are assumed to be
surface species that are distinct from each other, and ``a'' is the gas
phase counterpart of A.
In the following we first write down the MEs and REs for this system, then
discuss the relations and differences between them, as well as the relation
between a cutoff of MEs and a cutoff of master equations in previous studies.
These discussions will be essential to developing our HME approach.
\subsubsection{The MEs for this system}
The propensity functions for the above four reactions are $k_{\rm ad} a$,
$k_{\rm evap} A$, $k_{\rm AB} AB$, and $k_{\rm AA} A(A-1)$, respectively. Here
for convenience we use the letter ``A'' to represent both the name of a species
and the population of the corresponding species.
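In code, the propensity of each reaction is a simple function of the current integer populations; a small sketch (the function name and the rate values used below are ours):

```python
def propensities(a, A, B, k_ad, k_evap, k_AB, k_AA):
    """Propensity of each of the four example reactions, given the
    integer populations a, A, B (the products C, D, E never react)."""
    return [k_ad * a,            # a -> A      (adsorption)
            k_evap * A,          # A -> a      (evaporation)
            k_AB * A * B,        # A + B -> C + D
            k_AA * A * (A - 1)]  # A + A -> E
```

Note the $A(A-1)$ factor for the homogeneous reaction: a single A on the grain has no partner, so the propensity vanishes for $A\leq 1$, which is exactly the correlation effect the plain RE misses.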
For the first order moments, we have
\begin{align}
\partial_t \langle A\rangle = & k_{\rm ad} \langle a\rangle - k_{\rm evap}
\langle A\rangle - k_{\rm AB} \langle A B\rangle - 2 k_{\rm AA} \langle A(A-1)
\rangle, \label{eqn:Aa} \\
\partial_t \langle C\rangle = & k_{\rm AB} \langle A B\rangle, \label{eqn:Ca} \\
\partial_t \langle E\rangle = & k_{\rm AA} \langle A(A-1)\rangle. \label{eqn:Ea}
\end{align}
Other similar equations are omitted. The symbol $\langle *\rangle$
is used to represent the average population of ``*'' in the system; the average
should be understood as an ensemble average.
The second order moments $\langle A B\rangle$ and $\langle A(A-1)\rangle$ have
their own evolution equations, which are
\begin{align}
\partial_t \langle A B\rangle = & k_{\rm ad} \langle a B\rangle - k_{\rm evap}
\langle A B\rangle \notag \\ &- k_{\rm AB} [\langle A(A-1) B\rangle + \langle A
B(B-1)\rangle + \langle A B\rangle] \label{eqn:ABa} \\ & -2 k_{\rm AA} \langle
A(A-1)B\rangle, \notag
\end{align}
\begin{align}
\partial_t \langle A(A-1)\rangle = & 2k_{\rm ad} \langle a A\rangle - 2k_{\rm
evap} \langle A(A-1)\rangle \notag \\ &- 2k_{\rm AB} \langle A(A-1) B\rangle
\label{eqn:AAa} \\ & -2 k_{\rm AA} [2\langle A(A-1)(A-2)\rangle + \langle
A(A-1)\rangle]. \notag
\end{align}
For this simple example set of reactions (equation (\ref{exareacaA} --
\ref{exareacAA}) ), the above equations can be easily obtained from the master
equation \citep[see, e.g.,][page 8]{Lipshtat03}. In the general case (e.g.,
when A -- E are not completely distinct from each other), an automatic way of
obtaining the MEs is described in Appendix \ref{apdxA}. The method described there
is also applicable to moments with any order, and to all the common
reaction types in astrochemistry.
In general, the third order moments in the above equations cannot be expressed
as a function of the lower order moments, so they need their own differential
equations. In the case of a cutoff at the second order, the chain of equations,
however, stops here. We describe the method required to evaluate them in
section~\ref{sec:HMEapp}.
\subsubsection{The REs for this system}
When using REs, equations (\ref{eqn:Aa} -- \ref{eqn:AAa}) are replaced by
\begin{align}
\partial_t \langle A\rangle =\ & k_{\rm ad} \langle a\rangle - k_{\rm evap}
\langle A\rangle - k_{\rm AB} \langle A\rangle\langle B\rangle - 2 k_{\rm AA}
\langle A \rangle^2, \tag{\ref{eqn:Aa}$'$} \\
\partial_t \langle C\rangle =\ & k_{\rm AB} \langle A\rangle\langle B\rangle,
\tag{\ref{eqn:Ca}$'$}\\
\partial_t \langle E\rangle =\ & k_{\rm AA} \langle A\rangle^2.
\tag{\ref{eqn:Ea}$'$} \\
\partial_t [\langle A\rangle\langle B\rangle] =\ & k_{\rm ad} \langle a\rangle
\langle B\rangle - k_{\rm evap} \langle A\rangle\langle B\rangle \notag \\
&- k_{\rm AB} [\langle A \rangle^2\langle B\rangle + \langle A\rangle\langle
B\rangle^2] \tag{\ref{eqn:ABa}$'$} \\
& -2 k_{\rm AA} \langle A\rangle^2\langle B\rangle, \notag \\
\partial_t [\langle A\rangle^2] =\ & 2k_{\rm ad} \langle a\rangle\langle
A\rangle - 2k_{\rm evap} \langle A\rangle^2 \notag \\
&- 2k_{\rm AB} \langle A\rangle^2 \langle B\rangle -4 k_{\rm AA} \langle
A\rangle^3. \tag{\ref{eqn:AAa}$'$}
\end{align}
The equations for $\langle A\rangle\langle B\rangle$
and $\langle A\rangle^2$ are of course not needed in the RE approach; they are
simply derived from equation (\ref{eqn:Aa}$'$) (and an omitted similar equation
for $\langle B\rangle$) using the product rule of calculus.
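The consistency of equations (\ref{eqn:ABa}$'$) and (\ref{eqn:AAa}$'$) with equation (\ref{eqn:Aa}$'$) can be checked symbolically. The sketch below applies the product rule to the RE right-hand sides; it assumes, as the terms present in equation (\ref{eqn:ABa}$'$) suggest, that $B$ is neither adsorbed nor evaporated in this example, and all rate coefficients are kept symbolic.

```python
import sympy as sp

a, A, B = sp.symbols('a A B', positive=True)
k_ad, k_evap, k_AB, k_AA = sp.symbols('k_ad k_evap k_AB k_AA', positive=True)

# RE right-hand sides for <A> and <B>; B is only consumed by A + B -> C here
dA = k_ad*a - k_evap*A - k_AB*A*B - 2*k_AA*A**2
dB = -k_AB*A*B

# Product rule: d(<A><B>)/dt = <B> d<A>/dt + <A> d<B>/dt, and d(<A>^2)/dt = 2<A> d<A>/dt
dAB = sp.expand(B*dA + A*dB)
dAA = sp.expand(2*A*dA)

# Right-hand sides as written in equations (ABa') and (AAa')
dAB_paper = sp.expand(k_ad*a*B - k_evap*A*B
                      - k_AB*(A**2*B + A*B**2) - 2*k_AA*A**2*B)
dAA_paper = sp.expand(2*k_ad*a*A - 2*k_evap*A**2
                      - 2*k_AB*A**2*B - 4*k_AA*A**3)

assert dAB == dAB_paper and dAA == dAA_paper
print("product-rule forms agree with (ABa') and (AAa')")
```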
\subsubsection{The relation between MEs and REs} \label{sec:relaMERE}
The differences between the MEs (equation \ref{eqn:Aa} -- \ref{eqn:AAa}) and
the REs (equation \ref{eqn:Aa}$'$ -- \ref{eqn:AAa}$'$) in the present case are as follows:
All the $\langle AB\rangle$ are replaced by $\langle A\rangle\langle
B\rangle$, all the $\langle A(A-1)\rangle$ are replaced by $\langle
A\rangle^2$, all the $\langle A(A-1)B\rangle$ are replaced by $\langle
A\rangle^2\langle B\rangle$, the $\langle AB(B-1)\rangle$ is replaced by
$\langle A\rangle\langle B\rangle^2$, and the $\langle A(A-1)(A-2)\rangle$ is
replaced by $\langle A\rangle^3$. Furthermore, the term $k_{\rm AB}\langle
AB\rangle$ in equation (\ref{eqn:ABa}) and the term $k_{\rm AA}\langle
A(A-1)\rangle$ in equation (\ref{eqn:AAa}) disappear in the RE
(\ref{eqn:ABa}$'$) and (\ref{eqn:AAa}$'$).
These differences make clear why the REs are accurate when the involved species
are abundant (namely when $\langle A\rangle{\gg}1$ and $\langle B\rangle{\gg}1$).
This is because, in this case, $\langle AB\rangle$ can be approximated well by%
\footnote{Assuming Poisson statistics, we have
$$\frac{|\langle AB\rangle-\langle A\rangle\langle
B\rangle|}{\langle A\rangle\langle B\rangle}\lesssim \sqrt{\frac{1}{\langle
A\rangle}+\frac{1}{\langle B\rangle}}\ll1.$$}
$\langle A\rangle\langle B\rangle$, and $\langle A(A-1)\rangle$ can be
approximated well by $\langle A\rangle^2$.
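The footnote estimate can be checked numerically. For independent Poisson-distributed populations with means well above one, the sampled $\langle A(A-1)\rangle$ and $\langle AB\rangle$ lie close to $\langle A\rangle^2$ and $\langle A\rangle\langle B\rangle$; the means used below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_A, lam_B = 50.0, 30.0                             # abundant species: means >> 1
A = rng.poisson(lam_A, size=1_000_000).astype(float)
B = rng.poisson(lam_B, size=1_000_000).astype(float)  # independent of A

# Relative deviations of <A(A-1)> from <A>^2 and of <AB> from <A><B>
err_AA = abs((A*(A-1)).mean() - A.mean()**2) / A.mean()**2
err_AB = abs((A*B).mean() - A.mean()*B.mean()) / (A.mean()*B.mean())
print(err_AA, err_AB)   # both are tiny compared with 1
```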
The RE approach will be erroneous when $\langle A\rangle$ or $\langle B\rangle$
are smaller than 1 because, in this case, the correlation between $A$ and $B$
might cause $\langle AB\rangle$ to differ considerably from $\langle
A\rangle\langle B\rangle$, and the fluctuation in $A$ might cause $\langle
A(A-1)\rangle$ to differ considerably from $\langle A\rangle^2$. It can also be
viewed as follows: in equations (\ref{eqn:ABa}$'$) and
(\ref{eqn:AAa}$'$), which govern the evolution of the second order moments, the
omitted term $k_{\rm AB}\langle AB\rangle$ might be much larger than the
retained terms such as $\langle A\rangle^2\langle B\rangle$ or $\langle
A\rangle\langle B\rangle^2$, and the omitted term $k_{\rm AA}\langle
A(A-1)\rangle$ might be much larger than the retained term $\langle
A\rangle^3$.
\subsubsection{The relation between a cutoff of MEs and a cutoff of possible states in previous master equation approaches}\label{sec:relacutME}
In Eqs. (\ref{eqn:Aa} -- \ref{eqn:AAa}) we do not write terms such as $\langle
A(A-1)\rangle$ in the split form $\langle A^2\rangle - \langle A\rangle$. We
keep terms such as $\langle A(A-1) B\rangle$ and $\langle A(A-1)(A-2)\rangle$
in their present forms intentionally. One reason for this is that terms such as
$\langle A(A-1)\rangle$ look more succinct and follow naturally from our way of
deriving them (see Appendix \ref{apdxA}). When $\langle A\rangle \gg 1$,
$\langle A(A-1)\rangle$ and $\langle AB\rangle$ can be directly replaced by
$\langle A\rangle^2$ and $\langle A\rangle\langle B\rangle$, respectively, to
obtain the RE formulation.
More importantly, this formulation can be directly connected to the cutoff
schemes in the previous master equation approaches \citep[e.g.,][]{Biham01,
Stantcheva02}. For example, in a scheme in which no more than two particles of A
are expected to be present on a single grain at the same time, we have $P(A{>}2)
= 0$. In this case, $\langle A(A-1)(A-2)\rangle = \sum_{A=3}^{\infty} P(A)
A(A-1)(A-2) = 0$. Thus we see that a cutoff at a population of two in the master
equation approach corresponds naturally to assigning a zero value to moments
containing $A$ more than twice, as long as the moments are defined in the form
presented above.
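This correspondence is easy to check directly: for any distribution with $P(A{>}2)=0$, every term of $\langle A(A-1)(A-2)\rangle$ vanishes, while lower order moments need not. The probabilities below are hypothetical.

```python
import numpy as np

P = np.array([0.5, 0.3, 0.2])     # P(A=0), P(A=1), P(A=2); P(A>2) = 0
A = np.arange(3, dtype=float)

third = float(np.sum(P * A*(A-1)*(A-2)))   # <A(A-1)(A-2)>: vanishes identically
second = float(np.sum(P * A*(A-1)))        # <A(A-1)>: need not vanish
print(third, second)   # 0.0 0.4
```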
\subsection{The HME approach} \label{sec:HMEapp}
The HME approach is a combination of the ME and RE approaches. The basic idea
is that, for deterministic (average population $>$1) species, the REs are used,
while for stochastic (average population $<$1) species, the stochastic effects
are taken into account by including higher order moments in the equations.
Since a deterministic species may become stochastic as time goes by, and vice
versa, the set of ODEs governing the evolution of the system also changes with
time, and is determined dynamically. A flow chart of our HME code is shown in
Fig. \ref{fig:flowchart}.
We first set up all potentially needed MEs (using the procedure described
in Appendix~\ref{apdxA}), with a cutoff of moments at a prescribed order
(usually two). After this and some other initialization work, the program
enters the main loop.
The main loop contains an ODE solver because the system of MEs is a set of
ordinary differential equations (ODEs). We use the solver from the {\it
ODEPACK} package\footnote{Downloaded from www.netlib.org}.
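The MEs form a stiff ODE system, which is why an implicit solver such as those in {\it ODEPACK} is needed. The toy problem below (not part of the chemical model; the rate constant is arbitrary) shows the failure mode that stiff solvers avoid: for $k\,\Delta t \gg 1$, explicit Euler diverges while backward Euler converges to the correct fixed point.

```python
# Stiff test problem: dy/dt = -k (y - 1), y(0) = 0; exact solution tends to 1.
k, dt, nsteps = 1.0e6, 1.0e-3, 50   # k*dt = 1000: far outside explicit stability

y_exp = y_imp = 0.0
for _ in range(nsteps):
    # explicit Euler: y_{n+1} = y_n + dt * f(y_n); amplification factor 1 - k*dt
    y_exp = y_exp + dt * (-k * (y_exp - 1.0))
    # backward Euler: y_{n+1} = y_n + dt * f(y_{n+1}), solved for y_{n+1}
    y_imp = (y_imp + dt * k) / (1.0 + dt * k)

print(abs(y_exp) > 1e100, abs(y_imp - 1.0) < 1e-2)   # True True
```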
Not all MEs and moments are used at all times; which ones are used is
determined dynamically. In each iteration of the main loop, we check whether
any surface species has changed from stochastic to deterministic, or from
deterministic to stochastic (the gas phase species are always treated as
deterministic, regardless of how small their average populations are). If
either type of transition has occurred, we re-examine all the moments and
determine how to treat them. There are four cases:
\begin{enumerate}
\item All the first order moments are treated as independent variables.
\item If a moment consists of only stochastic species, and its order is no
larger than the prescribed highest allowed order, it will be treated as
an independent variable, and the corresponding moment equation will be
included and solved. For the sake of numerical stability, its value
should be no larger than its deterministic counterpart. For example, if
the ODE solver yields a value of $\langle AB\rangle>\langle
A\rangle\langle B\rangle$, then the latter value will be assigned to
$\langle AB\rangle$.
\item If a moment consists of only stochastic species, and its order is
larger than the prescribed highest allowed order, its value will be set
to zero, and of course, its moment equation will not be solved.
For example, if $\langle A\rangle{<}1$ and $\langle B\rangle{<}1$,
then, with a highest allowed order set to two, moments such as $\langle
A(A-1)(A-2)\rangle$ and $\langle A(A-1)B\rangle$ will be set to zero.
This follows from the discussion in section \ref{sec:relacutME}.
\item If a moment contains at least one deterministic species, it will not be
treated as an independent variable, and its moment equation will not be
solved. It can be evaluated in the following way: assuming that the
moment under consideration has a form $\langle AB(B-1)\rangle$, and
that $A$ is deterministic (i.e. $\langle A\rangle{>}1$), then the value
of $\langle AB(B-1)\rangle$ is set to be $\langle A\rangle\langle
B(B-1)\rangle$. If $B$ is also deterministic, then it will be evaluated
as $\langle A\rangle\langle B\rangle^2$. This follows from the
discussion in section \ref{sec:relaMERE}.
\end{enumerate}
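The four cases can be condensed into a small decision procedure. The following is an illustrative sketch, not the actual implementation; the function name and species labels are hypothetical.

```python
HIGHEST_ORDER = 2   # prescribed cutoff order of the moments

def treat_moment(species, stochastic):
    """species: list of species names in the moment (repetitions give the order).
    stochastic: dict mapping species name -> True if average population < 1."""
    order = len(species)
    if order == 1:
        return "independent variable"                    # case 1
    if all(stochastic[s] for s in set(species)):
        if order <= HIGHEST_ORDER:
            return "independent variable"                # case 2 (capped by RE value)
        return "set to zero"                             # case 3
    return "factor out the deterministic species"        # case 4

flags = {"A": True, "B": True, "gH2O": False}
print(treat_moment(["A"], flags))             # case 1
print(treat_moment(["A", "B"], flags))        # case 2
print(treat_moment(["A", "A", "A"], flags))   # case 3
print(treat_moment(["gH2O", "B"], flags))     # case 4
```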
From these procedures, we see that the number of equations, as well as the form
of these equations will change when a transition between stochastic and
deterministic state of certain species occurs. Each time the ODE system is
updated, the ODE solver must therefore be re-initialized.
It seems possible to replace the sharp transition between the stochastic and
deterministic state of a species (based on whether its average population is
smaller than 1) with a smooth transition, e.g., using a weight function similar
to that in \citet{Garrod08}. However, it is not mathematically clear which
weight function we should choose, and an arbitrary one might cause some artificial
effects, so we prefer not to use this formulation.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth, keepaspectratio]{16262fg1.ps}
\caption{A flow chart of the main components of our HME code. Steps 2, 5, 6, and
8 are described in detail in the text.}
\label{fig:flowchart}
\end{figure}
\section{Benchmark with the Monte Carlo approach} \label{sec:Benchmark}
We compare the results of our HME approach with those from the exact
stochastic simulation \citep{Gillespie07, Charnley98a, Charnley01a,
Vasyunin09}. The RE results are also compared for reference. As in previous
studies \citep{Charnley01a, Vasyunin09}, we consider a closed chemical system
in a volume containing exactly one grain particle. The number of each species
in this volume is called a ``population'', which can be translated into
an abundance relative to H nuclei by multiplying it by the grain-to-H-nuclei number ratio,
which is $2.8\times10^{-12}(0.1\ \mu{\rm m}/r)^3$, where $r$ is the grain
radius, assuming an average molecular weight of 1.4, a dust-to-gas mass ratio
0.01, and a density of grain material of 2 g cm$^{-3}$.
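The quoted conversion factor follows directly from the stated assumptions (mean molecular weight 1.4, dust-to-gas mass ratio 0.01, grain material density 2 g cm$^{-3}$); a quick numerical check:

```python
import math

m_H = 1.6726e-24     # hydrogen-atom mass in g
mu = 1.4             # mean molecular weight per H nucleus
d2g = 0.01           # dust-to-gas mass ratio
rho = 2.0            # grain material density, g cm^-3

def grains_per_H(r_cm):
    """Number of grains per H nucleus for grain radius r_cm (in cm)."""
    grain_mass = (4.0 / 3.0) * math.pi * r_cm**3 * rho
    return d2g * mu * m_H / grain_mass

print(grains_per_H(1.0e-5))   # r = 0.1 um: ~2.8e-12
print(grains_per_H(2.0e-6))   # r = 0.02 um: ~3.5e-10
```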
In the MC approach, the number of each species in this volume at any time is
an integer. Owing to the large number of time steps (${>}10^9$), it is
impractical to store all intermediate steps, so we average the population
of each species in time, weighted by the time intervals (remember that the
lengths of time intervals between reactions are also random in MC). Because of
this weighted average (rather than merely saving the state vector at certain
instants), the MC approach can resolve average populations much smaller than
one, although the fluctuations that are intrinsic to the MC approach can be
larger than the average populations when the latter are small.
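The time-weighted averaging can be illustrated on a minimal birth-death process (constant arrival, first-order loss; the rates are arbitrary illustrative values). The steady-state mean is $k_{\rm in}/k_{\rm out}=0.3$, well below one, yet the weighted time average recovers it even though the instantaneous population is always an integer.

```python
import math, random

random.seed(1)
k_in, k_out = 0.3, 1.0          # arrival and loss rates (illustrative)
A, t, t_end, acc = 0, 0.0, 2.0e5, 0.0

while t < t_end:
    rate = k_in + k_out * A
    dt = -math.log(1.0 - random.random()) / rate   # exponential waiting time
    dt = min(dt, t_end - t)
    acc += A * dt               # weight each integer state by its duration
    t += dt
    if t >= t_end:
        break
    if random.random() * rate < k_in:
        A += 1                  # arrival
    else:
        A -= 1                  # loss

print(acc / t_end)              # close to the steady-state mean 0.3
```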
We first demonstrate how our method works for a large gas-grain network.
We then show that for a small surface network, third order moments can also be
included to improve the accuracy.
\subsection{Test of the HME approach truncated at the second order on a large
gas-grain network} \label{sec:testlargeGG}
We use the ``dipole-enhanced'' version of the RATE06 gas phase reaction
network\footnote{http://www.udfa.net/}
\citep{Woodall07}, coupled with a surface network
of \citet{Keane97} (see Appendix \ref{apdxKeane}).
The surface network contains 44 reactions between
43 species, which is basically a reduced and slightly revised version of the
network of \citet{Tielens82}, containing the formation routes of the most
common grain mantle species, such as H$_2$O, CH$_3$OH, CH$_4$, NH$_3$, etc.
This surface network is not large in comparison with some of the
previous works, such as that used by \citet{Garrod09}, but it already
covers the most important species. The energy barriers for thermal
desorption and diffusion are taken from \citet{Stantcheva04}. Diffusion of H
atoms on the surface through quantum tunneling is included. Desorption by
cosmic rays is taken into account following the approach of
\citet{Hasegawa93a}. The rate coefficients of the gas phase reactions are
calculated according to \citet{Woodall07}, while the rate coefficients of the
surface reactions are calculated following \citet{Hasegawa92}. The initial
condition is the same as in \citet{Stantcheva04}.
We assumed a dust-to-gas mass ratio of 0.01. The grain mass density is taken to
be 2 g cm$^{-3}$, with a site density $5\times10^{13}$ cm$^{-2}$. Two grain
sizes have been used: 0.1 $\mu$m and 0.02 $\mu$m. A cosmic ray ionization rate
of $1.3\times10^{-17}$ s$^{-1}$ is adopted. Four different temperatures (10,
20, 30, 50 K) and three different densities ($2\times10^3$, $2\times10^4$,
$2\times10^5$ cm$^{-3}$)
have been used. In total, the comparison has been made for 24 different sets of
physical parameters. These conditions are commonly seen in translucent clouds
and cold dark clouds.
As in \citet{Garrod09}, we make a global comparison between the results
of MC, HME, and RE. For each set of physical parameters, the
comparisons are made at a time of $10^3$, $10^4$, and $10^5$ years. We
calculate the percentage of species for which the agreement between MC and
HME/RE is within a factor of 2 or 10. Only species with a population (either
from MC or from HME/RE) larger than 10 are included for comparison. This is
because for species with smaller populations, the intrinsic fluctuation in the
MC results can be significant. For several different sets of physical
parameters, we repeated the MC several times to gauge the magnitude of the
fluctuations, although doing so for every case would be impractical.
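The global comparison metric can be sketched as follows. The populations below are made-up numbers; the inclusion criterion (population larger than 10 in either method) is applied as described above.

```python
def agreement(pop_a, pop_b, factor, threshold=10.0):
    """Percentage of species (above threshold in either method) for which
    the two methods agree within the given factor."""
    pairs = [(x, y) for x, y in zip(pop_a, pop_b) if x > threshold or y > threshold]
    good = sum(1 for x, y in pairs if max(x, y) <= factor * min(x, y))
    return 100.0 * good / len(pairs)

mc    = [120.0, 55.0, 3.0, 900.0, 14.0]    # hypothetical MC populations
model = [100.0,  9.0, 2.0, 850.0, 200.0]   # hypothetical HME/RE populations
print(agreement(mc, model, 2.0), agreement(mc, model, 10.0))   # 50.0 75.0
```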
The comparison results are
shown in Table \ref{tab:agreeA} (grain radius = 0.1 $\mu$m) and Table
\ref{tab:agreeB} (grain radius = 0.02 $\mu$m). In all the cases we tested,
the global agreement of the HME approach is at least as good as (and in most
cases better than) that of the RE approach. The typical time evolution of
certain species is shown in Fig. \ref{fig:typicalEvol}. In each panel of the
figure, a species whose name is preceded by a ``g'' is a surface species.
The poorest agreement of HME (Fig. \ref{fig:notSoGood}) is at $t=10^3$ year for
T=20 K, $n_{\rm H}=10^5$ cm$^{-3}$, and grain radius = 0.02 $\mu$m. This is
mostly because at the time of comparison the populations of certain species
were changing very rapidly, so a slight mismatch in time leads to a large
discrepancy. This mismatch is probably caused by the truncation of higher-order
terms in the HME (see section \ref{sec:third}). For gN$_2$ in Fig.
\ref{fig:notSoGood}, its population seems to be systematically smaller in HME
than in MC during the early period, although the HME result matches the one
from MC at a later stage (after $3\times10^3$ years).
The RE is as effective as the HME in several cases, when the temperature is
either relatively low ($\sim$10 K) or high ($\sim50$ K) \citep[see
also][]{Vasyunin09}, and generally works better for a grain radius of 0.1
$\mu$m than of 0.02 $\mu$m. When the temperature is very
low, many surface reactions with barriers cannot happen (at least in the
considered timescales). On the other hand, when the temperature is high, the
surface species evaporate very quickly and the surface reactions are also
unable to occur. In these two extreme cases, the surface processes are
inactive, and the RE works fine.
The RE becomes problematic in the intermediate cases, when the temperature is
high enough for many surface reactions to occur, but not so high that all the
surface species evaporate; in these cases the HME represents a major
improvement over the RE. For a smaller grain radius, the population of each
species in a volume containing one grain will be smaller, thus the stochastic
effect will play a more important role, and the RE will tend to fail.
We note that, in the HME approach, there is no elemental
leakage except that caused by the finite precision of the computer. In
all the models that we have run, all the elements (including electric charge) are
conserved with a relative error smaller than $5{\times}10^{-14}$. The reason
why elemental conservation is always guaranteed is that either
the rate equations or the moment equations for the first order moments conserve
the elements.
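The conservation property can be checked at the level of the reaction list: if every reaction balances each element, then any ODE system built from these stoichiometries (RE or first order moment equations) conserves the elements identically. A check on a hypothetical concrete instance of the example reactions (A = H, B = OH, C = H$_2$O, E = H$_2$):

```python
# Elemental composition of each species
elements = {"H":   {"H": 1},
            "OH":  {"H": 1, "O": 1},
            "H2O": {"H": 2, "O": 1},
            "H2":  {"H": 2}}

# (reactants, products) with stoichiometric coefficients
reactions = [({"H": 1, "OH": 1}, {"H2O": 1}),   # A + B -> C
             ({"H": 2},          {"H2": 1})]    # A + A -> E

def count(side, elem):
    return sum(n * elements[s].get(elem, 0) for s, n in side.items())

balanced = all(count(lhs, e) == count(rhs, e)
               for lhs, rhs in reactions for e in ("H", "O"))
print(balanced)   # True: every reaction conserves H and O
```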
\begin{table*}[htbp]
\caption{Percentage of agreement between the results from MC and those
from HME/RE. The comparison is only made between those
species with populations (from MC or HME/RE)
larger than 10. The two numbers in each table entry give the percentage of
agreement within a factor of 2 or 10, respectively. The grain radius is taken
to be 0.1 $\mu$m.}
\label{tab:agreeA}
\centering
\begin{tabular}{c c c c c c c c c c c c}
\hline\hline
\raisebox{1.5mm}{\phantom{I}} & \multicolumn{3}{c}{n$_{\rm H}$ = $2\times10^3$ cm$^{-3}$} & & \multicolumn{3}{c}{n$_{\rm H}$ = $2\times10^4$ cm$^{-3}$} & & \multicolumn{3}{c}{n$_{\rm H}$ = $2\times10^5$ cm$^{-3}$} \\
\cline{1-4} \cline{6-8} \cline{10-12}
\raisebox{1.5mm}{\phantom{I}}t &
$10^3$ yr & $10^4$ yr & $10^5$ yr & & $10^3$ yr & $10^4$ yr & $10^5$ yr & & $10^3$ yr & $10^4$ yr & $10^5$ yr \\
\hline
\multicolumn{12}{c}{hybrid moment equation} \\ \hline
T = 10 K & 100, 100 & 100, 100 & 100, 100 & & 100, 100 & 100, 100 & 97.6, 99.2 && 99.0, 100 & 100, 100 & 100, 100 \\
T = 20 K & 100, 100 & 100, 100 & 100, 100 & & 97.7, 98.9 & 98.2, 100 & 100, 100 && 100, 100 & 100, 100 & 100, 100 \\
T = 30 K & 100, 100 & 100, 100 & 100, 100 & & 98.8, 100 & 100, 100 & 99.2, 100 && 100, 100 & 100, 100 & 100, 100 \\
T = 50 K & 100, 100 & 100, 100 & 100, 100 & & 100, 100 & 100, 100 & 100, 100 && 100, 100 & 100, 100 & 97.9, 100 \\
\hline
\multicolumn{12}{c}{rate equation} \\ \hline
T = 10 K & 100, 100 & 100, 100 & 94.1, 98.8 & & 100, 100 & 100, 100 & 93.7, 98.4 && 99.0, 100 & 100, 100 & 95.8, 99.3 \\
T = 20 K & 90.2, 93.4 & 85.3, 90.7 & 83.6, 93.2 & & 95.5, 95.5 & 91.2, 95.6 & 92.3, 96.2 && 95.3, 96.2 & 95.0, 95.8 & 93.9, 96.6 \\
T = 30 K & 96.6, 96.6 & 95.5, 95.5 & 95.5, 97.0 & & 94.3, 96.6 & 98.1, 98.1 & 96.9, 97.7 && 94.9, 96.9 & 92.9, 97.4 & 40.0, 75.0 \\
T = 50 K & 100, 100 & 100, 100 & 100, 100 & & 100, 100 & 100, 100 & 100, 100 && 100, 100 & 100, 100 & 97.9, 100 \\
\hline\hline
\end{tabular}
\end{table*}
\begin{table*}[htbp]
\caption{Same as Table \ref{tab:agreeA} except a smaller grain radius of 0.02 $\mu$m is taken.}
\label{tab:agreeB}
\centering
\begin{tabular}{c c c c c c c c c c c c}
\hline\hline
\raisebox{1.5mm}{\phantom{I}} & \multicolumn{3}{c}{n$_{\rm H}$ = $2\times10^3$ cm$^{-3}$} & & \multicolumn{3}{c}{n$_{\rm H}$ = $2\times10^4$ cm$^{-3}$} & & \multicolumn{3}{c}{n$_{\rm H}$ = $2\times10^5$ cm$^{-3}$} \\
\cline{1-4} \cline{6-8} \cline{10-12}
\raisebox{1.5mm}{\phantom{I}}t &
$10^3$ yr & $10^4$ yr & $10^5$ yr & & $10^3$ yr & $10^4$ yr & $10^5$ yr & & $10^3$ yr & $10^4$ yr & $10^5$ yr \\
\hline
\multicolumn{12}{c}{hybrid moment equation} \\ \hline
T = 10 K & 100, 100 & 100, 100 & 100, 100 & & 100, 100 & 100, 100 & 100, 100 && 100, 100 & 100, 100 & 100, 100 \\
T = 20 K & 95.5, 100 & 100, 100 & 97.1, 100 & & 100, 100 & 100, 100 & 94.4, 100 && 73.0, 83.8 & 97.7, 100 & 98.3, 100 \\
T = 30 K & 100, 100 & 100, 100 & 100, 100 & & 100, 100 & 100, 100 & 100, 100 && 100, 100 & 100, 100 & 97.0, 100 \\
T = 50 K & 100, 100 & 100, 100 & 100, 100 & & 100, 100 & 100, 100 & 100, 100 && 100, 100 & 100, 100 & 100, 100 \\
\hline
\multicolumn{12}{c}{rate equation} \\ \hline
T = 10 K & 100, 100 & 100, 100 & 82.1, 94.9 & & 100, 100 & 100, 100 & 87.1, 95.2 && 100, 100 & 100, 100 & 94.8, 98.3 \\
T = 20 K & 87.0, 91.3 & 76.7, 83.3 & 71.4, 82.9 & & 74.3, 88.6 & 68.3, 87.8 & 28.6, 60.0 && 61.8, 82.4 & 42.3, 82.7 & 59.4, 92.2 \\
T = 30 K & 100, 100 & 95.7, 95.7 & 92.3, 92.3 & & 89.3, 92.9 & 90.9, 93.9 & 87.8, 90.2 && 82.1, 89.3 & 45.2, 90.3 & 24.3, 62.2 \\
T = 50 K & 100, 100 & 100, 100 & 100, 100 & & 100, 100 & 100, 100 & 100, 100 && 100, 100 & 100, 100 & 93.3, 93.3 \\
\hline\hline
\end{tabular}
\end{table*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.75\textwidth, keepaspectratio]{16262fg2.ps}
\caption{Typical time evolution of the average populations of certain species from MC
(solid lines), HME to the 2$^{\rm nd}$ order (dotted lines), and RE
(dashed lines). Note that the Monte Carlo simulation has been run twice. The
y-axis is the number of each species in a volume containing exactly one grain.
To translate it into abundance relative to H nuclei, it should be multiplied by
$2.8\times10^{-12}$. Physical parameters used: $T = 20$ K, $n = 2\times10^5$
cm$^{-3}$, grain radius = 0.1 $\mu$m.}
\label{fig:typicalEvol}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.8\textwidth,
keepaspectratio]{16262fg3.ps}
\caption{Cases in which the agreement between the results of MC and
those of HME is not so good, especially at t = $10^3$ year. The
y-axis is the number of each species in a volume containing exactly one grain.
To translate it into abundance relative to H nuclei, it should be multiplied by
$3.5\times10^{-10}$. Physical parameters used: $T = 20$ K, $n = 2\times10^5$
cm$^{-3}$, grain radius = 0.02 $\mu$m.}
\label{fig:notSoGood}
\end{figure*}
When comparing the results from the HME approach with those from MC
simulation, it is important to see how the intrinsic fluctuation in MC behaves.
If we assume the probability distribution of the population of a species, say
A, is Poissonian, then the variance of A is $\sigma^2(A)=\langle A\rangle$. Hence
if $\langle A\rangle$ is small, the relative fluctuation of the MC result can
be quite large. This fluctuation might be smoothed out by means of a weighted
average in time, but this procedure does not always work. This is why we choose
to only compare species with a population higher than 10, corresponding
to an abundance relative to H nuclei of $2.8\times10^{-11}$ (for grain radius =
0.1 $\mu$m) or $3.5\times10^{-9}$ (for grain radius = 0.02 $\mu$m). For a real
reaction network, it is usually difficult to predict the intrinsic fluctuation
in a MC simulation, unless it is repeated many times. These
fluctuations will not have any observational effect, because along a
line of sight there is always a large number of any given species (provided
it is detectable) and the fluctuations are averaged out.
We note that the gas phase processes are not treated identically in our HME
approach and MC simulation. In the MC approach, the gas phase processes are
always treated as being stochastic \citep[see, e.g.,][]{Charnley98a,
Vasyunin09}, in the same way as the surface processes. However, in our HME
approach, the gas phase species are treated in a deterministic way, i.e., REs
are always applied to them. This means that even if two reacting gas phase
species A and B both have average populations much smaller than one, we still
assume that $\langle AB\rangle = \langle A\rangle\langle B\rangle$. This is
physically quite reasonable, because the presence of large amounts of reacting
partners in the gas phase (if not limited to a volume containing only one dust
grain; see, e.g., \citet{Charnley98a}) ensures that the RE is applicable.
However, although it might sound a bit pedantic, mathematically this is not
equivalent to the MC approach, and some discrepancies caused by this are
expected. For a large network, it is impractical to treat the gas phase
processes in the same way as the surface processes in the HME approach, because
in that case the number of independent variables in the ODE system will be
quite large (no less than the number of two-body reactions), and the
performance of the ODE solver will be degraded.
\subsection{Test of the HME approach truncated at the third order on a small
surface network} \label{sec:third}
To test the improvement in accuracy when the cutoff is made at a higher order,
we compare the results of the HME approach with a cutoff at the second order to
those obtained from the same approach with a cutoff at the third order.
We use a small surface reaction network
of \citet{Stantcheva04}, containing 17 surface reactions between 21 species,
producing H$_2$O, CH$_3$OH, CH$_4$, NH$_3$, and CO$_2$. No gas phase reactions
are included, except adsorption and desorption processes. The initial gas phase
abundances of the relevant species are obtained from the steady state solution
of the RATE06 network under the corresponding physical conditions.
As before, we run the HME, RE, and the MC code for different sets of physical
parameters. Although switching from the RE to the second order HME yields a
major improvement in accuracy, including the third order moments usually
brings only a slight further improvement over the second order case. In Fig.
\ref{fig:thirdOrder}, we show an example (T = 10 K, $n_{\rm
H}=2\times10^5$ cm$^{-3}$, grain radius=0.02 $\mu$m), in which the distinctions
between the results from the second and third order HME are relatively large.
For several species, we note that the third order HME is still unable to match
the MC results perfectly, and for gHCO (Fig. \ref{fig:thirdOrder}) the third
order HME even produces an artificial spike in the time evolution curve. The
results from the third order HME are otherwise of greater accuracy than the
second order one, the abundances of gH$_2$CO and gCH$_3$OH in particular being
in almost perfect agreement with those from the MC approach. In the case of
gHCO, the timescale mismatch between HME and MC is alleviated by including the
third order moments.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.75\textwidth, keepaspectratio]{16262fg4.ps}
\caption{Comparison of the results from MC, HME to the 2$^{\rm nd}$ order, HME
to the 3$^{\rm rd}$ order, and RE. The y-axis is the number of each species in
a volume containing exactly one grain. To translate it into abundance relative
to H nuclei, it should be multiplied by $3.5\times10^{-10}$. Physical
parameters used when running these models include $T = 10$ K, $n_{\rm H} =
2\times10^5$ cm$^{-3}$, grain radius = 0.02 $\mu$m.} \label{fig:thirdOrder}
\end{figure*}
It might be useful to see the difference between the second order HME and the
third order HME in a computational sense. For the current reaction network
with physical parameters described above, the number of variables (same as the
number of equations, which changes with time) is 145 initially in the second
order case, and this number becomes 705 for the third order case. To reach a
time span of $\sim$$10^6$ year, the second order HME takes about 3 seconds,
while the third order one takes about 220 seconds on a standard desktop
computer (a dual-core 3.00 GHz CPU with 4 GB of memory). The number of
variables depends on the network structure, and it is not straightforward to derive
a formula to calculate it. Qualitatively, this number (as a function of the
number of reactions or the number of species) seems to increase with the cutoff
order more slowly than exponentially. However, such a ``mild'' increase
affects the behavior of the ODE solver quite significantly. This is partly
because the solver contains operations (such as matrix inversion) that become
slower as the number of variables becomes larger. An increase in the number of
variables might also increase the stiffness of the problem, let alone the
memory limitation of the computer. For the larger reaction network described in
the previous section, the third order HME would involve about 5000 variables
and has not been tested successfully.
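A crude upper bound makes this scaling concrete: the number of distinct moments of order at most $k$ over $N$ species is at most the number of non-empty multisets of size up to $k$, i.e. $\binom{N+k}{k}-1$. The actual counts quoted above (145 and 705) lie far below this bound, because only the moments that actually appear in the network's MEs and consist purely of stochastic species are kept.

```python
from math import comb

def max_moments(n_species, cutoff):
    # Non-empty multisets of species with size <= cutoff: C(N + k, k) - 1
    return comb(n_species + cutoff, cutoff) - 1

for k in (2, 3, 4):
    print(k, max_moments(43, k))   # grows polynomially, not exponentially, in k
```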
\section{Discussion} \label{sec:Discussion}
In our HME approach, we have used a general and automatic way to derive the
MEs. For large gas-grain networks, the MEs are cut off at the second order. For
small networks, a cutoff at the third order is possible and higher accuracy can
be obtained. We incorporate a switching scheme between the ME and RE when the
average population of a species reaches 1.
The results from HME are more accurate than those from the RE in the cases we
have tested, when benchmarked against the exact MC results. The abundances of
almost all the abundant species (${\gtrsim}2.8{\times}10^{-11}$ for a grain
radius of 0.1 $\mu$m and ${\gtrsim}3.5{\times}10^{-9}$ for a grain radius of
0.02 $\mu$m) from HME are accurate to within a factor of two, especially at
later stages of the chemical evolution, while in some cases nearly 40\% of the
results from RE are incorrect by a factor of at least ten.
In terms of computation time, our approach usually takes several tens of
minutes to reach an evolution time span of $10^6$ years, so it is slower than
the RE, but faster than the MC approach (which usually takes from several hours
to days). Our approach may also be slower than the MRE approach of
\citet{Garrod08}, because more variables (namely the moments with orders higher
than one) are present in our method, and the ODE system in HME is usually
stiffer. For example, at moderate temperatures many surface reactions can be
much faster than any gas phase reactions, and yield a very large coefficient in
some of the MEs. However, this is not the case in the stochastic regime of the
MRE approach, because when a competition scheme is used in MRE, such a large
coefficient does not appear. In this sense, we also advocate the MRE approach
of \citet{Garrod08}.
Mathematically, our approach is partially equivalent to the master equation
approach of \citet{Stantcheva04} in two respects.
1) They separated stochastic and deterministic species, which is similar to our
adopting RE for the abundant species.
2) They set a cutoff for the possible states of the stochastic species. This is
in essence equivalent to setting to zero those moments in which these species
appear with an order higher than a certain number.
Our approach can also be viewed as a combination of some of the ideas of
\citet{Garrod08} and \citet{Barzel07a}. The basic modification and competition
scheme in \citet{Garrod08} can be derived from the MEs, with a semi-steady
state assumption for the second order moments. \citet{Barzel07a} used the MEs,
but they did not include a switching scheme, and their way of deriving the MEs is
different from the one in the present paper.
There are still many possibilities for improvement. Although in principle
moments with any order can be included, the number of equations grows quite
quickly with the cutoff order, which makes the system of equations intractable
with a normal desktop computer. It is unclear whether it is possible to include
the moments selectively. It is also unclear whether there are better and more
mathematically well-founded strategies than switching at an average population
of 1. The present approach is usually numerically stable. However, this is not
always guaranteed, especially if higher order moments are to be included. The
behavior of the numerical solution also depends on other factors, such as the
ODE solver being used and the tunable parameters for it, while the MC approach
does not have such issues. In this sense, the MC approach is the most robust.
Even in the accurate MC approach described above, the detailed morphology of
the grain surface and the detailed reaction mechanism is not taken into
account. One step in this direction would be to take into account the layered
structure of the grain mantle. This was done by \citet{Charnley01a} (see also
\citet{Charnley09}) by means of stochastic simulation. It could also be
included in the HME approach, as long as the underlying physical mechanism can
be described by a master equation.
However, a microscopic MC approach has also been used to study the grain
chemistry \citep[see, e.g.,][]{Chang05, Cuppen07}. In this approach, the
morphology of the grain mantle and the interaction between species are modeled
in detail. As far as we know, this approach is only practical when the network
is small. It remains unclear whether it is possible to incorporate these
details into the current HME approach.
In some cases, errors caused by uncertainties in the reaction mechanism and
rate parameters might be larger than those introduced by the modeling method
\citep{Vasyunin08}. Hence, further experimental study and a more sophisticated
way of interpreting those results would be indispensable.
\begin{acknowledgements}
We thank Rob Garrod, Julia Roberts, Tom Millar, Guillaume Pineau des
For\^{e}ts, and Malcolm Walmsley for answering questions of the first
author during the early stage of this work. We also thank A.G.G.M. Tielens,
Anton Vasyunin, and Eric Herbst for discussions related to the present work,
and Antoine Gusdorf for useful comments. This work is financially supported by
the Deutsche Forschungsgemeinschaft Emmy Noether program under grant
PA1692/1-1. The first author acknowledges travel funding from the IMPRS for
astronomy and astrophysics at the Universities of Bonn and Cologne.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
{Suppose that $P_n(x)$, for $n\in\mathbb{N}$,
is a sequence of \textit{classical} orthogonal polynomials (such as the Hermite, Laguerre and Jacobi polynomials); then $P_n(x)$ is a solution of a second-order ordinary differential equation of the form}
\begin{equation} \label{eq:Pn}
\sigma(x)\deriv[2]{P_n}{x}+\tau(x)\deriv{P_n}{x}=\lambda_nP_n
\end{equation}
where $\sigma(x)$ is a monic polynomial with deg$(\sigma)\leq2$, $\tau(x)$ is a polynomial with deg$(\tau)=1$, and $\lambda_n$ is a real number which depends on the degree of the polynomial solution, see Bochner \cite{refBochner}. Equivalently, the weights of classical orthogonal polynomials satisfy a first-order ordinary differential equation, the \textit{Pearson equation}
\begin{equation} \label{eq:Pearson}
\deriv{}{x}[\sigma(x)\omega(x)]=\tau(x)\omega(x)
\end{equation}
with $\sigma(x)$ and $\tau(x)$ the same polynomials as in \eqref{eq:Pn}, see, for example \cite{refAlNod,refBochner,refChihara}.
\comment{The weights of classical orthogonal polynomials satisfy a first-order ordinary differential equation, the \textit{Pearson equation}
\begin{equation}\label{eq:Pearson}
\deriv{}{x}[\sigma(x)\omega(x)]=\tau(x)\omega(x)
\end{equation}
where $\sigma(x)$ is a monic polynomials of degree at most $2$ and $\tau(x)$ is a polynomial with degree $1$.
For \textit{semi-classical} orthogonal polynomials, the weight function $\omega(x)$ satisfies the Pearson equation \eqref{eq:Pearson} with either deg$(\sigma)>2$ or deg$(\tau)\neq 1$ (cf.~\cite{refHvR,refMaroni}).
For example, the generalised Airy weight
\begin{equation}\label{genAiry}
\omega(x;t,\lambda)=x^{\lambda}\exp\left(-\tfrac13x^3+tx\right),
\end{equation} with parameters $\lambda>-1$ and $t\in \mathbb{R}$,
satisfies the Pearson equation \eqref{eq:Pearson} with
\[\sigma(x)=x,\qquad\tau(x)=-x^3+tx+\lambda+1\]
and the generalised sextic Freud weight
\begin{equation}\label{freud6g}
\omega(x;t,\lambda)=|x|^{2\lambda+1}\exp\left(-x^6+tx^2\right),\qquad x\in \mathbb{R}\end{equation}
with $\lambda>-1$ and $t\in \mathbb{R}$ parameters,
satisfies \eqref{eq:Pearson} with \[\sigma(x)=x,\qquad\tau(x)=2\lambda+2+2tx^2-6x^6.\]
Orthogonal polynomials associated with the exponential cubic weight
\begin{equation} \omega(x)=\exp(-x^3),\qquad x\in\mathcal{C}\label{cubic}\end{equation}
where $\mathcal{C}$ is a contour in the complex plane, were investigated in \cite{refDeano,refMFS,refWVAFZ},
while the semiclassical weight
\begin{equation}\omega(x;t)=\exp\left(-\tfrac13x^3+tx\right),\qquad x\in\mathcal{C}\label{tcubic}\end{equation} with $t\in\mathbb{R}$ and $\mathcal{C}$ a contour in the complex plane, was discussed in \cite{refBD13,refBD16,refBDY,refCAiry,refCLVA,refDeano,refMagnus95}. These studies of the weights \eqref{cubic} and \eqref{tcubic} are for contours in the complex plane. In contrast, in this paper, we study orthogonal polynomials associated with an exponential cubic weight on the positive real axis.
We are concerned with semi-classical orthogonal polynomials associated with the generalised Airy weight \eqref{genAiry} as well as generalised sextic Freud polynomials associated with the weight function \eqref{freud6g} that arise from a symmetrisation of generalised Airy polynomials. In \S\ref{sec:genairy} we consider some properties of generalised Airy polynomials, their recurrence coefficients and their zeros. In \S\ref{sec:rc} we consider the recurrence coefficients of polynomials orthogonal with respect to \eqref{genAiry} and correct some of the results in Wang \textit{et al.}\ \cite{refWZC20b}. Properties of generalised sextic Freud polynomials and their zeros are considered in \S\ref{sec:gen6freud}. These polynomials were also investigated in \cite{refCJ20,refWZC20a} and we correct one of the results in \cite{refWZC20a} in \S \ref{gen6feq}.
\section{Orthogonal polynomials}
Let $\mu$ be a positive Borel measure with support $S\subseteq\mathbb{R}$ for which moments of all orders exist, that is,
\begin{equation}\label{eq:moment}\mu_n=\int_{S} x^n\, {\rm d}\mu(x), \quad n=0,1,2\dots.\end{equation}
When the sequence $\{\mu_n\}_{n\geq 0}$ is positive definite, that is, for all $n\in\{1,2,3,\dots\}$ the Hankel determinant
\begin{equation}\label{eq:dets}
\Delta_n=\left|\begin{array}{cccc}
\mu_0&\mu_1& \ldots &\mu_{n-1}\\
\mu_1&\mu_2&\ldots&\mu_{n}\\
\vdots & \vdots& \ddots &\vdots\\
\mu_{n-1}&\mu_{n}&\ldots &\mu_{2n-2}
\end{array}
\right|,\qquad n\geq1
\end{equation}
is positive, the family of monic polynomials
\begin{equation*}
P_n(x)=\frac{1}{\Delta_n}\left|\begin{array}{cccc}
\mu_0&\mu_1&\ldots&\mu_n\\
\mu_1&\mu_2&\ldots&\mu_{n+1}\\
\vdots & \vdots& \ddots &\vdots\\
\mu_{n-1}&\mu_{n}&\ldots &\mu_{2n-1}\\
1&x&\ldots &x^n
\end{array}
\right|
\end{equation*}
is orthogonal with respect to the measure $\mu$ on its support, i.e. \begin{equation} \nonumber\int_S P_m(x)P_n(x)\,{\rm d} \mu(x) = h_n\delta_{m,n},\qquad h_n>0\label{eq:norm}\end{equation}
where $\delta_{m,n}$ denotes the Kronecker delta. The sequence $\{P_n(x)\}_{n=0}^{\infty}$ satisfies the three-term recurrence relation
\begin{equation}\label{eq:3trr}xP_n(x)=P_{n+1}(x)+\alpha_nP_n(x)+\beta_{n}P_{n-1}(x)\end{equation}
with initial conditions $P_{-1}(x)=0$ and $P_0(x)=1$ and where
\begin{equation}\label{eq:anbn} \alpha_n = \frac{1}{h_n}\int_S xP_n^2(x) {\rm d}\mu(x),\qquad
\beta_n =\frac{\Delta_{n-1}\Delta_{n+1}}{\Delta_n^2} >0.\end{equation} Additional information about orthogonal polynomials can be found in, for example, \cite{refChihara,refIsmail,refSzego}.
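The determinant formula for $\beta_n$ in \eqref{eq:anbn} lends itself to a quick numerical sanity check. The following Python sketch (an illustration, not part of the paper) uses the classical Hermite weight $\omega(x)={\rm e}^{-x^2}$ on $\mathbb{R}$, whose moments are $\mu_{2k}=\Gamma(k+\tfrac12)$ (odd moments vanish) and whose monic recurrence coefficients are known to be $\beta_n=n/2$:

```python
import math
import numpy as np

def moment(n):
    # Moments of the Hermite weight exp(-x^2) on R: mu_n = Gamma((n+1)/2) for even n, 0 for odd n.
    return math.gamma((n + 1) / 2) if n % 2 == 0 else 0.0

def hankel_det(n):
    # Delta_n = det[mu_{i+j}]_{i,j=0}^{n-1}, with Delta_0 = 1 by convention.
    if n == 0:
        return 1.0
    return np.linalg.det(np.array([[moment(i + j) for j in range(n)] for i in range(n)]))

def beta(n):
    # beta_n = Delta_{n-1} Delta_{n+1} / Delta_n^2
    return hankel_det(n - 1) * hankel_det(n + 1) / hankel_det(n) ** 2

print([beta(n) for n in range(1, 6)])  # expect approximately [0.5, 1.0, 1.5, 2.0, 2.5]
```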
Now suppose that the measure is absolutely continuous and has the form
\[{\rm d} \mu(x)=\omega(x;t,\lambda)\,{\rm d} x\]
where \begin{equation} \label{scweight}\omega(x;t,\lambda)=x^{\lambda}\omega_0(x)\exp(xt),\qquad x\in\mathbb{R}^+,\qquad\lambda>-1\end{equation}
with finite moments for all $t\in\mathbb{R}$, which is the case for the generalised Airy weight \eqref{genAiry}.
If the weight has the form \eqref{scweight}, which depends on the parameters $t$ and $\lambda$, then the orthogonal polynomials $P_n$,
the recurrence coefficients $\alpha_n$, $\beta_n$ given by \eqref{eq:anbn},
the determinant $\Delta_n$ given by \eqref{eq:dets} and
the moments $\mu_k$ given by \eqref{eq:moment} are now functions of $t$ and $\lambda$.
Specifically, in this case
\begin{equation}\label{mu0}
\mu_k(t;\lambda)
= \int_{0}^{\infty} x^{k+\lambda} \omega_0(x)\exp(xt)\,{\rm d} x
=\deriv[k]{}{t}\left( \int_{0}^{\infty} x^{\lambda}\omega_0(x)\exp(xt)\,{\rm d} x\right)
=\deriv[k]{\mu_0(t;\lambda)}{t}
.\end{equation} Further, the recurrence relation has the form
\begin{equation} \label{eq:scrr}xP_n(x;t,\lambda)=P_{n+1}(x;t,\lambda)+\alpha_n(t;\lambda)P_n(x;t,\lambda)+\beta_n(t;\lambda)P_{n-1}(x;t,\lambda)\end{equation}
where we have explicitly indicated that the coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ depend on $t$ and $\lambda$.
\begin{theorem}{\label{thm:2.1}If the weight has the form \eqref{scweight}, then the determinant $\Delta_n(t;\lambda)$ given by \eqref{eq:dets} can be written as
\begin{align}\label{scHank}
\Delta_n(t;\lambda)&=\mathcal{W}\left(\mu_0,\deriv{\mu_0}t,\ldots,\deriv[n-1]{\mu_0}t\right)
\end{align}
where $\mathcal{W}(\varphi_1,\varphi_2,\ldots,\varphi_n)$ is the Wronskian given by
\[\mathcal{W}(\varphi_1,\varphi_2,\ldots,\varphi_n)=\left|\begin{matrix}
\varphi_1 & \varphi_2 & \ldots & \varphi_n\\
\varphi_1^{(1)} & \varphi_2^{(1)} & \ldots & \varphi_n^{(1)}\\
\vdots & \vdots & \ddots & \vdots \\
\varphi_1^{(n-1)} & \varphi_2^{(n-1)} & \ldots & \varphi_n^{(n-1)}
\end{matrix}\right|,\qquad \varphi_j^{(k)}=\deriv[k]{\varphi_j}{t}.\]
}\end{theorem}
\begin{proof}{See, for example, \cite[Theorem 2.1]{refCJ14}.}\end{proof}
The Hankel determinant $\Delta_n(t;\lambda)$ satisfies the Toda equation, as shown in the following theorem.
\begin{theorem}{\label{thm:toda}The Hankel determinant $\Delta_n(t;\lambda)$ given by \eqref{scHank} satisfies the Toda equation
\[
\deriv[2]{}{t}\ln\Delta_n(t;\lambda)=\frac{\Delta_{n-1}(t;\lambda)\Delta_{n+1}(t;\lambda)}{\Delta_{n}^2(t;\lambda)}.
\]
}\end{theorem}
\begin{proof}{See, for example, Nakamura and Zhedanov \cite[Proposition 1]{refNZ}; see also \cite{refCI97}.}\end{proof}
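As an illustrative check of the Toda equation (not part of the analysis in this paper), one can verify it symbolically for the simple choice $\mu_0(t)={\rm e}^{t^2/4}$, which is, up to a constant factor, the first moment of the shifted Gaussian weight ${\rm e}^{-x^2+tx}$ on $\mathbb{R}$; for this weight $\beta_n=n/2$, so both sides of the Toda equation equal $1$ when $n=2$. The Python/SymPy sketch below builds $\Delta_n$ in the Wronskian form \eqref{scHank}:

```python
import sympy as sp

t = sp.symbols('t')
mu0 = sp.exp(t**2 / 4)  # first moment of exp(-x^2 + t*x) on R, up to a constant factor

def Delta(n):
    # Wronskian form of the Hankel determinant: Delta_n = W(mu0, mu0', ..., mu0^(n-1)).
    if n == 0:
        return sp.Integer(1)
    M = sp.Matrix(n, n, lambda i, j: sp.diff(mu0, t, i + j))
    return sp.factor(M.det())

n = 2
lhs = sp.simplify(sp.diff(sp.log(Delta(n)), t, 2))            # (d^2/dt^2) ln Delta_n
rhs = sp.simplify(Delta(n - 1) * Delta(n + 1) / Delta(n)**2)  # Delta_{n-1} Delta_{n+1} / Delta_n^2
print(lhs, rhs)  # both sides equal 1 for this weight
```

The constant factor $\sqrt{\pi}$ omitted from $\mu_0$ is immaterial: it cancels in the ratio on the right and contributes only an additive constant to $\ln\Delta_n$ on the left.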
Using Theorems \ref{thm:2.1} and \ref{thm:toda}, we can express the recurrence coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ in terms of derivatives of the Hankel determinant $\Delta_n(t;\lambda)$ and so obtain explicit expressions for these coefficients.
\begin{theorem}{\label{thm:anbn}The coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ in the recurrence relation \eqref{eq:scrr}
associated with monic polynomials orthogonal with respect to a weight of the form \eqref{scweight} are given by
\[ \alpha_n(t;\lambda) =\deriv{}{t}\ln \frac{\Delta_{n+1}(t;\lambda)}{\Delta_{n}(t;\lambda)},\qquad
\beta_n(t;\lambda) =\deriv[2]{}{t}\ln\Delta_n(t;\lambda) \]
where $\Delta_n(t;\lambda)$ is the Hankel determinant given by \eqref{scHank}.
}\end{theorem}
\begin{proof}{See Chen and Ismail \cite{refCI97}.}\end{proof}
\comment{Equivalently the recurrence coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ can be expressed in terms of $h_n(t;\lambda)$ given by \eqref{def:norm}.
\begin{lemma}{\label{thm:anbn2}The coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ in the recurrence relation \eqref{eq:scrr}
associated with monic polynomials orthogonal with respect to a weight of the form \eqref{scweight} are given by
\begin{equation} \label{def:anbn2} \alpha_n(t;\lambda) =\deriv{}{t}\ln h_{n}(t;\lambda),\qquad
\beta_n(t;\lambda) =\frac{h_{n+1}(t;\lambda)}{h_{n}(t;\lambda)}\end{equation}
where $h_n(t;\lambda)$ is given by \eqref{def:norm}.
}\end{lemma}
\begin{proof}{See Chen and Ismail \cite{refCI97}.}\end{proof}}%
Additionally the coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ in the recurrence relation \eqref{eq:scrr} satisfy a Toda system.
\begin{theorem}{\label{thm:todasys}
The coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ in the recurrence relation \eqref{eq:scrr} associated with a weight of the form \eqref{scweight} satisfy the Toda system
\[ \deriv{\alpha_n}t=\beta_{n+1}-\beta_n,\qquad\deriv{\beta_n}t=\beta_n(\alpha_n-\alpha_{n-1}).\]
}\end{theorem}
\begin{proof}{See Chen and Ismail \cite{refCI97}, Ismail \cite[\S2.8, p.\ 41]{refIsmail}; see also \cite{refFVAZ} for further details.}\end{proof}
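The Toda system can also be observed numerically for the generalised Airy weight \eqref{genAiry} itself. The sketch below (illustrative only; the choices $\lambda=\tfrac12$, $t=0.4$, $n=1$ and the step size are arbitrary) computes $\alpha_n(t)$ and $\beta_n(t)$ by the Stieltjes procedure with numerical quadrature and compares central-difference $t$-derivatives with the right-hand sides of the Toda system:

```python
import numpy as np
from scipy.integrate import quad

LAM = 0.5  # lambda > -1, illustrative choice

def recurrence(t, nmax):
    # alpha_0..alpha_nmax and beta_0..beta_nmax (beta_0 := 0) for the weight
    # x^lambda * exp(-x^3/3 + t*x) on (0, inf), via the Stieltjes procedure.
    w = lambda x: x**LAM * np.exp(-x**3 / 3 + t * x)
    ip = lambda f: quad(lambda x: f(x) * w(x), 0, np.inf)[0]
    p_prev, p = (lambda x: 0.0 * x), (lambda x: 1.0 + 0.0 * x)
    alphas, betas, h_prev = [], [], None
    for n in range(nmax + 1):
        h = ip(lambda x: p(x) ** 2)
        alphas.append(ip(lambda x: x * p(x) ** 2) / h)
        betas.append(0.0 if h_prev is None else h / h_prev)
        a, b = alphas[-1], betas[-1]
        # Monic three-term recurrence: P_{n+1}(x) = (x - alpha_n) P_n(x) - beta_n P_{n-1}(x).
        p_prev, p = p, (lambda x, p=p, q=p_prev, a=a, b=b: (x - a) * p(x) - b * q(x))
        h_prev = h
    return alphas, betas

t, h = 0.4, 1e-3
a0, b0 = recurrence(t, 3)
ap, bp = recurrence(t + h, 3)
am, bm = recurrence(t - h, 3)
n = 1
dalpha = (ap[n] - am[n]) / (2 * h)  # d(alpha_n)/dt by central differences
dbeta = (bp[n] - bm[n]) / (2 * h)   # d(beta_n)/dt by central differences
print(dalpha - (b0[n + 1] - b0[n]))         # Toda: ~ 0
print(dbeta - b0[n] * (a0[n] - a0[n - 1]))  # Toda: ~ 0
```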
\section{Generalised Airy polynomials} \label{sec:genairy}
In this section we are concerned with the generalised Airy weight \eqref{genAiry}
and the polynomials orthogonal with respect to this weight.
\begin{lemma}\label{lem:Freud6weight}
For the generalised Airy weight \eqref{genAiry},
the first moment is given by
\begin{align}
\mu_0(t;\lambda)& =\int_{0}^{\infty} x^\lambda\exp\left(-\tfrac13x^3+tx\right)\,{\rm d} x \nonumber\\
& = 3^{(\lambda-2)/3}\,\Gamma(\tfrac13\lambda+\tfrac13) \;\HyperpFq12(\tfrac13\lambda+\tfrac13;\tfrac13,\tfrac23;\tfrac19t^3)
+ 3^{(\lambda-1)/3}\,t\,\Gamma(\tfrac13\lambda+\tfrac23) \;\HyperpFq12(\tfrac13\lambda+\tfrac23;\tfrac23,\tfrac43;\tfrac19t^3)\nonumber\\ &\qquad\qquad
+ \tfrac12\,3^{\lambda/3}\,t^2\,\Gamma(\tfrac13\lambda+1) \;\HyperpFq12(\tfrac13\lambda+1;\tfrac43,\tfrac53;\tfrac19t^3)
\label{eq:mu0}\end{align}
where
$\HyperpFq12(a_1;b_1,b_2;z)$ is the generalised hypergeometric function. Further, $\varphi(t)=\mu_0(t;\lambda)$ satisfies the third order equation
\[\deriv[3]{\varphi}{t}-t\deriv{\varphi}{t}-(\lambda+1)\varphi=0.\]
\end{lemma}
\begin{proof}See \cite[Lemma 3.1]{refCJ20}.
\end{proof}
\begin{remark}{\label{rmks32}\rm
\begin{enumerate}\item[]
\item If $\lambda=-\tfrac12$ then the first moment is given by
\[ \mu_0(t;-\tfrac12)=\int_{0}^{\infty} x^{-1/2}\exp\left(-\tfrac13x^3+tx\right)\,{\rm d} x = \pi^{3/2}2^{-1/3}[\mathop{\rm Ai}\nolimits^2(\tau)+\mathop{\rm Bi}\nolimits^2(\tau)],\qquad \tau = 2^{-2/3}t\]
where $\mathop{\rm Ai}\nolimits(\tau)$ and $\mathop{\rm Bi}\nolimits(\tau)$ are the Airy functions. This result is equation 9.11.4 in the DLMF \cite{refNIST}, which is due to Muldoon \cite[p32]{refMul77}.
\item For the generalised Airy weight \eqref{genAiry} the $k$-th moment is given by \[\mu_k(t;\lambda) =\int_{0}^{\infty} x^{\lambda+k}\exp\left(-\tfrac13x^3+tx\right)\,{\rm d} x=\mu_0(t;\lambda+k)\] which, using \eqref{mu0}, implies that \[\deriv[k]{\mu_0}{t}=\mu_0(t;\lambda+k).\]
\end{enumerate}
}\end{remark}
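Combining the third-order equation of Lemma \ref{lem:Freud6weight} with the relation $\deriv[k]{\mu_0}{t}=\mu_0(t;\lambda+k)$ in the remark above gives the moment identity $\mu_0(t;\lambda+3)=t\,\mu_0(t;\lambda+1)+(\lambda+1)\,\mu_0(t;\lambda)$, which also follows from integrating $\deriv{}{x}\big[x^{\lambda+1}\exp(-\tfrac13x^3+tx)\big]$ over $(0,\infty)$. This can be confirmed directly by quadrature, as in the following illustrative Python sketch (the values of $t$ and $\lambda$ are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

def mu0(t, lam):
    # mu_0(t; lambda) for the generalised Airy weight, by direct quadrature on (0, inf).
    return quad(lambda x: x**lam * np.exp(-x**3 / 3 + t * x), 0, np.inf)[0]

t, lam = 0.7, 0.3
# phi''' - t*phi' - (lam + 1)*phi = 0 with phi^(k)(t) = mu_0(t; lam + k):
resid = mu0(t, lam + 3) - t * mu0(t, lam + 1) - (lam + 1) * mu0(t, lam)
print(resid)  # ~ 0 up to quadrature error
```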
\comment{\item The generalised Airy weight \eqref{genAiry} is an example of a semi-classical weight for which the first moment $\mu_0(t;\lambda)$ satisfies a \textit{third order equation}. In our earlier studies of semi-classical weights \cite{refCJ14,refCJ18,refCJK}, the first moment has satisfied a second order equation. For example, for the semi-classical Laguerre weight
\begin{equation} \omega(x;t)=x^\lambda\exp(-x^2+tx),\qquad x\in\mathbb{R}^+\label{eq:scLag1}\end{equation} the first moment is expressed in terms of parabolic cylinder functions $D_{\nu}(z)$, see \cite{refCJ14}.
This is a classical special function that satisfies a second order equation.
\item Equation \eqref{eq;phi} arises in association with threefold symmetric Hahn-classical multiple orthogonal polynomials \cite{refLVA} and in connection with Yablonskii--Vorob'ev polynomials associated with rational solutions of the second Painlev\'e\ equation \cite{refCM03}.
\end{enumerate}\end{remarks}}
From Theorem \ref{thm:anbn}, we have the following representations of $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$.
\begin{theorem}{The coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ in the recurrence relation \eqref{eq:scrr}
associated with monic polynomials orthogonal with respect to the generalised Airy weight \eqref{genAiry} are given by
\[ \alpha_n(t;\lambda) =\deriv{}{t}\ln \frac{\Delta_{n+1}(t;\lambda)}{\Delta_{n}(t;\lambda)},\qquad
\beta_n(t;\lambda) =\deriv[2]{}{t}\ln\Delta_n(t;\lambda) \]
where $\Delta_n(t;\lambda)$ is the Hankel determinant given by
\[ \Delta_n(t;\lambda)=\mathcal{W}\left(\mu_0,\deriv{\mu_0}t,\ldots,\deriv[n-1]{\mu_0}t\right)\]
with $\mu_0(t;\lambda)$ given by \eqref{eq:mu0}.
}\end{theorem}
\subsection{Differential and discrete equations satisfied by generalised Airy polynomials}
\label{sec:genairyprop}
We derive a differential-difference equation, a differential equation and a mixed recurrence relation satisfied by generalised Airy
polynomials.
The coefficients ${A}_n(x)$ and ${B}_n(x)$ in the relation
\begin{equation} \label{ddee}\deriv{P_n}{x}(x;t,\lambda)=\beta_n(t;\lambda){A}_n(x)P_{n-1}(x;t,\lambda)-{B}_n(x)P_n(x;t,\lambda)\end{equation}
satisfied by semi-classical orthogonal polynomials can be derived using a technique introduced by Shohat \cite{refShohat39} for weights $\omega(x)$ such that $\displaystyle{\omega'(x)}/{\omega(x)}$ is a rational function. The method of ladder operators was introduced by Chen and Ismail in \cite{refCI97}, see also \cite[Theorem 3.2.1]{refIsmail} and adapted in \cite{refChenFeigin} for the situation where the weight function vanishes at one point. Explicit expressions for the coefficients in the differential-difference equation \eqref{ddee} when the weight function is positive on the real line except for one point are provided in \cite{refCJK}. The coefficients in the differential-difference relation for generalised Airy polynomials associated with the weight \eqref{genAiry} are given in the next result.
\begin{theorem}\label{thm:dde}{For the generalised Airy weight \eqref{genAiry}
the monic orthogonal polynomials $P_{n}(x;t,\lambda)$ with respect to this weight satisfy the differential-difference equation \eqref{ddee} with
\begin{subequations}\label{AnBna}\begin{align}
A_n(x)&= x + \alpha_n + \frac{R_n}{x}\\
B_n(x)&=\beta_n + \frac{r_n}{x}
\end{align}\end{subequations}
where $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ are the coefficients in the three-term recurrence relation \eqref{eq:scrr} and
\begin{subequations}\label{def:Rnrn}\begin{align}R_n&=\alpha_n^2+\beta_n+\beta_{n+1}-t\\r_n&=\tfrac12 \left(\lambda -\alpha_{n+1} \beta_{n+1}+\alpha_{n-1} \beta_n-\alpha_n^3+t \alpha_n+1\right)-\alpha_n \beta_{n+1}\label{def:Rnrnb}.\end{align}\end{subequations}}\end{theorem}
\begin{proof}Since $P_n(x)=P_n(x;t,\lambda)$ is a polynomial of degree $n$, we can write
\begin{align}\label{quasi}
x\deriv{P_n}{x}(x) &= \sum_{k=0}^{n}c_{n,k} P_k(x).
\end{align}
Multiplying \eqref{quasi} by $P_k(x)\,\omega(x)$, integrating both sides of the equation with respect to $x$ and applying the orthogonality relation yields, for $k=0,1,2,\dots,n$,
\begin{align}\label{Ccoeff}
h_kc_{n,k} &= \int_{0}^{\infty} x\deriv{P_n}{x}(x) P_k(x)\,\omega(x) \,{\rm d}{x}, \qquad h_k\neq0.
\end{align}
Integrating the right hand side of \eqref{Ccoeff} by parts, we obtain, for $k=0,1,2,\dots n$,
\begin{align}\nonumber
h_k c_{n,k} &=\Big[ xP_k(x) P_n(x)\,\omega(x;t,\lambda)\Big]_{0}^{\infty}
- \int_{0}^\infty \deriv{}{x}\left[ x P_k (x)\,\omega(x;t,\lambda) \right]P_n(x) \,{\rm d}{x} \nonumber\\
&=- \int_{0}^{\infty} \left[P_n(x) P_k(x)
+ xP_n(x) \deriv{P_k}{x}(x)\right]\omega(x;t,\lambda) \,{\rm d}{x}
- \int_{0}^{\infty} xP_n(x) P_k(x) \deriv{\omega}{x}(x) \,{\rm d}{x}
\nonumber\\
&=- \int_{0}^{\infty} \left[P_n(x) P_k(x)
+ xP_n(x) \deriv{P_k}{x}(x)\right]\omega(x;t,\lambda) \,{\rm d}{x}
- \int_{0}^{\infty} P_n(x) P_k(x) (\lambda+tx-x^3)\,\omega(x;t,\lambda) \,{\rm d}{x}.
\label{maineq2}
\end{align}
For $k=n$, it follows from \eqref{maineq2}, that
\begin{align}
h_n c_{n,n} &= \int_{0}^{\infty} x\deriv{P_n}{x}(x) P_n(x)\,\omega(x;t,\lambda) \,{\rm d}{x}\nonumber\\&=-\tfrac12\int_{0}^{\infty} P_n^2(x) \omega(x;t,\lambda) \,{\rm d}{x} -\tfrac12\int_{0}^{\infty} P_n^2(x) (\lambda+tx-x^3)\,\omega(x;t,\lambda)\,{\rm d}{x} \nonumber
\\&= - \tfrac12h_n -\tfrac12\lambda h_n-\tfrac 12t\int_{0}^{\infty} xP_n^2(x)\,\omega(x;t,\lambda) \,{\rm d}{x}+\tfrac 12\int_{0}^{\infty} x^3P_n^2(x)\,\omega(x;t,\lambda)\,{\rm d}{x}.\label{eq:cnn1}
\end{align}
Iterating the three-term recurrence relation \eqref{eq:scrr}, yields
\begin{align}
x^3P_n(x) = P_{n+3}(x) &+ (\alpha_{n+2}+ \alpha_{n}+ \alpha_{n+1})P_{n+2}(x) \nonumber\\&
+(\alpha_n^2+\alpha_{n+1} \alpha_n+\alpha_{n+1}^2+\beta_n+\beta_{n+1}+\beta_{n+2})P_{n+1}(x)
\nonumber\\&
+ (2 \alpha_n \left(\beta_n+\beta_{n+1}\right)+\alpha_{n-1} \beta_n+\alpha_{n+1} \beta_{n+1}+\alpha_n^3)P_n(x)
\nonumber\\& +\beta_n \left(\alpha_{n-1}^2+\alpha_n \alpha_{n-1}+\alpha_n^2+\beta_{n-1}+\beta_n+\beta_{n+1}\right)P_{n-1}(x)
\nonumber \\&
+\beta_{n-1}\beta_n(\alpha_n+\alpha_{n-1}+\alpha_{n-2})P_{n-2}(x)+ \beta_n\beta_{n-1}\beta_{n-2}P_{n-3}(x).\label{recurrence3}
\end{align}
Substituting \eqref{eq:scrr} and \eqref{recurrence3} into \eqref{eq:cnn1} it follows that
\begin{align} c_{n,n}&= -\tfrac12(\alpha_nt+\lambda+1-\alpha_{n+1}\beta_{n+1}-\alpha_n^3-\alpha_{n-1}\beta_n)+\alpha_n(\beta_{n+1}+\beta_{n}).
\label{eq:cnn4}\end{align}
For $k=0,1,2,\dots,n-1$, \eqref{maineq2} yields
\begin{align}\nonumber
h_k c_{n,k}
&= -\int_{0}^{\infty} {P_n(x) P_k(x) \left(\lambda+tx -x^{3}\right) }\,\omega(x;t,\lambda)\,{\rm d}{x}\nonumber
\\&=
\int_{0}^{\infty} \big(x^3-tx\big) P_n(x) P_k(x)\,\omega(x;t,\lambda) \,{\rm d}{x}.\label{maineq}
\end{align}
Substituting \eqref{recurrence3} and \eqref{eq:3trr} into \eqref{maineq} we see that $c_{n,k} =0$ for $k=0,1,\dots,n-4$, while
\begin{subequations}\label{Aau}
\begin{align}
c_{n,n-1} & = \beta_{n}(\beta_{n+1}+\beta_n+\beta_{n-1}+\alpha_n^2+\alpha_{n-1}\alpha_n+\alpha_{n-1}^2-t)\\
c_{n,n-2} & =\beta_n\beta_{n-1}(\alpha_n+\alpha_{n-1}+\alpha_{n-2})\\
c_{n,n-3} &= \beta_n\beta_{n-1}\beta_{n-2}.
\end{align}
\end{subequations}
We now write \eqref{quasi} as
\begin{align}\label{AQquasi}
x\deriv{P_n}{x}(x) = c_{n,n-3} P_{n-3}(x) + c_{n,n-2} P_{n-2}(x) +c_{n,n-1}P_{n-1}(x)+ c_{n,n} P_{n}(x).
\end{align} Iterating \eqref{eq:3trr} to express $P_{n-3}$ and $P_{n-2}$ in terms of $P_n$ and $P_{n-1}$, we obtain
\begin{subequations}\label{recoa}\begin{align}
P_{n-2}(x)&= \dfrac{x-\alpha_{n-1}}{\beta_{n-1}}\,P_{n-1}(x)-\dfrac{P_n(x)}{\beta_{n-1}}\\
P_{n-3}(x)
= \left\{\dfrac{(x-\alpha_{n-1})(x-\alpha_{n-2})}{\beta_{n-1}\beta_{n-2}}-\dfrac{1}{\beta_{n-2}}\right\} P_{n-1}(x) - \dfrac{x-\alpha_{n-2}}{\beta_{n-1}\beta_{n-2}}\,P_{n}(x).
\end{align}\end{subequations}
Substituting \eqref{eq:cnn4}, \eqref{Aau} and \eqref{recoa} into \eqref{AQquasi} yields
\begin{align*}
x\deriv{P_n}{x}(x)=&\beta_n\left\{x^2+\alpha_nx+\alpha_n^2+\beta_n+\beta_{n+1}-t\right\}P_{n-1}(x)\\&
-\left\{\beta_n x-\alpha_n \beta_{n+1}+\tfrac{1}{2} \left(\lambda -\alpha_{n+1} \beta_{n+1}+\alpha_{n-1} \beta_n-\alpha_n^3+t \alpha_n+1\right)\right\}P_n(x)
\end{align*} and hence
\begin{align*}
\deriv{P_n}{x}(x)= \beta_n A_n(x) P_{n-1}(x)- B_n(x) P_n(x)
\end{align*}
where $ A_n(x)$ and $ B_n(x)$ are given by \eqref{AnBna}.
\end{proof}
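Theorem \ref{thm:dde} can be checked numerically (an illustration, not part of the proof; the parameter values are arbitrary, with $\lambda>0$ so that the weight vanishes at the origin): build the monic polynomials and recurrence coefficients for the weight \eqref{genAiry} by quadrature, then evaluate both sides of \eqref{ddee} with the coefficients \eqref{AnBna}--\eqref{def:Rnrn} for $n=2$:

```python
import numpy as np
from scipy.integrate import quad
from numpy.polynomial import Polynomial as Poly

lam, t = 1.5, 0.3  # illustrative parameters
w = lambda x: x**lam * np.exp(-x**3 / 3 + t * x)
ip = lambda f: quad(lambda x: f(x) * w(x), 0, np.inf)[0]

# Monic orthogonal polynomials P_0..P_4 and recurrence coefficients via Stieltjes.
polys, alpha, beta, h = [Poly([1.0])], [], [0.0], []
for n in range(4):
    pn = polys[n]
    h.append(ip(lambda x: pn(x) ** 2))
    alpha.append(ip(lambda x: x * pn(x) ** 2) / h[n])
    if n > 0:
        beta.append(h[n] / h[n - 1])
    prev = polys[n - 1] if n > 0 else Poly([0.0])
    polys.append(Poly([-alpha[n], 1.0]) * pn - beta[n] * prev)

n = 2
Rn = alpha[n] ** 2 + beta[n] + beta[n + 1] - t
rn = 0.5 * (lam - alpha[n + 1] * beta[n + 1] + alpha[n - 1] * beta[n]
            - alpha[n] ** 3 + t * alpha[n] + 1) - alpha[n] * beta[n + 1]
A = lambda x: x + alpha[n] + Rn / x   # A_n(x) from the theorem
B = lambda x: beta[n] + rn / x        # B_n(x) from the theorem
dP2 = polys[n].deriv()
resid = [dP2(x) - (beta[n] * A(x) * polys[n - 1](x) - B(x) * polys[n](x))
         for x in (0.6, 1.1, 1.9)]
print(resid)  # ~ 0 at each test point
```

The same residuals are obtained with the equivalent form $r_n=(\alpha_n+\alpha_{n-1})\beta_n-n$ noted after the next theorem, since the string equation $c_{n,n}=n$ relates the two expressions.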
A differential equation satisfied by generalised Airy polynomials can be obtained by differentiating the differential-difference equation \eqref{ddee}.
\begin{theorem}For the generalised Airy weight \eqref{genAiry} the monic orthogonal polynomials $P_n(x;t,\lambda)$ with respect to this weight satisfy the differential equation
\[
\deriv[2]{P_n}{x}(x;t,\lambda)+\mathcal{Q}_n(x)\deriv{P_n}{x}(x;t,\lambda)+\mathcal{T}_n(x)P_n(x;t,\lambda)=0
\]
where
\begin{align*}
\mathcal{Q}_n(x)=&\frac{\lambda +t x-x^3+1}{x}-\frac{\alpha _n+2 x}{\mathcal{C}_n(x)}\\
\mathcal{T}_n(x)=&\frac{n-\left(\alpha _{n-1}+\alpha _n\right) \beta _n-\left(\beta _n \mathcal{D}_n(x)-n\right) \left(-\lambda +\beta _n \mathcal{D}_n(x)-n-t x+x^3\right)+\beta _n \mathcal{C}_{n-1}(x) \mathcal{C}_{n}(x)}{x^2}\\&+\frac{\left(n-\beta _n \mathcal{D}_n(x)\right) \left(x^2-\alpha _n^2-\beta _n-\beta _{n+1}+t\right)}{x^2\mathcal{C}_{n} (x)}\end{align*} with\begin{align*} \mathcal{C}_n(x)=&x^2+\beta _n+\beta _{n+1}+\alpha _n \left(\alpha _n+x\right)-t\\\mathcal{D}_n(x)=&\alpha _{n-1}+\alpha _n+x\end{align*}
and $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ are the coefficients in the three-term recurrence relation \eqref{eq:scrr}.
\end{theorem}
\begin{proof}For the weight \eqref{genAiry}, we have that \[ v(x) =-\ln\omega(x)=\tfrac13x^3-tx-\lambda\ln x.\]
The result follows by substituting the expressions for $v(x)$ and $A_n(x)$ and $B_n(x)$, given in \eqref{AnBna},
into the equations, see equations (3.2.13) and (3.2.14) in \cite{refIsmail},\begin{align*}
\mathcal{Q}_n(x)&=-v'(x)-\frac{A_n'(x)}{A_n(x)}\\
\mathcal{T}_n(x)&=B_n'(x)-B_n(x)\frac{A_n'(x)}{A_n(x)}-B_n(x)[v'(x)+B_n(x)]+{\beta_{n}}{A_{n-1}(x)A_n(x)}.
\end{align*} Note that here equation (3.2.14) in \cite{refIsmail} has been written for monic polynomials.
The expression $r_n=(\alpha_n+\alpha_{n-1})\beta_n-n$ is also used, see \eqref{eq4b} below. \end{proof}
Next, we consider a mixed recurrence relation connecting generalised Airy polynomials associated with different weight functions. Mixed recurrence relations such as these are typically used to prove interlacing and Stieltjes interlacing of the zeros of two polynomials from different sequences and also provide a set of points that can be applied as inner bounds for the extreme zeros of polynomials.
\begin{lemma}\label{mreca} Let $\{P_n(x;t,\lambda)\}_{n=0}^{\infty}$ be the sequence of monic generalised Airy polynomials orthogonal with respect to the weight \eqref{genAiry}, then, for $n$ fixed,
\begin{align}
\label{l+2again}x^2P_{n-2}(x;t,\lambda+2)&=\left[\frac{e_{n}}{\beta_{n-1}}(x-\alpha_{n-1})-d_{n}\right]P_{n-1}(x;t,\lambda)+\left(1-\frac{e_{n}}{\beta_{n-1}}\right)P_n(x;t,\lambda)
\end{align} where
\begin{align*}d_{n}=&\dfrac{P_{n}(0;t,\lambda)}{P_{n-1}(0;t,\lambda)}+\dfrac{P_{n-1}(0;t,\lambda+1)}{P_{n-2}(0;t,\lambda+1)},\qquad
e_{n}=\dfrac{P_{n-1}(0;t,\lambda+1)}{P_{n-2}(0;t,\lambda+1)}\dfrac{P_{n-1}(0;t,\lambda)}{P_{n-2}(0;t,\lambda)}\end{align*} and $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ are the coefficients in the three-term recurrence relation \eqref{eq:scrr}.
\end{lemma}
\begin{proof}The weight function associated with the monic polynomials $P_n(x;t,\lambda+2)$ is
\begin{align*}\omega(x;t,\lambda+2)&=x^{\lambda+2}\exp\left(-\tfrac13x^3+tx\right)
=x\,\omega(x;t,\lambda+1).\end{align*}
Applying Christoffel's formula (cf.~\cite[Theorem 2.5]{refSzego}, \cite[Theorem 2.7.1]{refIsmail}) to the monic polynomial\\$P_{n-2}(x;t,\lambda+2)$, we can write
\begin{align*}
x P_{n-2}(x;t,\lambda+2)&=\dfrac{-1}{P_{n-2}(0;t,\lambda+1)}\left|\begin{matrix}P_{n-2}(x;t,\lambda+1)&P_{n-1}(x;t,\lambda+1)\\P_{n-2}(0;t,
\lambda+1)&P_{n-1}(0;t,\lambda+1)\end{matrix}\right|.\end{align*}This yields
\begin{align*}
P_{n-2}(x;t,\lambda+2)=&
\dfrac{1}{x}\left[P_{n-1}(x;t,\lambda+1)-\dfrac{P_{n-1}(0;t,\lambda+1)}{P_{n-2}(0;t,\lambda+1)}P_{n-2}(x;t,\lambda+1)\right]\\=&
\dfrac{1}{x}\left\{\dfrac{1}{x}\left[P_{n}(x;t,\lambda)-\dfrac{P_{n}(0;t,\lambda)}{P_{n-1}(0;t,\lambda)}P_{n-1}(x;t,\lambda)\right]\right.\\&\qquad\left.-\dfrac{P_{n-1}(0;t,\lambda+1)}{xP_{n-2}(0;t,\lambda+1)}\left[P_{n-1}(x;t,\lambda)-\dfrac{P_{n-1}(0;t,\lambda)}{P_{n-2}(0;t,\lambda)}P_{n-2}(x;t,\lambda)\right]\right\}
\\=&\dfrac{1}{x^2}\left[P_{n}(x;t,\lambda)-d_nP_{n-1}(x;t,\lambda)+e_nP_{n-2}(x;t,\lambda)\right].
\end{align*}
Using the three-term recurrence relation \[P_{n-2}(x;t,\lambda)=\frac{x-\alpha_{n-1}}{\beta_{n-1}}\, P_{n-1}(x;t,\lambda)-\frac{1}{\beta_{n-1}}P_n(x;t,\lambda)\] to eliminate $P_{n-2}(x;t,\lambda)$ yields the result.
\end{proof}
\comment{\begin{remark} Chihara (cf.~\cite[p. 37]{refChihara}) showed that $$P_n(x;t,\lambda+1)=\dfrac{h_n}{P_n(0;t,\lambda)}K_n(0,x)$$ where
$$K_n(y,x)=\sum_{j=0}^n\dfrac{P_j(y;t,\lambda)P_j(x;t,\lambda)}{h_j}$$ are the monic Kernel polynomials associated with $P_n(x;t,\lambda)$ and \[h_j=\int_{-\infty}^{\infty} P_j(x;t,\lambda)^2{\rm d}\omega(x;t,\lambda).\] It follows that
\[e_{n+2}=\frac{h_{n+1}}{h_n}\frac{K_{n+1}(0,x)}{K_n(0,x)}>0\] and hence
\begin{equation}\label{coef}e_{n+2}-\beta_{n+1}=\frac{h_{n+1}}{h_n}\left(\frac{K_{n+1}(0,x)}{K_n(0,x)}-1\right)>0.
\end{equation}
\end{remark}}
\subsection{Zeros of generalised Airy polynomials}\label{sec:GenAiryzeros}
The zeros $x_{k,n}$, $k\in\{1,2,\dots,n\}$, of $P_n(x)$, where $\{P_n(x)\}_{n=0}^{\infty}$ is a sequence of polynomials orthogonal with respect to a semiclassical weight $\omega(x)>0$, are real and distinct, and they interlace with the zeros of $P_{n-1}(x)$, that is, \[x_{1,n} <x_{1,n-1} <x_{2,n} <\dots<x_{n-1,n} <x_{n-1,n-1}<x_{n,n}. \label{sep}\] This holds for any semiclassical weight; the method of proof (see, for example, \cite[Theorems 3.3.1 and 3.3.2]{refSzego}) uses the three-term recurrence relation and the definition of orthogonality. Note that, for an even weight $\omega(x)$, $x_{\frac{n+1}{2},n} =0$ when $n$ is odd.
Monotonicity of the zeros of semiclassical orthogonal polynomials plays an important role in applications.
\begin{lemma}\label{mono}
Consider the semiclassical weight \begin{equation} \label{scweight1}\omega(x;t,\lambda)=|C(x)|^{\lambda}\omega_0(x)\exp\{tD(x)\},\qquad x\in(a,b),\qquad\lambda>-1\end{equation}
where $\omega_0(x)$ is a positive function on $(a,b)$. Let $\{P_n(x;t,\lambda)\}_{n=0}^{\infty}$ be the sequence of semiclassical orthogonal polynomials associated with the weight \eqref{scweight1}. Denote the $n$ real zeros of $P_n(x;t,\lambda)$ in increasing order by $x_{n,\nu}(t;\lambda)$, $\nu=1,2,\dots,n$. Then, for a fixed value of $\nu$, $\nu\in \{1,2,\dots, n\}$, the $\nu$-th zero $x_{n,\nu}(t;\lambda)$
\begin{itemize}
\item[(i)] increases when $\lambda$ increases, if $\displaystyle{\frac{1}{C(x)}\deriv{}{x}C(x)>0}$ for $x\in (a,b)$;\\[-0.4cm]
\item[(ii)] increases when $t$ increases, if $\displaystyle{\deriv{}{x}D(x)>0}$ for $x\in (a,b)$.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}\item[]
\item[(i)]For the semi-classical weight \eqref{scweight1}
\begin{align*}\frac{\partial}{\partial \lambda}\ln \omega(x;t,\lambda)&= \dfrac{|C(x)|^\lambda \exp\{tD(x)\} w_0(x)\ln |C(x)|}{|C(x)|^\lambda \exp\{tD(x)\}w_0(x)}
=\ln|C(x)|\end{align*} and therefore it follows from Markov's monotonicity theorem (cf.~\cite[Theorem 6.12.1]{refSzego}) that the zeros of $P_n(x)$ increase as $\lambda$ increases when $\ln|C(x)|$ is an increasing function of $x$. Since
\[\displaystyle{\deriv{}{x}\ln|C(x)|=\dfrac{1}{|C(x)|} \text{sgn}\, C(x) \deriv[]{}{x}C(x)}\] the result follows.
\item[(ii)]Similarly, since
\begin{align*}\frac{\partial}{\partial t}\ln\omega(x;t,\lambda)&=\dfrac{D(x)|C(x)|^\lambda \exp\{tD(x)\} w_0(x)}{|C(x)|^\lambda \exp\{tD(x)\}w_0(x)}
=D(x) \end{align*}
it follows that the zeros of $P_n(x)$ increase as $t$ increases when $D(x)$ is an increasing function of $x$.\end{itemize}
\end{proof}
\begin{corollary}Let $\{P_n(x)\}_{n=0}^{\infty}$ be the sequence of monic generalised Airy polynomials orthogonal with respect to the weight \eqref{genAiry} and let $0<x_{n,n}<\dots<x_{2,n} <x_{1,n} $ denote the zeros of $P_n(x)$. Then, for $\lambda>-1$ and $t\in \mathbb{R}$ and for a fixed value of $\nu$, $\nu\in \{1,2,\dots, n\}$,
the $\nu$-th zero $x_{n,\nu} $ increases when (i) $\lambda$ increases; and (ii)
$t$ increases.
\end{corollary}
\begin{proof} This follows from Lemma \ref{mono}, taking $C(x)=x$, $D(x)=x$ and $\omega_0(x)=\exp(-\tfrac13x^3)$.
\end{proof}
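The monotonicity in the corollary is easy to observe numerically. The following Python sketch (illustrative only; the degree $n=3$ and the parameter values are arbitrary choices) computes the zeros of $P_3(x;t,\lambda)$ by the Stieltjes procedure and compares them as $t$ or $\lambda$ is increased:

```python
import numpy as np
from scipy.integrate import quad

def zeros_P3(t, lam):
    # Zeros of the monic generalised Airy polynomial P_3(x; t, lambda),
    # built by the Stieltjes procedure with numerical quadrature.
    w = lambda x: x**lam * np.exp(-x**3 / 3 + t * x)
    ip = lambda f: quad(lambda x: f(x) * w(x), 0, np.inf)[0]
    polys, h_prev = [np.poly1d([1.0])], None
    for n in range(3):
        pn = polys[-1]
        h = ip(lambda x: pn(x) ** 2)
        a = ip(lambda x: x * pn(x) ** 2) / h
        b = 0.0 if h_prev is None else h / h_prev
        prev = polys[-2] if n > 0 else np.poly1d([0.0])
        polys.append(np.poly1d([1.0, -a]) * pn - b * prev)
        h_prev = h
    return np.sort(np.roots(polys[3].coeffs))

z_base = zeros_P3(0.0, 0.5)
z_t = zeros_P3(1.0, 0.5)    # larger t
z_lam = zeros_P3(0.0, 2.5)  # larger lambda
print(z_base, z_t, z_lam)   # each zero of z_t and z_lam exceeds the corresponding zero of z_base
```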
\comment{\begin{theorem}
Let $\{p_n\}_{n=0}^\infty$ be a sequence of polynomials orthogonal on the interval $(c,d)$.
Fix $k,n \in\mathbb{N}$ with $k < n-1$ and suppose deg$(g_{n-k-1})=n-k-1$ with
\begin{equation}\label{13}f(x)g_{n-k-1}(x)=G_k(x) p_{n-1}(x) + H(x) p_n(x)\end{equation}
where $f(x) \neq 0$ for $x\in(c,d)$ and deg($G_k)=k$. Then, if $g_{n-k-1}$ and $p_n$ are co-prime,
\begin{itemize}
\item[(i)] the $n-1$ real, simple zeros of $G_k g_{n-k-1}$ interlace with the zeros of $p_n$.
\item[(ii)] The largest (smallest) zero of $G_k$ is a strict
lower (upper) bound for the largest (smallest) zero of $p_{n}.$
\end{itemize}
\end{theorem}
\begin{proof}
See \cite{refDJ12}.
\end{proof}}
Next we use \eqref{l+2again} to obtain an upper bound for the smallest zero and a lower bound for the largest zero of generalised Airy polynomials.
\begin{theorem}Let $\{P_n(x;t,\lambda)\}_{n=0}^{\infty}$ be the sequence of monic generalised Airy polynomials orthogonal with respect to the weight \eqref{genAiry} on $(0,\infty)$. For each $n=2,3,\dots,$ the largest zero, $x_{1,n}$, and the smallest zero, $x_{n,n}$, of $P_n(x;t,\lambda)$ satisfy \[0<x_{n,n}<\alpha_{n-1}+\frac{d_{n}\beta_{n-1}}{e_{n}}<x_{1,n}\] where $\alpha_n=\alpha_n(t;\lambda)$ and $\beta_n=\beta_n(t;\lambda)$ are the coefficients in the three-term recurrence relation \eqref{eq:scrr} and \begin{align*}d_{n}=&\dfrac{P_{n}(0;t,\lambda)}{P_{n-1}(0;t,\lambda)}+\dfrac{P_{n-1}(0;t,\lambda+1)}{P_{n-2}(0;t,\lambda+1)},\qquad
e_{n}=\dfrac{P_{n-1}(0;t,\lambda+1)}{P_{n-2}(0;t,\lambda+1)}\dfrac{P_{n-1}(0;t,\lambda)}{P_{n-2}(0;t,\lambda)}.\end{align*}
\end{theorem}
\begin{proof}Let \[x_{n,n}<x_{n-1,n}<\dots<x_{2,n}<x_{1,n}\] denote the zeros of $P_n(x;t,\lambda)$.
Consider \eqref{l+2again}\begin{equation}\label{l+2again!}x^2P_{n-2}(x;t,\lambda+2)=G(x)P_{n-1}(x;t,\lambda)+\left(1-\frac{e_{n}}{\beta_{n-1}}\right)P_n(x;t,\lambda)\end{equation} with $\displaystyle{G(x)=\frac{e_{n}}{\beta_{n-1}}(x-\alpha_{n-1})-d_{n}}$. Since $P_{n-1}(x;t,\lambda)$ and $P_{n}(x;t,\lambda)$ are always
co-prime while $P_{n}(x;t,\lambda)$ and \newline $P_{n-2}(x;t,\lambda+2)$ are co-prime by assumption, it follows from \eqref{l+2again!} that $G(x_{j,n})\neq 0$ for every $j\in\{1,2,\dots,n\}.$ From \eqref{l+2again!}, provided $P_n(x;t,\lambda)\neq 0$, we have \[ \frac{x^2
P_{n-2}(x;t,\lambda+2)}{P_n(x;t,\lambda)} = 1-\frac{e_{n}}{\beta_{n-1}}
+\frac{G(x) P_{n-1}(x;t,\lambda)}{ P_{n}(x;t,\lambda)}.\] The decomposition into partial fractions (cf.~\cite[Theorem 3.3.5]{refSzego}) \[\frac{P_{n-1}(x;t,\lambda)}{P_{n}(x;t,\lambda)}=\sum_{j=1}^{n}\frac{C_j}{x-x_{j,n}}\] where $C_j>0$ for every $j\in\{1,2,\dots,n\}$, implies that we can write
\[\frac{x^2
P_{n-2}(x;t,\lambda+2)}{P_{n}(x;t,\lambda)} = 1-\frac{e_{n}}{\beta_{n-1}}
+\sum_{j=1}^{n}\frac{G(x)C_j}{x-x_{j,n}},~~~ x\neq x_{j,n}.\]
Suppose that $G(x)$ does not change sign in an interval $(x_{j+1,n},x_{j,n})$, where $j\in\{1,2,\dots,n-1\}$. Since each
$C_j>0$, the right-hand side takes arbitrarily large positive and
negative values on $(x_{j+1,n},x_{j,n})$, and it follows that $P_{n-2}(x;t,\lambda+2)$ must have an odd number of zeros in every interval in which $G(x)$ does not change
sign. Since $G(x)$ is of degree $1$, there are at least $n-2$ intervals $(x_{j+1,n},x_{j,n})$, $j\in\{1,2,\dots,n-1\}$ in which $G(x)$ does not
change sign and so each of these intervals must contain exactly one of the $n-2$ real, simple zeros of $P_{n-2}(x;t,\lambda+2)$. We deduce that the
zero of $G(x)$, together with the $n-2$ zeros of $P_{n-2}(x;t,\lambda+2)$, interlaces with the $n$ zeros of $P_{n}(x;t,\lambda)$ and therefore the zero $\displaystyle{\alpha_{n-1}+\frac{d_{n}\beta_{n-1}}{e_{n}}}$ of $G(x)$ has to lie between the two extreme zeros of $P_n(x;t,\lambda)$.
\end{proof}
\subsection{Differential and discrete equations satisfied by the recurrence coefficients}
In this section we discuss properties of the recurrence coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ in the three-term recurrence relation \eqref{eq:scrr}.
\begin{theorem}
If $P_n(x)$ are the monic orthogonal polynomials for the weight $\omega(x) = {\rm e}^{-v(x)}$, then
\begin{align*}
&\left(\deriv{}{x}+B_n(x)\right) P_n(x)=\beta_nA_n(x)P_{n-1}(x)\\
&\left(\deriv{}{x}-B_n(x)-v'(x)\right) P_{n-1}(x)=-A_{n-1}(x)P_n(x),
\end{align*}
where the functions $A_n(x)$ and $B_n(x)$ are given by
\begin{align*}
A_n(x) &= \frac{1}{h_n}\int_0^\infty \frac{v'(x)-v'(y)}{x-y} \, P_n^2(y)\, \omega(y)\,{\rm d} y\\
B_n(x) &= \frac{1}{h_{n-1}}\int_0^\infty \frac{v'(x)-v'(y)}{x-y} \, P_n(y)P_{n-1}(y) \,\omega(y)\,{\rm d} y.
\end{align*}
\end{theorem}
The functions $A_n(x)$ and $B_n(x)$ also
satisfy the following supplementary conditions.
\begin{lemma}{The functions $A_n(x)$ and $B_n(x)$ satisfy
\begin{align}
& B_{n+1}(x)+B_n(x) = (x-\alpha_n)A_n(x)-v'(x)\label{eq1}\\
& 1+(x-\alpha_n)[B_{n+1}(x)-B_n(x)] = \beta_{n+1}A_{n+1}(x)-\beta_nA_{n-1}(x)\label{eq2}\\
& B_n^2(x)+v'(x)B_n(x)+\sum_{j=0}^{n-1}A_j(x)=\beta_nA_n(x)A_{n-1}(x).\label{eq3}
\end{align}
}\end{lemma}
\begin{proof} See \cite[Lemma 3.2.2, Theorem 3.2.4]{refIsmail}; see also \cite[Proposition 3.1]{refFVAZ}. \end{proof}
\begin{theorem}\label{thm:anbn1}
The recurrence coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ for the generalised Airy weight \eqref{genAiry} satisfy the discrete system
\begin{subequations}\label{anbn:dissys}\begin{align} \label{anbn:dissysa}
& (2\alpha_n+\alpha_{n-1})\beta_n + (\alpha_{n+1}+2\alpha_{n})\beta_{n+1}+\alpha_n^3-t\alpha_n =2n+\lambda+1\\
& \beta_n^3 + (\beta_{n+1}+\beta_{n-1}-2\alpha_n\alpha_{n-1}-2t)\beta_n^2 \nonumber\\ &\qquad + \{ (\beta_{n+1}+\alpha_n^2-t)(\beta_{n-1}+\alpha_{n-1}^2-t) +(\alpha_n+\alpha_{n-1})(2n+\lambda) \}\beta_n = n(n+\lambda).
\end{align}\end{subequations}
\end{theorem}
\begin{proof}
Substituting \eqref{AnBna} into \eqref{eq1}, with $v(x)=\tfrac13x^3-tx-\lambda\ln(x)$, then from the coefficients of $x^0$ and $x^{-1}$ we obtain
\begin{align}
&\beta_n+\beta_{n+1}=R_n-\alpha_n^2 + t \label{eq1a}\\
& r_n+r_{n+1} =-\alpha_n R_n + \lambda\label{eq1b}
\end{align}
\comment{Similarly from \eqref{eq2} we obtain
\begin{align}
&\alpha_n(\beta_n-\beta_{n+1}) + r_{n+1}-r_n + 1 = \alpha_{n+1}\beta_{n+1}-\alpha_{n-1}\beta_n\label{eq2a}\\
&\alpha_n(r_n-r_{n+1}) =R_{n+1}\beta_{n+1}-R_{n-1}\beta_n .\label{eq2b}
\end{align}}%
and substituting \eqref{AnBna} into \eqref{eq3}, then from the coefficients of $x$ and $x^{-2}$ we obtain
\begin{align}
&r_n+n=(\alpha_n+\alpha_{n-1})\beta_n\label{eq3a}\\
&r_n^2-\lambda r_n = \beta_n R_n R_{n-1}.\label{eq3b}
\end{align}
From \eqref{eq1a} and \eqref{eq3a}, respectively, we see that
\begin{align} &R_n = \beta_n+\beta_{n+1}+\alpha_n^2-t\label{eq4a}\\ &r_n= (\alpha_n+\alpha_{n-1})\beta_n-n.\label{eq4b}\end{align}
Substituting these into \eqref{eq1b} and \eqref{eq3b} gives the discrete system \eqref{anbn:dissys}, as required.
\end{proof}
\begin{corollary}
{For the generalised Airy weight \eqref{genAiry}
the monic orthogonal polynomials $P_{n}(x;t,\lambda)$ with respect to this weight satisfy the differential-difference equation \eqref{ddee}
with
\begin{align*}
A_n(x)&= x + \alpha_n + \frac{R_n}{x},\qquad
B_n(x)=\beta_n + \frac{r_n}{x}
\end{align*}
where $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ are the coefficients in the three-term recurrence relation \eqref{eq:scrr} and
\begin{align*}R_n& = \beta_n+\beta_{n+1}+\alpha_n^2-t,\qquad r_n = (\alpha_n+\alpha_{n-1})\beta_n-n.\end{align*}}
\end{corollary}
\begin{proof} The result follows from Theorem \ref{thm:dde} by substituting \eqref{anbn:dissysa} into \eqref{def:Rnrnb}.\end{proof}
\begin{theorem}\label{thm:anbn2}
The recurrence coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ for the generalised Airy weight \eqref{genAiry} satisfy the differential system
\begin{subequations}\label{anbn:diffsys}\begin{align}
&\deriv[2]{\alpha_n}{t}+3\alpha_n\deriv{\alpha_n}{t}+\alpha_n^3+(6\beta_n-t)\alpha_n = 2n+\lambda+1\\
&\left(\deriv{\alpha_n}{t}+\alpha_n^2+2\beta_n-t\right)\deriv[2]{\beta_n}{t}-\left(\deriv{\beta_n}{t}\right)^2-\left(2\alpha_n\deriv{\alpha_n}{t}+2\alpha_n^3-2t\alpha_n+2n+\lambda\right) \deriv{\beta_n}{t}\nonumber\\
&\qquad -\beta_n \left(\deriv{\alpha_n}{t}\right)^2 + 4\beta_n^3-4t\beta_n^2+\{\alpha_n^4-2t\alpha_n^2+2(2n+\lambda)\alpha_n+t^2\}\beta_n = n(n+\lambda).
\end{align}\end{subequations}
\end{theorem}
\begin{proof} Recall that from Theorem \ref{thm:todasys}, $\alpha_n$ and $\beta_n$ satisfy the Toda system
\begin{equation}\label{eq:toda2}
\deriv{\alpha_n}t=\beta_{n+1}-\beta_n,\qquad\deriv{\beta_n}t=\beta_n(\alpha_n-\alpha_{n-1}).\end{equation}
We can use these to eliminate $\beta_{n+1}$, $\beta_{n-1}$, $\alpha_{n+1}$ and $\alpha_{n-1}$ from the discrete system \eqref{anbn:dissys}. Substituting
\begin{align*}&\beta_{n+1}=\beta_n+\deriv{\alpha_n}t, &&
\alpha_{n+1}= \alpha_n +\deriv{}t \ln\beta_{n+1} =\alpha_n +\deriv{}t \ln\left(\beta_n+\deriv{\alpha_n}t\right)\\
&\alpha_{n-1} = \alpha_n-\deriv{}t \ln\beta_n, && \beta_{n-1} = \beta_n-\deriv{\alpha_{n-1}}t = \beta_n-\deriv{}t \alpha_{n}+\deriv[2]{}t \ln\beta_n,
\end{align*}
into \eqref{anbn:dissys} gives the differential system \eqref{anbn:diffsys}, as required.\end{proof}
\comment{ \begin{equation} \deriv[2]{\alpha_n}{t}+3\alpha_n\deriv{\alpha_n}{t}+\alpha_n^3+(6\beta_n-t)\alpha_n = 2n+\lambda+1.
\label{eq4ode}\end{equation}
\comment{Using \eqref{eq:toda2} to eliminate $\beta_{n+2}$, $\beta_{n+1}$, $\alpha_{n+1}$ and $\alpha_{n-1}$ from \eqref{eq6} yields
\[ \deriv[3]{\alpha_n}{t}+3\alpha_n\deriv[2]{\alpha_n}{t}+3\left(\deriv{\alpha_n}{t}\right)^2+(3\alpha_n^2+6\beta_n-t)\deriv{\alpha_n}{t} + 6\alpha_n\deriv{\beta_n}{t} -\alpha_n =0\]
which is the differential of \eqref{eq4ode} w.r.t.\ $t$. }%
Using \eqref{eq:toda2} to eliminate $\beta_{n+1}$, $\alpha_{n+1}$ and $\alpha_{n-1}$ from \eqref{eq6} yields
\begin{align}
&\left(\deriv{\alpha_n}{t}+\alpha_n^2+2\beta_n-t\right)\deriv[2]{\beta_n}{t}-\left(\deriv{\beta_n}{t}\right)^2-\left(2\alpha_n\deriv{\alpha_n}{t}+2\alpha_n^3-2t\alpha_n+2n+\lambda\right) \deriv{\alpha_n}{t}\nonumber\\
&\qquad -\beta_n \left(\deriv{\alpha_n}{t}\right)^2 + 4\beta_n^3-4t\beta_n^2+\{\alpha_n^4-2t\alpha_n^2+2(n+\lambda)\alpha_n+t^2\}\beta_n = n(n+\lambda).\label{eq5ode}
\end{align}}
\subsection{Asymptotics of the recurrence coefficients}\label{sec:rc}
\begin{lemma}{As $n\to\infty$, the recurrence coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ have the formal asymptotic expansions
\begin{subequations}\label{nasym:anbn}\begin{align}
\alpha_n(t;\lambda)&= \frac{2n^{1/3}}{\kappa} + \frac{\kappa t}{15n^{1/3}}+\frac{\kappa^{2}(\lambda+1)}{30\,n^{2/3}} + \mathcal{O}(n^{-1})\\
\beta_n(t;\lambda)&= \frac{n^{2/3}}{\kappa^2} + \frac{t}{15}+\frac{\kappa \lambda}{30\,n^{1/3}}+\frac{\kappa^{2}t^2}{900\,n^{2/3}} + \mathcal{O}(n^{-1}),
\end{align}\end{subequations}
where $\kappa=\sqrt[3]{10}$.
}\end{lemma}
\comment{\begin{align*}
\alpha_n(t;\lambda)&= 2N + \frac{t}{15N}+\frac{\lambda+1}{30N^2} + \mathcal{O}(N^{-3})\\
\beta_n(t;\lambda)&= N^2 + \frac{t}{15}+\frac{\lambda}{30N}+\frac{t^2}{900N^2} + \mathcal{O}(N^{-3})\\
\end{align*}
where $N=\sqrt[3]{n/10}$.}
\begin{proof}
From \eqref{anbn:dissys}, it follows that as $n\to\infty$, $\alpha_n\sim\alpha n^{1/3}$ and $\beta_n\sim\beta n^{2/3}$, where $\alpha$ and $\beta$ are constants that satisfy the algebraic system
\begin{align*}
& 6\alpha\beta+\alpha^3 =2,\qquad
4\beta^3 +\alpha^4\beta +4\alpha\beta = 1,
\end{align*}
which has solution $\alpha=2/\kappa$ and $\beta=1/\kappa^2$, with $\kappa=\sqrt[3]{10}$. Now we suppose that as $n\to\infty$
\begin{subequations} \label{anbn:nexp} \begin{align}
\alpha_n(t;\lambda)&= \frac{2n^{1/3}}{\kappa} + a_0(t) + \frac{a_1(t)}{n^{1/3}} + \frac{a_2(t)}{n^{2/3}} + \mathcal{O}(n^{-1})\\
\beta_n(t;\lambda)&= \frac{n^{2/3}}{\kappa^2} + \tilde{b}_1(t)n^{1/3}+ b_0(t) + \frac{b_1(t)}{n^{1/3}} + \frac{b_2(t)}{n^{2/3}} + \mathcal{O}(n^{-1}),
\end{align}\end{subequations}
where $a_0(t)$, $a_1(t)$, $a_2(t)$, $\tilde{b}_1(t)$, $b_0(t)$, $b_1(t)$ and $b_2(t)$ are to be determined. Substituting \eqref{anbn:nexp} into the system \eqref{anbn:diffsys}, equating coefficients of powers of $n$ and solving the resulting system gives
\[ \begin{split} &a_0(t) = 0, \qquad a_1(t)=\frac{\kappa t}{15},\qquad a_2(t)= \frac{\kappa^2(\lambda+1)}{30}\\
& \tilde{b}_1(t)= 0, \qquad b_0(t)=\frac{t}{15},\qquad b_1(t) = \frac{\kappa \lambda}{30},\qquad b_2(t)=\frac{\kappa^{2}t^2}{900},
\end{split} \]
and so we obtain \eqref{nasym:anbn}, as required.
\end{proof}
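\begin{remark}The leading-order coefficients are easily checked directly: with $\alpha=2/\kappa$, $\beta=1/\kappa^2$ and $\kappa^3=10$,
\[ 6\alpha\beta+\alpha^3=\frac{12}{\kappa^{3}}+\frac{8}{\kappa^{3}}=\frac{20}{\kappa^3}=2,\qquad
4\beta^3+\alpha^4\beta+4\alpha\beta=\frac{4}{\kappa^{6}}+\frac{16}{\kappa^{6}}+\frac{8}{\kappa^{3}}=\frac15+\frac45=1.\]
\end{remark}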
\begin{lemma}{\label{lem:anbn:asympt} As $t\to\infty$, the recurrence coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ for the generalised Airy weight \eqref{genAiry}
have the formal asymptotic expansions
\begin{subequations}\label{asym:anbntp}\begin{align}
\alpha_n(t;\lambda)&= \sqrt{t}- \frac{2n-2\lambda+1}{4t} + \mathcal{O}(t^{-5/2})\label{asym:antp}\\
\beta_n(t;\lambda) &= \frac{n}{2\sqrt{t}} + \frac{n(n-2\lambda)}{4t^2} + \mathcal{O}(t^{-7/2}).\label{asym:bntp}
\end{align}\end{subequations}
As $t\to-\infty$, the recurrence coefficients $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ have the formal asymptotic expansions
\begin{subequations}\label{asym:anbntm}\begin{align}
\alpha_n(t;\lambda)&= -\frac{2n+\lambda+1}{t}-\frac{(2n+\lambda+1)(10n^2+10n\lambda+\lambda^2+10n+5\lambda+6)}{t^4}+\mathcal{O}(t^{-7})\label{asym:antm}\\
\beta_n(t;\lambda)& = \frac{n(n+\lambda)}{t^2} + \frac{4n(n+\lambda)(5n^2+5n\lambda+\lambda^2+1)}{t^5} +\mathcal{O}(t^{-8}).\label{asym:bntm}
\end{align}\end{subequations}
}\end{lemma}
\begin{proof} When $n=0$ we know that
\[ \alpha_0(t;\lambda)=\deriv{}{t}\ln\mu_0(t;\lambda),\qquad \beta_0(t;\lambda)=0\]
where \[\mu_0(t;\lambda)= \int_0^\infty x^{\lambda}\exp\left(-\tfrac13x^3+tx\right)\,{\rm d} x.\]
Further, $\alpha_0(t;\lambda)$ satisfies the equation
\begin{equation} \deriv[2]{\alpha_0}{t}+3\alpha_0\deriv{\alpha_0}{t}+\alpha_0^3-t\alpha_0 = \lambda+1,
\label{eq:a0ode}\end{equation}
which is the first equation of the differential system \eqref{anbn:diffsys} with $n=0$, since $\beta_0(t;\lambda)=0$.
Using Laplace's method it follows that as $t\to\infty$
\[\begin{split} \mu_0(t;\lambda) &
= t^{(\lambda+1)/2}\int_0^\infty \xi^{\lambda}\exp\{t^{3/2}\xi(1-\tfrac13\xi^2)\}\,{\rm d}\xi \\
&={t}^{\lambda/2-1/4}\sqrt {\pi}\exp\left(\tfrac23\,{t}^{3/2}\right){\left[1+\frac{12\lambda^2-24\lambda+5}{48t^{3/2}}+\mathcal{O}(t^{-3})\right]}
\end{split}\]
and so it is straightforward to show that as $t\to\infty$
\[ \alpha_0(t;\lambda)=\sqrt{t}+\mathcal{O}({t}^{-1}).\]
Suppose we seek an asymptotic expansion in the form
\[ \alpha_0(t;\lambda)=\sqrt{t}+\frac{a_{0,1}}{t}+\frac{a_{0,2}}{t^{5/2}}+\mathcal{O}(t^{-4})\]
where $a_{0,1}$ and $a_{0,2}$ are constants to be determined.
Substituting this into \eqref{eq:a0ode} and equating coefficients of powers of $t$ gives
\[ a_{0,1}=\tfrac12\lambda-\tfrac14,\qquad a_{0,2} = -\tfrac38\lambda^2+\tfrac34\lambda-\tfrac{5}{32}\]
and so
\[ \alpha_0(t;\lambda)=\sqrt{t}+\frac{2\lambda-1}{4t}-\frac{12\lambda^2-24\lambda+5}{32t^{5/2}}+\mathcal{O}(t^{-4})\]
which is \eqref{asym:antp} with $n=0$.
From the Toda system \eqref{eq:toda2}, since $ \beta_0=0$ then $\displaystyle \beta_1= \deriv{\alpha_0}{t}$,
and so
\[ \beta_1(t;\lambda) = \frac{1}{2\sqrt{t}}-\frac{2\lambda-1}{4t^2} + \mathcal{O}(t^{-7/2})\]
which is \eqref{asym:bntp} with $n=1$.
Now we can use induction. Suppose that \eqref{asym:anbntp} are true, then using the Toda system \eqref{eq:toda2} we have
\begin{align*}\beta_{n+1} &=\beta_n+\deriv{\alpha_n}t = \frac{n+1}{2\sqrt{t}} + \frac{(n+1)(n+1-2\lambda)}{4t^2} + \mathcal{O}(t^{-7/2})\\
\alpha_{n+1} &= \alpha_n +\deriv{}t \ln\beta_{n+1} = \sqrt{t}- \frac{2n-2\lambda+3}{4t} + \mathcal{O}(t^{-5/2}),
\end{align*}
which are \eqref{asym:anbntp} with $n\to n+1$, and hence the result is proved by induction.
The asymptotics \eqref{asym:anbntm} are proved in an analogous way. Using Watson's Lemma it follows that as $t\to-\infty$
\[ \mu_0(t;\lambda) = \frac{\Gamma(\lambda+1)}{(-t)^{\lambda+1}}\left[1+\frac{(\lambda+1)(\lambda+2)(\lambda+3)}{3t^3}+\mathcal{O}(t^{-6})\right]\]
and so as $t\to-\infty$
\[ \alpha_0(t;\lambda)=-\frac{\lambda+1}{t}+\mathcal{O}(t^{-4}).\]
In this case we seek an asymptotic expansion in the form
\[ \alpha_0(t;\lambda)=-\frac{\lambda+1}{t}+\frac{\widetilde{a}_{0,1}}{t^4}+\frac{\widetilde{a}_{0,2}}{t^{7}}+\mathcal{O}(t^{-10})\]
where $\widetilde{a}_{0,1}$ and $\widetilde{a}_{0,2}$ are constants to be determined.
Substituting this into \eqref{eq:a0ode} and equating coefficients of powers of $t$ gives
\[ \widetilde{a}_{0,1}=-(\lambda+1)(\lambda+2)(\lambda+3),\qquad \widetilde{a}_{0,2}=-(\lambda+1)(\lambda+2)(\lambda+3)(3\lambda^2 + 21\lambda + 38)\]
and so
\[ \alpha_0(t;\lambda)=-\frac{\lambda+1}{t}-\frac{(\lambda+1)(\lambda+2)(\lambda+3)}{t^4}+\mathcal{O}(t^{-7})\]
which is \eqref{asym:antm} with $n=0$.
From the Toda system \eqref{eq:toda2}, since $ \beta_0=0$ then $\displaystyle \beta_1= \deriv{\alpha_0}{t}$,
and so
\[ \beta_1(t;\lambda) =\frac{\lambda+1}{t^2}+\frac{4(\lambda+1)(\lambda+2)(\lambda+3)}{t^5}+\mathcal{O}(t^{-8})\]
which is \eqref{asym:bntm} with $n=1$. As for \eqref{asym:anbntp}, the asymptotic expansions \eqref{asym:anbntm} can then be proved by induction using the Toda system \eqref{eq:toda2}.
\end{proof}
An alternative method of proving Lemma \ref{lem:anbn:asympt} is to use the differential system \eqref{anbn:diffsys} by seeking solutions in the form
\begin{align*}
\alpha_n(t;\lambda) &= \sqrt{t} +\frac{a_{n,1}}{t}+\mathcal{O}(t^{-5/2}),\qquad
\beta_n(t;\lambda) = \frac{n}{2\sqrt{t}} + \frac{b_{n,1}}{t^2} +\mathcal{O}(t^{-7/2})
\end{align*}
as $t\to\infty$, where $a_{n,1}$ and $b_{n,1}$ are to be determined, and
\begin{align*}
\alpha_n(t;\lambda) &= \frac{\tilde{a}_{n,0}}{t} + \frac{\tilde{a}_{n,1}}{t^4} + \mathcal{O}(t^{-7}),\qquad
\beta_n(t;\lambda) = \frac{\tilde{b}_{n,0}}{t^2} + \frac{\tilde{b}_{n,1}}{t^5} + \mathcal{O}(t^{-8})
\end{align*}
as $t\to-\infty$, where $\tilde{a}_{n,0}$, $\tilde{a}_{n,1}$, $\tilde{b}_{n,0}$ and $\tilde{b}_{n,1}$ are to be determined.
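The leading coefficients in the latter expansion follow from a dominant balance in \eqref{anbn:diffsys}: as $t\to-\infty$ the derivative terms are subdominant, the first equation is dominated by $-t\alpha_n=2n+\lambda+1$, and in the second equation the only term on the left-hand side that does not vanish is $t^2\beta_n$, so that
\[ \tilde{a}_{n,0}=-(2n+\lambda+1),\qquad \tilde{b}_{n,0}=n(n+\lambda),\]
in agreement with the leading terms of \eqref{asym:anbntm}.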
Plots of $\alpha_n(t;\lambda)$ and $\beta_n(t;\lambda)$ for $n=1,2,\ldots,5$, with $\lambda=0,\tfrac12,2$, are given in Figure~\ref{fig:anbn}.
\begin{figure}[ht]\[ \begin{array}{c@{\quad}c@{\quad}c}
\fig{genAiry_an_la0} & \fig{genAiry_an_la12} &\fig{genAiry_an_la2}\\
\lambda=0 & \lambda=\tfrac12 & \lambda=2\\
\fig{genAiry_bn_la0} &\fig{genAiry_bn_la12} & \fig{genAiry_bn_la2}\\
\end{array}\]
\caption{\label{fig:anbn}Plots of $\alpha_n(t;\lambda)$, upper row, and $\beta_n(t;\lambda)$, lower row, for $n=1$ (black), $n=2$ (red), $n=3$ (blue), $n=4$ (green) and $n=5$ (purple), with $\lambda=0,\tfrac12,2$.}
\end{figure}
From these plots we make the following conjecture.
\begin{conjecture}{\rm
\begin{enumerate}
\item The recurrence coefficient $\alpha_{n}(t;\lambda)$ is a monotonically increasing function of $t$.
\item If $\lambda$ is fixed, then $\beta_{n+1}(t;\lambda)>\beta_{n}(t;\lambda)$, for all $t$.
\item For fixed $\lambda$, the recurrence coefficient $\beta_{n}(t;\lambda)$ has a single maximum, at $t=t^*_{n}$, and $t^*_{n+1}>t^*_{n}$.
\end{enumerate}
}\end{conjecture}
\begin{remark}{Wang \textit{et al.}\ \cite{refWZC20b} claim that the recurrence coefficients $\alpha_n$ and $\beta_n$ for the generalised Airy weight satisfy the differential system
\begin{equation} \label{WZCeq35}
\deriv{\alpha_n}{t} = t-\alpha_n^2-2\beta_n,\qquad \deriv{\beta_n}{t}=2\alpha_n\beta_n-n-\tfrac12\lambda
\end{equation}
and the discrete system
\[ (\alpha_n+\alpha_{n-1})\beta_n=n+\tfrac12\lambda,\qquad \beta_n+\beta_{n+1}+\alpha_n^2=t \]
see equations (34), (35), (37) and (38) in \cite{refWZC20b}.
Theorems \ref{thm:anbn1} and \ref{thm:anbn2} above show that their claim is not correct.
Wang \textit{et al.}\ \cite{refWZC20b} misquote the results of Magnus \cite{refMagnus95}, who considered the weight
\[\omega(x;t)=\exp\left(-\tfrac13x^3+tx\right),\qquad x\in\mathcal{C}\] where $\mathcal{C}$ is a contour in the complex plane, as being for $x\in[0,\infty)$; see equation (2) in \cite{refWZC20b}.
There are further indications that some results in \cite{refWZC20b} are not correct.
Eliminating $\beta_n$ in \eqref{WZCeq35} gives
\[\deriv[2]{\alpha_n}{t}=2\alpha_n^3-2t\alpha_n+2n+\lambda+1\]
which is equivalent to the second Painlev\'e\ equation (\mbox{\rm P$_{\rm II}$})\
\begin{equation}\label{eq:PII} \deriv[2]{q}{z}=2q^3+zq+A\end{equation}
with $A=n+\tfrac12(\lambda+1)$.
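The equivalence can be made explicit: setting $\alpha_n(t)=2^{1/3}q(z)$ with $z=-2^{1/3}t$ transforms the equation above into
\[\deriv[2]{q}{z}=2q^3+zq+n+\tfrac12(\lambda+1),\]
which is \eqref{eq:PII} with $A=n+\tfrac12(\lambda+1)$.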
In \cite{refWZC20b}, $\alpha_n$ is expressed in terms of the Hankel determinants, therefore giving special function solutions of \mbox{\rm P$_{\rm II}$}\ for \textit{all} positive values of the parameter $A$, which is not true. It is well-known that there are special function solutions of \mbox{\rm P$_{\rm II}$}\ \eqref{eq:PII} if and only if $A=n+\tfrac12$, for $n\in\mathbb{Z}$ \cite{refGambier09}.
\comment{Eliminating $\alpha_n$ gives
\[\deriv[2]{\beta_n}{t}= \frac{1}{2\beta_n}\left(\deriv{\beta_n}{t}\right)^2 - 4\beta_n^2+2t\beta_n-\frac{(2n+\lambda)^2}{8\beta_n}\]
which is equivalent to {$\text{P}_{\!34}$}
\[\deriv[2]{p}{z}= \frac{1}{2p}\left(\deriv{p}{z}\right)^2 - 2p^2-zp-\frac{(2n+\lambda)^2}{2p}.\]}
}\end{remark}
\section{Generalised sextic Freud polynomials}\label{sec:gen6freud}
The generalised sextic Freud weight
\begin{equation}\label{genFreud6} \omega(x;t,\lambda)=|x|^{2\lambda+1}\exp\left(-x^6+tx^2\right),\qquad x\in\mathbb{R}\end{equation}
with $t\in\mathbb{R}$ and $\lambda>-1$ parameters,
is a symmetric weight, i.e.\ $\omega(x;t,\lambda)=\omega(-x;t,\lambda)$, so that $\alpha_n \equiv0$. The generalised sextic Freud weight and recurrence coefficients associated with the weight were discussed in \cite{refCJ20}.
Next we consider some properties of the polynomials associated with the generalised sextic Freud weight \eqref{genFreud6}. Generalised sextic Freud polynomials arise from a symmetrisation of generalised Airy polynomials. Since the weight is symmetric, the monic orthogonal polynomials $S_n(x;t,\lambda)$, $n\in\mathbb{N}$, satisfy the three-term recurrence relation
\begin{equation}\label{eq:3rr}
S_{n+1}(x;t,\lambda)=xS_n(x;t,\lambda)-\beta_n(t;\lambda)S_{n-1}(x;t,\lambda),\qquad n=0,1,2,\ldots\ \end{equation}
with $S_{-1}(x;t,\lambda)=0$ and $S_0(x;t,\lambda)=1$.
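For illustration, the recurrence relation \eqref{eq:3rr} gives
\[ S_1(x;t,\lambda)=x,\qquad S_2(x;t,\lambda)=x^2-\beta_1,\qquad S_3(x;t,\lambda)=x^3-(\beta_1+\beta_2)x,\]
so that $S_n(x;t,\lambda)$ contains only even powers of $x$ when $n$ is even and only odd powers when $n$ is odd, as expected for a symmetric weight.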
\subsection{Differential and discrete equations satisfied by generalised sextic Freud polynomials} \label{gen6feq}
In this section we derive mixed recurrence relations, differential-difference equations and differential equations satisfied by generalised sextic Freud polynomials which are analogous to those for the generalised Airy polynomials in \S\ref{sec:genairyprop}.
\comment{ \begin{theorem}\label{Thm:ABn}
Let \begin{equation} \label{gft}\ww{x}=|x-k|^\rho \exp\{-v(x)\},\qquad x,\,t,\,k\in\mathbb{R}\end{equation} where $v(x)$ is a continuously differentiable function on $\mathbb{R}$. Assume that the polynomials $\{P_n(x)\}_{n=0}^{\infty}$ satisfy the orthogonality relation
\[\int_{-\infty}^{\infty} P_n(x)P_m(x)\,\ww{x}\,{\rm d} x=h_n\delta_{mn}.\]
Then, for ${\rho\geq1}$, $P_{n}(x)$ satisfy the differential-difference equation
\begin{equation} \nonumber \label{dde}
(x-k)\,\deriv{P_n}{x}(x)=A_n(x)P_{n-1}(x)-\mathcal{B}_n(x)P_n(x)
\end{equation}
where
\begin{align*}
A_n(x)&=\frac{x-k}{h_{n-1}}\int_{-\infty}^{\infty} P_n^2(y)\,\mathcal{K}(x,y)\,\omega(y)\,\dy+a_n(x)\\
\mathcal{B}_n(x)&= \frac{x-k}{h_{n-1}}\int_{-\infty}^{\infty} P_n(y)P_{n-1}(y)\,\mathcal{K}(x,y)\,\omega(y)\,\dy+b_n(x)\label{Bn}
\end{align*}
with
\begin{equation} \nonumber \mathcal{K}(x,y)=\frac{v'(x)-v'(y)}{x-y}\label{def:Kxy}\end{equation}
and
\begin{align*} a_n(x)&=\frac{\rho}{h_{n-1}}\int_{-\infty}^{\infty} \frac{P_n^2(y)}{y-k}\,\omega(y)\,\dy,\qquad
b_n(x)=\frac{\rho}{h_{n-1}} \int_{-\infty}^{\infty} \frac{P_n(y)P_{n-1}(y)}{y-k}\,\omega(y)\,\dy.\end{align*}
\end{theorem}
\begin{proof} See \cite[Theorem 2]{refCJK}.
\end{proof}
\begin{lemma}\label{lemmaeven}
Consider the weight defined by \eqref{gft} and assume that $v(x)$ is an even, continuously differentiable function on $\mathbb{R}$. Assume that the polynomials $\{P_n(x)\}_{n=0}^{\infty}$ satisfy the orthogonality relation
\[\int_{-\infty}^{\infty} P_n(x)P_m(x)\,\ww{x}\,{\rm d} x=h_n\delta_{mn}\]
and the three-term recurrence relation
\[\label{3trr}
P_{n+1}(x)=xP_{n}(x)-\beta_n(t)P_{n-1}(x)
\]
with $P_0=1$ and $P_1=x$. Then the polynomials $P_n(x)$ satisfy
\begin{align*} \int_{-\infty}^{\infty} \frac{P_n^2(y)}{y-k}\,\omega(y)\,\dy&=0,\qquad
\int_{-\infty}^\infty \frac{P_n(y)P_{n-1}(y)}{y-k}\,\omega(y)\,\dy= \tfrac12[1-(-1)^n]\,h_{n-1},
\label{int22}
\end{align*} where $n\in\mathbb{N}$ and
\[ h_n = \int_{-\infty}^\infty {P_n^2(y)\,\ww{y}}\,\dy.\]\end{lemma}
\begin{proof} See \cite[Lemma 1]{refCJK}.\end{proof}
\subsubsection{The differential-difference equation satisfied by generalised sextic Freud polynomials}
\begin{lemma}\label{cor:ABn}
Let \[\ww{x}=|x|^\rho \exp\{-v(x)\},\qquad x,\,t,\,k\in\mathbb{R}\] where $v(x)$ is an even, continuously differentiable function on $\mathbb{R}$. Assume that the polynomials $\{P_n(x)\}_{n=0}^{\infty}$ satisfy the orthogonality relation
\[\int_{-\infty}^{\infty} P_n(x)P_m(x)\,\ww{x}\,{\rm d} x=h_n\delta_{mn}.\]
Then, for ${\rho\geq1}$, $P_{n}(x)$ satisfy the differential-difference equation
\[
x\,\deriv{P_n}{x}(x)=\mathcal{A}_n(x)P_{n-1}(x)-\mathcal{B}_n(x)P_n(x)
\]
where
\begin{align*}
\mathcal{A}_n(x)&=\frac{x}{h_{n-1}}\int_{-\infty}^{\infty} P_n^2(y)\,\mathcal{K}(x,y)\,\omega(y)\,\dy\\
\mathcal{B}_n(x)&= \frac{x}{h_{n-1}}\int_{-\infty}^{\infty} P_n(y)P_{n-1}(y)\,\mathcal{K}(x,y)\,\omega(y)\,\dy+\tfrac12{\rho}[1-(-1)^n].
\end{align*}
\end{lemma}
\begin{proof}
See \cite[Corollary 1]{refCJK}.
\end{proof}
\begin{lemma}{\label{lem31}For the generalised sextic Freud weight \eqref{genFreud6}
the monic orthogonal polynomials $P_{n}(x)$ with respect to $\ww{x}$ satisfy
\begin{subequations}
\begin{align}
\int_{-\infty}^{\infty}\mathcal{K}(x,y)&{P_n^2(y)}\,\omega(y)\,\dy \nonumber\\
&=6\big[x^4-\tfrac13t +x^2\big(\beta_n+\beta_{n+1}\big)+\beta_{n+2}\beta_{n+1}
+\big(\beta_{n+1}+\beta_{n}\big)^2+\beta_{n-1}\beta_{n}\big]h_n, \label{int1a}\\
\int_{-\infty}^{\infty}\mathcal{K}(x,y)& {P_n(y)P_{n-1}(y)}\,\omega(y)\,\dy = 6x\big(x^2+\beta_{n+1}+\beta_n+\beta_{n-1}\big)h_n\label{int1b}
\end{align}
\end{subequations}
where
\[\mathcal{K}(x,y)=\frac{v'(x)-v'(y)}{x-y}\] with
$v(x)=x^6-tx^2$ and
\begin{equation} \nonumber \label{def:hn} h_n = \int_{-\infty}^{\infty} {P_n^2(y)\,\ww{y}}\,\dy.\end{equation} }\end{lemma}
\begin{proof}Since
$v(x)=x^6-tx^2$, we have
\[\mathcal{K}(x,y)= 6(x^4+x^3y+x^2y^2+xy^3+y^4)-2t.\]
Hence for \eqref{int1a}
\begin{align*} \int_{-\infty}^{\infty} &\mathcal{K}(x,y) {P_n^2(y)}\,\omega(y)\,\dy \\
&=
(6x^4-2t)\int_{-\infty}^{\infty} {P_n^2(y)}\,\omega(y)\,\dy
+ 6x^3\int_{-\infty}^{\infty} {yP_n^2(y)}\,\omega(y)\,\dy + 6x^2\int_{-\infty}^{\infty} {y^2P_n^2(y)}\,\omega(y)\,\dy\\
&\qquad + 6x\int_{-\infty}^{\infty} {y^3P_n^2(y)}\,\omega(y)\,\dy + 6\int_{-\infty}^{\infty} {y^4P_n^2(y)}\,\omega(y)\,\dy\\
&= (6x^4-2t)h_n +6x^2\int_{-\infty}^{\infty} \big[P_{n+1}(y) + \beta_nP_{n-1}(y)\big]^2\omega(y)\,\dy\\ &\qquad
+ 6\int_{-\infty}^{\infty} \big[P_{n+2}(y) + (\beta_{n+1}+\beta_n)P_{n}(y)+\beta_n\beta_{n-1}P_{n-2}(y)\big]^2\omega(y)\,\dy\\
&= (6x^4-2t)h_n + 6x^2(h_{n+1}+ \beta_n^2h_{n-1})+6(h_{n+2} + (\beta_{n+1}+\beta_n)^2h_n+\beta_n^2\beta_{n-1}^2h_{n-2})\\
&= 6\big[x^4-\tfrac13t +x^2\big(\beta_n+\beta_{n+1}\big)+\beta_{n+2}\beta_{n+1}+\big(\beta_{n+1}+\beta_{n}\big)^2+\beta_{n-1}\beta_{n}\big]h_n,
\end{align*}
as required, since
\[\int_{-\infty}^{\infty} {y P_n^2(y)}\,\omega(y)\,\dy=\int_{-\infty}^{\infty} {y^3P_n^2(y)}\,\omega(y)\,\dy=0\]
as these have odd integrands,
$\beta_n=h_n/h_{n-1}$, the monic orthogonal polynomials $P_{n}(x)$ satisfy the three-term recurrence relation \eqref{eq:3rr},
and are orthogonal, i.e.
\begin{equation} \int_{-\infty}^{\infty} {P_m(y)}P_{n}(y)\,\omega(y)\,\dy=0,\qquad{\rm if}\quad m\neq n.\label{Pnorth}\end{equation}
Also for \eqref{int1b}
\begin{align*} \int_{-\infty}^{\infty}& \mathcal{K}(x,y) {P_n(y)P_{n-1}(y)}\,\omega(y)\,\dy\nonumber\\ &=
(6x^4-2t) \int_{-\infty}^{\infty} {P_n(y)P_{n-1}(y)}\,\omega(y)\,\dy
+ 6x^3 \int_{-\infty}^{\infty} {yP_n(y)P_{n-1}(y)}\,\omega(y)\,\dy \\ &\qquad\quad
+ 6x^2 \int_{-\infty}^{\infty} {y^2P_n(y)P_{n-1}(y)}\,\omega(y)\,\dy + 6x \int_{-\infty}^{\infty} {y^3P_n(y)P_{n-1}(y)}\,\omega(y)\,\dy \\&\qquad\quad
+ 6 \int_{-\infty}^{\infty} {y^4P_n(y)P_{n-1}(y)}\,\omega(y)\,\dy \\
&= 6x^3\int_{-\infty}^{\infty} P_{n}(y)\big[P_{n}(y)+\beta_{n-1}P_{n-2}(y)\big] \omega(y)\,\dy \\
&\qquad\quad +6x \int_{-\infty}^{\infty}
\big[P_{n+2}(y)+\big(\beta_{n+1}+\beta_{n}\big)P_n(y)+\beta_n\beta_{n-1}P_{n-2}(y)\big]
\big[P_{n}(y)+\beta_{n-1}P_{n-2}(y)\big] \omega(y)\,\dy\\
&=6x^3h_n+6x\big[\big(\beta_{n+1}+\beta_{n}\big)h_n+\beta_n\beta_{n-1}^2h_{n-2}\big]\\
&=6x\big(x^2+\beta_{n+1}+\beta_n+\beta_{n-1}\big)h_n,
\end{align*}
as required, since
\[\begin{split}\int_{-\infty}^{\infty} {P_n(y)P_{n-1}(y)}\,\omega(y)\,\dy=\int_{-\infty}^{\infty} y^2 {P_n(y)P_{n-1}(y)}\,\omega(y)\,\dy=
\int_{-\infty}^{\infty} y^4 {P_n(y)P_{n-1}(y)}\,\omega(y)\,\dy&=0\end{split}\]
as these have odd integrands, using the recurrence relation \eqref{eq:3rr} and orthogonality \eqref{Pnorth}.\end{proof}}
\begin{theorem}{For the generalised sextic Freud weight \eqref{genFreud6}
the monic orthogonal polynomials $S_{n}(x;t,\lambda)$ with respect to this weight satisfy the differential-difference equation
\begin{equation} \label{eq:Snddeq}
x\deriv{S_n}{x}(x;t,\lambda)=\mathcal{A}_n(x)S_{n-1}(x;t,\lambda)-\mathcal{B}_n(x)S_n(x;t,\lambda)
\end{equation}
where
\begin{subequations}\label{AnBn2}\begin{align}
\mathcal{A}_n(x)&=6x\beta_n\big[x^4-\tfrac13t +x^2\big(\beta_n+\beta_{n+1}\big)+\beta_{n+2}\beta_{n+1}+\big(\beta_{n+1}+\beta_{n}\big)^2+\beta_{n-1}\beta_{n}\big]\\
\mathcal{B}_n(x)&=6x^2\beta_n\big(x^2+\beta_{n+1}+\beta_n+\beta_{n-1}\big)+(\lambda+\tfrac12)[1-(-1)^n],
\end{align}\end{subequations}
with $\beta_n$ the recurrence coefficient in the three-term recurrence relation \eqref{eq:3rr}.}\end{theorem}
\begin{proof}It was proved in \cite[Corollary 1]{refCJK} that the monic orthogonal polynomials $S_n(x)=S_{n}(x;t,\lambda)$ with respect to the weight
\[\omega(x)=|x|^{2\lambda+1}\exp\{-v(x)\}\]
satisfy the differential-difference equation \eqref{eq:Snddeq}, where
\begin{align*}
\mathcal{A}_n(x)&=\frac{x}{h_{n-1}}\int_{-\infty}^{\infty}\mathcal{K}(x,y) {S_n^2(y)}\,\omega(y)\,\dy\\
\mathcal{B}_n(x)&=\frac{x}{h_{n-1}}\int_{-\infty}^{\infty}\mathcal{K}(x,y) {S_n(y)S_{n-1}(y)} \,\omega(y)\,\dy
+(\lambda+\tfrac12)[1-(-1)^n]
\end{align*}
and \[\mathcal{K}(x,y)=\frac{v'(x)-v'(y)}{x-y}.\] For the generalised sextic Freud weight \eqref{genFreud6} we have
$v(x)=x^6-tx^2$ and hence
\[\mathcal{K}(x,y)= 6(x^4+x^3y+x^2y^2+xy^3+y^4)-2t.\]
Hence
\begin{align*} \int_{-\infty}^{\infty} &\mathcal{K}(x,y) {S_n^2(y)}\,\omega(y)\,\dy \\
&=
(6x^4-2t)\int_{-\infty}^{\infty} {S_n^2(y)}\,\omega(y)\,\dy
+ 6x^3\int_{-\infty}^{\infty} {yS_n^2(y)}\,\omega(y)\,\dy + 6x^2\int_{-\infty}^{\infty} {y^2S_n^2(y)}\,\omega(y)\,\dy\\
&\qquad + 6x\int_{-\infty}^{\infty} {y^3S_n^2(y)}\,\omega(y)\,\dy + 6\int_{-\infty}^{\infty} {y^4S_n^2(y)}\,\omega(y)\,\dy\\
&= (6x^4-2t)h_n +6x^2\int_{-\infty}^{\infty} \big[S_{n+1}(y) + \beta_nS_{n-1}(y)\big]^2\omega(y)\,\dy\\ &\qquad
+ 6\int_{-\infty}^{\infty} \big[S_{n+2}(y) + (\beta_{n+1}+\beta_n)S_n(y)+\beta_n\beta_{n-1}S_{n-2}(y)\big]^2\omega(y)\,\dy\\
&= (6x^4-2t)h_n + 6x^2(h_{n+1}+ \beta_n^2h_{n-1})+6(h_{n+2} + (\beta_{n+1}+\beta_n)^2h_n+\beta_n^2\beta_{n-1}^2h_{n-2})\\
&= 6\big[x^4-\tfrac13t +x^2\big(\beta_n+\beta_{n+1}\big)+\beta_{n+2}\beta_{n+1}+\big(\beta_{n+1}+\beta_{n}\big)^2+\beta_{n-1}\beta_{n}\big]h_n,
\end{align*}
since
\[\int_{-\infty}^{\infty} {y S_n^2(y)}\,\omega(y)\,\dy=\int_{-\infty}^{\infty} {y^3S_n^2(y)}\,\omega(y)\,\dy=0\]
as these have odd integrands,
$\beta_n=h_n/h_{n-1}$ where
\begin{equation} \nonumber \label{def:hn} h_n = \int_{-\infty}^{\infty} S_n^2(y)\,\omega(y)\,\dy\end{equation}
the monic orthogonal polynomials $S_{n}(x)$ satisfy the three-term recurrence relation \eqref{eq:3rr},
and are orthogonal, i.e.
\begin{equation} \int_{-\infty}^{\infty} {S_m(y)}S_n(y)\,\omega(y)\,\dy=0,\qquad{\rm if}\quad m\neq n.\label{Pnorth}\end{equation}
Also,
\begin{align*} \int_{-\infty}^{\infty}& \mathcal{K}(x,y) {S_n(y)S_{n-1}(y)}\,\omega(y)\,\dy\nonumber\\ &=
(6x^4-2t) \int_{-\infty}^{\infty} {S_n(y)S_{n-1}(y)}\,\omega(y)\,\dy
+ 6x^3 \int_{-\infty}^{\infty} {yS_n(y)S_{n-1}(y)}\,\omega(y)\,\dy \\ &\qquad\quad
+ 6x^2 \int_{-\infty}^{\infty} {y^2S_n(y)S_{n-1}(y)}\,\omega(y)\,\dy + 6x \int_{-\infty}^{\infty} {y^3S_n(y)S_{n-1}(y)}\,\omega(y)\,\dy \\&\qquad\quad
+ 6 \int_{-\infty}^{\infty} {y^4S_n(y)S_{n-1}(y)}\,\omega(y)\,\dy \\
&= 6x^3\int_{-\infty}^{\infty} S_n(y)\big[S_n(y)+\beta_{n-1}S_{n-2}(y)\big]\omega(y)\,\dy \\
&\qquad\quad +6x \int_{-\infty}^{\infty}
\big[S_{n+2}(y)+\big(\beta_{n+1}+\beta_{n}\big)S_n(y)+\beta_n\beta_{n-1}S_{n-2}(y)\big]
\big[S_n(y)+\beta_{n-1}S_{n-2}(y)\big]\omega(y)\,\dy\\
&=6x^3h_n+6x\big[\big(\beta_{n+1}+\beta_{n}\big)h_n+\beta_n\beta_{n-1}^2h_{n-2}\big]\\
&=6x\big(x^2+\beta_{n+1}+\beta_n+\beta_{n-1}\big)h_n,
\end{align*}
since
\[\begin{split}\int_{-\infty}^{\infty} {S_n(y)S_{n-1}(y)}\,\omega(y)\,\dy=\int_{-\infty}^{\infty} y^2 {S_n(y)S_{n-1}(y)}\,\omega(y)\,\dy=
\int_{-\infty}^{\infty} y^4 {S_n(y)S_{n-1}(y)}\,\omega(y)\,\dy&=0\end{split}\]
as these have odd integrands, using the recurrence relation \eqref{eq:3rr} and orthogonality \eqref{Pnorth}.\end{proof}
\begin{remark} In \cite{refWZC20a}, the ladder operator technique was used to obtain the coefficients ${A}_n$ and ${B}_n$ in \eqref{ddee} for the weight \eqref{genFreud6}. Note however that, in their notation, the expression for ${B}_n$ (cf.~\cite[eqn. (39)]{refWZC20a}) should be \[{B}_n(z)=6z^3\beta_n +6z\beta_n\big(\beta_{n+1}+\beta_n+\beta_{n-1}\big)+\frac{\alpha[1-(-1)^n]}{2z}.\]
\end{remark}
Now we derive a differential equation satisfied by generalised sextic Freud polynomials.
\begin{theorem} \label{thm:gende}For the generalised sextic Freud weight \eqref{genFreud6}
the monic orthogonal polynomials $S_{n}(x;t,\lambda)$ with respect to this weight satisfy
\[
x\deriv[2]{S_n}{x}(x;t,\lambda)+Q_n(x)\deriv{S_n}{x}(x;t,\lambda)+T_n(x)S_n(x;t,\lambda)=0
\]
where
\begin{align*}
Q_n(x)=&~2 tx^2 -6 x^6+2 \lambda +1
-\frac{2 x^2 \left(2x^2+\beta_n+\beta_{n+1}\right)}{C_n(x)}\\
T_n(x)=&~36 x \beta_{n} C_{n-1}(x)C_n(x)+12 x^3 \beta_{n}+\frac{(2 \lambda +1)}{x} \left\{6 x^2 \beta_{n} D_n(x)+( \lambda +\tfrac12) [1-(-1)^n]\right\}+12 x \beta_{n} D_n(x)\\
&-\left\{6 x^2 \beta_{n} D_n(x)+(\lambda +\tfrac12) [1-(-1)^n]\right\}\left\{6 x \beta_{n} D_n(x)+\frac{(2 \lambda +1)}{2x} [1-(-1)^n]-2 t x+6 x^5\right\}\\&-\frac{\left\{C_n(x)+4x^4+2 x^2(\beta_n+\beta_{n+1})\right\} \left\{6 x^2 \beta_{n} D_n(x)+( \lambda +\tfrac12) [1-(-1)^n]\right\}}{x C_n(x)},
\end{align*}
with
\begin{align*}C_n(x)&=x^4-\tfrac13t +x^2\big(\beta_n+\beta_{n+1}\big)+\beta_{n+2}\beta_{n+1}+\big(\beta_{n+1}+\beta_{n}\big)^2+\beta_{n-1}\beta_{n}\\
D_n(x)&=x^2+\beta_{n-1}+\beta_{n}+\beta_{n+1}.\end{align*}
\end{theorem}
\begin{proof} In \cite[Theorem 3]{refCJK} it was proved that the coefficients in the differential equation
\[x\deriv[2]{S_n}{x}(x)+Q_n(x)\deriv{S_n}{x}(x)+T_n(x)S_n(x)=0\]
satisfied by polynomials orthogonal with respect to the weight \[\ww{x}=|x|^{\rho}\exp\{-v(x;t)\}\] are given by
\begin{subequations}\label{coeff12}\begin{align}
Q_n(x)&=\rho+1-x\deriv{v}{x}-\frac{x}{\mathcal{A}_n(x)}\,\deriv{\mathcal{A}_n}{x}\\[5pt]
T_n(x)&=\frac{\mathcal{A}_n(x)\mathcal{A}_{n-1}(x)}{x\beta_{n-1}}+\deriv{\mathcal{B}_n}{x}
-\mathcal{B}_n(x)\left[\deriv{v}{x} +\frac{\mathcal{B}_n(x)-\rho}{x}\right]-\frac{\mathcal{B}_n(x)}{\mathcal{A}_n(x)}\deriv{\mathcal{A}_n}{x},
\end{align}
with
\begin{align*}
\mathcal{A}_n(x)&=\frac{x}{h_{n-1}}\int_{-\infty}^{\infty} S_n^2(y;t,\lambda)\,\mathcal{K}(x,y)\,\omega(y;t,\lambda)\,\dy\\
\mathcal{B}_n(x)&= \frac{x}{h_{n-1}}\int_{-\infty}^{\infty} S_n(y;t,\lambda)S_{n-1}(y;t,\lambda)\,\mathcal{K}(x,y)\,\omega(y;t,\lambda)\,\dy
+\tfrac12{\rho}[1-(-1)^n].
\end{align*}\end{subequations}
For the generalised sextic Freud weight \eqref{genFreud6} we use \eqref{coeff12} with $k=0$, $\rho=2\lambda+1$ and $v(x)=x^6-tx^2$ to obtain
\begin{subequations}
\begin{align}\label{coef1}
Q_n(x)&=2\lambda+2-6x^6+2tx^2 -\frac{x}{\mathcal{A}_n(x)}\,\deriv{\mathcal{A}_n}{x}\\[5pt]
\label{coef2}T_n(x)&=\frac{\mathcal{A}_n(x)\mathcal{A}_{n-1}(x)}{x\beta_{n-1}}+\deriv{\mathcal{B}_n}{x}
-\mathcal{B}_n(x)\left[6x^5-2tx +\frac{\mathcal{B}_n(x)-(2\lambda+1) }{x}\right]-\frac{\mathcal{B}_n(x)}{\mathcal{A}_n(x)}\deriv{\mathcal{A}_n}{x}.
\end{align}\end{subequations}
Substituting the expressions for $\mathcal{A}_n(x)$ and $\mathcal{B}_n(x)$ given by \eqref{AnBn2} and their derivatives into \eqref{coef1} and \eqref{coef2}, we obtain the stated result on simplification.
\end{proof}
\begin{lemma}\label{mrec} Let $\{S_n(x;t,\lambda)\}_{n=0}^{\infty}$ be the sequence of monic generalised sextic Freud polynomials orthogonal with respect to the weight
\eqref{genFreud6}, then, for $n$ fixed,
\begin{align}\label{l+2}
x^2S_n(x;t,\lambda+1)=xS_{n+1}(x;t,\lambda)-(\beta_{n+1}+a_n)S_n(x;t,\lambda)
\end{align} where
\begin{equation} \nonumber a_n=\begin{cases} \displaystyle\frac{S_{n+2}(0;t,\lambda)}{S_n(0;t,\lambda)},\quad &\text{if}\quad n\quad\text{even}\\[5pt]
\displaystyle\frac{S_{n+2}'(0;t,\lambda)}{S_n'(0;t,\lambda)},\quad &\text{if}\quad n\quad\text{odd}.\end{cases}\label{ed}\end{equation}
\end{lemma}
\begin{proof}The weight function associated with the polynomials $S_n(x;t,\lambda+1)$ is
\begin{align*}\omega(x;t,\lambda+1)&=|x|^{2\lambda+3}\exp(-x^6+tx^2)
=x^2\omega(x;t,\lambda).\end{align*}
The factor $x^2$ by which the weight $\omega(x;t,\lambda)$ is modified has a double zero at the origin and therefore Christoffel's formula (cf.~\cite[Theorem 2.5]{refSzego}), applied to the monic polynomials $S_n(x;t,\lambda+1)$, is
\[
x^2S_{n}(x;t,\lambda+1)=\frac{1}{S_n(0;t,\lambda)S_{n+1}'(0;t,\lambda)-S_n'(0;t,\lambda)S_{n+1}(0;t,\lambda)}\left|\begin{matrix} S_{n}(x;t,\lambda) & S_{n+1}(x;t,\lambda) & S_{n+2}(x;t,\lambda)\\
S_{n}(0;t,\lambda) & S_{n+1}(0;t,\lambda) & S_{n+2}(0;t,\lambda)\\
S_{n}'(0;t,\lambda) & S_{n+1}'(0;t,\lambda) & S_{n+2}'(0;t,\lambda)\\\end{matrix}\right|
\]
Since the weight $\omega(x;t,\lambda)$ is even, we have that $S_{2n+1}(0;t,\lambda)=S_{2n}'(0;t,\lambda)=0$ while $S_{2n}(0;t,\lambda)\neq0$ and $S_{2n+1}'(0;t,\lambda)\neq0$, hence
\[
x^2S_{n}(x;t,\lambda+1)=\frac{-1}{ S_n'(0;t,\lambda)S_{n+1}(0;t,\lambda)}\left|\begin{matrix} S_{n}(x;t,\lambda) & S_{n+1}(x;t,\lambda) & S_{n+2}(x;t,\lambda)\\
0 & S_{n+1}(0;t,\lambda) &0\\
S_{n}'(0;t,\lambda) & 0 & S_{n+2}'(0;t,\lambda)\\\end{matrix}\right|
\] for $n$ odd, while, for $n$ even,
\[
x^2S_{n}(x;t,\lambda+1)=\frac{1}{S_n(0;t,\lambda)S_{n+1}'(0;t,\lambda) }\left|\begin{matrix} S_{n}(x;t,\lambda) & S_{n+1}(x;t,\lambda) & S_{n+2}(x;t,\lambda)\\
S_{n}(0;t,\lambda) & 0& S_{n+2}(0;t,\lambda)\\
0 & S_{n+1}'(0;t,\lambda) &0\\\end{matrix}\right|
\]
This yields
\begin{equation}\label{even}
x^2S_n(x;t,\lambda+1)=S_{n+2}(x;t,\lambda)-a_nS_n(x;t,\lambda)
\end{equation} and the result follows on eliminating $S_{n+2}(x;t,\lambda)$ in \eqref{even} using the three-term recurrence relation \eqref{eq:3rr}, i.e., substituting $S_{n+2}(x;t,\lambda)=xS_{n+1}(x;t,\lambda)-\beta_{n+1}S_n(x;t,\lambda)$.
\end{proof}
\subsection{Zeros of generalised sextic Freud polynomials}
\noindent When the weight is even, the zeros of the corresponding orthogonal polynomials are symmetric about the origin. This implies that the positive and the negative zeros have opposing monotonicity and therefore we only need to consider the monotonicity of the positive zeros.
\begin{lemma}\label{symmon}
Let $\omega_0(x)$ be a symmetric positive weight on $(a,b)$ for which all the moments exist and let\begin{equation}\label{genws}\omega(x;t,\rho)=|C(x)|^\rho \exp\{tD(x)\} \omega_0(x),\qquad t\in \mathbb{R}, \qquad \rho>-1\end{equation} where $D(x)$ is an even function. Consider the sequence of semiclassical orthogonal polynomials $\{S_n(x;t,\rho)\}_{n=0}^{\infty}$ associated with the weight \eqref{genws} and denote the $\lfloor n/2\rfloor$ real, positive zeros of $S_n(x;t,\rho)$ in increasing order by $x_{n,k}(t,\rho)$, $k=1,2,\dots,\lfloor n/2\rfloor$,
where $\lfloor m \rfloor$ is the largest integer less than or equal to $m$.
Then, for a fixed value of $\nu$, $\nu\in \{1,2,\dots, \lfloor n/2 \rfloor\}$, the $\nu$-th zero $x_{n,\nu}(t,\rho)$
\begin{itemize}
\item[(i)] increases when $\rho$ increases, if $\displaystyle{\frac{1}{C(x)}\deriv{}{x}C(x)>0}$ for $x\in (0,b)$;\\[-0.4cm]
\item[(ii)] increases when $t$ increases, if $\displaystyle{\deriv{}{x}D(x)>0}$ for $x\in (0,b)$;
\end{itemize}
\end{lemma}
\begin{proof} The proof follows along the same lines as that of Lemma \ref{mono} using the generalised version of Markov's monotonicity theorem (cf.~\cite[Theorem 2.1]{refJWZ}) for $x>0$.
\end{proof}
\begin{corollary}Let $\{S_n(x;t,\lambda)\}_{n=0}^{\infty}$ be the sequence of monic generalised sextic Freud polynomials orthogonal with respect to the weight \eqref{genFreud6} and let $0<x_{\lfloor n/2 \rfloor,n}<\dots<x_{2,n} <x_{1,n} $ denote the positive zeros of $S_n(x;t,\lambda)$.
Then, for $\lambda>-1$ and $t\in \mathbb{R}$ and for a fixed value of $\nu$, $\nu\in \{1,2,\dots, \lfloor n/2 \rfloor\}$, the $\nu$-th zero $x_{\nu,n}$ increases when (i) $\lambda$ increases; and (ii)
$t$ increases.
\end{corollary}
\begin{proof} This follows from Lemma \ref{symmon}, taking $C(x)=x$, $D(x)=x^2$, $\rho=2\lambda+1$ and $\omega_0(x)=\exp(-x^6)$.
\end{proof}
Mixed recurrence relations involving polynomials from different orthogonal sequences, such as the relation derived in Lemma \ref{mrec}, provide information on the relative positioning of zeros of the polynomials in the relation. In the next theorem we prove that the zeros of $S_n(x;t,\lambda)$, the monic generalised sextic Freud polynomials orthogonal with respect to the weight
\eqref{genFreud6}, and the zeros of $S_{n-1}(x;t,\lambda+k)$ interlace for $\lambda>-1$, $t\in \mathbb{R}$ and $k\in (0,1)$ fixed.
\begin{theorem} \label{int} Let $\lambda>-1$, $t\in \mathbb{R}$ and $k\in (0,1)$. Let $\{S_n(x;t,\lambda)\}$ be the monic generalised sextic Freud polynomials orthogonal with respect to the weight
\eqref{genFreud6}. Denote
the positive zeros of $S_{n}(x;t,\lambda+k)$ by \[0<x_{\lfloor \frac n2\rfloor,n}^{(t,\lambda+k)}<x_{\lfloor \frac n2\rfloor -1,n}^{(t,\lambda+k)}<\dots<x_{2,n}^{(t,\lambda+k)}<x_{1,n }^{(t,\lambda+k)}.\]
\noindent If $n$ is even, then
\begin{align}\label{inteven}
0<&x_{\lfloor \frac{n}{2}\rfloor,n}^{(t;\lambda)}<x_{\lfloor \frac{n-1}{2} \rfloor,n-1}^{(t;\lambda)}<x_{\lfloor \frac{n-1}{2} \rfloor,n-1}^{(t,\lambda+k)}<x_{\lfloor \frac{n-1}{2} \rfloor,n-1}^{(t,\lambda+1)}<x_{\lfloor \frac{n}{2} \rfloor-1,n}^{(t;\lambda)}<\dots\nonumber\\ &\dots <x_{2,n}^{(t;\lambda)}<x_{1,n-1}^{(t;\lambda)}<x_{1,n-1}^{(t,\lambda+k)}<x_{1,n-1}^{(t,\lambda+1)}<x_{1,n}^{(t;\lambda)}
\end{align}
and if $n$ is odd, then
\begin{align}\label{intodd}
0<&x_{\lfloor \frac{n-1}{2} \rfloor,n-1}^{(t;\lambda)}<x_{\lfloor \frac{n-1}{2} \rfloor,n-1}^{(t,\lambda+k)}<x_{\lfloor \frac{n-1}{2} \rfloor,n-1}^{(t,\lambda+1)}<x_{\lfloor \frac{n}{2} \rfloor,n}^{(t;\lambda)}<x_{\lfloor \frac{n-1}{2} \rfloor-1,n-1}^{(t;\lambda)}<\dots\nonumber\\ &\dots <x_{2,n}^{(t;\lambda)}<x_{1,n-1}^{(t;\lambda)}<x_{1,n-1}^{(t,\lambda+k)}<x_{1,n-1}^{(t,\lambda+1)}<x_{1,n}^{(t;\lambda)}.
\end{align}\end{theorem}
\begin{proof} In Theorem \ref{mono} we proved that the positive zeros of $S_{n-1}(x;t,\lambda)$ monotonically increase as $\lambda$ increases. This implies that, for each fixed $\ell\in\{1,2,\dots,\lfloor \frac{n-1}{2}\rfloor\}$,
\begin{equation}\label{mo} x_{ \ell ,n-1}^{(t;\lambda)}<x_{\ell,n-1}^{(t,\lambda+k)}<x_{ \ell,n -1}^{(t,\lambda+1)}.\end{equation}
On the other hand, the zeros of $S_{n}(x;t,\lambda)$ and $S_{n-1}(x;t,\lambda)$, two consecutive polynomials in the sequence of orthogonal polynomials, are interlacing, that is, when $n$ is even,
\begin{equation}\label{3.13b}
0<x_{\lfloor \frac{n}{2}\rfloor,n}^{(t;\lambda)}<x_{\lfloor\frac{n-1}{2}\rfloor,n-1}^{(t;\lambda)}<x_{\lfloor\frac{n}{2}\rfloor-1,n}^{(t;\lambda)}<\dots<x_{2,n}^{(t;\lambda)}<x_{1,n-1}^{(t;\lambda)}<x_{1,n}^{(t;\lambda)}.
\end{equation}
Next, we prove that the zeros of $S_{n}(x;t,\lambda)$ interlace with those of $S_{n-1}(x;t,\lambda+1)$. Replacing $n$ by $n-1$ in \eqref{l+2} yields
\begin{equation}\label{l+22}
S_{n-1}(x;t,\lambda+1)=\frac{xS_{n}(x;t,\lambda)-(\beta_{n}+a_{n-1})S_{n-1}(x;t,\lambda)}{x^2}.
\end{equation}
Evaluating \eqref{l+22} at consecutive zeros $x_{\ell}=x_{\ell,n}^{(t;\lambda)}$ and $x_{\ell+1}=x_{\ell+1,n}^{(t;\lambda)}$, $\ell=1, 2, \ldots, \lfloor \frac{n}{2}\rfloor-1$, of $S_{n}(x;t,\lambda)$, we obtain
\[
S_{n-1}(x_{\ell};t,\lambda+1)S_{n-1}(x_{\ell+1};t,\lambda+1)=\frac{1}{x_{\ell}^2 x_{\ell+1}^2}(\beta_{n}+a_{n-1})^2S_{n-1}(x_{\ell};t,\lambda)S_{n-1}(x_{\ell+1};t,\lambda)<0
\]
since the zeros of $S_{n}(x;t,\lambda)$ and $S_{n-1}(x;t,\lambda)$ separate each other. So there is at least one positive zero of $S_{n-1}(x;t,\lambda+1)$ in the interval $(x_{\ell}, x_{\ell+1})$ for each $\ell=1, 2, \ldots, \lfloor \frac{n}{2}\rfloor-1$
and this implies that
\begin{align}\label{3.14b}
0<x_{\lfloor \frac{n}{2}\rfloor,n}^{(t;\lambda)}<x_{\lfloor \frac{n-1}{2} \rfloor,n-1}^{(t,\lambda+1)}<x_{\lfloor \frac{n}{2}\rfloor -1,n}^{(t;\lambda)}<x_{\lfloor \frac{n-1}{2} \rfloor-1,n-1}^{(t,\lambda+1)}<
\dots<x_{2,n-1}^{(t,\lambda+1)}<x_{2,n}^{(t;\lambda)}<x_{1,n-1}^{(t,\lambda+1)}<x_{1,n}^{(t;\lambda)}.
\end{align}
Combining \eqref{3.14b}, \eqref{mo} and \eqref{3.13b} yields \eqref{inteven}. The proof of \eqref{intodd} follows along the same lines.\end{proof}
Considering that when the weight function is even, the zeros of $S_n(x;t,\lambda)$ are symmetric about the origin with a zero at the origin when $n$ is odd, we have the following corollary.
\begin{corollary}\label{cor2b} With the same symbols as Theorem \ref{int}, we have for $n$ odd that
\[ x_{n,n}^{(t;\lambda)}<x_{n-1,n-1}^{(t;\lambda)}<x_{n-1,n-1}^{(t,\lambda+k)}<x_{n-1,n-1}^{(t,\lambda+1)}<x_{n-1,n}^{(t;\lambda)}<\dots<x_{2,n}^{(t;\lambda)}<x_{1,n-1}^{(t;\lambda)}<x_{1,n-1}^{(t,\lambda+k)}<x_{1,n-1}^{(t,\lambda+1)}<x_{1,n}^{(t;\lambda)} \]
while for $n$ even
\[x_{n,n}^{(t;\lambda)}<x_{n-1,n-1}^{(t;\lambda)}<x_{n-1,n-1}^{(t,\lambda+k)}<x_{n-1,n-1}^{(t,\lambda+1)}<x_{n-1,n}^{(t;\lambda)}<\dots<x_{\lfloor\frac{n-1}{2}\rfloor+2,n-1}^{(t,\lambda+1)}<x_{\lfloor\frac{n}{2}\rfloor+1,n}^{(t;\lambda)}<0\]
and \[0<x_{\lfloor\frac{n}{2}\rfloor,n}^{(t;\lambda)}<x_{\lfloor\frac{n-1}{2}\rfloor,n-1}^{(t;\lambda)}<x_{\lfloor\frac{n-1}{2}\rfloor,n-1}^{(t,\lambda+k)}<
x_{\lfloor\frac{n-1}{2}\rfloor,n-1}^{(t,\lambda+1)}<x_{\lfloor\frac{n}{2}\rfloor-1,n}^{(t;\lambda)}<\dots<x_{1,n-1}^{(t,\lambda+1)}<x_{1,n}^{(t;\lambda)} \]
with \[x_{\lfloor\frac{n-1}{2}\rfloor+1,n-1}^{(t;\lambda)}=x_{\lfloor\frac{n-1}{2}\rfloor+1,n-1}^{(t,\lambda+k)}=x_{\lfloor\frac{n-1}{2}\rfloor+1,n-1}^{(t,\lambda+1)}=0.\]
\end{corollary}
The three-term recurrence relation yields information on bounds of the extreme zeros of polynomials.
\begin{theorem}Let $\{S_n(x;t,\lambda)\}_{n=0}^{\infty}$ be the sequence of monic generalised sextic Freud polynomials orthogonal with respect to the weight \eqref{genFreud6}. For each $n=2,3,\dots,$ the largest zero, $x_{1,n}$, of $S_n(x;t,\lambda)$, satisfies
\[0<x_{1,n} <\max_{1\leq k\leq n-1}\sqrt{c_n\beta_k(t;\lambda)}\]
where $c_n=4\cos^2\left(\frac{\pi}{n+1}\right)+\varepsilon$, $\varepsilon>0$.
\end{theorem}
\begin{proof}
The upper bound for the largest zero $x_{1,n}$ follows by applying \cite[Theorems 2 and 3]{refIsmailLi}, based on the Wall-Wetzel Theorem (see also \cite{refIsmail}), to the three-term recurrence relation \eqref{eq:3rr}.
\end{proof}
The Sturm Convexity Theorem (cf.~\cite{refSturm}), on the monotonicity of the distances between consecutive zeros, applies to the zeros of solutions of second-order differential equations in the normal form
\[\deriv[2]{y}{x}+F(x)y=0. \]
Next we consider the implications of the convexity theorem of Sturm for the zeros of generalised sextic Freud polynomials when $\lambda=-\tfrac12$. We begin by considering the differential equation in normal form satisfied by generalised sextic Freud polynomials for $\lambda=-\tfrac12$ proved by Wang \textit{et al.}\ in \cite{refWZC20a}.
\begin{theorem}Let
\begin{equation}
\label{l0w}
\omega(x)=\exp(-x^6+tx^2),\qquad x\in\mathbb{R}\end{equation} with $t\in\mathbb{R}$, and denote the monic orthogonal polynomials with respect to $\omega(x)$ by $\mathcal{S}_n(x)$. Then, for $t<0$, the polynomials\begin{equation}\label{trans}S_n(x;t,\lambda)=\mathcal{S}_n(x)\sqrt{\frac{\omega(x)}{\widetilde{A}_n(x)}}\end{equation} satisfy
\begin{equation}\label{eq:gende}
\deriv[2]{S_n}{x}+F(x)S_n(x;t,\lambda)=0
\end{equation}
where
\begin{align}\label{Fc}
F(x)&=\beta_n \widetilde{A}_{n-1}(x)\widetilde{A}_n(x)-\frac{\omega''(x)}{2\omega(x)}-\widetilde{B}_n(x)\left\{\widetilde{B}_n(x)-6x^5+2tx\right\}+\frac{6x^5-2tx}{4}-3\left\{\frac{\widetilde{A}_n'(x)}{2\widetilde{A}_n(x)}\right\}^2\\&\qquad\nonumber+\widetilde{B}_n'(x)-\frac{(2\widetilde{B}_n(x)+6x^5-2tx)\widetilde{A}_n'(x)-\widetilde{A}_n''(x)}{2\widetilde{A}_n(x)}\\
\widetilde{A}_n(x)&=\frac{\mathcal{A}_n(x)}{x\beta_n}=6\big[x^4-\tfrac13t +x^2\big(\beta_n+\beta_{n+1}\big)+\beta_{n+2}\beta_{n+1}+\big(\beta_{n+1}+\beta_{n}\big)^2+\beta_{n-1}\beta_{n}\big]\nonumber\\
\widetilde{B}_n(x)&=\frac{\mathcal{B}_n(x)}{x}=6x\beta_n\big(x^2+\beta_{n+1}+\beta_n+\beta_{n-1}\big)+(\lambda+\tfrac12)[1-(-1)^n].\nonumber
\end{align}
\end{theorem}
\begin{proof}
See \cite[Theorem 4]{refWZC20a} and note that $\widetilde{A}_n>0$ when $t<0$.
\end{proof}
\begin{theorem}
Let $\{\mathcal{S}_n(x)\}_{n=0}^{\infty}$ be the monic generalised sextic Freud polynomials orthogonal with respect to the weight
\eqref{l0w}
and let $x_{k}$, $k\in\{1,2,\dots,n\}$, denote the $n$ zeros of $\mathcal{S}_n$ in ascending order. Then, for $t<0$,
\begin{itemize}
\item[(i)] if $F(x)$ given in \eqref{Fc} is strictly increasing on $(a,b)$, then, for the zeros $x_k \in (a,b)$, we have $x_{k+2}-x_{k+1}<x_{k+1}-x_k$, i.e. the zeros in $(a,b)$ are concave;
\item[(ii)] if $F(x)$ given in \eqref{Fc} is strictly decreasing on $(a,b)$, then, for the zeros $x_k\in(a,b)$, we have $x_{k+2}-x_{k+1}>x_{k+1}-x_k$, i.e. the zeros in $(a,b)$ are convex.\end{itemize}
\end{theorem}
\begin{proof}
Since the transformation \eqref{trans} does not change the independent variable and $\omega(x)>0$, the zeros of $\mathcal{S}_n(x)$ are the same as those of $S_n(x;t,\lambda)$. The result now follows by applying the Sturm convexity Theorem (cf.~\cite{refJT,refSturm}) to solutions of \eqref{eq:gende}.
\end{proof}
\section{Discussion}
In this paper, we have studied generalised Airy polynomials, orthogonal polynomials satisfying a three-term recurrence relation whose coefficients depend on two parameters. We have derived a differential-difference equation, a differential equation and a mixed recurrence relation satisfied by the polynomials, and used these to study properties of their zeros and recurrence coefficients. We have also investigated various asymptotic properties of the recurrence coefficients. Furthermore, we have shown that similar results hold for the generalised sextic Freud polynomials and corrected some results in the literature.
\section*{Acknowledgements}
We gratefully acknowledge the support of a Royal Society Newton Advanced Fellowship NAF$\backslash$R2$\backslash$180669.
PAC would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme ``Complex analysis: techniques, applications and computations" when some of the work on this paper was undertaken. This work was supported by EPSRC grant number EP/R014604/1. We also thank the referees for helpful comments and corrections.
\defCommun. Pure Appl. Math.{Commun. Pure Appl. Math.}
\defFunkcial. Ekvac.{Funkcial. Ekvac.}
\defCambridge University Press{Cambridge University Press}
\def\refpp#1#2#3#4{\vspace{-0.2cm}
\bibitem{#1} \textrm{\frenchspacing#2}, \textrm{#3}, #4.}
\def\refjnl#1#2#3#4#5#6#7{\vspace{-0.2cm}
\bibitem{#1}{\frenchspacing\rm#2}, #3,
{\frenchspacing\it#4}, {\bf#5} (#6) #7.}
\def\refjltoap#1#2#3#4#5#6#7{\vspace{-0.2cm}
\bibitem{#1}\textrm{\frenchspacing#2}, \textrm{#3},
\textit{\frenchspacing#4}\ (#6)\ #7.}
\def\refjl#1#2#3#4#5#6#7{\vspace{-0.2cm}
\bibitem{#1}\textrm{\frenchspacing#2}, \textrm{#3},
\textit{\frenchspacing#4}, \textbf{#5}\ (#7)\ #6.}
\def\refbk#1#2#3#4#5{\vspace{-0.2cm}
\bibitem{#1} \textrm{\frenchspacing#2}, \textit{#3}, #4, #5.}
\def\refcf#1#2#3#4#5#6#7{\vspace{-0.2cm}
\bibitem{#1} \textrm{\frenchspacing#2}, \textrm{#3},
in: \textit{#4}, {\frenchspacing#5}, #6, pp.\ #7.}
\defAmerican Mathematical Society{American Mathematical Society}
\defDiff. Eqns.{Diff. Eqns.}
\defJ. Phys. A{J. Phys. A}
\defNagoya Math. J.{Nagoya Math. J.}
\defJ. Comput. Appl. Math.{J. Comput. Appl. Math.}
|
2012.13188
|
\section{Introduction}
Nowadays, computers play an essential part in human life and penetrate most aspects of people's personal and social lives. Mass production and the increased availability of personal computers have a growing impact on daily life. One purpose of Human-Computer Interaction (HCI) is to control and improve communication between humans and computers, for example by creating an interface that transforms hand gestures into meaningful commands. Despite advances in HCI, intermediary devices such as the computer mouse are still in common use. In the fifty years since the first mouse was introduced, much progress has been made to improve it. Although the computer mouse has seen significant improvement, it is based on direct contact and restricts the user to controlling the computer from close range. In light of the COVID-19 pandemic, the end of the lifetime of contact-based gadgets for controlling machines and computers can be anticipated.
\section{Literature review}
In recent years, machine learning researchers have studied human activity recognition ~\cite{gil2020human}. Some of these activities present alternative ways to control computers. For instance, a Kinect sensor ~\cite{ren2013robust}, an EEG mouse ~\cite{alomari2014eeg} or EMG signals ~\cite{benalcazar2017real} have been used to classify human actions and assign mouse commands to them in order to control the computer's pointer. The methods mentioned above, however, remove the mouse hardware only to introduce other hardware that is more expensive and bulkier than a usual mouse. Therefore, in order to replace the mouse, it is preferable to control the computer's pointer with software alone.
Since the hands are the organs humans most commonly use to move objects, people can benefit from their hands to control intelligent systems too. Thus, one effective way to replace the mouse hardware is to use hand gesture recognition to control the computer. Existing work on hand gesture recognition is either contact-based or contact-free and based on machine learning. Older works were contact-based, such as a data glove ~\cite{kim20093}, which detects hand gestures and tracks their movement. Besides their price, such gadgets restrain hand movements because of their weight, wiring and sensors. With the advancement of computer vision and the ability to obtain more information from images, these gloves became simpler: their wiring was removed and a camera was used to track hand movements ~\cite{wang2009real}. Indeed, the camera can learn to follow the color of the glove or the shape of the palm and fingers. Next, the glove was replaced with colored fingertips ~\cite{mistry2009sixthsense}. Eventually, the tips were also removed and hand gesture applications became touch-free, based on machine learning techniques applied to frames captured by a camera.
Machine learning techniques that use imaging systems such as cameras can be an alternative to contact-based approaches. Perhaps their most useful feature is the freedom of hand movement they allow, which should be obligatory for HCI systems. Such techniques combine image processing and computer vision methods: image processing converts the captured camera images to digital form and applies scaling, filtering and noise removal, while computer vision gives computers the ability to distinguish between different gestures the way a human does. For example, in ~\cite{veluchamy2015vision}, human skin is detected by setting a threshold on skin color and the background of the images is removed. In ~\cite{dhule2014computer}, no learning algorithm is used; instead of frames, a video sequence is imported and methods such as skin detection and the approximate median model are applied. ~\cite{huang2019hand} detects both hand and head based on skin color and creates a black-and-white mask from each frame; a VGGNet model is then trained to differentiate between hand and head. In ~\cite{park2008method}, the angle between thumb and index finger is used to discriminate hand gestures. In other papers, such as ~\cite{noreen2015hand}, the images are converted to the HSV color space so that each color can be accessed in a single component. In ~\cite{grif2018human}, hands move in front of a blue background to exploit the contrast between human skin and the background in HSV space.
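As a concrete illustration of the color-threshold approach described above, the following sketch applies a pixel-wise HSV skin threshold in NumPy. The threshold values are illustrative assumptions, not values taken from the cited papers:

```python
import numpy as np

def skin_mask_hsv(hsv_image, h_max=25, s_range=(40, 255), v_range=(60, 255)):
    """Return a boolean mask of 'skin' pixels in an HSV image.

    hsv_image: array of shape (H, W, 3) with OpenCV-style ranges
    (H in 0..179, S and V in 0..255). The bounds are illustrative only.
    """
    h, s, v = hsv_image[..., 0], hsv_image[..., 1], hsv_image[..., 2]
    return ((h <= h_max)
            & (s >= s_range[0]) & (s <= s_range[1])
            & (v >= v_range[0]) & (v <= v_range[1]))

# Example: a 2x2 "image" where only two pixels fall in the skin range.
frame = np.array([[[10, 120, 200], [100, 50, 50]],
                  [[170, 30, 90], [20, 200, 220]]], dtype=np.uint8)
mask = skin_mask_hsv(frame)
```

The mask can then be used to zero out background pixels, which is essentially what the color-based papers above do before further processing.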
Hand gesture recognition usually has two stages: hand localization and gesture classification. SIFT, one of the fastest feature extraction tools, was famous for its speed before the emergence of neural networks. In ~\cite{golash2020economical}, SIFT is used to distinguish between eight gestures to control machines such as fans and washing machines. Although color-based methods are simple and easy to implement, their generalization from person to person, or under challenging conditions, is poor. These methods, for instance, depend on skin color; in ~\cite{globefire}, users must manually import their skin color. Furthermore, pixel-wise differentiation between human skin and backgrounds is more sophisticated and error-prone than it seems.
The availability of big datasets has increased the use of neural networks and made them a substitute for classic machine learning methods, since they are invariant to light conditions, perspective variations and different backgrounds. Object detection algorithms based on neural networks can be applied to hand detection. Algorithms such as You Only Look Once (YOLO) and the Single Shot Multi-Box Detector (SSD) are usable for real-time tasks, running much faster than R-CNN or Faster R-CNN. ~\cite{yi2018long} takes advantage of two SSD detectors: the first detects the head-and-shoulder area, and the second recognizes hand gestures inside the detected area. Moreover, the SSD algorithm can be retrained with new images ~\cite{liu2019hand}, and selective dropout can reduce the computational load ~\cite{ni2018light} to reduce performance lag. If hand detection and gesture recognition are merged in the detection part, their output is directly a valid predicted label ~\cite{tanmaiehand}. Hand gesture recognition algorithms based on deep learning are more accurate than classic methods; even though deep learning approaches do not require feature extraction and reduce hand-crafted design, they are slow because of their complexity. In ~\cite{haratiannejadi2019smart}, a data glove and a hand gesture recognizer, using an SSD hand detector and an SVM respectively, are used to classify the gestures of both hands.
\section{Proposed Algorithm}
In this section, a human-computer interface based on hand gestures is designed for the computer's pointer. To design the HCI, a suitable dataset is collected in order to train a convolutional neural network (CNN). The trained CNN is used for two purposes. Finally, a cursor controller is designed to convert each predicted label into a cursor task.
\subsection{Dataset}
A hand dataset with 6720 image samples (300 x 300) of 15 subjects against 18 different simple backgrounds was collected. The dataset has four gesture classes: fist, palm, pointing to the right and pointing to the left. Subjects were asked to photograph both hands and to use both the palmar and dorsal sides. Figure \ref{fig:dataset} depicts four gesture samples from the dataset. 5120 samples were used for the training set and 1600 for the validation and test sets. Usually, datasets are split randomly to create the training and validation sets; here, the training set and the validation and test sets have different distributions to avoid overfitting. In fact, the training samples are captured directly from webcams, without any intermediary software, while the validation samples are the images accepted by the SSD hand detector. The CNN classifier is therefore trained on one distribution and must cope with another for the validation data and in real time.
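The deliberate distribution shift between training and validation data described above can be sketched as a source-based split rather than a random one. The sample records and the `source` field below are hypothetical, standing in for "raw webcam capture" versus "crop accepted by the SSD detector":

```python
def split_by_source(samples, val_sources):
    """Partition samples so that validation/test data come from a different
    capture pipeline (distribution) than the training data, instead of
    splitting one pool of images randomly."""
    train = [s for s in samples if s["source"] not in val_sources]
    val = [s for s in samples if s["source"] in val_sources]
    return train, val

samples = [
    {"path": "a.jpg", "source": "webcam"},    # raw webcam capture
    {"path": "b.jpg", "source": "ssd_crop"},  # crop accepted by the SSD detector
    {"path": "c.jpg", "source": "webcam"},
]
train, val = split_by_source(samples, {"ssd_crop"})
```

Training on one source and validating on the other mimics the deployment condition, where the classifier only ever sees SSD crops.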
\begin{figure}[ht]
\centerline{\includegraphics[width=0.4\textwidth]{1-dataset.jpg}}
\caption{Examples from the collected hand dataset. Samples are captured from different subjects against simple backgrounds, in various light conditions and at various distances from the webcam.}
\label{fig:dataset}
\end{figure}
\subsection{Hand Detection}
Users move their hands in front of the computer webcam, which captures frames. The frames are preprocessed and fed to an SSD hand detector. If a hand is detected in a frame, the SSD produces two outputs: a cropped frame and the center coordinate of the cropped frame. Since the SSD algorithm draws a bounding box around the hand in each frame, the hand is cropped from the bounding-box area. The cropped frame is then fed to the classification part, while the center of the cropped image is used by the mouse commands to move the computer cursor. If no hand is detected in the frame, the next frame is considered for hand detection. The process of hand detection is shown in Figure \ref{fig:ssd}.
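The two outputs of the hand-detection step can be sketched as a simple post-processing function. The bounding-box format `(x_min, y_min, x_max, y_max)` in pixels is an assumption, since the paper does not specify the SSD implementation's output format:

```python
import numpy as np

def crop_and_center(frame, box):
    """Crop a detected hand from the frame and return the crop together
    with the centre coordinate of the bounding box.

    frame: (H, W, 3) image array; box: (x_min, y_min, x_max, y_max) in pixels.
    """
    x_min, y_min, x_max, y_max = box
    crop = frame[y_min:y_max, x_min:x_max]
    center = ((x_min + x_max) // 2, (y_min + y_max) // 2)
    return crop, center

frame = np.zeros((300, 300, 3), dtype=np.uint8)  # SSD input resolution
crop, center = crop_and_center(frame, (50, 100, 150, 220))
```

The crop goes to the classification part, and the center coordinate is later mapped to a screen position for the cursor.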
\begin{figure}[ht]
\centerline{\includegraphics[width=0.9\textwidth]{2-ssd.jpg}}
\caption{The operation of the SSD hand detector. A captured frame is fed through the SSD hand detector and two outputs are extracted: the cropped frame and the center coordinate of the cropped frame.}
\label{fig:ssd}
\end{figure}
\subsection{Classification}
Since the SSD detects all kinds of hand gestures, two types of cropped frames reach the classification part: frames showing one of the four classes defined in the dataset, and frames showing undefined gestures. The four defined classes should become mouse commands, and no action should occur for other gestures. One of the critical challenges in classification is removing unwanted classes the way a human would. So, to predict a valid label for frames with defined classes and to reject the others, a CNN is trained. The EfficientNet-B0 architecture is used, followed by 8 fully connected layers for classification ~\cite{tan2019efficientnet}. The last layer of EfficientNet-B0 has 1280 neurons, which are reduced to 4 neurons by the 8 fully connected layers ~\cite{ashtari2020indoor}. The collected hand samples, with dimensions of 70 x 70 x 3, train the CNN in 20 epochs, and the CNN reaches 99 percent accuracy on the test set. For evaluation, cropped frames are preprocessed and fed to the frozen CNN. They are then classified into 4 + 1 categories: the 4 classes mentioned before plus 1 for other gestures.
A Radial Basis Function (RBF) style network is designed to remove the unwanted cropped frames. After the CNN is trained, its last 8 dense layers are removed and a similarity network is formed. Indeed, the similarity network acts as an encoder, or feature extractor, reducing the dimension of each sample from 70 x 70 x 3 to 1280. All samples of each class are fed through the network, and the mean vector of the encoded samples of each class is calculated to create 4 reference vectors. Next, encoded samples from the validation and test sets are compared with these references by Euclidean distance, and the maximum distance within each class is defined as that class's threshold. When a cropped frame arrives at the classification part, its encoding is compared with the references and the smallest distance is chosen. If the chosen distance is lower than its threshold, the cropped frame belongs to the dataset; otherwise, it comes from an unwanted class and is ignored, and a new frame is given to the SSD.
Hence, two tasks are carried out in the classification part: first, the classifier predicts a label for the cropped frame; second, the similarity network compares the cropped frame with the reference vectors and the defined thresholds to determine whether the cropped frame represents one of the four dataset classes. The classifier and the similarity network act independently, and the result of the similarity network validates the predicted label. If the classification part predicts a valid label, the computer cursor responds to it.
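A minimal sketch of the reference-vector comparison and rejection described above, in NumPy. The feature dimension is reduced from 1280 to 4 for readability, and the class names and sample values are toy data, not taken from the paper:

```python
import numpy as np

def build_references(encoded_by_class):
    """Mean vector of the encoded samples of each class."""
    return {c: np.mean(v, axis=0) for c, v in encoded_by_class.items()}

def build_thresholds(references, encoded_val_by_class):
    """Per-class threshold: the maximum Euclidean distance between an
    encoded validation sample of the class and its reference vector."""
    return {c: max(np.linalg.norm(x - references[c]) for x in v)
            for c, v in encoded_val_by_class.items()}

def classify_or_reject(features, references, thresholds):
    """Pick the nearest reference; reject (None) if it exceeds its threshold."""
    dists = {c: np.linalg.norm(features - r) for c, r in references.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= thresholds[best] else None

# Toy 2-class example in 4 dimensions.
enc = {"palm": np.array([[1.0, 0, 0, 0], [1.2, 0, 0, 0]]),
       "fist": np.array([[0, 1.0, 0, 0], [0, 0.8, 0, 0]])}
refs = build_references(enc)
thr = build_thresholds(refs, enc)  # validation = training here, for brevity
label = classify_or_reject(np.array([1.1, 0, 0, 0]), refs, thr)   # near "palm"
reject = classify_or_reject(np.array([0, 0, 5.0, 0]), refs, thr)  # unknown gesture
```

Only when the similarity check passes does the independently predicted CNN label reach the cursor controller.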
\begin{figure}[h!]
\centerline{\includegraphics[width=0.7\textwidth]{3-controller.jpg}}
\caption{Designed control unit. The cursor can be moved and clicked or right-clicked by receiving a valid label from the classification part.}
\label{fig:controller}
\end{figure}
\subsection{Mouse commands}
A controller unit allocates a command to each predicted label in order to control the computer's pointer. If the proposed algorithm is off, it is turned on simply by showing the user's palm. After that, the recognized palm moves the cursor according to the center coordinate of the cropped frame. Since the SSD uses input images with 300 x 300 resolution, this coordinate must be converted to a meaningful coordinate on the screen. When the proposed algorithm is on, one can click or right-click at the cursor's position by pointing to the left or to the right, respectively. By showing a fist, users turn the application off, after which no action occurs until a palm turns it on again (see Figure \ref{fig:controller}).
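The controller logic above can be sketched as a small state machine. The label names, the screen resolution and the integer coordinate mapping are illustrative assumptions:

```python
def to_screen(center, screen=(1920, 1080), ssd=300):
    """Map a coordinate in the SSD's 300 x 300 frame to screen resolution."""
    return (center[0] * screen[0] // ssd, center[1] * screen[1] // ssd)

class CursorController:
    """Allocate a cursor action to each valid predicted label."""
    def __init__(self):
        self.active = False

    def handle(self, label, center):
        if label == "palm":        # palm turns the controller on and moves
            self.active = True
            return ("move", to_screen(center))
        if not self.active:        # ignore everything while switched off
            return ("idle", None)
        if label == "fist":        # fist turns the controller off
            self.active = False
            return ("off", None)
        if label == "point_left":
            return ("click", None)
        if label == "point_right":
            return ("right_click", None)
        return ("idle", None)

ctrl = CursorController()
ctrl.handle("point_left", (150, 150))  # ignored: controller still off
on = ctrl.handle("palm", (150, 150))   # turns on and moves the cursor
click = ctrl.handle("point_left", (150, 150))
```

In a real deployment, the returned actions would be forwarded to an OS-level mouse API; that binding is omitted here.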
Figure \ref{fig:algorithm} summarizes the proposed algorithm for controlling the computer cursor. The webcam captures a frame and passes it to the hand detector. If a hand is present, the frame is cropped and its center coordinate is calculated. The cropped frame is then fed to the classification part and, if it represents one of the defined gestures from the dataset, the resulting valid label controls the computer cursor.
\begin{figure}[h!]
\centerline{\includegraphics[width=1\textwidth]{4-algorithm.jpg}}
\caption{The proposed algorithm for controlling the computer cursor.}
\label{fig:algorithm}
\end{figure}
\begin{figure}[h!]
\centerline{\includegraphics[width=1\textwidth]{5-mouse.jpg}}
\caption{Controlling the cursor of the computer by hand gesture recognition. By recognizing the palm, the algorithm turns on, and by pointing to the left and right, one can click and right-click, respectively. Then, the algorithm is turned off by recognizing the user’s fist.}
\label{fig:mouse}
\end{figure}
\section{Experimental Results}
The proposed algorithm was developed on a personal computer running Ubuntu with a GTX 1080 Ti GPU, and implemented in Python using the Keras framework and the OpenCV library. The algorithm controls the pointer (clicking, right-clicking and moving) at 15 frames per second.
\subsection{Evaluation}
In order to evaluate the classification part, two deep neural network architectures, VGG16 and EfficientNet-B0, are examined. The number of parameters, test accuracy and run-time are used to compare the architectures. Although VGG16 reaches a higher accuracy than EfficientNet-B0, it has more than three times as many parameters and therefore requires more run-time. As a result, the classification part uses EfficientNet-B0 as its feature extractor.
\begin{figure}[ht]
\centerline{\includegraphics[width=1\textwidth]{6-backgrounds.jpg}}
\caption{The chosen backgrounds for algorithm evaluation.}
\label{fig:backgrounds}
\end{figure}
As mentioned before, the training set has a different distribution from the validation and test sets. In the learning process, approximately 76\% of the dataset is used for training and the remaining samples for validation and test. For evaluating the proposed algorithm, three new backgrounds (white, simple and complex), two distances from the webcam and two lighting conditions are considered. For each situation, ten frames including both hands are captured. The three chosen backgrounds are illustrated in Figure \ref{fig:backgrounds}. Therefore, for each hand position, 80 frames are checked; the results are presented in Table \ref{table:acc}.
The confusion matrix of the proposed algorithm is shown in Table \ref{table:Confusion_matrix}. The lowest accuracy occurs for the clicking mode because, in the complex background, a brown closet caused the SSD to misdetect the hand gesture.
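As a quick consistency check, the macro-average of the diagonal entries of Table \ref{table:Confusion_matrix} (the per-class accuracies) reproduces the overall accuracy quoted in the Conclusion:

```python
import numpy as np

# Diagonal entries (per-class accuracies) of the confusion matrix in Table 2.
per_class = {"Turn Off": 0.9125, "Turn On": 0.9708,
             "Click": 0.8625, "R-click": 0.9292}

# Macro-average over the four modes: ~0.9188, i.e. the 91.88%
# overall accuracy reported in the Conclusion.
macro_accuracy = float(np.mean(list(per_class.values())))
```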
\begin{table}[ht]
\caption{Performance of the classification part with different architectures}
\centering
\begin{tabular}{c c c c}
\hline\hline
Network & Accuracy & Parameters & Run-time \\ [0.5ex]
\hline
EfficientNet-B0 &99\% &4.98M &671us/step \\
VGG16 &100\% &15.95M &776us/step \\[1ex]
\hline
\end{tabular}
\label{table:acc}
\end{table}
\begin{table}
\centering
\caption{Confusion matrix for each mode of controlling the cursor}
\begin{tabular}{c c c c c}
\woB{} & \woB{Turn Off} & \woB{Turn On} & \woB{Click} & \woB{R-click} \\ \hhline{~*4{|-}|}
Turn Off & \colorme{0.9125} & 0 & \colorme{0.1375} & \colorme{0.0375} \\ \hhline{~*4{|-}|}
Turn On & \colorme{0.0750} & \colorme{0.9708} & 0 & 0 \\ \hhline{~*4{|-}|}
Click & \colorme{0.0083} & \colorme{0.0250} & \colorme{0.8625} & \colorme{0.0333} \\ \hhline{~*4{|-}|}
R-click & \colorme{0.0041} & \colorme{0.0041} & 0 & \colorme{0.9292} \\ \hhline{~*4{|-}|}
\end{tabular}
\label{table:Confusion_matrix}
\end{table}
\section{Conclusion}
In this paper, a human-computer interaction algorithm based on hand gesture recognition is proposed to control the computer pointer without any physical contact. It was also observed that deep learning methods, such as CNNs, have largely replaced classic machine learning and image processing tools for this task.
The proposed algorithm can be useful for other intelligent systems operating with complex backgrounds, different lighting conditions and varying camera perspectives. Since the proposed algorithm is touch-less, it can be used in public places, which is especially relevant in the context of the COVID-19 pandemic. The designed computer cursor controller reaches 91.88\% accuracy over various backgrounds and 97\% in simple environments. In addition, a hand gesture dataset of 6720 colored images in four classes is presented, which can be used for other purposes.
\bibliographystyle{unsrt}
|
1510.03969
|
\section{Introduction}
\label{intro}
Neutron stars (NSs) are among the most compact, dense
and neutron-rich objects known in the Universe. The typical surface
compactness parameter of NSs is of the order of $\eta_{R} \equiv
2GM/Rc^2 \approx 0.4$, which is 5 orders of magnitude larger than
that at the surface of the sun ($\eta_{\odot} \equiv
2GM_{\odot}/R_{\odot}c^2 \approx 4\times 10^{-6}$). If one uses
instead a more natural measure of the gravitational field strength,
the ``curvature'' parameter $\xi \equiv 2GM/c^2r^3$, which is
related to the non-vanishing components of the Riemann tensor in
vacuum ${\mathcal{R}^1}_{010} = - \xi$ as well as to the Kretschmann
invariant $\mathcal{K} = 2\sqrt{3} \xi$ outside the star, the
surface curvature
for a $1.4M_{\odot}$ neutron star is then 14 orders of magnitude
larger than
that at the surface of the Sun, where Solar System tests of General
Relativity in the weak field regime are usually performed. The
strong gravitational field makes NSs good testbeds for the
untested strong-field gravity regime, while their high-density
matter content makes them natural laboratories for determining the
EOS of super-dense nuclear
matter~\cite{Lattimer:2006xb,Lattimer:2012nd,Li:2008gp,Pani:2011xm,Eksi:2014wia}.
However, a NS cannot simultaneously serve as a clean testing ground for both.
There is a possible degeneracy between the models of the
neutron-star matter EOSs and models of gravity applied to describe
their properties. How to break this degeneracy is a longstanding
problem to which many recent studies have been devoted (see e.g.,
~\cite{Will:2005va,Harada:1998ge,Sotani:2004rq,Lasky:2008fs,Wen:2009av,
Cooney:2009rr,Horbatsch:2010hj,Arapoglu:2010rz,Deliduman:2011nw,
Pani:2011xm,Sotani:2012aa,Lin:2013rea,Yagi:2013mbt,Sotani:2014goa,
Eksi:2014wia}). This talk is based on the work originally
published in Ref.~\cite{He:2014yqa}. Here we present our analyses of the extent to which
an EOS-gravity degeneracy exists when models of gravity are
limited to classical GR and Scalar-Tensor (ST) theories, while
variations of the EOS arise from varying either the
slope $L$ of the nuclear symmetry energy at saturation density or
the high-density behavior of the nuclear symmetry energy. We
find that the variation of either the density slope $L$ or the
high-density behavior of the nuclear symmetry energy within their
uncertainty ranges leads to significant changes in the binding energy
of NSs. In particular, these variations are significantly greater than
those that result from ST theories of gravity, leading to the
conclusion that, within this subset of gravity models, measurements
of neutron star properties constrain mainly the EOS. Further
investigations demonstrate that only EOSs with a soft symmetry
energy at high density are consistent with constraints on the
gravitational binding energy of PSR J0737-3039B.
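The orders-of-magnitude comparison above is easy to verify numerically. The following sketch assumes a canonical $1.4M_{\odot}$ neutron star with a 10 km radius (a typical value, not one fixed in the text):

```python
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
M_sun = 1.989e30     # solar mass [kg]
R_sun = 6.96e8       # solar radius [m]

def compactness(M, R):
    """Surface compactness eta = 2GM/(R c^2)."""
    return 2.0 * G * M / (R * c**2)

def curvature(M, R):
    """Surface curvature parameter xi = 2GM/(c^2 R^3)."""
    return 2.0 * G * M / (c**2 * R**3)

M_ns, R_ns = 1.4 * M_sun, 1.0e4   # canonical NS; 10 km radius is an assumption

eta_ratio = compactness(M_ns, R_ns) / compactness(M_sun, R_sun)  # ~1e5
xi_ratio = curvature(M_ns, R_ns) / curvature(M_sun, R_sun)       # ~1e14
```

The compactness ratio comes out near $10^5$ (5 orders of magnitude) while the curvature ratio is near $5\times10^{14}$ (14 orders of magnitude), consistent with the estimates quoted above.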
\section{The Equation of State of Nuclear Matter}
\label{sec:EOS}
In general, the EOS of neutron-rich nucleonic matter
can be expressed in terms of the binding energy per nucleon in
asymmetric nuclear matter of isospin asymmetry
$\alpha=(\rho_n-\rho_p)/\rho$ as
$E(\rho,\alpha)=E_{0}(\rho)+\mathcal{S}(\rho)\alpha^{2}+\mathcal{O}(\alpha^{4})$,
where $E_0(\rho)$ is the binding energy per nucleon in symmetric
nuclear matter ($\alpha=0$), $\mathcal{S}(\rho)$ is the nuclear
symmetry energy, and $\rho=\rho_{n}+\rho_{p}$ is the total baryon
density with $\rho_{n}$ ($\rho_{p}$) being the neutron (proton)
density. The EOS can further be characterized in terms of bulk
nuclear properties by expanding both $E_{0}(\rho)$ and
$\mathcal{S}(\rho)$ in Taylor series around nuclear saturation
density $\rho_0 = 0.16$ fm$^{-3}$, i.e., $E_{0}(\rho)=B_0
+\frac{1}{2} K_{0} \chi^{2}+\mathcal{O}(\chi^{3})$ and
$\mathcal{S}(\rho)=J+L\chi+\frac{1}{2} K_{\rm
sym}\chi^{2}+\mathcal{O}(\chi^{3})$, where $\chi \equiv
(\rho-\rho_{0})/3\rho_{0}$ quantifies the deviations of the nuclear
density from its saturation value. Terrestrial experimental nuclear
data such as the ground state properties of finite nuclei and
energies of giant resonances tightly constrain the binding energy at
saturation $B_0$ and the nuclear incompressibility coefficient
$K_{0}$, hence constrain the EOS of symmetric nuclear matter
$E_{0}(\rho)$. Whereas the symmetry energy at saturation $J$ is more
or less known, its density slope $L$ is largely unconstrained, and
the overall behavior of the symmetry energy at supra-saturation
densities is quite uncertain.
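The quadratic expansion of $\mathcal{S}(\rho)$ is straightforward to evaluate. In the sketch below, $L=47.2$ MeV corresponds to the base models used later in this work, while the values of $J$ and $K_{\rm sym}$ are purely illustrative:

```python
def symmetry_energy(rho, J, L, Ksym, rho0=0.16):
    """S(rho) = J + L*chi + 0.5*Ksym*chi**2 [MeV],
    with chi = (rho - rho0)/(3*rho0) and densities in fm^-3."""
    chi = (rho - rho0) / (3.0 * rho0)
    return J + L * chi + 0.5 * Ksym * chi ** 2

# At saturation the expansion returns J; at 2*rho0, chi = 1/3 and the
# slope term contributes L/3 (here Ksym = 0 to isolate the slope term).
S_sat = symmetry_energy(0.16, J=31.3, L=47.2, Ksym=0.0)
S_2rho0 = symmetry_energy(0.32, J=31.3, L=47.2, Ksym=0.0)
```

This makes explicit why $L$ controls the stiffness of the symmetry energy just above saturation: at $2\rho_0$ the slope term already contributes about 16 MeV for $L=47.2$ MeV.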
\begin{figure}[h]
\sidecaption
\centering
\includegraphics[width=2.2in,angle=0]{Fig1.png}
\caption{(color online). Density dependence of the nuclear symmetry
energy: the solid black line corresponds to the symmetry energy
predictions in SkIU-FSU, the dashed black line corresponds to the
original IU-FSU model, the dotted red and dash-dotted blue lines are
symmetry energy predictions in the IU-FSU-like models with density
slope $L=60.0$ MeV (IU-FSU$^1$) and $L=100.0$ MeV (IU-FSU$^2$),
respectively. Figure is taken from Ref.~\cite{He:2014yqa}.}
\label{Fig:EOS}
\end{figure}
For our base models we use two recently established EOSs for
neutron-rich nucleonic matter within the IU-FSU Relativistic Mean
Field (RMF) and the SkIU-FSU Skyrme-Hartree-Fork (SHF)
models~\cite{Fattoyev:2010mx,Fattoyev:2012ch,Fattoyev:2012uu}. The
slope of the symmetry energy at saturation for these models is $L =
47.2$ MeV. While the density dependence of symmetry energy
$\mathcal{S}(\rho)$ at subsaturation densities is almost identical
for these models, at supra-saturation densities of $\gtrsim
1.5\rho_0$ the two models give significantly different predictions
of $\mathcal{S}(\rho)$. To test the sensitivity of the gravitational
binding energy of neutron stars to the variations of properties of
neutron-rich nuclear matter around saturation density, we have
further introduced two RMF models with density slopes of the
symmetry energy at saturation density of $L=60$ MeV and $L=100$ MeV.
Shown in Fig.~\ref{Fig:EOS} are the density dependence of the
symmetry energy for these four models~\cite{He:2014yqa}.
\section{Comparison of Predictions between GR \& Scalar-Tensor Theory Using Different EOSs}
\label{sec:Binding Energy}
Properties of neutron stars are not only sensitive to the underlying
EOS, but also to the strong-field behavior of gravity. We will
consider neutron stars in both GR and in the simplest natural
extension of the GR known as the {\sl scalar-tensor theory} of
gravity. According to this theory, in addition to the second-rank
metric tensor, $g_{\mu\nu}$, the gravitational force is also
mediated by a scalar field, $\varphi$. The action defining this
theory can be written in the most general form
as~\cite{Damour:1996ke,Yazadjiev:2014cza}
\begin{equation}
S = \frac{c^4}{16 \pi G} \int d^4x \sqrt{-g^{\ast}}\left[R^{\ast} -2
g^{\ast \mu \nu} \partial_{\mu} \varphi \partial_{\nu} \varphi -
V(\varphi)\right] + S_{\rm matter}\left(\psi_{\rm matter};
A^2(\varphi)g^{\ast}_{\mu\nu} \right) \ ,
\end{equation}
where $R^{\ast}$ is the Ricci scalar curvature with respect to the
so-called {\sl Einstein frame} metric $g^{\ast}_{\mu\nu}$ and
$V(\varphi)$ is the scalar field potential. The physical
metric is defined as
$g_{\mu\nu} \equiv A^2(\varphi)g^{\ast}_{\mu\nu}$. The relativistic
equations for stellar structure in hydrostatic equilibrium can be
written as
\begin{eqnarray}
\frac{dM_{\rm G}(r)}{dr} &=& \frac{4 \pi}{c^2} r^2 \mathcal{E}(r)
A^4(\varphi) +
\frac{r^2}{2}\left(1-\frac{2GM(r)}{c^2r}\right)\chi^2(r) +
\frac{r^2}{4} V(\varphi) \ , \\
\frac{dM_{\rm B}(r)}{dr} &=& 4\pi m_{\rm B} r^2 \rho(r) A^3(\varphi) \ , \\
\frac{d \varphi(r)}{dr} &=& \chi(r) \ , \\
\nonumber \frac{d \chi(r)}{dr} &=& \Bigg[1-\frac{2 G M(r)}{c^2
r}\Bigg]^{-1}\Bigg\{\frac{4 \pi G}{c^4} A^4(\varphi)
\bigg[\alpha(\varphi) \bigg(\mathcal{E}(r) - 3 P(r)\bigg)+r\chi(r)\bigg(\mathcal{E}(r) - P(r)\bigg)\bigg] \ \\
&-& \frac{2}{r}\left(1-\frac{ G M(r)}{c^2 r}\right) \chi(r)
+\frac{1}{2}r\chi(r)V(\varphi) +
\frac{1}{4}\frac{dV(\varphi)}{d\varphi}\Bigg\} \ , \\
\nonumber \frac{dP(r)}{dr} &=& -\bigg(\mathcal{E}(r) +
P(r)\bigg)\Bigg[1-\frac{2 G M(r)}{c^2 r}\Bigg]^{-1}\Bigg\{ \frac{4
\pi G}{c^4} r A^4(\varphi) P(r) + \frac{G}{c^2r^2}M(r) \ \\
&+& \left(1-\frac{2 G M(r)}{c^2 r}\right)\left(\frac{1}{2}r\chi^2(r)
+ \alpha(\varphi)\chi(r)\right) -\frac{1}{4}rV(\varphi) \Bigg\} \ ,
\end{eqnarray}
where $\alpha(\varphi) \equiv \partial \ln A(\varphi)/\partial
\varphi$. Following Ref.~\cite{Damour:1996ke} we set
$V(\varphi)=0$, since a non-vanishing scalar potential arises only in
further modified models of gravity,
and we consider a coupling function of the form $A(\varphi) =
\exp\left(\alpha_0\varphi + \frac{1}{2}\beta_0
\varphi^2\right)$~\cite{He:2014yqa}. For a given central pressure
$P(0) = P_{\rm c}$ one can integrate the equations above from the
center of the star to $r \rightarrow \infty$, where the only input
required is the EOS of dense matter in chemical equilibrium. At the
center of the star the Einstein frame boundary conditions are given
as $P(0) = P_{\rm c} \ , \mathcal{E}(0) = \mathcal{E}(\rm c) \
,\varphi(0) = \varphi_{\rm c} \ , \chi(0) = 0$, while at infinity we
demand cosmologically flat solution to agree with the observation
$\lim_{r \rightarrow \infty} \varphi(r) = 0$. The stellar coordinate
radius is determined by the condition of $P(r_{\rm s}) = 0$. The
physical radius of a neutron star is found in the Jordan frame as
$R_{\rm NS} = A^2\left[\varphi(r_{\rm s})\right]r_{\rm s}$. Notice
however that the physical stellar mass as measured by an observer at
infinity matches with the coordinate mass, since at infinity the
coupling function approaches unity. According to the latest
observational constraints, the quadratic parameter of the
scalar-tensor theory should satisfy $\beta_0
\gtrsim -5.0$~\cite{Freire:2012mg,Damour:1996ke}. Similarly, the
absolute value of the linear parameter is also well constrained
by observation. We will therefore choose the observational bounds on these
parameters as $\alpha_0^2 < 2.0 \times 10^{-5}$ and $\beta_0 >
-5.0$~\cite{Freire:2012mg}.
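In the GR limit ($\alpha_0 = \beta_0 = 0$, so $\varphi = \chi = 0$, $A = 1$ and $V = 0$), Eqs.~(2) and (6) reduce to the usual TOV equations. The following toy integrator sketches that limiting case only; it uses simple Euler stepping and an illustrative $\Gamma = 2$ polytropic EOS with an assumed constant $K$, whereas the actual calculation uses the tabulated IU-FSU/SkIU-FSU EOSs and the full scalar-field system.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8   # SI units
M_sun = 1.989e30            # kg

def tov_rhs(r, M, P, eos_eps):
    """Right-hand sides of Eqs. (2) and (6) in the GR limit
    (phi = chi = 0, A = 1, V = 0): the TOV equations."""
    eps = eos_eps(P)                                # energy density [J/m^3]
    dMdr = 4.0 * np.pi * r**2 * eps / c**2
    dPdr = -(eps + P) * (4.0 * np.pi * G * r * P / c**4
                         + G * M / (c**2 * r**2)) / (1.0 - 2.0 * G * M / (c**2 * r))
    return dMdr, dPdr

def integrate_star(P_c, eos_eps, dr=10.0):
    """Euler integration outwards from the centre until P(r) drops to zero."""
    r, M, P = dr, 0.0, P_c
    while P > 0.0:
        dMdr, dPdr = tov_rhs(r, M, P, eos_eps)
        M, P, r = M + dr * dMdr, P + dr * dPdr, r + dr
    return M / M_sun, r / 1.0e3                     # mass [M_sun], radius [km]

# Toy Gamma = 2 polytrope P = K * eps^2 with an assumed constant K [Pa^-1].
K = 1.0e-36
eos = lambda P: np.sqrt(P / K)
mass, radius = integrate_star(1.0e34, eos)          # central pressure 1e34 Pa
```

With these assumed parameters the integrator returns a star of roughly solar-mass scale with a radius of order 10 km, i.e. the expected neutron-star regime.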
\begin{figure}[h]
\sidecaption
\includegraphics[width=2.5in,angle=0]{Fig2.png}
\caption{(color online). The mass-versus-radius relation of neutron
stars calculated using EOSs considered in this work. For the
scalar-tensor theory the upper observational bound on parameters
$\{\alpha_0,\beta_0\} = \{\sqrt{2.0 \times 10^{-5}}, -5.0\}$ have
been used. Figure is taken from Ref.~\cite{He:2014yqa}.}
\label{Fig:MR}
\end{figure}
In Fig.~\ref{Fig:MR} we present our results for the mass versus
radius relation of neutron stars as calculated by using the four
EOSs discussed above both within GR and the scalar-tensor theory
with $\{\alpha_0,\beta_0\} = \{\sqrt{2.0 \times 10^{-5}}, -5.0\}$. We observe a larger
radius in the scalar-tensor theories than in GR only for massive
neutron stars. Even so, the changes in radii are much smaller than
those that arise from variation of the stiffness of the symmetry
energy. In general, differences in predictions using the GR and the
scalar-tensor theories within current observational bound on their
parameters are much smaller than those due to the uncertainties in
the EOS.
\section{Constraining the Nuclear Symmetry Energy from the Gravitational Binding Energy}
The gravitational binding energy $\mathcal{B}\equiv
M_{\rm G}-M_{\rm B}$~\cite{Weinberg:1972} [see Eqs.~(2)-(3)], expressed as a
fraction of the total gravitational mass of a neutron star, is
displayed in Fig.~\ref{Fig:BE}. Notice that in general the GR
predictions give a lower absolute value of the fractional binding
energy for a given NS mass than the scalar-tensor theory. However,
the difference is negligible compared to the uncertainties
coming from variations of the EOS. Furthermore, all low-mass NSs
are indistinguishable for an observer in these two models of
gravity, because the critical value for the so-called ``spontaneous
scalarization'' is reached only when the NS mass exceeds about
$1.4$ solar masses. Thus, measurements of NS masses and radii will
primarily constrain the density dependence of the symmetry
energy, rather than discriminating between the GR and
scalar-tensor theories of gravity.
\begin{figure}[h]
\sidecaption
\includegraphics[width=2.5in,angle=0]{Fig3.png}
\caption{(color online). The fractional gravitational binding energy
versus the gravitational mass of a neutron star for a set of the
EOSs discussed in the text. The constraints on the gravitational
binding energy of PSR J0737-3039B, under the assumption that it is
formed in an electron-capture supernova, are given by the vertical
brown and purple lines. Figure is taken from
Ref.~\cite{He:2014yqa} with slight modifications.} \label{Fig:BE}
\end{figure}
As both the mass-versus-radius relation and the
gravitational binding energy of NSs are more sensitive to the EOS than
to the choice of gravity model, we further examine the effects of the
uncertain symmetry energy on the binding energy in more detail (see again
Fig.~\ref{Fig:BE}). Notice that with the very similar low-density
symmetry energy up to about $1.5\rho_0$, but different high-density
behaviors, predictions for the $\mathcal{B}/M_{\rm G}$ in the
SkIU-FSU and IU-FSU models are quite different. This effect becomes
more evident as the total gravitational mass increases. Taking
the IU-FSU as the reference baseline model, the relative changes
in the fractional gravitational binding energies are $5.32\%$,
$6.52\%$ and $9.89\%$ for $1.25M_{\odot}$, $1.4M_{\odot}$ and
$1.9M_{\odot}$ neutron stars respectively. This is in contrast to
the predictions of the binding energy in models with soft and stiff
symmetry energies \emph{at saturation}---{\sl e.g.}, the IU-FSU with
$L=47.2$ MeV and the IU-FSU$^2$ with $L=100$ MeV. Again using the
former as the reference model, the relative changes in the
fractional gravitational binding energies are $-10.26\%$, $-8.85\%$
and $-7.44\%$ for $1.25M_{\odot}$, $1.4M_{\odot}$ and $1.9M_{\odot}$
neutron stars respectively. Hence the fractional binding energy is
more sensitive to the saturation density slope of the symmetry
energy for low-mass neutron stars~\cite{Newton:2009vz}, while more
sensitive to the supra-saturation behavior of the symmetry energy
for massive stars~\cite{He:2014yqa}.
The gravitational mass of the lighter pulsar PSR J0737-3039B is
determined very accurately to be $M_{\rm G} = 1.2489 \pm
0.0007M_{\odot}$~\cite{Kramer:2006nb}, one of the lowest
reliably measured masses for any neutron star to date. Because of its
low mass, it was suggested that this NS might have formed in an
electron-capture supernova from an
O-Ne-Mg white dwarf progenitor~\cite{Podsiadlowski:2005ig}. Under
this assumption, it is estimated that the baryonic mass of the
precollapse O-Ne-Mg core should lie between $1.366 M_{\odot} <
M_{\rm B} < 1.375M_{\odot}$~\cite{Podsiadlowski:2005ig} and $1.358
M_{\odot} < M_{\rm B} < 1.362M_{\odot}$~\cite{Kitaura:2005bt}. In
Fig.~\ref{Fig:BE} we show that only models with a soft symmetry
energy are consistent with this set of constraints.
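The constraint bands in Fig.~\ref{Fig:BE} follow directly from the numbers quoted above via $\mathcal{B}/M_{\rm G} = (M_{\rm G}-M_{\rm B})/M_{\rm G}$, as this short sketch shows:

```python
def binding_fraction_range(M_G, M_B_lo, M_B_hi):
    """Fractional binding energy B/M_G = (M_G - M_B)/M_G implied by a
    baryonic-mass window [M_B_lo, M_B_hi] (all masses in M_sun)."""
    return (M_G - M_B_hi) / M_G, (M_G - M_B_lo) / M_G

M_G = 1.2489  # PSR J0737-3039B gravitational mass

# Precollapse O-Ne-Mg core windows from the two cited estimates:
lo1, hi1 = binding_fraction_range(M_G, 1.366, 1.375)  # Podsiadlowski et al.
lo2, hi2 = binding_fraction_range(M_G, 1.358, 1.362)  # Kitaura et al.
# Both windows require B/M_G of roughly -10% to -9%.
```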
\section{Summary}
\label{sec:Conclusion} Interpreting the properties of neutron stars
requires resolving the degeneracy between the EOS of
super-dense matter and strong-field gravity. With the goal of
providing information that may help break this degeneracy we have
studied effects of the nuclear symmetry energy within its current
uncertain range on the mass-versus-radius relation and the binding
energy of NSs within both the GR and the scalar-tensor theory of
gravity. We have found that radii of neutron stars are primarily
sensitive to the underlying EOS through the density dependence of
the symmetry energy. Within the simplest natural extension of the GR
known as the scalar-tensor theory of gravity, and by using upper
observational bounds on the parameters of the theory, we found a
negligible change in the binding energy of NSs over the whole mass
range, and significant changes in radii only for NSs whose masses are
well above $1.4M_{\odot}$. Even then, the changes in radii are found
to be much smaller than those that result from variation of the
stiffness of the symmetry energy. We have also shown that the
gravitational binding energy is moderately sensitive to the nuclear
symmetry energy, with lower mass neutron stars $\lesssim
1.4M_{\odot}$ probing primarily the stiffness of the symmetry energy
at nuclear saturation density, and massive stars being more
sensitive to the high density behavior of the symmetry energy. A
combination of observational and theoretical arguments on the
gravitational binding energy of PSR J0737-3039B imply that only EOSs
with soft symmetry energy at high density are consistent with the
extracted values of the binding energy.
\section{Acknowledgements}
This work is supported in part by the National Natural Science
Foundation of China under Grant No. 11275098, 11275067 and
11320101004, the US National Science Foundation under Grant No.
PHY-1068022 and the CUSTIPEN (China-U.S. Theory Institute for
Physics with Exotic Nuclei) under DOE grant number
DE-FG02-13ER42025, and DOE grants DE-SC0013702 (Texas A\&M
University-Commerce), DE-FG02-87ER40365 (Indiana University) and
DE-SC0008808 (NUCLEI SciDAC Collaboration).
|
1510.04274
|
\section{Introduction}
\label{sec:intro}
The character of the galaxy assembly in the constellation of Antlia is
not as clear as, for example, in the cases of the Virgo or Fornax galaxy
clusters. Some authors used the term ``galaxy group'' \citep{fer90}, others
``galaxy cluster'' \citep{hop85,smi08a}, and
\citet{hop85} identified five ``clusters'' in the Antlia region ($\alpha=
10^h - 10^h\,50^m$, $\delta= -42^{\circ}$ to $-30^{\circ}$). The most striking central
cluster is called ``Antlia II''. Its morphological appearance is
that of two groups concentrated around the giant ellipticals NGC\,3258
and NGC\,3268 at a projected distance of 220\,kpc.
Both dominant galaxies
are extended X-ray sources \citep{nak00,ped97}
that exhibit rich globular cluster systems.
The globular cluster system of NGC\,3258 contains about 6000 members.
The system of
NGC\,3268 is somewhat poorer, but is still typical of a giant elliptical
galaxy, with almost 5000 globular clusters \citep{bas08}.
\citet{hop85} and, more recently, \citet{hes15} estimated velocity
dispersions of $\sim 500$\,km\,s$^{-1}$ for the bright population
of Antlia, higher than those in clusters like Fornax \citep{dri01}.
The globular cluster luminosity function indicated that NGC\,3258
could be a few Mpc nearer than NGC\,3268 \citep{dir03a,bas08}, which agrees
with the distances obtained with surface brightness fluctuations by
\citet{bla01}. However, \citet{can05} quoted the same distance
moduli for both galaxies.
Despite being nearby, a thorough radial velocity survey of Antlia's
galaxy population has not been done yet. In this paper, we present
new radial velocities of galaxies located in
Antlia. We supplement our galaxy sample with literature data, ending
up with 105 galaxy velocities. These data will help us to
better understand the structure of Antlia.
In the following, we keep the term ``Antlia cluster'' for
simplicity. To maintain consistency with earlier papers
\citep[e.g.][]{smi12,cas13a,cas14,cal15}, we adopt a distance
of 35 Mpc, which means a scale of 169.7 pc/arcsec.
\section{Observations and reductions}
\label{sec:reduc}
We performed multi-object spectroscopy with VLT-VIMOS of the galaxy
population in six fields located in the inner part of the Antlia cluster.
The observations were carried out under the programmes 60.A-9050(A) and
079.B-0480(B) (PI Tom Richtler), observed during the first semesters of 2007
and 2008.
Figure\,\ref{campos} shows the four quadrants for each one of the six VIMOS
fields, using different colours.
The grating was HR blue, and the slit width was $1''$.
For each science field, the integration time was 1~h, split into
three individual exposures. This configuration implied a wavelength coverage
spanning $3700\,\rm{\AA} - 6600\,\rm{\AA}$ (depending on the slit positions)
and a spectral resolution of $\sim 2.5\,\rm{\AA}$.
The data were reduced with \textsc{esorex} in the usual manner for VIMOS
data. First, a master bias was obtained for each field with the recipe VMBIAS
from five individual bias exposures. The normalised master flat field was
created with the recipe VMSPFLAT from a set of dome flat-field exposures. The
recipe VMSPCALDISP was used to determine the wavelength calibrations and
spectral distortions. Typically, more than 20 lines were
identified for each slit. Afterwards, the bias and flat-field corrections were applied
to each science exposure, together with the wavelength calibration. This
was done with the recipe VMMOSOBSSTARE. Individual exposures were then
combined with the \textsc{iraf} task IMCOMBINE to achieve a higher S/N. The
spectra were extracted with the task APALL, also within \textsc{iraf}.
We measured the heliocentric radial velocities using the \textsc{iraf} task
FXCOR within the NOAO.RV package. We used synthetic templates, which were selected from
the single stellar population (SSP) model spectra at the \textsc{miles}
library (${\rm http://www.iac.es/proyecto/miles}$, \citealt{san06}).
We selected SSP models with the metallicities [M/H]=\,-0.71 and
[M/H]=\,-0.4, a unimodal initial mass function with slope $1.30$, and an age
of 10\,Gyr. The wavelength coverage of these templates is
$3700\,\rm{\AA} - 6500\,\rm{\AA}$, and their spectral resolution is
$3\,\rm{\AA}$ FWHM.
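The cross-correlation behind FXCOR can be illustrated with a toy analogue: for each trial velocity, Doppler-shift the template onto the object wavelength grid and keep the velocity maximizing the correlation. This is only a conceptual sketch under assumed synthetic spectra; FXCOR itself works in Fourier space and fits the correlation peak.

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def xcorr_velocity(wave, flux, twave, tflux, v_grid):
    """Toy analogue of FXCOR: Doppler-shift the template spectrum for each
    trial velocity, resample it onto the object wavelengths, and return the
    velocity that maximizes the mean-subtracted correlation."""
    f = flux - flux.mean()
    best_v, best_cc = v_grid[0], -np.inf
    for v in v_grid:
        shifted = np.interp(wave, twave * (1.0 + v / C_KMS), tflux)
        s = shifted - shifted.mean()
        cc = float(np.dot(f, s))
        if cc > best_cc:
            best_v, best_cc = v, cc
    return best_v
```

For a synthetic absorption-line spectrum redshifted by a known amount, the routine recovers the input velocity to within the grid step.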
\begin{figure}
\includegraphics[width=90mm]{campos.eps}
\caption{Positions of the six VIMOS fields (each one composed of four
quadrants). The black circle is centred on the midpoint between the
projected positions for the two gEs, and its radius is $20'$. North is
up, east to the left.}
\label{campos}
\end{figure}
We also obtained GEMINI-GMOS multi-object spectra from programme GS-2013A-Q-37
(PI J. P. Calder\'on). The grating B600\_G5303 blazed at
$5000\,\rm{\AA}$ was used with a slit width of 1\,arcsec. The wavelength coverage
spans $3300\,\rm{\AA} - 7200\,\rm{\AA}$, depending on the position of the slits. The
data were reduced using the GEMINI.GMOS package within {\sc IRAF}. We refer
to \citet{cas14} for more information about the reduction.
We could determine heliocentric radial velocities ($V_{R,h}$) for 67
galaxies located in the inner $20'$ of the Antlia cluster (i.e.,
the inner 200\,kpc for our adopted distance). In previous
studies \citep{smi08a,smi12,cas13a}, those galaxies with $V_{R,h}$
between 1200\,km\,s$^{-1}$ and 4200\,km\,s$^{-1}$ have been assigned
to Antlia (which already raised doubts owing to the large velocity
interval). In our enhanced sample, we find the lowest velocity to be
$V_{R,h} = 1150$\,km\,s$^{-1}$, and there are no velocities between
4300\,km\,s$^{-1}$ and $\sim7600$\,km\,s$^{-1}$. Galaxies with higher
$V_{R,h}$ than the latter limit were rejected from our sample and are
listed in Table\,\ref{tab.backg}. It can be noticed that several galaxies
from the \citet{fer90} catalogue are indeed in the background. In these
cases, \citet{fer90} classified them as ``likely members'' or
``probable background''.
From the 67 galaxies with $V_{R,h}$ measurements, 45 are thus members of
our sample.
Additional $V_{R,h}$ measurements were collected from the
literature \citep[]{smi08a,smi12}
and the NED\footnote{This research has made use of the NASA/IPAC
Extragalactic Database (NED), which is operated by the Jet Propulsion
Laboratory, California Institute of Technology, under contract with the
National Aeronautics and Space Administration.}.
The final sample of Antlia members consists of 105 galaxies, which are
listed in Table\,\ref{tab.mem}. A fraction of the present galaxies had
been measured earlier. In these cases, we find good agreement between
our measurements and the literature values (Figure\,\ref{vr.comp}).
The mean $V_{R,h}$ difference and dispersion are 20\,km\,s$^{-1}$ and
50\,km\,s$^{-1}$, respectively. In comparison, the uncertainties of our
measurements are typically in the range $10-40$\,km\,s$^{-1}$.
\begin{figure}
\includegraphics[width=90mm]{ambos.eps}
\caption{Comparison between heliocentric radial velocity measurements from
this paper and the literature for Antlia members.}
\label{vr.comp}
\end{figure}
\citet{fer90} provided the photometric catalogue of galaxies in
Antlia with the largest spatial coverage, while \citet{cal15} have carried
out a deeper survey of the early-type galaxies in a region that
contains our VIMOS fields. To supplement the \citet{fer90}
catalogue, we derived $B$ magnitudes for the galaxies measured
by \citet{cal15} in the Washington $(C,T_1)$ photometric system
\citep{can76}. As a result, we applied equation\,4 from \citet{smi08a} to
transform $(C-T_1)_0$ into $(B-R)_0$ colours. Then we obtained $B$
magnitudes, considering that $R$ and $T_1$ filters only differ in a
small offset \citep{dir03a}. Figure\,\ref{compl} shows the completeness
for those galaxies located within $30'$ of the cluster
centre (see Section\,\ref{morph}), which roughly matches the region
observed with our VIMOS fields and previous spectroscopic studies
\citep{smi08a,smi12}. The bin width is 1\,mag. When we exclude
galaxies with \citet{fer90} membership status `3' (which are the least
probable members), the $80\%$ completeness is reached
at $B_T=17$\,mag and the $60\%$ at $B_T=19$\,mag (small red circles).
\begin{figure}
\includegraphics[width=90mm]{compl.eps}
\caption{Completeness analysis for the spectroscopic sample considering
the entire photometric catalogue (large light-blue circles), and
excluding galaxies with \citet{fer90} membership status `3' (small red
circles). The bin width is 1\,mag.}
\label{compl}
\end{figure}
\section{Results}
\label{sec:resul}
\subsection{$V_{R,h}$ velocity distribution}
\label{sec:dvel}
The histogram of the $V_{R,h}$ distribution for the galaxies in our Antlia
sample is shown in Figure\,\ref{dvel} with a bin width of 150\,km\,s$^{-1}$.
The green solid curve represents the smooth velocity distribution, obtained
with a Gaussian kernel.
The distribution resembles a Gaussian, which would be expected if
the spatial velocity for the Antlia members is described by a Maxwellian
distribution.
\begin{figure}
\includegraphics[width=90mm]{dvel.eps}
\caption{Histogram of the $V_{R,h}$ distribution for the galaxies in our Antlia
sample. The bin width is 150\,km\,s$^{-1}$. Overplotted are the
smooth velocity distribution, obtained with a Gaussian kernel (green
solid curve), and the normal profile fitted by least squares (brown
dashed curve).}
\label{dvel}
\end{figure}
A Shapiro-Wilk normality test returns a p value of 0.9
\citep[][hereafter S-W test]{sha65,roy95}, meaning that we cannot
reject the hypothesis that our sample is drawn from a normal distribution.
Considering this, we fitted a normal profile to the $V_{R,h}$ distribution
by least-squares, assuming Poisson uncertainties for the bin values.
The best fit corresponds to a Gaussian distribution with a mean velocity
of $2708\pm42$\,km\,s$^{-1}$ and a dispersion of $617\pm46$\,km\,s$^{-1}$
(dashed brown curve in Fig.\,\ref{dvel}), which agree with the
values from optical data by \citet{hes15}.
This dispersion is typical of massive clusters like Virgo \citep{con01}.
\citet{nak00} obtained a gas temperature of 2\,keV for the surroundings
of NGC\,3268, which does not exclude a dispersion of 617\,km\,s$^{-1}$, considering
the $\sigma$--T relation of \citet{xue00} and its scatter.
However, whether the high velocity dispersion obtained for our Antlia
sample represents a single virialized system is an interesting
question, which we try to answer here.
Assuming that the individual $V_{R,h}$ uncertainties are the sigma values
of normal distributions, we simulated the $V_{R,h}$ for each of the 105 galaxies
with a Monte Carlo method.
The procedure was repeated 100 times, in all cases obtaining a
non-rejection of the normality hypothesis. This demonstrates that the
intrinsic uncertainties of the measurements do not play a central
role in the result.
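This Monte Carlo consistency check can be sketched as follows (illustrative only: the velocity list and the per-galaxy uncertainties below are synthetic stand-ins for the measured catalogue):

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 105 measured velocities and their uncertainties
v = rng.normal(2708.0, 617.0, 105)     # km/s, fitted mean and dispersion
dv = rng.uniform(20.0, 100.0, 105)     # assumed per-galaxy errors, km/s

# S-W test on the sample itself
p_obs = shapiro(v).pvalue

# Perturb each velocity by its own uncertainty and repeat 100 times
rejections = 0
for _ in range(100):
    v_sim = rng.normal(v, dv)          # one Monte Carlo realisation
    if shapiro(v_sim).pvalue < 0.05:   # normality rejected at the 5% level
        rejections += 1
```

With the underlying sample drawn from a normal distribution, the perturbed realisations rarely reject normality, mirroring the behaviour described above.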
\begin{figure*}[ht!]
\includegraphics[width=60mm]{pos_miembros.eps}
\includegraphics[width=60mm]{pos_miembros2.eps}
\includegraphics[width=60mm]{pos_miembros3.eps}
\caption{{\bf Left panel:} projected density distribution for
galaxies, split into three percentile ranges. {\bf Middle panel:} projected
positions for bright galaxies. The colour palette ranges from red to blue,
spanning the $V_{R,h}$ range defined in Sect.\,\ref{sec:reduc}.
{\bf Right panel:} similar plot for dwarf galaxies in the sample.
The black asterisks indicate the positions of NGC\,3258 (south-west)
and NGC\,3268 (north-east). In all panels, the origin of the black
circle is the {\it \emph{bona fide}} centre of the cluster, and its radius
is $30'$.}
\label{espac.typ}
\end{figure*}
Several studies in galaxy clusters found different velocity
dispersions for the dwarf and bright galaxies populations
\citep[e.g.][]{dri01,con01,edw02}. To test whether this
is also true for the Antlia cluster, we subdivided our sample in giants
and dwarfs using the morphological classification from
\citet{smi08a,smi12} and, when this is not available, from \citet{fer90}.
Normal distributions were fitted by least squares to the dwarf and
bright galaxies, separately. The resulting mean values for the bright and
dwarf populations were $2768\pm104$\,km\,s$^{-1}$ and
$2698\pm108$\,km\,s$^{-1}$, respectively. The corresponding dispersions
are $698\pm122$\,km\,s$^{-1}$ and $682\pm120$\,km\,s$^{-1}$.
From a K-S test, we cannot reject the hypothesis that both samples
are drawn from the same distribution with $90\,\%$ confidence.
\subsection{Morphological types and spatial distribution}
\label{morph}
Assuming that the groups of galaxies dominated by the gEs are the
main substructures of the cluster and that both haloes have
similar masses \citep{ped97,nak00}, we take the midpoint
between them as the approximate cluster centre
($\alpha= 10^h\,29^m\,27^s$, $\delta= -35^o\,27'\,58''$).
The 105 galaxies in our sample occupy a region of $\sim 7\,{\rm degree}^2$,
from which we obtained the mean surface density, $\sim15\,{\rm degree}^{-2}$,
and the mean distance between galaxies, $\sim9$\,arcmin. For each
galaxy, we then calculated the number of neighbours closer than this value and
used it to split the galaxies into four percentile ranges. The left-hand panel of
Figure\,\ref{espac.typ} shows the regions occupied by the 25th, 50th,
and 75th percentiles of galaxies with the largest numbers of neighbours. The
white asterisks correspond to the positions of NGC\,3258 (south-west)
and NGC\,3268 (north-east).
The origin of the black circle is the {\it \emph{bona fide}} centre of the
cluster, and its radius is $30'$ (i.e. $\sim 300$\,kpc at Antlia
distance). The 25th percentile clearly traces the
overdensities of galaxies around both gEs, which are also identified
in the 50th percentile. The lowest percentile shows
a more extended spatial distribution.
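The neighbour-count split described above can be sketched numerically; the positions below are mock uniform draws over a $\sim 7$ degree$^2$ field, so only the bookkeeping is meant to be illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 7.0**0.5, (105, 2))   # mock positions, ~7 deg^2 field
r_mean = 9.0 / 60.0                          # mean intergalaxy distance: 9' in degrees

# Pairwise distances and neighbour counts within r_mean (galaxy itself excluded)
d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(axis=-1))
n_neigh = (d < r_mean).sum(axis=1) - 1

# Quartile split on the neighbour counts: the "25th percentile" of the text
# corresponds to the densest quarter of the sample
thresh = np.percentile(n_neigh, 75)
densest_quarter = n_neigh >= thresh
```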
In the middle panel of Figure\,\ref{espac.typ}, the projected
spatial distribution for bright galaxies (i.e., all those galaxies
not classified as dwarfs) is plotted. Different symbols indicate the Hubble type
(i.e., spirals, lenticulars, or ellipticals). The colour palette ranges
from red to blue, spanning the $V_{R,h}$ range defined in
Sect.\,\ref{sec:reduc}. The origin and radius of the black circle are
identical to those in the left-hand panel. The two groups around each of the gEs
can be clearly identified. There are also several galaxies located in
the extrapolation of the straight line joining the two gEs, mainly to
the north-east.
These galaxies, as well as those located in the central part of the
cluster, mostly present intermediate $V_{R,h}$. There are a few galaxies
with $V_{R,h}<2000$\,km\,s$^{-1}$, whose projected positions agree with
the general pattern of galaxies with intermediate velocities.
However, the galaxies with the highest $V_{R,h}$ seem to present a different
projected spatial distribution. The majority of them were classified as
spirals, except for the relatively isolated elliptical FS90-152 (see
Table\,\ref{tab.mem}).
On the other hand, the few other ellipticals have a projected distance
to the centre of the cluster (d$_{\rm p}$) lower than $25'$ and present
$1775 < V_{R,h}\,[$km\,s$^{-1}] < 3000$.
The right-hand panel of Figure\,\ref{espac.typ} shows the projected spatial
distribution for dwarf galaxies in the sample and the positions of
NGC\,3258 and NGC\,3268. The origin and
radius of the black circle are identical to the previous panels. Their
concentration within the region restricted by d$_{\rm p}= 30'$ is
due to a bias in the spectroscopic survey. Therefore, we cannot arrive
at any conclusion regarding their distribution when d$_{\rm p}> 30'$.
The dwarfs near the centre present a wide range of $V_{R,h}$. They seem
to be more concentrated towards the two gEs, but do not seem to be aligned
with the projected direction joining them.
\begin{figure}
\includegraphics[width=90mm]{dist_vel.eps}
\caption{Heliocentric radial velocities ($V_{R,h}$) as a function of projected
distances to the cluster centre (d$_{\rm p}$). Different symbols represent
dwarfs (red circles), ellipticals (blue squares), lenticulars (blue-violet
triangles), and spirals (green diamonds). The dashed line indicates
d$_{\rm p}= 30'$, and the dotted one represents the $V_{R,h}$ for both gEs,
$\sim2800$\,km\,s$^{-1}$. The grey regions indicate the caustic curves obtained
with the \citet{pra94} infall model for two sets of parameters.}
\label{dist_vel}
\end{figure}
Figure\,\ref{dist_vel} shows the $V_{R,h}$ as a function of d$_{\rm p}$
for the galaxies in the sample, discriminated by morphological types.
The grey regions represent caustic curves
obtained with the infall model of \citet{pra94}, assuming $\Omega_0=0.3$
and two sets of parameters.
For the inner region, we adopted typical parameters for the Fornax
cluster, $r_{vir}=0.7$\,Mpc and $\sigma_{v}=374$\,km\,s$^{-1}$
\citep{dri01}. Considering the high mass derived for the Antlia cluster
by \citet{hes15}, we plotted the second region assuming the Virgo cluster
virial radius, $r_{vir}=1.8$\,Mpc \citep{kim14} and
$\sigma_{v}=617$\,km\,s$^{-1}$, which were previously derived
in Section\,\ref{sec:dvel}. While the first set of parameters implies that
a large number of galaxies lie outside of the caustic curves, for the
second one all galaxies present $V_{R,h}$-values lower than the escape velocity
at their corresponding projected distances from the assumed centre of the cluster.
Dwarf galaxies seem to be spread all over the $V_{R,h}$ range for d$_{\rm p}< 20'$.
Just a few dwarf galaxies outside this limit were measured, but all of them
present intermediate values of $V_{R,h}$. The picture is quite different
when we look at the bright galaxy population. As expected, spiral galaxies
are mainly at larger distances than $20'$ from the {\it \emph{bona fide}} cluster
centre. Moreover, their velocity distribution also distinguishes them from
the rest of the galaxies in the sample. Of the thirteen spirals with
d$_{\rm p}> 30'$, seven present $V_{R,h}$ around 3200\,km\,s$^{-1}$
and a relatively low dispersion. In fact, the mean $V_{R,h}$ and dispersion
for these spirals are $3190\pm40$\,km\,s$^{-1}$ and 110\,km\,s$^{-1}$.
If we consider all the spirals with d$_{\rm p}> 30'$, their mean $V_{R,h}$
is $2980\pm120$\,km\,s$^{-1}$. For comparison, the eleven lenticulars in the
same radial regime have a mean velocity of $2550\pm125\,$km\,s$^{-1}$. A
Student's t-test between the two samples rejects the hypothesis that both
belong to the same population with $90\,\%$ confidence.
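This comparison can be reproduced from the quoted summary statistics alone (a sketch; reading the quoted $\pm$ values as standard errors of the mean is our assumption):

```python
import numpy as np
from scipy.stats import ttest_ind_from_stats

# Spirals and lenticulars with d_p > 30' (values from the text)
n_sp, mean_sp, sem_sp = 13, 2980.0, 120.0    # km/s
n_s0, mean_s0, sem_s0 = 11, 2550.0, 125.0    # km/s

# Convert standard errors of the mean back to sample dispersions
std_sp = sem_sp * np.sqrt(n_sp)
std_s0 = sem_s0 * np.sqrt(n_s0)

t, p = ttest_ind_from_stats(mean_sp, std_sp, n_sp,
                            mean_s0, std_s0, n_s0)
# p < 0.1: the hypothesis of a common population is rejected
# at the 90% confidence level
```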
To study the central part of the cluster, which is expected to
be dominated by the gEs haloes, we restricted our sample to
the 50th percentile of galaxies with the largest numbers of neighbours. The
upper panel of Figure\,\ref{dvel2} shows the $V_{R,h}$ distribution
for these galaxies, adopting a bin width of 180\,km\,s$^{-1}$. It appears
to comprise three groups of galaxies with $V_{R,h}$ around $\sim2000$\,km\,s$^{-1}$,
$\sim2800$\,km\,s$^{-1}$, and $\sim3700$\,km\,s$^{-1}$. The last
group includes the three bright lenticulars NGC\,3267 (FS90-168), NGC\,3269
(FS90-184), and NGC\,3271 (FS90-224), which are close to NGC\,3268 in projected distance.
Considering the $V_{R,h}$ for both gEs, the galaxies in the group with
intermediate velocities have the higher probability of belonging to the Antlia
cluster. We applied a S-W test to this sample and obtained a p value of 0.32.
Then, we ran \textsc{GMM} \citep{mur10} for a trimodal case. \textsc{GMM}
calculated three peaks with means of $2020\pm160$\,km\,s$^{-1}$, $2750\pm100$\,km\,s$^{-1}$,
and $3650\pm50$\,km\,s$^{-1}$. For the unimodal hypothesis, we obtained
$\chi^2=13.9$ with eight degrees of freedom ($p\chi^2=0.1$) and a distribution kurtosis
$k=-0.9$. These results point to a multi-modal distribution. The parameter $DD$,
which measures the relevance of the peak detections, was $2.8\pm0.7$
($DD>2$ is required for a meaningful detection; \citealt{ash94,mur10}).
In the middle panel we present the $V_{R,h}$ distribution for galaxies up
to the 75th percentile with the same bin width as in the previous plot. The
three different groups also seem to be present, but smoothed. In this case,
the p value obtained from the S-W test is higher, $\sim 0.80$. For the
trimodal case, the means from \textsc{GMM} were $2060\pm200$\,km\,s$^{-1}$,
$2780\pm100$\,km\,s$^{-1}$, and $3600\pm130$\,km\,s$^{-1}$. In this case,
$\chi^2=7$ ($p\chi^2=0.5$), $k=-0.5,$ and $DD=2.5\pm0.8$. The parameters
are less conclusive, but point to a multi-modal distribution in $V_{R,h}$.
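\textsc{GMM} \citep{mur10} is a dedicated code; as a generic illustration of such a trimodal fit, a Gaussian mixture can be fitted with scikit-learn to synthetic velocities drawn from the three quoted peaks (the per-peak counts are assumed):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic velocities drawn from the three quoted peaks (counts are assumed)
v = np.concatenate([rng.normal(2020.0, 160.0, 12),
                    rng.normal(2750.0, 200.0, 30),
                    rng.normal(3650.0, 100.0, 10)]).reshape(-1, 1)

uni = GaussianMixture(n_components=1, random_state=0).fit(v)
tri = GaussianMixture(n_components=3, n_init=10, random_state=0).fit(v)
peaks = np.sort(tri.means_.ravel())          # recovered velocity peaks

# A lower BIC for the trimodal model supports the multi-modal interpretation
trimodal_preferred = tri.bic(v) < uni.bic(v)
```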
\begin{figure}
\includegraphics[width=90mm]{dvel2.eps}
\includegraphics[width=90mm]{dvel_tipos.eps}
\caption{{\bf Upper panel:} $V_{R,h}$ distribution for galaxies
belonging to the 50th percentile with the largest numbers of neighbours.
{\bf Middle panel:} $V_{R,h}$ distribution for galaxies
belonging to the 75th percentile with the largest numbers of neighbours.
{\bf Lower panel:} $V_{R,h}$ distribution for Antlia galaxies,
separated by morphology.}
\label{dvel2}
\end{figure}
The lower panel of Fig.\,\ref{dvel2} shows the $V_{R,h}$ distribution
for Antlia galaxies, separated between dwarf and bright galaxies,
and the latter group between early-types (ellipticals and lenticulars)
and late-types (spirals). Spirals fill a wide range of $V_{R,h}$, but
seem to avoid velocities similar to those of the gEs ($\sim 2800$\,km\,s$^{-1}$).
\subsection{Mass estimations}
\citet{ped97} used X-ray observations (ASCA) to measure the mass
enclosed within a radius of 240\,kpc from NGC\,3258 and obtained
$(0.9-2.4) \times 10^{13}\,{\rm M_{\odot}}$. NGC\,3268 was studied in
X-rays (ASCA) by \citet{nak00}, who determined
$\sim 2\times 10^{13}\,{\rm M_{\odot}}$ internal to $\sim 260$\,kpc.
Here we compare our galaxy velocities with the results from X-ray
observations. We excluded spirals because of the differences that we found
between their $V_{R,h}$ distribution and those of early-type bright
galaxies in Section\,\ref{morph}.
Two subsets of our sample were constructed by selecting, from the galaxies contained
in the 50th percentile, those with the shorter projected distance to
NGC\,3258 or NGC\,3268. This results in samples of 17 (up to 10\,arcmin)
and 24 members (up to 18\,arcmin), respectively.
We applied the ``tracer mass estimator'' (M$_{\rm Tr}$) of \citet{eva03},
\begin{equation}
M = \frac{C}{G N} \sum_i V^2_{LOS,i} R_i
,\end{equation}
\noindent where $R_i$ and $V_{LOS,i}$ are the projected distances from the
corresponding gE and the velocities relative to it, respectively, $G$ is the
gravitational constant, and $N$ the number of tracers. In the case of isotropy,
the constant $C$ is calculated through
\begin{equation}
C = \frac{ 4\,(\alpha + \gamma)}{\pi} \frac{4 - \alpha - \gamma}{3- \gamma}
\frac{1-(r_{in}/r_{out})^{3-\gamma}}{1-(r_{in}/r_{out})^{4-\alpha-\gamma}}
.\end{equation}
We assume that the three-dimensional profile of the tracer population is
represented well by a power law between an inner radius $r_{in}$ and outer
radius $r_{out}$ with exponents $\gamma$ and $\alpha$, respectively (e.g.
see \citealt{mam05a,mam05b}). Under
the assumption of constant circular velocity, $\alpha=0$. In the case of the
tracer population, it is difficult to measure the power law exponent precisely,
but it should not differ too much from $\gamma=3$ \citep{dek05,agn14,cou14}.
Taking this into account, we obtain the mass that corresponds to $\gamma$ ranging from 2.75 to
3.25. In the case of a shallower profile, the resulting masses are somewhat lower,
but not by an order of magnitude.
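A direct numerical transcription of the estimator might look as follows (a sketch: the tracer distances and velocities are mock values, and the ratio $r_{in}/r_{out}$ and the $\gamma \to 3$ limiting form of $C$ are illustrative assumptions):

```python
import numpy as np

G_CGS = 6.674e-8                  # cm^3 g^-1 s^-2
KPC, KMS, MSUN = 3.086e21, 1.0e5, 1.989e33

def C_iso(gamma, alpha, x):
    """Isotropic normalisation constant; x = r_in / r_out.
    The gamma -> 3 limit of (1 - x**(3-gamma))/(3-gamma) is -ln x."""
    if abs(gamma - 3.0) < 1e-9:
        shape = -np.log(x)
    else:
        shape = (1.0 - x**(3.0 - gamma)) / (3.0 - gamma)
    return (4.0 * (alpha + gamma) / np.pi) * (4.0 - alpha - gamma) * \
        shape / (1.0 - x**(4.0 - alpha - gamma))

def tracer_mass(R_kpc, v_kms, gamma=3.0, alpha=0.0, x=0.1):
    """Evans et al. (2003) tracer mass estimator, in solar masses."""
    R = np.asarray(R_kpc) * KPC
    v = np.asarray(v_kms) * KMS
    return C_iso(gamma, alpha, x) / (G_CGS * len(R)) * (v**2 * R).sum() / MSUN

# Illustrative tracer set: 17 galaxies with mock distances and relative velocities
rng = np.random.default_rng(0)
R = rng.uniform(20.0, 100.0, 17)           # kpc
v = rng.normal(0.0, 600.0, 17)             # km/s
M_lo, M_hi = tracer_mass(R, v, 2.75), tracer_mass(R, v, 3.25)
```

The mass scales linearly with $C$, so the $\gamma = 2.75$ and $3.25$ evaluations bracket the range quoted below.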
For the sample surrounding NGC\,3258 we obtained
M$_{\rm Tr}= (8 - 11)\times 10^{13}\,{\rm M}_{\odot}$.
This has to be compared with the X-ray mass of NGC\,3258. \citet{ped97}
obtained $kT_e\approx 1.7$\,keV, assuming an isothermal gas.
The parameters of their beta model are $r_c=8.2$\,arcmin, corresponding
to 83.5 kpc, and $\beta=0.6$.
With the assumption of spherical symmetry, we apply the expression of \citet{gre01},
\begin{equation}
M(r) = \frac{3 \beta k T_e}{G \mu m_p} \frac{r^3}{r_c^2+r^2}
\label{eq:xray}
.\end{equation}
Adopting $\mu=0.6$, the derived mass within the sphere with radius
$10'$ is $\sim 7 \times 10^{12}\,{\rm M_{\odot}}$.
In the case of the sample surrounding NGC\,3268, the result from the
\citet{eva03} estimator is
M$_{\rm Tr}= (5 - 8)\times 10^{13}\,{\rm M}_{\odot}$.
\medskip
We again applied Eq. \ref{eq:xray} with the parameters derived by
\citet{nak00}: $kT_e= 2$\,keV, $r_c= 5$\,arcmin, and $\beta=0.38$.
Then the mass enclosed within $18'$ around NGC\,3268 is
$\sim 1.4 \times 10^{13}\,{\rm M_{\odot}}$.
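Both X-ray masses can be cross-checked by evaluating Eq.\,(\ref{eq:xray}) directly (a sketch; the arcmin-to-kpc conversion assumes the $\approx 10.2$\,kpc\,arcmin$^{-1}$ scale implied by $30' \approx 300$\,kpc):

```python
G_CGS = 6.674e-8        # cm^3 g^-1 s^-2
M_P = 1.673e-24         # proton mass, g
KEV = 1.602e-9          # erg
KPC = 3.086e21          # cm
MSUN = 1.989e33         # g
KPC_PER_ARCMIN = 10.2   # assumed Antlia scale (30' ~ 300 kpc)

def beta_model_mass(r_arcmin, kT_keV, beta, rc_arcmin, mu=0.6):
    """Hydrostatic mass of an isothermal beta-model atmosphere, in Msun."""
    r = r_arcmin * KPC_PER_ARCMIN * KPC
    rc = rc_arcmin * KPC_PER_ARCMIN * KPC
    return (3.0 * beta * kT_keV * KEV / (G_CGS * mu * M_P)
            * r**3 / (rc**2 + r**2)) / MSUN

# NGC 3258 (Pedersen et al.): kT = 1.7 keV, beta = 0.6, r_c = 8.2'
M_3258 = beta_model_mass(10.0, 1.7, 0.6, 8.2)    # ~7e12 Msun within 10'
# NGC 3268 (Nakazawa et al.): kT = 2 keV, beta = 0.38, r_c = 5'
M_3268 = beta_model_mass(18.0, 2.0, 0.38, 5.0)   # ~1.4e13 Msun within 18'
```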
In both cases, the mass derived from X-ray observations is significantly
lower than the mass estimated from the $V_{R,h}$ measurements.
It is unlikely that the mass enclosed within $\sim 100$\,kpc around
both gEs reaches values of a few times $10^{13}\,{\rm M_{\odot}}$. We therefore conclude that the galaxies with the most deviant velocities in the two
samples are probably not gravitationally bound to either of the two groups.
From the $V_{R,h}$ histograms in Fig.\,\ref{dvel2}, a large number of
the galaxies is symmetrically grouped around
$2750-2800$\,km\,s$^{-1}$, spanning approximately the
velocity range $2200 < V_{R,h}\,{\rm [km\,s^{-1}]} < 3400$. This group
corresponds to $\sim 50\%$ of the galaxies that belong to the 50th percentile,
and $\sim 60\%$ of the galaxies for the 75th percentile.
This velocity constraint reduces the size of the samples around NGC\,3258 and
NGC\,3268 to 8 and 13 members, respectively.
Applying the mass estimator to these limited samples, we obtain
M$_{\rm Tr}= (1.4 - 2)\times 10^{13}\,{\rm M}_{\odot}$ up to $9'$ around
NGC\,3258 and M$_{\rm Tr}= (1.7 - 2.4)\times 10^{13}\,{\rm M}_{\odot}$
up to $15'$ around NGC\,3268.
These estimates are in reasonable agreement with the masses derived
from X-ray observations. The mass that we derived from the analysis of a
UCD sample around NGC\,3268 is $2.7\times10^{12}\,{\rm M_\odot}$ within 47\,kpc.
Assuming a constant circular velocity, the extrapolated mass within
101\,kpc (corresponding to $10'$) is $6\times10^{12}\,{\rm M_\odot}$, and
is therefore also in reasonable agreement with our mass estimates
from both companion galaxies and X-rays. To avoid the need to explain
peculiar radial velocities of the order of 1500\,km\,s$^{-1}$, we conclude
that the large velocity dispersion of our complete sample is caused by the
recessional velocities of galaxies in the near foreground or background,
which are mixed into the sample.
\subsection{Tests for cluster substructure}
In this section, we try a different approach.
Nearest-neighbour tests are commonly used
for detecting subgroups in the environment of galaxy clusters
\citep[e.g.][]{bos10,bos06,bur04,owe09,hou12}. Our aim is to search
for substructure in $V_{R,h}$ beyond the obvious existence of the two
groups dominated by NGC\,3258 and NGC\,3268.
The $\Delta$ test \citep{dre88} probes deviations
of the local mean velocities and dispersions from the global
cluster values. To this end, the local mean velocity and dispersion
are calculated for each galaxy, restricting the sample to the galaxy
itself and its ten nearest neighbours. Their deviations from the
global cluster values are then computed, and their sum defines the
$\Delta$ parameter.
\citet{col96} proposed the $\kappa$ test, similar in intent to the
$\Delta$ test. It allows for the possibility that substructure is most
evident on a scale different from the one imposed by the ten-nearest-neighbours
condition, leaving the number of neighbours $n$ as a free
parameter. Arguing that distributions cannot, in general, be
characterised by their first two moments, they proposed a
statistic ruled by the probability of the K-S two-sample distribution
\citep{kol33,smi48}.
For both tests, the significance of the statistics is estimated by
Monte Carlo simulations, in which the velocities of the cluster galaxies
are shuffled randomly. The significance level $p$ was obtained after
repeating this procedure 1\,000 times.
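A minimal implementation of the $\Delta$ statistic and its shuffling-based significance might read as follows (the position and velocity arrays are synthetic placeholders):

```python
import numpy as np

def ds_delta(xy, v, n_nn=10):
    """Dressler & Shectman (1988) Delta statistic: summed local deviations
    of mean velocity and dispersion from the global values."""
    vbar, sig = v.mean(), v.std(ddof=1)
    delta = np.empty(len(v))
    for i in range(len(v)):
        d2 = ((xy - xy[i]) ** 2).sum(axis=1)
        near = np.argsort(d2)[:n_nn + 1]          # galaxy plus n_nn neighbours
        vloc, sloc = v[near].mean(), v[near].std(ddof=1)
        delta[i] = np.sqrt((n_nn + 1) / sig**2
                           * ((vloc - vbar)**2 + (sloc - sig)**2))
    return delta.sum()

def ds_significance(xy, v, n_shuffle=1000, n_nn=10, seed=0):
    """p-value from shuffling the velocities over the fixed positions."""
    rng = np.random.default_rng(seed)
    obs = ds_delta(xy, v, n_nn)
    hits = sum(ds_delta(xy, rng.permutation(v), n_nn) >= obs
               for _ in range(n_shuffle))
    return obs, hits / n_shuffle

# Synthetic stand-in: 57 galaxies with Gaussian velocities
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 40.0, (57, 2))              # arcmin
v = rng.normal(2800.0, 275.0, 57)                 # km/s
Delta, p = ds_significance(xy, v, n_shuffle=200)
```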
\citet{pin96} found that the existence of radial gradients in the
velocity dispersion of galaxies could produce artificial substructure
when applying 3D statistics.
We therefore ran the tests on galaxies whose projected distances to the
point equidistant from both gEs were less than $20'$, because the velocity
dispersion seems to remain constant up to that limit (see
Fig.\,\ref{dist_vel}). This restriction reduced our sample to 57 members.
The area matches the field of view of our VIMOS survey, which
guarantees a homogeneous sample. For the $\kappa$ test, we
selected $n=10$ and $15$. The results were $\Delta= 66.9$ and
$p_{\Delta}=0.12$, $\kappa_{10}=14.4$ and $p_{10}=0.16$, and $\kappa_{15}=15.6$
and $p_{15}=0.06$.
To investigate the significance of these numbers, we performed 1\,000 Monte
Carlo simulations. We randomly generated the $V_{R,h}$ for the galaxies
in the sample, assuming a normal distribution with mean and dispersion of
2800\,km\,s$^{-1}$ and 275\,km\,s$^{-1}$, respectively.
Then, we applied the tests to these samples. For the $\Delta$ test,
fewer than $15\%$ of the cases produced a $\Delta$-parameter higher
than the value obtained from the observations, and this percentage is
reduced to $6\%$ for the $\kappa_{15}$ test.
Therefore, the tests point to underlying substructure, most probably due
to the existence of non-member galaxies in our sample.
\section{Discussion}
\subsection{Extreme radial velocities}
Using radial velocities as a discriminant for cluster membership is often difficult
if the depth along the line of sight is considerable. An extreme example is the
pair of Virgo galaxies IC\,3492, with a radial velocity of $-575$\,km\,s$^{-1}$, and IC\,3486, with 1903\,km\,s$^{-1}$,
separated by only 1.4\,arcmin. The deviation from a strict Hubble law may be a
mix of peculiar velocities resulting from large-scale structure and local gravitational fields.
Is it possible that the extreme radial velocities in Fig.\,\ref{dist_vel} are infall velocities?
To answer this, it is useful to consider the highest possible velocities. We can
represent the total mass distribution around NGC\,3268 quite well by an NFW mass distribution with
a scale length of 22\,kpc and a characteristic density of 0.05\,${\rm M_\odot\,pc^{-3}}$. A mass probe on an
exactly radial orbit, initially at rest and falling in from a distance of 1.5\,Mpc, crosses the
centre after 6.8\,Gyr with a velocity of 1550\,km\,s$^{-1}$. However, after an additional 0.06\,Gyr, its
velocity has already declined to 1000\,km\,s$^{-1}$. It is obvious that such a configuration is
highly artificial and not suitable for explaining the broad velocity interval.
The more plausible explanation is therefore a mix of recessional and Doppler velocities.
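This free-fall experiment can be repeated by integrating a radial orbit in the stated NFW potential (a sketch; the structural parameters are those assumed in the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

G, MSUN, PC = 6.674e-8, 1.989e33, 3.086e18    # cgs units
KPC, GYR = 3.086e21, 3.156e16

RHO_S = 0.05 * MSUN / PC**3                   # characteristic density
R_S = 22.0 * KPC                              # NFW scale length

def m_enc(r):
    """NFW mass enclosed within radius r."""
    x = r / R_S
    return 4.0 * np.pi * RHO_S * R_S**3 * (np.log1p(x) - x / (1.0 + x))

def rhs(t, y):
    r = max(y[0], 1e-6 * KPC)   # guard against trial steps past the centre
    return [y[1], -G * m_enc(r) / r**2]

# Stop just before the centre to avoid the r -> 0 singularity
hit = lambda t, y: y[0] - 0.1 * KPC
hit.terminal = True

sol = solve_ivp(rhs, [0.0, 10.0 * GYR], [1500.0 * KPC, 0.0],
                events=hit, rtol=1e-8, atol=1.0)
t_infall = sol.t_events[0][0] / GYR           # ~6.8 Gyr
v_centre = -sol.y_events[0][0][1] / 1.0e5     # ~1550 km/s
```

The centre-crossing speed also follows from energy conservation in the NFW potential; the orbit integration additionally yields the infall time and the rapid deceleration past the centre.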
\subsection{Internal structure of the two dominant groups}
What we have called the Antlia cluster seems to be mainly formed by two subgroups, each one
dominated by a gE. Although both galaxies present similar $V_{R,h}$, it is uncertain
whether they are located at the same distance \citep[e.g.][]{bla01,can05,bas08}.
A recent HI study \citep{hes15} indicates that the subgroup dominated by NGC\,3258
could be falling onto NGC\,3268, which they estimate to be the actual cluster centre.
It is clear that both gEs dominate their own subgroups, and their peculiar velocities
relative to the group systemic velocities should not differ significantly. For this reason, if
the difference in distance between the two galaxies were 2--4\,Mpc, the Hubble flow would produce a
$V_{R,h}$ difference of $\sim 140-280$\,km\,s$^{-1}$. As a result, the
measured $V_{R,h}$ might agree with the scenario suggested by \citet{hes15},
in which both subgroups are in a merging process.
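The expected offset follows directly from the Hubble flow (assuming $H_0 \approx 70$\,km\,s$^{-1}$\,Mpc$^{-1}$):

```python
H0 = 70.0                                  # km/s/Mpc, assumed Hubble constant
dv = [H0 * d for d in (2.0, 4.0)]          # expected V_R,h offsets for 2-4 Mpc
# dv == [140.0, 280.0] km/s, the range quoted in the text
```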
It is interesting to have a closer look at the group around NGC\,3268. Strikingly, all
three S0's around the central galaxy (FS90-224, FS90-168, and FS90-184) have positive
velocity offsets in the range of $900-1000$\,km\,s$^{-1}$. If these velocities were due
to the potential of NGC\,3268 (and its associated dark matter), very radial orbits would be
needed to surpass the circular velocity, which is about 300\,km\,s$^{-1}$ and probably
constant \citep{cas14}. The high velocity offsets are, moreover, associated with fairly large
projected spatial offsets (about 65\,kpc in the case of FS90-224), whereas the
highest velocities would be expected at the bottom of the potential well. Furthermore, the
galaxies do not show any morphological indication of being tidally disturbed. (We comment
separately on the special case NGC\,3269.) Moreover, there is no indication that the
globular cluster system of NGC\,3268 is tidally disturbed \citep{dir03b}, nor does the
X-ray structure show any abnormality \citep{nak00}. We therefore prefer the interpretation
that the three S0s have to be placed in the background of NGC\,3268. As Fig.\,\ref{dvel2}
suggests, more dwarf galaxies can be associated with this group, but some cases are
ambiguous. The dwarf FS90-195, for instance, hosts about ten point sources, which may be
globular clusters.
Its radial velocity is 3495\,km\,s$^{-1}$, but it is located so close to NGC\,3268 (projected
distance 15\,kpc) that it may be bound, demonstrating how globular clusters can be donated
to the system of NGC\,3268. In that sense it could resemble SH2 in NGC\,1316 \citep{ric12c}.
We note that NGC\,3269 (FS90-184), together with NGC\,3267 (FS90-168) and NGC\,3271
(FS90-224), has been identified as the ``Lyon Group of Galaxies (LGG)'' 202 of
\citet{gar93,gar95} and \citet{bar01}, who already suspected its location behind the Antlia
cluster.
The galaxies of the low-velocity group are mostly dwarf-like and might be genuine
foreground objects. All look quite discy, which may also indicate a less dense environment.
The situation is somewhat different for the group around NGC\,3258. There are also dwarfs
(ANTLJ102914-353923.6 and FS90-137) with positive velocity offsets of more than
1000\,km\,s$^{-1}$ relative to NGC\,3258, but the brighter galaxies do not show such extreme radial
velocities. There is also a group around FS90-226, less striking than the two dominant
ones, which is likewise identified as a subgroup by \citet{hes15}.
Therefore, the likely explanation for Fig.\,\ref{dist_vel} is that we are looking along a
filament of galaxies and that at least a fraction of the galaxies in the three velocity
groups apparent in this figure are separated by their recession velocities.
\subsection{NGC 3268/3258 as fossil groups?}
If the crowding of galaxies around NGC\,3268 is mainly a projection effect, this galaxy
fulfils the criteria for being a fossil group. Applying the definition of \citet{jon03},
it has the required X-ray luminosity of more than $10^{42}$\,erg\,s$^{-1}$, and the next
galaxy in a luminosity ranking, FS90-177, is more than 2\,mag fainter in
the $R$ band.
The brighter galaxies around NGC\,3258 (FS90-105 and FS90-125) may be companions.
FS90-125 is perhaps only a magnitude fainter than NGC\,3258, so only the X-ray criterion applies,
and NGC\,3258 cannot be called a fossil group. However, its globular cluster system is even
richer than that of NGC\,3268, so the idea that a galaxy group collapsed at very
early times is plausible.
\subsection{Emission line galaxies}
Only three galaxies from our VIMOS sample present emission lines. One of
them, FS90-131, was classified as a spiral. The other galaxies,
FS90-220 and FS90-222, were classified as lenticulars. Evidence of
strong star formation would not be expected if the environment
of the Antlia gEs were dense: FS90-131 is located at $5'$ from NGC\,3258,
FS90-220 is $\sim 6'$ from NGC\,3268, and FS90-222 is
$\sim 3'$ from NGC\,3273. Moreover, the $V_{R,h}$ in the three
cases is lower than 1200\,km\,s$^{-1}$, which excludes them from dynamically
belonging to these groups. Their velocities are better explained as recession
velocities.
\section{Summary and conclusions}
We have presented new radial velocities, measured with VLT-VIMOS and Gemini GMOS-S,
for galaxies in the region normally addressed as the Antlia cluster. The fields are
located in the surroundings of NGC\,3258 and NGC\,3268, the dominant galaxies of
the cluster. Together with the literature data, our list of Antlia galaxies with
measured radial velocities now comprises 105 galaxies. Many of these objects
are projected onto one of the two subgroups around NGC\,3258 or NGC\,3268. Because
the gravitational potentials of these galaxies are constrained by X-ray studies and
stellar dynamical tracers, we could compare the observed radial velocities with those
expected. It turned out that the total range of velocities seems to be too high to
be generated by the gravitational potential of NGC\,3258 or NGC\,3268. There are three
groups of galaxies (not always clearly separated) characterised by radial velocities
of about 1800\,km\,s$^{-1}$, 2700\,km\,s$^{-1}$, and 3700\,km\,s$^{-1}$. We interpreted
these velocities as recession velocities, which places the bright S0s around NGC\,3268,
in particular, into the background. Therefore, NGC\,3268 might qualify as a fossil group,
resembling, for example, NGC\,4636 in the Virgo cluster in its properties. The intermediate
velocity group (i.e. $V_{R,h}$ around 2700\,km\,s$^{-1}$) is the most populated one. The
distances to NGC\,3258 and NGC\,3268 are not well constrained in the literature, and
we cannot discard the possibility that the groups dominated by them are in the early stages of a merging
process.
What we originally called the Antlia cluster is therefore characterised by several
groups that, at least in some cases, are not gravitationally bound. We may be
looking along a filament of the cosmic web.
\begin{acknowledgements}
We thank Ylva Schuberth for discussions about the radial velocities of Virgo
galaxies, Mike Fellhauer for permission to use his orbit program, Francisco
Azpillicueta for discussions of statistical issues, and Lilia Bassino for discussions
about the Antlia cluster. We thank the referee for suggestions that improved this article.\\
This work was based on observations made with ESO telescopes at the La Silla Paranal Observatory under
programmes ID 60.A-905 and ID 079.B-0480, and on observations obtained at
the Gemini Observatory, which is operated by the Association of Universities
for Research in Astronomy, Inc., under a cooperative agreement with the NSF on
behalf of the Gemini partnership: the National Science Foundation (United
States), the National Research Council (Canada), CONICYT (Chile), the
Australian Research Council (Australia), Minist\'{e}rio da Ci\^{e}ncia,
Tecnologia e Inova\c{c}\~{a}o (Brazil) and Ministerio de Ciencia,
Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), and on observations
acquired through the Gemini Science Archive. This research has made use of the NASA/IPAC
Extragalactic Database (NED), which
is operated by the Jet Propulsion Laboratory, California Institute of Technology,
under contract with the National Aeronautics and Space Administration.\\
This work was funded with grants from Consejo Nacional de Investigaciones
Cient\'{\i}ficas y T\'ecnicas de la Rep\'ublica Argentina, Agencia Nacional de
Promoci\'on Cient\'{\i}fica y Tecnol\'ogica (BID AR PICT-2013-0317), and Universidad Nacional de La Plata
(Argentina). TR is grateful for financial support from FONDECYT project Nr.\,1100620,
and from the BASAL Centro de Astrof\'isica y Tecnolog\'ias Afines (CATA) PFB-06/2007.
\end{acknowledgements}
\begin{table*}[ht]
\begin{center}
\caption{Heliocentric radial velocities for background galaxies measured in
this paper.}
\label{tab.backg}
\begin{tabular}{lccc}
\hline
\\
\multicolumn{1}{l}{ID}&\multicolumn{1}{c}{RA(J2000)}&\multicolumn{1}{c}{DEC(J2000)}&\multicolumn{1}{c}{$V_{\rm R,h}$}\\
\multicolumn{1}{l}{}&\multicolumn{1}{c}{hh mm ss}&\multicolumn{1}{c}{dd mm ss}&\multicolumn{1}{c}{km\,s$^{-1}$}\\
\hline
\\
FS90-75&10 28 12.0&-35 32 20.4&12333$\pm$07\\
FS90-205&10 30 18.5&-35 24 43.2&45907$\pm$30\\
ANTL102823-352754&10 28 23.8&-35 27 54.0&40289$\pm$25\\
ANTL10294-352320&10 29 04.6&-35 23 20.4&51147$\pm$18\\
ANTL10295-352146&10 29 05.5&-35 21 46.8&52928$\pm$23\\
ANTL102817-353552&10 28 17.0&-35 35 52.8&44240$\pm$85\\
ANTL102937-353552&10 29 37.2&-35 35 52.8&7682$\pm$55\\
ANTL102936-353336&10 29 36.7&-35 33 36.0&24420$\pm$50\\
ANTL102951-351210&10 29 51.6&-35 12 10.8&32797$\pm$24\\
ANTL10303-3571&10 30 03.8&-35 7 01.2&28192$\pm$21\\
ANTL103042-35914&10 30 42.2&-35 9 14.4&15875$\pm$14\\
ANTL103049-352031&10 30 49.4&-35 20 31.2&32830$\pm$23\\
ANTL103045-35161&10 30 45.8&-35 16 01.2&32920$\pm$21\\
ANTL102945-35374&10 29 45.4&-35 37 04.8&8900$\pm$19\\
ANTL102936-353336&10 29 36.7&-35 33 36.0&24337$\pm$33\\
ANTL102926-351051&10 29 26.9&-35 10 51.6&20041$\pm$10\\
ANTL103041-351250&10 30 41.0&-35 12 50.4&9833$\pm$65\\
ANTL102940-35258&10 29 40.6&-35 25 08.4&53700$\pm$42\\
ANTL102936-35266&10 29 36.0&-35 26 06.0&24243$\pm$10\\
FS90-83&10 28 23.0&-35 30 57.6&19670$\pm$18\\
FS90-88&10 28 28.1&-35 31 04.8&19624$\pm$74\\
ANTL102838-352027&10 28 38.2&-35 20 27.6&46732$\pm$54\\
\hline
\end{tabular}
\end{center}
\end{table*}
\bibliographystyle{aa}
\section{Introduction}
\label{introduction}
The rough contact mechanics of solids exhibiting graded rheology at confinement is
of major interest for both technological (e.g. seals, rubber friction, tribology of protective coatings)
and biological (tissue engineering, bio-lubrication in the bone--cartilage contact) applications, to cite a few.
On the theoretical side, the graded functionalization of surfaces has long been believed to provide
a short-distance cutoff, within the range of available contact roughness wavelengths (usually extending down to atomic distances,
given the fractal nature of real surfaces), i.e. a physical threshold below which smaller asperities do not
contribute to the effective properties of the contact. As an example, in a generic dry rubber contact,
surface contamination\cite{Greenwood300,SCHWEITZ1986289,Persson20013840} as well as a {\it thin skin}\cite{Persson20013840,schipper}
on the rubber surface, e.g. generated as a consequence of steady-state wear,
can strongly alter the effective surface properties of the confinement. Typically, this contamination layer
will prevent the smallest roughness asperities from contributing to the energy dissipation, thus
reducing the friction force with respect to the ideal value.
Nevertheless, the literature provides no strong theoretical support for this aspect,
mainly due to the lack of (both analytical and numerical) mean field modelling of rough contact mechanics
in the presence of graded rheology at confinement, apart from a few investigations\cite{Persson2012,Paggi2011696}.
In particular, in Ref. \cite{Paggi2011696}, a GW-like (many-asperities) formulation of the rough contact mechanics
under some simplified graded-response assumptions is presented, whereas in Ref. \cite{Persson2012}
Persson's multiscale contact theory\cite{Persson20013840} has been extended
and the contact area calculated in the case of an elastic coating
bonded onto an elastic half space. In particular, in \cite{Persson2012}
the $M_{zz}(\omega,\mathbf{q})$ function, which provides in the Fourier domain the (out-of-plane) surface displacement response
as a function of the contact pressure field, is derived for the case of a coating bonded
onto a half space, under the assumption of frequency-independent Poisson coefficient.
In this work we will make use of the field decomposition suggested in \cite{Persson20013840}
to determine the effective $M_{zz}(\omega,\mathbf{q})$ (in the Fourier domain) function for the more general case of a stepwise or
continuously-graded block. The Poisson coefficient of the generic layer is assumed frequency-dependent, which makes the theory of more general applicability. This effective surface response will then be implemented in a Fourier-based
residuals molecular dynamics (RMD) formulation of the contact
mechanics\cite{Persson2014,scaraggi.in.prep,scaraggi.in.prep.true}; however, the same function might be easily embedded in
Persson's mean field analytical contact model as well.
The RMD model will then be adopted for a focused investigation of the role of small-scale roughness wavelengths in rubber friction and contact area.
We numerically show that the rough contact exhibits effective interface properties which converge to asymptotes upon increase of the small-scale roughness content,
when a realistic rheology of the confinement, which includes a graded rheology, is taken into account. For the rubber contact case, a graded rheology
has recently been shown experimentally in Ref. \cite{schipper}, where the authors clearly demonstrate the existence of a modified surface layer with strongly different
properties from the bulk, which indeed inspired our theoretical investigation.
The manuscript is organized as follows. In Sec. \ref{BEM} we summarize the
BEM (boundary element method) numerical scheme adopted for the investigation of a steady-sliding rough
interaction characterized by arbitrary rheological properties, whereas in
Sec. \ref{navier.section} we more specifically focus on the calculation of
the surface displacement Green's function in the Fourier domain,
$M_{zz}\left( \mathbf{q},\omega \right) $, for a generic block with stepwise
graded (isotropic) viscoelastic rheology. In Sec. \ref{results} we apply the
numerical model to the investigation of the role of the (wear) modified
surface layer (MSL) on the rubber friction and contact area for the
simplest case of a rubber bulk covered by an elastic coating, in steady
sliding contact onto a randomly rough rigid surface. Finally, in Sec. \ref{discussion} we more generally discuss the role of the graded bulk
rheology in rough contact mechanics, whereas in Sec. \ref{conclusions} the
conclusions follow. In Appendix \ref{appendix.1} we solve the Navier's
equation for a homogeneously-viscoelastic infinitely-wide slab (of finite
thickness) in the quasi-static deformation dynamics. Analytical relations
are derived for the surface response of coated bulks in the most general
case of a non-constant (in the frequency domain) Poisson's ratio. We show
that, even for the simplest case of a coating on a bulk, the adoption of a constant
Poisson ratio (i.e. independent of frequency) leads to qualitatively different results
with respect to the more general case of a frequency-dependent Poisson ratio, in
a range of roughness wavelengths. In Appendix \ref{appendix.2}
the surface response for the general case of continuously-graded
viscoelastic rheology is formulated in terms of a set of non-linear
differential equations, which is solved in the two representative cases of
linear and sinusoidal variation of the confined elastic properties of a
bulk. The results are then compared with the predictions of the
stepwise-graded theory (Sec. \ref{navier.section}) as applied to a discretized version of the confinement
(at different numbers of divisions in sub-layers)
showing, for the sinusoidal variation, that the number of layers needed for
the stepwise-graded surface response to converge to the continuously-graded
predictions can be (relatively) very large.
\section{Summary of the numerical scheme for a steady-sliding rough
interaction}
\label{BEM}
We consider the case of a rigid, \textit{periodically-rough} surface (of
$L_{0}$ periodic length, in both $x$- and $y$-direction, with small
wave vector cut-off $q_{0}=2\pi /L_{0}$) in steady sliding
adhesionless contact with a graded body characterized by linear rheology,
under isothermal and frictionless conditions. We assume the small
deformation regime to apply, as well as a small mean square slope roughness
$h\left( \mathbf{x}\right) $, with $m_{2}=\left\langle \nabla h(\mathbf{x})^{2}\right\rangle \ll 1$ ($\left\langle h\right\rangle =0$).
\begin{figure}[tbh]
\centering
\subfigure[]{
\includegraphics[
width=0.407\textwidth,
angle=0
]{geometry.eps} \label{geometry.eps}
} \qquad
\subfigure[]{
\includegraphics[
width=0.4\textwidth,
angle=0
]{geometry1.eps} \label{geometry1.eps}
}
\caption{a) Cross section of a generic contact interface. b) Magnified view
of the encircled area (in the left figure), with indication of the gap
equation \protect\autoref{gap} terms. Schematics.}
\label{cross.section}
\end{figure}
In Fig. \ref{cross.section} we show a schematic representation of the
contact interface, in a reference frame moving with the rough sliding surface. In
such a reference, the local interfacial separation $u\left( \mathbf{x}\right) $ can be written as
\begin{equation}
u\left( \mathbf{x}\right) =\bar{u}+w\left( \mathbf{x}\right) -h\left(
\mathbf{x}\right) , \label{gap}
\end{equation}
where $\bar{u}$ is the average interfacial separation, $w\left( \mathbf{x}\right) $ the surface out-of-average-plane displacement and $h\left( \mathbf{x}\right) $ the surface roughness, with $\left\langle w\left( \mathbf{x}\right) \right\rangle =\left\langle h\left( \mathbf{x}\right) \right\rangle
=0$. We can define the following
\begin{equation*}
w\left( \mathbf{q}\right) =\left( 2\pi \right) ^{-2}\int d^{2}x\ w\left( \mathbf{x}\right) e^{-i\mathbf{q}\cdot \mathbf{x}}
\end{equation*}
and
\begin{equation*}
\sigma \left( \mathbf{q}\right) =\left( 2\pi \right) ^{-2}\int d^{2}x\ \sigma \left( \mathbf{x}\right) e^{-i\mathbf{q}\cdot \mathbf{x}},
\end{equation*}
where $\sigma \left( \mathbf{x}\right) =\Delta \sigma \left( \mathbf{x}\right) +\sigma _{0}$ is the distribution of interfacial pressures [$\sigma
_{0}=\left\langle \sigma \left( \mathbf{x}\right) \right\rangle $ is the
average contact pressure] in the moving reference. Following the discussion
reported in Sec. \ref{navier.section}, $w\left( \mathbf{x}\right) $ can be
related to $\sigma \left( \mathbf{x}\right) $ through a simple equation in
the Fourier space
\begin{equation}
w\left( \mathbf{q}\right) =M_{zz}\left( \mathbf{q},\omega =\mathbf{q}\cdot
\mathbf{v}\right) \sigma \left( \mathbf{q}\right) , \label{A1.fourier}
\end{equation}
where $M_{zz}\left( \mathbf{q},\omega \right) $ is the complex surface
response of the block in the frequency domain, and $\mathbf{v}$ the sliding
velocity [in this work $\mathbf{v}=\left( v,0\right) $ without any loss of
generality]. $M_{zz}\left( \mathbf{q},\omega \right) $ depends on the
rheological and geometrical properties of the block, and its formulation
will be specifically presented in Sec. \ref{navier.section}. We observe that
in the simplest case of bulk viscoelastic contact with frequency-independent
Poisson ratio, $M_{zz}\left( \mathbf{q},\omega \right) =2/\left[ \left\vert
\mathbf{q}\right\vert E_{\mathrm{r}}\left( \omega \right) \right] $, where
$E_{\mathrm{r}}\left( \omega \right) =E\left( \omega \right) /\left( 1-\nu
^{2}\right) $ is the frequency-dependent (complex) reduced Young's modulus and
$\nu $ is the Poisson ratio. In this work the assumption of a constant Poisson
ratio will not be adopted (unless differently stated), thus the theory
developed hereinafter is of more general applicability, e.g. it can be
easily adapted to existing mean field contact mechanics formulations as
well. \autoref{A1.fourier} is obtained by considering that the stress in the
fixed reference $\sigma \left( \mathbf{q},\omega \right) =\sigma \left(
\mathbf{q}\right) \delta \left( \omega -\mathbf{q}\cdot \mathbf{v}_{0}\right) $ is related to the displacement in the fixed reference $w\left(
\mathbf{q},\omega \right) =w\left( \mathbf{q}\right) \delta \left( \omega -
\mathbf{q}\cdot \mathbf{v}_{0}\right) $ through the constitutive
relationship $w\left( \mathbf{q},\omega \right) =M_{\mathrm{zz}}\left(\mathbf{q},\omega \right) \sigma \left( \mathbf{q},\omega \right) $ (see Sec. \ref{navier.section}) resulting, after integration over $\omega $, in \autoref{A1.fourier}.
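As an illustration of how \autoref{A1.fourier} is used in practice, the half-space kernel $M_{zz}=2/(\left\vert \mathbf{q}\right\vert E_{\mathrm{r}})$ quoted above can be applied to a pressure field with a plain FFT. The sketch below is not the solver of this work: the grid size, modulus and pressure amplitude are illustrative placeholders.

```python
import numpy as np

def displacement_from_pressure(sigma, L, E_r):
    """Out-of-plane displacement w(x) from pressure sigma(x) via
    w(q) = M_zz(q) sigma(q), with M_zz = 2/(|q| E_r) (half space,
    frequency-independent Poisson ratio folded into E_r)."""
    n = sigma.shape[0]
    q = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    qx, qy = np.meshgrid(q, q, indexing="ij")
    qabs = np.hypot(qx, qy)
    qabs[0, 0] = np.inf            # drop the q = 0 (mean) component
    Mzz = 2.0 / (qabs * E_r)
    return np.real(np.fft.ifft2(Mzz * np.fft.fft2(sigma)))

# single-cosine pressure: w must be a cosine of amplitude 2*s0/(q1*E_r)
n, L, E_r, s0 = 64, 1.0, 1e7, 1e4
x = np.arange(n) * L / n
q1 = 2.0 * np.pi / L
sigma = s0 * np.cos(q1 * x)[:, None] * np.ones((1, n))
w = displacement_from_pressure(sigma, L, E_r)
```

For a single-cosine pressure of wavevector $q_{1}$ the routine returns, to machine precision, a cosine displacement of amplitude $2\sigma _{0}/(q_{1}E_{\mathrm{r}})$, which provides a convenient correctness check.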
Finally, the relation between the separation $u(\mathbf{x})$ and the interaction
pressure $\sigma (\mathbf{x})$ is calculated within the Derjaguin
approximation \cite{Derjaguin1934}, and it can be written in terms of a
generic interaction law \cite{Persson2014,scaraggi.in.prep}:
\begin{equation}
\sigma (u)=f(u). \label{sigma_from_f}
\end{equation}
In this work we have adopted the (integrated) repulsive term of the L-J
potential in \autoref{sigma_from_f} to simulate the adhesionless interaction.
However, the theory can be easily extended to other interface laws \cite{Persson2014}. \autoref{gap}, \autoref{A1.fourier} and \autoref{sigma_from_f} are
discretized on a regular square mesh of grid size $\delta $ in terms of a
residuals molecular dynamics process \cite{scaraggi.in.prep.true,Persson2014}, resulting in the following set of equations:
\begin{align}
L_{ij}& =-u_{ij}+\left( \bar{u}+w_{ij}-h_{ij}\right) \label{Lij} \\
\sigma _{ij}& =f\left( u_{ij}\right) \label{sigmaij} \\
\sigma _{ij}& \rightarrow \Delta \sigma
(q_{hk})=M_{zz}^{-1}w(q_{hk})\rightarrow w\left( x_{ij}\right)
\label{delta.sigmaij}
\end{align}
where $L_{ij}$ is the generic residual (related to the generic iterative
solution $u_{ij}$). \autoref{Lij} is solved in $u_{ij}$ through a Verlet
integration scheme, whereas the solution accuracy is set by requiring
\begin{align*}
\left\langle L_{ij}^{2}/u_{ij}^{2}\right\rangle ^{1/2}& <\varepsilon _{\mathrm{L}} \\
\left\langle \left[ \left( u_{ij}^{n}-u_{ij}^{n-1}\right) /u_{ij}^{n-1}\right] ^{2}\right\rangle ^{1/2}& <\varepsilon _{\mathrm{u}},
\end{align*}
where both errors are typically of order $10^{-4}$.
Among the mean physical quantities which can be extracted from the solution
fields, the one of particular relevance for this work is the micro-rolling
friction coefficient $\mu _{\mathrm{r}}=F_{\mathrm{r}}/F_{\mathrm{N}}$,
where the micro-rolling force $F_{\mathrm{r}}$ reads
\begin{equation*}
F_{\mathrm{r}}=\left\vert \mathbf{v}\right\vert
^{-1}\int_{A_{0}}d^{2}x~\sigma (\mathbf{x})\left[ \mathbf{\nabla }h\left(
\mathbf{x}\right) \cdot \mathbf{v}\right] ,
\end{equation*}
and the normal load $F_{\mathrm{N}}=\int_{A_{0}}d^{2}x\ \sigma (\mathbf{x})$.
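On the discretization grid of Sec. \ref{BEM}, the two integrals above reduce to plain sums, with $\nabla h$ conveniently evaluated spectrally. The sketch below uses synthetic placeholder fields (not the solver output) to illustrate the evaluation of $\mu _{\mathrm{r}}$; for a pressure field uncorrelated with the slope, the rolling force vanishes over a period, as it must.

```python
import numpy as np

def micro_rolling_friction(sigma, h, L, v):
    """mu_r = F_r / F_N, with F_r = |v|^{-1} * sum of sigma (grad h . v) dA
    and F_N = sum of sigma dA; grad h is computed by spectral differentiation."""
    n = h.shape[0]
    dA = (L / n) ** 2
    q = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    qx, qy = np.meshgrid(q, q, indexing="ij")
    hq = np.fft.fft2(h)
    hx = np.real(np.fft.ifft2(1j * qx * hq))   # dh/dx
    hy = np.real(np.fft.ifft2(1j * qy * hq))   # dh/dy
    vnorm = np.hypot(*v)
    F_r = dA * np.sum(sigma * (hx * v[0] + hy * v[1])) / vnorm
    F_N = dA * np.sum(sigma)
    return F_r / F_N

n, L = 64, 1.0
x = np.arange(n) * L / n
X, Y = np.meshgrid(x, x, indexing="ij")
h = 1e-4 * np.sin(2 * np.pi * X / L)                      # single-harmonic roughness
mu_uniform = micro_rolling_friction(np.ones((n, n)), h, L, (0.1, 0.0))
sigma2 = 1.0 + 0.5 * np.cos(2 * np.pi * X / L)            # pressure correlated with slope
mu_corr = micro_rolling_friction(sigma2, h, L, (0.1, 0.0))
```

With uniform pressure $\mu _{\mathrm{r}}=0$ by periodicity, while the slope-correlated pressure gives the analytically computable value $\mu _{\mathrm{r}}=\pi a/2$ for roughness amplitude $a$.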
\section{$M_{zz}\left( \mathbf{q},\protect\omega \right) \ $for a bulk with
stepwise-graded viscoelastic rheology}
\label{navier.section}In this section we will show how to calculate
$M_{zz}\left( \mathbf{q},\omega \right) $ for a stepwise-graded viscoelastic
composite. The case of a continuously-graded viscoelastic composite will be
discussed in Appendix \ref{appendix.2}.
\begin{figure}[tbh]
\centering
\includegraphics[
width=0.5\textwidth,
angle=0
]{singlelayer.eps}
\caption{Schematic of an infinitely-wide slab of finite thickness $d$,
characterized by a linear viscoelastic rheology. $\protect\sigma _{\mathrm{up}}$ ($\protect\sigma _{\mathrm{do}}$) and $w_{\mathrm{up}}$ ($w_{\mathrm{do}}$) are, respectively, the stress and displacement fields on the top $z=0$
(bottom $z=-d$) surface.}
\label{singlelayer.eps}
\end{figure}
In particular, we first consider the case of a linearly-viscoelastic
infinitely-wide slab of thickness $d$, see Fig. \ref{singlelayer.eps}. By
considering the following Fourier transforms ($t\rightarrow \omega $ and
$\mathbf{x}\rightarrow \mathbf{q}$)
\begin{eqnarray*}
\mathbf{w}\left( \mathbf{q},z,\omega \right) &=&\left( 2\pi \right)
^{-3}\int dt\int d^{2}x~\mathbf{w}\left( \mathbf{x},z,t\right) e^{-i\left(
\mathbf{q}\cdot \mathbf{x}-\omega t\right) } \\
\mu \left( \omega \right) &=&\int dt~\mu \left( t\right) e^{i\omega t}
\end{eqnarray*}
and, inversely,
\begin{eqnarray*}
\mu \left( t\right) &=&\left( 2\pi \right) ^{-1}\int d\omega ~e^{-i\omega t}\mu \left( \omega \right) \\
\mathbf{w}\left( \mathbf{x},z,t\right) &=&\int d\omega \int d^{2}q~\mathbf{w}\left( \mathbf{q},z,\omega \right) e^{i\left( \mathbf{q}\cdot \mathbf{x}-\omega t\right) },
\end{eqnarray*}
the relation between the stress and displacement fields on the top ($z=0$)
and bottom surface ($z=-d$), in the limit of quasi-static interaction [i.e.
$\omega /\left( qc\right) =v/c\ll 1$, see Appendix \ref{appendix.1}, where $c$
is the generic sound speed], reads in matrix form
\begin{equation}
\left[
\begin{array}{c}
\boldsymbol{\sigma }_{\mathrm{up}}/\left[ E_{\mathrm{r}}\left( \omega
\right) q/2\right] \\
\mathbf{w}_{\mathrm{up}}
\end{array}
\right] =\cosh \left( qd\right) \left[
\begin{array}{cc}
\mathbf{M}_{1} & \mathbf{M}_{2} \\
\mathbf{M}_{3} & \mathbf{M}_{4}
\end{array}
\right] \left[
\begin{array}{c}
\boldsymbol{\sigma }_{\mathrm{do}}/\left[ E_{\mathrm{r}}\left( \omega
\right) q/2\right] \\
\mathbf{w}_{\mathrm{do}}
\end{array}
\right] , \label{the.slab.matrix}
\end{equation}
where $\mathbf{M}_{j}\left[ \mathbf{q}d,\nu \left( \omega \right) \right] $
is a 3 by 3 matrix. $\boldsymbol{\sigma }_{\mathrm{up}}$ ($\boldsymbol{\sigma }_{\mathrm{do}}$) and $\mathbf{w}_{\mathrm{up}}$ ($\mathbf{w}_{\mathrm{do}}$) are, respectively, the stress and displacement fields on the
top (bottom) surface, see Fig. \ref{singlelayer.eps}. $\mathbf{M}_{j}$ are
determined in Appendix \ref{appendix.1} for the most general case of a
frequency-dependent Poisson ratio $\nu \left( \omega \right) $, as well as
reported for the limiting case of constant $\nu $. \autoref{the.slab.matrix} can
be conveniently rephrased depending on the adopted boundary conditions (BCs)
on the bottom surface ($z=-d$), e.g.
\begin{equation}
\mathbf{w}_{\mathrm{up}}\left( \mathbf{q},\omega \right) =\mathbf{M}_{3}\mathbf{M}_{1}^{-1}\frac{\boldsymbol{\sigma }_{\mathrm{up}}\left( \mathbf{q},\omega \right) }{E_{\mathrm{r}}\left( \omega \right) q/2}+\cosh \left(
qd\right) \left[ \mathbf{M}_{4}-\mathbf{M}_{3}\mathbf{M}_{1}^{-1}\mathbf{M}_{2}\right] \mathbf{w}_{\mathrm{do}}\left( \mathbf{q},\omega \right)
\label{BC1}
\end{equation}
or
\begin{equation}
\mathbf{w}_{\mathrm{up}}\left( \mathbf{q},\omega \right) =\mathbf{M}_{4}\mathbf{M}_{2}^{-1}\frac{\boldsymbol{\sigma }_{\mathrm{up}}\left( \mathbf{q},\omega \right) }{E_{\mathrm{r}}\left( \omega \right) q/2}+\cosh \left(
qd\right) \left[ \mathbf{M}_{3}-\mathbf{M}_{4}\mathbf{M}_{2}^{-1}\mathbf{M}_{1}\right] \frac{\mathbf{\sigma }_{\mathrm{do}}\left( \mathbf{q},\omega
\right) }{E_{\mathrm{r}}\left( \omega \right) q/2}. \label{BC2}
\end{equation}
\begin{figure}[tbh]
\centering
\includegraphics[
width=0.5\textwidth,
angle=0
]{multilayer1.eps}
\caption{Schematic of an infinitely-wide slab of finite thickness
$d=\sum_{j}d_{j}$, characterized by a step-wise graded linear viscoelastic
rheology.}
\label{stepwise.schematic}
\end{figure}
Now, in Fig. \ref{stepwise.schematic} we show the schematic of the generic
composite slab with a step-wise graded rheology, with $j=1,\ldots ,n,\ldots ,N$ bonded
layers. We first assume the generic layer $\left( n-1\right) $ to be
described by the general stress-displacement relation
\begin{equation*}
\mathbf{w}_{\mathrm{up}}=\left[ \mathbf{M}\right] ^{\left( n-1\right) }\frac{\boldsymbol{\sigma }_{\mathrm{up}}}{\left[ E_{\mathrm{r}}\left( \omega
\right) \right] ^{\left( n-1\right) }q/2},
\end{equation*}
where $\mathbf{M}$ is a 3 by 3 matrix. Imposing the continuity of stress and
displacement between layer $\left( n-1\right) $ and $\left( n\right) $, and
by using \autoref{the.slab.matrix}, we get for the layer $\left( n\right) $
\begin{equation*}
\mathbf{w}_{\mathrm{up}}=\left[ \mathbf{M}_{3}+\frac{E_{\mathrm{r}}\left(
\omega \right) }{\left[ E_{\mathrm{r}}\left( \omega \right) \right] ^{n-1}}\mathbf{M}_{4}\left[ \mathbf{M}\right] ^{n-1}\right] \left[ \mathbf{M}_{1}+\frac{E_{\mathrm{r}}\left( \omega \right) }{\left[ E_{\mathrm{r}}\left(
\omega \right) \right] ^{n-1}}\mathbf{M}_{2}\left[ \mathbf{M}\right] ^{n-1}\right] ^{-1}\frac{\boldsymbol{\sigma }_{\mathrm{up}}}{E_{\mathrm{r}}\left(
\omega \right) q/2},
\end{equation*}
where the index $\left( n\right) $ has been dropped for simplicity. Thus
$\mathbf{M}$ for the layer $\left( n\right) $ reads
\begin{equation}
\left[ \mathbf{M}\right] ^{\left( n\right) }=\left[ \mathbf{M}_{3}+\frac{E_{\mathrm{r}}\left( \omega \right) }{\left[ E_{\mathrm{r}}\left( \omega
\right) \right] ^{\left( n-1\right) }}\mathbf{M}_{4}\left[ \mathbf{M}\right]
^{\left( n-1\right) }\right] \left[ \mathbf{M}_{1}+\frac{E_{\mathrm{r}}\left( \omega \right) }{\left[ E_{\mathrm{r}}\left( \omega \right) \right]
^{\left( n-1\right) }}\mathbf{M}_{2}\left[ \mathbf{M}\right] ^{\left(
n-1\right) }\right] ^{-1}, \label{graded.stepped}
\end{equation}
with again
\begin{equation*}
\mathbf{w}_{\mathrm{up}}=\left[ \mathbf{M}\right] ^{\left( n\right) }\frac{\boldsymbol{\sigma }_{\mathrm{up}}}{\left[ E_{\mathrm{r}}\left( \omega
\right) \right] ^{\left( n\right) }q/2}.
\end{equation*}
\autoref{graded.stepped} shows that the surface response of a stepwise-graded
composite can be easily determined with a recursive calculation.
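As a sketch of the recursion in \autoref{graded.stepped}: treating the slab kernels $\mathbf{M}_{1},\ldots ,\mathbf{M}_{4}$ as given $3\times 3$ arrays (random placeholders below, since their actual expressions are derived in Appendix \ref{appendix.1}), the layer-by-layer update can be written as:

```python
import numpy as np

def next_layer_M(M_prev, Er_ratio, M1, M2, M3, M4):
    """One step of the recursion: Er_ratio = E_r^(n) / E_r^(n-1),
    M^(n) = [M3 + r M4 M^(n-1)] [M1 + r M2 M^(n-1)]^{-1}."""
    A = M3 + Er_ratio * (M4 @ M_prev)
    B = M1 + Er_ratio * (M2 @ M_prev)
    return A @ np.linalg.inv(B)

rng = np.random.default_rng(0)
# placeholder slab kernels (the true M_j depend on q*d and nu(omega))
M1, M2, M3, M4 = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(4)]

M = M3 @ np.linalg.inv(M1)          # rigid-bottom initializer, w_do = 0
for Er_ratio in (1.0, 0.5, 2.0):    # stepwise-graded stack of three layers
    M = next_layer_M(M, Er_ratio, M1, M2, M3, M4)
```

Note the algebraic sanity check that setting $\mathbf{M}_{2}=\mathbf{M}_{4}=0$ collapses the update to $\mathbf{M}_{3}\mathbf{M}_{1}^{-1}$ irrespective of the previous layer.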
Finally, for the stepwise graded composite with $N$ layers
\begin{equation}
M_{zz}\left( \mathbf{q},\omega \right) =\frac{2}{q}\frac{\left[ \mathbf{M}\left( \mathbf{q},\omega \right) \right] _{3,3}^{\left( N\right) }}{\left[
E_{\mathrm{r}}\left( \omega \right) \right] ^{\left( N\right) }},
\label{final}
\end{equation}
where $\left[ \mathbf{M}\right] ^{\left( 0\right) }$ [the innermost layer,
needed to initialize \autoref{graded.stepped}] is obtained from \autoref{BC1} or \autoref{BC2}, depending on the adopted BCs $\mathbf{w}_{\mathrm{do}}\left( \mathbf{q},\omega \right) =0$ (thus $\left[ \mathbf{M}\right] ^{\left( 0\right) }=\mathbf{M}_{3}\mathbf{M}_{1}^{-1}$) or $\mathbf{\sigma }_{\mathrm{do}}\left(
\mathbf{q},\omega \right) =0$ (thus $\left[ \mathbf{M}\right] ^{\left(
0\right) }=\mathbf{M}_{4}\mathbf{M}_{2}^{-1}$), for $q\neq 0$. In the
simplest case of a bulky $\left( 0\right) $-layer (corresponding to a half
space, i.e.\ $d\rightarrow \infty $), $\left[ \mathbf{M}\right] ^{\left(
0\right) }=\mathbf{M}_{3}\mathbf{M}_{1}^{-1}=\mathbf{M}_{4}\mathbf{M}_{2}^{-1}$ reads
\begin{equation}
\left[ \mathbf{M}\right] ^{\left( 0\right) }=
\begin{bmatrix}
1+\frac{\nu }{p}\frac{q_{y}^{2}}{q^{2}} & -\frac{\nu }{p}\frac{q_{x}q_{y}}{q^{2}} & i\frac{2\nu -1}{2p}\frac{q_{x}}{q} \\
-\frac{\nu }{p}\frac{q_{x}q_{y}}{q^{2}} & 1+\frac{\nu }{p}\frac{q_{x}^{2}}{q^{2}} & i\frac{2\nu -1}{2p}\frac{q_{y}}{q} \\
i\frac{1-4\nu \left( 1-\nu _{0}\right) }{1-2\nu _{0}}\frac{q_{x}}{2pq} & i\frac{1-4\nu \left( 1-\nu _{0}\right) }{1-2\nu _{0}}\frac{q_{y}}{2pq} & \frac{p_{0}}{p}\frac{1-2\nu }{1-2\nu _{0}}
\end{bmatrix}
. \label{bulk.new}
\end{equation}
In \autoref{bulk.new} $p=1-\nu \left( \omega \right) $ and $p_{0}=1-\nu _{0}$,
where $\nu _{0}=\nu \left( \omega =0\right) $ is the Poisson coefficient in
the rubbery regime. Observe that for the most general case ($p\neq p_{0}$)
$\left[ \mathbf{M}\right] _{3,3}^{\left( 0\right) }$ in \autoref{bulk.new} is not
equal to 1, as (indirectly) expected from the theory of
elastic-viscoelastic correspondence. This also suggests that our model can
easily overcome the restrictions imposed by the adoption of the
elastic-viscoelastic correspondence principle (frequency-independent Poisson
ratio) on the rubber rheological characteristics which can be modelled. In
the simplest case where $\nu \left( \omega \right) =\nu =\nu _{0}$, \autoref{bulk.new} simplifies to the well known
\begin{equation}
\left[ \mathbf{M}\right] ^{\left( 0\right) }=
\begin{bmatrix}
1+\frac{\nu }{p}\frac{q_{y}^{2}}{q^{2}} & -\frac{\nu }{p}\frac{q_{x}q_{y}}{q^{2}} & i\frac{2\nu -1}{2p}\frac{q_{x}}{q} \\
-\frac{\nu }{p}\frac{q_{x}q_{y}}{q^{2}} & 1+\frac{\nu }{p}\frac{q_{x}^{2}}{q^{2}} & i\frac{2\nu -1}{2p}\frac{q_{y}}{q} \\
i\frac{1-2\nu }{2p}\frac{q_{x}}{q} & i\frac{1-2\nu }{2p}\frac{q_{y}}{q} & 1
\end{bmatrix}
. \label{bulk.old}
\end{equation}
Finally, the linear viscoelastic complex modulus $E\left( \omega \right) $ (which can be measured
using standard techniques \cite{Lorenz2014565}) can be related to the creep spectrum through a Prony series\cite{Lorenz2014565,Scaraggi201415,Scaraggi2014}, obtaining
\begin{equation}
\frac{1}{E(\omega )}\approx \frac{1}{E_{\infty }}+\sum_{j=1}^{N}\frac{H(\tau
_{j})}{1-i\omega \tau _{j}}, \label{prony}
\end{equation}
where $N$ is the number of relaxation times, $H(\tau _{j})$ the discrete
creep function, and $\tau _{j}$ the generic relaxation time. $E_{\infty }$
is the elastic modulus in the glassy regime.
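\autoref{prony} is directly evaluated numerically. In the sketch below the glassy modulus, relaxation times and creep weights are illustrative placeholders (two relaxation times only), not the fitted spectrum of the compound used in this work.

```python
import numpy as np

def complex_modulus(omega, E_inf, H, tau):
    """E(omega) from the Prony representation
    1/E = 1/E_inf + sum_j H_j / (1 - i omega tau_j)."""
    omega = np.asarray(omega, dtype=complex)
    inv_E = 1.0 / E_inf + np.sum(
        H[:, None] / (1.0 - 1j * omega[None, :] * tau[:, None]), axis=0)
    return 1.0 / inv_E

# illustrative spectrum: glassy modulus 1 GPa, two relaxation times
E_inf = 1e9
tau = np.array([1e-4, 1e-2])
H = np.array([4e-9, 5e-9])          # 1/Pa, discrete creep function values
omega = np.logspace(-2, 8, 11)
E = complex_modulus(omega, E_inf, H, tau)
```

By construction $E(0)=\left( 1/E_{\infty }+\sum_{j}H_{j}\right) ^{-1}$ (the rubbery modulus, here $10^{8}$ Pa), while $E(\omega )\rightarrow E_{\infty }$ for $\omega \tau _{j}\gg 1$.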
\begin{figure}[tbh]
\centering
\includegraphics[
width=0.4\textwidth,
angle=0
]{RealPart.eps}
\caption{Real part of the (dimensionless) reduced viscoelastic modulus $E_{\mathrm{r}}(\protect\omega )/E_{\mathrm{r}0}$ adopted in this work, as a
function of the frequency (in \textrm{s}$^{-1}$), in a \textrm{log}$_{10}$-\textrm{log}$_{10}$ scale. The curve refers to a car tire rubber compound at a low external
temperature.}
\label{RealPart.eps}
\end{figure}
In Fig. \ref{RealPart.eps} we show the real part of the (dimensionless)
reduced viscoelastic modulus $E_{\mathrm{r}}(\omega )/E_{\mathrm{r}0}$
adopted in this work, as a function of the frequency (in \textrm{s}$^{-1}$),
in a \textrm{log}$_{10}$-\textrm{log}$_{10}$ scale. The adopted viscoelastic
modulus corresponds to a car tire tread-block compound under low operating
temperatures (see e.g. \cite{scaraggi.in.prep}).
\section{Numerical results}
\label{results}In this section we investigate for the first time the rough
contact mechanics of a rubber block (see Fig. \ref{RealPart.eps} for the rheological properties) covered by
a surface layer with modified rheological properties (with respect to the
bulk), with particular focus on the hysteretic friction (i.e. micro-rolling
friction) and the contact area (directly related to the adhesive contribution to
friction). We observe that a modified surface layer (MSL) of thickness of order
$\approx 1~\mathrm{\mu m}$ usually occurs as a consequence of rubber wear
\cite{schipper}, e.g. in tire tread-road contacts or in dynamic rubber
seals. The MSL thickness is clearly expected to introduce a high frequency
(physical) cut-off to the roughness spectral content which can be probed by
the bulk, making the contact mechanics unaffected by the small-wavelength
roughness regime beyond such a threshold. We further observe that, without
such a physical cut-off mechanism, the hysteretic friction (normalized
contact area) increases (decreases) without bound in an ideal
randomly rough interaction \cite{Persson20013840,scaraggi.in.prep}, thus
(classically) making the quantitative prediction of friction and contact
area strongly dependent on the arbitrary (or fitted) choice of such a
threshold parameter.
The following results are obtained by applying the numerical model developed
in Secs. \ref{BEM} and \ref{navier.section} to the case of an elastic coating
bonded onto a viscoelastic half space, in steady sliding contact with a
rigid isotropically-rough surface. The bulk is characterized by the complex
reduced viscoelastic modulus $E_{\mathrm{r}}\left( \omega \right) $ of the
tread rubber compound reported in Sec. \ref{navier.section}, whereas the
elastic coating is assumed here to be characterized by the reduced Young's
modulus $E_{\mathrm{r0}}=E_{\mathrm{r}}(\omega =0)$, i.e. given by the
relaxed elastic modulus of the rubber (in qualitative agreement with the
experimental observations \cite{schipper})\footnote{We stress that the graded rheological formulation we have developed can be
applied to any, continuous or stepwise, formulation of the composite, whose
rheological characteristics as a function of the bulk depth can be obtained
e.g. through sub-surface differential measurements of mechanical
properties. However, within the theoretical purposes of this contribution,
which has been intentionally limited to the fundamental understanding of the effect of
graded rheology (as e.g. induced by wear-driven MSL formation) on the rough
contact mechanics, we have numerically simulated a composite formulation
described by the smallest set of interaction parameters (e.g. elastic
coating onto rubber bulk), yet complete enough to capture the basic physics
occurring in the rough interaction between graded solids.}.
\begin{figure}[tbh]
\centering
\includegraphics[
width=0.4\textwidth,
angle=0
]{Psd.eps}
\caption{Roughness power spectral density adopted in the present study, in a
$\log _{\mathrm{10}}$-$\log _{\mathrm{10}}$ scale. The power spectra have a
low wave vector cut-off at $q_{\mathrm{0}}=0.63\cdot 10^{2}~\mathrm{m}^{-1}$, and a roll-off at $q_{\mathrm{r}}=4q_{\mathrm{0}}$. For $q>q_{\mathrm{r}}$
the power spectra correspond to self-affine fractal surfaces with Hurst
exponent $H=0.8$. We consider three cases where the large wave vector
cut-off is $q_{\mathrm{1}}=256q_{\mathrm{0}}$, $512q_{\mathrm{0}}$, and
$1024q_{\mathrm{0}}$ (corresponding to a root mean square slope $s_{\mathrm{rms}}=\left\langle \left\vert \protect\nabla h\right\vert ^{2}\right\rangle
^{1/2}=0.077$, $0.095$, and $0.11$, respectively. The root mean square
roughness, which is mostly determined by the large wavelength content, is
similar for all cases, $h_{\mathrm{rms}}=0.16~\mathrm{mm}$). All
calculations have been performed with $n=8$ divisions at the smallest
wavelength $\protect\lambda _{1}=2\protect\pi /q_{1}$.}
\label{PSD.eps}
\end{figure}
Fig. \ref{PSD.eps} shows the generic roughness power spectral density adopted
in the present study. The power spectra have a low wave vector cut-off at
$q_{\mathrm{0}}=0.63\cdot 10^{2}~\mathrm{m}^{-1}$, and a roll-off at $q_{\mathrm{r}}=4q_{\mathrm{0}}$. For $q>q_{\mathrm{r}}$ the power spectra
correspond to self-affine fractal surfaces with Hurst exponent $H=0.8$
(related to the fractal dimension $D_{\mathrm{F}}=3-H$)\footnote{We stress that whilst roughness self-affine characteristics are often found
in several man- and nature-made surfaces \cite{Persson2006b}, the
self-affine behaviour is here adopted only for convenience (as discussed
before), in order to reduce the number of parameters characterizing the
contact interface. However, there is no particular limitation on the
deterministic or statistical complexity of the rough surfaces to be
simulated.}. We consider three cases where the large wave vector cut-off is
$q_{\mathrm{1}}=256q_{\mathrm{0}}$, $512q_{\mathrm{0}}$, and $1024q_{\mathrm{0}}$ (corresponding to a root mean square slope $s_{\mathrm{rms}}=\left\langle \left\vert \nabla h\right\vert ^{2}\right\rangle ^{1/2}=0.077$, $0.095$, and $0.11$, respectively). The root mean square roughness, which
is mostly determined by the large wavelength content, is similar for all cases,
$h_{\mathrm{rms}}=0.16~\mathrm{mm}$.
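A periodic realization with the spectral layout described above (flat up to the roll-off $q_{\mathrm{r}}$, self-affine decay $C(q)\propto q^{-2(H+1)}$ beyond it) can be generated by FFT filtering of random phases. The sketch below uses an illustrative grid and target $h_{\mathrm{rms}}$, and the final rescaling to a prescribed $h_{\mathrm{rms}}$ is a convenience, not necessarily the normalization used in this work.

```python
import numpy as np

def self_affine_surface(n, L, H, q_r_over_q0, h_rms, seed=0):
    """Periodic random surface with constant PSD up to q_r and
    C(q) ~ q^{-2(H+1)} beyond it (self-affine regime), rescaled to h_rms."""
    rng = np.random.default_rng(seed)
    q0 = 2.0 * np.pi / L
    q = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    qx, qy = np.meshgrid(q, q, indexing="ij")
    qabs = np.hypot(qx, qy)
    q_r = q_r_over_q0 * q0
    amp = np.ones_like(qabs)
    mask = qabs > q_r
    amp[mask] = (qabs[mask] / q_r) ** (-(H + 1.0))   # sqrt of the PSD
    amp[0, 0] = 0.0                                  # enforce zero mean
    phase = np.exp(2j * np.pi * rng.random((n, n)))  # random phases
    h = np.real(np.fft.ifft2(amp * phase))
    return h * (h_rms / np.std(h))

h = self_affine_surface(n=256, L=0.01, H=0.8, q_r_over_q0=4, h_rms=0.16e-3)
```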
\begin{figure}[tbh]
\centering
\subfigure[]{\includegraphics[
width=0.40\textwidth,
angle=0
]{frictionPavg1.eps} \label{frictionPavg1.eps}
} \qquad
\subfigure[]{\includegraphics[
width=0.40\textwidth,
angle=0
]{frictionPavg2.eps} \label{frictionPavg3.eps}
}
\\
\subfigure[]{\includegraphics[
width=0.40\textwidth,
angle=0
]{frictionPavg3.eps} \label{frictionPavg4.eps}
} \qquad
\subfigure[]{\includegraphics[
width=0.40\textwidth,
angle=0
]{frictionPavg4.eps} \label{frictionPavg2.eps}
}
\caption{Micro-rolling friction $\protect\mu _{\mathrm{r}}$ as a function of
the dimensionless contact pressure $\sigma
_{0}/E_{\mathrm{r}0}$, for $q_{1}/q_{0}=2^{8}$, $2^{9}$
and $2^{10}$. The sliding velocity $v$ is set to $0.1$~$\mathrm{m/s}$. From
(a) to (d) $q_{\mathrm{0}}d\cdot 10^{3}=0.63$, $3.2$, $4.4$, $6.3$.}
\label{micro.rolling.friction}
\end{figure}
\begin{figure}[tbh]
\centering
\subfigure[]{\includegraphics[
width=0.40\textwidth,
angle=0
]{AcPavg1.eps} \label{AcPavg1.eps}
} \qquad
\subfigure[]{\includegraphics[
width=0.40\textwidth,
angle=0
]{AcPavg2.eps} \label{AcPavg3.eps}
}
\\
\subfigure[]{\includegraphics[
width=0.40\textwidth,
angle=0
]{AcPavg3.eps} \label{AcPavg4.eps}
} \qquad
\subfigure[]{\includegraphics[
width=0.40\textwidth,
angle=0
]{AcPavg4.eps} \label{AcPavg2.eps}
}
\caption{Normalized (projected) contact area $A_{\mathrm{c}}/A_{0}$ as a
function of the dimensionless contact pressure $\sigma
_{0}/E_{\mathrm{r}0}$, for
q_{1}/q_{0}=2^{8}$, $2^{9}$ and $2^{10}$. The sliding velocity $v$ is set to
$0.1$~$\mathrm{m/s}$. From (a) to (d) $q_{\mathrm{0}}d\cdot 10^{3}=0.63$, $3.2$, $4.4$, $6.3$.}
\label{contact.area}
\end{figure}
In Fig \ref{micro.rolling.friction} and \ref{contact.area} we show,
respectively, the micro-rolling friction $\mu _{\mathrm{r}}$ and the contact
area $A_{\mathrm{c}}/A_{0}$ as a function of the dimensionless contact pressure $\sigma
_{0}/E_{\mathrm{r}0}$, for $q_{1}/q_{0}=2^{8}$, $2^{9}$ and $2^{10}$. The sliding velocity
v $ is set to $0.1$~$\mathrm{m/s}$. From (a) to (d) the coating thickness is
increased from $q_{\mathrm{0}}d=0.63\cdot 10^{-4}$ to $6.3\cdot 10^{-3}$
(i.e. $d=10$ to $100~\mathrm{\mu m}$ for our system). In particular, Figs.
\ref{frictionPavg1.eps} and \ref{AcPavg1.eps} show, respectively, $\mu _
\mathrm{r}}$ and $A_{\mathrm{c}}/A_{0}$ for the thinner coating ($d=10
\mathrm{\mu m}$). We observe that increasing the roughness high frequency
content determines an increase (decrease) of the hysteretic friction (true
contact area). This could be expected from classical mean field theories
when observing that the coating is thin enough to allow the whole range of
asperities, down to the smallest wavelengths (i.e. to $q_{1}/q_{0}=2^{10}$),
to probe the rubber bulk and, therefore, to effectively contribute
generating the stored (responsible for the contact area) and dissipated
interfacial energy. Moreover, in accordance with classical results \cit
{Persson20013840,scaraggi.in.prep}, even a small increase in the roughness
spectral content in the high-frequency regime non-negligibly affects both
friction and contact area, as due to the strong dependence of such physical
quantities on the $s_{\mathrm{rms}}$ (which increases from $0.07$ to $0.11$
from $q_{1}/q_{0}=2^{8}$ to $2^{10}$). Figs. \ref{frictionPavg2.eps} and \re
{AcPavg2.eps} show, respectively, $\mu _{\mathrm{r}}$ and $A_{\mathrm{c
}/A_{0}$ for the thicker coating ($d=100~\mathrm{\mu m}$). In this case the
contact prediction for $q_{1}/q_{0}=2^{9}$ and $2^{10}$ overlap, i.e. an
asymptotic friction and contact area are obtained for $q_{1}/q_{0}=2^{9}$
\textit{in the entire set of investigated contact pressures}. Interestingly,
such asymptotes markedly differ from the corresponding curves of Figs. \ref{frictionPavg1.eps} and \ref{AcPavg1.eps} and are now closer to the
$q_{1}/q_{0}=2^{8}$ curve; instead, the predictions at $q_{1}/q_{0}=2^{8}$
are almost unaffected by the coating thickness. This can be easily justified
with the following arguments. By increasing the coating thickness,
the smallest roughness wavelengths are no longer able to probe the
viscoelastic bulk; hence the hysteretic friction is unaffected by the
smallest asperities, resulting in a negligible dissipation increase with
respect to the $q_{1}/q_{0}=2^{8}$ roughness case. Furthermore, given the
soft rheological characteristics of the elastic coating, the smallest
asperities are in full contact with the substrate (at increasing coating
thickness), with no relevant effect in terms of contact area reduction, even
at small contact area values (i.e. in the linear regime\footnote{
Note that in the linear contact regime, the average effective contact
pressure $\bar{\sigma}$ [$\bar{\sigma}\left( q_{1}\right) =\sigma
_{0}A_{0}/A_{\mathrm{c}}\left( q_{1}\right) $, where $A_{\mathrm{c}}\left(
q_{1}\right) $ is the true contact area when the power spectral density
contains roughness down to the wavenumber $q_{1}$], which is responsible for the
local asperity contact condition, depends only on the magnification,
i.e. given $A_{\mathrm{c}}\left( q_{1}\right) \approx A_{0}k\left(
q_{1}\right) m_{2}^{-1/2}\left( q_{1}\right) \sigma _{0}/E_{\mathrm{r}}\left( vq_{1}\right) $, one obtains $\bar{\sigma}\left( q_{1}\right)
k\left( q_{1}\right) m_{2}^{-1/2}\left( q_{1}\right) /E_{\mathrm{r}}\left(
vq_{1}\right) \approx 1$. Thus, locally, the asperities will
be in partial or full contact depending on the actual value of $k\left( q_{1}\right)$.}). Indeed the smallest wavelength asperities probe
a locally soft solid, which is even not subjected to the sliding induced
viscoelastic stiffening, as instead was the case of Fig. \ref{AcPavg1.eps}.
Finally, for intermediate coating thicknesses we find, as expected, an
intermediate scenario for both friction and contact area, see Figs. \ref{frictionPavg3.eps}, \ref{frictionPavg4.eps}, \ref{AcPavg3.eps} and \ref{AcPavg4.eps}.
\begin{figure}[tbh]
\centering
\includegraphics[
width=0.4\textwidth,
angle=0
]{amplitude.eps}
\caption{Wavelength representative amplitude $h_{q}\left( q\right) \approx
q_{0}\protect\sqrt{C\left( q\right) }$ as a function of the wavenumber $q$,
in $\log _{\mathrm{10}}$-$\log _{\mathrm{10}}$ scale, corresponding to the
PSD\ of Fig. \protect\ref{PSD.eps}. The vertical dashed lines indicate the
roughness frequencies $q$ corresponding to $\bar{q}=qd=1$ for the thicker
(black line, with $q_{0}d=6.3\cdot 10^{-3}$) and thinner (red, with
$q_{0}d=6.3\cdot 10^{-4}$) coatings adopted in the simulations, where $d$ is
the coating thickness.}
\label{amplitude.wave}
\end{figure}
In Fig. \ref{amplitude.wave} we show the wavelength representative amplitude
$h_{q}\left( q\right) \approx q_{0}\sqrt{C\left( q\right) }$ as a function of
the wavenumber $q$, in $\log _{\mathrm{10}}$-$\log _{\mathrm{10}}$ scale,
corresponding to the PSD\ of Fig. \ref{PSD.eps}. The vertical dashed lines
indicate the roughness frequencies $q$ corresponding to $\bar{q}=qd=1$ for
the thicker (black line, with $q_{0}d=6.3\cdot 10^{-3}$) and thinner (red,
with $q_{0}d=6.3\cdot 10^{-4}$) coatings adopted in the simulations. From the
theory developed in Appendix \ref{appendix.1}, the layer thickness enters
the formulation mainly through $\tilde{q}=\tanh \left( \bar{q}\right) $, where
again $\bar{q}=qd$. Thus we can qualitatively observe that for frequencies
$\bar{q}\geq \bar{q}_{\mathrm{high}}=2$, $\tilde{q}\approx 1$ so that the
roughness wavelengths approximately smaller than the coating thickness are
unable to probe the sub-coating composite rheology, whereas for $\bar{q}\leq
\bar{q}_{\mathrm{low}}=0.1$, $\tilde{q}\approx \bar{q}$ so that the
roughness asperities do mainly probe the bulk. However, whilst we expect
the exact values of
$\bar{q}_{\mathrm{high}}$ (and $\bar{q}_{\mathrm{low}}$) to be quantitatively
affected not only by the geometrical characteristics of the composite, but also
by its rheological properties (see e.g. the discussion in Sec. \ref{discussion}), it is worth in the present context (where both coating and
bulk show a Young's modulus of similar order of magnitude) to show $\bar{q}_{\mathrm{high}}$ (and $\bar{q}_{\mathrm{low}}$) in Fig. \ref{amplitude.wave}
for both the thinner [Fig. \ref{amplitude.wave.t}] and thicker [Fig. \ref{amplitude.wave.T}] coatings.
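As an aside, the $\tilde{q}=\tanh \left( \bar{q}\right) $ criterion above is simple enough to sketch in a few lines of code. The thresholds $\bar{q}_{\mathrm{low}}=0.1$ and $\bar{q}_{\mathrm{high}}=2$ are the qualitative values quoted in the text; the function name and the example wavelength are our own illustrative choices.

```python
import numpy as np

def probes_bulk(q, d, q_low=0.1, q_high=2.0):
    """Classify a roughness wavenumber q under a coating of thickness d.

    The layer thickness enters the theory through tanh(q*d): for
    q*d <= q_low the asperities mainly probe the bulk, while for
    q*d >= q_high they probe only the coating (thresholds from the text).
    """
    q_bar = q * d
    q_tilde = np.tanh(q_bar)  # layer-thickness factor in the theory
    if q_bar <= q_low:
        regime = "bulk"
    elif q_bar >= q_high:
        regime = "coating only"
    else:
        regime = "intermediate"
    return q_tilde, regime

# Example: a 1 mm roughness wavelength under the two coating thicknesses
# adopted in the simulations (d = 10 um and d = 100 um).
for d in (10e-6, 100e-6):
    q_tilde, regime = probes_bulk(2 * np.pi / 1e-3, d)
    print(f"d = {d * 1e6:.0f} um: tanh(q d) = {q_tilde:.3f} -> {regime}")
```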
\begin{figure}[tbh]
\centering
\subfigure[Thinner coating]{\includegraphics[
width=0.40\textwidth,
angle=0
]{thinner.eps} \label{amplitude.wave.t}
} \qquad
\subfigure[Thicker coating]{\includegraphics[
width=0.40\textwidth,
angle=0
]{thicker.eps} \label{amplitude.wave.T}
}
\caption{Wavelength representative amplitude $h_{q}\left( q\right) \approx
q_{0}\protect\sqrt{C\left( q\right) }$ as a function of the wavenumber $q$,
in $\log _{\mathrm{10}}$-$\log _{\mathrm{10}}$ scale, corresponding to the
PSD\ of Fig. \protect\ref{PSD.eps}. The vertical dashed lines indicate the
roughness frequencies $q$ corresponding to $\bar{q}=0.1$ and $2$. (a) for
the thinner ($q_{0}d=6.3\cdot 10^{-4})$, and (b) thicker ($q_{0}d=6.3\cdot
10^{-3})$ coating.}
\label{psd.test}
\end{figure}
\begin{figure}[tbh]
\centering
\subfigure[$\sigma _{0}/E_{\mathrm{r}0}=0.3$]{\includegraphics[
width=0.40\textwidth,
angle=0
]{friction_varthick.eps} \label{friction_varthick.eps}
} \qquad
\subfigure[$\sigma _{0}/E_{\mathrm{r}0}=0.3$]{\includegraphics[
width=0.40\textwidth,
angle=0
]{Contact_area_varthick.eps} \label{area_varthick.eps}
}
\\
\subfigure[$\sigma _{0}/E_{\mathrm{r}0}=0.1$]{\includegraphics[
width=0.40\textwidth,
angle=0
]{Friction_varthick_2.eps} \label{friction_varthick_2.eps}
} \qquad
\subfigure[$\sigma _{0}/E_{\mathrm{r}0}=0.1$]{\includegraphics[
width=0.40\textwidth,
angle=0
]{Contact_area_varthick_2.eps} \label{area_varthick_2.eps}
}
\caption{(a,c) Micro-rolling friction and (b,d) normalized projected contact
area as a function of the coating thickness, for $q_{1}/q_{0}=2^{8}$, $2^{9}$
and $2^{10}$, at contact pressures $\sigma _{0}/E_{\mathrm{r}0}=0.3$ (a,b) and $0.1$ (c,d). The sliding velocity $v$ is set to $0.1~\mathrm{m/s}$.}
\label{asymptote}
\end{figure}
We observe in Fig. \ref{amplitude.wave.t} (thinner coating) that, in
accordance with the previous arguments, the whole range of roughness
frequencies can probe the bulk, whereas for the thicker coating [Fig. \ref{amplitude.wave.T}] the smallest wavelengths do not probe the bulk,
confirming the friction and contact area results reported in Figs. \ref{micro.rolling.friction} and \ref{contact.area}.
Finally, in Fig. \ref{asymptote} we show (a,c) the micro-rolling friction $\mu
_{\mathrm{r}}$ and (b,d) the contact area $A_{\mathrm{c}}/A_{0}$ as a function
of the coating thickness, for $q_{1}/q_{0}=2^{8}$, $2^{9}$ and $2^{10}$
(sliding velocity $v$ set to $0.1$~$\mathrm{m/s}$), and for
a contact pressure $\sigma _{0}/E_{\mathrm{r}0}=0.1$ and $0.3$ (qualitatively similar
behaviours characterize the interaction at other contact pressures,
not shown here for brevity). It is interesting to observe that
the friction (and contact area) curve for $q_{1}/q_{0}=2^{10}$ converges to
the $q_{1}/q_{0}=2^{9}$ curve at increasing values of coating thickness,
again as expected from the previous arguments. A similar conclusion applies
for the contact area, see Figs. \ref{area_varthick.eps} and \ref{area_varthick_2.eps}\footnote{The contact area
increases at larger coating thickness $d$ since, for the adopted composite,
by increasing $d$ a wider range of PSD wavelengths is allowed to probe only the coating,
which is not subjected to a sliding-induced viscoelastic stiffening. Hence, an increasing
amount of roughness wavelengths is in full contact with the substrate.}. Thus, roughness
frequencies larger than $q_{1}/q_{0}=2^{9}$ do not affect the interfacial
contact mechanics at this coating thickness, supporting the general statement
that a physically meaningful characterization (and prediction) of the
friction and contact area properties of a generic interaction can only be
obtained provided that both confinement rheology and surface physics, as
well as surface roughness, are fully characterized to a same degree of
completeness.
\section{Discussion}
\label{discussion}The multiscale nature of the hysteretic friction $\mu _{\mathrm{r}}$ and contact area $A_{\mathrm{c}}/A_{0}$ for randomly rough
interactions is nowadays well accepted among contact mechanics researchers,
mainly thanks to the theoretical achievements of Persson \cite{Persson20013840}. At a contact scale of representative size $\lambda =2\pi
/q$, where $\zeta =q/q_{0}$ is the magnification at which the contact is
observed with respect to a contact macroscale $L_{0}=2\pi /q_{0}$, the
dissipation is confined in a bulk volume $\lambda ^{3}$ and corresponds
approximately to a friction force $F_{\mathrm{T}}\approx \lambda ^{3}q~\mathrm{Im}[E(\omega )]h_{\mathrm{\lambda }}^{2}/\lambda ^{2}$. Here $\omega
=\mathbf{q}\cdot \mathbf{v}$ is the angular frequency of excitation, with
$\mathbf{v}$ the sliding velocity, $E(\omega )$ the complex Young's
modulus of the rubber, and $h_{\mathrm{\lambda }}$ the amplitude of the
roughness wavelength $\lambda $. Thus, the frictional shear stress is $\tau
_{\mathrm{T}}\approx q^{2}~\mathrm{Im}[E(\omega )]h_{\mathrm{\lambda }}^{2}$. Hence, for a rough surface with self-affine characteristics, one has that
the contribution to friction $\Delta \tau _{\mathrm{T}}$ related to the
roughness wavelength $\lambda $ is
\begin{equation}
\Delta \tau _{\mathrm{T}}/\Delta q\approx \mathrm{Im}[E(\omega
)]q^{3}C(q)\propto \mathrm{Im}[E(\mathbf{q}\cdot \mathbf{v})]q^{-2H+1},
\label{discussion1}
\end{equation}
where
\begin{equation*}
C(q)=\left( 2\pi \right) ^{-2}\int d^{2}\mathbf{x}\ \left\langle h(\mathbf{x})h(\mathbf{0})\right\rangle e^{-i\mathbf{q}\cdot \mathbf{x}}
\end{equation*}
is the power spectral density of the surface roughness, $h(\mathbf{x})$ is
the substrate height measured from the average surface plane, and $H$ is the
Hurst coefficient. \autoref{discussion1} does not show any cut-off mechanism of
friction, and moreover for fractal dimensions $D_{\mathrm{f}}>2.5$ the
contribution to dissipation generated by decreasing roughness length scales
is even unbounded for ideal (infinite) systems. Similar considerations apply
for the contact area. In particular, by observing that $A\left( \zeta
\right) \bar{\sigma}\left( \zeta \right) =\sigma _{0}A_{0}=F_{\mathrm{N}}$,
where $A\left( \zeta \right) $ is the contact area obtained when an
arbitrary high-frequency cut-off is applied to the PSD (i.e. $C\left(
q>\zeta q_{0}\right) =0$), one simply has
\begin{equation*}
\frac{dA\left( \zeta \right) }{d\zeta }\propto -\frac{d\bar{\sigma}\left(
\zeta \right) }{d\zeta }.
\end{equation*}
By approximating $d\bar{\sigma}\left( \zeta \right) /d\zeta $ with $\sqrt{d\left\langle \sigma ^{2}\right\rangle /d\zeta }$, where
\begin{equation*}
d\left\langle \sigma ^{2}\right\rangle =\frac{E_{\mathrm{r0}}^{2}}{4}d\left[
m_{2,\mathrm{eff}}\left( \zeta \right) \right]
\end{equation*}
with
\begin{eqnarray*}
m_{2,\mathrm{eff}}\left( \zeta \right) &=&\int_{q_{0}}^{q_{0}\zeta
}dq~q^{3}C\left( q\right) \left\vert E_{\mathrm{r,\theta }}\left( \mathbf{q}\cdot \mathbf{v}\right) \right\vert ^{2}/E_{\mathrm{r0}}^{2} \\
d\left[ m_{2,\mathrm{eff}}\left( \zeta \right) \right] &\approx &d\zeta
~q_{0}q^{3}C\left( q\right) \left\vert E_{\mathrm{r,\theta }}\left( \mathbf{q}\cdot \mathbf{v}\right) \right\vert ^{2}/E_{\mathrm{r0}}^{2}
\end{eqnarray*}
and where $\left\vert E_{\mathrm{r,\theta }}\left( \mathbf{q}\cdot \mathbf{v}\right) \right\vert =\left( 2\pi \right) ^{-1}\int d\theta ~\left\vert
E_{\mathrm{r}}\left( qv\cos \theta \right) \right\vert $ (assuming
$v_{y}=0$). Thus, approximately,
\begin{equation}
\Delta A\left( \zeta \right) /\Delta q\propto -\left\vert E_{\mathrm{r,\theta }}\left( \mathbf{q}\cdot \mathbf{v}\right) \right\vert
q^{\left( -2H+1\right) /2}. \label{discussion2}
\end{equation}
We first observe that $\Delta A\left( \zeta \right) /\Delta q<0$, i.e. the
contact area continuously decreases by increasing the small-scale roughness
spectral content. The power-law exponent is similar to the previous case
(thus, similar considerations apply here), and no cut-off mechanism of the
contact area appears. Thus, according to both \autoref{discussion1} and \autoref{discussion2}, a small-scale cut-off mechanism can only be introduced
through the effective rheological response of the composite $E\left( \omega
\right) $. Whilst this has already been numerically demonstrated in Sec. \ref{results} for the case of an elastic coating bonded onto a rubber bulk,
in the following we show a more interesting feature: the
multiscale design of the effective complex modulus $E\left( \omega \right) $
(i.e. the choice of the composite material arrangement) can be adopted to
provide highly tailored contact mechanics properties, such as (but not limited to) an enhanced
micro-rolling friction.
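To make the absence of a cut-off concrete, the scaling of \autoref{discussion1} can be integrated numerically up to a truncation wavenumber $q_{1}$. The sketch below uses a single-relaxation-time modulus with illustrative parameter values of our own choosing (they are not the material data of the simulations); it only shows that for $H<0.5$ (i.e. $D_{\mathrm{f}}>2.5$) the cumulative dissipation keeps growing with $q_{1}$, whereas for $H>0.5$ it grows far more slowly.

```python
import numpy as np

# Cumulative hysteretic contribution, up to a cut-off q1:
#   tau(q1) ~ int_{q0}^{q1} Im[E(q v)] q^(1 - 2H) dq
# for a single-relaxation-time rubber. All parameter values are illustrative.
E0, Einf, tau_r, v = 1.0, 10.0, 1e-4, 0.1   # E_inf/E_0 = 10, as in the figures

def Im_E(omega):
    """Imaginary part of E(w) = E_inf + (E_0 - E_inf)/(1 + i w tau)."""
    return (Einf + (E0 - Einf) / (1.0 + 1j * omega * tau_r)).imag

def cumulative_friction(q1, H, q0=1e3, n=20000):
    q = np.geomspace(q0, q1, n)
    f = Im_E(q * v) * q ** (1.0 - 2.0 * H)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(q))  # trapezoid rule

for H in (0.8, 0.3):  # D_f = 3 - H, so H = 0.3 corresponds to D_f > 2.5
    r = cumulative_friction(1e7, H) / cumulative_friction(1e5, H)
    print(f"H = {H}: extending q1 from 1e5 to 1e7 multiplies friction by {r:.1f}")
```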
\begin{figure}[tbh]
\centering
\subfigure[]{\includegraphics[
width=0.5\textwidth,
angle=0
]{Half_Space.eps} \label{Half_Space.eps}
}
\\
\subfigure[]{\includegraphics[
width=0.4\textwidth,
angle=0
]{grafico1.eps}\label{grafico1.eps}
} \qquad
\subfigure[]{\includegraphics[
width=0.4\textwidth,
angle=0
]{grafico1_1.eps}\label{grafico1_1.eps}
}
\caption{(a) Schematic of a viscoelastic half space (VHS) in sliding contact with
a rigid rough surface. (b) Real and (c) imaginary part of the dimensionless
surface response $M_{zz}\left( \protect\omega \right) qE_{\mathrm{r0}}/2$
(with $\protect\omega =qv$ and $q_{y}=0$) as a function of the wave number
$q/q_{0}$ ($q_{0}=2\protect\pi /L_{0}$). The bulk is characterized by a
single relaxation time $\protect\tau =L_{0}/v_{1}$ and $E_{\mathrm{r}\infty
}/E_{\mathrm{r}0}=10$ (with $\protect\nu \left( \protect\omega \right) =\protect\nu _{0}$). The dimensionless sliding velocities $v/v_{1}$ belong to
the set $\left[ 0,10^{-4},10^{-3},10^{-2},\infty \right] $. EHS stands for elastic half space.}
\label{bulk}
\end{figure}
\begin{figure}[tbh]
\centering
\subfigure[]{\includegraphics[
width=0.5\textwidth,
angle=0
]{coating_halfpsace.eps} \label{coating_halfpsace.eps}
}
\\
\subfigure[]{\includegraphics[
width=0.4\textwidth,
angle=0
]{grafico2.eps} \label{grafico2.eps}
} \qquad
\subfigure[]{\includegraphics[
width=0.4\textwidth,
angle=0
]{loss_tangent.eps}\label{grafico3.eps}
}
\caption{(a) Schematic of a viscoelastic half space coated with an elastic
layer in sliding contact with a rigid rough surface. (b) Real part of the
dimensionless surface response $M_{zz}\left( \protect\omega \right) qE_{\mathrm{r0}}/2$ and (c) effective composite loss tangent $\mathrm{Im}\left[
M_{zz}\left( \protect\omega \right) ^{-1}\right] /\mathrm{Re}\left[
M_{zz}\left( \protect\omega \right) ^{-1}\right] $ (with $\protect\omega =qv$
and $q_{y}=0$) as a function of the wave number $q/q_{0}$ ($q_{0}=2\protect\pi /L_{0}$). The bulk is characterized by a single relaxation time $\protect\tau =L_{0}/v_{1}$ and $E_{\mathrm{r}\infty }/E_{\mathrm{r}0}=10$ (with
constant Poisson ratio). The coating has a reduced elastic modulus $E_{\mathrm{r}0}$. The dimensionless coating thickness $q_{0}d$ takes values in the set
$\left[ 0,9.4,31,63,94,190,\infty \right] 10^{-3}$, with $v/v_{1}=0.02$
(unless otherwise specified).}
\label{coating}
\end{figure}
\begin{figure}[tbh]
\centering
\subfigure[]{\includegraphics[
width=0.45\textwidth,
angle=0
]{multilayer1.eps} \label{multilayer1.eps}
} \qquad
\subfigure[]{\includegraphics[
width=0.4\textwidth,
angle=0
]{multi_slab_single_slab.eps} \label{grafico4.eps}
}
\caption{(a) Schematic of a composite block constituted by a rubber bulk
coated by three rubber layers (with different rheologies) in sliding contact
with a rigid rough surface. (b) Effective composite loss tangent $\mathrm{Im}\left[ M_{zz}\left( \protect\omega \right) ^{-1}\right] /\mathrm{Re}\left[
M_{zz}\left( \protect\omega \right) ^{-1}\right] $ (with $\protect\omega =qv$
and $q_{y}=0$) as a function of the wave number $q/q_{0}$ ($q_{0}=2\protect\pi /L_{0}$). The bulk is characterized by a single relaxation time $\protect\tau =L_{0}/v_{1}$ and $E_{\mathrm{r}\infty }/E_{\mathrm{r}0}=10$. The
coatings are viscoelastic layers with the same rubbery ($E_{\mathrm{r}0}$)
and glassy ($E_{\mathrm{r}\infty }$) modulus of the bulk, but with different
relaxation times $\protect\tau _{j}$, where $j=1$ for the innermost layer. In
particular, $\protect\tau _{j}q_{0}v_{1}=2s_{j}\protect\pi $, with $s_{j}=\left[ 10^{-1},10^{-2},10^{-3}\right] $, whereas the coating size follows
the rule $q_{0}d_{j}=2s_{j}\protect\pi $. The sliding velocity is $v/v_{1}=1$. In (b) the dashed curves represent the loss tangent of each composite
layer, as shown in the descriptive balloons (corresponding to $s_j$), whereas the continuous line is the
effective loss tangent.}
\label{composite}
\end{figure}
In Fig. \ref{bulk} we show, for a viscoelastic half space in sliding contact
with a generic rigid rough surface, the cross section (at $q_{y}=0$) of the (b) real
and (c) imaginary part of the dimensionless surface response $M_{zz}\left(
\omega \right) /\left[ 2/\left( qE_{\mathrm{r0}}\right) \right] $ (with
$\omega =qv$) as a function of the wave number $q/q_{0}$ (with $q_{0}=2\pi
/L_{0}$). The bulk is characterized by a single relaxation time $\tau
=L_{0}/v_{1}$ and by $E_{\mathrm{r}\infty }/E_{\mathrm{r}0}=10$. Several
dimensionless sliding velocities $v/v_{1}$ are reported, belonging to the set
$\left[ 0,10^{-4},10^{-3},10^{-2},\infty \right] $. For $v/v_{1}\rightarrow 0$ ($\infty $), the solid is elastically probed in its rubbery or relaxed
(glassy) regime, see also Fig. \ref{grafico1.eps}. For intermediate sliding
velocities, the roughness wavelengths probe the rubber at different degrees
of stiffening, and in particular a monotonic rubber stiffening occurs at
increasing roughness frequencies [Fig. \ref{grafico1.eps}]. Fig. \ref{grafico1_1.eps} shows the imaginary part corresponding to Fig. \ref{grafico1.eps}. We observe, as expected, that, by varying the sliding speed, a
different range of roughness wavelengths can probe the rubber at the highest
dissipation. However, no cut-off mechanism occurs in either Fig. \ref{grafico1.eps} or \ref{grafico1_1.eps}, as previously discussed.
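For reference, the classical bell shape of the loss tangent of such a single-relaxation-time solid can be reproduced in a few lines. The modulus form $E(\omega )=E_{\mathrm{r}\infty }+(E_{\mathrm{r}0}-E_{\mathrm{r}\infty })/(1+i\omega \tau )$, consistent with the rubbery and glassy limits above, is a standard one-term model and is our own assumption for this sketch.

```python
import numpy as np

E0, Einf = 1.0, 10.0   # E_inf/E_0 = 10, as in the figure captions

def loss_tangent(omega_tau):
    """Im[E]/Re[E] for E(w) = E_inf + (E_0 - E_inf)/(1 + i w tau)."""
    E = Einf + (E0 - Einf) / (1.0 + 1j * omega_tau)
    return E.imag / E.real

wt = np.geomspace(1e-3, 1e3, 601)
lt = loss_tangent(wt)
i_peak = int(np.argmax(lt))
# Bell shape: dissipation vanishes in both the rubbery (w*tau << 1) and
# glassy (w*tau >> 1) limits; the peak sits at w*tau = sqrt(E_0/E_inf).
print(f"peak loss tangent {lt[i_peak]:.2f} at omega*tau = {wt[i_peak]:.2f}")
# -> peak loss tangent 1.42 at omega*tau = 0.32
```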
In Fig. \ref{coating} the case of a viscoelastic half space coated with an
elastic layer in sliding contact with a rigid rough surface is reported. In
particular, we show the cross section (at $q_{y}=0$) of the (b) real part of
the dimensionless surface response $M_{zz}\left( \omega \right) /\left[
2/\left( qE_{\mathrm{r0}}\right) \right] $ and (c) effective composite loss
tangent $\mathrm{Im}\left[ M_{zz}\left( \omega \right) ^{-1}\right] /\mathrm{Re}\left[ M_{zz}\left( \omega \right) ^{-1}\right] $ (with $\omega =qv$) as
a function of the wave number $q/q_{0}$ (with $q_{0}=2\pi /L_{0}$). The bulk
is characterized by a single relaxation time $\tau =L_{0}/v_{1}$ and $E_{\mathrm{r}\infty }/E_{\mathrm{r}0}=10$, whereas the coating has a reduced
elastic modulus $E_{\mathrm{r}0}$. The dimensionless coating thickness
$q_{0}d$ is varied in the set $\left[ 0,9.4,31,63,94,190,\infty \right]
10^{-3}$, with $v/v_{1}=0.02$ (unless otherwise specified).
In Fig. \ref{grafico2.eps}, the red curve corresponds to the limiting case
of an elastic coating with a negligible thickness; thus, the composite
undergoes a monotonic stiffening at increasing roughness frequencies, as
previously reported in Fig. \ref{grafico1.eps}. For increasing coating size
$q_{0}d$, interestingly, the effective surface response shows a minimum for
intermediate frequencies, whereas both large and small wavelengths probe the
composite in its compliant regime (corresponding to the rubber relaxed
modulus $E_{\mathrm{r}0}$). Thus, for such a composite, increasing the
small-scale roughness content is expected not to affect the true contact
area, as previously demonstrated with the arguments of Sec. \ref{results}.
In terms of effective loss tangent, Fig. \ref{grafico3.eps} shows as red
curves the two limiting cases of a coating with negligible thickness ($q_{0}d=0$) and with infinite thickness ($q_{0}d\rightarrow \infty $). For
$q_{0}d\rightarrow \infty $ the contact occurs under pure elasticity,
obviously determining a null dissipation and, correspondingly, a null loss
tangent. For intermediate coating sizes, the loss tangent loses the
classical bell shape (in the log scale) and, interestingly, a dissipation
cut-off frequency $q_{\mu }$ appears, so that wavelengths smaller than
$\approx q_{\mu }^{-1}$ do not quantitatively probe the viscoelastic bulk,
with no resulting contribution in terms of hysteretic friction (even
considering that those small-scale wavelengths are in full contact with the
composite). Thus, this behaviour generates the physical scenario presented
in Sec. \ref{results}.
Finally, Fig. \ref{composite} shows the case of (a) a composite block
constituted by a rubber bulk coated by three rubber layers (with different
rheologies) in sliding contact with a rigid rough surface. In particular, we
show the cross section (at $q_{y}=0$) of the (b) effective composite loss
tangent $\mathrm{Im}\left[ M_{zz}\left( \omega \right) ^{-1}\right] /\mathrm{Re}\left[ M_{zz}\left( \omega \right) ^{-1}\right] $ (with $\omega =qv$) as a function of the wave number $q/q_{0}$ ($q_{0}=2\pi /L_{0}$).
The bulk is characterized by a single relaxation time $\tau =L_{0}/v_{1}$
and $E_{\mathrm{r}\infty }/E_{\mathrm{r}0}=10$, whereas the coatings are
viscoelastic layers with the same rubbery ($E_{\mathrm{r}0}$) and glassy ($E_{\mathrm{r}\infty }$) modulus of the bulk, but with different relaxation
times $\tau _{j}$, where $j=1,2,3$ ($1$ for the innermost layer). In
particular, $\tau _{j}q_{0}v_{1}=2s_{j}\pi $, with $s_{j}=\left[
10^{-1},10^{-2},10^{-3}\right] $, whereas the coating size follows the rule
$q_{0}d_{j}=2s_{j}\pi $. The sliding speed is set to $v/v_{1}=1$. In Fig. \ref{grafico4.eps} the dashed curves represent the loss tangent of each
composite layer, as shown in the descriptive balloons, whereas the continuous
line is the effective loss tangent. We observe that a properly designed
arrangement of the composite layers allows one to obtain effective block
dissipation characteristics which are almost constant and independent of
the probing roughness wavelengths, e.g. in order to maximize the viscoelastic
friction. More generally, however, our model can be adopted to
determine the optimal composite packaging which provides tailored contact
properties, such as friction or adherence. Furthermore, the interface can be
designed to be roughness specific, i.e. providing contact mechanics
properties only over a windowed roughness spectral content, e.g. for
biological sensing, bio-adhesion or tire grip control, to cite a few.
\section{Conclusions}
We have presented the first numerical contact mechanics model for (randomly or deterministically)
rough surfaces to be applied for the prediction of the rough contact mechanics
of a general viscoelastic block, with graded rheology, in steady sliding contact with a rough rigid surface.
In particular, our model is able to handle both stepwise and continuously-graded block rheologies, with the (reduced) computational effort typical of
the residuals molecular dynamics scheme.
We have critically discussed the role of small-scale wavelengths in rubber friction and contact area, and we have shown for the first time that
the rough contact mechanics exhibits effective interface properties which converge to asymptotes upon increase of the small-scale roughness content,
provided a realistic description of the confinement rheology is adopted. Furthermore, we have shown that our model can be
effectively adopted for the design of composite-layer packagings providing
contact mechanics characteristics (such as friction and adhesion) tailored to be roughness specific,
e.g. for biological sensing, bio-adhesion or tire grip control, to cite a few possible applications.
\label{conclusions}
\begin{acknowledgments}
DC gratefully acknowledges the support of the European Research Council (ERC starting researcher grant 'INTERFACES', No. 279439).
\end{acknowledgments}
\newpage
\section{GAN architectures}
\begin{table}[h]
\normalsize
\centering
\caption{\small Hidden layer structures and configurations for MMC-GAN }\label{tab:config}
\begin{tabular}{|c|c|l|}
\hline
Num. &GAN & $100$, $100$, $100$, $100$ \\
\cline{2-3}
nodes &CGAN & $100$, $100$, $100$, $100$ \\
\cline{2-3}
(G) &WGAN & $100$, $100$, $100$, $100$ \\
\hline
\hline
&(1) & $100$, $200$ \\
\cline{2-3}
Num.&(2) & $100$, $200$, $400$ \\
\cline{2-3}
nodes & (3) & $100$, $200$, $400$, $1000$ \\
\cline{2-3}
(D) &(4) & $100$, $200$, $400$, $1000$, $1000$\\
\cline{2-3}
&(5)& $100$, $200$, $600$, $400$, $1000$, $600$, $1000$ \\
\hline
\end{tabular}
\end{table}
\end{appendices}
\section{CONCLUSION}
Motivated by the increasing demand for domestic service robots, we proposed a method to understand ambiguous language instructions for carry-and-place tasks. More specifically, we proposed a method that uses the user's instruction and the scene situation to predict target areas suitable for the task, taking into account the available space and the robot's physical abilities.
The following sums up our key contributions.
\begin{itemize}
\item[$\bullet$ ] We built a multimodal dataset associating linguistic and visual inputs of target areas.
\item[$\bullet$ ] We proposed the MultiModal Classifier GAN (MMC-GAN), which predicts suitable target areas from multimodal (linguistic and visual) inputs. Our results emphasize that the generalization ability of MMC-GAN outperforms a DNN-based classifier by $4.1\%$ in accuracy.
\item[$\bullet$ ] Several variations of MMC-GAN were developed with cutting-edge methods derived from GAN, namely WGAN and CGAN. The best results were obtained with a CGAN architecture, with $87.5\%$ accuracy.
\end{itemize}
In future work, we plan to extend MMC-GAN with a fully connected structure \Update(Extractor, Discriminator and Generator trained simultaneously)\Done, which should be more convenient. A physical experimental study with non-expert users and the HSR is also planned as future work.
\section{EXPERIMENTS}\label{sec:exp}
\subsection{Setup}
The parameter settings of MMC-GAN are summarized in Table \ref{tab:param}.
\begin{table}[b]
\vspace{-2.5mm}
\vspace{-2.5mm}
\normalsize
\centering
\caption{\small Parameter settings and structures of MMC-GAN}\label{tab:param}
\begin{tabular}{|c|c|l|}
\hline
&Opt. & Adam (Learning rate $=0.00005$, \\
&method & $\beta_1=0.5$, $\beta_2=0.9$), $\lambda=0.2$ \\
\hline
&Batch & $64$ (E), $50$ ($G$ and $D$)\\
\hline
\hline
Num. &GAN & $100$, $100$, $100$, $100$ \\
\cline{2-3}
nodes &CGAN & $100$, $100$, $100$, $100$ \\
\cline{2-3}
(G) &WGAN & $100$, $100$, $100$, $100$ \\
\hline
\hline
&(1) & $100$, $200$ \\
\cline{2-3}
Num.&(2) & $100$, $200$, $400$ \\
\cline{2-3}
nodes & (3) & $100$, $200$, $400$, $1000$ \\
\cline{2-3}
(D) &(4) & $100$, $200$, $400$, $1000$, $1000$\\
\hline
\end{tabular}
\end{table}
\begin{table*}[h]
\normalsize
\caption{\small Mean validation/test-set accuracy and sample standard deviation for the best model of each method, considering the instruction only (I), the instruction and context only (I+C), the visual features only (V), and the instruction, context and visual features (I+C+V). These results are based on ten random experiments. (*) indicates partial results for configurations in which not all experiments converged.}
\label{tab:results}
\centering
\begin{tabular}{cc|cc|cc|cc|cc}
\hline
\multicolumn{1}{c}{}&\multicolumn{1}{c|}{$[\%]$} &\multicolumn{8}{c}{Input features }\\
\hline
&GAN& \multicolumn{2}{c|}{\bf I} & \multicolumn{2}{c|}{\bf I + C}& \multicolumn{2}{c|}{\bf V}&\multicolumn{2}{c}{\bf I + C + V} \\
\cline{3-10}
Method & type & Valid & Test& Valid & Test& Valid & Test & Valid& Test\\
\hline
\hline
Baseline &- &61.4$\pm$0.5&59.4$\pm$0.3 &60.7$\pm$1.5 &60.2$\pm$0.8& 63.3$\pm$0.5 & \bf 61.1$\pm$1.1&83.1$\pm$1.7 &82.2$\pm$2.8 \\
\hline
Ours & GAN &59.3$\pm$1.4&57.5$^*$$\pm$3.3 &58.0$\pm$2.2 &59.5$^*$$\pm$ 3.7 &60.3$\pm$1.0 & 58.1$\pm$1.5&85.9$\pm$0.4&85.3$\pm$1.2\\
\hline
Ours & CGAN& 60.1$\pm$1.5& 56.4$^*$$\pm$3.7& 58.7$\pm$1.7&56.7$^*$$\pm$4.2 &57.0$\pm$0.8 & 58.2$\pm$1.0&86.6$\pm$0.3 &\bf 86.2$\pm$0.8 \\
\hline
Ours & WGAN &62.0$\pm$1.5 & \bf 61.8$\pm$2.1& 63.3$\pm$0.4 &\bf 62.7$\pm$2.1 &59.6$\pm$3.1 &59.7$\pm$1.9&84.1$\pm$0.4 &84.4$\pm$1.1\\
\hline
\end{tabular}
\vspace{-2.5mm}
\vspace{-2.5mm}
\end{table*}
\subsubsection{Extractor}
As discussed in the previous section, \Update the visual inputs ${\bf v}_{d}$ and ${\bf v}_{meta}$ \Done are processed in the Extractor $E$ through a CNN. This CNN (see Fig. \ref{fig:mmcgan}) was trained over $100$ epochs to obtain stable latent-space features. The latent features were then extracted from the model for which the validation-set accuracy had reached its maximum value. Note that $E$ is trained using the labeled data. From the CNN output, we obtain ${\bf x}_v$ of dimension $d_v=200$.
In parallel, the paragraph vector model was trained on a corpus of 4.72 million sentences, similarly to \cite{sugiura2018grounded}. The output of the PV-DM generated ${\bf x}_{inst}$ and ${\bf x}_c$, which have dimensions $d_{inst}=200$ and $d_{c}=200$, respectively. We then obtained the features ${\bf x}_{real}$ with $d_{real}=600$.
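The feature assembly described above can be sketched as a simple concatenation. The ordering of the three embeddings in ${\bf x}_{real}$ is our assumption, and the random vectors merely stand in for the trained CNN and PV-DM outputs.

```python
import numpy as np

# Dimensions from the text: d_v = d_inst = d_c = 200, d_real = 600.
d_v, d_inst, d_c = 200, 200, 200

rng = np.random.default_rng(0)
x_v = rng.normal(size=d_v)        # stand-in for CNN latent visual features
x_inst = rng.normal(size=d_inst)  # stand-in for PV-DM instruction embedding
x_c = rng.normal(size=d_c)        # stand-in for PV-DM context embedding

# Concatenation order is hypothetical; only the total dimension matters here.
x_real = np.concatenate([x_inst, x_c, x_v])
print(x_real.shape)  # -> (600,)
```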
\subsubsection{Generator}
For the MMC-GAN, we developed several versions of the \Update Generator $G$ and Discriminator $D$ \Done as summarized in Table \ref{tab:param}. Generator $G$ is composed of four layers using ReLU activation functions, except for the last layer which uses a tanh activation function. Batch normalization was also applied to these layers. The random variable ${\bf z}$ is defined as ${\bf z}={\bf z}_1\sim \mathcal{N}(0,N)$ with a dimension $d_z=100$ in the GAN and \Update WGAN (Wasserstein GAN) approaches \Done. For the \Update CGAN (Conditional GAN) \Done approach ${\bf z}= \{{\bf z}_1,{\bf c}\}$ is used and $\bf c$ is sampled from a categorical distribution. For the latter case, the input dimension is $d_z=104$ because we consider a 4-class problem. The output of the generator has dimension $d_{fake}=600$.
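A minimal forward-pass sketch of $G$ is given below, with randomly initialized (untrained) weights. Reading the text as ReLU hidden layers with batch normalization followed by a tanh output of dimension $d_{fake}=600$ is our interpretation of Table \ref{tab:param}, not the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, n_out):
    # Randomly initialized dense layer (weights would be learned in training).
    w = rng.normal(0.0, 0.1, (x.shape[-1], n_out))
    return x @ w

def batch_norm(x, eps=1e-5):
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def generator(z, hidden=(100, 100, 100), d_fake=600):
    h = z
    for n in hidden:
        h = np.maximum(0.0, batch_norm(linear(h, n)))  # ReLU + batch norm
    return np.tanh(linear(h, d_fake))                  # bounded fake features

# GAN/WGAN case: z = z1 ~ N(0, I) with d_z = 100; batch size 50 for G and D,
# per the parameter table. (For CGAN, z = {z1, c} with d_z = 104, where c is
# sampled from a categorical distribution.)
z = rng.normal(size=(50, 100))
x_fake = generator(z)
print(x_fake.shape)  # -> (50, 600)
```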
\subsubsection{Discriminator}
\Update
For $D$ we consider several structures ($(1)$ to $(4)$) for which we varied the number of layers and/or nodes. These structures are detailed in Table \ref{tab:param}. \Done
\subsection{Results}
Qualitative results of MMC-GAN prediction are shown in Fig. \ref{fig:samples} which illustrates typical true and false predictions.
\begin{figure}[t]
\centering
\includegraphics[scale=0.38]{qresults.png}
\caption{\small Prediction samples (left corner true/predicted) for given instructions, contexts and depth images. The samples are mostly correctly classified except for the bottom samples.}\label{fig:samples}
\vspace{-2.5mm}
\vspace{-2.5mm}
\end{figure}
These errors are more thoroughly analyzed in Section \ref{sec:error_analysis}.
We evaluated the accuracy of the different MMC-GAN variations in a standard way; that is, the test-set accuracy when the best validation accuracy is obtained. Hence, these results more objectively reflect the performance of MMC-GAN models on unseen data.
The results reported in Table \ref{tab:results} are based on the mean accuracy and sample standard deviation over ten trials. Our methods (MMC-GANs) are compared with a baseline method (a simple DNN) on the same dataset. For a fair comparison, the DNN has the same architecture as $D$ (structure $4$ in Table \ref{tab:param}) and the same input ${\bf x}_{real}$.
The results in the last column (I+C+V) confirm that the MMC-GAN approach outperforms DNN-based classification. This suggests a generalization ability of GAN-based methods independently of their type, although the best results were obtained with the CGAN structure.
\Update
Moreover, to assess the benefits of our multimodal approach, we repeated these experiments with the instruction sentence only I (${\bf x}_{real}={\bf x}_{instr}$), with the instruction and context sentences I+C (${\bf x}_{real}=\{{\bf x}_{instr}, {\bf x}_{c}\}$) and with vision input only V (${\bf x}_{real}={\bf x}_{v}$) in comparison with our multimodal approach I+C+V.
The results in the first two columns of Table \ref{tab:results} show that linguistic features alone are not enough to correctly predict the likelihood of a target area. The context sentence does not statistically significantly improve the results because the context is not informative enough. Nonetheless, this context is particularly useful in real experiments for guiding the robot's behavior. Similarly, visual input alone does not give accurate results, as reported in the third column. In contrast, the multimodal approach gives accurate results that outperform the other approaches by more than $20\%$. These results validate our method for disambiguating instructions. \Update We can also emphasize that our model captures the correspondence between the linguistic instruction and the scene. This is illustrated by the middle right sample in Fig. \ref{fig:samples}, which is rejected since the visual input does not correspond to the instruction mentioning a drawer.\Done
The results reported for the GAN and CGAN networks are partial, since several experiments failed to converge ($5$ and $2$ for GAN, $4$ and $7$ for CGAN). Tweaking the network hyperparameters would certainly improve the convergence rate. However, it is relevant to emphasize this limitation of GAN-based approaches, which has been studied in many recent works \cite{salimans2016improved, gulrajani2017improved}. In this case, WGAN can be an interesting alternative, since it offers better stability, but at the cost of a slight accuracy decrease in our tests.
\Done
Finally, we evaluated our approach for the different variations of $D$ and $G$. The best results, considering the full set of features, for each type of architecture are reported in Table \ref{tab:results2}. These results emphasize that, independently of their structure, GAN-based classifiers outperform the DNN. The prediction results of MMC-GAN are improved by $4.8\%$ compared with the baseline for structure $4$.
\subsection{Error Analysis}\label{sec:error_analysis}
In this section, we analyze the categorization results obtained by our MMC-GAN approach. The confusion matrix given in Table \ref{tab:conf} shows the classification results for our best model ($87.5\%$ accuracy). It can immediately be noticed that most of the confusion in the categorization task is between classes $A4$ and $A3$, as well as between classes $A2$ and $A1$. One probable reason is that the difference in the depth image between classes $A4$ and $A3$ (respectively $A2$ and $A1$) is not sufficient because of the noise in the depth image. Our classifier mainly relies on object depth values, which, when smeared with noise, may not be detected. Conversely, noise pixels may also be detected as objects. For instance, in the bottom right image in Fig. \ref{fig:samples}, noisy elements (black pixels) appear horizontally on the front side of the desk, which may explain the misclassification as $A2$ instead of $A1$. Furthermore, the bottom left image illustrates a case where the labeling strategy is not a strict process: this sample could also be labeled as $A3$ since some space appears between the obstacles.
\begin{table}[t]
\normalsize
\caption{\small Maximum test set accuracy for the different network structures.}
\label{tab:results2}
\centering
\begin{tabular}{c c|c|c|c|c}
\hline
\multicolumn{1}{c}{}&\multicolumn{1}{c}{GAN}&\multicolumn{4}{|c}{Architecture}\\
\cline{3-6}
Method & type& (1) & (2) & (3)& (4) \\
\hline
\hline
Baseline&- &80.5&81.3&81.3& 83.4 \\
\hline
Ours& GAN &82.3 &83.3& 85.7& 87.0\\
\hline
Ours &CGAN &\bf 85.6 & \bf 85.7&\bf 86.7& \bf{87.5}\\
\hline
Ours&WGAN &79.2&80.2&83.7&86.3 \\
\hline
\end{tabular}
\vspace{-2.5mm}
\end{table}
\begin{table}[h]
\normalsize
\caption{\small Confusion matrix of the test set for the best model.}\label{tab:conf}
\centering
\begin{tabular}{|c|c|cccc|}
\hline
\multicolumn{2}{|c}{}&\multicolumn{4}{|c|}{\bf Prediction }\\
\cline{3-6}
\multicolumn{1}{|c}{}&\multicolumn{1}{c|}{\bf $[\%]$}&$A1$ &$A2$ & $A3$ & $A4$ \\
\hline
&$A1$ &\cellcolor{gray!120}{\bf 94}& \cellcolor{gray!5}2&0& 4 \\
&$A2$ & \cellcolor{gray!10}4& \cellcolor{gray!80}{\bf 89} &\cellcolor{gray!10}3&\cellcolor{gray!10}4\\
\rot{\rlap{\bf True}}&$A3$ & 0 & \cellcolor{gray!10}4 &\cellcolor{gray!78}{\bf 82} &\cellcolor{gray!40}14\\
&$A4$ &0&\cellcolor{gray!2}1&\cellcolor{gray!20}9 &\cellcolor{gray!90}{\bf 90}\\
\hline
\end{tabular}
\vspace{-2.5mm}
\vspace{-2.5mm}
\end{table}
Fortunately, for application on HSR, the robot would perform the task only for target areas classified as $A1$ or $A2$. Hence, the accuracy of our system drastically increases, leading to a potential accuracy of over $95\%$.
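This binarized figure can be checked directly from the row-normalized percentages in Table \ref{tab:conf}: collapsing $\{A1, A2\}$ against $\{A3, A4\}$ and averaging per-class accuracy (an unweighted macro average, an assumption since the paper does not state the exact aggregation) already exceeds $95\%$.

```python
# row-normalized confusion matrix from Table tab:conf (percent, true x predicted)
conf = {
    "A1": [94, 2, 0, 4],
    "A2": [4, 89, 3, 4],
    "A3": [0, 4, 82, 14],
    "A4": [0, 1, 9, 90],
}
labels = ["A1", "A2", "A3", "A4"]
placeable = {"A1", "A2"}          # classes where HSR would act

def binarized_accuracy(conf):
    # collapse the four classes into placeable / not-placeable and take
    # the unweighted average of per-class accuracy (rows are percentages)
    per_class = []
    for true, row in conf.items():
        correct = sum(p for lbl, p in zip(labels, row)
                      if (lbl in placeable) == (true in placeable))
        per_class.append(correct)
    return sum(per_class) / len(per_class)

acc = binarized_accuracy(conf)
```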
\section{Introduction}
With the growth of an aging population, improving the quality of life of this segment of the population is a major issue for society.
Robots represent a credible solution providing support to elderly and/or disabled people.
Recently, domestic service robot (DSR) hardware and software are being standardized, and many studies have been conducted \cite{piyathilaka2015human, smarr2014domestic,iocchi2015robocup}. However, in most DSRs, the communication ability is limited. For communicative DSRs, it is crucial to be able to interpret user commands for object manipulation tasks. These commands are naturally expressed in language.
In this context, our work focuses on language understanding for ``carry-and-place'' tasks. We define carry-and-place tasks as those in which the robot moves an object from an initial area to a target area. The main challenge of using natural language is that linguistic information may be ambiguous or insufficient: in our configuration, we assume that the target area for placing the object is missing or insufficiently specified in the instruction, {\it e.g.}, ``Put away the milk and cereal.''
A simple approach consists of asking the user for the missing information ({\it e.g.}, Robocup@Home \cite{iocchi2015robocup}). Nevertheless, it sometimes takes more than one minute to disambiguate the instruction, which is cumbersome. More advanced approaches \cite{kollar2013learning, gemignani2015language} rely on linguistic knowledge with the development of dialog to fill in the missing slots in the language understanding model. Unfortunately, these methods can also be time-consuming.
\begin{figure}[tp]
\centering
\includegraphics[scale=0.28]{eyecatch_v5.png}
\caption{\small Architecture of our method to solve ambiguous instructions based on MultiModal Classifier GAN (MMC-GAN).}
\label{fig:method}
\vspace{-2.5mm}
\vspace{-2.5mm}
\vspace{-2.5mm}
\end{figure}
In this work, we propose a method that allows us to predict the likelihood of target areas for ambiguous instructions by taking into account the physical ability of the robot as well as the available space. We solve this problem by introducing a generative adversarial net (GAN) \cite{goodfellow2014generative} classifier that is able to address multimodal inputs, called the Multi-Modal Classifier GAN (MMC-GAN).
Recently, several studies have employed GAN-based approaches for classification tasks \cite{springenberg2015unsupervised, sugiura2018grounded}; however, they only use a single modality.
In contrast, our approach is based on linguistic inputs (user instructions and scene context) as well as visual inputs related to the different target area candidates. Using a latent space representation of these inputs, MMC-GAN can address both modalities through a unified framework. As a result, our classification method achieves an accuracy exceeding $80\%$. A demonstration video is available at this URL\footnote{\protect\url{https://youtu.be/_YQuziz4eGY}}.
The key contributions of this paper can be summarized as follows:
\begin{itemize}
\item[$\bullet$] A multi-modal GAN classifier based on the user instruction, task context, and depth image of a given target area is introduced in Section \ref{sec:prop}.
\item[$\bullet$] Inspired by the wide literature on GAN, we propose and evaluate several variations of MMC-GAN in Section \ref{sec:exp}.
\end{itemize}
\section{Carry-and-Place Multi-Modal Data Set}\label{sec:label}
In this section, we describe the carry-and-place multi-modal data set used for training and evaluating MMC-GAN. To the best of our knowledge, such a data set is unavailable in the literature, and it has consequently been built specially for this work.
\subsection{Overview of the Data Set Construction}\label{sec:data_constr}
The procedure given below describes the different steps used to build the carry-and-place multi-modal data set. Each of these steps will be detailed in the following sections.
\begin{itemize}
\item[$1$.] Set up an area with everyday objects.
\item[$2$.] Record the scene with a camera. \Update
\item[$3$.] Generate a pseudo-instruction sentence from randomly selected objects and pieces of furniture.
\item[$4$.] Generate a pseudo-context sentence from randomly selected objects and pieces of furniture.
\item[$5$.] Instruct a labeler to rewrite a natural sentence based on $3$ and $4$, according to the scene.
\item[$6$.] Instruct a labeler to label the samples according to $5$ and $2$.
\end{itemize}
\Done
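Steps $3$--$4$ above can be sketched as a small template-based generator. The phrase inventories below are hypothetical placeholders; the actual verb/noun/preposition lists used for the data set are not given in the paper.

```python
import random

rng = random.Random(7)

# hypothetical phrase inventories (assumptions, not the paper's lists)
objects = ["the coke bottle", "the towel", "the milk box"]
furniture = ["the table", "the shelf", "the drawer"]
verbs = ["Put away", "Move", "Carry"]

def pseudo_instruction(with_target):
    obj = rng.choice(objects)
    verb = rng.choice(verbs)
    if with_target:                     # condition I_Y: mention furniture F
        return f"{verb} {obj} to {rng.choice(furniture)}."
    return f"{verb} {obj}."             # condition I_N: no target furniture

def pseudo_context(obj):
    # context sentence relating the robot to the carried object O
    return f"The robot is holding {obj}."

sent_y = pseudo_instruction(with_target=True)
sent_n = pseudo_instruction(with_target=False)
ctx = pseudo_context("the towel")
```

A labeler would then rewrite such pseudo-sentences into natural language (step $5$).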
\subsection{Depth Image Inputs} \label{sec:depth}
Each linguistic input, namely ${\bf w}_{inst}$ and ${\bf w}_c$, should be associated with a pair of visual features $\{{\bf v}_d, {\bf v}_{meta}\}$ obtained as follows. To characterize the variability of daily life environments, different types of tables, drawers, shelves and desks, from various areas of our office building, were used to build the set ${\bf T}$. The dataset of visual inputs ${\bf v}_d$ is then based on the depth image of a candidate area $T_i$ ($T_i \in {\bf T}$) with different levels of clutter and layout variations, as illustrated in Fig. \ref{fig:data}.
\begin{figure}[b]
\centering
\includegraphics[scale=0.45]{data.png}
\caption{\small Samples of the $1282$ depth images (right) ${\bf v}_d$ and their corresponding RGB images (left). Only the depth data is used.}
\label{fig:data}
\vspace{-2.5mm}
\end{figure}
The depth images were collected with an \textit{Asus Xtion Pro live} depth sensor, identical to the one mounted on HSR. As summarized in Table \ref{tab:data_coll}, $1282$ valid images were collected from $37$ target areas belonging to four drawers, four tables, four desks and two shelves. For each target area $T_i$, several configurations were recorded. We used $19$ objects as obstacles, and varied the obstacle layout for each image recorded.
\begin{table}[t]
\normalsize
\caption{\small Data collection statistics }
\label{tab:data_coll}
\centering
\begin{tabular}{lccc}
\hline
$\#$ &Instance &Target areas &Images\\
\hline
Table &4&18&321 \\
Drawer &4 &10&336\\
Desk&4&8&425 \\
Shelf&2&5&200 \\
\hline
$\sum$ &{\bf 14}& {\bf 37}& {\bf 1282}\\
\hline
\end{tabular}
\vspace{-2.5mm}
\vspace{-2.5mm}
\end{table}
Note that the depth images recorded by this type of sensor are particularly noisy. Blobs related to missing information (represented as black pixels in Fig. \ref{fig:data}) deteriorate the quality of the image and add complexity to the classification task.
Finally, ${\bf v}_{d}$ was coupled with the corresponding ${\bf v}_{meta}$. The input ${\bf v}_{meta}$ enables us to differentiate the observation points of the robot ({\it i.e.}, the image perspective).
\subsection{Instructions and Contexts}
\Update
This dataset relates a linguistic input to each visual input (see Fig. \ref{fig:features}). We assume that a pseudo-sentence is randomly generated from a randomly selected target piece of furniture and/or object.
A labeler is then instructed to rewrite a natural language sentence based on a randomly selected verb phrase, noun phrase and prepositional phrase. The labeler is also instructed to rewrite the natural sentences under two conditions, so as to obtain two sets $Y$ and $N$ characterized by the instructions $\{I_Y,I_N\}$:
\begin{itemize}
\item $I_{Y}$: A target piece of furniture $F$ should be mentioned in the instruction, e.g., ``Move the coke bottle to the kitchen and put it down on the table''
\item $I_N$: The instruction should not contain any target piece of furniture, e.g., ``Move the coke bottle to the kitchen and put it down''
\end{itemize}
The context sentence is obtained in a similar manner, with the labeler writing a natural sentence. The context sentence ${\bf w}_{c}$ refers to the situation of the robot with respect to the carried object $O$. Example context sentences are:
``$O$ is held by the robot'', ``$O$ is already on $F$'', ``The robot is holding $O$'' and ``$O$ can be found on $F$''.
Note that we considered the case where instruction/context sentences contain a target piece of furniture $F$.
\Done
\begin{figure}[tp]
\centering
\includegraphics[scale=0.4]{feature.png}
\caption{\small Example of multimodal input composed of a depth image ${\bf v}_d$ and the associated meta-data ${\bf v}_{meta}$ (camera height, target height, camera angle) with the instruction ${\bf w}_{inst}$ and context ${\bf w}_{c}$.}
\label{fig:features}
\vspace{-2.5mm}
\vspace{-2.5mm}
\end{figure}
\subsection{Labeling Strategy}\label{sec:labeling}
The samples were manually labeled into categories by a robotic expert. The categories are based on the physical abilities of HSR and the scene context. The labeling has been performed by a single labeler; having multiple labelers will be considered in future work.
\Update
The labeling strategy should emphasize several concepts. First, accessibility of the action area: areas that are too far, too high (approx. above $0.8$ m) or too low (approx. below $0.6$ m) for the robot should be penalized. Second, scene clutter: collisions with obstacles, or even with the edge/border of the given piece of furniture, should be avoided. Moreover, the scene should match the initial instruction, in particular when target pieces of furniture are specified.
With the knowledge of HSR's capability ({\it e.g.}, arm's reach and gripper size), for each sample ${\bf x}_{raw}$, we specify the labeling rule as follows.
\begin{itemize}
\item[($A1$)] The target area $T_i$ is very likely in terms of linguistic and visual information: the area is within the natural reach of the robot and has enough space.
\item[($A2$)] The target area $T_i$ is likely in terms of linguistic and visual information; however, the area is not within the natural reach of the robot or is slightly cluttered.
\item[($A3$)] The target area $T_i$ is unlikely in terms of linguistic and/or visual information: there is limited space available on $T_i$, which might lead to collisions with obstacles.
\item[($A4$)] The target area $T_i$ is very unlikely in terms of linguistic and/or visual information: there is not enough space on $T_i$, with obstacles preventing the robot from placing the object.
\end{itemize}
\Done
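The four-level rule above can be summarized as a toy decision function. The thresholds and boolean inputs below are assumptions for illustration; the actual labeling was performed manually by a robotics expert, not by such a function.

```python
def label_area(reachable, matches_instruction, free_space_ratio):
    # toy decision rule mirroring the A1-A4 definitions (thresholds assumed)
    if not matches_instruction or free_space_ratio < 0.1:
        return "A4"   # very unlikely: no space / contradicts instruction
    if free_space_ratio < 0.3:
        return "A3"   # unlikely: limited space, collision risk
    if not reachable or free_space_ratio < 0.6:
        return "A2"   # likely, but not within natural reach or cluttered
    return "A1"       # very likely: reachable with enough space
```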
From this labeling strategy, after removing the invalid features, our initial data set was randomly split into training, validation and test sets. We obtained the different sets given in Table \ref{tab:data_table}.
\Update
It is also interesting to mention that, during the labeling phase, instructions referring to tables (resp. desks) could match images of desks (resp. tables), depending on the evaluation of the labeler.
\Done
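The random split described above can be sketched as follows. The validation/test fractions below (roughly $8.5\%$ each) are approximations of the proportions in Table \ref{tab:data_table}; the exact per-class counts in the paper differ slightly.

```python
import random

def split_dataset(n, rng, val_frac=0.085, test_frac=0.085):
    # shuffle sample indices and carve out validation / test portions
    idx = list(range(n))
    rng.shuffle(idx)
    n_val = round(n * val_frac)
    n_test = round(n * test_frac)
    return idx[n_val + n_test:], idx[:n_val], idx[n_val:n_val + n_test]

train, val, test = split_dataset(1282, random.Random(0))
```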
\begin{table}[h]
\normalsize
\caption{\small Statistics of the data set ${\bf x}_{real}$}
\label{tab:data_table}
\centering
\begin{tabular}{cccc|c }
\hline
$\#$ & Train& Valid& Test& $\sum$\\
\hline
$A1$ &158&29&25&{\bf 212} \\
$A2$&359&34&39&{\bf 432}\\
$A3$&350&26&22 &{\bf 398} \\
$A4$&203&17&20 & {\bf 240}\\
\hline
$\sum$& {\bf 1070} ({\it 83 $\%$}) & {\bf 106} ({\it 8.5 $\%$}) & {\bf 106} ({\it 8.5 $\%$}) & \multicolumn{1}{ |c}{\bf 1282}\\
\hline
\vspace{-2.5mm}
\vspace{-2.5mm}
\vspace{-2.5mm}
\end{tabular}
\end{table}
\section{LAtent Classifier-GAN}\label{sec:def}
To solve the target task, we extend the LAtent Classifier GAN (LAC-GAN) proposed in \cite{sugiura2018grounded}. Our choice is motivated by the data augmentation property of the GAN framework that is particularly interesting in robotics. In contrast to computer vision or speech recognition, a large-scale dataset cannot be collected due to the diversity of environments and robotic setups. Hence, the data augmentation property of GAN allows us to design/use complex models without overfitting to the dataset.
Moreover, from LAC-GAN's initial structure, we can exploit latent space features that are more suitable for processing multimodal inputs.
\subsection{Generative Adversarial Nets}
The GAN framework was initially proposed in \cite{goodfellow2014generative} as a generative framework and is composed of two networks, a Discriminator $D$ and a Generator $G$, which compete with each other. $G$ creates fake data by mimicking the real data distribution while $D$ discriminates the real data from the fake data. By mutual enhancement, $G$ is trained to create more realistic fake data, while the discrimination ability of $D$ improves.
More formally, $G$ is a network with a multi-dimensional random input ${\bf z}$ that outputs the data ${\bf x}_{fake}$:
\begin{equation}\label{equ:G_out}
{\bf x}_{fake}= G({\bf z}).
\end{equation}
The input ${\bf z}$ is generally sampled from a standard normal distribution and its details for our case study are given in Section \ref{sec:exp}.
In contrast, $D$, whose task is to discriminate the real data ${\bf x}_{real}$ from ${\bf x}_{fake}$, is alternately fed with ${\bf x}={\bf x}_{real}$ or ${\bf x}={\bf x}_{fake}$ from a source $S \in \{real,fake\}$. $D$'s output is then given by
\begin{equation}\label{equ:D_out}
p_D(S=real|{\bf x})=D({\bf x}).
\end{equation}
The cost functions of $G$ and $D$, respectively $J_G$ and $J_D$, are defined as follows:
\begin{equation}\label{equ:J_S}
J_S=-\frac{1}{2} \mathbb{E}_{{\bf x}_{real}} \log D ({\bf x}_{real}) - \frac{1}{2} \mathbb{E}_{\bf z} \log(1-D({\bf x}_{fake})),
\end{equation}
leading to $J_D=J_S$ and $J_G=-J_S$.
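A minimal numeric sketch of this minimax loss is given below; the discriminator outputs are assumed placeholder probabilities, averaged over a small batch as the expectations in \eqref{equ:J_S} suggest.

```python
import math

def gan_loss_js(d_real, d_fake):
    # J_S: average negative log-likelihood over the two sources
    j_real = -0.5 * sum(math.log(p) for p in d_real) / len(d_real)
    j_fake = -0.5 * sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return j_real + j_fake

d_real = [0.9, 0.8]   # D outputs on real samples (assumed values)
d_fake = [0.2, 0.1]   # D outputs on generated samples (assumed values)
j_s = gan_loss_js(d_real, d_fake)
j_d, j_g = j_s, -j_s  # D minimizes J_S while G minimizes -J_S
```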
\subsection{Classification using LAC-GAN }
In contrast to GAN, LAC-GAN is designed for classification tasks, in which the generated samples from $G$ are used for data augmentation. However, unlike other GAN-based classification methods based on raw inputs \cite{odena2016conditional} ({\it e.g.}, images or text), LAC-GAN uses a third component, the Extractor network $E$, to process latent space features in the GAN. Indeed, $G$ does not necessarily need to produce a raw data representation if the task is classification rather than generation.
As a result, the input of $E$ is the raw data ${\bf x}_{raw}$ while the latent space feature ${\bf x}_{real}$ is output and injected into $D$. Hence, in addition to \eqref{equ:D_out}, $D$ has a second output $ p_D({\bf y})$ which is the predicted category output. According to this new structure, the cost function of $D$ is modified as follows:
\begin{equation}\label{equ:J_D}
J_D=J_S+ \lambda J_C,
\end{equation}
where $\lambda$ is a weighting parameter and $J_C$ is a cross-entropy loss function:
\begin{equation}\label{equ:J_C}
J_C=-\sum_n \sum_{j} {y^{*}_n}_j \log p_D({y_n}_j),
\end{equation}
where ${ y^{*}_n}_j$ denotes the label of the $j$-th dimension of the $n$-th sample.
Note that $J_C$ is also the cost function of $E$ by replacing $p_D({ y_n}_j)$ with $p_E({y_n}_j)$. The global framework of LAC-GAN is illustrated in Fig. \ref{fig:lacgan}.
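The combined discriminator objective \eqref{equ:J_D} can be sketched numerically as follows. The predicted distributions, labels, adversarial term $j\_s$, and weighting $\lambda=1$ are all assumed placeholder values for a 4-class problem.

```python
import math

def cross_entropy(p_pred, y_onehot):
    # J_C from the text, summed over samples n and class dimensions j
    return -sum(y * math.log(p)
                for pred, lab in zip(p_pred, y_onehot)
                for p, y in zip(pred, lab))

p_pred = [[0.7, 0.1, 0.1, 0.1], [0.2, 0.6, 0.1, 0.1]]  # p_D(y), assumed
y_true = [[1, 0, 0, 0], [0, 1, 0, 0]]                  # one-hot labels
lam = 1.0                                              # lambda, assumed
j_s = 0.16                                             # adversarial term (assumed)
j_c = cross_entropy(p_pred, y_true)
j_d = j_s + lam * j_c                                  # J_D = J_S + lambda * J_C
```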
\begin{figure}[tp]
\centering
\includegraphics[scale=0.4]{GAN.png}
\caption{\small LAC-GAN architecture involving three components that are the Extractor, Generator and Discriminator.}
\label{fig:lacgan}
\vspace{-2.5mm}
\vspace{-2.5mm}
\end{figure}
\Update
Such a structure implies that $E$ is trained considering the category labels ${\bf y}^{*}$. The training of LAC-GAN is then divided into two phases. First, $E$ is trained knowing the labels ${\bf y}^{*}$ with the cost function \eqref{equ:J_C}. The parameters of $E$ are detailed in Section \ref{sec:prop}. Subsequently, when the set ${\bf x}_{real}$ is extracted ({\it i.e.}, $E$ training is finished ), $D$ and $G$ are alternately trained to improve the classification prediction $p_D({\bf y})$.
\Done
\begin{figure*}[tp]
\centering
\includegraphics[scale=0.35]{MMCGAN.png}
\caption{\small MMC-GAN addresses multimodal inputs through a CNN for visual inputs and a paragraph vector model (PV-DM) for linguistic inputs. The number of nodes is shown above each layer. Both convolutional (Conv) and pooling (Pool) layers use ($2$x$2$)-sized filters. A fully connected layer (FC) is used after the convolutional layers. Batch normalization (BN) and dropout (DO) operations are applied to the outputs of the first and fifth layers. } \label{fig:mmcgan}
\vspace{-2.5mm}
\vspace{-2.5mm}
\end{figure*}
\section{Proposed Method: Multimodal Classifier GAN} \label{sec:prop}
In \cite{sugiura2018grounded}, the Extractor $E$ was defined as a fully-connected feedforward DNN. However, this structure is not optimal for multimodal inputs including visual features. The structure of $E$ should hence be modified for our study.
\subsection{Multimodal inputs}
We propose MMC-GAN, which extends the LAC-GAN architecture to deal with multimodal inputs, namely linguistic and visual inputs. This emphasizes another key advantage of using latent space features: with a similar representation, different inputs can be processed in the same network. Indeed, visual and linguistic inputs usually differ in dimension, which makes it difficult to generate them from a unified Generator $G$ structure. Unlike MMC-GAN, a typical GAN classifier would require us to develop a separate generator for each type of input and a method to merge the $G$ results.
In contrast to LAC-GAN, the Extractor $E$ of MMC-GAN, illustrated in Fig. \ref{fig:mmcgan}, is composed of a convolutional neural network (CNN) and two paragraph vector models. The CNN is used to process the visual inputs, while the linguistic inputs are processed via a paragraph vector distributed memory (PV-DM) model \cite{le2014distributed}. $E$'s input, ${\bf x}_{raw}$, is then defined as a set of inputs
\begin{equation}\label{equ:E_in}
{\bf x}_{raw}=\{{\bf w}_{inst}, {\bf w}_{c},{\bf v}_{d},{\bf v}_{meta}\}
\end{equation}
where ${\bf w}_{inst}$ and ${\bf w}_{c}$ are the linguistic features and ${\bf v}_{d}$ and ${\bf v}_{meta}$ correspond to visual features.
In detail, ${\bf w}_{inst}$ and ${\bf w}_{c}$ correspond to the instruction and context sentences respectively.
For instance a typical carry-and-place task is characterized by
\begin{equation*}
\left\lbrace
\begin{aligned}
&{\bf w}_{inst}= \text{``Move the towel to the shelf.''}\\
&{\bf w}_{c}=\text{``The robot is holding a towel.''}
\end{aligned} \right.
\end{equation*}
Here, ${\bf w}_{inst}$ and ${\bf w}_{c}$ are two word sequences using a one-hot vector representation that are processed through the PV-DM. As output, $200$-dimensional paragraph vectors ${\bf x}_{instr}$ and ${\bf x}_{c}$, which are also latent space features, are created from ${\bf w}_{inst}$ and ${\bf w}_{c}$, respectively. Both ${\bf w}_{inst}$ and ${\bf w}_{c}$ used for training the MMC-GAN model are more thoroughly described in Section \ref{sec:label}.
For the visual inputs, ${\bf v}_{d}$ is the depth image of a target area. Although other inputs could be used ({\it e.g.}, RGB), we limited our method to depth data that is sufficient to characterize a given target area. In addition, ${\bf v}_{meta}$ describes the situation of each ${\bf v}_{d}$, that is, the candidate area height, the robot camera height and angle.
These inputs are processed in a CNN composed of seven layers, in which ${\bf v}_{d}$ ($640 \times 480$ pixels) and ${\bf v}_{meta}$ ($1 \times 3$) are transformed into a $200$-dimensional latent space feature ${\bf x}_{v}$. ${\bf x}_v$ is extracted from the penultimate layer ($N=6$), similarly to bottleneck network structures. The first three convolutional (Conv) layers process the depth image, while a concatenation operation is performed on the fifth layer to input the metadata. We use ReLU for the activation functions. Dropout is applied to the first layer and batch normalization \cite{ioffe2015batch} is applied to the fifth layer. The cost function of the CNN, $J_{v}$, is defined as follows:
\begin{equation}\label{equ:J_cnn}
J_{v}=J_C
\end{equation}
As a result, the output ${\bf x}_{real}$ of $E$ is defined as
\begin{equation}\label{equ:E_out}
{\bf x}_{real}=\{ {\bf x}_{instr},{\bf x}_{c}, {\bf x}_{v}\}.
\end{equation}
Here, ${\bf x}_{real}$ is a $600$-dimensional vector representing the instruction, the context, and a given target area.
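Concretely, the unified latent feature is just the concatenation of the three $200$-dimensional branch outputs; the constant placeholder values below stand in for the actual PV-DM and CNN outputs.

```python
# latent features from the Extractor E (values are placeholders; each
# branch of E outputs a 200-dimensional vector in the paper)
x_instr = [0.1] * 200   # PV-DM embedding of the instruction sentence
x_c     = [0.2] * 200   # PV-DM embedding of the context sentence
x_v     = [0.3] * 200   # CNN bottleneck feature of depth image + metadata

x_real = x_instr + x_c + x_v   # concatenation -> 600-dimensional x_real
```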
\Update
Note that similarly to the LAC-GAN structure, $E$ is trained beforehand to extract ${\bf x}_{real}$. Only when ${\bf x}_{real}$ is extracted, can the training of Generator $G$ and Discriminator $D$ be performed.
\Done
\subsection{Generator Architecture}
Several variations of MMC-GAN are considered in this work. These variations are based on different architectures of the Generator $G$. In addition to common GAN architecture, conditional GAN (CGAN) \cite{mirza2014conditional} and Wasserstein GAN (WGAN) \cite{arjovsky2017wasserstein} are used.
\subsubsection{CGAN}
In the CGAN architecture, the network is conditioned by ${\bf c}$, which corresponds to the category distribution in our case. In this way, $G$ generates a feature ${\bf x}_{fake}$ following the categories given in $\bf c$, which is particularly relevant in classification problems. Indeed, in the initial GAN method, there are no constraints on the class of the data generated by $G$, which makes the training of $D$ more complex when ${\bf x}_{fake}$ and ${\bf x}_{real}$ belong to different classes. Hence, this architecture simplifies the GAN training process by matching the class of ${\bf x}_{fake}$ to that of ${\bf x}_{real}$.
Nonetheless, our considered CGAN architecture is different from the original one proposed in \cite{mirza2014conditional}. In our classification problem, only $G$ is conditioned by $\bf c$. Indeed $D$ which also outputs $p_D({\bf y})$ should not input $\bf y$ or $\bf c$. As a result, $G$ is modified as follows:
\begin{equation}\label{equ:G_out2}
{\bf x}_{fake}= G({\bf z}, {\bf c}).
\end{equation}
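Building the conditioned input ${\bf z}=\{{\bf z}_1,{\bf c}\}$ described in Section \ref{sec:exp} can be sketched as follows: $100$ Gaussian noise dimensions concatenated with a one-hot class code sampled from a categorical distribution, yielding $d_z=104$ for the 4-class problem.

```python
import random

rng = random.Random(0)
n_classes, d_noise = 4, 100

def cgan_input():
    # z = {z1, c}: Gaussian noise concatenated with a one-hot class code c
    z1 = [rng.gauss(0.0, 1.0) for _ in range(d_noise)]
    k = rng.randrange(n_classes)           # c sampled from a categorical
    c = [1.0 if i == k else 0.0 for i in range(n_classes)]
    return z1 + c                          # dimension d_z = 104

z = cgan_input()
```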
\subsubsection{WGAN }
WGAN corresponds to a different training method for GAN. The aim of WGAN is to improve the stability of the model's learning as well as to avoid mode collapse. Indeed, GAN networks are known to be slow and unstable to train, notably because of the vanishing gradient problem: the initial loss function \eqref{equ:J_S} falls to zero and the training becomes slow because of a nearly null gradient.
To solve this problem, WGAN adopts a different loss function derived from the Wasserstein distance, which is a measure of two distributions that quantifies the cost of matching the first distribution to the second one. Considering our features ${\bf x}_{real}$ and ${\bf x}_{fake}$, the Wasserstein distance becomes
\begin{equation}\label{equ:wass}
\sum_{{\bf x}_{fake}, {\bf x}_{real}} \gamma({\bf x}_{fake}, {\bf x}_{real}) \| {\bf x}_{fake}-{\bf x}_{real} \|
\end{equation}
where $\gamma({\bf x}_{fake}, {\bf x}_{real})$ represents the cost for matching the two distributions.
From this metric, a new loss function for both $D$ and $G$ networks is defined as follows:
\begin{equation}\label{equ:J_S_wass}
J_S=- \frac{1}{2} \mathbb{E}_{{\bf x}_{real}} D ({\bf x}_{real}) + \frac{1}{2} \mathbb{E}_{\bf z} D({\bf x}_{fake})
\end{equation}
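Compared with \eqref{equ:J_S}, the loss \eqref{equ:J_S_wass} drops the logarithms and operates on raw, unbounded critic scores. A minimal numeric sketch (critic outputs are assumed placeholder values):

```python
def wgan_loss(d_real, d_fake):
    # Wasserstein objective from Eq. (9): raw critic scores, no logarithm
    return (-0.5 * sum(d_real) / len(d_real)
            + 0.5 * sum(d_fake) / len(d_fake))

# critic outputs are unbounded scores, not probabilities (values assumed)
j_s = wgan_loss([1.4, 0.8], [-0.5, -0.2])
```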
\section{Problem Statement}\label{sec:prob}
\subsection{Task Description}
The target task of this study is the understanding of ambiguous language instructions in carry-and-place tasks. Typical instructions follow a pattern that can be described by the sentence \textbf{``Put away an object ($O$) (in a target area $T_i$)''}. As an appropriate response to this instruction, a robot should be able to predict a suitable target area $T_i$, where $T_i$ is not explicitly or fully specified in the instruction.
We assume the following inputs and output:
\Update
\begin{itemize}
\item[$\bullet$]{\bf Inputs}: Linguistic instruction, linguistic context and pre-collected camera images of candidate target areas.
\item[$\bullet$]{\bf Output}: Likelihood of the target areas.
\end{itemize}
The likelihood refers to the probability that the robot should place the designated object in a given target area, over four output classes. The target areas are afterwards ranked by simple binarization from four classes to two classes (summing the probabilities of the two best classes and of the two worst classes). Hence, this is not a multi-class evaluation of pieces of furniture but a multi-class evaluation of the suitability of the target areas, described as a four-class problem and detailed in Section \ref{sec:label}.
\Done
One type of solution to this problem is a dialogue-based approach \cite{kollar2013learning,johnson2011enhanced} to recover the missing information through explicit instructions from the user. However, this solution can be cumbersome for the user if the robot starts a dialogue for each task it must perform. Because our focus is on reducing the cumbersomeness of the interaction, no additional dialogue is allowed. Furthermore, because only the top $n$ candidates are shown for usability, a ranking based on the target likelihood is an appropriate output.
We assume that this task depends on the space available and the robot's physical ability. The task environment is composed of several pieces of furniture $F$ such as shelves, bookshelves, tables, desks, or other items and may contain one or several target areas $T_i$ each. Each target $T_i$ may also be cluttered with obstacles.
Moreover, some instructions might also be ambiguous from the context of the task. For instance, this is stressed by the instruction, \textbf{``Move the milk box ($O$) on the table ($T_i$)''}. Depending on whether the robot is carrying $O$ or not, this instruction can lead to two distinct actions: putting down $O$ on the table or picking up $O$ on the table and placing it somewhere else.
For visual inputs, while RGB data is mainly exploited in the computer vision community, we assume a set of visual inputs only composed of depth information D. In our problem, the depth image provides sufficient information, about target areas, to predict the likelihood of a given $T_i$ as it is shown in Section \ref{sec:exp}.
\begin{figure}[tp]
\centering
\subfloat[HSR]{\label{fig:hsr}\includegraphics[width= 3.5cm, height=4 cm]{hsr.jpg}} \enskip
\subfloat[NICT case]{\label{fig:nict_case}\includegraphics[width= 3.cm, height=4 cm] {nict_case.jpg}}
\caption{\small NICT cases allow HSR to manipulate different types of object easily. }
\vspace{-2.5mm}
\vspace{-2.5mm}
\end{figure}
\subsection{Hardware Assumptions}
We consider a standardized robotic platform because a target area likelihood also depends on the robot's limitations. For this work, we use HSR (Human Support Robot), a service robot developed by Toyota endowed with object manipulation capability (see Fig. \ref{fig:hsr}). Since 2017, this robot has been used as the domestic standard platform of RoboCup@Home competitions \cite{iocchi2015robocup}.
Moreover, because we are not dealing with the grasping problem, we introduce ``Nothing-is Corresponding To-the-marker (NICT)'' cases (see Fig. \ref{fig:nict_case}) which are the containers of the movable objects $O$ in the scene. These cases simplify the grasping task, while not being particularly inconvenient for the user. In this case, the robot has to manipulate rigid bodies with a known shape, independently of the object type or consistency inside.
\section{Related work}
Inferring a user's intention does not only rely on linguistic inputs but also on proprioceptive and contextual knowledge. Several studies in the robotic community focus on mapping instructions to the environment context. In \cite{misra2016tell} manipulation tasks are addressed based on cloud data, while in \cite{tellex2011understanding} navigation and path planning tasks are addressed.
Like many pick-and-place approaches, we are interested in placing tasks in daily life environments. However, most of these approaches focus either on the grasping and manipulation part \cite{jiang2012learning} or on the method of placing an object, assuming that the target areas are already specified and available. This is the case in \cite{abdo2015robot}, where the authors proposed a solution based on user preferences for placing objects on shelves, and in \cite{schuster2010perceiving}, where objects are placed on the uncluttered parts of flat surfaces using image segmentation. More importantly, these studies do not focus on the robot's understanding of the instruction.
In contrast, we focus on determining suitable target areas for placing an object in a realistic environment when the target area is undefined in the user instruction. In this way, we think that our work complements these studies.
Recently, GANs and their variations have spurred progress in image reconstruction and enhancement \cite{ledig2016photo, denton2015deep}. Interestingly, GAN-based approaches have also been used to address classification problems \cite{springenberg2015unsupervised, sugiura2018grounded, odena2016conditional}; these studies improve the classification task by exploiting the data augmentation property of a GAN. Our work, which extends the GAN framework to multimodal inputs, is inspired by these methods.
\section{Introduction}\label{sec1}
Let us consider a typical cluster sampling design: the
entire population consists of different clusters, and the
probability for each cluster to be selected into a sample is
known. The sum of sample elements is then equal to
$S=w_1S_1+w_2S_2+\cdots+w_NS_N$. Here, $S_i$ is the sum of
independent identically distributed (iid) random variables (rvs) from
the $i$-th cluster.
A similar situation arises in actuarial mathematics
when the sum $S$ models the discounted amount of the total net loss of
a company, see, for example,
\cite{TaTs03}. Note that then $S_i$ may be the sum of dependent rvs. Of
course, in actuarial models,
$w_i$ are also typically random, which makes our research just a first
step in this direction.
In many papers, the limiting behavior of weighted sums is
investigated with the emphasis on weights or tails of distributions,
see, for example, \cite
{DLS17,LChS17,Liang06,MOC12,Soo01,YCS11,YLS16,YW12,W11,WV16,ZSW09},
and references therein. We,
however,
concentrate on the impact of $S-w_iS_i$ on $w_iS_i$.
Our research is motivated by the following simple example. Let us
assume that $S_i$ is in some sense close to $Z_i$, $i=1,2$. Then a
natural approximation to $w_1S_1+w_2S_2$ is $w_1Z_1+w_2Z_2$. Suppose
that we want to estimate the closeness of both sums in some metric
$d(\cdot,\cdot)$. The standard approach which works for the majority of
metrics then gives
\begin{equation}
d(w_1S_1+w_2S_2,w_1Z_1+w_2Z_2)
\leqslant d(w_1S_1,w_1Z_1)+d(w_2S_2,w_2Z_2).\label{afirst}
\end{equation}
The triangle inequality (\ref{afirst}) is not always useful. For example,
let $S_1$ and $Z_1$ have the same Poisson distribution with parameter
$n$ and let $S_2$ and $Z_2$ be Bernoulli variables with probabilities
1/3 and 1/4, respectively. Then (\ref{afirst}) ensures the trivial order of
approximation $O(1)$ only. Meanwhile, both $S$ and $Z$ can be treated
as small (albeit different) perturbations to the same Poisson variable
and, therefore, one can expect closeness of their distributions at
least for large $n$. The `smoothing' effect that the other sums have on the
approximation of $w_iS_i$ was already observed in \cite{ElCe15} (see
also references therein). For some general results involving
concentration functions, see, for example, \cite{Hipp85,Ro03}.
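This smoothing effect is easy to observe numerically. The Python sketch below is an illustration only (not part of the paper): it takes $w_1=w_2=1$ and an assumed truncation range, and computes the Kolmogorov distance between $\eL(S_1+S_2)$ and $\eL(Z_1+Z_2)$ for the Poisson--Bernoulli example above; the result is far smaller than the $O(1)$ bound given by (\ref{afirst}).

```python
import math

def poisson_pmf(lam, kmax):
    # pmf of Poisson(lam) on 0..kmax, built iteratively for stability
    p = [math.exp(-lam)]
    for k in range(1, kmax + 1):
        p.append(p[-1] * lam / k)
    return p

def add_bernoulli(pmf, q):
    # distribution of X + B with B ~ Bernoulli(q), independent of X
    out = [0.0] * (len(pmf) + 1)
    for k, pk in enumerate(pmf):
        out[k] += (1 - q) * pk
        out[k + 1] += q * pk
    return out

def kolmogorov(p, r):
    # sup_x |P(X <= x) - P(Y <= x)| for lattice distributions on 0,1,...
    n = max(len(p), len(r))
    p = p + [0.0] * (n - len(p))
    r = r + [0.0] * (n - len(r))
    d, cp, cr = 0.0, 0.0, 0.0
    for a, b in zip(p, r):
        cp += a
        cr += b
        d = max(d, abs(cp - cr))
    return d

n = 400
base = poisson_pmf(n, 4 * n)      # the common Poisson(n) part
dS = add_bernoulli(base, 1 / 3)   # L(S1 + S2)
dZ = add_bernoulli(base, 1 / 4)   # L(Z1 + Z2)
dist = kolmogorov(dS, dZ)         # small, although d(S2, Z2) = 1/12
```

For $n=400$ the distance is roughly $1.7\cdot10^{-3}$, of order $n^{-1/2}$, while the triangle inequality only gives the constant $1/12$.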
To make our goals more explicit, we need additional notation. Let $\ZZ$
denote the set of all integers. Let
$\F$ (resp.~$\F_Z$, resp.~$\M$) denote the set of
probability distributions (resp. distributions concentrated on integers,
resp.~finite signed measures) on $\RR$.
Let $I_a$ denote the distribution concentrated at real $a$ and set $I=I_0$. Henceforth,
the products and powers of measures are understood in the convolution sense.
Further, for a measure $M$, we set $M^0 = I$ and $\exp\{M\}=\sum_{k=0}^\infty M^k/k!$.
We denote by $\widehat M(t)$ the Fourier--Stieltjes
transform of $M$. The real part of $\w M(t)$ is denoted by $Re\w
M(t)$. Observe also that $\w{\exp\{M(t)\}}=\exp\{\w M(t)\}$. We
also use $\eL(\xi)$ to denote the distribution of $\xi$.
The
Kolmogorov (uniform) norm $\vert M\vert_K$ and the total variation
norm $\| M\|$ of $M$ are defined by
\[
\vert M\vert_K=\sup_{x\in\RR}\bigl\vert M\bigl((-\infty,x]\bigr)
\bigr\vert,\qquad \| M\|=M^{+}\{\RR\}+M^{-}\{\RR\},
\]
respectively. Here $M=M^{+}-M^{-}$ is the Jordan--Hahn
decomposition of $M$.
Also, for any two measures $M$ and $V$, $\vert M\vert_K\leqslant\| M\|$,
$\vert MV\vert_K\leqslant\| M\|\cdot\vert V\vert_K$, $\vert\w
M(t)\vert\leqslant\| M\|$, $\| \exp\{M\}\|\leqslant\exp\{\| M\|\}
$. If $F\in\F$, then
$\vert F\vert_K=\| F\|=\| \exp\{F-\dirac\}\|=1$. Observe also that, if
$M$ is concentrated on integers, then
\[
M=\sum_{k=-\infty}^\infty M\{k\}\,
\dirac_k,\qquad\w M(t)=\sum_{k=-\infty
}^\infty
\ee^{\ii tk}M\{k\},\qquad\| M\|=\sum_{k=-\infty}^\infty
\bigl\vert M\{k\}\bigr\vert.
\]
For $F\in\F$, $h\geqslant0$, L\'evy's
concentration function is defined by
\begin{equation*}
Q(F,h)=\sup_xF \bigl\{[x,x+h] \bigr\}.
\end{equation*}
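For lattice distributions and integer $h$, $Q(F,h)$ is just the largest total mass carried by $h+1$ consecutive atoms. A minimal Python sketch (an aside, not part of the paper):

```python
def concentration_Q(pmf, h):
    # Lévy concentration function Q(F, h) = sup_x F{[x, x+h]} for a
    # distribution on 0..len(pmf)-1 and integer h >= 0
    return max(sum(pmf[x:x + h + 1]) for x in range(len(pmf)))

bern = [2 / 3, 1 / 3]          # Bernoulli(1/3)
q0 = concentration_Q(bern, 0)  # largest single atom: 2/3
q1 = concentration_Q(bern, 1)  # a window of length 1 covers both atoms: 1
```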
All absolute positive
constants are denoted by the same symbol $C$. Sometimes to avoid
possible ambiguities, the constants $C$ are supplied
with indices. Also, the constants depending on parameter $N$ are
denoted by $C(N)$. We also assume usual conventions $\sum_{j=a}^b=0$
and $\prod_{j=a}^b=1$, if $b<a$. The notation $\varTheta$ is used for any
signed measure satisfying $\| \varTheta\|\leqslant1$. The notation
$\theta$ is used for any real or complex number satisfying $\vert
\theta\vert\leqslant1$.
\section{Sums of independent rvs}
The results of this section are partially inspired by a comprehensive
analytic research of probability generating functions in \cite{Hw99}
and the papers on \emph{mod}-Poisson convergence, see \cite
{BKN14,KN10,KNN15}, and references therein.
Assumptions in the above-mentioned papers are made about the behavior
of characteristic or probability generating functions. The inversion
inequalities are then used to translate their differences
to the differences of distributions.
In principle, \emph{mod}-Poisson convergence means that if an initial
rv is a perturbation of some Poisson
rv, then their distributions must be close. Formally, it is required
for $\exp\{-\tilde\lambda_n(\ee^{\ii t}-1)\}f_n(t)$ to have a limit
for some sequence of Poisson parameters $\tilde\lambda_n$, as $n\to
\infty$. Here, $f_n(t)$ is a characteristic function of an investigated
rv. Division by a certain Poisson characteristic function is one of the
crucial steps in the proof of Theorem \ref{Teorema} below, which makes it
applicable to rvs satisfying the \emph{mod}-Poisson convergence
definition, provided they can be expressed as sums of independent rvs.
Though we use factorial moments, as in Section 7.1 of \cite
{BKN14}, our work is much closer in spirit to \cite{SC88},
where general lemmas about the closeness of lattice measures are
proved.
In this section, we consider a general case of independent
non-identically distributed rvs, forming a triangular array (a scheme
of series). Let $S_i=X_{i1}+X_{i2}+\cdots+X_{in_i}$,
$Z_i=Z_{i1}+Z_{i2}+\cdots+Z_{in_i}$, $i=1,2,\dots,N$. We assume that
all the $X_{ij}$, $Z_{ij}$ are mutually independent and integer-valued.
Observe that, in general, $S=\sum_{i=1}^Nw_iS_i$ and $Z=\sum_{i=1}^Nw_iZ_i$ are
\emph{not} integer-valued and, therefore, the
standard methods of estimation of lattice rvs do not apply. Note also
that, since any infinitely divisible distribution can be expressed as a
sum of rvs, Poisson, compound Poisson and negative binomial rvs can be
used as $Z_i$.
The distribution of $X_{ij}$ (resp. $Z_{ij}$) is denoted by $F_{ij}$
(resp. $G_{ij}$). The closeness of characteristic functions will be
determined by the closeness of corresponding factorial moments. Though
it is proposed in \cite{BKN14} to use standard factorial moments even
for rvs taking negative values, we think that right-hand side and
left-hand side factorial moments, already used in \cite{SC88}, are more
natural characteristics.
Let, for $k=1,2,\dots$, and any $F\in\F_Z$,
\begin{align*}
\nu_k^{+}(F_{ij})&=\sum
_{m=k}^\infty m(m-1)\cdots(m-k+1)F_{ij}\{m\},
\\
\nu_k^{-}(F_{ij})&=\sum
_{m=k}^\infty m(m-1)\cdots(m-k+1)F_{ij}\{-m\}.
\end{align*}
For the estimation of the remainder terms we also need the following
notation:\break
$\beta_k^{\pm}(F_{ij},G_{ij})=\nu_k^{\pm}(F_{ij})+\nu_k^{\pm
}(G_{ij})$, $\sigma^2_{ij}=\max(\Var(X_{ij}),\Var(Z_{ij}))$, and\vadjust{\goodbreak}
\begin{align*}
u_{ij}&=\min \biggl\{1-\frac{1}{2}\bigl\| F_{ij}(
\dirac_1-\dirac)\bigr\| ;1-\frac
{1}{2}\bigl\| G_{ij}(
\dirac_1-\dirac)\bigr\| \biggr\}
\\
&=\min \Biggl\{\sum_{k=-\infty}^\infty\min
\bigl(F_{ij}\{k\},F_{ij}\{k-1\} \bigr);\sum
_{k=-\infty}^\infty\min \bigl(G_{ij}\{k
\},G_{ij}\{k-1\} \bigr) \Biggr\}.
\end{align*}
For the last equality, see (1.9) and (5.15) in \cite{Ce16}. Next we
formulate our assumptions. For some fixed integer $s\geqslant1$,
$i=1,\dots, N,\ j=1,\dots,n_i$,
\begin{align}
u_{ij}&>0,\qquad\sum_{j=1}^{n_i}u_{ij}
\geqslant1, \qquad n_i\geqslant1,\qquad w_i>0,\label{sal1}
\\
\nu_k^{+}(F_{ij})&=\nu_k^{+}(G_{ij}),
\qquad\nu_k^{-}(F_{ij})=\nu_k^{-}(G_{ij}),
\quad k=1,2,\dots,s,\label{sal2}
\\
\beta_{s+1}^{+}(F_{ij},G_{ij})&+
\beta_{s+1}^{-}(F_{ij},G_{ij})<\infty.
\label{sal4}
\end{align}
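The two expressions for $u_{ij}$ given above coincide because, for probability sequences, $\sum_k\vert a_k-b_k\vert=2-2\sum_k\min(a_k,b_k)$. A minimal Python sketch (an illustration, not part of the paper, on assumed example distributions) checking the identity $1-\frac12\| F(\dirac_1-\dirac)\|=\sum_k\min(F\{k\},F\{k-1\})$:

```python
def tv_shift(pmf):
    # || F (δ_1 - δ) || = Σ_k |F{k-1} - F{k}| for a lattice distribution F
    keys = set(pmf) | {k + 1 for k in pmf}
    return sum(abs(pmf.get(k - 1, 0.0) - pmf.get(k, 0.0)) for k in keys)

def overlap(pmf):
    # Σ_k min(F{k}, F{k-1})
    keys = set(pmf) | {k + 1 for k in pmf}
    return sum(min(pmf.get(k, 0.0), pmf.get(k - 1, 0.0)) for k in keys)

# assumed example distributions, one of them with a negative support point
F = {0: 0.2, 1: 0.5, 2: 0.3}
G = {-1: 0.25, 0: 0.25, 2: 0.5}
```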
Now we are in a position to formulate the main result of this section.
\begin{theorem} \label{Teorema} Let assumptions (\ref{sal1})--(\ref
{sal4}) hold. Then
\begin{align}
\!\!\!\bigl\vert\eL(S)-\eL(Z)\bigr\vert_K &\leqslant C(N,s)\frac{\max_jw_j}{\min_j
w_j}
\Biggl(\sum_{i=1}^N\sum
_{l=1}^{n_i}u_{il} \Biggr)^{-1/2}
\prod_{l=1}^N \Biggl(1+\sum
_{k=1}^{n_l}\sigma^2_{lk} /\sum
_{k=1}^{n_l} u_{lk} \Biggr)
\nonumber
\\
&\quad\times \sum_{i=1}^N\sum
_{j=1}^{n_i} \bigl[\beta^{+}_{s+1}(F_{ij},G_{ij})+
\beta_{s+1}^{-}(F_{ij},G_{ij})\bigr]
\Biggl(\sum_{k=1}^{n_i}u_{ik}
\Biggr)^{-s/2}. \label{BTa}
\end{align}
If, in addition, $s$ is even and
$\beta_{s+2}^{+}(F_{ij},G_{ij})+\beta_{s+2}^{-}(F_{ij},G_{ij})<\infty$,
then
\begin{align}
\bigl\vert\eL(S)-\eL(Z)\bigr\vert_K &\leqslant C(N,s)\frac{\max_jw_j}{\min_j
w_j}
\Biggl(\sum_{i=1}^N\sum
_{l=1}^{n_i}u_{il} \Biggr)^{-1/2}
\prod_{l=1}^N \Biggl(1+\sum
_{k=1}^{n_l}\sigma^2_{lk} /\sum
_{k=1}^{n_l} u_{lk} \Biggr)
\nonumber
\\
&\quad\times \sum_{i=1}^N\sum
_{j=1}^{n_i} \Biggl(\sum_{k=1}^{n_i}u_{ik}
\Biggr)^{-s/2} \Biggl(\bigl\vert\beta ^{+}_{s+1}(F_{ij},G_{ij})-
\beta_{s+1}^{-}(F_{ij},G_{ij})\bigr\vert
\nonumber
\\
&\quad+ \bigl[\beta^{+}_{s+2}(F_{ij},G_{ij})+
\beta _{s+2}^{-}(F_{ij},G_{ij})
\nonumber
\\
&\quad+ \beta^{-}_{s+1}(F_{ij},G_{ij})
\bigr] \Biggl(\sum_{k=1}^{n_i}u_{ik}
\Biggr)^{-1/2} \Biggr). \label{BTb}
\end{align}
\end{theorem}
The factor $(\sum_{i=1}^N\sum_{j=1}^{n_i}u_{ij})^{-1/2}$ estimates the
impact of $S$ on the approximation of $w_iS_i$.
The estimate (\ref{BTb}) takes care of a possible symmetry of distributions.
If, in each sum $S_i$ and $Z_i$, all the rvs are identically
distributed, then we can get rid of the factor containing variances.
We say that condition (\textit{ID}) is satisfied if, for each $i=1,2,\dots,N$,
all rvs $X_{ij}$ and $Z_{ij}$ ($j=1,\dots, n_i$) are iid with
distributions $F_i$
and $G_i$, respectively. Observe that, if condition (\textit{ID}) is
satisfied, then the characteristic functions of $S$ and $Z$ are
respectively equal to
\[
\prod_{i=1}^N\w F_i^{n_i}(w_it),
\qquad\prod_{i=1}^N\w G_i^{n_i}(w_it).
\]
We also use notation $u_i$ instead of $u_{ij}$, since now
$u_{i1}=u_{i2}=\cdots=u_{in_i}$.\vadjust{\goodbreak}
\begin{theorem}\label{TeoremaID} Let the assumptions (\ref
{sal1})--(\ref
{sal4}) and the condition (ID) hold. Then
\begin{align}
\bigl\vert\eL(S)-\eL(Z)\bigr\vert_K&\leqslant C(N,s)\frac{\max_jw_j}{\min_j
w_j} \Biggl(
\sum_{i=1}^Nn_iu_{i}
\Biggr)^{-1/2}
\nonumber
\\
&\quad\times\sum_{i=1}^N
\frac{\beta^{+}_{s+1}(F_{i},G_{i})+\beta
_{s+1}^{-}(F_{i},G_{i})}{n_i^{s/2-1}u_i^{s/2}}. \label{BTcc}
\end{align}
\end{theorem}
How does Theorem \ref{Teorema} compare to the known results? In \cite
{CeEl14}, compound Poisson-type approximations to non-negative iid rvs
in each sum were considered under the additional Franken-type condition:
\begin{equation}
\nu_1^{+}(F_j)- \bigl(\nu_1^{+}(F_j)
\bigr)^2-\nu_2^{+}(F_j)>0,
\label{Franken}
\end{equation}
see \cite{Frank64}.
Similar assumptions were used in \cite{ElCe15,SC88}. Observe that
Franken's condition requires almost all probabilistic mass to be
concentrated at 0 and 1. Indeed, then
$\nu_1^{+}(F_j)<1$ and $F_j\{1\}\geqslant\sum_{k=3}^\infty
k(k-2)F_j\{
k\}$.
Meanwhile, Theorems \ref{Teorema} and \ref{TeoremaID} hold under much
milder assumptions and, as demonstrated in the example below, can be
useful even if (\ref{Franken}) is not satisfied. Therefore, even for
the case of one sum when $N=1$, our results are new.
\textbf{Example.} Let $N=2$, $w_1=1$, $w_2=\sqrt{2}$, and $F_j$ and
$G_j$ be defined by
$F_j\{0\}=0.375$, $F_j\{1\}=0.5$, $F_j\{4\}=0.125$,
$G_j\{0\}=0.45$, $G_j\{1\}=0.25$, $G_j\{2\}=0.25$,
$G_j\{5\}=0.05$, $(j=1,2)$. We assume that $n_2=n$ and $n_1=\lceil
\sqrt
n\,\rceil$ is the smallest integer greater than or equal to $\sqrt{n}$. Then
$\nu_k^{+}(F_j)=\nu_k^{+}(G_j)$, $k=1,2,3$, $\beta_4^{+}(F_j,G_j)=9$,
$u_j=3/8$, $(j=1,2)$.
Therefore, by Theorem \ref{TeoremaID}
\[
\bigl\vert\eL(S)-\eL(Z)\bigr\vert_K\leqslant\frac{C}{\sqrt
{n_1+n_2}} \biggl(
\frac
{1}{n_1}+\frac{1}{n_2} \biggr)=O \bigl(n^{-1} \bigr).
\]
In this case, Franken's condition (\ref{Franken}) is
not satisfied, since
$\nu_1^{+}(F_j)-\nu_2^{+}(F_j)-(\nu_1^{+}(F_j))^2<0$.
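The numbers quoted in the example follow directly from the definitions; the Python sketch below (an illustration, not part of the paper) recomputes the matched factorial moments, $\beta_4^{+}(F_j,G_j)$, $u_j$ and the left-hand side of Franken's condition (\ref{Franken}).

```python
def nu_plus(pmf, k):
    # k-th right-hand factorial moment: Σ_{m >= k} m(m-1)...(m-k+1) F{m}
    total = 0.0
    for m, p in pmf.items():
        if m >= k:
            fall = 1.0
            for j in range(k):
                fall *= m - j
            total += fall * p
    return total

def u_overlap(pmf):
    # u = 1 - ||F(δ_1 - δ)||/2 = Σ_k min(F{k}, F{k-1})
    keys = set(pmf) | {k + 1 for k in pmf}
    return sum(min(pmf.get(k, 0.0), pmf.get(k - 1, 0.0)) for k in keys)

F = {0: 0.375, 1: 0.5, 4: 0.125}
G = {0: 0.45, 1: 0.25, 2: 0.25, 5: 0.05}

moments_match = all(abs(nu_plus(F, k) - nu_plus(G, k)) < 1e-12
                    for k in (1, 2, 3))
beta4 = nu_plus(F, 4) + nu_plus(G, 4)   # β_4^{+}(F_j, G_j) = 9
u = min(u_overlap(F), u_overlap(G))     # u_j = 3/8
franken = nu_plus(F, 1) - nu_plus(F, 1) ** 2 - nu_plus(F, 2)  # negative
```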
Next we apply Theorem \ref{TeoremaID} to the negative binomial
distribution. For real $r > 0$ and $0<\tilde p <1$, we write $\xi\sim\NB
(r, \tilde p)$
if
\begin{equation*}
\label{defY} \Prob(\xi= k) = {r + k -1 \choose k} \tilde p^r
\tilde q^k, \quad k =0,1,\ldots.
\end{equation*}
Here $\tilde q = 1 -\tilde p$. Note that $r$ is not
necessarily an integer.
Let $X_{1j}$ be concentrated on non-negative
integers ($\nu_k^{-}(F_j)=0$). We approximate $S_i$ by
$Z_i\sim\NB(r_i,\tilde p_i)$ with
\begin{equation*}
r_i = \frac{(\Expect S_i)^2}{\Var S_i - \Expect S_i}, \qquad\tilde p_i =
\frac{\Expect S_i}{\Var S_i},\label{alpu}
\end{equation*}
so that $\Expect S_i = r_i \tilde q_i/\tilde p_i$ and
$\Var S_i = r_i \tilde q_i/\tilde p_i^2$.
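This is the standard moment matching within the negative binomial family; it requires overdispersion, $\Var S_i>\Expect S_i$. A small Python sketch (not part of the paper; the moment values below are assumed for illustration):

```python
def nb_match(mean, var):
    # moment-matching NB(r, p): valid only in the overdispersed case var > mean
    assert var > mean > 0
    r = mean ** 2 / (var - mean)
    p = mean / var
    q = 1 - p
    # return r, p, and the mean/variance of NB(r, p) for a round-trip check
    return r, p, r * q / p, r * q / p ** 2

r, p, m, v = nb_match(10.0, 25.0)  # assumed example moments
```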
Observe that
\begin{equation}
\label{nbg}
\w G_j(t)= \biggl(\frac{\tilde p_j}{1-\tilde q_j\ee^{\ii
t}}
\biggr)^{r_j/n_j}.
\end{equation}
\begin{corollary} \label{cornb} Let assumptions of Theorem \ref
{TeoremaID} hold with $X_{1j}$ concentrated on non-negative
integers and let $\Expect X^3_{1j}<\infty$, $(j=1,\dots,N)$. Let $G_j$
be defined by (\ref{nbg}). Then
\begin{align}
\bigl\vert\eL(S)-\eL(Z)\bigr\vert_K &\leqslant C\frac{\max_jw_j}{\min_j
w_j} \Biggl(\sum
_{i=1}^Nn_i\tilde
u_i \Biggr)^{-1/2}
\nonumber
\\
&\quad\times\sum_{k=1}^N \biggl[
\nu_3^{+}(F_k)+\nu_1^{+}(F_k)
\nu_2^{+}(F_k)+ \bigl(\nu_1^{+}(F_k)
\bigr)^3
\nonumber
\\
&\quad+\frac{(\nu_2^{+}(F_k)-(\nu_1^{+}(F_k))^2)^2}{\nu
_1^{+}(F_k)} \biggr]\tilde u_k^{-1}.
\label{nba}
\end{align}
Here
\[
\tilde u_k=1-\frac{1}{2}\max \biggl(\bigl\| ( \dirac_1-
\dirac)F_k\bigr\|, \biggl(r_k\ln \frac{1}{\tilde p_k}
\biggr)^{-1/2} \biggr).
\]
\end{corollary}
\begin{remark} (i) Note that
\[
r_k\ln\frac{1}{\tilde p_k}=\frac{(\nu_1^{+}(F_k))^2}{\nu_2^{+}(F_k)-
(\nu_1^{+}(F_k))^2}\ln\frac{\nu_2^{+}(F_k)-(\nu_1^{+}(F_k))^2+\nu
_1^{+}(F_k)}{\nu_1^{+}(F_k)}.
\]
(ii) Let $\nu_k^{+}(F_j)\asymp C, w_j\asymp C$. Then the accuracy of
approximation in (\ref{nba}) is of the order $O((n_1+\cdots+n_N)^{-1/2})$.
\end{remark}
\section{Sums of Markov Binomial rvs}
We already mentioned that it is not always natural to assume
independence of rvs. In this section, we still assume that
$S=w_1S_1+w_2S_2+\cdots+w_NS_N$ with mutually independent $S_i$. On the
other hand, we assume that each $S_i$ has a Markov Binomial (MB)
distribution, that is, $S_i$ is
a sum of Markov dependent Bernoulli variables.
Such a sum $S$ has a slightly more realistic interpretation in
actuarial mathematics. Assume, for example, that we have $N$ insurance
policy holders, the $i$-th of whom can get ill during an insurance period
and be paid a claim $w_i$. The health of a policy holder depends on the
state of her/his health in the previous period. Therefore, we have a
natural two-state (healthy, ill) Markov chain. Then $S_i$ is the
aggregate claim for the $i$-th insurance policy holder after $n_i$ periods,
while $S$ is the aggregate claim of all holders.
The limit behavior of the MB distribution is a popular topic among
mathematicians, discussed in numerous papers, see, for example, \cite
{CeVe10,Gan82,Aki93}, and references therein.
Let $0,\xi_{i1},\dots,\xi_{in_i},\dots$ , ($i=1,2,\dots,N$) be a Markov
chain with the transition probabilities
\begin{align*}
&\Prob(\xi_{ik}=1\,|\,\xi_{i,k-1}=1)=p_i, \qquad
\Prob(\xi_{ik}=0\,|\,\xi_{i,k-1}=1)=q_i,
\\
&\Prob(\xi_{i,k}=1\,|\,\xi_{i,k-1}=0)= \qubar_i,
\qquad\Prob(\xi_{ik}=0\,|\,\xi_{i,k-1}=0)= \pbar_i,
\\
& p_i+q_i=\qubar_i+ \pbar_i=1,
\quad\qquad p_i,\qubar_i\in (0,1),\quad k\in\NN.
\end{align*}
The distribution of $S_i=\xi_{i1}+\cdots+\xi_{in_i}$
$(n_i\in\NN)$ is called
the Markov binomial distribution with parameters $p_i,q_i,\pbar
_i,\qubar
_i,n_i$. The definition of an MB rv differs slightly from paper to
paper; we use the one from \cite{CeVe10}. Note that the Markov chain
considered above is not necessarily stationary.
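For moderate $n_i$, the MB distribution can be computed exactly by dynamic programming over the chain state. The Python sketch below is an illustration only (not part of the paper); with the chain started at $0$ as above, it reduces to the binomial law when $p_i=\qubar_i$, i.e. when the $\xi_{ik}$ become iid.

```python
def markov_binomial_pmf(p, qbar, n):
    # Exact pmf of S = ξ_1 + ... + ξ_n for the two-state chain above,
    # started at ξ_0 = 0, with P(1|1) = p and P(1|0) = qbar.
    # dp[(state, s)] = probability of being in `state` with partial sum s.
    dp = {(0, 0): 1.0}
    for _ in range(n):
        nxt = {}
        for (state, s), pr in dp.items():
            p1 = p if state == 1 else qbar  # probability the next ξ equals 1
            nxt[(1, s + 1)] = nxt.get((1, s + 1), 0.0) + pr * p1
            nxt[(0, s)] = nxt.get((0, s), 0.0) + pr * (1 - p1)
        dp = nxt
    pmf = [0.0] * (n + 1)
    for (state, s), pr in dp.items():
        pmf[s] += pr
    return pmf

# when p = qbar, the ξ_k are iid Bernoulli, so S is Binomial(n, p)
pmf3 = markov_binomial_pmf(0.3, 0.3, 3)
```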
Furthermore, the distribution of $w_iS_i$ is denoted by $H_{in}=\eL
(w_iS_i)$. For approximation of $H_{in}$ we use the signed compound
Poisson (CP) measure with matching mean and variance. Such signed CP
approximations usually outperform both the normal and CP
approximations, see, for example,
\cite{BaXi99,CeVe10,Ro03}.
Let
\[
\gamma_i=\frac{q_i\qubar_i}{q_i+\qubar_i},\qquad\w Y_i(t)=
\frac{q_i\ee
^{\ii w_it}}{1-p_i\ee^{\ii w_i t}}-1.
\]
Observe that $\w Y_i(t)+1$ is the characteristic function of a
geometric distribution. Let $Y_i$ be the measure corresponding to $\w Y_i(t)$.
For the approximation of $H_{in}$ we use the signed CP measure $D_{in}$:
\begin{align}
D_{in}&=\exp \biggl\{ \biggl(\frac{\gamma_i(\qubar
_i-p_i)}{q_i+\qubar
_i}+n_i
\gamma_i \biggr) Y_i
\nonumber
\\
&\quad-n_i \biggl(\frac{q_i\qubar_i^2}{(q_i+\qubar_i)^2} \biggl(p_i+
\frac
{q_i}{q_i+\qubar_i} \biggr)+\frac{\gamma_i^2}{2} \biggr) Y_i^2
\biggr\}. \label{Din}
\end{align}
The CP limit occurs when $n_i\qubar_i\to\tilde\lambda$, see, for
example, \cite{CeVe10}. Therefore, we assume $\qubar_i$ to be small,
though not necessarily vanishing. Let, for some fixed integer
$k_0\geqslant2$,
\begin{equation}
\qubar_i\geqslant\frac{1}{n_i^{k_0}},\qquad 0<p_i\leqslant\frac{1}{2},\qquad\qubar_i\leqslant\frac{1}{30},\qquad w_i>0,\qquad
n_i\geqslant1,\quad i=1,\dots,N. \label{cond1}
\end{equation}
In principle, the first assumption in (\ref{cond1}) can be dropped, but
then exponentially vanishing remainder terms appear in all results,
making them very complicated.
\begin{theorem} \label{TMARK} Let $H_{in}=\eL(w_iS_i)$ and let $D_{in}$
be defined by (\ref{Din}), $i=1,\dots,N$. Let the conditions stated in
(\ref{cond1}) be satisfied. Then
\begin{equation}
\label{MBT} \Biggl\vert\prod_{i=1}^NH_{in}-
\prod_{i=1}^ND_{in}
\Biggr\vert_K \leqslant C(N,k_0)\frac
{\max w_i}{\min w_i}\cdot
\frac{\sum_{i=1}^N\qubar_i(p_i+\qubar
_i)}{\sqrt{
\sum_{k=1}^N\max(n_k\qubar_k,1)}}.
\end{equation}
\end{theorem}
\begin{remark}
Let all $\qubar_i\geqslant C$, $i=1,\dots,N$. Then, obviously, the
right-hand side of (\ref{MBT}) is majorized by
\[
C(N,k_0)\frac{\max w_i}{\min w_i}\cdot\frac{1}{\sqrt{
\max n_k}}.
\]
Therefore, even in this case, the result is comparable with the
Berry--Esseen theorem.
\end{remark}
\section{Auxiliary results}
\begin{lemma}\label{ad} Let $h>0$, $W\in\M$, $W\{\RR\}=0$, $U\in\F
$ and
$\vert\widehat U(t)\vert\leqslant C\widehat V(t)$, for
$\vert t\vert\leqslant1/h$ and some symmetric distribution $V$ having
non-negative characteristic function. Then
\begin{align*}
\vert W U\vert_K&\leqslant C\int_{\vert t\vert\leqslant1/h} \biggl\vert
\frac{\widehat W(t)\widehat U(t)}{t} \biggr\vert\,\dd t + C \llVert W \rrVert Q(U, h)
\nonumber
\\
&\leqslant C \biggl(\sup_{\vert t\vert\leqslant1/h} \frac{\vert
\widehat W(t)\vert}{\vert t\vert}\cdot
\frac{1}{h}+ \llVert W \rrVert \biggr)Q(V, h).
\end{align*}
\end{lemma}
Lemma \ref{ad} is a version of Le Cam's smoothing inequality, see
Lemma 9.3 in \cite{Ce16} and Lemma 3 on p. 402 in \cite{LeCam86}.
\begin{lemma}\label{ac} Let $F \in\F$, $h>0$ and $a>0$. Then
\begin{align}
Q(F, h)&\leqslant \biggl(\frac{96}{95} \biggr)^2h\int
_{\vert t\vert\leqslant1/h} \bigl\vert\widehat F(t) \bigr\vert\,\dd t, \label{ac1}
\\
Q(F, h)&\leqslant \biggl(1+ \biggl(\frac{h}{a} \biggr) \biggr)
Q(F, a), \label{ac3}
\\
Q \bigl(\exp\bigl\{a(F-I)\bigr\}, h \bigr)&\leqslant\frac{C}{\sqrt{aF \{\vert
x\vert>h \} }}.
\label{ac4}
\end{align}
If, in addition, $\widehat F(t)\geqslant0$, then
\begin{equation}
h \int_{\vert t\vert\leqslant1/h} \bigl\vert\widehat F(t)\bigr\vert\,\dd t \leqslant
CQ(F, h). \label{ac5}
\end{equation}
\end{lemma}
Lemma \ref{ac} contains well-known properties of L\'evy's
concentration function, see, for example, Chapter 1 in \cite{Pet95} or
Section 1.5 in \cite{Ce16}.
An expansion in left-hand and right-hand factorial moments for
Fourier--Stieltjes transforms is given
in \cite{SC88}. Here we need its analogue for distributions.
\begin{lemma}\label{bee} Let $F\in\F_Z$ and, for some $s\geqslant1$,
$\nu_{s+1}^{+}(F)+\nu_{s+1}^{-}(F)<\infty$. Then
\begin{align}
F&=\dirac+\sum_{m=1}^s
\frac{\nu_m^{+}(F)}{m!}(\dirac_1-\dirac)^m+\sum
_{m=1}^s\frac{\nu_m^{-}(F)}{m!}(\dirac_{-1}-
\dirac)^m
\nonumber
\\
&\quad+ \frac{\nu_{s+1}^{+}(F)+\nu_{s+1}^{-}(F)}{(s+1)!}(\dirac _1-\dirac)^{s+1}
\varTheta. \label{nuF}
\end{align}
\end{lemma}
\begin{proof}
For measures concentrated on non-negative integers, (\ref{nuF}) is given
in \cite{Ce16}, Lemma 2.1. Observe that the distribution $F$ can be
expressed as a mixture $F=p^{+}F^{+}+p^{-}F^{-}$ of distributions
$F^{+}$, $F^{-}$ concentrated on non-negative and negative integers,
respectively. Then Lemma 2.1 from \cite{Ce16} can be applied in turn to
$F^{+}$ and to $F^{-}$ (with $\dirac_{-1}$). The remainder terms can be
combined, since $(\dirac_{-1}-\dirac)=\dirac_{-1}(\dirac-\dirac
_1)=(\dirac_1-\dirac)\varTheta$.
\end{proof}
\begin{lemma}\label{f-g} Let $F,G\in\F_Z$ and, for some $s\geqslant1$,
$\nu_j^{+}(F)=\nu_{j}^{+}(G)$, $\nu_j^{-}(F)=\nu_j^{-}(G)$,
$(j=1,2,\dots,s)$. If
$\beta_{s+1}^{+}(F,G)+\beta_{s+1}^{-}(F,G)<\infty$, then
\[
F-G=\frac{\beta_{s+1}^{+}(F,G)+\beta_{s+1}^{-}(F,G)}{(s+1)!}(\dirac _1-\dirac)^{s+1}\varTheta.
\]
If, in addition, $\beta_{s+2}^{+}(F,G)+\beta_{s+2}^{-}(F,G)<\infty$ and
$s$ is even, then
\begin{align*}
F-G&=\frac{\beta_{s+1}^{+}(F,G)-\beta_{s+1}^{-}(F,G)}{(s+1)!}(\dirac _1-\dirac)^{s+1}
\\
&\quad+ \bigl[\beta_{s+2}^{+}(F,G)+\beta_{s+2}^{-}(F,G)+
\beta_{s+1}^{-}(F,G) \bigr](\dirac_1-
\dirac)^{s+2}\varTheta C(s).
\end{align*}
\end{lemma}
\begin{proof} Observe that
\begin{align*}
(\dirac_1-\dirac)^{s+1}+(\dirac_{-1}-
\dirac)^{s+1}&= (\dirac_1-\dirac)^{s+1}-(
\dirac_{-1})^{s+1}(\dirac_1-\dirac
)^{s+1}
\\
&=(\dirac_1-\dirac)^{s+1}\dirac_{-1}(
\dirac_1-\dirac)\sum_{j=1}^{s+1}(
\dirac_{-1})^{s+1-j}
\\
&= (\dirac_1-\dirac)^{s+2}\varTheta(s+1).
\end{align*}
The lemma now follows from (\ref{nuF}).
\end{proof}
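The identity used in this proof can be checked numerically by convolving signed measures on $\ZZ$. The Python sketch below (an illustration, not part of the paper) verifies, for $s=2$, that $(\dirac_1-\dirac)^{s+1}+(\dirac_{-1}-\dirac)^{s+1}=(\dirac_1-\dirac)^{s+2}\sum_{j=1}^{s+1}\dirac_{-j}$, which exhibits the factor $(\dirac_1-\dirac)^{s+2}\varTheta(s+1)$, since $\|\sum_{j=1}^{s+1}\dirac_{-j}\|=s+1$.

```python
def conv(a, b):
    # convolution of finite signed measures on the integers, given as dicts
    out = {}
    for i, x in a.items():
        for j, y in b.items():
            out[i + j] = out.get(i + j, 0.0) + x * y
    return out

def conv_power(a, n):
    out = {0: 1.0}  # δ_0, the convolution identity
    for _ in range(n):
        out = conv(out, a)
    return out

s = 2  # any even s works; s = 2 keeps the measures small
d1 = {1: 1.0, 0: -1.0}    # δ_1 - δ
dm1 = {-1: 1.0, 0: -1.0}  # δ_{-1} - δ

lhs = conv_power(d1, s + 1)
for k, v in conv_power(dm1, s + 1).items():
    lhs[k] = lhs.get(k, 0.0) + v

# (δ_1 - δ)^{s+2} convolved with Σ_{j=1}^{s+1} δ_{-j}
rhs = conv(conv_power(d1, s + 2), {-j: 1.0 for j in range(1, s + 2)})
```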
\begin{lemma} Let $F\in\F_Z$ with mean $\mu(F)$ and variance $\sigma
^2(F)$, both finite.
Then, for all $\vert t\vert\leqslant\pi$,
\begin{align}
\bigl\vert\w F(t)\bigr\vert&\leqslant1-\frac{(1-\| (\dirac_1-\dirac)F\|
/2)t^2}{4\pi
}
\nonumber
\\
&\leqslant\exp \biggl\{-\frac{(1-\| (\dirac_1-\dirac)F\|/2)}{\pi
}\sin^2\frac {t}{2}
\biggr\},\label{mineka2}
\\
\bigl\vert \bigl(\w F(t)\ee^{-\ii t\mu(F)} \bigr)'\bigr\vert&\leqslant
\pi^2\sigma^2(F)\bigl\vert\sin(t/2)\bigr\vert.\label{roos}
\end{align}
\end{lemma}
The first estimate in (\ref{mineka2}) is given on p.~884 of \cite{BKN14};
the second estimate in (\ref{mineka2}) is trivial. For the proof of
(\ref{roos}), see p.~81 in \cite{Ce16}.
\begin{lemma}\label{varijotas} Let $M\in\M$ be concentrated on $\ZZ$,
$\sum_{k\in\ZZ}\vert k\vert\vert M\{k\}\vert<\infty$. Then, for any
$a\in\RR$ and $b>0$, the following inequality holds:
\begin{equation*}
\| M\|\leqslant(1+b\pi)^{1/2} \Biggl( \frac{1}{2\pi}\int
_{-\pi}^\pi \biggl( \bigl\vert\w M(t) \bigr\vert^2+
\frac{1}{b^2} \bigl\vert \bigl(\ee^{-\ii ta}\w M(t) \bigr)'
\bigr\vert^2 \biggr)\,\dd t \Biggr)^{1/2}. \label{var}
\end{equation*}
\end{lemma}
Lemma \ref{varijotas} is a well-known inversion inequality for lattice
distributions. Its proof can be found, for example, in \cite{Ce16},
Lemma 5.1.
\begin{lemma}\label{MBlem} Let $H_{in}=\eL(w_iS_i)$ and let $D_{in}$ be
defined by (\ref{Din}), $i=1,\dots,N$. Let conditions (\ref{cond1})
hold. Then, for $i=1,2,\dots,N$,
\begin{align*}
H_{in}-D_{in}&=\qubar_i(p_i+
\qubar_i)Y_i\exp\{n_i\gamma_iY_i/60
\}\varTheta C+(p_i+ \qubar_i) (\dirac_{w_i}-
\dirac)\varTheta C \ee^{-C_in_i},
\\
H_{in}&=\exp\{n_i\gamma_iY_i/30
\} \varTheta C+(p_i+\qubar_i) (\dirac_{w_i}-
\dirac)\varTheta C\ee^{-C_in_i},
\\
D_{in}&=\exp\{n_i\gamma_iY_i/30
\} \varTheta C,\qquad\ee^{-C_in_i}\leqslant\frac{C(k_0)\qubar
_i}{\sqrt{\max(n_i\qubar_i,1)}},
\\
\bigl\vert\w Y_i(t)\bigr\vert&\leqslant4\bigl\vert\sin(tw_i/2)\bigr\vert,
\qquad Re \w Y_i(t)\geqslant-\frac{4}{3}\sin^2(tw_i/2),
\qquad\frac{\qubar
_i}{2}\leqslant\gamma_i\leqslant
\qubar_i.
\end{align*}
\end{lemma}
\begin{proof} The statements follow from Lemma 5.4, Lemma 5.1 and the
relations given on pp.~1131--1132 in \cite{CeVe10}. The estimate for
$\ee^{-C_in_i}$ follows from the first assumption in (\ref{cond1}) and
the following simple estimate
\begin{align*}
\ee^{-C_in_i}&\leqslant\ee^{-C_in_i/2}\ee^{-C_in_i\qubar
_i/2}\leqslant
\frac{C(k_0)}{n_i^{k_0}}\frac{2}{1+C_in_i\qubar_i}
\\
&\leqslant\frac{C(k_0)\qubar_i}{\min(1,C_i)(1+n_i\qubar
_i)}\leqslant\frac{C(k_0)\qubar_i}{\min(1,C_i)\max(n_i\qubar
_i,1)}.\qedhere
\end{align*}
\end{proof}
\section{Proofs for sums of independent rvs}
\begin{proof}[Proof of Theorem \ref{Teorema}] Let $F_{ij,w}$ (resp.
$G_{ij,w}$) denote the distribution of $w_iX_{ij}$ (resp. $w_iZ_{ij}$).
Note that
$\w F_{ij,w}(t)=\w F_{ij}(w_it)$. By the triangle inequality
\begin{align*}
\bigl\vert\eL(S)-\eL(Z)\bigr\vert_K&= \Biggl\vert\prod
_{i=1}^N \eL (w_iS_i)-
\prod_{i=1}^N \eL(w_iZ_i)
\Biggr\vert_K
\\
&\leqslant\sum_{i=1}^N \Biggl\vert \bigl(
\eL(w_iS_i)-\eL (w_iZ_i)
\bigr) \prod_{l=1}^{i-1}\eL(w_lS_l)
\prod_{l=i+1}^N\eL (w_lZ_l)
\Biggr\vert_K.
\end{align*}
Similarly,
\begin{align*}
\eL(w_iS_i)-\eL(w_iZ_i)&=
\prod_{j=1}^{n_i}F_{ij,w}-\prod
_{j=1}^{n_i}G_{ij,w}
\\
&= \sum_{j=1}^{n_i}(F_{ij,w}-G_{ij,w})
\prod_{k=1}^{j-1}F_{ik,w}\prod
_{k=j+1}^{n_i}G_{ik,w}.
\end{align*}
For the sake of brevity, let
\begin{align*}
E_{ij}&:=\prod_{k=1}^{j-1}F_{ik,w}
\prod_{k=j+1}^{n_i}G_{ik,w},
\\
T_i&:=\prod_{l=1}^{i-1}
\eL(w_lS_l)\prod_{l=i+1}^N
\eL(w_lZ_l)= \prod_{l=1}^{i-1}
\prod_{m=1}^{n_l}F_{lm,w} \prod
_{l=i+1}^N\prod_{m=1}^{n_l}G_{lm,w}.
\end{align*}
Then, combining both equations given above with Lemma \ref{f-g}, we get
\begin{align}
\bigl\vert\eL(S)-\eL(Z)\bigr\vert_K&\leqslant C(s) \sum
_{i=1}^N\sum_{j=1}^{n_i}
\bigl[\beta_{s+1}^+(F_{ij},G_{ij})
\nonumber
\\
&\quad+ \beta_{s+1}^{-} (F_{ij},G_{ij})
\bigr]\bigl\vert(\dirac_{w_i}-\dirac)^{s+1}E_{ij}T_i
\bigr\vert_K.\label{bndr1}
\end{align}
Let $\vert t\vert\leqslant\pi/\max_iw_i$. Then it follows from
(\ref
{mineka2}) that
\begin{equation}
\bigl\vert\w E_{ij}(t)\w T_i(t)\bigr\vert\leqslant
\ee^{u_{ij}\sin
^2(tw_i/2)/\pi
} \exp \Biggl\{-\frac{1}{\pi}\sum
_{l=1}^N \sum_{m=1}^{n_l}u_{lm}
\sin ^2\frac {tw_l}{2} \Biggr\}. \label{expo}
\end{equation}
Observe that $\ee^{u_{ij}\sin^2(tw_i/2)/\pi}\leqslant\ee^{1/\pi}=C$.
Next, let
\begin{equation}
L:=\frac{1}{8\pi}\sum_{l=1}^N\sum
_{m=1}^{n_l}u_{lm} \bigl[(
\dirac_{w_l}-\dirac)+(\dirac_{-w_l}-\dirac) \bigr].
\label{virsus}
\end{equation}
It is not difficult to check that $\exp\{L\}$ is a CP distribution
with a non-negative characteristic function. Also, by the definition of an
exponential measure, $\exp\{-L\}$, which can be called \emph{the
inverse} of $\exp\{L\}$, is a signed measure with finite variation.
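The inverse measure can also be illustrated numerically: truncating the exponential series, the convolution $\exp\{M\}\exp\{-M\}$ returns the identity $\dirac$. A Python sketch (not part of the paper; the small measure below is an assumed toy example of the same shape as $L$):

```python
import math

def conv(a, b):
    # convolution of finite signed measures on the integers (dicts)
    out = {}
    for i, x in a.items():
        for j, y in b.items():
            out[i + j] = out.get(i + j, 0.0) + x * y
    return out

def exp_measure(m, terms=40):
    # exp{M} = Σ_{k >= 0} M^k / k!  (convolution powers), truncated at `terms`
    out = {0: 1.0}
    mk = {0: 1.0}
    for k in range(1, terms):
        mk = conv(mk, m)
        for key, v in mk.items():
            out[key] = out.get(key, 0.0) + v / math.factorial(k)
    return out

# a toy measure of the same shape as L (assumed coefficients)
M = {1: 0.5, 0: -1.0, -1: 0.5}
E = exp_measure(M)                                 # CP distribution exp{M}
Einv = exp_measure({k: -v for k, v in M.items()})  # the inverse exp{-M}
prod = conv(E, Einv)                               # ≈ δ_0
```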
We have
\begin{equation}
\vert(\dirac_{w_i}-\dirac)^{s+1}E_{ij}T_i
\vert_K= \vert(\dirac_{w_i}-\dirac)^{s+1}E_{ij}T_i
\exp\{-L\}\exp\{L\}\vert _K. \label{jo}\vadjust{\goodbreak}
\end{equation}
The next step is similar to the definition of \emph{mod}-Poisson
convergence. We apply Lemma \ref{ad} with $h=\max w_i/\pi$,
$U_1=\exp\{L\}$ and $W_1=(\dirac_{w_i}-\dirac
)^{s+1}E_{ij}T_i\exp\{-L\}$. By Lemma \ref{ac},
\begin{align}
Q\bigl(\exp\{L\},h\bigr)&\leqslant C\frac{\max w_i}{\min w_i}\cdot Q\bigl(\exp\{L\} ,
\min w_i/2\bigr)
\nonumber
\\
&\leqslant C\frac{\max w_i}{\min w_i} \Biggl( \sum_{l=1}^N
\sum_{m=1}^{n_l}u_{lm}
\Biggr)^{-1/2}. \label{quu}
\end{align}
From (\ref{expo}) and (\ref{virsus}), it follows that
\begin{align}
\biggl\vert\frac{\w W_1(t)}{t} \biggr\vert\cdot\frac{1}{h}&\leqslant C(s)
\frac{\vert
\sin(tw_i/2)\vert^{s+1}}{h\vert t\vert} \exp \Biggl\{-\frac{1}{2\pi}\sum
_{l=1}^N \sum_{m=1}^{n_l}u_{lm}
\sin^2\frac {tw_l}{2} \Biggr\}
\nonumber
\\
&\leqslant C(s)\frac{w_i}{h}\bigl\vert\sin(tw_i/2)
\bigr\vert^s \exp \Biggl\{-\frac{1}{2\pi }\sum
_{m=1}^{n_i}u_{im} \sin ^2(tw_i/2)
\Biggr\}
\nonumber
\\
&\leqslant C(s) \Biggl(\sum_{m=1}^{n_i}u_{im}
\Biggr)^{-s/2}. \label{jo2}
\end{align}
It remains to estimate $\| W_1\|$.
Let
\begin{align*}
\varPhi_{lm,w}&:=F_{lm,w}\exp \biggl\{\frac{1}{8\pi }u_{lm}
\bigl[(\dirac_{w_l}-\dirac)+(\dirac_{-w_l}-\dirac) \bigr]
\biggr\},
\\
\varPsi_{lm,w}&:=G_{lm,w}\exp \biggl\{\frac{1}{8\pi}u_{lm}
\bigl[(\dirac_{w_l}-\dirac)+(\dirac_{-w_l}-\dirac) \bigr]
\biggr\}
\end{align*}
Then by the properties of the total variation norm,
\begin{align}
\| W_1\|&\leqslant \biggl\|\exp \biggl\{\frac{1}{8}u_{ij}
\bigl[(\dirac_{w_i}-\dirac)+(\dirac_{-w_i}-\dirac) \bigr]
\biggr\} \biggr\|
\nonumber
\\
&\quad\times \Biggl\|(\dirac_{w_i}-\dirac)^{s+1}\prod
_{k=1}^{j-1}\varPhi_{ik,w}\prod
_{k=j+1}^{n_i}\varPsi_{ik,w} \Biggr\|
\nonumber
\\
&\quad\times\prod_{l=1}^{i-1} \Biggl\|\prod
_{m=1}^{n_l}\varPhi _{lm,w} \Biggr\|\prod
_{l=i+1}^N \Biggl\|\prod
_{m=1}^{n_l}\varPsi_{lm,w} \Biggr\|. \label{W1a}
\end{align}
The first norm in (\ref{W1a}) is bounded by $\exp \{\frac
{1}{8}u_{ij}[\| \dirac_{w_i}-\dirac\|+\| \dirac_{-w_i}-\dirac\|
] \}\leqslant
\exp\{1/2\}$. The total variation norm is invariant with respect to
scale. Therefore, without loss of generality, we can switch to $w_l=1$.
In this case, we use the notations $\varPhi_{ik},\varPsi_{ik}$. Then, again
employing the inverse CP measures, we get
\begin{align*}
& \Biggl\|(\dirac_{w_i}-\dirac)^{s+1}\prod
_{k=1}^{j-1}\varPhi _{ik,w}\prod
_{k=j+1}^{n_i}\varPsi_{ik,w} \Biggr\|
\\
&\quad= \Biggl\|(\dirac_{1}-\dirac)^{s+1}\prod
_{k=1}^{j-1}\varPhi _{ik}\prod
_{k=j+1}^{n_i}\varPsi_{ik} \Biggr\|
\\
&\quad= \Biggl\|(\dirac_{1}-\dirac)^{s+1}\prod
_{k=1}^{j-1}\varPhi _{ik}\prod
_{k=j+1}^{n_i}\varPsi_{ik} \exp\bigl
\{u_{ij}(\dirac_1-\dirac )\bigr\}\exp\bigl
\{u_{ij}(\dirac-\dirac_1)\bigr\} \Biggr\|
\\
&\quad\leqslant\ee^2 \Biggl\|(\dirac_{1}-\dirac)^{s+1}
\exp\bigl\{ u_{ij}(\dirac_1-\dirac)\bigr\}\prod
_{k=1}^{j-1}\varPhi_{ik}\prod
_{k=j+1}^{n_i}\varPsi_{ik} \Biggr\|.
\end{align*}
We apply Lemma \ref{varijotas} with $a=u_{ij}+\sum_{k=1,k\ne
j}^{n_i}\mu_{ik}$, $b=1$, where $\mu_{ik}=\nu_1^{+}(
F_{ik})-\nu_1^{-}(F_{ik})$ is the mean of $F_{ik}$ and, due to
assumption (\ref{sal2}), of $G_{ik}$. Let
\[
\w\Delta(t):= \bigl(\ee^{\ii t}-1 \bigr)^{s+1} \exp\bigl
\{u_{ij} \bigl(\ee^{\ii t}-1-\ii t \bigr)\bigr\}\prod
_{k=1}^{j-1}\w\varPhi_{ik}(t)
\ee^{-\ii t\mu_{ik}}\prod_{k=j+1}^{n_i}\w
\varPsi_{ik}(t)\ee^{-\ii t\mu_{ik}}.
\]
It follows from (\ref{mineka2}) that
\begin{align*}
\bigl\vert\Delta(t)\bigr\vert&\leqslant C(s)\bigl\vert\sin(t/2)\bigr\vert ^{s+1}\exp \Biggl
\{- \frac {1}{2\pi}\sum_{m=1}^{n_i}u_{im}
\sin ^2(t/2) \Biggr\}
\\
&\leqslant C(s) \Biggl(\sum_{m=1}^{n_i}u_{im}
\Biggr)^{-s/2}.
\end{align*}
For the estimation of $\vert\Delta'(t)\vert$, observe that by (\ref
{mineka2}) and (\ref{roos})
\begin{align*}
\bigl\vert \bigl(\w\varPhi_{ik}(t)\ee^{-\ii t\mu_{ik}} \bigr)'
\bigr\vert& \leqslant \biggl\vert\w F_{ik}(t)\ee^{-\ii t\mu_{ik}}\frac
{u_{ik}}{\pi}
\sin(t/2)\ee^{(u_{ik}/2\pi)\sin^2(t/2)} \biggr\vert
\\
&\quad+ \bigl\vert \bigl(\w F_{ik}(t)\ee^{-\ii t\mu_{ik}}
\bigr)' \ee^{(u_{ik}/2\pi )\sin^2(t/2)} \bigr\vert
\\
&\leqslant C(s) \bigl(u_{ik}+\sigma^2_{ik}
\bigr)\bigl\vert\sin(t/2)\bigr\vert
\\
&\leqslant C(s) \bigl(u_{ik}+\sigma^2_{ik}
\bigr)\bigl\vert\sin(t/2)\bigr\vert\exp \biggl\{-\frac {u_{ik}}{\pi}\sin ^2(t/2)
\biggr\} \ee^{1/\pi}.
\end{align*}
The same bound holds for $\vert(\w\varPsi_{ik}(t)\exp\{-\ii t\mu
_{ik}\})'\vert$. The direct calculation shows that
\[
\bigl\vert \bigl( \bigl(\ee^{\ii t}-1 \bigr)^{s+1}\exp\bigl
\{u_{ij} \bigl(\ee ^{\ii t}-1-\ii t \bigr)\bigr\}
\bigr)'\bigr\vert\leqslant
C(s)
\bigl\vert\sin(t/2)\bigr\vert^{s}\exp \biggl\{-\frac{1}{\pi}
u_{ij} \sin ^2(t/2) \biggr\}.
\]
Taking into account the previous two estimates, it is not difficult
to prove that
\begin{align*}
\big\vert\Delta'(t)\big\vert&\leqslant C(s)\big\vert\sin(t/2)
\big\vert^s \exp \Biggl\{-\frac{1}{\pi }\sum
_{k=1}^{n_i}u_{ik} \sin^2(t/2)
\Biggr\}
\\
&\quad\times \Biggl(1+\sin^2(t/2)\sum_{k=1,k\ne j}^{n_i}
\bigl(u_{ik}+\sigma^2_{ik} \bigr) \Biggr)
\nonumber
\\
&\leqslant C(s) \Biggl(\sum_{k=1}^{n_i}u_{ik}
\Biggr)^{-s/2} \Biggl(1+\sum_{k=1}^{n_i}
\sigma^2_{ik} / \sum_{k=1}^{n_i}u_{ik}
\Biggr).
\end{align*}
From Lemma \ref{varijotas}, it follows that
\begin{align}
\Biggl\|(\dirac_{w_i}-\dirac)^{s+1}\prod
_{k=1}^{j-1}\varPhi _{ik,w}\prod
_{k=j+1}^{n_i}\varPsi_{ik,w} \Biggr\| \leqslant C(s)
\Biggl(\sum_{k=1}^{n_i}u_{ik}
\Biggr)^{-s/2} \Biggl(1+\sum_{k=1}^{n_i}
\sigma^2_{ik} / \sum_{k=1}^{n_i}u_{ik}
\Biggr).\label{fin3}
\end{align}
The remaining two norms in (\ref{W1a}) can be estimated similarly:
\begin{equation}
\Biggl\|\prod_{m=1}^{n_l}\varPhi_{lm,w}
\Biggr\|, \Biggl\|\prod_{m=1}^{n_l}\varPsi _{lm,w}
\Biggr\|\leqslant C \Biggl(1+\sum_{m=1}^{n_l}
\sigma^2_{lm} /\sum_{m=1}^{n_l}u_{lm}
\Biggr). \label{fin4}
\end{equation}
Substituting (\ref{fin3}), (\ref{fin4}) into (\ref{W1a}), we obtain
\begin{equation}
\| W_1\|\leqslant C(N,s) \Biggl(\sum_{m=1}^{n_i}u_{im}
\Biggr)^{-s/2} \prod_{l=1}^N
\Biggl(1+\sum_{k=1}^{n_l}\sigma^2_{lk}
/\sum_{k=1}^{n_l} u_{lk}
\Biggr).\label{fin5}
\end{equation}
Combining (\ref{fin5}) with (\ref{quu}), (\ref{jo2}) and (\ref
{jo}), we get
\begin{align*}
\bigl\vert(\dirac_{w_i}-\dirac)^{s+1}E_{ij}T_i
\bigr\vert_K &\leqslant C(N,s)\frac{\max_jw_j}{\min_j
w_j} \Biggl(\sum
_{i=1}^N \sum_{k=1}^{n_i}u_{ik}
\Biggr)^{-1/2}
\\
&\quad\times \Biggl(\sum_{m=1}^{n_i}u_{im}
\Biggr)^{-s/2}\prod_{l=1}^N
\Biggl(1+\sum_{k=1}^{n_l}\sigma^2_{lk}
/\sum_{k=1}^{n_l} u_{lk} \Biggr).
\end{align*}
Substituting the last estimate into (\ref{bndr1}) we complete the
proof of
(\ref{BTa}).
The proof of (\ref{BTb}) is very similar and, therefore, omitted.
\end{proof}
\begin{proof}[Proof of Theorem \ref{TeoremaID}] We outline only the
differences from the proof of Theorem \ref{Teorema}.
No convolution with the inverse Poisson measure is required, since we
have the powers $F_i^{n_i}$, which can be used in L\'evy's
concentration function. Let $\lfloor a\rfloor$
denote the integer part of $a$ and let $a(k):=\lfloor(k-1)/2\rfloor$,
$b(k):=\lfloor(n_i-k)/2\rfloor$. Then, as in the proof of Theorem
\ref
{Teorema}, we obtain
\begin{align*}
\bigl\vert\eL(S)-\eL(Z)\bigr\vert_K&\leqslant C(s)\sum
_{i=1}^N\sum_{k=1}^{n_i}
\bigl(\beta_{s+1}^{+}(F_i,G_i)+
\beta_{s+1}^{-}(F_i,G_i) \bigr)
\\
&\quad\times \Biggl\vert(\dirac_{w_i}-\dirac)^{s+1}F_{iw}^{a(k)}G_{iw}^{b(k)}
F_{iw}^{a(k)}G_{iw}^{b(k)}\prod
_{j=1}^{i-1}F_{jw}^{n_j} \prod
_{j=i+1}^NG_{jw}^{n_j}
\Biggr\vert_K.
\end{align*}
Here $F_{iw}$ and $G_{iw}$ denote the distributions of $w_iX_{ij}$ and
$w_iZ_{ij}$, respectively. We can apply Lemma \ref{ad} to the
Kolmogorov norm given above, taking $W=(\dirac_{w_i}-\dirac
)^{s+1}F_{iw}^{a(k)}G_{iw}^{b(k)}$. The remaining distribution is used
in L\'evy's
concentration function. The Fourier--Stieltjes transform $\w W(t)/t$ is
estimated exactly as in the proof of Theorem \ref{Teorema}. The total
variation norm of any distribution is equal to 1; therefore, $\| W\|\leqslant
\| \dirac_{w_i}-\dirac\|\leqslant2$ and we can avoid the
application of Lemma \ref{varijotas}.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cornb}] As proved in \cite{BaXi99}, p.~144,
\[
\frac{1}{2}\bigl\| G_k(\dirac_1-\dirac)\bigr\|\leqslant
\biggl(\frac{p_k\nu
_1^{+}(F_k)}{q_k}\ln\frac{1}{p_k} \biggr)^{-1/2}.
\]
Observe that $
\nu_1^{+}(F_j)=\nu_1^{+}(G_j)$ and $\nu_2^{+}(F_j)=\nu_2^{+}(G_j)$. It
remains to find $\nu_3^+(G_j)$ and apply Theorem \ref{TeoremaID}.
\end{proof}
\section{Proof of Theorem \ref{TMARK}}
The proof is similar to the one given in \cite{SlCe16}. Let
$A_i=\exp\{n_i\gamma_iY_i/30\}$.
From Lemma \ref{MBlem}, it follows that
\[
H_{in}=A_i\varTheta_i C +\ee^{-C_in_i}
\varTheta_i C,\qquad D_{in}=A_i
\varTheta_i C,\quad i=1,2,\dots,N.
\]
Here we have added the index $i$ to $\varTheta_i$, emphasizing that these
quantities might differ for different $i$.
As usual, empty convolution products are understood as $\prod_{k=N+1}^N=\prod_{k=1}^0=\dirac$.
Let us also denote by $\sum_i^*$ the summation over all indices
$\{j_1,j_2,\dots,j_{i-1}\in\{0,1\}\}$.
Taking into account Lemma \ref{MBlem} and the properties of the
Kolmogorov and total variation norms given in the Introduction, we get
\begin{align}
& \Biggl\vert\prod_{i=1}^NH_{in}-
\prod_{i=1}^ND_{in} \Biggr\vert
_K
\nonumber
\\
&\quad\leqslant\sum_{i=1}^N
\Biggl\vert(H_{in}-D_{in})\prod_{k=1}^{i-1}H_{kn}
\prod_{k=i+1}^ND_{kn}
\Biggr\vert_K
\nonumber
\\
&\quad\leqslant\sum_{i=1}^N
\Biggl\vert(H_{in}-D_{in})\sum\nolimits ^*_i
\prod_{k=1}^{i-1}A_k^{j_k}
\varTheta_kC
\nonumber
\\
&\qquad \times\prod_{k=i+1}^NA_k
\varTheta_k C \prod_{k=1}^{i-1}\ee
^{-(1-j_k)n_kC_k}\varTheta_kC \Biggr\vert_K
\nonumber
\\
&\quad\leqslant C(N)\sum_{i=1}^N
\qubar_i(p_i+\qubar_i)\sum
\nolimits_i^{*} \Biggl\vert Y_i\exp
\{n_i\gamma_iY_i/60\} \prod
_{k=1}^{i-1}A_k^{j_k}\prod
_{k=i+1}^NA_k
\Biggr\vert_K
\nonumber
\\
&\qquad\times\prod_{k=1}^{i-1}
\ee^{-(1-j_k)n_kC_k} +C\sum_{i=1}^N(p_i+
\qubar_i)\ee^{-C_in_i}
\nonumber
\\
&\qquad\times\sum\nolimits_i^{*} \Biggl\vert(
\dirac_{w_i}-\dirac)\prod_{k=1}^{i-1}A_k^{j_k}
\prod_{k=i+1}^NA_k
\Biggr\vert_K\prod_{k=1}^{i-1}
\ee^{-(1-j_k)n_kC_k}.\label{marko1}
\end{align}
Both summands on the right-hand side of (\ref{marko1}) are estimated
similarly. Observe that
\begin{align*}
& \Biggl\vert Y_i\exp\{n_i\gamma_iY_i/60
\} \prod_{k=1}^{i-1}A_k^{j_k}
\prod_{k=i+1}^NA_k
\Biggr\vert_K
\\
&\quad= \Biggl\vert Y_i \exp \Biggl\{\frac{n_i\gamma_iY_i}{60}+
\frac
{1}{30} \sum_{k=1}^{i-1}j_kn_k
\gamma_kY_k+\frac{1}{30}\sum
_{k=i+1}^Nn_k\gamma_kY_k
\Biggr\} \Biggr\vert_K.
\end{align*}
Next we apply Lemma \ref{ad} with $W=Y_i$ and $h=\max w_i/\pi$ and
$V$ with
\begin{align*}
\w V(t)&= \exp \Biggl\{-\frac{1}{90} \Biggl[\sum
_{k=1}^{i-1}j_k\max(n_k
\qubar_k,1)\sin^2(tw_k/2)
\\
&\quad+\sum_{k=i}^N\max(n_k
\qubar_k,1)\sin^2(tw_k/2) \Biggr] \Biggr\}.
\end{align*}
By Lemma \ref{MBlem}
\[
\frac{\vert\w Y_i(t)\vert}{t}\frac{1}{h}+\| Y_i\|\leqslant C.
\]
Observe that
\begin{align*}
& \Biggl\vert\exp \Biggl\{\frac{n_i\gamma_i}{60}\w Y_i(t)+\frac
{1}{30}
\sum_{k=1}^{i-1}j_kn_k
\gamma_k\w Y_k(t)+\frac{1}{30}\sum
_{k=i+1}^Nn_k\gamma_k\w Y_k(t)
\Biggr\} \Biggr\vert
\nonumber
\\
&\quad\leqslant \exp\Biggl\{-\frac{n_i\gamma_i\sin^2(tw_i/2)}{45}-\frac{2}{45}\sum
_{k=1}^{i-1}j_kn_k
\gamma_k\sin^2(tw_k/2)
\\
&\qquad-\frac{2}{45}\sum_{k=i+1}^N
n_k\gamma_k\sin^2(tw_k/2) \Biggr
\}
\nonumber
\\
&\quad\leqslant\exp \Biggl\{-\frac{1}{90} \Biggl[\sum
_{k=1}^{i-1}j_kn_k\qubar
_k\sin^2(tw_k/2) +\sum
_{k=i}^Nn_k\qubar _k
\sin^2(tw_k/2) \Biggr] \Biggr\}
\nonumber
\\
&\quad\leqslant\ee^{N/90}\exp\Biggl\{-\frac{1}{90} \Biggl[\sum
_{k=1}^{i-1}j_k(n_k
\qubar_k+1)\sin^2(tw_k/2) \\
&\qquad+\sum
_{k=i}^N(n_k\qubar_k+1)
\sin^2(tw_k/2) \Biggr] \Biggr\}
\nonumber
\\
&\quad\leqslant\ee^{N/90} \exp\Biggl\{-\frac{1}{90} \Biggl[\sum
_{k=1}^{i-1}j_k
\max(n_k\qubar_k,1)\sin^2(tw_k/2)
\\
&\qquad+\sum_{k=i}^N\max(n_k
\qubar_k,1)\sin^2(tw_k/2) \Biggr] \Biggr\}
\nonumber
\\
&\quad=\ee^{N/90}\w V(t).
\end{align*}
Therefore, using Lemma \ref{ac}, we prove
\begin{align}
& \Biggl\vert Y_i\exp\{n_i\gamma_iY_i/60
\} \prod_{k=1}^{i-1}A_k^{j_k}
\prod_{k=i+1}^NA_k
\Biggr\vert_K\notag
\\
&\quad\leqslant C(N)Q\Bigl(V,\max_i w_i/h
\Bigr)
\nonumber
\\
&\quad\leqslant C(N) \biggl(\frac{\max w_i}{\min w_i} \biggr) Q(V,\min
w_i/2)
\nonumber
\\
&\quad\leqslant C(N) \biggl(\frac{\max w_i}{\min w_i} \biggr) \Biggl(\sum
_{k=1}^{i-1}j_k\max(n_k
\qubar_k,1)+\sum_{k=i+1}^N
\max(n_k\qubar _k,1) \Biggr)^{-1/2}.\label{mmm}
\end{align}
Next observe that by Lemma \ref{MBlem},
\begin{align*}
\Biggl\vert\prod_{k=1}^{i-1}\ee^{-(1-j_k)n_kC_k}
\Biggr\vert &= C\exp \Biggl\{-\sum_{k=1}^{i-1}(1-j_k)C_kn_k
\Biggr\}
\\
&\leqslant\frac{C(k_0,N)}{\max(1,\sqrt{\sum_{k=1}^{i-1}(1-j_k)\max
(n_k\qubar
_k,1)} )}.
\end{align*}
The last estimate, (\ref{mmm}) and the trivial inequality
$1/(ab)<2/(a+b)$, valid for any $a,b\geqslant1$, allow us to obtain
\begin{align*}
&\sum_{i=1}^N\qubar_i(p_i+
\qubar_i)\sum\nolimits_i^{*} \Biggl\vert
Y_i\exp\{n_i\gamma_iY_i/60\}
\prod_{k=1}^{i-1}A_k^{j_k}
\prod_{k=i+1}^NA_k
\Biggr\vert_K \prod_{k=1}^{i-1}
\ee^{-(1-j_k)n_kC_k}
\\
&\quad\leqslant C(k_0,N)\frac{\max w_j}{\min w_j}\cdot \frac{\sum_{i=1}^N\qubar_i(p_i+\qubar_i)}{\sqrt{\sum_{k=1}^N\max
(n_k\qubar_k,1)}}.
\end{align*}
The estimation of the second sum in (\ref{marko1}) is almost identical
and, therefore, omitted. \qed
\begin{acknowledgement}
The main part of the work was accomplished during the first
author's stay at the Department of Mathematics, IIT Bombay, during
January, 2018. The first author would like to thank the members
of the Department for their hospitality.
We are grateful to the
referees for useful remarks.
\end{acknowledgement}
\section{Introduction}
\label{sec:intro}
The increasing density of wireless networks, due to the proliferation of mobile devices, motivates the deployment of cooperative systems, where the communication between a source and a destination takes place via intermediate relay nodes. The incorporation of multiple relays can lead to significant improvements by appropriately exploiting the degrees of freedom that are introduced in the network. However, the fact that several relay nodes require simultaneous access to the channel stresses the need for new Medium Access Control (MAC) protocols for effective relay coordination. Efficient MAC protocol design and assessment require the consideration of realistic physical (PHY) layer models and channel conditions (e.g., fast fading and shadowing), making a MAC/PHY cross-layer approach imperative \cite{cl}.
Although the cross-layer concept was initially applied in conventional networks \cite{cl5,cl1,shad3,cl6}, its potential is also significant in cooperative scenarios, where the role of the PHY layer is even more pronounced, since the selection of the relay set and the need for cooperation are determined by the quality of the links between the communicating nodes. To that end, the authors in \cite{cross} propose a cross-layer theoretical model to analyze the performance of a cooperative wireless system that employs an Automatic Repeat reQuest (ARQ) mechanism for error control in fast fading environments. The same idea is extended in \cite{cl7}, where the authors present an analytical framework for studying the performance of reliable ARQ-based relaying schemes in multihop cooperative systems. The study in \cite{shad4} introduces a cross-layer analytical model for the assessment of a multi-relay cooperative ARQ MAC protocol by taking into account the shadowing effect. In \cite{cl8}, a cooperative cross-layer MAC protocol, which combines space-time coding and adaptive modulation at the PHY layer, is proposed and analyzed. More recently, the work published in \cite{cl2} studies fundamental cooperative issues (i.e., when and whom to cooperate with) from a cross-layer perspective in distributed wireless networks.
In addition to the one-way cooperative schemes, during the last few years, the implementation of new software applications, based on Voice over IP (VoIP) and instant messaging, has driven the need for two-way (bidirectional) communication, further complicating the design of effective cooperative systems. To deal with this new trend for bidirectional communication, Network Coding (NC) has been proposed as an alternative routing mechanism that enables the relays to mix the incoming data packets before forwarding them to their final destinations. Evidently, the application of NC yields straightforward gains in bidirectional networks, since the relay nodes require fewer resources for their transmissions. This potential advantage has lately inspired several works \cite{cope,argyriou,phoenix,wang,umehara}, focusing on the design of novel cooperative MAC protocols with NC capabilities to enhance the throughput, the energy efficiency and the robustness of wireless networks. In the same context, motivated by the great interest that ARQ schemes have attracted in the literature, we have introduced an NC-aided Cooperative ARQ-based MAC protocol \cite{nccarq}, namely NCCARQ, which exploits the benefits of both NC and ARQ to improve the performance of cooperative wireless networks.
Despite their inherent differences on the channel access rules, most NC-aided MAC protocols share the common assumption of either ideal channel conditions or simplified PHY layer models. However, the existing cross-layer models for simple one-way cooperative networks do not apply directly in bidirectional communications, where the relays are selected according to the packets that have been received from both directions. In addition, another basic limitation of the existing models is the assumption of independent wireless links in the network, although recent studies \cite{cor1,cor2,cor3,cor4} have indicated the impact of shadowing spatial correlation (due to geographically proximate wireless links) on the performance of cooperative MAC protocols. Hence, considering the above limitations, the accurate performance evaluation of NC-aided protocols in correlated environments becomes essential for an efficient network planning, reducing the deployment and operational cost of the cooperative systems.
In this paper, taking into account the gaps in the current literature along with the importance of cross-layer modeling, we present a joint MAC/PHY theoretical framework to evaluate the throughput and the energy efficiency of NC-aided ARQ schemes under correlated shadowing conditions. Our main contributions can be summarized as follows:
\begin{enumerate}
\item We introduce a cross-layer analytical framework that jointly considers the MAC layer operation and the PHY layer conditions in NC-based communication scenarios. Without loss of generality, we use as an exemplary case the recently proposed NCCARQ MAC protocol \cite{nccarq} to study how correlated shadowing affects crucial protocol parameters.
\item We analytically demonstrate that the average number of active relays in the network is independent of the correlation among the wireless links from the end nodes to the relays.
\item We provide practical insights for efficient network planning for NC-based cooperative communications by revealing interesting tradeoffs between the throughput and energy efficiency performance in the network under realistic channel conditions.
\end{enumerate}
The remainder of this paper is organized as follows. Section \ref{sec:system} presents our system model, focusing on a two-way communication scenario with correlated wireless links. Section \ref{sec:impact} provides an overview for NCCARQ, highlighting the impact of the PHY layer on the protocol design and performance. In Section \ref{sec:analysis}, we introduce a joint MAC/PHY analytical framework for the throughput and the energy efficiency of the network. The validation of the model and the performance evaluation of the protocol under correlated shadowing conditions are provided in Section \ref{sec:performance}. Finally, Section \ref{sec:conclusions} concludes the paper.
\section{System Description}
\label{sec:system}
\subsection{Channel Model}
\label{sec:channel}
The network under consideration (Fig. \ref{f1}) consists of two end nodes ($A$ and $B$) that have data packets to exchange in a bidirectional communication, and a set of $n$ intermediate nodes ($R_1,R_2,...R_n$) with NC capabilities that act as relays in this network setup, assisting the communication towards both directions. The instantaneous received power at any given node $j$ from transmissions by node $i$ is denoted by $\gamma_{ij}=\frac{P_{Tx}}{d_{ij}^a} \left|h_{f_{ij}}\right|^2\left|h_{s_{ij}}\right|^2$ \cite[Eq. (1.1)]{thesismary}, where: i) $P_{Tx}$ is the common transmission power for all nodes in the network, ii) $d_{ij}$ is the $(i,j)$ distance, iii) $a$ is the path-loss coefficient, iv) $h_{f_{ij}}$ is the fast fading coefficient, modeled as a Nakagami-m random variable (RV) with $\mathbf{E}\left[\left|h_{f_{ij}}\right|^2\right]=1$, and v) $h_{s_{ij}}$ is the shadowing coefficient.
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{system-model.pdf}
\caption{System Model}\label{f1}
\end{figure}
With regard to the channel coefficients, the fast fading ergodicity allows the calculation of its mean value from a sufficiently long sample realization of the process (e.g., data packet duration). On the other hand, shadowing is a slowly varying process and, thus, it can be considered unaltered for the same or an even longer period of time \cite{thesismary}. In our work, we assume that shadowing remains constant during a communication round, which consists of the direct transmission and the cooperation phase. Therefore, since the analysis is performed at the packet level, the average received power computed over the duration of one packet may be written as $\bar{\gamma}_{ij}$=$\mathbf{E}\left[\frac{P_{Tx}}{d_{ij}^a}\left|h_{s_{ij}}\right|^2\left|h_{f_{ij}}\right|^2\right]$ = $\frac{P_{Tx}}{d_{ij}^a}\left|h_{s_{ij}}\right|^2$. According to several experimental studies (e.g., \cite{5288484}), $h_{s_{ij}}$ and, consequently, $\bar{\gamma}_{ij}$ can be modeled as a log-normal RV, which implies that $\bar{\gamma}_{ij_{dB}} = 10\log_{10}\left(\bar{\gamma}_{ij}\right)$ is a normally distributed RV with mean value $\mu_{ij_{dB}}$ and standard deviation $\sigma_{ij_{dB}}$\footnote{In the rest of the paper, for the sake of clarity and without loss of generality, the values of $\gamma, \mu, \sigma$ are always expressed in dB.}.
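As a quick numerical illustration of this model (the function names and parameter values below are our own, chosen only for the sketch), the average received power in dB decomposes into a deterministic path-loss term plus a zero-mean Gaussian shadowing term:

```python
import math
import random

def mean_rx_power_db(p_tx_db, d, a, shadow_db):
    """Average received power over one packet, in dB: transmit power
    minus path loss, plus the shadowing term (log-normal in linear
    scale, hence Gaussian in dB)."""
    return p_tx_db - 10.0 * a * math.log10(d) + shadow_db

def sample_shadowing_db(sigma_db, rng=random):
    """One shadowing sample, drawn directly in the dB domain."""
    return rng.gauss(0.0, sigma_db)

# Illustrative values: P_Tx = 20 dB, distance 10 m, path-loss
# exponent a = 3, no shadowing -> 20 - 30 = -10 dB.
print(mean_rx_power_db(20.0, 10.0, 3.0, 0.0))  # -10.0
```

Working directly in dB is what makes the outage analysis below tractable: the shadowing term is simply a normal random variable.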
Regarding the correlation model, we denote by $\rho_{1_{x,y}}$ the correlation factor between two links $AR_{x}$ and $AR_{y}$, and by $\rho_{2_{x,y}}$ the correlation factor between two links $BR_{x}$ and $BR_{y}$, respectively. On the other hand, no correlation is assumed between $AR_{x}$ and $BR_{y}$ links, i.e., $\rho(\bar{\gamma}_{AR_{x}},\bar{\gamma}_{BR_{y}}) = 0, \forall\ x,y$. The correlation factors $\rho_{1_{x,y}}$ and $\rho_{2_{x,y}}$ can be estimated as:
\begin{equation}
\rho_{1_{x,y}}=\rho\left(\bar{\gamma}_{AR_{x}},\bar{\gamma}_{AR_{y}}\right)=\mathbf{E}\left[\left(\bar{\gamma}_{AR_{x}} - \mu_{AR_{x}}\right)\left(\bar{\gamma}_{AR_{y}} - \mu_{AR_{y}}\right)\right]/\sigma_{AR_{x}}\sigma_{AR_{y}}, \forall x,y\in [1,n]
\end{equation}
\begin{equation}
\rho_{2_{x,y}}=\rho\left(\bar{\gamma}_{BR_{x}},\bar{\gamma}_{BR_{y}}\right)=\mathbf{E}\left[\left(\bar{\gamma}_{BR_{x}} - \mu_{BR_{x}}\right)\left(\bar{\gamma}_{BR_{y}} - \mu_{BR_{y}}\right)\right]/\sigma_{BR_{x}}\sigma_{BR_{y}}, \forall x,y\in [1,n].
\end{equation}
In addition, taking into account that the links to each direction have a common end point, we assume that the correlation between any pair of links $\rho_{1_{x,y}}$ or $\rho_{2_{x,y}}$ decreases exponentially as the distance between them increases, i.e., $\rho_{x,y} = \rho^{\left|x-y\right|}$ where $\rho \in [0,1]$\cite{104090}. To that end, a set of exponentially correlated normal RVs $\boldsymbol{\gamma}_{AR_{x}}=\left[\bar{\gamma}_{AR_{1}}, \ldots, \bar{\gamma}_{AR_{n}}\right]$ can be generated as:
\begingroup
\begin{equation}
\boldsymbol{\gamma}_{AR_{x}} = \mathbf{\sigma}_1 \left(\mathbf{\Sigma_n}\left(\rho_1\right)\right)^{1/2}\textbf{X}_{n\times 1}+\mathbf{\mu}_1, \label{eq1}
\end{equation}
\endgroup
\noindent where $\textbf{X}_{n\times 1} = \left[X_1,\ldots,X_n\right]^T$, with $X_i\sim \mathcal{N}(0,1)$, $\mathbf{\mu}_1 = \left[\mu_{AR_{1}},\ldots,\mu_{AR_{n}}\right]^T$, $\mathbf{\sigma}_1$ is a diagonal matrix that contains the $\sigma_{AR_{i}}$ values in its main diagonal, i.e., $\mathbf{\sigma}_1 = diag\{\sigma_{AR_{1}},\ldots,\sigma_{AR_{n}}\}$, while $\mathbf{\Sigma_n}\left(\rho_1\right)$ can be expressed as a Toeplitz matrix\footnote{This matrix is also known as Kac-Murdock-Szeg\"{o} matrix \cite{toeplitz}.}, whose entries depend on the correlation factor $\rho_1$:
\begingroup
\begin{equation}
\mathbf{\Sigma_n}\left(\rho_1\right) = \left[\begin{array}{ccccc}
1&\rho_1&\rho_1^2&\cdots&\rho_1^{n-1}\\
\rho_1&1&\rho_1&\cdots&\rho_1^{n-2}\\
\rho_1^2&\rho_1&1&\cdots&\rho_1^{n-3}\\
\vdots&\ddots&\ddots&\ddots&\vdots\\
\rho_1^{n-1}&\rho_1^{n-2}&\cdots&\rho_1&1
\end{array}\right].
\end{equation}
\endgroup
\noindent Accordingly, the exponentially correlated normal RVs $\boldsymbol{\gamma}_{BR_{x}}=\left[\bar{\gamma}_{BR_{1}}, \ldots, \bar{\gamma}_{BR_{n}}\right]$ can be generated as:
\begingroup
\begin{equation}
\boldsymbol{\gamma}_{BR_{x}} = \mathbf{\sigma}_2 \left(\mathbf{\Sigma_n}\left(\rho_2\right)\right)^{1/2}\textbf{Y}_{n\times 1}+\mathbf{\mu}_2, \label{eq2}
\end{equation}
\endgroup
\noindent where $\textbf{Y}_{n\times 1} = \left[Y_1,\ldots,Y_n\right]^T$, with $Y_i\sim \mathcal{N}(0,1)$, $\mathbf{\mu}_2 = \left[\mu_{BR_{1}},\ldots,\mu_{BR_{n}}\right]^T$, $\mathbf{\sigma}_2=diag\left\{\sigma_{BR_{1}},\ldots,\sigma_{BR_{n}}\right\}$ and $\mathbf{\Sigma_n}\left(\rho_2\right)$ is a Toeplitz matrix, function of the correlation factor $\rho_2$.
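A minimal numerical sketch of this construction (illustrative only; the function name and the symmetric-square-root route are our own choices) builds the Kac-Murdock-Szeg\"{o} matrix, takes its square root, and maps i.i.d. standard normals to a correlated shadowing vector, exactly as in the two equations above:

```python
import numpy as np

def correlated_shadowing_db(mu, sigma, rho, rng=None):
    """Draw one shadowing vector (in dB) with exponential spatial
    correlation: gamma = diag(sigma) @ Sigma_n(rho)^{1/2} @ X + mu,
    where Sigma_n(rho)[x, y] = rho**|x - y| (Kac-Murdock-Szego)."""
    rng = np.random.default_rng(rng)
    mu = np.asarray(mu, dtype=float)
    n = mu.size
    idx = np.arange(n)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
    # Symmetric square root via eigendecomposition (Sigma is PSD).
    w, V = np.linalg.eigh(Sigma)
    root = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    return np.diag(sigma) @ root @ rng.standard_normal(n) + mu
```

With $\rho$ close to 1 adjacent entries of the returned vector are strongly correlated; with $\rho=0$ the matrix reduces to the identity and the relays see independent shadowing.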
We further assume that node $B$ is marginally located in the transmission range of node $A$ (and vice versa), which implies a weak direct link with relatively low $\bar{\gamma}_{AB}$. However, the erroneous direct transmissions are compensated by employing network cooperation through ARQ control mechanisms.
\subsection{Packet Acceptance Criteria}
\label{sec:criteria}
In wireless networks, different applications (e.g., video, gaming, e-mail, etc.) require different levels of QoS, which can be provisioned through a target Packet Error Rate (PER) denoted by the probability $p^*$. Therefore, metrics such as the Average PER (APER) or the Outage PER (OPER) have to be employed in order to determine the correct reception of a packet according to the target value of $p^*$. In our case, a given relay should receive correct packets by both $A$ and $B$ in order to be able to apply NC and participate in the cooperation phase. As a result, the realistic channel conditions affect: i) the size of the active relay set ($\mathcal{A}_n$), which is composed of the relays that successfully receive packets from both end nodes, and ii) the network outage probability ($p_{out}$), defined as the probability that none of the available $n$ relays in the system receives both packets successfully, as it is assumed that the shadowing coefficients remain constant during one communication round.
At this point, let us focus on the metrics that are used to verify the correct packet reception under fast and slow fading conditions. In environments where shadowing is not considered, the ergodicity of fast fading allows the utilization of average metrics, such as the APER, to characterize the system performance and determine the acceptance of a packet. Thus, under fast fading conditions for a given PHY layer setup, the APER between two nodes $i$ and $j$ increases monotonically with their distance, i.e., $APER_{ij} = f(d_{ij})$ \cite{5288484}.
On the other hand, the criterion of correct packet reception is substantially modified in the presence of slow fading, which is a non-ergodic process. As we have seen in Section~\ref{sec:channel}, the received power $\bar{\gamma}_{ij}$ eventually depends only on the $h_{s_{ij}}$ coefficient, since $h_{f_{ij}}$ can be averaged because of its ergodicity. This implies that, in slow fading environments, the APER is a function of distance and shadowing (i.e., $APER_{ij} = f(d_{ij},h_{s_{ij}})$), and the QoS requirement $APER_{ij}\leq p^*$ is equivalent to $\bar{\gamma}_{ij}>\gamma^*$ \cite{5288484}. However, although shadowing is a non-ergodic process, $\bar{\gamma}_{ij}$ is still an RV that requires statistical characterization. In this case, the most suitable metric is the OPER (i.e., the probability of receiving an erroneous packet) and the normal distribution of $\bar{\gamma}_{ij}$ (in dB) allows us to express it as:
\begin{equation}
OPER_{ij}=Pr\left\{APER_{ij}> p^*\right\}=Pr\left\{\bar{\gamma}_{{ij}}\leq \gamma^*\right\}=1-Q\left(\frac{\gamma^*-\mu_{{ij}}}{\sigma_{{ij}}}\right),
\end{equation}
where $Q\left(\cdot\right)$ is the standard one-dimensional Gaussian Q-function, traditionally defined by $Q\left(x\right) = \int^{\infty}_{x}\frac{1}{\sqrt{2\pi}}e^{\frac{-t^2}{2}}dt$. The above expression indicates that a necessary and sufficient condition for packet acceptance is that the mean received power exceeds the threshold value $\gamma^*$. Mary et al. \cite{5288484} have provided closed-form formulas for $\gamma^*$ as a function of a target symbol error probability set by the application layer for log-normal shadowing and Nakagami-m wireless channels.
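To make the OPER computation concrete, a short sketch follows (helper names are our own; the Q-function is obtained from the complementary error function via $Q(x)=\frac{1}{2}\operatorname{erfc}(x/\sqrt{2})$):

```python
import math

def q_function(x):
    """Standard Gaussian Q-function: Q(x) = Pr{Z > x}, Z ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def oper(gamma_star_db, mu_db, sigma_db):
    """Outage PER between nodes i and j:
    Pr{mean received power (dB) <= gamma*} = 1 - Q((gamma* - mu)/sigma)."""
    return 1.0 - q_function((gamma_star_db - mu_db) / sigma_db)
```

For instance, a link whose mean received power equals the threshold ($\mu_{ij}=\gamma^*$) is in outage half of the time, since $Q(0)=1/2$.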
\section{NCCARQ Overview and PHY Layer Impact}
\label{sec:impact}
The goal of this section is to highlight the impact of a realistic PHY layer on the performance of NC-aided MAC protocols. To that end, we use as a representative case study the NCCARQ MAC protocol \cite{nccarq}, which coordinates the channel access among a set of NC-capable relay nodes in a bidirectional wireless communication. In the following sections, we briefly review the protocol operation and explicitly study the changes introduced by the realistic PHY layer consideration.
\subsection{NCCARQ Overview}
\label{sec:overview}
The NCCARQ MAC protocol \cite{nccarq} has been designed to exploit the benefits of both ARQ and NC in two-way cooperative wireless networks, being backwards compatible with the Distributed Coordination Function (DCF) of the IEEE 802.11 Standard \cite{80211}. The operation of the protocol relies on two main factors: i) the broadcast nature of wireless communications, which enables the cooperation between the mobile nodes, and ii) the capability of the intermediate relay nodes to perform NC before any transmission.
Fig. \ref{nccmac} presents an example of the frame sequence in NCCARQ, where two end nodes ($A$ and $B$) want to exchange their data packets (a and b, respectively) with the assistance of three NC-capable relay nodes ($R_1,R_2,R_3$). In this particular example, the protocol operates as follows:
\begin{itemize}
\item Node A transmits packet a to node B. The relays overhear the transmission correctly, while we assume that node B fails to demodulate the received packet.
\item Node B triggers the cooperation phase by broadcasting a Request For Cooperation (RFC) control packet. In addition, unlike conventional cooperative ARQ protocols, NCCARQ allows piggyback data transmissions along with the RFC (in this example, the data packet b), thus leveraging the NC application.
\item After the reception of the RFC and since we assume ideal channel conditions, the relays apply NC to the two data packets (a and b) and set up their backoff counters according to the DCF rules in order to gain channel access and transmit the NC packet (a$\oplus$b) to the end nodes.
\item In this example, we assume that the three relays select the values $R_1=2$, $R_2=2$ and $R_3=3$ for their backoff counters, respectively. As a result, after two time slots, $R_1$ and $R_2$ attempt a concurrent transmission and $R_3$ freezes its counter.
\item The simultaneous packet transmission results in a collision and, according to the DCF rules, the two relays reset their backoff counters to $R_1=5$ and $R_2=12$, respectively, while $R_3=1$. Therefore, after one time slot, $R_3$ transmits the coded packet and the two destinations sequentially broadcast acknowledgment (ACK) packets, terminating the cooperation phase.
\end{itemize}
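The backoff dynamics of this example can be reproduced with a short simulation sketch. This is a deliberately simplified DCF model under our own assumptions (per-slot countdown, collision on simultaneous expiry, contention window doubled on collision, frozen counters resumed afterwards), not the full 802.11 state machine:

```python
import random

def contention_winner(counters, cw=16, rng=random):
    """Resolve channel access among active relays holding backoff
    counters. A unique minimum transmits; a tie is a collision, and the
    colliding relays redraw their counters from a doubled window while
    the others freeze and later resume their countdown. Returns the
    winner's id and the number of elapsed backoff slots."""
    counters = dict(counters)
    slots = 0
    while True:
        m = min(counters.values())
        slots += m                      # idle slots until first expiry
        expired = [r for r, c in counters.items() if c == m]
        if len(expired) == 1:
            return expired[0], slots    # unique minimum: transmission
        cw = min(2 * cw, 1024)          # collision: double the window
        for r in expired:
            counters[r] = rng.randint(1, cw)
        for r in counters:
            if r not in expired:
                counters[r] -= m        # frozen relays keep the residue
```

Feeding in the counters of the example above ($R_1=2$, $R_2=2$, $R_3=3$) with redraws of 5 and 12 after the collision reproduces the described outcome: $R_3$ wins one slot after the collision.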
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{nccmac1.jpg}
\caption{NCCARQ operation without PHY layer consideration}\label{nccmac}
\end{figure}
Hence, the participation of multiple nodes in the contention phase results in idle slots and collisions in the network, before eventually a relay node manages to successfully transmit the coded packet. However, apart from the collisions and the idle periods, the protocol performance may be also degraded due to fading (either fast or slow) introduced by taking into account non-ideal channel conditions. In the next section, we provide some insights for the modifications that the realistic PHY layer potentially brings to the protocol operation.
\subsection{PHY Layer Impact}
\label{sec:phy}
The PHY layer consideration significantly modifies the protocol operation, as depicted in Fig. \ref{nccphy}. In this case, the protocol operates as follows:
\begin{itemize}
\item Node A transmits packet a to node B. Node B and $R_3$ fail to demodulate the received packet, while $R_1$ and $R_2$ overhear the transmission correctly.
\item Node B triggers the cooperation phase by broadcasting an RFC control packet along with data packet b. In this example, we assume that only $R_3$ receives correctly the data packet.
\item Since $R_1$ and $R_2$ have received only packet a, and $R_3$ has received only packet b, there is no node in the relay set that can apply NC. As a result, after a predefined time ($T_{timeout}$), node A starts a new communication by transmitting packet a, which is correctly received by $R_1$ and $R_3$.
\item Node B broadcasts again an RFC control packet along with the data packet b, which is correctly received by all the relays.
\item Consequently, in this communication round, $R_1$ and $R_3$ have correctly received both packets, thus being able to participate in the cooperation phase. Accordingly, they set up their backoff counters to $R_1=2$ and $R_3=3$, respectively, and $R_1$ gains access to the channel after two time slots.
\item The two destinations receive correctly the coded packet and they are able to extract the original packets a and b, terminating the cooperation phase by transmitting the respective ACK packets.
\end{itemize}
Evidently, the correct packet transmissions define the active relay set ($\mathcal{A}_n$), introducing the concept of a node being in outage. Hence, in the extreme case where no relay node has received both packets from $A$ and $B$, the relay set is in outage and the cooperation phase ends after a predefined time ($T_{timeout}$), which is not considered in systems that operate under ideal channel conditions. On the other hand, the reduction of the active relay set due to non-successful packet receptions could be beneficial in networks with many relays, since a smaller number of active relays would lead to a lower packet collision probability in the network. Overall, the aforementioned issues stress the necessity for designing accurate cross-layer models that consider the protocol operation under realistic conditions.
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{nccphy1.jpg}
\caption{NCCARQ operation with PHY layer consideration}\label{nccphy}
\end{figure}
\section{Joint MAC/PHY Analytical Framework}
\label{sec:analysis}
In this section, we introduce a joint MAC/PHY analytical framework to model the throughput and the energy efficiency achieved by NCCARQ under correlated shadowing and fast fading conditions. Although a complete analysis from the MAC layer point of view under ideal channel conditions is presented in \cite{nccarq}, the PHY layer consideration introduces new challenges in the theoretical derivations. In particular, concepts such as the network outage probability ($p_{out}$), the expected size of the active relay set ($\mathbf{E}\left[\left|\mathcal{A}_n\right|\right]$) and the OPER should be explicitly considered for an accurate analytical design. In the remainder of this section, we first focus on the parameters that are affected by the realistic PHY layer assumption and, then, we appropriately incorporate these parameters in a modified analysis from the MAC layer point of view.
\subsection{Physical Layer Impact on $p_{out}$ and $\mathbf{E}\left[\left|\mathcal{A}_n\right|\right]$}
\label{sec3}
The probability of having exactly $k$ active out of $n$ total relays in the system (i.e., $Pr\left\{\left|\mathcal{A}_n\right|=k\right\}$) is a required parameter for the estimation of both the network outage probability and the expected size of the active relay set. To that end, let us define by $\boldsymbol{1}_{AR_i} = \left\{\bar{\gamma}_{{AR_i}}>\gamma^*\right\}$ and $\boldsymbol{0}_{AR_i} = \left\{\bar{\gamma}_{{AR_i}}\leq\gamma^*\right\}$ the events that the relay $i$ receives from node $A$ a ``correct" or an ``erroneous" packet, respectively. In addition, we introduce the notation $\mathfrak{b}_{A\chi_n}$ to identify which of the $n$ relays have correctly received packets transmitted by node $A$. In particular, $\chi\in [0,2^n-1]$ is a natural number, whose value is specified by the combination of accepted and discarded packets in all $AR_i$ links, while $\mathfrak{b}_{A\chi_n}$ corresponds to the $n$-bit representation of $\chi$, where the positions of 1s indicate the specific relays in which the average received power $\bar{\gamma}_{AR_i}$ is above the reliability threshold $\gamma^*$\footnote{For example, $\mathfrak{b}_{A5_3}=[\boldsymbol{1}_{AR_1},\boldsymbol{0}_{AR_2},\boldsymbol{1}_{AR_3}]$ indicates that: i) there are 3 relays in the network ($R_1,R_2,R_3$), and ii) only $R_1$ and $R_3$ have received correct packets from node A.}. Accordingly, $\mathfrak{b}_{B\psi_n}$ identifies which relays have successfully received packets transmitted by node $B$, where $\psi$ has the same characteristics as $\chi$.
Hence, the probability that exactly $k$ out of $n$ relays have successfully received packets from both $A$ and $B$ (i.e., $Pr\left\{\left|\mathcal{A}_n\right|=k\right\}$) may be estimated by taking into account all the possible binary codewords $\mathfrak{b}_{A\chi_n}$ and $\mathfrak{b}_{B\psi_n}$ that satisfy the following condition:
\begin{equation}
H_w\left(\mathfrak{b}_{A\chi_n}\odot \mathfrak{b}_{B\psi_n}\right)=k,
\end{equation}
where $\odot$ denotes the bitwise AND operation, while $H_w\left(\mathfrak{b}\right)$ corresponds to the Hamming weight function that returns the number of 1s in the binary word $\mathfrak{b}$. Denoting as $Pr\{\mathfrak{b}_{A\chi_n}\}$ and $Pr\{\mathfrak{b}_{B\psi_n}\}$ the probabilities of occurrence of the events $\mathfrak{b}_{A\chi_n}$ and $\mathfrak{b}_{B\psi_n}$, respectively, the aforementioned probability is given by:
\vspace{-1pt}
\begingroup
\begin{equation}
Pr\left\{\left|\mathcal{A}_n\right|=k\right\} = \mathop{\sum_{\chi=0}^{2^n-1}\sum_{\psi=0}^{2^n-1}}_{\{H_w\left(\mathfrak{b}_{A\chi_n}\odot \mathfrak{b}_{B\psi_n}\right) = k\}}Pr\{\mathfrak{b}_{A\chi_n}\}Pr\{\mathfrak{b}_{B\psi_n}\}. \label{P-k-n-2}
\end{equation}
\endgroup
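To make the enumeration in Eq.(\ref{P-k-n-2}) concrete, the following sketch computes the distribution of $\left|\mathcal{A}_n\right|$ by iterating over all $2^n \times 2^n$ codeword pairs. For illustration only, the links are assumed independent, so that each $Pr\{\mathfrak{b}_{A\chi_n}\}$ factorizes into per-link success probabilities; under correlated shadowing, these probabilities would instead be obtained from the multivariate integrals derived next.

```python
import math
from itertools import product

def q_func(x):
    """Gaussian Q-function: tail probability of a standard normal."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def relay_set_size_pmf(p_a, p_b):
    """Pr{|A_n| = k}, k = 0..n, by enumerating all codeword pairs.

    p_a[i], p_b[i] are the per-link success probabilities
    Pr{gamma_{AR_i} > gamma*} and Pr{gamma_{BR_i} > gamma*}.
    Link independence is assumed here only to factorize Pr{b_chi};
    correlated links would use the joint-integral probabilities instead.
    """
    n = len(p_a)
    pmf = [0.0] * (n + 1)
    for chi in product((0, 1), repeat=n):        # codeword b_{A chi_n}
        pr_chi = math.prod(p if bit else 1 - p for bit, p in zip(chi, p_a))
        for psi in product((0, 1), repeat=n):    # codeword b_{B psi_n}
            pr_psi = math.prod(p if bit else 1 - p for bit, p in zip(psi, p_b))
            k = sum(a & b for a, b in zip(chi, psi))  # Hamming weight of AND
            pmf[k] += pr_chi * pr_psi
    return pmf

# Illustrative example: 3 relays, mu = 20 dB, sigma = 4 dB, gamma* = 16.14 dB
p_link = q_func((16.14 - 20.0) / 4.0)
pmf = relay_set_size_pmf([p_link] * 3, [p_link] * 3)
p_out = pmf[0]   # Pr{|A_n| = 0}: the network outage probability of Eq. (P-out)
```

Setting $k=0$ in the returned distribution directly yields the network outage probability of Eq.(\ref{P-out}).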
\subsubsection{Theoretical Estimation of the Probabilities $Pr\{\mathfrak{b}_{A\chi_n}\}$ and $Pr\{\mathfrak{b}_{B\psi_n}\}$}
In order to further clarify the concepts and derive expressions for the probabilities $Pr\{\mathfrak{b}_{A\chi_n}\}$ and $Pr\{\mathfrak{b}_{B\psi_n}\}$, let us provide an example for the theoretical estimation of $Pr\{\mathfrak{b}_{A1_n}\}$, which corresponds to the probability that only relay $n$ receives a ``correct" packet from node A, while all the other relays (i.e., $R_1,\ldots, R_{n-1}$) receive ``erroneous" packets:
\begingroup
\begin{eqnarray}
&&Pr\{\mathfrak{b}_{A1_n}\} = Pr\left\{\boldsymbol{0}_{AR_1},\boldsymbol{0}_{AR_2},...,\boldsymbol{1}_{AR_n}\right\}\nonumber\\
&&=Pr\left\{\bar{\gamma}_{{AR_1}}\leq \gamma^*,\bar{\gamma}_{{AR_2}}\leq \gamma^*,\ldots,\bar{\gamma}_{{AR_n}}> \gamma^*\right\}\nonumber\\
&&= \int^{\frac{\gamma^*-\mu_{AR_1}}{\sigma_{AR_1}}}_{-\infty}\cdots\int^{\infty}_{\frac{\gamma^*-\mu_{AR_n}}{\sigma_{AR_n}}} f_y\left(y_1, \ldots, y_n\right) dy_1\cdots dy_n,\label{eq:exp-int}
\end{eqnarray}
\endgroup
where $f_y\left(y_1, \ldots, y_n\right)$ corresponds to the joint Probability Density Function (PDF) of the RVs $y_i = \left(\bar{\gamma}_{{AR_i}} -\mu_{AR_i}\right)/ \sigma_{AR_i}$ and can be written as:
\begin{equation}
f_y\left(y_1, \ldots, y_n\right) = f_y\left(\mathbf{y}\right) = [det\left(\mathbf{\Sigma_n}\left(\rho_1\right)\right)]^{-1/2}\left(2\pi\right)^{-n/2}exp\left(-\frac{\mathbf{y}^T\mathbf{\Sigma_n}^{-1}\left(\rho_1\right)\mathbf{y}}{2}\right),\label{jointpdf}
\end{equation}
where $det\left(\mathbf{\Sigma_n}\left(\rho_1\right)\right)$ denotes the determinant of matrix $\mathbf{\Sigma_n}\left(\rho_1\right)$. Taking into account the Toeplitz symmetric structure of $\mathbf{\Sigma_n}\left(\rho_1\right)$, it can be shown that \cite{toeplitz}:
\begin{equation}
\mathbf{\Sigma_n}^{-1}\left(\rho_1\right) = \frac{1}{1-\rho_1^2}\left[\begin{array}{cccccc}
1&-\rho_1&0&\cdots&0&0\\
-\rho_1&1+\rho_1^2&-\rho_1&0&\cdots&0\\
0&-\rho_1&1+\rho^2_1&-\rho_1&0&\cdots\\
\vdots&\ddots&\ddots&\ddots&&\vdots\\
0&0&\cdots&0&-\rho_1&1
\end{array}\right].\label{eq-inv}
\end{equation}
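As a quick numerical sanity check of Eq.(\ref{eq-inv}), the sketch below builds $\mathbf{\Sigma_n}\left(\rho_1\right)$ with the exponentially correlated entries $\rho_1^{|i-j|}$ assumed by the shadowing model, constructs the tridiagonal inverse above, and verifies that their product is the identity matrix.

```python
def sigma_n(rho, n):
    """Exponential correlation matrix: Sigma[i][j] = rho^|i-j|."""
    return [[rho ** abs(i - j) for j in range(n)] for i in range(n)]

def sigma_n_inv(rho, n):
    """Tridiagonal inverse of Sigma_n(rho) from Eq. (eq-inv), n >= 2."""
    c = 1.0 / (1.0 - rho ** 2)
    inv = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # corner diagonal entries are 1/(1-rho^2); interior ones (1+rho^2)/(1-rho^2)
        inv[i][i] = c if i in (0, n - 1) else c * (1.0 + rho ** 2)
        if i + 1 < n:
            inv[i][i + 1] = inv[i + 1][i] = -c * rho
    return inv

def matmul(a, b):
    """Plain matrix product of two square lists-of-lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```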
By combining Eq.(\ref{jointpdf}) and (\ref{eq-inv}), the joint PDF $f_y\left(\mathbf{y}\right)$ is written as:
\begingroup
\begin{eqnarray}
f_y\left(\textbf{y}\right) &=& [det\left(\mathbf{\Sigma_n}\left(\rho_1\right)\right)]^{-1/2}\left(2\pi\right)^{-n/2}exp\left(-\frac{\left(y^2_1-2\rho_1 y_1y_2\right)}{2\left(1-\rho_1^2\right)}\right)\nonumber\\
&&\times exp\left(-\frac{ \sum^{n-1}_{i=2} \left(\left(\rho^2_1+1\right)y^2_i-2\rho_1 y_iy_{i+1}\right)}{2\left(1-\rho^2_1\right)}\right)\times exp\left(-\frac{ y^2_n}{2\left(1-\rho^2_1\right)}\right).\label{jointpdf2}
\end{eqnarray}
\endgroup
Therefore, the tridiagonal structure of $\mathbf{\Sigma_n}^{-1}\left(\rho_1\right)$ simplifies the theoretical estimation of the probability $Pr\{\mathfrak{b}_{A1_n}\}$, since the multiple integral in Eq.(\ref{eq:exp-int}) can be estimated by iteratively evaluating single integrals. More specifically, by substituting Eq.\eqref{jointpdf2} in Eq.\eqref{eq:exp-int}, the probability $Pr\{\mathfrak{b}_{A1_n}\}$ can be written as:
\begingroup
\begin{eqnarray}
&&Pr\{\mathfrak{b}_{A1_n}\} = C_0\int^{\infty}_{\frac{\gamma^*-\mu_{AR_n}}{\sigma_{AR_n}}}exp\left(-\frac{ y^2_n}{2\left(1-\rho_1^2\right)}\right)q_{n-1}\left(y_n\right)dy_n,\label{eq:exp-int2}
\end{eqnarray}
\endgroup
where $C_0 = [det\left(\mathbf{\Sigma_n}\left(\rho_1\right)\right)]^{-1/2}\left(2\pi\right)^{-n/2}$ and
\begingroup
\begin{eqnarray}
q_{k}\left(x\right) & = & \int^{\frac{\gamma^*-\mu_{AR_k}}{\sigma_{AR_k}}}_{-\infty}exp\left(-\frac{\left(\rho_1^2+1\right)y^2_k-2\rho_1 y_kx}{2\left(1-\rho_1^2\right)}\right)q_{k-1}(y_k)dy_k,\ k \in [2,n-1]\label{eq:qk}\\
q_1\left(x\right) & = & \int^{\frac{\gamma^*-\mu_{AR_1}}{\sigma_{AR_1}}}_{-\infty}exp\left(-\frac{y^2_1-2\rho_1 y_1x}{2\left(1-\rho_1^2\right)}\right) dy_1.
\end{eqnarray}
\endgroup
\noindent In order to provide a closed-form expression for $q_1\left(x\right)$, we apply \cite[Eq. (15.74)]{book2}:
\begingroup
\begin{eqnarray}
\int^{\infty}_{0}exp\left(-\left(ax^2+bx+c\right)\right)dx &=&\sqrt{\frac{\pi}{a}}exp\left(\frac{b^2-4ac}{4a}\right)Q\left(\frac{b}{\sqrt{2a}}\right).\label{closedform}
\end{eqnarray}
\endgroup
Hence, setting $t= \frac{\gamma^*-\mu_{AR_1}}{\sigma_{AR_1}}$, $a = 1/\left(2(1-\rho_1^2)\right)$, $b=(2\rho_1 x-2 t)/\left(2(1-\rho_1^2)\right)$ and $c=(t^2-2\rho_1 tx)/\left(2(1-\rho_1^2)\right)$, the integral $q_1\left(x\right)$ can be written as:
\begingroup
\begin{eqnarray}
q_1(x) & = &
\sqrt{2\pi\left(1-\rho_1^2\right)} Q\left(\frac{\rho_1 x-t}{\sqrt{1-\rho_1^2}}\right)exp\left(\frac{\rho_1^2x^2}{2(1-\rho_1^2)}\right).
\end{eqnarray}
\endgroup
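As a cross-check of the derivation, the closed form of $q_1(x)$ can be compared against a brute-force numerical evaluation of its defining integral (the parameter values below are purely illustrative):

```python
import math

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def q1_closed(x, t, rho):
    """Closed form of q_1(x), with t = (gamma* - mu_{AR_1}) / sigma_{AR_1}."""
    return (math.sqrt(2 * math.pi * (1 - rho ** 2))
            * q_func((rho * x - t) / math.sqrt(1 - rho ** 2))
            * math.exp(rho ** 2 * x ** 2 / (2 * (1 - rho ** 2))))

def q1_numeric(x, t, rho, lo=-30.0, steps=60000):
    """Trapezoidal evaluation of the defining integral of q_1(x)."""
    h = (t - lo) / steps
    def f(y):
        return math.exp(-(y * y - 2 * rho * x * y) / (2 * (1 - rho ** 2)))
    s = 0.5 * (f(lo) + f(t)) + sum(f(lo + i * h) for i in range(1, steps))
    return s * h
```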
For the evaluation of the remaining integrals $q_k\left(x\right)$, $\forall k \in [2,n-1]$, we adopt the Gaussian quadratures for the integral $\int^{\infty}_{0}exp\left(-x^2\right)f(x)dx$ \cite[Table II, N=15]{1969method}. After a suitable change of variables\footnote{The detailed derivation is provided in Appendix \ref{a4}}, Eq.(\ref{eq:qk}) may be rewritten in the form:
\begingroup
\begin{eqnarray}
q_k(x) & = & \sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}exp\left(-\frac{\left(\rho_1^2+1\right)t^2-2\rho_1 xt}{2\left(1-\rho_1^2\right)}\right) \sum^{N_{GQR}}_{i=1}w_i \nonumber\\
&& \times exp\left(-\frac{\left(2\rho_1 x - 2t\left(\rho_1^2+1\right)\right)r_i}{\sqrt{2\left(1-\rho_1^4\right)}}\right)q_{k-1}\left(-\sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}r_i+t\right)\label{eq:qk1},
\end{eqnarray}
\endgroup
\noindent where $t=\frac{\gamma^*-\mu_{AR_k}}{\sigma_{AR_k}}$, $w_i$ and $r_i$ denote the weights and the roots of the Gaussian quadratures \cite{1969method}, respectively, and $N_{GQR}$ is the number of points used for the integral evaluation. After evaluating $q_k(x)$ at the points $x_i = -\sqrt{\frac{2\left(1-\rho^2_1\right)}{1+\rho^2_1}}r_i+t$, $\forall k \in [2,n-1]$, the probability $Pr\{\mathfrak{b}_{A1_n}\}$ may be computed as:
\begingroup
\begin{eqnarray}
Pr\{\mathfrak{b}_{A1_n}\} & = & [det\left(\mathbf{\Sigma_n}(\rho_1)\right)]^{-1/2}\left(2\pi\right)^{-n/2}\sum^{N_{GQR}}_{i=1}w_iexp\left(-\frac{t^2 - 2\sqrt{2\left(1-\rho_1^2\right)}tr_i}{2\left(1-\rho_1^2\right)}\right)\nonumber\\
&& \times q_{n-1}\left(t-\sqrt{2\left(1-\rho_1^2\right)}r_i\right),
\end{eqnarray}
\endgroup
\noindent where $t=\frac{\gamma^*-\mu_{AR_n}}{\sigma_{AR_n}}$.
Following the same line of thought, the above procedure can be generalized for the theoretical estimation of $Pr\{\mathfrak{b}_{A\chi_n}\}$ and $Pr\{\mathfrak{b}_{B\psi_n}\}$, $\forall$ $\chi,\psi\in \left[0,2^n-1\right]$, as described in Appendix \ref{a1}.
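For $n=2$, the quadrature recursion collapses to a single outer integral over the closed-form $q_1$, which offers a convenient cross-check of Eq.(\ref{eq:exp-int2}). The sketch below compares this semi-analytical value of $Pr\{\mathfrak{b}_{A1_2}\}$ with a Monte Carlo estimate over correlated Gaussian samples; a plain trapezoidal rule stands in for the Gaussian quadratures, purely for illustration.

```python
import math, random

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pr_bA1_2(t1, t2, rho, hi=12.0, steps=20000):
    """Pr{y1 <= t1, y2 > t2} for n = 2 via the structure of Eq. (exp-int2):
    outer trapezoidal integral of exp(-y^2/(2(1-rho^2))) * q_1(y)."""
    c0 = 1.0 / (2 * math.pi * math.sqrt(1 - rho ** 2))   # C_0 for n = 2
    def integrand(y):
        q1 = (math.sqrt(2 * math.pi * (1 - rho ** 2))
              * q_func((rho * y - t1) / math.sqrt(1 - rho ** 2))
              * math.exp(rho ** 2 * y * y / (2 * (1 - rho ** 2))))
        return math.exp(-y * y / (2 * (1 - rho ** 2))) * q1
    h = (hi - t2) / steps
    s = 0.5 * (integrand(t2) + integrand(hi))
    s += sum(integrand(t2 + i * h) for i in range(1, steps))
    return c0 * s * h

def pr_bA1_2_mc(t1, t2, rho, trials=200000, seed=7):
    """Monte Carlo estimate with corr(y1, y2) = rho."""
    rng = random.Random(seed)
    c = math.sqrt(1 - rho ** 2)
    hits = 0
    for _ in range(trials):
        y1 = rng.gauss(0, 1)
        y2 = rho * y1 + c * rng.gauss(0, 1)
        if y1 <= t1 and y2 > t2:
            hits += 1
    return hits / trials
```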
\subsubsection{Network Outage Probability ($p_{out}$)}
The network outage probability $p_{out}$, i.e., the probability that none of the relays in the system has successfully received both packets from nodes $A$ and $B$, may be directly derived from Eq.(\ref{P-k-n-2}) by setting $k=0$. Therefore:
\begingroup
\small
\begin{equation}
p_{out} = \mathop{\sum_{\chi=0}^{2^n-1}\sum_{\psi=0}^{2^n-1}}_{\{H_w\left(\mathfrak{b}_{A\chi_n}\odot \mathfrak{b}_{B\psi_n}\right) = 0\}}Pr\{\mathfrak{b}_{A\chi_n}\}Pr\{\mathfrak{b}_{B\psi_n}\}, \label{P-out}
\end{equation}
\endgroup
\noindent where $Pr\{\mathfrak{b}_{A\chi_n}\}$, $Pr\{\mathfrak{b}_{B\psi_n}\}$ are computed as described in Appendix \ref{a1}.
\vspace{-1pt}
\subsubsection{Expected Size of the Active Relay Set ($\mathbf{E}\left[\left|\mathcal{A}_n\right|\right]$)}
\label{subsec}
In this section, we provide a closed-form expression for the average number of active relays $\mathbf{E}\left[\left|\mathcal{A}_n\right|\right]$, proving that it is independent of the correlation coefficients $\rho_1$ and $\rho_2$. Using induction, we first prove the statement for a network with 2 relays. By applying Eq.(\ref{P-k-n-2}) for $n=2$, we may write the probabilities that $k$ relays are active, for $k=1,2$, as:
\vspace{-2pt}
\begingroup
{\setlength{\arraycolsep}{0em}\begin{eqnarray}
Pr\left\{\left|\mathcal{A}_2\right|=2\right\} &=& Pr\{\mathfrak{b}_{A3_2}\} Pr\{\mathfrak{b}_{B3_2}\} \label{p-2-2}\\
Pr\left\{\left|\mathcal{A}_2\right|=1\right\} &=& Pr\{\mathfrak{b}_{A1_2}\}\left(Pr\{\mathfrak{b}_{B1_2}\}+Pr\{\mathfrak{b}_{B3_2}\}\right)\nonumber\\
&&+Pr\{\mathfrak{b}_{A2_2}\}\left(Pr\{\mathfrak{b}_{B2_2}\}+ Pr\{\mathfrak{b}_{B3_2}\}\right)\nonumber\\
&&+Pr\{\mathfrak{b}_{A3_2}\}\left(Pr\{\mathfrak{b}_{B1_2}\}+Pr\{\mathfrak{b}_{B2_2}\}\right), \label{p-1-2}
\end{eqnarray}}
\endgroup
\vspace{-2pt}
\noindent where the probabilities $ Pr\{\mathfrak{b}_{A\chi_2}\}$, $Pr\{\mathfrak{b}_{B\psi_2}\}$, $\chi,\psi\in [0,3]$ may be written as in Eq.(\ref{eq:exp-int}) by setting $n=2$. The average number of active relays in case of $n=2$ may be written as follows:
\begingroup
\begin{eqnarray}
\label{eq:E-2}
\mathbf{E}\left[\left|\mathcal{A}_2\right|\right] &=& \sum^2_{i=1} iPr\left\{\left|\mathcal{A}_2\right|=i\right\}.
\end{eqnarray}
\endgroup
\vspace{-1pt}
By taking into account the Eq.(\ref{eq:exp-int}), (\ref{closedform}) and Lemma 1 below, it can be shown\footnote{The detailed derivation is provided in Appendix \ref{a5}} that Eq.(\ref{eq:E-2}) can be written in closed-form as:
\vspace{-1pt}
\begingroup
\begin{eqnarray}
\mathbf{E}\left[\left|\mathcal{A}_2\right|\right] &=& \sum^2_{i=1} Q\left(\frac{\gamma^* - \mu_{AR_i}}{\sigma_{AR_i}}\right)Q\left(\frac{\gamma^* - \mu_{BR_i}}{\sigma_{BR_i}}\right)\label{eq:average2}.
\end{eqnarray}
\endgroup
\begin{lemma}
For any given $\rho$, $\gamma^*$, $\mu$, $\sigma$ it holds that:
\begingroup
\begin{eqnarray}
\frac{1}{\sqrt{2\pi}}\int^{\infty}_{-\infty}Q\left(\frac{\gamma^* - \mu-\sigma \sqrt{\rho} t}{\sigma\sqrt{1-\rho}}\right)e^{-t^2/2} dt
=Q\left(\frac{\gamma^* - \mu}{\sigma}\right)
\end{eqnarray}
\endgroup
\end{lemma}
\begin{proof}
The proof of Lemma 1 is given in Appendix \ref{a2}.
\end{proof}
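Intuitively, Lemma 1 states that averaging the conditional Gaussian tail over the common shadowing component restores the marginal tail probability, which is why the correlation drops out. This is easy to verify numerically (illustrative parameters):

```python
import math

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def lemma1_lhs(gamma_star, mu, sigma, rho, lo=-12.0, hi=12.0, steps=60000):
    """Left-hand side of Lemma 1, evaluated by trapezoidal quadrature."""
    h = (hi - lo) / steps
    def f(t):
        return (q_func((gamma_star - mu - sigma * math.sqrt(rho) * t)
                       / (sigma * math.sqrt(1 - rho)))
                * math.exp(-t * t / 2))
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, steps))
    return s * h / math.sqrt(2 * math.pi)
```

Regardless of $\rho$, the integral evaluates to $Q\left((\gamma^*-\mu)/\sigma\right)$.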
\noindent The generalization of this result for a network with $n$ relays may be stated as follows:
\begin{proposition}
Let us consider the cooperative network of Fig. \ref{f1} operating in a correlated shadowing environment. The average number of active relays $\mathbf{E}\left[\left|\mathcal{A}_n\right|\right]$ is independent of the correlation between the links and is given by:
\begingroup
\begin{equation}
\mathbf{E}\left[\left|\mathcal{A}_n\right|\right] = \sum^n_{i=1} Q\left(\frac{\gamma^* - \mu_{AR_i}}{\sigma_{AR_i}}\right)Q\left(\frac{\gamma^* - \mu_{BR_i}}{\sigma_{BR_i}}\right).
\label{eq:Average}
\end{equation}
\endgroup
\end{proposition}
\begin{proof}
The proof of Proposition 1 is given in Appendix \ref{a3}.
\end{proof}
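Proposition 1 can also be verified by simulation: the sketch below draws exponentially correlated shadowing samples through an AR(1) recursion and compares the empirical mean of $\left|\mathcal{A}_n\right|$ with the closed form of Eq.(\ref{eq:Average}); symmetric links and illustrative parameter values are assumed.

```python
import math, random

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def avg_active_relays_mc(n, mu, sigma, gamma_star, rho, trials=100000, seed=1):
    """Monte Carlo E[|A_n|] under exponentially correlated shadowing.

    The AR(1) recursion y_i = rho*y_{i-1} + sqrt(1-rho^2)*g_i reproduces
    the correlation rho^|i-j| between links i and j.
    """
    rng = random.Random(seed)
    c = math.sqrt(1 - rho ** 2)
    total = 0
    for _ in range(trials):
        ya = yb = 0.0
        for i in range(n):
            ga, gb = rng.gauss(0, 1), rng.gauss(0, 1)
            ya = ga if i == 0 else rho * ya + c * ga
            yb = gb if i == 0 else rho * yb + c * gb
            if mu + sigma * ya > gamma_star and mu + sigma * yb > gamma_star:
                total += 1   # relay i received both packets correctly
    return total / trials

# Closed form of Proposition 1 for symmetric links (illustrative values)
n, mu, sigma, g_star = 4, 20.0, 6.0, 16.14
closed = n * q_func((g_star - mu) / sigma) ** 2
```

Running the estimator for different $\rho$ values returns (up to Monte Carlo noise) the same average, as the proposition predicts.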
Having derived accurate closed-form expressions for crucial network parameters (i.e., the network outage probability and the expected number of active relays), we can now incorporate them into a MAC layer analytical model to study important end-to-end metrics, such as the network throughput and energy efficiency.
\subsection{Analytical Formulation from the MAC Layer Perspective}
\label{sec:analysis_mac}
\subsubsection{Throughput}
\label{sec:throughput}
The network throughput, measured in b/s, is defined as the rate of successful data delivery over a given period of time. Taking into account the protocol operation, the expected total throughput ($S_{total}$) of the network can be decomposed into the throughput achieved by successful direct transmissions ($S_d$) and the throughput contributed by the relay nodes ($S_{coop}$) during the cooperation phase. This can be mathematically expressed as:
\begin{equation}
\label{eq:Stotal}
\mathbf{E}[S_{total}]=\mathbf{E}[S_{d}]+\mathbf{E}[S_{coop}],
\end{equation}
where
\begin{equation}
\label{eq:SD}
\mathbf{E}[S_{d}]=(1-OPER_{AB})\cdot \frac{\mathbf{E}[Payload]}{\mathbf{E}[T_d]}
\end{equation}
and
\begin{equation}
\label{eq:Scoop}
\mathbf{E}[S_{coop}]=2\cdot OPER_{AB}\cdot (1-p_{out})\cdot \frac{\mathbf{E}[Payload]}{\mathbf{E}[T_{d}]+\mathbf{E}[T_{coop}]}.
\end{equation}
In the above equations, $OPER_{AB}$ corresponds to the $OPER$ in the direct link from node $A$ to $B$, $\mathbf{E}[Payload]$ is the average packet payload and $p_{out}$ denotes the probability that there are no active relays in the network, i.e., all relays are in outage. In addition, $\mathbf{E}[T_d]$ and $\mathbf{E}[T_{coop}]$ represent the average time required for a successful direct transmission and a transmission that takes place via the relays, respectively. Let us also emphasize that NC techniques enable the simultaneous transmission of two data packets and, hence, the coefficient 2 has to be included in Eq.(\ref{eq:Scoop}).
The average time for the direct transmission ($\mathbf{E}[T_d]$) can be estimated by the total packet size (including MAC and PHY headers) and the transmission data rate ($Data\ Tx.Rate$) as:
\begin{equation}
\label{eq:Td}
\mathbf{E}[T_d]= \frac{\mathbf{E}[Packet\ Size]}{Data\ Tx.Rate}.
\end{equation}
On the other hand, the term $\mathbf{E}[T_{coop}]$ can be written as the sum of the minimum deterministic default time ($T_{def}$) in the beginning of the cooperation, and the overhead time due to the contention of the relays:
\begin{equation}
\label{eq:Tcoop}
\mathbf{E}[T_{coop}]= T_{def}+\mathbf{E}[T_{ovh}].
\end{equation}
The default time, which mainly corresponds to the transmission of the RFC and the data packet b, is equal to:
\begin{equation}
\label{eq:Tdef}
T_{def}=T_{SIFS}+T_{RFC}+T_{b},
\end{equation}
where $T_{RFC}$ and $T_{b}$ denote the transmission time for RFC and data packet $b$, respectively, while $T_{SIFS}$ corresponds to the SIFS duration. On the other hand, the overhead time can be caused due to either the network outage or the contention phase:
\begin{equation}
\label{eq:Tovh}
\mathbf{E}[T_{ovh}]=p_{out}\cdot T_{timeout}+(1-p_{out})\cdot\mathbf{E}[T_{cont}],
\end{equation}
where $p_{out}$ is the network outage probability, $T_{timeout}$ denotes the period of time that all nodes wait in case of no active relay in the network, and $T_{cont}$ represents the total time duration until the correct acknowledgement of both original packets, equal to:
\begin{equation}
\label{eq:Tcont}
\mathbf{E}[T_{cont}]=T_{ONC}+T_{DIFS}+\mathbf{E}[T_{C}]+T_{a \oplus b}+2\cdot T_{SIFS}+2\cdot T_{ACK}.
\end{equation}
Eq.(\ref{eq:Tcont}) explicitly considers: i) the expected time required for a coded packet to be transmitted via the relays ($\mathbf{E}[T_{C}]$), taking into account the idle slots and the collision overhead, ii) the overhead time needed to perform NC ($T_{ONC}$), iii) the sensing times $T_{DIFS}$ and $T_{SIFS}$, iv) the transmission time for the NC packet $T_{a \oplus b}$, and v) the transmission time for the ACK packets ($T_{ACK}$).
Since NCCARQ is backwards compatible with the IEEE 802.11 Standard, the channel access can be modeled according to the Markov chain introduced in \cite{bianchi}, where the states correspond to the values of the backoff counter and the transition probabilities follow the DCF operation. Hereafter, we provide the slightly modified formulas for the sake of completeness, while the interested reader is referred to the Appendix of \cite{nccarq} for the detailed protocol analysis. Thus, the average time until a successful transmission is calculated as:
\begin{equation}
\label{eq:Tc}
\mathbf{E}[T_{C}]=(\frac{1}{p_s}-1)[(\frac{p_i}{1-p_s})T_{slot}+(\frac{p_c}{1-p_s})T_{col}],
\end{equation}
where $T_{slot}$ represents the idle slot duration and $T_{col}$ corresponds to the collision time, equal to: $T_{col}=T_{DIFS}+T_{a \oplus b}+T_{SIFS}$. In addition, the probabilities of having an idle ($p_i$), a successful ($p_s$), or a collided ($p_c$) slot can be written as:
\begin{equation}
\label{eq:pi}
p_i=1-p_{tr}
\end{equation}
\begin{equation}
\label{eq:ps}
p_s=p_{tr}\cdot p_{s|tr}
\end{equation}
\begin{equation}
\label{eq:pc}
p_c=p_{tr}\cdot(1-p_{s|tr}),
\end{equation}
where $p_{tr}$ is the probability that at least one relay attempts to transmit:
\begin{equation}
\label{eq:ptr}
p_{tr}=1-(1-\tau)^{\mathbf{E}[|\mathcal{A}_n|]}
\end{equation}
and $p_{s|tr}$ denotes the probability of a successful transmission (i.e., exactly one station transmits conditioned on the fact that at least one station transmits):
\begin{equation}
\label{eq:pstr}
p_{s|tr}=\frac {\mathbf{E}[|\mathcal{A}_n|]\tau (1-\tau)^{\mathbf{E}[|\mathcal{A}_n|]-1}}{1-(1-\tau)^{\mathbf{E}[|\mathcal{A}_n|]}}.
\end{equation}
In Eq.(\ref{eq:ptr}) and (\ref{eq:pstr}), $\tau$ is the probability that a node transmits in a randomly selected slot and $\mathbf{E}[|\mathcal{A}_n|]$ is the expected number of active relays during the cooperation phase as we have seen in Section~\ref{subsec}. It is worth noting that traditional MAC-oriented analytical works usually neglect the impact of the PHY layer by including the total number of relays~($n$) in the theoretical expressions, while in our work, this set is restricted by taking into account realistic PHY layer conditions.
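The throughput model of Eqs.(\ref{eq:Stotal})--(\ref{eq:pstr}) can be sketched numerically as follows. All timing inputs are hypothetical placeholders (loosely inspired by Table \ref{t2}), the deterministic part of $T_{cont}$ is collapsed into a single argument, and $\tau$, which in the full model results from the backoff Markov chain of \cite{bianchi}, is treated as a given input.

```python
def nccarq_throughput(oper_ab, p_out, e_active, tau, payload_bits,
                      t_d, t_def, t_timeout, t_slot, t_col, t_cont_fixed):
    """E[S_total] following Eqs. (Stotal)-(pstr); all times in seconds."""
    # Slot probabilities, Eqs. (pi)-(pstr)
    p_tr = 1 - (1 - tau) ** e_active
    p_s_tr = e_active * tau * (1 - tau) ** (e_active - 1) / p_tr
    p_s, p_i = p_tr * p_s_tr, 1 - p_tr
    p_c = p_tr * (1 - p_s_tr)
    # Average contention time until a successful relay transmission, Eq. (Tc)
    e_tc = (1 / p_s - 1) * ((p_i / (1 - p_s)) * t_slot
                            + (p_c / (1 - p_s)) * t_col)
    e_cont = t_cont_fixed + e_tc               # Eq. (Tcont), fixed part given
    e_ovh = p_out * t_timeout + (1 - p_out) * e_cont   # Eq. (Tovh)
    e_tcoop = t_def + e_ovh                    # Eq. (Tcoop)
    s_d = (1 - oper_ab) * payload_bits / t_d   # Eq. (SD)
    s_coop = (2 * oper_ab * (1 - p_out)
              * payload_bits / (t_d + e_tcoop))  # Eq. (Scoop)
    return s_d + s_coop

# Hypothetical inputs: 1500-byte payload, 54 Mb/s data rate, 96 us PHY header;
# tau = 2/(CW+1) is a rough placeholder for the backoff-model output.
payload = 1500 * 8
t_d = (1534 * 8) / 54e6 + 96e-6
s = nccarq_throughput(oper_ab=0.5, p_out=0.05, e_active=3.0, tau=2 / 33,
                      payload_bits=payload, t_d=t_d, t_def=250e-6,
                      t_timeout=80e-6, t_slot=20e-6, t_col=300e-6,
                      t_cont_fixed=400e-6)
```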
\subsubsection{Energy Efficiency}
\label{sec:energy}
The network energy efficiency, measured in b/J, can be defined as the amount of transmitted useful information per energy unit. Considering the protocol operation, the expected energy efficiency~($\eta$) may be written as:
\begin{equation}
\label{eq:Eef}
\mathbf{E}[\eta]=\frac{(1-OPER_{AB})\cdot\mathbf{E}[Payload]+2\cdot OPER_{AB}\cdot(1-p_{out})\cdot\mathbf{E}[Payload]}{\mathbf{E}[\mathcal{E}_{total}]},
\end{equation}
where the numerator corresponds to the expected number of delivered useful bits during one communication round, and the denominator represents the average energy consumption at the same time period.
Regarding the expected total energy consumption in the network, following the same line of thought, we decompose the operation into the direct transmission and the cooperation phase. Hence:
\begin{equation}
\label{eq:Etotal}
\mathbf{E}[\mathcal{E}_{total}]=\mathcal{E}_{d}+OPER_{AB}\cdot \mathbf{E}[\mathcal{E}_{coop}].
\end{equation}
Let us recall that the network consists of two nodes ($A$ and $B$) that exchange packets with the assistance of $n$ relays. Defining as $P_{Tx}$, $P_{Rx}$ and $P_{idle}$ the power levels associated with the transmission ($Tx$), reception ($Rx$) and idle modes, respectively, the energy consumption during the direct transmissions can be estimated as:
\begin{equation}
\label{eq:Ed}
\mathcal{E}_{d}=P_{Tx}\cdot T_a+(n+1)\cdot P_{Rx}\cdot T_a.
\end{equation}
On the other hand, the term $\mathbf{E}[\mathcal{E}_{coop}]$ is composed of the energy consumption during the network outage and the energy consumed in the successful cooperation:
\begin{equation}
\label{eq:Ecoop}
\mathbf{E}[\mathcal{E}_{coop}]=p_{out}\cdot \mathcal{E}_{out}+(1-p_{out})\cdot \mathbf{E}[\mathcal{E}_{suc\_coop}],
\end{equation}
where $\mathbf{E}[\mathcal{E}_{suc\_coop}]$ includes the energy required for a perfectly scheduled cooperative phase ($\mathcal{E}_{min}$) and the energy consumed during the contention phase ($\mathbf{E}[\mathcal{E}_{cont}]$):
\begin{equation}
\label{eq:Esuccoop}
\mathbf{E}[\mathcal{E}_{suc\_coop}]=\mathcal{E}_{min}+\mathbf{E}[\mathcal{E}_{cont}].
\end{equation}
Hence, considering the network topology and the protocol's operation, we have:
\begin{equation}
\label{eq:Eout}
\mathcal{E}_{out}=(n+2)\cdot P_{idle}\cdot T_{timeout}
\end{equation}
\footnotesize
\begin{equation}
\label{eq:Emin}
\begin{aligned}
&\mathcal{E}_{min}=(n+2)\cdot P_{idle}\cdot T_{SIFS}+P_{Tx}\cdot (T_{RFC}+T_B)+(n+1)\cdot P_{Rx}\cdot(T_{RFC}+T_B)+(n+2)\cdot P_{idle}\cdot T_{ONC}+\\
&+(n+2)\cdot P_{idle}\cdot T_{DIFS}+P_{Tx}\cdot T_{a\oplus b}+2\cdot P_{Rx} \cdot T_{a\oplus b}+(n-1)\cdot P_{idle}\cdot T_{a\oplus b}+(n+2)\cdot P_{idle}\cdot T_{SIFS}+\\
&+2\cdot P_{Tx}\cdot T_{ACK}+2\cdot (n+1)\cdot P_{Rx}\cdot T_{ACK}+(n+2)\cdot P_{idle} \cdot T_{SIFS}\\
\end{aligned}
\end{equation}
\normalsize
\begin{equation}
\label{eq:Econt}
\mathbf{E}[\mathcal{E}_{cont}]=p_i\cdot ((n+2)\cdot P_{idle}\cdot T_{slot})+p_c\cdot (\mathbf{E}[L]\cdot P_{Tx}\cdot T_{col}+2\cdot P_{Rx}\cdot T_{col}+(n-\mathbf{E}[L])\cdot P_{idle}\cdot T_{col}),
\end{equation}
where $\mathbf{E}[L]$ represents the average number of relays that transmit a packet simultaneously. Given the existence of $\mathbf{E}[|\mathcal{A}_n|]$ active relays, the probability $p_l$ that exactly $l$ stations are involved in a collision can be expressed as:
\begin{equation}
\label{eq:pk}
p_l=\frac{\dbinom{\mathbf{E}[|\mathcal{A}_n|]}{l}\tau^{l}(1-\tau)^{\mathbf{E}[|\mathcal{A}_n|]-l}}{p_c}
\end{equation}
and, therefore:
\begin{equation}
\label{eq:Ek}
\mathbf{E}[L]=\displaystyle\sum_{l=2}^{\mathbf{E}[|\mathcal{A}_n|]}l\cdot p_{l}=\displaystyle\sum_{l=2}^{\mathbf{E}[|\mathcal{A}_n|]}l\cdot\frac{\dbinom{\mathbf{E}[|\mathcal{A}_n|]}{l}\tau^{l}(1-\tau)^{\mathbf{E}[|\mathcal{A}_n|]-l}}{p_c}.
\end{equation}
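The collision statistics of Eqs.(\ref{eq:pk}) and (\ref{eq:Ek}) admit a compact implementation; in this sketch, the expected active-set size is taken as an integer $m$ so that the binomial coefficients are well defined:

```python
import math

def expected_colliders(m, tau):
    """E[L]: average number of relays involved in a collision, Eq. (Ek)."""
    p_tr = 1 - (1 - tau) ** m
    p_s = m * tau * (1 - tau) ** (m - 1)
    p_c = p_tr - p_s                      # collision probability, Eq. (pc)
    if p_c == 0.0:
        return 0.0
    return sum(l * math.comb(m, l) * tau ** l * (1 - tau) ** (m - l)
               for l in range(2, m + 1)) / p_c
```

Note that, since $\sum_{l=0}^{m} l\binom{m}{l}\tau^l(1-\tau)^{m-l}=m\tau$, the sum equals $\left(m\tau - p_s\right)/p_c$, which provides a convenient cross-check.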
\section{Model Validation and Performance Evaluation}
\label{sec:performance}
We have developed a MATLAB simulator that incorporates both the NCCARQ rules and the PHY layer design, in order to validate our analytical model and study the impact of exponentially correlated shadowing on the performance of NC-based MAC protocols. In this section, we present the simulation setup along with the results of our experiments.
\subsection{Simulation Setup}
\label{sec:setup}
The considered network, depicted in Fig. \ref{f1}, consists of two nodes ($A$ and $B$) that participate in a bidirectional wireless communication, and $n$ relay nodes that contribute to the data exchange. In the same figure, the shadowing correlation between the different links is highlighted, assuming that: i) all $AR_i$ links are exponentially correlated as described in Eq.(\ref{eq1}), ii) all $BR_i$ links are also exponentially correlated according to Eq.(\ref{eq2}), and iii) pairs of $AR_i$ and $BR_i$ links are independent, which is a reasonable assumption according to measurements in \cite{cor1}. Furthermore, we adopt a symmetric network topology with $\rho_1=\rho_2=\rho$.
The MAC layer parameters have been selected in line with the IEEE 802.11g Standard specifications \cite{80211std}. In particular, the initial Contention Window ($CW$) for all nodes is 32, the MAC header overhead is 34~bytes, while the time for the application of NC to the data packets is considered negligible, as the coding takes place only between two packets. We also consider time slots, SIFS, DIFS and timeout interval of 20, 10, 50 and 80~$\mu$s, respectively. In addition, based on the work of Ebert et al. \cite{ebert} on the power consumption of the wireless interface, we have chosen the following power levels for our scenarios: $P_{Tx}=1900$~mW and $P_{Rx}=P_{idle}=1340$~mW.
Regarding the PHY layer parameters, we have set the reliability threshold $\gamma^*=41.14$ (i.e., $16.14$~dB), which corresponds to a target $APER=10^{-1}$. Furthermore, we assume a relatively weak direct ($AB$) link ($\mu_{AB}=8$~dB) with respect to the SNR threshold $\gamma^*$, in order to trigger the cooperation and focus our study on the impact of correlated shadowing. The simulation parameters are summarized in Table \ref{t2}. Through the experimental assessment, we aim to validate the proposed models and study the effect of the number of relays ($n$) and the correlation factor ($\rho$) on the protocol performance.
\begin{table}[htb]
\caption{System Parameters} \label{t2}
\begin{center}
\begin{tabular}{|c|c||c|c|}
\hline
\textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value} \\ \hline
\textit{Packet Payload} & 1500 bytes & \textit{$CW_{min}$} & 32\\ \hline
\textit{$T_{slot}$} & 20 $\mu$s & \textit{$T_{timeout}$} & 80 $\mu$s \\ \hline
\textit{SIFS} & 10 $\mu$s & \textit{DIFS} & 50 $\mu$s\\ \hline
\textit{MAC Header} & 34 bytes & \textit{PHY Header} & 96 $\mu$s \\ \hline
\textit{Data Tx.Rate} & 54 Mb/s & \textit{Control Tx.Rate} & 6 Mb/s \\ \hline
$\gamma^*$ & 16.14 dB & $\sigma$ & [0,10] dB\\ \hline
$\mu_{AR_i}=\mu_{BR_i}$ & \{15,20\} dB & $\mu_{AB}$ & 8 dB\\ \hline
\textit{$P_{Tx}$} & 1900 mW & \textit{$P_{Rx}$} & 1340 mW \\ \hline
\textit{$P_{idle}$} & 1340 mW & $\rho$ & [0,1) \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Model Validation}
\label{sec:validation}
In the first set of our experiments, we study the PHY layer impact on the communication, while validating the derived theoretical expressions. Figs. \ref{phyf1}-\ref{val2} depict the expected size of the active relay set ($\mathbf{E}[|\mathcal{A}_n|]$), the network outage probability ($p_{out}$), the expected network throughput ($\mathbf{E}[S_{total}]$) and the expected energy efficiency ($\mathbf{E}[\eta]$), respectively, for different values of the shadowing standard deviation $\sigma$, assuming strong links between the end nodes~($A$,$B$) and the relays~($R_i$), i.e., $\mu_{AR_i}=\mu_{BR_i}=20$~dB.
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{average.pdf}
\caption{Expected size of the active relay set ($\mathbf{E}[|\mathcal{A}_n|]$) vs. Shadowing standard deviation ($\sigma$) ($\mu_{AR_i}=\mu_{BR_i}=20$~dB)}\label{phyf1}
\end{figure}
In Fig. \ref{phyf1}, we consider different total numbers of relays and various indicative values for the correlation factor ($\rho$), deriving two important conclusions. First, the experiments validate our analysis, demonstrating that the average number of active relays is independent of the shadowing correlation among the wireless links. The second important remark concerns the negative effect of $\sigma$ on the number of active relays. In this particular scenario, where the mean SNR value is above the threshold $\gamma^*$, the shadowing variation plays a detrimental role in the communication. As a result, higher values of $\sigma$ restrict the potential diversity benefits by reducing the expected size of the active relay set. In the opposite case (i.e., when the mean SNR value is below the reliability threshold), however, high values of $\sigma$ would imply a lower outage probability and a higher number of active relays, thereby increasing the expected network throughput, as we examine in the following section.
Fig. \ref{phyf2} illustrates the theoretical and simulation results for the network outage probability for different correlation factors ($\rho$) and numbers of relays ($n$). Similarly to the previous case, the shadowing standard deviation degrades the system performance, increasing the probability of having no active relay in the system. In this case, however, the impact of shadowing correlation on the system is clearly demonstrated in the figure, since high values of $\rho$ result in an almost identical outage probability independently of $n$, annulling the advantages of distributed cooperation. On the other hand, independent wireless links ($\rho=0$) exploit the diversity offered by the relays, considerably reducing the outage probability as the total number of relays in the system increases (e.g., $n=5$). As a result, the significant effect of $\rho$ on the outage probability has a direct impact on the end-to-end metrics under study, highlighting the importance of exact knowledge of the shadowing correlation conditions in the network.
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{pout2.pdf}
\caption{Network outage probability ($p_{out}$) vs. Shadowing standard deviation ($\sigma$) ($\mu_{AR_i}=\mu_{BR_i}=20$~dB)}\label{phyf2}
\end{figure}
In Fig. \ref{val1}, we study the impact of the shadowing standard deviation ($\sigma$) on the network throughput for different numbers of relay nodes ($n$). In this specific case, where $\mu_{AR_i}=\mu_{BR_i}>\gamma^*$, the wireless communication would always be successful in the absence of random shadowing fluctuations; hence, shadowing is harmful for the system, as it introduces many events where the received SNR falls below the threshold $\gamma^*$. In addition, two important remarks are highlighted: i) distributed cooperation is beneficial, as the throughput increases with the number of available relays ($n$), and ii) shadowing correlation is detrimental to the potential gain introduced by cooperation.
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{val-th1.pdf}
\caption{Average network throughput ($\mathbf{E}[S_{total}]$) vs. shadowing standard deviation ($\sigma$), assuming $\mu_{AR_i}=\mu_{BR_i}=20$~dB and a) $\rho=0.00$, b) $\rho=0.50$, c) $\rho=0.99$} \label{val1}
\end{figure}
The expected network energy efficiency for different numbers of relays and correlation conditions is plotted in Fig. \ref{val2}, validating our model and revealing intriguing facets of the problem, since the results disclose a notable trade-off between system throughput and energy efficiency. In particular, although distributed cooperation provides significant throughput gains in high SNR scenarios (Fig. \ref{val1}), it has a negative impact on the energy efficiency, reducing it by up to 100\% under specific conditions. This fact can be explained by taking into account the high throughput (12~Mb/s) achieved in single-relay networks under good channel conditions. Cooperation may increase this performance up to 18~Mb/s, but the aggregated energy consumption of many relays in the network results in a significant reduction of the total energy efficiency. The next section presents a thorough performance evaluation with regard to the impact of the number of relays and the shadowing correlation on the network performance.
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{val-ee1.pdf}
\caption{Average energy efficiency ($\mathbf{E}[\eta]$) vs. shadowing standard deviation ($\sigma$), assuming $\mu_{AR_i}=\mu_{BR_i}=20$~dB and a) $\rho=0.00$, b) $\rho=0.50$, c) $\rho=0.99$} \label{val2}
\end{figure}
\subsection{Performance Evaluation}
In Figs. \ref{th1}-\ref{ee2}, we study the impact of the correlation and the number of relays ($n$) on the network throughput and energy efficiency, for three different correlation scenarios (i.e., $\rho=0$, $\rho=0.5$ and $\rho=0.99$). In this set of experiments, we have set $\mu_{AR_i}=\mu_{BR_i}=15$~dB, which is a value close to the reliability threshold ($\gamma^*$). In addition, in order to emphasize the importance of the shadowing standard deviation, we have adopted two extreme values of $\sigma$, i.e., $\sigma=2$~dB and $\sigma=10$~dB.
\subsubsection{Impact of the number of relays ($n$) in the network}
Fig. \ref{th1} studies the network throughput versus the number of relays, leading to three important conclusions. First, a high number of relays in the network is beneficial for the average system throughput, especially for low and medium values of $\rho$, where the incorporation of more relays in the network results in higher diversity. The second interesting remark is related to the influence of the shadowing correlation factor on the protocol performance. In particular, as expected from the study of the outage probability, high values of $\rho$ offset the benefits of cooperation, as the throughput increases only slightly with the number of relays. However, for medium and low values of $\rho$, the protocol performance seems to remain unaffected, which implies that the network throughput is not proportional to the correlation among the links. This observation is particularly important for network design, since it allows network deployment under relatively high correlation conditions (e.g., $\rho=0.5$), although these may seem prohibitive in principle. Finally, comparing Fig. \ref{th1}a and Fig. \ref{th1}b, we observe the impact of $\sigma$ on the throughput, as high values of the shadowing standard deviation ($\sigma$=10 dB) significantly increase the protocol performance, especially for a small number of relays in the network. For example, in the case of single-relay systems ($n=1$), the throughput is almost quadrupled, which can be intuitively explained by taking into account that the mean value of the SNR ($\mu_{AR_i}=\mu_{BR_i}$) is marginally lower than the decoding threshold and, as a result, the random fluctuations introduced by shadowing due to high values of $\sigma$ enable correct packet decoding more often.
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{thr_vs_N.pdf}
\caption{Average network throughput ($\mathbf{E}[S_{total}]$) vs. Number of relays ($n$) for $\rho=0$, $\rho=0.5$, $\rho=0.99$, considering: a) $\sigma$=2~dB and b) $\sigma$=10~dB ($\mu_{AR_i}=\mu_{BR_i}=15$~dB)} \label{th1}
\end{figure}
Figures \ref{ee1}a and \ref{ee1}b present the network energy efficiency for $\sigma$=2~dB and $\sigma$=10~dB, respectively. The first clear outcome from both figures is the negative role of shadowing correlation in the energy efficiency of the network. In addition, the shadowing standard deviation again plays an important role, as the energy efficiency increases with $\sigma$, mainly due to the significant throughput increase in the network. However, as the number of relays increases, the relative gain due to $\sigma$ decreases, since the respective throughput gains are significantly higher for low values of $\sigma$. Moreover, it is worth commenting on the different behavior of the plots in each figure. In Fig. \ref{ee1}a, we observe that, for low and medium correlation factors, the energy efficiency increases until $n=4$, as the throughput gains achieved by adding relays to the network justify the increased energy consumption in the system. For a higher number of relays (i.e., $n>4$), the energy efficiency decreases slowly, since the throughput increase does not fully justify the incorporation of more relays (which require extra energy resources) in the network. In highly correlated scenarios (i.e., $\rho$=0.99), as expected, the energy efficiency decreases as the number of relays grows, since, due to the almost identical conditions in the wireless links, the throughput of the network is not significantly affected. Regarding the case of $\sigma$=10~dB (Fig. \ref{ee1}b), we can see that the increase in the number of relays causes a significant reduction in the network energy efficiency, although the average throughput (Fig. \ref{th1}b) again follows an increasing trend. However, in this case, we should consider that the throughput with only one relay in the network is relatively high and, therefore, the price (in terms of energy) we have to pay by adding more relays is higher than the actual gains we get in terms of performance.
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{ee_vs_N.pdf}
\caption{Network energy efficiency ($\eta$) vs. Number of relays ($n$) for $\rho=0$, $\rho=0.5$, $\rho=0.99$, considering: a) $\sigma$=2~dB and b) $\sigma$=10~dB ($\mu_{AR_i}=\mu_{BR_i}=15$~dB)}\label{ee1}
\end{figure}
\subsubsection{Impact of the correlation factor ($\rho$)}
In Fig. \ref{th2}a and \ref{th2}b, we study the impact of the shadowing correlation factor ($\rho$) on the network throughput for $\sigma=2$ dB and $\sigma=10$ dB, respectively, while we also plot the case of one relay ($n=1$) in the network as a reference scenario, although the correlation in this case has no practical meaning. Once more, we confirm that a high number of relays, as well as high values of $\sigma$, is beneficial for the throughput in cooperative scenarios with the specific setup. With regard to the impact of shadowing correlation, in both cases, we observe that the cooperation tends to be useless for high correlation factors ($\rho\rightarrow 1$), since all the relays experience very similar shadowing attenuations, and the throughput reduces to that of a single-relay network. However, it can be remarked that the impact of correlation is more severe in environments with low $\sigma$, as the difference in the throughput performance between independent ($\rho=0$) and fully correlated ($\rho=0.99$) links is much higher in Fig. \ref{th2}a. In addition, we can verify the conclusions of the previous set of experiments, where it was shown that the results for $\rho=0$ and $\rho=0.5$ (which are common values for outdoor environments \cite{out_cor}) were not significantly different. In this figure, we can explicitly see that severe performance degradation occurs for extremely high values of the correlation factor (i.e., $\rho>0.7$), which are usually found in indoor environments \cite{in_cor1,in_cor2}.
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{thr_vs_rho.pdf}
\caption{Average network throughput ($\mathbf{E}[S_{total}]$) vs. Shadowing correlation factor ($\rho$) for $n=1,2,5,10$, considering: a) $\sigma$=2~dB and b) $\sigma$=10~dB ($\mu_{AR_i}=\mu_{BR_i}=15$~dB)}\label{th2}
\end{figure}
The impact of correlation on the network energy efficiency is shown in Fig.~\ref{ee2}. Starting from the case of $\sigma=10$ dB (Fig.~\ref{ee2}b), we can see that adding more relays in the network causes a considerable reduction in the energy efficiency, while the effect of correlation is not particularly harmful. This fact can be explained by comparing the throughput performance of networks with $n=1$ and $n=10$ relays in Fig. \ref{th2}b. Indeed, we need 10 relays in order to double the throughput of single-relay networks. However, the significantly increased energy consumption in the network is not in accordance with the throughput enhancement, thus resulting in lower energy efficiency in the network. The same conclusion can be also supported by noticing that, in contrast to the throughput performance results, the baseline network energy efficiency (i.e., with one relay in the network) is higher in all cases, since the protocol is able to achieve high performance standards under these conditions, even with only one relay in the system. On the other hand, the results in Fig.~\ref{ee2}a are not straightforward, as they identify the necessity of carefully choosing the number of relays in order to achieve the highest energy efficiency. Unlike Fig.~\ref{ee2}b, where the incorporation of additional relays in the system always results in energy efficiency degradation, in this case, the existence of more than one relay in the system, besides throughput, can also be beneficial for the energy efficiency of the network, especially in low correlation scenarios. For instance, in our particular experiment, we can see that the energy efficiency increases by adding a few relays to the system (e.g., up to $n=5$), since the achieved throughput rises considerably even with only a small number of deployed relay nodes (Fig.~\ref{th2}a). 
As the number of relays increases (e.g., $n=10$), the energy consumption grows in higher rates than the throughput, which results in lower energy efficiency in the network. However, as the correlation among the wireless links increases (i.e., $\rho>0.7$), the deployment of multiple relays does not provide significant performance enhancement, something that is directly reflected in the energy efficiency, which drops significantly.
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{ee_vs_rho.pdf}
\caption{Network energy efficiency ($\eta$) vs. Shadowing correlation factor ($\rho$) for $n=1,2,5,10$, considering: a) $\sigma$=2~dB and b) $\sigma$=10~dB ($\mu_{AR_i}=\mu_{BR_i}=15$~dB)}\label{ee2}
\end{figure}
In all cases, the experimental results clearly showcase that: i) the system performance is notably affected only by extremely high values of $\rho$, and ii) although shadowing correlation does not have an influence on the average number of active relays, it significantly affects the outage probability and, thus, the network should be designed taking into account the exact physical parameters and application requirements. In addition, very interesting tradeoffs between the throughput and the energy efficiency in the network have been revealed by the extensive performance assessment. In particular, the throughput improvement offered by the distributed cooperation comes with the respective energy costs that should not be neglected. The incorporation of many relays in the communication increases the total energy consumption in the network, without yielding the expected throughput gains in all cases. More specifically, in highly correlated scenarios where the cluster of relays does not offer significant throughput gains, the energy efficiency is remarkably reduced. It is also worth noting that shadowing variations can be either beneficial or harmful for the communication, depending on the quality of the cooperative links in the network. To that end, the proposed cross-layer analytical model provides the network designer with accurate estimations that facilitate the decision for the optimum number of relays in the network and their best possible placement in order to reduce the deployment and operational cost, guaranteeing, at the same time, the desired network throughput.
\section{Concluding Remarks}
\label{sec:conclusions}
In this paper, we have proposed a cross-layer analytical framework to model end-to-end metrics (i.e., throughput and energy efficiency), in two-way cooperative networks under realistic correlated shadowing conditions. The proposed model jointly considers the MAC layer operation along with crucial PHY layer parameters, such as the network outage probability and the average number of active relays in the network. The extensive performance assessment has revealed interesting tradeoffs between throughput and energy efficiency, while the PHY layer analysis has demonstrated that the average number of relays is independent of the shadowing correlation in the wireless links. The proposed analytical model can provide useful insights that can be exploited for effective network planning in realistic channel conditions. In our future work, we plan to study the temporal shadowing correlation and design effective cross-layer mechanisms that enhance the performance of the state-of-the-art NC-aided MAC protocols.
\appendices
\section{Derivation of Equation (\ref{eq:qk1})}
\label{a4}
After setting $y_k = -z+t$, where $t = \frac{\gamma^* -\mu_{AR_k}}{\sigma_{AR_k}}$, Eq. (\ref{eq:qk}) may be written as:
\begingroup
\begin{eqnarray}
q_k(x) & = & \int^{t}_{-\infty}exp\left[-\frac{\left(\rho_1^2+1\right)y^2_k-2\rho_1 y_kx}{2\left(1-\rho_1^2\right)}\right]q_{k-1}(y_k)dy_k\nonumber\\
& = & \int^{\infty}_{0}exp\left[-\frac{\left(\rho_1^2+1\right)\left(z-t\right)^2+2\rho_1 x\left(z-t\right)}{2\left(1-\rho_1^2\right)}\right]q_{k-1}(-z+t)dz\nonumber\\
& = & \sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}exp\left[-\frac{\left(\rho_1^2+1\right)t^2-2\rho_1 xt}{2\left(1-\rho_1^2\right)}\right] \int^{\infty}_{0}exp\left[-z^2\right]\nonumber\\
&& \times exp\left[-\frac{\left(2\rho_1 x - 2t\left(\rho_1^2+1\right)\right)z}{\sqrt{2\left(1-\rho_1^4\right)}}\right]q_{k-1}(-\sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}z+t)dz.\label{eq:qk2}
\end{eqnarray}
\endgroup
\noindent The aforementioned expression may be evaluated using the Gaussian quadratures for the integral $\int^{\infty}_{0}exp\left[-x^2\right]f(x)dx$, originally proposed in \cite[Table II, N=15]{1969method}, as follows:
\begin{equation}
\int^{\infty}_{0}exp\left[-x^2\right]f(x)dx \approx \sum^{N_{GQR}}_{i=1}w_i f(r_i),
\end{equation}
\noindent where $w_i$, $r_i$ denote the weights and roots of the Gaussian quadratures as defined in \cite[Table II, N=15]{1969method}, and $N_{GQR}$ is the number of points used for the integral evaluation. By using the aforementioned formula, we may write Eq. \eqref{eq:qk2} as follows:
\begingroup
\begin{eqnarray}
q_k(x) & = & \sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}exp\left[-\frac{\left(\rho_1^2+1\right)t^2-2\rho_1 xt}{2\left(1-\rho_1^2\right)}\right] \sum^{N_{GQR}}_{i=1}w_i \nonumber\\
&& \times exp\left[-\frac{\left(2\rho_1 x - 2t\left(\rho_1^2+1\right)\right)r_i}{\sqrt{2\left(1-\rho_1^4\right)}}\right]q_{k-1}(-\sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}r_i+t),
\end{eqnarray}
\endgroup
thus deriving Eq. \eqref{eq:qk1}.
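As a numerical sanity check of this half-range rule, the following sketch reproduces the known closed forms of a few test integrals. This is a hedged illustration: the paper relies on the tabulated weights of \cite[Table II, N=15]{1969method}, whereas here an equivalent rule is constructed from generalized Gauss-Laguerre quadrature via the substitution $t=x^2$, and the helper name \texttt{half\_range\_hermite} is our own.

```python
import numpy as np
from scipy.special import roots_genlaguerre

def half_range_hermite(n):
    """Rule for  int_0^inf exp(-x^2) f(x) dx  ~  sum_i w_i f(r_i).

    Built from generalized Gauss-Laguerre (alpha = -1/2) through the
    substitution t = x^2; the paper instead uses the tabulated rule of
    Steen et al. (1969) with N = 15 points.
    """
    t, v = roots_genlaguerre(n, -0.5)  # exact for int t^{-1/2} e^{-t} p(t) dt
    return np.sqrt(t), v / 2.0         # r_i = sqrt(t_i),  w_i = v_i / 2

r, w = half_range_hermite(15)
# Sanity checks against known closed forms:
print(w @ np.ones_like(r))  # int_0^inf e^{-x^2} dx     = sqrt(pi)/2
print(w @ r**2)             # int_0^inf x^2 e^{-x^2} dx = sqrt(pi)/4
print(w @ np.cos(r))        # (sqrt(pi)/2) * exp(-1/4)
```

The first two integrands are polynomials in $t=x^2$, so the rule reproduces them exactly; the third is an entire function of $t$ and converges rapidly.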
\section{Iterative Computation of $Pr\{\mathfrak{b}_{A\chi_n}\}$ by Using the Gaussian Quadratures}
\label{a1}
\begin{flushleft}
\texttt{Step 1: Evaluate $q_1\left(x\right)$ at the points $x_i = \sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}r_i-t$ if
$\boldsymbol{1}_{AR_1}$ or $x_i = -\sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}r_i+t$ if $\boldsymbol{0}_{AR_1}$, according to:}
\end{flushleft}
\begingroup
\footnotesize
\begin{equation}
q_1(x) = \left\{\begin{array}{c}\sqrt{2\pi\left(1-\rho_1^2\right)} Q\left(\frac{t-\rho_1 x}{\sqrt{\left(1-\rho_1^2\right)}}\right)exp\left(\frac{\rho_1^2x^2}{2(1-\rho_1^2)}\right) \ if\ \boldsymbol{1}_{AR_1}\\
\sqrt{2\pi\left(1-\rho_1^2\right)} Q\left(\frac{\rho_1 x-t}{\sqrt{\left(1-\rho_1^2\right)}}\right)exp\left(\frac{\rho_1^2x^2}{2(1-\rho_1^2)}\right)\ otherwise\end{array}\right.,\ t = \frac{\gamma^*-\mu_{AR_1}}{\sigma_{AR_1}}\label{eq-1}
\end{equation}
\endgroup
\begin{flushleft}
\texttt{Step k: Evaluate $q_k\left(x\right)$, $\forall k \in[2,n-1]$ at the points, $x_i = \sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}r_i-t$ if
$\boldsymbol{1}_{AR_k}$ or $x_i = -\sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}r_i+t$ if $\boldsymbol{0}_{AR_k}$ according to:}
\end{flushleft}
\begingroup
\tiny
\begin{eqnarray}
q_k(x) & = & \left\{
\begin{array}{c}
\sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}exp\left(-\frac{\left(\rho_1^2+1\right)t^2-2\rho_1 xt}{2\left(1-\rho_1^2\right)}\right)\sum^{N_{GQR}}_{i=1}w_i exp\left(\frac{\left(2\rho_1 x - 2t\left(\rho_1^2+1\right)\right)r_i}{\sqrt{2\left(1-\rho_1^4\right)}}\right)q_{k-1}\left(\sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}r_i-t\right)\ if \ \boldsymbol{1}_{AR_k}\\
\sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}exp\left(-\frac{\left(\rho_1^2+1\right)t^2-2\rho_1 xt}{2\left(1-\rho_1^2\right)}\right)\sum^{N_{GQR}}_{i=1}w_i exp\left(-\frac{\left(2\rho_1 x - 2t\left(\rho_1^2+1\right)\right)r_i}{\sqrt{2\left(1-\rho_1^4\right)}}\right)q_{k-1}\left(-\sqrt{\frac{2\left(1-\rho_1^2\right)}{1+\rho_1^2}}r_i+t\right)\ if \ \boldsymbol{0}_{AR_k}
\end{array}\right.,\
\end{eqnarray}
\endgroup
\begin{flushleft}
\texttt{where $t = \frac{\gamma^*-\mu_{AR_k}}{\sigma_{AR_k}}$}
\end{flushleft}
\begin{flushleft}
\texttt{Step n: Evaluate $Pr\{\mathfrak{b}_{A\chi_n}\}$ according to:}
\end{flushleft}
\begingroup
\footnotesize
\begin{eqnarray}
Pr\{\mathfrak{b}_{A\chi_n}\} & = & \left\{
\begin{array}{c}
C_0\sum^{N_{GQR}}_{i=1}w_iexp\left(-\frac{t^2 + 2\sqrt{2\left(1-\rho_1^2\right)}tr_i}{2\left(1-\rho_1^2\right)}\right)q_{n-1}\left(\sqrt{2\left(1-\rho_1^2\right)}r_i+t\right)\ if\ \boldsymbol{1}_{AR_n}\\
C_0\sum^{N_{GQR}}_{i=1}w_iexp\left(-\frac{t^2 - 2\sqrt{2\left(1-\rho_1^2\right)}tr_i}{2\left(1-\rho_1^2\right)}\right)q_{n-1}\left(t-\sqrt{2\left(1-\rho_1^2\right)}r_i\right)\ if\ \boldsymbol{0}_{AR_n}
\end{array}\right.
\end{eqnarray}
\endgroup
\begin{flushleft}
\texttt{where $C_0 = [\det\left(\mathbf{\Sigma}\left(\rho_1\right)\right)]^{-1/2}\left(2\pi\right)^{-n/2}$, $t = \frac{\gamma^*-\mu_{AR_n}}{\sigma_{AR_n}}$.}
\end{flushleft}
\section{Proof of Lemma 1}
\label{a2}
Any RV $Z\sim N(\mu,\sigma^2)$ may be written in the form:
\vspace{-1pt}
\begingroup
\begin{equation}
Z= \sigma \sqrt{\rho} X_1 + \sigma \sqrt{1-\rho} X_2 + \mu
\end{equation}
\endgroup
for any given $0\leq \rho< 1$, where $X_1,X_2\sim N(0,1)$. Considering $W = Z-\mu$, $X =\sigma \sqrt{\rho} X_1$, $Y=\sigma \sqrt{1-\rho} X_2$, the above equation may be written as $W=X+Y$,
\noindent where $W\sim N\left(0,\sigma^2\right)$, $X\sim N \left(0,\sigma^2\rho \right)$, $Y \sim N\left(0,\sigma^2 \left(1-\rho\right)\right)$.
The Cumulative Distribution Function (CDF) of $W$ can be written as:
\vspace{-1pt}
\begingroup
\begin{eqnarray}
F_w(t)
&=& \int^{+\infty}_{-\infty}F_y\left(t-x\right)f_x(x) dx \label{eq:57}
\end{eqnarray}
\endgroup
After a change of variables, Eq.~(\ref{eq:57}) may be written as follows:
\vspace{-1em}
\begingroup
\begin{eqnarray}
F_z(t) &=& \frac{1}{\sigma\sqrt{2\pi \rho}}\int^{+\infty}_{-\infty}F_{x_2}\left(\frac{t-x-\mu}{ \sigma \sqrt{1-\rho} }\right)e^{-\frac{x^2}{2\sigma^2 \rho}} dx
\end{eqnarray}
\endgroup
and, consequently:
\begingroup
\begin{eqnarray}
Q\left(\frac{t-\mu}{ \sigma }\right) &=&\frac{1}{\sqrt{2\pi}}\int^{+\infty}_{-\infty}Q\left(\frac{t-\sigma\sqrt{\rho}x_1-\mu}{ \sigma \sqrt{1-\rho} }\right)e^{-\frac{x_1^2}{2}}dx_1.
\end{eqnarray}
\endgroup
\noindent The above equation concludes our proof.
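Lemma 1 can also be verified numerically. The sketch below evaluates both sides with Gauss-Hermite quadrature; the parameter values and the helper name \texttt{lemma1\_rhs} are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

Q = norm.sf  # Gaussian Q-function, Q(x) = 1 - Phi(x)

def lemma1_rhs(t, mu, sigma, rho, n=80):
    """Right-hand side of Lemma 1 via Gauss-Hermite quadrature.

    (1/sqrt(2*pi)) * int Q((t - sigma*sqrt(rho)*x - mu)/(sigma*sqrt(1-rho)))
                         * exp(-x^2/2) dx,
    computed with the substitution x = sqrt(2)*u, so that the standard
    Gauss-Hermite weight exp(-u^2) appears.
    """
    u, w = np.polynomial.hermite.hermgauss(n)
    x = np.sqrt(2.0) * u
    g = Q((t - sigma * np.sqrt(rho) * x - mu) / (sigma * np.sqrt(1.0 - rho)))
    return (w @ g) / np.sqrt(np.pi)

# Illustrative parameters (assumed, for demonstration only)
t, mu, sigma, rho = 1.0, 0.2, 1.5, 0.4
print(Q((t - mu) / sigma), lemma1_rhs(t, mu, sigma, rho))  # both sides agree
```

Since the identity is an instance of the law of total probability, the two values coincide for any $0\leq\rho<1$.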
\section{Derivation of Equation (\ref{eq:average2})}
\label{a5}
In the special case where the number of available relays in the system is equal to 2, then, alternatively to Eq. (3), the two correlated links $\bar{\gamma}_{AR_{1}}, \bar{\gamma}_{AR_{2}}$ can be generated as follows:
\begingroup
{\setlength{\arraycolsep}{0em}
\begin{eqnarray}
\bar{\gamma}_{AR_{1}} &=& \sigma_{AR_{1}} \left(\sqrt{1-\rho_1^2}X_1+\rho_1 X_0\right)+\mu_{AR_{1}}\label{Rvs}\\
\bar{\gamma}_{AR_{2}} &=& \sigma_{AR_{2}} \left(\sqrt{1-\rho_1^2}X_2+\rho_1 X_0\right)+\mu_{AR_{2}}\label{Rvs2},
\end{eqnarray}}
\endgroup
\noindent with $X_i\sim \mathcal{N}(0,1)$, $\forall i$. In that case, we may write:
\begingroup
{\setlength{\arraycolsep}{0em}
\begin{eqnarray}
&&Pr\{\mathfrak{b}_{A1_2}\} = Pr\left\{\boldsymbol{0}_{AR_1},\boldsymbol{1}_{AR_2}\right\}\nonumber\\
&& = \int^{\infty}_{-\infty}\Pr\left\{\bar{\gamma}_{AR_{1}}\leq \gamma^*,\bar{\gamma}_{AR_{2}}> \gamma^*\left|X_0=t\right.\right\}f_{X_0}\left(t\right)dt\nonumber\\
&& = \int^{\infty}_{-\infty}\Pr\left\{X_1\leq a_1(t)\right\}\Pr\left\{X_2> a_2(t)\right\}f_{X_0}\left(t\right)dt\nonumber\\
&& = \frac{1}{\sqrt{2\pi}}\int^{\infty}_{-\infty}\left(1-Q\left(a_{1}\left(t\right)\right)\right) Q\left(a_{2}\left(t\right)\right)e^{\frac{-t^2}{2}}dt, \label{A1}
\end{eqnarray}}
\endgroup
\noindent where $a_{i}(t) = \left(\gamma^*- \mu_{AR_{i}}-\sigma_{AR_{i}} \rho_1 t\right)/\left(\sigma_{AR_{i}}\sqrt{1-\rho^2_1}\right)$ and $f_{X_0}\left(t\right)$ is the probability density function of $X_0$. Similarly, we may write:
\begingroup
{\setlength{\arraycolsep}{0em}
\begin{eqnarray}
&&Pr\{\mathfrak{b}_{A2_2}\} = \frac{1}{\sqrt{2\pi}}\int^{\infty}_{-\infty}Q\left(a_{1}\left(t\right)\right)\left(1 - Q\left(a_{2}\left(t\right)\right)\right)e^{\frac{-t^2}{2}}dt \label{A2}\\
&&Pr\{\mathfrak{b}_{A3_2}\} = \frac{1}{\sqrt{2\pi}}\int^{\infty}_{-\infty}Q\left(a_{1}\left(t\right)\right)Q\left(a_{2}\left(t\right)\right)e^{\frac{-t^2}{2}}dt. \label{A3}
\end{eqnarray}}
\endgroup
\noindent Equivalently, the probabilities $Pr\{\mathfrak{b}_{B\psi_2}\}$ can be written as follows:
\begingroup
{\setlength{\arraycolsep}{0em}
\begin{eqnarray}
&&Pr\{\mathfrak{b}_{B1_2}\} = \frac{1}{\sqrt{2\pi}}\int^{\infty}_{-\infty}\left(1-Q\left(b_{1}\left(t\right)\right)\right) Q\left(b_{2}\left(t\right)\right)e^{\frac{-t^2}{2}}dt \label{B1}\\
&&Pr\{\mathfrak{b}_{B2_2}\} = \frac{1}{\sqrt{2\pi}}\int^{\infty}_{-\infty}Q\left(b_{1}\left(t\right)\right)\left(1 - Q\left(b_{2}\left(t\right)\right)\right)e^{\frac{-t^2}{2}}dt \label{B2}\\
&&Pr\{\mathfrak{b}_{B3_2}\} = \frac{1}{\sqrt{2\pi}}\int^{\infty}_{-\infty}Q\left(b_{1}\left(t\right)\right)Q\left(b_{2}\left(t\right)\right)e^{\frac{-t^2}{2}}dt, \label{B3}
\end{eqnarray}}
\endgroup
\noindent where $b_{i}(t) = \left(\gamma^*- \mu_{BR_{i}}-\sigma_{BR_{i}} \rho_2 t\right)/\left(\sigma_{BR_{i}}\sqrt{1-\rho^2_2}\right)$. By combining Eq. (24) - (26), the average number of active relays is given by:
\begingroup
\begin{eqnarray}
\label{eq:E-3}
\mathbf{E}\left[\left|\mathcal{A}_2\right|\right] &=& \sum^2_{i=1} iPr\left\{\left|\mathcal{A}_2\right|=i\right\}\nonumber\\
&=& \left(Pr\{\mathfrak{b}_{A3_2}\} + Pr\{\mathfrak{b}_{A1_2}\}\right)\left(Pr\{\mathfrak{b}_{B3_2}\}+Pr\{\mathfrak{b}_{B1_2}\}\right) \nonumber\\
&&+ \left(Pr\{\mathfrak{b}_{A3_2}\} + Pr\{\mathfrak{b}_{A2_2}\}\right)\left(Pr\{\mathfrak{b}_{B3_2}\}+Pr\{\mathfrak{b}_{B2_2}\}\right). \label{average}
\end{eqnarray}
\endgroup
By substituting Eq. (\ref{A1}) - (\ref{B3}) in Eq. (\ref{average}), we get:
\begingroup
\begin{eqnarray}
\label{eq:E-2}
\mathbf{E}\left[\left|\mathcal{A}_2\right|\right] &=& \sum^2_{i=1} \frac{1}{\sqrt{2\pi}}\int^{\infty}_{-\infty} Q\left(a_{i}\left(t\right)\right)e^{\frac{-t^2}{2}}dt\nonumber\\
&& \times\frac{1}{\sqrt{2\pi}}\int^{\infty}_{-\infty} Q\left(b_{i}\left(t\right)\right)e^{\frac{-t^2}{2}}dt.
\end{eqnarray}
\endgroup
Finally, applying Lemma 1 in Eq. (\ref{eq:E-2}), we derive Eq. (27).
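The independence of the average number of active relays from the correlation can be cross-checked by Monte Carlo simulation of the generation model in Eqs. (\ref{Rvs})-(\ref{Rvs2}). In the sketch below, all numerical parameters are illustrative assumptions, and the common factors $X_0$ of the $A$- and $B$-side links are taken to be independent of each other.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
Q = norm.sf

# Illustrative parameters (assumed): SNR threshold, per-link means/stds in dB
gamma_star = 18.0
mu_A, sig_A = np.array([20.0, 16.0]), np.array([4.0, 6.0])
mu_B, sig_B = np.array([19.0, 21.0]), np.array([5.0, 3.0])
rho1, rho2 = 0.7, 0.5
M = 500_000  # Monte Carlo samples

def correlated_links(mu, sig, rho):
    # Eqs. (Rvs)/(Rvs2): a common factor X0 induces the shadowing correlation
    X0 = rng.standard_normal((M, 1))
    Xi = rng.standard_normal((M, 2))
    return sig * (np.sqrt(1.0 - rho**2) * Xi + rho * X0) + mu

gA = correlated_links(mu_A, sig_A, rho1)
gB = correlated_links(mu_B, sig_B, rho2)
active = (gA > gamma_star) & (gB > gamma_star)  # relay i active: both links up

mc = active.sum(axis=1).mean()
closed = np.sum(Q((gamma_star - mu_A) / sig_A) * Q((gamma_star - mu_B) / sig_B))
print(mc, closed)  # agree within Monte Carlo error, for any rho1, rho2
```

Repeating the experiment with different $\rho_1,\rho_2$ leaves the estimate unchanged, as the marginal link distributions do not depend on the correlation.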
\section{Proof of Proposition 1}
\label{a3}
In Section \ref{subsec}, it has been proven that the average number of active relays in a bidirectional cooperative network with two relays is given by Eq.(\ref{eq:average2}). Let us assume that, for a network with $k$ relays, it holds that:
\begin{equation}
\mathbf{E}\left[\left|\mathcal{A}_k\right|\right]=\sum^k_{i=1}Q\left(\left( \gamma^* - \mu_{AR_i}\right)/\sigma_{AR_i}\right)Q\left(\left( \gamma^* - \mu_{BR_i}\right)/\sigma_{BR_i}\right).
\end{equation}
Using the induction method, we are going to prove that:
\begin{eqnarray}
\mathbf{E}\left[\left|\mathcal{A}_{k+1}\right|\right] = \mathbf{E}\left[\left|\mathcal{A}_{k}\right|\right]+Q\left(\frac{\gamma^* - \mu_{AR_{k+1}}}{\sigma_{AR_{k+1}}}\right)Q\left(\frac{\gamma^* - \mu_{BR_{k+1}}}{\sigma_{BR_{k+1}}}\right).
\end{eqnarray}
Let us denote by $p^{k+1}_{R_i} = Pr\left\{\left|\mathcal{A}_{k+1}\right|=i\right\}$ the probability that $i$ relays are active in a network with $k+1$ relays, and by $B_{k+1} = \left\{\boldsymbol{1}_{AR_{k+1}},\boldsymbol{1}_{BR_{k+1}}\right\}$ the event that the $R_{k+1}$ relay is active. Then, we may write:
\begingroup
\begin{eqnarray}
p^{k+1}_{R_i} &{=}& p^k_{\left.R_i\right|\overline{B}_{k+1}}Pr\left\{\overline{B}_{k+1}\right\}+p^k_{\left.R_{i-1}\right|B_{k+1}}p_{B_{k+1}}
\end{eqnarray}
\endgroup
\noindent and $p^{k+1}_{R_{k+1}} {=} p^{k}_{\left.R_k\right|B_{k+1}}p_{B_{k+1}}$. The average number of active relays may be then written as:
\begingroup
{\setlength{\arraycolsep}{0em}\begin{eqnarray}
&&\mathbf{E}\left[\left|\mathcal{A}_{k+1}\right|\right] = \sum^{k+1}_{i=1}ip^{k+1}_{R_i}\nonumber\\
&&=\sum^{k}_{i=1}i\left(p^k_{\left.R_i\right|\overline{B}_{k+1}}p_{\overline{B}_{k+1}}+p^k_{\left.R_{i-1}\right|B_{k+1}}p_{B_{k+1}}\right)+(k+1)p^{k+1}_{R_{k+1}}\nonumber\\
&&=\sum^{k}_{i=1}ip^k_{\left.R_i\right|\overline{B}_{k+1}}p_{\overline{B}_{k+1}}+\sum^{k}_{j=0}(j+1)p^k_{\left.R_j\right|B_{k+1}}p_{B_{k+1}}\nonumber\\
&&=\sum^k_{i=1}i\left[p^k_{\left.R_i\right|\overline{B}_{k+1}}p_{\overline{B}_{k+1}}+p^k_{\left.R_i\right|B_{k+1}}p_{B_{k+1}}\right]+\sum^{k}_{i=0}p^k_{\left.R_i\right|B_{k+1}}p_{B_{k+1}}\nonumber\\
&&= \mathbf{E}\left[\left|\mathcal{A}_{k}\right|\right] + p_{B_{k+1}},
\end{eqnarray}}
\endgroup
\noindent where $p_{B_{k+1}} = Pr\left\{\boldsymbol{1}_{AR_{k+1}},\boldsymbol{1}_{BR_{k+1}}\right\}$ is the probability that both $AR_{k+1}$ and $BR_{k+1}$ links are active, given by:
\begin{equation}
p_{B_{k+1}} = Q\left(\frac{\gamma^* - \mu_{AR_{k+1}}}{\sigma_{AR_{k+1}}}\right)Q\left(\frac{\gamma^* - \mu_{BR_{k+1}}}{\sigma_{BR_{k+1}}}\right).
\end{equation}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
There has been a growing interest in the study of social phenomena
through the use of tools from statistical physics
\cite{Forsythe1996,Stanley1999,Stauffer2000,Stauffer2003}. This
trend has been in part stimulated by developments in complex networks
\cite{Barabasi2002,Dorogovtsev2002,Newman2003,Boccaletti2006}, which
have uncovered properties of the structures underlying the
interactions between agents in many natural, technological, and social
systems. Social processes can be simulated through the use of complex
networks models over which a dynamical interaction between the agents
represented by the nodes is defined, yielding results that can be
compared with the macroscopic results found in real social networks.
Elections of representatives are important social processes in
democracies, in which a large number of people take part and whose
outcome reflects many social factors. It was found
\cite{Costa1999} that the number of candidates with a given number of
votes in the 1998 Brazilian elections follows a power law with slope
$-1$ for some orders of magnitude, or a generalized Zipf's law
\cite{Lyra2003}.
Elections depend on the process of opinion formation by the voters.
Each voter chooses one candidate based on their beliefs and through
interaction with other voters. Many works have been carried out on
opinion formation while considering several types of dynamics and
underlying network topologies. Bernardes \emph{et al.}\
\cite{Bernardes2002} and Gonz\'alez \emph{et al.}\ \cite{Gonzalez2004}
succeeded in reproducing the general $-1$ slope of candidates with a
given number of votes in Brazilian election results by using the
Sznajd \cite{Sznajd2000} opinion formation model adapted to complex
networks.
In the Sznajd model, two neighbors that happen to have the same
opinion may convince their other neighbors. In this article, we adopt
a simpler model, where each single voter tries to convince its
neighbors, regardless of their previous opinion. The results obtained
exhibit substantial agreement with real election results for some
network models.
The article is organized as follows. First we describe the network
(Sec.~\ref{sec:nets}) and opinion (Sec.~\ref{sec:opinion}) models used
in the simulations. Then, in Sec.~\ref{sec:results} we present and
discuss the simulation results and study the effect of the model
parameters. Finally, the conclusions are summarized in
Sec.~\ref{sec:conclusions}.
\section{Opinion and Network Models Used}
As done in other related works, we assume that the opinion formation
for the voting process occurs as interactions between agents connected
through a complex network. The result is thus determined by two
factors: (i)~the structure of the network that specifies the possible
interactions between agents, and (ii)~the dynamics of opinion
formation between interacting agents. The following subsections
describe the models used in this work.
\subsection{Network Models}\label{sec:nets}
The voters and their social interactions are represented as a network,
so that the individuals are represented by nodes in the network and
every social interaction between pairs of voters is represented by a
link between the two corresponding nodes. The number of links attached
to a node is called the \emph{degree} of the node; the social distance
between two voters is given by the geodesic distance in the network,
defined as the minimum number of links that must be traversed in order
to reach one of the nodes starting from the other. Two important
network properties~\cite{Newman2003} are the degree distribution and
the average distance between pairs of nodes.
For the simulation of the opinion formation model we adopted the
Erd\H{o}s-R\'enyi and the Barab\'asi-Albert \cite{Barabasi2002} models
of complex networks. For comparison, simulations were also performed
in two-dimensional lattices and two-dimensional lattices with random
connections added between its nodes. The Erd\H{o}s-R\'{e}nyi networks
are characterized by a Poisson degree distribution and the presence of
the ``small world'' property: the average distance between nodes grows
slowly with the number of nodes in the network. The Barab\'asi-Albert
model also has the small world property, but its degree distribution
follows a power law, resembling in that sense many social
networks. The regular lattice was chosen as an example of a network
without the small world property, while the addition of random
connections enables a controlled introduction of this property
(see~\cite{Watts1998}).
In the Barab\'asi-Albert model, the network starts with $m+1$
completely connected nodes and grows by the successive addition of
single nodes with $m$ connections established with the older nodes,
chosen according to the preferential attachment rule. The growth
stops when the desired number of nodes $N$ is reached.
To generate the Erd\H{o}s-R\'enyi network, we start with $N$ isolated
nodes and insert $L$ links connecting pairs of nodes chosen with
uniform probability, avoiding self- and duplicate connections; for
comparison with the Barab\'asi-Albert model, we choose $L$ so that
$m=L/N$ is the same as the $m$ values used for the Barab\'asi-Albert
model.
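The corresponding Erd\H{o}s-R\'enyi construction, again as an illustrative sketch rather than the actual simulation code:

```python
import random

def erdos_renyi(N, m):
    """Insert L = m*N links between uniformly chosen node pairs,
    skipping self- and duplicate connections, so that m = L/N matches
    the parameter m of a Barabasi-Albert network (mean degree 2*m)."""
    L = m * N
    links = set()
    while len(links) < L:
        i, j = random.randrange(N), random.randrange(N)
        if i != j:
            links.add((min(i, j), max(i, j)))  # store each pair once
    return links
```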
For the two-dimensional lattices, the $N$ nodes are distributed in a
square and the connections are established between neighboring nodes
in the lattice. Afterwards, additional connections can be
incorporated between uniformly random chosen pairs of nodes until a
desired number of average additional links per node is included. This
kind of randomly augmented regular network is similar to that used in
the Newman-Watts small-world model \cite{Newman1999}.
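The augmented lattice can be sketched in the same spirit (a minimal illustration; the function name and link bookkeeping are our own):

```python
import random

def lattice_with_shortcuts(side, extra_per_node):
    """Build a side x side lattice with nearest-neighbor links, then
    add extra_per_node * N shortcut links between uniformly chosen
    distinct node pairs (Newman-Watts style augmentation)."""
    N = side * side
    links = set()
    for r in range(side):
        for c in range(side):
            i = r * side + c
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < side and cc < side:
                    links.add((i, rr * side + cc))
    n_extra = int(extra_per_node * N)
    added = 0
    while added < n_extra:
        i, j = random.randrange(N), random.randrange(N)
        edge = (min(i, j), max(i, j))
        if i != j and edge not in links:
            links.add(edge)
            added += 1
    return links
```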
\subsection{Opinion Model}\label{sec:opinion}
For a given network with $N$ voters (nodes), we start by distributing
the $C$ candidates among randomly chosen nodes (with uniform
probability), that is, each candidate is assigned to just one node in
the network (this reflects the fact that the candidates are also
voters). The remaining voters start as ``undecided'', meaning that
they have no favorite candidate yet. The following process is
subsequently repeated a total of $S N$ times: choose at random a voter
$i$ that already has an associated candidate $c_i$; for \emph{all}
neighbors of voter $i$, if they have no associated candidate (i.e. are
as yet undecided), they are associated with candidate $c_i$, otherwise
they change to candidate $c_i$ with a given \emph{switching
probability} $p$. The constant $S$ introduced above is henceforth
called the \emph{number of steps} of the algorithm (average number of
interactions of each node). This opinion model is motivated by the
following assumptions: (i)~undecided voters are passive, in the sense
that they do not spread their lack of opinion to other voters;
(ii)~undecided voters are easily convinced by interaction with someone
that already has a formed opinion; (iii)~the flexibility to change
opinions due to an interaction, quantified by the parameter $p$, is
the same for all voters. Despite the many limitations which can be
identified in these hypotheses, they seem to constitute a good first
approximation and can be easily generalized in future works.
This model is similar to a simple spreading to unoccupied sites, and
can be reduced to an asynchronous spreading if the switching
probability is zero. In spite of its simplicity, the model yields
interesting results, as discussed below.
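For concreteness, the dynamics described above can be summarized in the following sketch (the function name and the adjacency-dictionary representation are our own choices, not the authors' implementation):

```python
import random

def run_opinion_model(adj, C, S, p):
    """Opinion-spreading model on a network given as {node: neighbor set}.
    Each of C candidates starts at one randomly chosen node; S*N times,
    a randomly chosen decided voter pushes its candidate to all its
    neighbors: undecided neighbors always adopt it, decided neighbors
    switch with probability p. Returns the vote count per candidate."""
    N = len(adj)
    opinion = {}                       # node -> candidate index
    for c, node in enumerate(random.sample(range(N), C)):
        opinion[node] = c              # candidates are themselves voters
    decided = list(opinion)            # nodes that hold an opinion
    for _ in range(S * N):
        i = random.choice(decided)
        ci = opinion[i]
        for nb in adj[i]:
            if nb not in opinion:      # undecided: always convinced
                opinion[nb] = ci
                decided.append(nb)
            elif random.random() < p:  # decided: switch with prob. p
                opinion[nb] = ci
    votes = [0] * C
    for c in opinion.values():
        votes[c] += 1
    return votes
```

With $p=0$ the loop reduces to an asynchronous spreading to undecided nodes only, as noted in the text.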
\section{Results}\label{sec:results}
In the following, we present and discuss the histograms expressing the
number of candidates with a given number of nodes. The plots are in
logarithmic scale, and the bin size doubles from one point to the next
in order to provide uniformity. The number of candidates in a bin is
normalized by the bin size. All results correspond to mean values
obtained after $30$ different realizations of the model with the given
parameters.
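The binning procedure can be made explicit as follows (an illustrative sketch; the averaging over the $30$ realizations is omitted):

```python
def log_binned_histogram(votes):
    """Histogram of vote counts with the bin size doubling from one
    bin to the next; each count is normalized by the bin size so that
    bins of different widths are directly comparable."""
    hist = []
    lo, width = 1, 1
    vmax = max(votes)
    while lo <= vmax:
        hi = lo + width            # bin covers the interval [lo, hi)
        count = sum(1 for v in votes if lo <= v < hi)
        hist.append((lo, count / width))
        lo, width = hi, width * 2  # next bin is twice as wide
    return hist
```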
As becomes clear from an analysis of the following graphs, larger
values of $N/C$ tend to lead to more interesting results, motivating
the adoption of large $N$ and small $C$. The use of too large values
of $N$ implies a high computational and memory cost; the use of too
small values of $C$ leads to poor statistics implied by the large
variations in the number of candidates inside the bins. The standard
values of $N=2\,000\,000$ and $C=1\,000$ adopted in the following
represent a good compromise considering our computational resources.
Figure~\ref{fig:errors} shows the results of the simulation for
Erd\H{o}s-R\'enyi and Barab\'asi-Albert networks after $30$ steps and
with a switching probability of $0.1$. The result for the
Erd\H{o}s-R\'enyi network is very similar to results of real elections
\cite{Costa1999}. There is a power-law regime for intermediate number
of votes, a plateau for small number of votes and a cutoff for large
number of votes; the power-law regime has an exponent of $-1$, which
is almost the same as that obtained for real
elections~\cite{Costa1999}. The large variability on the plateau
region is also consistent with the differences found at this part of
the curves when considering different elections outcomes (see for
example the data in \cite{Lyra2003}).
For the Barab\'asi-Albert model, although two power-law regimes with
different exponents can be identified, neither corresponds to the
experimental value of $-1$.
\begin{figure}
\includegraphics[width=0.48\textwidth]{er.eps}
\includegraphics[width=0.48\textwidth]{ba.eps}
\caption{Distribution of candidates with a given number of votes
after $30$ steps for networks with $2\,000\,000$ voters, $1\,000$
candidates, $5$ links per node and a switching probability of
$0.1$. On the lefthand side for Erd\H{o}s-R\'enyi and on the
righthand side for Barab\'asi-Albert networks. Error bars show one
standard deviation.} \label{fig:errors}
\end{figure}
The lefthand side of Figure~\ref{fig:lattice} shows the result for the
simulation on a two-dimensional lattice. There is no sign of a
power-law regime and a clear peak around $1\,000$ votes can be noted,
in disagreement with the scale-free nature of the experimental
results. On the righthand side of the same figure, the effect of
adding random connections to the lattice can be easily visualized. It
is remarkable that the addition of just a small number of new links
(about half the number of nodes) is enough to get a result similar to
the one of the Erd\H{o}s-R\'enyi model. It is a known fact
\cite{Watts1998} that a small number of random links in a regular
network is enough for the emergence of the ``small world''
phenomenon. By enabling a candidate to reach the whole network of
voters in a small number of steps, this phenomenon increases the
chance of a candidate getting a very large number of votes, therefore
broadening the distribution.
\begin{figure}
\includegraphics[width=0.48\textwidth]{lt.eps}
\includegraphics[width=0.48\textwidth]{lattice.eps}
\caption{Distribution of candidates with a given number of votes
after $30$ steps for two-dimensional lattices with $2\,000\,000$
voters, $1\,000$ candidates, $5$ links per node and a switching
probability of $0.1$. On the lefthand side for a pure lattice (error
bars show one standard deviation) and on the righthand side for
lattices with the addition of the given average number of shortcut
links per node between randomly selected nodes. The result for the
Erd\H{o}s-R\'enyi network is also shown for comparison.}
\label{fig:lattice}
\end{figure}
Now we turn our attention to the influence of the parameters of the
model. In Figure~\ref{fig:cands} the effect of changing the number of
candidates while keeping the other parameters fixed is shown. For the
Erd\H{o}s-R\'enyi model, the effect of increasing the number of
candidates translates itself as an upward shift of the curve while, at
the same time, the cutoff is shifted to the left. This is an expected
result: as the number of candidates grows with a fixed number of
voters, the candidates are initially distributed closer to one another
in the network, and have therefore fewer opportunities to spread
influence before hitting a voter already with an opinion; this leads
to a cutoff in smaller number of votes and in an increase in the
number of candidates with less votes than the cutoff. In the
Barab\'asi-Albert model, the behavior for small number of votes is
similar: the curve is shifted up; but for the power-law regime of
large number of votes, the curve decays more steeply as more
candidates are added.
\begin{figure}
\includegraphics[width=0.48\textwidth]{cands.eps}
\includegraphics[width=0.48\textwidth]{cands-ba.eps}
\caption{Effect of the number of candidates. Distributions after
$30$ steps for networks with $2\,000\,000$ voters, $5$ links per
node, a switching probability of $0.1$, and different number of
candidates. On the lefthand side for Erd\H{o}s-R\'enyi and on the
righthand side for Barab\'asi-Albert networks.}
\label{fig:cands}
\end{figure}
Changing the number of voters has an impact limited almost exclusively
to the tail of the curves, as seen in Figure~\ref{fig:voters}. When
the number of voters is increased, in the Erd\H{o}s-R\'enyi model, the
cutoff is shifted to the left and the power-law regime is
correspondingly increased. In the Barab\'asi-Albert model, the maximum
number of votes is shifted and the inclination of the second power-law
regime is changed to accommodate this displacement. Comparing with
Figure~\ref{fig:cands}, we see that the tail of the curve for the
Barab\'asi-Albert model adapts its inclination according to the
relation between number of voters and candidates, i.e. a larger value
of $N/C$ implies a flatter tail.
\begin{figure}
\includegraphics[width=0.48\textwidth]{voters.eps}
\includegraphics[width=0.48\textwidth]{voters-ba.eps}
\caption{Effect of the number of voters. Distributions after $30$
steps for networks with $1\,000$ candidates, $5$ links per node, a
switching probability of $0.1$, and different number of voters. On
the lefthand side for Erd\H{o}s-R\'enyi and on the righthand side for
Barab\'asi-Albert networks.}
\label{fig:voters}
\end{figure}
From Figure~\ref{fig:links} we can see that the behavior that is being
discussed appears only if the network is sufficiently connected: for
$m=1$ there is no power-law regime for the Erd\H{o}s-R\'enyi model and
the behavior for the Barab\'asi-Albert model is complex, with three
different regions and a peak for small number of votes. Also for this
latter model, the inclination of the tail of the curve appears to be
slightly influenced by the average connectivity, with steeper tails
for smaller connectivities.
\begin{figure}
\includegraphics[width=0.48\textwidth]{links.eps}
\includegraphics[width=0.48\textwidth]{links-ba.eps}
\caption{Effect of the number of links. Distributions after $30$
steps for networks with $2\,000\,000$ voters, $1\,000$ candidates,
a switching probability of $0.1$, and different number of links
per node. On the lefthand side for Erd\H{o}s-R\'enyi and on the
righthand side for Barab\'asi-Albert networks.}
\label{fig:links}
\end{figure}
The switching probability has effect only on the first part of the
curve, as can be seen from Figure~\ref{fig:change}. In both models,
this part of the curve is shifted down as the probability increases
and its range is extended until it touches the original (for zero
probability) curve. Note that the inclination of the Barab\'asi-Albert
curve corresponding to small number of votes is maintained for the
different values of switching probability (but is different for zero
probability).
\begin{figure}
\includegraphics[width=0.48\textwidth]{change.eps}
\includegraphics[width=0.48\textwidth]{change-ba.eps}
\caption{Effect of the switching probability. Distributions after
$30$ steps for networks with $2\,000\,000$ voters, $1\,000$
candidates, $5$ links per node, and different values for the
switching probability. On the lefthand side for Erd\H{o}s-R\'enyi
and on the righthand side for Barab\'asi-Albert networks.}
\label{fig:change}
\end{figure}
A similar effect has been obtained while changing the number of steps
(Figure~\ref{fig:steps}). As the number of steps is increased, the
curve remains unchanged for large numbers of votes, but is shifted down
for small numbers of votes. The similarity between an increase in the
number of steps and an increase in switching probability is easily
explained: after all voters have a candidate, changes occur only by
switching candidates. In other words, increasing the number of steps
increases the number of times a switch is attempted, which has an
effect similar to increasing the switching probability.
\begin{figure}
\includegraphics[width=0.48\textwidth]{steps.eps}
\includegraphics[width=0.48\textwidth]{steps-ba.eps}
\caption{Effect of the number of steps. Distributions for networks
with $2\,000\,000$ voters, $1\,000$ candidates, $5$ links per
node, a switching probability of $0.1$, and different total number
of steps. On the lefthand side for Erd\H{o}s-R\'enyi and on the
righthand side for Barab\'asi-Albert networks.}
\label{fig:steps}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We suggested and studied a simple voting model based on the spreading
of opinions through the links of a network. The results of the
simulation of the model show a remarkable qualitative agreement with
experimental results for proportional voting in Brazilian and Indian
elections \cite{Costa1999} when the network model used is of
Erd\H{o}s-R\'enyi type or a lattice with sufficient random shortcuts
added. In these networks, the model results in a power-law
distribution with an exponent of $-1$, but with a cutoff for large
numbers of votes and a plateau for small numbers of votes, as observed
in real elections. The ``small world'' effect appears to be of central
importance in this result, as the result for a lattice without
shortcuts is very different, without any power-law regime.
Interestingly, the Barab\'asi-Albert network model gives results that
are not consistent with real elections: there are two power-law
regimes without a cutoff and the second (and dominant) power-law
regime is not universal, depending on the number of links per node in
the network and the relation between number of voters and number of
candidates. Also, the first power-law regime is not characterized by
the experimental value of $-1$. This is somewhat puzzling, as many
social networks have power-law degree distributions \cite{Newman2003}
and are in this respect better related to the Barab\'asi-Albert model
than to the other two models investigated. We suspect the explanation
to this is related to the importance of clustering and communities in
social networks, neither of which is represented in the Barab\'asi-Albert
model, although they are also absent from the Erd\H{o}s-R\'enyi
networks. This is an issue deserving further investigation.
\vspace{0.5cm}
\noindent {\bf Acknowledgements:} L. da F. Costa is grateful to CNPq
(308231/03-1) for financial sponsorship.
\bibliographystyle{unsrt}
% arXiv: physics/0603112
\section{Symptoms of a crisis in the foundations of particle theory}
There can be no doubt that after almost a century of impressive success,
fundamental physics is in the midst of a deep crisis. Its epicenter is in
particle theory, but its repercussions may even influence the direction of
experimental particle physics and affect adjacent areas of fundamental
research which traditionally used innovative ideas of quantum field theory
(QFT). They also led to quite bizarre ideas about the philosophy underlying
fundamental sciences, which partially explains why they attracted considerable
attention beyond the community of specialists in particle physics.
One does not have to be a physicist in order to be amazed when reputable
members of the particle physics community \cite{Suss} recommend a paradigmatic
change away from the observation based setting of physics which, since the
time of Galileo, Newton, Einstein and the protagonists of quantum theory has
been the de-mystification of nature by mathematically formulated concepts with
experimentally verifiable consequences. The new message, which has been formed
under the strong influence of string theory, is that it is scientifically
acceptable to use one's own existence in reasoning about matters of
theoretical physics, even if this leads to a vast collection of in principle
unobservable concepts such as \textit{multiverses} and \textit{parallel
worlds}. This new physics accepts metaphors but calls them principles, as e.g.
the \textit{anthropic principle}; its underlying philosophy resembles a
religious faith with its unobservable regions of heaven and hell rather than
physics, as we know it since the times of Galileo. It certainly amounts to a
rupture with traditional natural sciences and the philosophy of enlightenment.
Despite assurances to the contrary, it looks like an avatar of
\textit{intelligent design}.
Instead of \textquotedblleft cogito ergo sum\textquotedblright\ of the
rationalists, the new \textit{anthropic} maxim coming from this new doctrine
attaches explanatory power to its inversion: "I exist and therefore things are
the way they are", since otherwise I would not exist. Its main purpose is to
uphold the uniqueness of the string theorists' dream of a theory of everything
(TOE). Even with an enormous number of string solutions with different
fundamental laws and fundamental constants, the use of the anthropic principle
permits one to uphold uniqueness by claiming that we are living in a multiverse
consisting of as many universes as it needs to account for all string
solutions; in this way one is able to claim that these describe actually
existing but inaccessible and invisible parallel worlds or multiverses of a
unique TOE. In this context anthropic reasoning is not meant as a temporary
auxiliary selective device, pending better understanding and further
refinements of the theory, but rather as a way to uphold the TOE status of
string theory. To physicists outside the string community, the logic behind
this doctrine resembles the "if you cannot solve a problem then enlarge it"
motto of some politicians.
To demonstrate the physical relevance of string theory in this anthropic
setting it would suffice to show that there is one solution which looks like
our universe; but whereas the number of solutions has been estimated, nobody
has an idea how to arrange such a search. How can one find something in a
haystack if one does not even know how to characterize our universe in terms
of moduli and other string-theoretic notions which distinguish a reference
state (the string theory vacuum)?
To be fair, the anthropic dogma of a \textit{multiverse} instead of a universe
i.e. the belief that all these different solutions with quantum matter obeying
different laws (including different values of fundamental constants) exist and
form "the landscape" \cite{Suss}, is not shared by all string theorists.
Such a picture is still confined to a vocal and influential minority in
particle theory, but it is not difficult to notice a general trend of moving
away from the traditional scientific setting based on autonomous physical
principles, towards more free-wheeling metaphoric consistency arguments.
Ironically some of the new aggressive science-based atheists are strong
defenders of the metaphors about the string-inspired multiverses.
The ascent of this metaphoric approach is fueled by the increasing popularity
of string theory and the marketing skill of its proponents to secure its
funding. This, and certainly not the extremely meager physical results, is
what at least partially accounts for its present dominant status in particle
theory. Whereas the attraction it exerts on newcomers in physics is often
related to career-building, the attention it receives from the media and a
broader scientifically interested public is the result of the entertaining
fictional qualities of its "revolutionary" achievements.
These developments take place against the background of frustration within the
particle physics community as a result of inconclusive or failed attempts to
make further progress with the standard model (SM). The latter has remained
particle physics' finest achievement ever since its discovery more than three
decades ago. It continued the line of gradual unification which started
already with Faraday, Maxwell, Einstein, Heisenberg and others. This kind of
unification has been the result of a natural process of the development of
ideas, i.e. the protagonists did not set out with the proclaimed aim to
construct a TOE, rather the coming together was the result of a natural
unfolding of ideas following the intrinsic theoretical logic of principles,
but always with observations and experiments having the ultimate say.
Particle physics is a conceptually and mathematically quite demanding science
and its progress sometimes requires an adventurous\ "into the blue yonder"
spirit. But precisely because of this it needs a critical balance whose
intellectual depth at least matches that of its target. To some extent this is
part of an inner theoretical process in which the main issue is that of
conceptual consistency. Particle theory is very rich in established
fundamental principles, and a good part of theoretical research consists in
unfolding their strong intrinsic logic in the concrete context of models.
Experiments cannot decide whether a theoretical proposal is conceptually
consistent, but they can support or reject a theory or select between several
consistent theories and, last but not least, lead to limitations of the
validity of principles and give hints on how to amend them.
A theoretician should carry his criticism, if possible, right into the
conceptual-mathematical core of a theory. The most basic property in the
formulation and interpretation of models in particle physics has been and
still is the \textit{localization of objects in spacetime}. With respect to
string theory this amounts to the question whether it really represents, as
its name indicates, a theory of string-like localized objects in spacetime.
This will be the main theme in section 6. It is not easy to criticise
something which is conceptually and mathematically as opaque as string theory.
Fortunately this question allows a rigorous answer in the case of the
"founding" model of string theory: the canonically quantized Nambu-Goto model
\cite{N}\cite{G}.
The answer will be that the quantum string is described by a generalized free
field whose infinite mass/spin tower spectrum is such that the relative
weights of irreducible components in the Kallen-Lehmann function depend on the
chosen classical string configuration. In other words the possible
configurations associated with the classical N-G functional integrand become
encoded into a mass/spin tower of a point-like infinite component field but
have no bearing on spacetime localization i.e. on "wiggling strings" in
spacetime. This calls for a more general understanding of the relation between
classical and quantum localization beyond the point-like case (where both
coalesce), based on an autonomous notion of quantum localization.
String theory was conceived as an S-matrix theory. An S-matrix theory has no
direct relation to localization, the indirect link goes only through the
inverse scattering problem. Two previous attempts at a pure S-matrix theory,
that of Heisenberg in 1943 and the Chew-Mandelstam-Stapp bootstrap idea of the
60s, failed because, contrary to the naive expectation of uniqueness, at the end of
the day there were too many solutions (the infinite family of 2-dim.
factorizing models). The dual resonance model on the other hand, which
resulted from replacing the general crossing property by the
Veneziano--Virasoro-Dolen-Horn-Schmid duality, led to very explicit formulae
which represented some (only at that time desirable) properties of an S-matrix
describing strong interactions, but had serious conceptual problems.
A critical presentation of string theory would not be comprehensive if it
ignored the history which led to it. The best way to try to understand a
theory which derives its computational recipes from unclear physical concepts
is to critically follow the path of its history. This is precisely what we
will do in the following four sections.
The present view of string theorists on their own history is reflected in the
several recent 40 year anniversary contributions, some from experienced
veterans of the dual model days with first hand insider knowledge. They make
quite interesting historical reading and some of their points will be
mentioned in later sections. But they were written in the certainty of ideas
which at the end led to a theory of great impact and world wide dominance
which is certainly not supported by the point of view in this essay.
The criticism of the scientific aspects of superstring theory and its
sociological manifestation in this essay is not directed against its
protagonists and individual contributors. Rather its main target is the lack
of balance caused by the uncritical reception of its central claim to
represent a quantum theory in which string-like objects replace the standard
point-like fields of QFT. It will be argued that its unreal dreamlike almost
surreal physical appearance which it has outside the string community, is the
result of a mismatch between what it claims to be and what it really is. More
specifically, the geometric aspects which led to its name are not identical
with its intrinsic physical localization property.
For a mathematician this is irrelevant: the geometric properties should be
present somewhere, but he could not care less about where it is encoded,
whether the mathematics of string configurations describes matter localized in
spacetime or whether it becomes encoded into an infinite component field which
"sits over one point". A physicist however is required to follow the logic of
quantum localization; if he would base his interpretation solely on the
geometric logic and overlook that localization has an intrinsic quantum
physical meaning, he would go astray or, in the terminology of this article he
would take the path of wrong metaphors and not that of autonomous physical
interpretation. The historical context shows that the erosion of the intrinsic
by the metaphoric, of which string theory is the most visible illustration,
did not come out of the blue.
Even at the risk of sounding cynical there is also some positive aspect,
namely the construction of the free Nambu-Goto string or the superstring
solves a problem which a whole previous generation (Barut, Kleinert, Fronsdal,
Ruegg, Budini, ...) found a hard nut to crack, namely to find natural quantum
constructions of infinite component wave functions or (which is the same)
infinite component free fields. The quantum mechanics archetype was the
hydrogen atom which can be fully understood in terms of a representation of
the noncompact group SO(4,2). Attempts to get a relativistic mass/spin
spectrum along these lines failed. If one is not prudish about spacetime
dimensions, then the N-G or superstring is indeed a solution. The secret is to
replace group theory with vector-valued (or dotted/undotted spinorial) quantum
mechanical oscillators on which the Lorentz group acts. String theory uses
oscillators as they arise from the Fourier decomposition of multicomponent
currents of a chiral QFT. But beware, this does not mean that one embeds a
conformal field theory as a stringy one-dimensional subalgebra in higher
dimensional spacetime. All these problems will be addressed in detail in later sections.
It is an interesting question whether the problems caused by confusing
geometric properties with the intrinsic meaning of localization have a higher
dimensional analog beyond the one-dimensional strings. The lack of a pure
quantum (non quasiclassical) brane solution in analogy to the N-G model
presently prevents a direct answer to this question, but the negative
experience with string theory argues against describing branes by embeddings
in terms of quantized coordinates which depend on more parameters than for strings.
This intention to go to the \textit{conceptual} roots separates the content of
the present essay from several other books and articles which have appeared
\cite{Wo}\cite{Sm}\cite{Hedrich}. We want to go beyond the mere assertion that
string theory has surreal aspects; it is our intention to expose the cause of
its misleading metaphor. We do however agree with all the authors who have
written string-critical articles that even if superstring theory would be
consistent on the conceptual-mathematical level, the lack of tangible results
despite more than four decades of hard work by hundreds of brilliant minds,
the futile consumption of valuable resources and, last but not least, its bizarre
philosophical implications should be matters of great concern.
The crisis in particle physics on the eve of the LHC experiments has strong
relations to an ongoing crisis of the post-cold-war socioeconomic system and its
ideology. The idea of a final theory of everything is too close to the way
globalized capitalism views itself to resist the temptation of
looking in more detail at some surprising parallels. As will be argued in the
last section, both developments are characteristic manifestations of the same
Zeitgeist. It remains to be seen whether the turbulent 2008 crash of the post-cold-war
economic system and its ideological "end of history" setting finds
its analog in a loss of support for the project of a TOE and the superstring metaphors.
The content is structured as follows.
The next section reviews Heisenberg's S-matrix proposal and its profound
criticism by Stueckelberg on the basis of macro-causality problems. The third
section recalls the S-matrix bootstrap program whose lasting merit consists in
having added the important on-shell crossing symmetry to the requirements of
an S-matrix program. The fourth section analyses the relation of on-shell
crossing property with off-shell localization concepts and comments on its
proximity to the Kubo-Martin-Schwinger (KMS) thermal aspect of localization.
Section 5 reviews the implementation of duality of the Dolen-Horn-Schmid
(DHS) dual resonance model in the setting of a multi-charge chiral current
model. In this way the differences between duality resulting from the
generalized (anyonic) commutation relations of charge-carrying chiral fields
and the particle-based notion of crossing become highlighted. The prior
results on quantum localization obtained in the fourth section are then used
in section 6 to show that string theory of the canonical quantized Nambu-Goto
model, contrary to terminology, does not deal with string-localized objects in
spacetime. Rather the classical string configurations associated with the
functional integrand become encoded into the structure of an infinite
component local field whose decomposition leads to free fields corresponding
to a discrete mass/spin tower.
The last section is an attempt to shed some light on how a theory with so many
conceptual shortcomings as well as lack of predictive power was able to
represent the spirit of particle physics at the turn of the millennium. In
that section we leave the ivory tower of particle theory and turn to some
observations on parallels between superstring physics and the
millennium Zeitgeist.
Since the mathematical-conceptual content is quite demanding and we want to
keep this essay accessible to readers with more modest mathematical knowledge,
some statements and arguments will appear more than once in a different
formulation and context.
\section{QFT versus a pure S-matrix approach from a historical perspective}
Particle physics was, apart from a period of doubts and confusion around the
ultraviolet catastrophe which started in the late 30s and did not last longer
than one decade, a continuous success story all the way from its inception
\cite{Dar} by Pascual Jordan (quantization of wave fields for light waves and
matter) and Paul Dirac (relativistic particles and anti-particles via hole
theory) up to the discovery of the SM at the end of the 60s. For about 40
years the original setting of Lagrangian quantization, in terms of which QFT
was discovered, gave an ever increasing wealth of results \textit{without
requiring any change of the underlying principles}. After the laws of QT had
been adapted to the changed conceptual and mathematical requirements resulting
from the causal propagation in theories with a background-independent maximal
velocity (the velocity of light), the new principles were in place and the
subsequent significant progress, including the elaboration of renormalized
QED, consisted basically in finding new conceptual-mathematical realizations
of those principles underlying QFT. The ultraviolet divergencies posed some
temporary obstacle on this path, since they led to the proposal of an
elementary length which would have required a modification of the
principles\footnote{Despite numerous attempts, some of them even ongoing,
it has not been possible to find a conceptual framework for nonlocal theories.
The notion of a cutoff has remained a metaphor. None of the soluble
two-dimensional local factorizable models permits the introduction of a
cutoff without wrecking its physical content and losing the mathematical
control.}. After the clouds of doubts about the ultraviolet catastrophe
dispersed, thanks to the new setting of covariantly formulated perturbative
renormalization theory, the conceptual and mathematical improvements
reinforced the original principles.
It is interesting to observe that, already at the beginning of QFT, its
protagonist Pascual Jordan worried about the range of validity of
quantization. His doubts originated from his conviction that, although
classical analogies allow in many cases rapid access to the new quantum theory
of fields in form of important perturbative model illustrations, in the long
run a more fundamental quantum theory should not need the quantization
parallelism to the less fundamental \textit{classical} Lagrangian formalism.
Rather it should develop its own autonomous tools for the classification and
construction of QFTs, or in his words "without classical crutches" \cite{Kha}.
To turn the argument around: to the extent to which one still has to rely on
quantization crutches, one has not really reached the conceptual core of the
new theory. Jordan's doubts about the range of validity of that umbilical cord
to classical field theory did not originate from any perceived concrete
shortcoming of his "quantum theory of wave fields". Rather the state of
affairs in which he discovered this new theory did not comply with his
philosophical conception; in his opinion a classical parallelism can only be
tolerated as a temporary device for a quick exploration of those parts of the
new theory which are in the range of this quantization recipe.
But things did not develop in the direction of his plea. The ultraviolet
divergence crisis of the 30s ended in the late 40s in the discovery of
renormalized QED, a fact which certainly revitalized the Lagrangian approach
and pushed the search for an intrinsic formulation into the sideline.
Unfortunately the renormalized perturbation series of quantum field
theoretical models diverges, so the hope to settle also the existence problem
of QFTs in the Lagrangian quantization setting did not materialize; the
success of the renormalized perturbative setting did not lead to a conceptual
closure of QFT. However at least it became clear that the old problem of
ultraviolet infinities, which almost derailed the development of QFT, was in
part a pseudo-problem caused by the unreflective use of quantum mechanical
operator techniques for pointlike quantum fields which are too singular to
qualify as operators.
The main difference between the status of QFT and that of any other physical
theory (statistical mechanics, quantum mechanics, ...) is that for the latter,
as a result of many interesting nontrivial models under solid mathematical
control, one knows that the "axioms" which characterize them admit nontrivial
solutions. It is often forgotten, or pushed aside as a nuisance, that
\textit{QFT has not yet reached this state}; it still has a long way to go.
Using more adequate mathematical tools in conjunction with a minimality
principle which limits the short distance singularity in every perturbative
order \cite{E-G}, one finds that there are local couplings between pointlike
fields for which the perturbative iteration either does not require more
parameters than there were in the beginning, or adds only a finite number of
new couplings which one could have already included in the starting
interaction expressed in terms of Wick-product of free fields\footnote{I am
referring here to the Epstein Glaser \cite{E-G} formulation which produces the
renormalized finite result directly by treating the fields in every order
correctly according to their singular nature. The avoidance of
intermediate cutoffs or regularizations maintains the connection with the
quantum theoretical Hilbert space structure of QFT.}. The renormalized theory
forms a finite parameter space on which the (Petermann-Stueckelberg)
renormalization group acts ergodically. These finite parametric families are
conveniently pictured as "islands" in an infinite parameter setting (the
Bogoliubov spacetime dependent operator-valued S-functional, or the Wilson
universal renormalization group setting) within a universal master
S-functional\footnote{The S-functional is a formally unitary off-shell
operator which contains the on-shell S-matrix. The existence of the latter
does however not depend on that of the former.} depending on infinitely many
coupling functions (which by itself has no predictive power). Since the
renormalization group leads from any point on the island in coupling space to
any other point on the same island, a QFT cannot provide a method to
distinguish special numerical values such as, e.g., that of the fine-structure
constant.
The phenomenon of interaction-caused infinite vacuum polarization clouds
(finite in every order perturbation theory) gives rise to a conceptual rupture
with QM \cite{interface} and leads to a change of parameters in every order.
But since these parameters remain undetermined anyhow, this causes no harm.
The inexorable presence of interaction-induced vacuum polarization prevents
one to think of an initial numerical (Lagrangian) value for these parameters
which is then changed by a computable finite amount. In other words, unlike
in QM there is no separate "bare" and "induced" part. This is why the
Epstein-Glaser renormalization, which avoids such quantum mechanical concepts,
is conceptually preferable \cite{E-G}. It not only addresses the singular
nature of fields but it also exposes the limits of QFT concerning the
predictive power about the numerical value of certain parameters in a more
honest way.
So when string theorists say that their theory is ultraviolet finite and
contrast this with QFT, what they really mean in intrinsic terms is that their
theory is more economical (and hence more fundamental) in that it has only the
parameters which describe string interactions, i.e. the string tension. But
beware, they say that without being able to give a proof!
This implies in particular that \textit{string theory has no vacuum
polarization} which is of course completely consistent with its on-shell
S-matrix character. An S-matrix is free of vacuum polarization par excellence,
in fact in QFT the S-matrix and formfactors are the only objects of this kind.
Heisenberg's plea for basing particle physics on the S-matrix was proposed
precisely because of this absence of vacuum polarization and the ensuing
ultraviolet problems. But can one really do particle physics without such a
central concept as vacuum polarization? Can one formulate physically motivated
autonomous S-matrix properties without the intervention of fields? These are
clearly the kind of problems one faces in a pure S-matrix approach. We will
return to them in the historical context of Stueckelberg and Heisenberg.
Before taking a critical look at pure S-matrix attempts, it is quite
instructive to glance at some unfinished problems of QFT.
The well-known power counting restriction to interactions $dim\mathcal{L}%
_{int}\leq d$ ($d=$ spacetime dimension)\footnote{The causal perturbative approach
(in contrast to the functional integral setting) does not use the Lagrangian
formalism; $\mathcal{L}_{int}$ is only a notational device for the
Wick-ordered polynomial which represents the pointlike interaction between
fields.} is quite severe; in $d=4$ it only allows pointlike fields $\Phi$ with
short distance dimension $sdd\Phi=1.$ In addition massless vectorpotentials
(and more generally the potentials associated with the Wigner massless finite
helicity representations) \textit{cannot be pointlike covariant objects within
a (ghost-free) Wigner Fock representation}; the best one can do is to permit a
spatial semiinfinite extension. In the case of (m=0,s=1) this leads to
semiinfinite stringlike-localized covariant vector potentials $A_{\mu}(x,e)$
\cite{MSY}\ with $sdd=1~$which act in the same physical Wigner-Fock space as
the associated pointlike field strength; in fact their only unphysical aspect
is that they are not \textit{local} observables. As expected their stringlike
localization shows up in the commutator between two potentials whenever the
semiinfinite string of one $x+\mathbb{R}_{+}e$ gets into the influence region
of the other. Such string-localized potentials with $sdd=1$ independent of
helicity exist for all $(m=0,s\geq1)$\footnote{For $s=2$ the field strength is
the (linearized) Riemann tensor and the potential is a string-localized
linearized metric tensor $g_{\mu\nu}(x,e)$ localized along the line
$x+\mathbb{R}_{+}e$ (see section 4). The string localization comes from
quantum requirements and has no counterpart in the classical theory.}.
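To make the power-counting statement above concrete (standard textbook
material, recalled here for orientation): a free scalar field in $d$ spacetime
dimensions has short distance dimension
\begin{equation}
sdd\,\varphi=\frac{d-2}{2},\qquad\text{so in }d=4:\ sdd\,\varphi=1,
\end{equation}
and hence $\mathcal{L}_{int}=g:\varphi^{4}:$ saturates the power-counting bound
with $dim\mathcal{L}_{int}=4=d$ and a dimensionless coupling $g$, whereas
$:\varphi^{5}:$ already has dimension $5>d$ and falls outside the
renormalizable class.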
For massive fields there is no such representation theoretic reason to
introduce string-localized fields since pointlike covariant fields exist for
all admissible covariant spinorial objects $\Phi^{(A,\dot{B})}$ with
$\left\vert A-\dot{B}\right\vert \leq s\leq A+\dot{B}$. But it is well-known
that the short distance dimension increases with the spin s; for $s=1$ it is
at least $sdd=2.$ They violate the power counting theorem and therefore the
lowering of the dimension to $sdd=1$ for semiinfinite stringlike localized
fields \cite{MSY} is of potentially great interest for extending the realm of
renormalizable interactions. The crucial question is whether there exists a
perturbation theory for stringlike localization, more specifically whether the
Epstein-Glaser iteration step, which is built around pointlike locality,
admits an extension to semiinfinite stringlike localization. This is presently
under investigation and we refer to work in progress \cite{M-S}.
Another way to lower short distance dimensions but without sacrificing the
pointlike formalism is to allow \textit{indefinite metric} with the help of a
pointlike (BRST) ghost formalism. The use of such a formalism is suggested by
the quantization of classical gauge theories. The requirement of gauge
invariance is the key which permits to return to a Hilbert space. The
formalism is quite efficient but it is primarily directed to the perturbative
construction of local observables which are identified with the gauge
invariant objects. Nonlocal physical operators as charged fields have to be
defined "by hand" \cite{infra} which up to now has only been possible in
abelian gauge theories. In an approach based on string-localized fields the
nonlocality would be there from the beginning and all objects local and
nonlocal would be part of the same formalism.
In the massive case the stringlike formulation would permit one to start already
with $sdd=1$ massive vector mesons, and one would have the chance to understand
better if and why the requirement that the string localization should only be
a dimension-lowering technical trick (i.e. the theory continues to be
pointlike generated) requires the presence of an additional scalar field
(naturally with a vanishing one-point function).
This short account of the history of QFT and particle physics contains most of
the ideas which are needed for the formulation of the SM which places QED, the
weak interaction and the QCD setting of strong interactions under one common
gauge theoretic roof. But it also was meant to expose some important
unfinished areas of QFT. Once one goes beyond textbook presentations of QFT
and returns to the principles, QFT is to a large extent virgin territory. The
claim that QFT is a closed subject and that its innovative role has passed to
string theory is perhaps not really an argument against QFT but rather against
the caricature picture which string theorists created of QFT.
One of the marvelous achievements of the post QED renormalization theory is a
clear understanding of the particle-field relation (not to be confused with
the particle-wave dualism in QM) in the presence of interactions. Whereas in
free field theories Heisenberg had already observed the presence of vacuum
fluctuations due to particle-anti-particle pairs in states obtained by the
application of (Wick) composites to the vacuum, the real surprise came when
Furry and Oppenheimer discovered that in interacting theories even the
Lagrangian "elementary" field generates vacuum polarization upon application
to the vacuum state. Different from the case studied by Heisenberg, the
interaction-induced polarization pairs increase in number with the perturbative
order and form a \textit{vacuum polarization cloud} containing an infinite
number of virtual particles. This observation challenges the naive
identification of particles and fields which is the result of a
simple-minded conceptual identification of QFT as a kind of relativistic QM.
Although one-particle states exist in the Hilbert space and the global
operator algebra certainly contains particle creation/annihilation operators,
\textit{compactly localized subalgebras} in interacting QFTs contain no
vacuum-\textbf{p}olarization-\textbf{f}ree \textbf{g}enerator
(PFGs)\textit{\footnote{The only localization which allows PFGs is the
non-compact wedge-like localization \cite{BBS}.}.} In other words in the
presence of interactions (independent of what kind of interaction) there are
no operators localized in subwedge regions which create a one-particle state
from the vacuum without being accompanied by an infinite vacuum polarization cloud.
The particle/field relation, mysterious at the time, was partially unveiled
when in the post QED renormalization period it became clear that interacting
\textit{QFT is not capable of describing particles at a finite time}; as a
result of the ubiquitous presence of vacuum polarization clouds it is only
possible to have an asymptotic particle description when, barring long range
forces and infrared problems, the localization centers of particles are far
removed from each other, so that the interaction is effectively switched off.
In fact the elaboration of scattering theory as a structural consequence of
causal locality, energy-momentum positivity and the presence of a mass gap, as
carried out in the late 50s and early 60s, was one of the finest achievements
of relativistic particle theory. No comparable conceptual enrichment has been
added after the discovery of the SM.
As mentioned in the previous section, the idea of a pure S-matrix theory as a
remedy against the ultraviolet catastrophe of the old (pre-renormalization)
QFT was first proposed by Heisenberg\footnote{The concept of a unitary
scattering operator as a mapping of incoming multiparticle configurations into
outgoing ones in the limit of infinite timelike separations was introduced
independently by Wheeler and Heisenberg. However, the idea of a pure S-matrix
theory as an antidote against the alleged pre-renormalization ultraviolet
catastrophe is attributed to Heisenberg.} \cite{1946}. The S-matrix models
with which he illustrates his ideas resulted from a naive unitarization of the
interaction Lagrangian (see next section). Heisenberg's proposal was
immediately criticized by Stueckelberg who pointed out that, although it was
Poincar\'{e}-invariant and unitary, it did not meet the requirements of macro-causality.
In the next section we will comment on Heisenberg's construction and isolate
the problem on which all pure S-matrix theories failed: fitting together
unitarity and Poincar\'{e} covariance with macrocausality (notably the cluster
factorization property). Clustering is the spacelike aspect of macro-causality
which is indispensable for any S-matrix, whether it comes from QFT or any
other theory of interacting particles. In QFT and other off-shell
implementations of particle interactions, the clustering property is
implemented on correlation functions or (similar to nonrelativistic QM)
through asymptotic additivity of the interaction-dependent generators of the
Poincar\'{e} group. Its validity for the asymptotic configurations is then a
side result of the proof of asymptotic convergence. In other words, the
highly nonlinear on-shell unitarity requirement is trivialized by showing that
it results from the large-time limit of more easily implementable linear
additive clustering properties for correlation functions.
A long time after the Heisenberg S-matrix proposal was abandoned and QFT
experienced a strong return in the form of renormalized quantum
electrodynamics, ideas about an S-matrix-based approach returned, this time as
the result of the failure of perturbative arguments in strong interactions
between mesons and nucleons. This led to the \textit{S-matrix bootstrap} by
Chew and Mandelstam. The ideological fallout of the bootstrap approach
interspersed with ideas from eastern mysticism entered the popular writings of
the physicist F. Capra and others. The bootstrap ideas never led to any
tangible remaining results in particle physics, but their influence on the
popular "new age" culture and the Zeitgeist of the 60s and 70s has been considerable.
The analytic aspects of QFT correlations, which follow from locality and
spectral properties, imply a kind of crossing relation which was first seen in
Feynman diagrams within a fixed perturbative order. Restricting the external
legs of these graphs to the mass-shell in order to obtain perturbative
contribution to the S-matrix, one was able to show that the different S-matrix
elements belonging to different distributions of n-particles into $k$ incoming
and $l$ outgoing particles are connected by an analytic continuation. The
surprising aspect (which was not trivial even with Feynman graphs) was that
this was possible without leaving the complexified mass shell. In other
words, crossing is not a symmetry but rather an analytic on-shell mark left by
the spacelike commutativity of QFT. Although there is presently no general
proof of crossing for generic particle configurations beyond some special
cases, most particle physicists would agree that highlighting this important
property will remain as one of the few legacies of that S-matrix bootstrap period.
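For a $2\rightarrow2$ process the content of crossing can be stated concretely
(the standard textbook formulation, added here for illustration): with
Mandelstam variables satisfying $s+t+u=\sum_{i}m_{i}^{2}$, one and the same
analytic function describes different processes in its different physical
regions,
\begin{equation}
F(s,t,u)\big|_{s\text{-channel}}:\ a+b\rightarrow c+d,\qquad
F(s,t,u)\big|_{u\text{-channel}}:\ a+\bar{d}\rightarrow c+\bar{b},
\end{equation}
the continuation between the two regions proceeding inside the complexified
mass shell, as described above.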
The S-matrix bootstrap community did not succeed in coming up with a model in
which this new property is nonperturbatively realized. Strictly speaking a TOE
has no model representation which is different from itself; it leads to an
"everything or nothing" alternative.\ Whereas crossing and unitarity were the
two main postulates in the S-matrix bootstrap setting, other important
properties such as Stueckelberg's macrocausality (in particular the cluster
factorization) were missing from the bootstrap postulates.
There exists an exceptional situation in d=1+1\footnote{This is related to the
kinematical equality of the energy-momentum delta function with the product of
two one-particle delta functions which only holds in d=1+1.}. This has the
effect that the cluster property cannot separate the genuine 2-particle
interacting contribution from the identity term of S. In this case it is
possible to fulfill unitarity and crossing with purely elastic two-particle
S-matrices. In fact one can classify such two-particle S-matrices and show
that all higher elastic processes are given by a combinatorial formula in
terms of the two-particle S-matrix \cite{Ba-Ka}. Purely elastic relativistic
scattering in higher spacetime dimension, as it occurs in the relativistic
quantum mechanics of \textit{direct particle interactions} \cite{Co-Po} (see
next section), is not possible in QFT.
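For a single species of neutral massive particles in d=1+1, these requirements
take a compact form in the rapidity parametrization
$p=m(\cosh\theta,\sinh\theta)$ (standard in the factorizing-model literature,
recalled here for orientation): the two-particle S-matrix $S(\theta)$, with
$\theta=\theta_{1}-\theta_{2}$, is constrained by
\begin{equation}
S(\theta)S(-\theta)=1\quad\text{(unitarity, using real analyticity
}\overline{S(\theta)}=S(-\theta)\text{)},\qquad
S(\theta)=S(i\pi-\theta)\quad\text{(crossing)},
\end{equation}
and the classification mentioned above amounts to finding the minimal
solutions of these functional equations.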
\section{Unitarity and macro-causality in relativistic particle theories}
There are three fundamental requirements which an S-matrix of relativistic
particle physics must obey, namely Poincar\'{e} invariance, unitarity and
macro-causality. None of these concepts requires the introduction of fields;
macrocausality is a very weak version of causality which can be formulated and
understood in terms of only particle concepts. To avoid misunderstandings,
there are analytic properties of scattering amplitudes, such as the crossing
property which requires analytic continuation inside the complex mass shell.
Such analytic on-shell properties cannot be traced back to principles
referring to particles only; rather they must be understood as being an
on-shell imprint of the causal locality principles of an underlying local
quantum physics, i.e. they are consequences of the existence of local fields
which interpolate between the incoming/outgoing particles.
As a pedagogical exercise, which leads right into the problematic aspects of
pure S-matrix theories, let us revisit the situation at the time when
Stueckelberg \cite{Stue} criticized Heisenberg's S-matrix proposal.
As already mentioned, Heisenberg suggested that avoiding the vacuum
polarization in interacting QFTs by abandoning fields in favor of directly
constructing S-matrices could lead to a solution of the ultraviolet problem.
His rather concrete proposals consisted in expressing the unitary S-matrix in
terms of a Hermitian "phase operator" $\eta$ and imposing physically motivated
restrictions on this operator. In modern notation his proposal reads%
\begin{align}
S & =\exp i\eta\\
\eta & =\sum_{n}\frac{1}{n!}\int\cdots\int\eta(x_{1},...,x_{n}):A_{in}%
(x_{1})\cdots A_{in}(x_{n}):dx_{1}\cdots dx_{n}\nonumber\\
\eta_{Hei} & =g\int:A_{in}^{4}(x):d^{4}x\nonumber
\end{align}
where the on-shell coefficient functions of $\eta$ are chosen to be
Poincar\'{e} invariant and subject to further physically motivated
restrictions. In fact one such restriction which he suggested was that the
on-shell $\eta$ should be close to a Lagrangian interaction i.e. have local
coefficient functions as illustrated in the third line. It is customary to
split off the identity operator from $S$ and formulate unitarity in terms of a
quadratic relation for the T-operator
\begin{align}
& S=1+iT\\
& iT^{\ast}-iT=TT^{\ast}\nonumber
\end{align}
In this form the unitarity is close to the optical theorem and convenient for
perturbative checks.
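To spell this out (a short standard computation): inserting $S=1+iT$ into
$SS^{\ast}=1$ gives
\begin{equation}
(1+iT)(1-iT^{\ast})=1+iT-iT^{\ast}+TT^{\ast}=1,\qquad\text{i.e.}\quad
iT^{\ast}-iT=TT^{\ast},
\end{equation}
whose diagonal matrix elements,
$2\operatorname{Im}\left\langle p\right\vert T\left\vert p\right\rangle
=\left\langle p\right\vert TT^{\ast}\left\vert p\right\rangle \geq0$, express
the optical theorem: the imaginary part of the forward amplitude measures the
total transition probability.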
Unitarity and Poincar\'{e} invariance are evidently satisfied if the (possibly
singular) functions $\eta(x_{1}...x_{n})$ are Poincar\'{e} invariant, but what
about macro-causality? For spacelike separation one must require the so-called
\textit{cluster factorization property}. If there are n+m particles involved
in the scattering (the sum of incoming and outgoing particles) and one forms k
clusters (again containing in and out) and then separates these clusters by
large spacelike translations, the S-matrix must factorize into the product of
k smaller cluster S-matrices, each describing the scattering associated with
one cluster. For the simplest case of two clusters
\begin{equation}
\lim_{a\rightarrow\infty}\left\langle g_{1}^{a},..,g_{m}\left\vert
S\right\vert f_{1}^{a},..,f_{n}\right\rangle =\left\langle g_{1},..\left\vert
S\right\vert f_{1},..\right\rangle \cdot\left\langle ..g_{m}\left\vert
S\right\vert ..f_{n}\right\rangle
\end{equation}
where the first factor contains all the a-translated wave packets, i.e. the
particles in the first cluster, and the second factor contains the remaining
wave packets. In massive theories the cluster factorization is rapidly
attained. This asymptotic factorization property is usually written in
momentum space as
\begin{equation}
\left\langle q_{1},..,q_{m}\left\vert S\right\vert p_{1},..,p_{n}\right\rangle
=\delta\text{-contrib.}+\text{products of lower }\delta\text{-contrib.}
\end{equation}
i.e. the S-matrix contains besides the \textit{connected} contribution the
disconnected parts which consists of products of connected amplitudes
referring to processes with a lesser number of particles. The connected parts
have the correct smoothness properties as to make the formulas meaningful.
For timelike separated clusters the fall-off properties for large cluster
separations are much weaker. In fact there are inverse power law corrections
in the asymptotic timelike cluster distance. With the correct $i\varepsilon$
prescription in momentum space they define what is referred to as
\textit{causal} re-scattering or\textit{ the causal one-particle structure};
the presence of this singularity structure prevents the presence of time-like
precursors\footnote{An example for a model which was shown \cite{Swieca} to
lead to such timelike precursors (as the result of the presence of complex
poles) was the Lee-Wick model.}.
For an explanation imagine a kinematical situation of elastic 3-particle
scattering in which the third particle enters the future cone of an
interaction region of particles 1 and 2 a long time after the 1-2 interaction
happened, and then scatters with the outgoing first particle leaving particle
2 undisturbed. In the limit of \textit{infinite timelike separation} the
connecting trajectory which describes the path of particle 1 between the 1-2
interaction region and the later 1-3 interaction region must coalesce in the
limit with a causal propagator, i.e. asymptotically this 3-particle scattering
must contain a singular Feynman propagator connecting two 2-particle
scattering processes.
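In formulas, the statement is that the line connecting the two scattering
regions must asymptotically carry the Feynman propagator with its
characteristic $i\varepsilon$ prescription (standard form, recalled here for
orientation),
\begin{equation}
\Delta_{F}(p)=\frac{i}{p^{2}-m^{2}+i\varepsilon},
\end{equation}
whose pole position enforces that the re-scattered signal arrives after, and
never before, the first interaction; displacing the poles into the complex
plane, as in the Lee-Wick model, is precisely what generates timelike
precursors.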
Whereas the cluster factorization of a Heisenberg S-matrix Ansatz is a trivial
consequence of imposing the connectedness property on the coefficient
functions of the phase operator $\eta$, it is not possible to satisfy the
causal one-particle structure with a finite number of terms in $\eta;$ in fact
no pure S-matrix scheme has ever been devised which secures the validity of
the causal one-particle structure in the presence of unitarity.
At this point the weakness of a pure S-matrix approach as advocated by
Heisenberg becomes exposed; although one can formulate all three requirements
solely within a particle setting, one lacks an implementing formalism. The
phase matrix $\eta$ which linearizes unitarity unfortunately complicates those
properties which were linear in terms of $S.~$To construct solutions by
"tinkering" for objects which besides linear properties also obey unitarity
has throughout the history of pure S-matrix attempts never led anywhere.
It is off-shell QFT and its asymptotic timelike convergence properties, better
known as scattering theory, which saves us from spending the rest of our days
with S-matrix tinkering. The QFT correlation functions are the natural arena
for implementing causality properties; the observables are Hermitian and not
unitary and the building up of S-matrix unitarity is part of the asymptotic
convergence whose existence is guaranteed by the properties of the correlations.
This problem of causal re-scattering in a Heisenberg S-matrix setting, and
more generally in any pure S-matrix formulation, was what finally convinced
Stueckelberg \cite{Stue} that a pure S-matrix approach is not feasible.
The S-matrix is without doubt the most important observable concept in
particle physics, but it should remain the "crown" of the theory and not its
foundation nor its principal computational tool. This was at least the gist of
Stueckelberg's critique of Heisenberg's program when he pointed out that to
reconcile macro-causality with unitarity "by hand" (i.e. without an off-shell
setting which naturally unites these seemingly ill-fitting on-shell concepts)
one runs into insoluble problems.
Interestingly enough, Stueckelberg then combined his idea of the causal one
particle structure with postulating pointlike interaction vertices leaving out
unitarity and in this way came to Feynman rules several years before Feynman,
but without knowing that he arrived at the perturbative rules of a QFT. For
showing that this prescription leads to on-shell unitarity, at least on a
perturbative level, he lacked the elegance of the formalism of QFT in which
the on-shell unitarity (and all the other properties of S) is derived from
simpler properties of correlation functions.
A systematic step-by-step derivation from a covariant Tomonaga setting of
QFT, including the Schwinger or Feynman formalism of renormalization, and with
particular care concerning the perturbative connection between QFT and the
S-matrix, was finally given by Dyson. It was also Dyson who raised the first
doubts about the convergence of the renormalized perturbative series.
The conceptually opaque status of perturbation theory lends importance to
purely structural derivations of particle properties and scattering data
directly from the quantum field theoretic principles. Without having
mathematically controlled models at one's disposal, structural arguments
became increasingly important. Despite all the difficulties in constructing
interacting models there was no problem defining the requirements which are
characteristic for QFT in mathematically clear terms. This "axiomatic setting"
in terms of correlation functions of products of fields (Wightman functions)
was a major achievement. Under the assumption of a mass-gap in the energy
momentum spectrum it led to the validity of (LSZ) scattering theory; in this
way it became a framework which combined particles with fields. It led to
systems of nonlinear equations for time-ordered or retarded functions whose
perturbative solutions contained those obtained from the Lagrangian
quantization formalism. One of its nonperturbative results was the derivation
of Kramers-Kronig type dispersion relations and their experimental
verification. It was later referred to as "axiomatic QFT", but at least its
original motivation was driven by the pragmatic desire to go beyond divergent
perturbative series.
The derivation of the Kramers-Kronig dispersion relations for the scattering
amplitudes in particle physics and its subsequent experimental verification is
an example of particle theory at its best; it secured the localization
properties of QFT up to present energies, and it did so in a clear, direct and
yet modest way without relying on metaphors or importing
geometric-mathematical ideas beyond those which are autonomous to QFT. In
comparison to ST it was one of particle physics' finest achievements.
All this was achieved less than a decade after Stueckelberg's criticism of a
\textit{pure} S-matrix approach and the discovery of renormalized perturbation
theory by Tomonaga, Schwinger, Feynman and Dyson and forms the backbone of the
LSZ and Haag-Ruelle scattering theory.
As indicated above, the basic simplification of the problem of macro-causality
for the S-matrix consisted in the realization that its representation as the
large time scattering limit \textit{defuses the rather intractable nonlinear
problem} \textit{of implementing macro-causality in the presence of unitarity}
by delegating it to simpler linear (off-shell) properties for correlation
functions. The path from local observables to the S-matrix is generally not
invertible. In a QFT in which all formfactors (matrix elements between bra in-
and ket out-states) including the S-matrix (the formfactor of the identity
operator) fulfill the crossing property, the inversion turns out to be
unique\footnote{Such inverse scattering problems show very clearly the
conceptual advantage of formulating QFT in terms of spacetime-indexed nets of
algebras rather than in terms of pointlike field coordinatizations of the
Lagrangian quantization. The crossing symmetric S-matrix is not capable of
highlighting individual field coordinatizations; it only fixes the local net.}
\cite{unique}. Within the family of two-dim. factorizing S-matrices the
existence of the associated QFT can actually be proven \cite{Lech}. Whether
the general framework of QFT can also lead to a nonperturbative classification
and construction of higher dimensional QFT remains to be seen.
There is another quantum mechanical particle physics setting in which a
Poincar\'{e} invariant unitary macro-causal S-matrix arises through scattering
theory in the large time asymptotic limit: \textit{Direct Particle
Interaction} (DPI). It forgoes micro-causality and fields and only retains
Poincar\'{e} covariance and macro-causality. It is certainly more
phenomenological than QFT since it contains interaction potentials instead of
coupling strengths.
The reason why it is mentioned here (even though we are not advocating its use
outside medium energy pion-nucleon physics) is that \textit{its very
existence} not only \textit{removes some prejudices and incorrect folklore}
(including the belief that relativistic particle interactions are necessarily
QFTs or that a clustering S-matrix can only arise from a QFT setting),
but it also indicates what has to be added/changed in order to arrive from
particle interactions to a full QFT setting.
Relativistic QM of particles is based on the Born-Newton-Wigner
localization\footnote{This terminology refers to the Born probability
associated with the relativistic wave function. As pointed out by Newton and
Wigner, the relativistic normalization leads to a change as compared to the
nonrelativistic Schroedinger QM.}, whereas the causal localization of QFT,
which incorporates the finiteness of the propagation speed, is related to the
Poincar\'{e} representation theory via modular theory (next section). The
B-N-W localization of wave packets is sufficient for recovering the forward
lightcone restriction for 4-momenta associated with events which are separated
by large time-like distances. Although this suffices to obtain a Poincar\'{e}
invariant macro-causal S-matrix, it fails to secure the existence of local
observables and vacuum polarization. For a presentation of the differences and
their profound consequences see \cite{interface}.
This DPI scheme introduces interactions between particles within a
multiparticle Wigner representation-theoretical setting by generalizing the
Bakamjian-Thomas (B-T) two-particle interacting Poincar\'{e} generators
\cite{Co-Po}. But whereas in the nonrelativistic QM\ the additivity of the
interaction potentials trivializes the problem of cluster factorization, there
is now no such easy connection between the modification of the n-particle
Poincar\'{e} generators and the cluster properties of the interactions.
Nevertheless, by using the notion of \textit{scattering equivalences} one can
arrive at a cluster factorization formula for the interacting Poincar\'{e}
generators and the S-matrix \cite{Co-Po}\cite{interface}. A scattering
equivalence consists in a unitary transformation which changes the
representation of the Poincar\'{e} generators but maintains the S-matrix. In
the Coester-Polyzou DPI scheme the iteratively defined (according to particle
number n) Poincar\'{e} generators lack the large distance additivity
associated with clustering, but a scattering equivalence transformation
rectifies this situation.
One starts with a B-T two-particle interaction and computes the 2-particle
Moeller operator and the associated S-matrix as a large time limit of
propagation operators. As in the nonrelativistic case the two-particle cluster
property is satisfied for short range two particle interactions. For 3 and
more particles the construction of cluster factorizing Poincar\'{e} generators
and S-matrices requires the iterative application of scattering equivalences.
The so constructed 3-particle S-matrix clusters with respect to the 2 particle
S-matrix in the previous step. But in contrast with the nonrelativistic
situation it also contains a 3-particle connected part which vanishes if any
one of the particles is removed to spacelike infinity, and there is no natural
restriction to only two-particle interactions: in other words the occurrence
of direct higher particle induced interactions cannot be prevented in any
natural way.
As a result of the use of scattering equivalences in order to achieve
clustering, there is no natural way to encode such multiparticle theories into
a second quantization Fock formalism. They are basically relativistic S-matrix
theories because their only truly covariant object is the Poincar\'{e}
invariant S-matrix. In particular the DPI setting does not lead to covariant
formfactors. In the original formulation of DPI the scattering was purely
elastic, but later it was shown that an extension with particle creation
channels is possible. Hence the characteristic difference of DPI to QFT is not
the presence of creation/annihilation channels in scattering theory (since
those can be incorporated "by hand") but rather the inexorable presence of
interaction-induced infinite vacuum polarization clouds in QFT.
Needless to add, such a scheme is purely phenomenological since the
interactions are not given in terms of coupling constants but rather coupling
functions (interaction potentials) \cite{Co-Po}. An S-matrix with all the
above properties fulfills the requirements of a conjecture by Weinberg
\cite{W} although it does not lead to a QFT. If one adds the crossing property
(which however has no implementation in a DPI setting), one can prove the
uniqueness of the inverse scattering problem, but the existence of a QFT
remains open \cite{unique}.
\textit{QFT and DPI are the only known settings in which a unitary,
Poincar\'{e} invariant and macro-causal S-matrix can be derived} and which also have
been reasonably well understood from a conceptual/mathematical viewpoint. For
DPI the mathematical existence of models and their construction is handled in
terms of well-known functional analysis concepts as in ordinary QM. In case of
QFT this is much more difficult in view of the fact that the perturbative
series is divergent and the sometimes provable Borel resummability does not by
itself establish the existence. Therefore it is encouraging that its most
intrinsic (field coordinatization-independent) formulation in terms of
spacetime localized operator algebras has led to a nonperturbative existence
proof for a special family of interacting two-dimensional factorizable models.
Hopefully this will be the beginning of a new nonperturbative understanding
which in the end could be a realization of the old dream of an intrinsic
construction of models of QFT without the quantization "crutches".
The main aim of this article is to put forward arguments showing that string
theory is not what most people think it is, namely a theory of an infinite
collection of particles whose mass/spin spectrum originates from a string
which vibrates in spacetime. The idea that it generalizes the pointlike
localized fields of QFT is a metaphor based on the analogy with QM where such
a spectrum is associated with a vibrating string. But the intrinsic
localization concept in QFT which is different from the non-covariant
Born-Newton-Wigner localization forbids such an interpretation. Since
localization is a notoriously difficult issue which has led to many
misunderstandings, the discussion of localization of the objects of string
theory requires careful preparation. This will be the main theme of the next section.
\section{On-shell crossing and thermal properties from causal localization}
In order to attain a solid vantage point for a critique of string theory, it
is helpful to recall the issue of localization which constitutes the basis for
the formulation and \textit{interpretation} of local quantum physics. The
easiest access with the least amount of previous knowledge is through the
Wigner one-particle theory. Wigner discovered \cite{Wig} that irreducible
positive energy ray representations of the Poincar\'{e} group come in three
families: massive particles with half-integer spin, zero mass halfinteger
helicity representations and zero mass "infinite spin" representations. For
brevity we will sometimes refer to these families using numbers 1,2,3. Whereas
the first and the third family are rather large because their Casimir
invariants have a continuous range \footnote{Whereas for the massive family
this is the value of the mass operator, the continuous value in case of the
infinite spin family is the Casimir eigenvalue of the faithfully represented
Euclidean group $E(2)$ (the little group of a lightlike vector).}, the finite
helicity family has a countable cardinality labeled by the halfinteger
helicities. All particles observed up to the present are in the first two
families. The fact that no objects have been observed which fit into the third
family should not mislead us into prematurely dismissing these positive energy
representations. Their physical properties are somewhat unusual, and the
presence of apparently strange astrophysical dark matter of largely unknown
properties \cite{invisible} advises caution against such a dismissal. In the
present context these objects mainly serve the purpose
to explain what \textit{indecomposable string-like localization} means.
The three families have quite different causal localization properties. Let us
first look at the one with the best (sharpest) localization which is the
representation family of massive particles. For pedagogical simplicity take
the Wigner representation of a scalar particle with the representation space
\begin{align}
& H_{Wig}=\left\{ \psi(p)|\int\left\vert \psi(p)\right\vert ^{2}\frac
{d^{3}p}{2p^{0}}<\infty\right\} \\
& \left( \mathfrak{u}_{Wig}(a,\Lambda)\psi\right) (p)=e^{ipa}\psi
(\Lambda^{-1}p)\nonumber
\end{align}
Now define a subspace which, as we will see later on, consists of wave
functions localized in a wedge. Take the standard $t-x$ wedge $W_{0}%
=(x>\left\vert t\right\vert ,~y,z$ arbitrary) and use the $t-x$ Lorentz boost
$\Lambda_{x-t}(\chi)\equiv\Lambda_{W_{0}}(\chi)$%
\begin{equation}
\Lambda_{W_{0}}(\chi):\left(
\begin{array}
[c]{c}%
t\\
x
\end{array}
\right) \rightarrow\left(
\begin{array}
[c]{cc}%
\cosh\chi & -\sinh\chi\\
-\sinh\chi & \cosh\chi
\end{array}
\right) \left(
\begin{array}
[c]{c}%
t\\
x
\end{array}
\right)
\end{equation}
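As a quick consistency check (a sketch, writing the two boosted coordinates as $(t,x)$ as in the definition of $W_{0}$), the boost indeed maps the wedge onto itself:
\begin{align*}
x^{\prime}\pm t^{\prime} & =e^{\mp\chi}\left( x\pm t\right) >0\quad
\text{for }x>\left\vert t\right\vert \\
x^{\prime2}-t^{\prime2} & =(x^{\prime}-t^{\prime})(x^{\prime}+t^{\prime
})=x^{2}-t^{2}
\end{align*}
i.e. the lightcone coordinates $x\pm t$ are merely rescaled by $e^{\mp\chi},$ so that the defining inequality $x>\left\vert t\right\vert $ is preserved; $\Lambda_{W_{0}}(\chi)$ is precisely the one-parameter Lorentz subgroup which leaves $W_{0}$ invariant.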
which acts on $H_{Wig}$ as a unitary group of operators $\mathfrak{u}%
(\chi)\equiv\mathfrak{u}(0,\Lambda_{x-t}(\chi)),$ and the $x$-$t$ reflection
$j:(x,t)\rightarrow(-x,-t)$ which, since it involves time reflection, is
implemented on Wigner wave functions by an anti-unitary operator
$\mathfrak{u}(j).$ One then forms the unbounded\footnote{The unboundedness of
the $\mathfrak{s}$ involution is of crucial importance in the encoding of
geometry into domain properties.} \textquotedblleft analytic
continuation\textquotedblright\ of $U_{Wig}(\chi)$ to imaginary rapidity,
which leads to unbounded positive operators. Using a notation which harmonizes
with that of modular theory in mathematics \cite{Su}, we define the following
operators in $H_{Wig}$
\begin{align}
& \delta^{it}=U_{Wig}(\chi=-2\pi t)\equiv e^{-2\pi itK}\label{pol}\\
& \mathfrak{s}=\mathfrak{j}\delta^{\frac{1}{2}},~\mathfrak{j}=U_{Wig}%
(j),~\delta=\delta^{it}|_{t=-i}\nonumber\\
& ~\left( \mathfrak{s}\psi\right) (p)=\psi(-p)^{\ast}\nonumber
\end{align}
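The involutive nature of $\mathfrak{s}$ can be checked in a few lines (a sketch; it uses only the geometric fact that the reflection $j$ commutes with the $W_{0}$-preserving boost, together with the anti-linearity of $\mathfrak{j}$):
\begin{align*}
\mathfrak{j}\delta^{it}\mathfrak{j} & =\delta^{it}\Rightarrow\mathfrak{j}%
K\mathfrak{j}=-K\Rightarrow\mathfrak{j}\delta^{\frac{1}{2}}\mathfrak{j}%
=\delta^{-\frac{1}{2}}\\
\mathfrak{s}^{2} & =\mathfrak{j}\delta^{\frac{1}{2}}\mathfrak{j}\delta
^{\frac{1}{2}}=\delta^{-\frac{1}{2}}\delta^{\frac{1}{2}}=1~~on~dom\delta
^{\frac{1}{2}}
\end{align*}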
Since the anti-unitary operator $\mathfrak{j}$ is bounded, the domain of
$\mathfrak{s}$ consists of all vectors which are in the domain of
$\delta^{\frac{1}{2}}.$ In other words, the domain is completely determined
in terms of the Wigner representation theory of the connected part of the
Poincar\'{e} group. In order to highlight the relation between the geometry of
the Poincar\'{e} group and the causal notion of localization, it is helpful to
introduce the following real subspace of $H_{Wig}$ (the closure is taken with
respect to real linear combinations).%
\begin{align}
\mathfrak{K} & =\overline{\left\{ \psi|~\mathfrak{s}\psi=\psi\right\}
}\label{K}\\
dom\mathfrak{s~}\mathfrak{=K} & +i\mathfrak{K},~\overline{\mathfrak{K}%
+i\mathfrak{K}}=H_{Wig},\mathfrak{K}\cap i\mathfrak{K}=0\nonumber
\end{align}
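A one-line check (a sketch, using only the anti-linearity of $\mathfrak{s}$) shows why the last relation in (\ref{K}) holds: for $\psi\in\mathfrak{K}\cap i\mathfrak{K}$ one has $\mathfrak{s}\psi=\psi$ as well as $\psi=i\phi$ with $\mathfrak{s}\phi=\phi,$ hence
\begin{equation*}
\psi=\mathfrak{s}\psi=\mathfrak{s}(i\phi)=-i\mathfrak{s}\phi=-i\phi
=-\psi\Rightarrow\psi=0
\end{equation*}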
The reader who is not familiar with modular theory will notice that these
modular concepts are somewhat unusual and very specific to the important
physical concept of causal localization; despite their physical significance
they have not entered the particle physics literature. One usually thinks that
an \textit{unbounded} anti-linear involutive ($\mathfrak{s}^{2}=1$ on
$dom\mathfrak{s}$) operator which has two real eigenspaces associated to the
eigenvalues $\pm1$ is an impossibility, but its ample existence is the essence
of causal localization in QFT. The conspicuous absence of such operators from
that literature is in surprising contrast with their pivotal importance
for an intrinsic understanding of localization in local quantum physics (LQP).
The second line of (\ref{K}) defines a property of an abstract real subspace
which is called \textit{standardness}, and the existence of such a subspace is
synonymous with the existence of an abstract $\mathfrak{s}~$operator. This
property is the one-particle version of the Reeh-Schlieder property of QFT
\cite{Ha} which is also sometimes referred to (not entirely correctly) as the
"state-field relation".
The important analytic characterization of modular wedge localization, in the
sense of membership in the dense subspace $dom\mathfrak{s},$ is the strip
analyticity of the wave function in the momentum space rapidity $\chi,$ where
$p=(m_{\perp}\cosh\chi,p_{\perp},m_{\perp}\sinh\chi)$ with $m_{\perp}%
=\sqrt{m^{2}+p_{\perp}^{2}}.$ The requirement that such a wave function must
be in the domain of the positive operator $\delta^{\frac{1}{2}}$ is equivalent
to its analyticity in the strip $0<\mathrm{Im}\,\chi<\pi,$ and the action of
$\mathfrak{s}$ (\ref{pol}) relates the particle wave function on the boundary
of the strip to the antiparticle wave function on the negative mass shell.
This relation of particle to antiparticle wave functions is the conceptual
germ from which the most fundamental properties of QFT, such as crossing, the
existence of antiparticles, the TCP theorem, the spin-statistics connection
and the thermal manifestation of localization, originate. Apart from special
cases this fully quantum localization concept cannot be reduced to support
properties of classical test functions.
More precisely the modular localization structure of the Wigner representation
theory "magically" preempts these properties of a full QFT already within the
one-particle sector; to be more specific: these one-particle properties imply
the corresponding QFT properties via time-dependent scattering theory
\cite{Mu}.
Hence a modification of those fundamental properties, such as the replacement
of crossing by Veneziano duality, changes the principles of local quantum
physics (i.e. the result of more than half a century of successful particle
physics research) in a conceptually unsecured way. It was the first step into
the 40 year reign of metaphors in particle physics which culminated in the
wasteful TOE.
The mentioned one-particle thermal aspects follow directly from (\ref{pol})
by converting the dense set $dom\mathfrak{s}$ via the graph norm of
$\mathfrak{s}$ into a Hilbert space in its own right $H_{G}\subset H_{Wig}$%
\begin{align}
\left\langle \psi\left\vert 1+\delta\right\vert \psi\right\rangle &
=\left\langle \psi|\psi\right\rangle _{G}\label{an}\\
\left\langle \psi|\psi\right\rangle |_{dom\mathfrak{s}} & =\int\frac{d^{3}%
p}{2p_{0}}\frac{1}{1+e^{2\pi K}}\left\vert \psi_{G}(p)\right\vert ^{2}%
,~\psi_{G}\in H_{G}\nonumber
\end{align}
This formula represents the restriction of the norm to the strip analytic
functions in terms of Hilbert space vectors $\psi_{G}$ which are free of
analyticity restrictions. The result is the formula for a one point expectation
value in a thermal KMS state with respect to the Lorentz boost Hamiltonian $K$
at inverse temperature $2\pi$. As we will see in a moment, the modular relation
(\ref{pol})\ in the Wigner one-particle setting is the pre-stage for the
crossing relation as well as an associated KMS property in an interacting
QFT\footnote{The thermal manifestation of localization is the strongest
separation between QM and QFT \cite{interface}.}.
Before we get to that point, we first need to generalize the above derivation
to all positive energy representations and then explain how to get to the
sub-wedge (spacelike cones, double cones) modular localization spaces. For the
generalization to all positive energy representations we refer the reader to
\cite{B-G-L}\cite{MSY}; but since the sharpening of localization is very
important for our critique of string theory in the next section, it is helpful
to recall some of the points in those papers.
In the first step one constructs the "net" of wedge-localized real subspaces
$\left\{ \mathfrak{K}_{W}\right\} _{W\in\mathcal{W}}.$ This follows from
covariance applied to the reference space $\mathfrak{K}_{W_{0}}.$ In the second
step one aims at the definition of nets associated with tighter localization
regions via the formation of spatial intersections
\begin{equation}
\mathfrak{K}(\mathcal{O})\equiv\cap_{W\supset\mathcal{O}}\mathfrak{K}_{W}
\label{inter}%
\end{equation}
Note that the causally complete nature of the region is preserved under these
intersections in accordance with the causal propagation principle which
attributes physical significance to the causal closures of regions (this is
the reason for the appearance of noncompact or compact conic regions in local
quantum physics). In this way localization properties have been defined in an
intrinsic way i.e. separate from support properties of classical test functions.
The crucial question is how "tight" can one localize without running into the
triviality property $\mathfrak{K}(\mathcal{O})=0.$ The answer is quite
surprising: For all positive energy representations one can go down from
wedges to spacelike cones $\mathcal{O=C}~$of arbitrarily narrow size
\cite{B-G-L}
\begin{align}
& \mathfrak{K}(\mathcal{C})~is\text{ }standard\\
& \mathcal{C}=\left\{ x+\lambda\mathcal{D}\right\} _{\lambda>0}\nonumber
\end{align}
i.e. the non-compact spacelike cone with apex $x$ arises as the union of the
scaled translates $x+\lambda\mathcal{D},~\lambda>0,$ of a compact spacelike
double cone $\mathcal{D}$ which touches the origin. Since there are three
families of positive energy Wigner representations\footnote{In d=1+2 there are
also plektonic/anyonic representations which will not be considered here.} one
can ask this question individually for each family.
The family with the most perfect localizability property is the massive one,
because in that case each $\mathfrak{K}(\mathcal{D})$ for arbitrarily small
double cones is standard. On the opposite side is the third (massless infinite
spin) family for which the localization in arbitrarily thin spacelike cones
(in the limit semiinfinite strings) cannot be improved \cite{Yn}. The second
family (massless finite helicity) stands in the middle in the sense that the
$\mathfrak{K}(\mathcal{D})$ spaces are standard, but the useful
"potentials" (the vector potential in case of s=1, symmetric tensors for s=2)
are only objects in Wigner representation space if one permits spacelike cone
localized objects i.e. the covariant vector potentials cannot be associated
with compact spacetime regions.
In fact there exists a completely intrinsic argument on the level of subspaces
associated with field strengths which attributes a representation theoretical
property to these "stringlike" potentials. It turns out that the "duality"
relation (Haag duality)
\begin{equation}
\mathfrak{K}(\mathcal{O})=\mathfrak{K}(\mathcal{O}^{\prime})^{\prime}%
\end{equation}
in massive representations holds for all spacetime regions including
non-simply connected regions. Here the dash on $\mathcal{O}$ denotes the
causal complement, whereas $\mathfrak{K}(\mathcal{O})^{\prime}$ is the
symplectic complement of $\mathfrak{K}(\mathcal{O})$ in the sense of the
symplectic form defined by the imaginary part\footnote{For halfinteger spin
there is a slight change.} of the inner product in $H_{Wig}.$ This ceases to
be the case for zero mass finite helicity representations, where there is a
\textit{duality defect} when $\mathcal{O}$ is multiply connected (example:
the causal completion of the inside of a torus at t=0)$.$ In that case one
finds
\begin{equation}
\mathfrak{K}(\mathcal{O})\subsetneqq\mathfrak{K}(\mathcal{O}^{\prime}%
)^{\prime}%
\end{equation}
which can be shown to be related to the string-like localization of potentials
\cite{MSY} i.e. this "defect" is the intrinsic indicator of the presence of
stringlike potentials.
These properties of localized Wigner subspaces can easily be converted to the
corresponding properties of a system (net) of spacetime indexed subalgebras of
the Weyl algebra or (for halfinteger spin) the CAR algebra. Since the relation
between subspaces and subalgebras is functorial, all spatial properties have
their operator algebraic counterpart and one obtains (for simplicity we
restrict to the bosonic case)
\begin{align}
\mathcal{A(O}) & \equiv alg\left\{ e^{i(a^{\ast}(\psi)+h.c.)}|~\psi
\in\mathfrak{K}(\mathcal{O})\subset H_{Wig}\right\} \label{mod}\\
SA\left\vert 0\right\rangle & =A^{\ast}\left\vert 0\right\rangle
,~A\in\mathcal{A(O}),~S=J\Delta^{\frac{1}{2}}\nonumber\\
\Delta^{it}\mathcal{A(O})\Delta^{-it} & =\mathcal{A(O}),~J\mathcal{A(O}%
)J=\mathcal{A(O})^{\prime}=\mathcal{A(O}^{\prime})\nonumber
\end{align}
where the operator-algebraic modular objects are the functorial images of the
spatial ones $S=\Gamma(\mathfrak{s}),~\Delta=\Gamma(\delta),~J=\Gamma
(\mathfrak{j}).$ It is important not to misread the Weyl algebra generator
in the first line as an exponential of a smeared field; it is rather an
exponential of a (momentum space) Wigner creation/annihilation operator
integrated with Wigner wave functions from $\mathfrak{K}(\mathcal{O})\subset
H_{Wig}$ i.e. the functor uses directly the modular localization in Wigner
space and does not rely on the knowledge of pointlike quantum fields. Rather
it represents an intrinsic functorial construction of local algebras, whereas
the infinite family of singular covariant fields results from the
"coordinatization" of this local net of algebras. The antiunitary involution
$J$ not only maps the
algebra into its commutant (a general property of Tomita-Takesaki modular
theory) but, as a result of Haag duality, also brings the causal commutativity
into the game. Modular theory in the general operator algebra setting leads to
the action of the modular group $Ad\Delta^{it}$ which leaves the algebra
invariant and the action of the antiunitary involution $AdJ$ which transforms
the algebra into its Hilbert space commutant; both operators result from the
polar decomposition of the so-called (unbounded) Tomita involution $S$. The
field generators of this net of algebras are of course the well-known singular
covariant free fields whose systematic group theoretical construction directly
from the Wigner representation theory (except the massless infinite spin
representations) can be looked up in the first volume of \cite{Wei}. Far from
being a property which can be easily generalized or disposed of, causal
localization is an inexorable aspect of a profound mathematical theory whose
consequential application leads to the recognition that inner- and spacetime-
symmetries are consequences of the modular positioning of algebras \cite{Ha}%
\cite{interface}.
For a bona fide string-localization which contrasts with the string theory
metaphor, the third Wigner representation family is particularly useful. The
history of its unravelling is an interesting illustration of the intricacies
of localization \cite{interface}. This class of representations has
\textit{no pointlike generators}; in fact their compactly localized subspaces
are trivial $\mathfrak{K}(\mathcal{O})=0,$ whereas the spacelike cone
localized subspaces $\mathfrak{K}(\mathcal{C})$ are standard i.e. $\overline
{\mathfrak{K}(\mathcal{C})+i\mathfrak{K}(\mathcal{C})}=H_{Wig}.$ In contrast
to the finite helicity representations, for which only certain tensor fields
(for s=1 the vector potentials) are stringlike localized, the QFT of the
infinite spin representations has no pointlike generators at all and there are
strong indications that there are also no compactly localized subobjects.
To make contact with the standard field formalism one looks at the
(necessarily singular) generators of these algebras. For the first two
families these are pointlike covariant fields $\Psi(x)$ apart from the finite
helicity potentials which, similar to the generators of the infinite spin
class, are described by string-localized field generators $\Psi(x,e)$ (leaving
off the tensor/spinorial indices) which depend in addition to a point x in
d-dimensional Minkowski spacetime also on a point in a d-1 dimensional de
Sitter space (the spacelike string direction) $e.$ The stringlike localization
shows up in the support properties of the commutator, for whose
vanishing it is not sufficient that the starting points $x$ and $x^{\prime}$
are spacelike separated; rather
\begin{equation}
\left[ \Psi(x,e),\Psi(x^{\prime},e^{\prime})\right] =0\text{~}%
only\text{~}for~x+\mathbb{R}_{+}e~><~x^{\prime}+\mathbb{R}_{+}e^{\prime}
\label{string}%
\end{equation}
where $><$ denotes spacelike separation of the two semiinfinite strings. This
string-localization is real, unlike that in string theory which is stringlike
in a metaphoric but not in a material spacetime sense (section 6).
The basic difference between the second (finite helicity) and the third kind
of Wigner representation is that the string localization is only required in
relations in which the vector potentials play an important role, whereas in
case of the third kind one does not expect the presence of pointlike localized
composites.
The theory also says that there is no need to introduce generators which have
a higher dimensional localization beyond point- or semiinfinite string-like.
Note that it is of course not forbidden to introduce decomposable string (and
higher) localized operators as e.g.
\begin{equation}
\int\Psi(x)f(x)d^{4}x,\text{ }supp\,f\subset tube
\end{equation}
in the limit when the thickness of the tube approaches zero. When we talk
about semiinfinite string localization without further specification we mean
indecomposable strings. These are strings which in contrast to decomposable
strings cannot be observed in a counter since any registration device would
inevitably partition the string into the parts inside and outside the counter,
which contradicts its indecomposable nature (this is of course a metaphorical
argument which is in urgent need of a more explicit and intrinsic
presentation). The string-localized generators of the Wigner infinite spin
representation do not even admit pointlike localized composites i.e. the net
of spacelike cone localized algebras has no compactly localized nontrivial
subalgebras. A milder form of string-like generation of representations
occurs for the zero mass finite helicity representation family, which
localization-wise stands in the middle between the massive representations
(which are purely point-localized) and the third kind. These representations
are fully described in terms of pointlike localized field strengths, but
already before using these representations in interactions it turns out that
the additional introduction of "potentials" is helpful. Whereas in the
interaction-free case there is a linear relation between the observable field
strength and its potential, whose inversion permits one to rewrite the latter
as one or more line integrals over the former, this feature is lost under
suitable interactions i.e. the string localized potential may become an
indecomposable string localized generator which cannot be approximated by
compactly localized observables. In this case the "visibility" of such objects
in counters with finite localizability becomes an issue.
In the presence of interactions there is no \textit{direct} algebraic access
to problems of localization from the Wigner one-particle theory. In the
Wightman setting based on correlation functions of pointlike covariant fields,
the modular theory for the wedge region has been derived a long time ago by
Bisognano and Wichmann \cite{Bi-Wi} and more recently within the more general
algebraic setting by Mund\footnote{That derivation actually uses the modular
properties of the Wigner setting which is connected via scattering theory to
the interacting wedge-localized algebras and then as explained above (via
intersection) to the modular structure of all local algebras $\mathcal{A(O}%
).$} \cite{Mu}. The resulting modular S-operator has the same property as in
(\ref{mod}) i.e. the "radial" part of the polar decomposition of the modular
involution $S$ is determined solely by the representation theory of the
Poincar\'{e} group i.e. the particle content, whereas $J$ turns out to
depend on the interaction \cite{Ann} since it is related to the scattering
operator $S_{scat}$%
\begin{equation}
J=J_{0}S_{scat} \label{J}%
\end{equation}
which in this way becomes a relative modular invariant between the interacting
and the free wedge algebra\footnote{$J_{0}$ is (apart from a $\pi$-rotation
around the z-axis of the t-z wedge) the TCP operator of a free theory and $J$
is the same object in the presence of an interaction.}. There is no change in
the construction of the $\mathcal{A(O})$ by intersecting the wedge algebras
$\mathcal{A(}W).$ However in the presence of interactions the functorial
relation to the Wigner theory gets lost. In fact no subwedge-localized algebra
contains any
associated PFG (polarization-free-generator) i.e. an operator which creates a
pure one particle state from the vacuum (without an additional vacuum
polarization cloud consisting of infinitely many particle-antiparticle pairs).
Since the crossing property played a crucial role in S-matrix approaches to
particle physics, it is instructive to spend some time for its appropriate
formulation and on its conceptual content. Its most general formulation is
given in terms of formfactors which are products of W-localized operators
$A_{i}\in\mathcal{A}(W)$\footnote{Since all compactly localized operators can
be translated into a common W and since the spacetime translations act on in
and out states in a completely known way, this is hardly any genuine
restriction.} between incoming ket and outgoing bra states
\begin{align}
& ^{out}\left\langle p_{k+1},p_{k+2},...p_{n}\left\vert A\right\vert
p_{1},p_{2},..p_{k}\right\rangle ^{in}=\label{cross}\\
& ^{out}\left\langle -\bar{p}_{k},p_{k+1},p_{k+2},...p_{n}\left\vert
A\right\vert p_{1},p_{2},..p_{k-1}\right\rangle ^{in},~A=\Pi_{l}A_{l}\nonumber
\end{align}
where the crossed particle is an outgoing antiparticle relative to the
original incoming particle. Hence all formfactors of $A$ with the same total
particle number n are related to one "masterfunction" by analytic
continuation through the complex mass shell from the physical forward shell to
the unphysical backward part. The predictive power of crossing is thus
inexorably connected with the concept of analytic continuation i.e. it is
primarily of a structural-conceptual kind. It is convenient to take as the
master reference formfactor the vacuum polarization components of $A\Omega$
i.e. the infinite system of components of the infinite vacuum polarization
cloud of $A\Omega.$ Needless to add, the crossing relation may be empty in
case the operator $A~$cannot absorb the energy momentum difference
between the original value and its continued negative backward mass shell
value. In this setting the S-matrix arises as the special case
$A=\mathbf{1}$ i.e. an operator which cannot absorb any nontrivial energy
momentum. In this case it is not possible to use the vacuum polarization as a
reference, nor does the crossing of a single momentum in the 2-particle
elastic amplitude lead to a meaningful relation (but the simultaneous crossing
of two particles in the in and out configurations is meaningful).
This is also the right place to correct the picture of the QFT vacuum as a
bubbling soup which for short times, thanks to the Heisenberg uncertainty
relation between time and energy, can violate the energy momentum
conservation\footnote{The origin of these metaphors seems to be the too
literal interpretation of the momentum space Feynman rules.}. The correct
picture is
that (modular) localization in QFT costs energy-momentum i.e. in order to
split the vacuum into a product vacuum%
\begin{align}
& \omega(A_{1}A_{2})\rightarrow\omega(A_{1})\omega(A_{2}),~A_{i}%
\in\mathcal{A}(\mathcal{O}_{i})\\
& \Omega\rightarrow\Omega\otimes\Omega,~~A_{1}A_{2}\rightarrow A_{1}\otimes
A_{2}\nonumber
\end{align}
where the index 1 refers to a spacetime region $\mathcal{O}_{1}$ and 2 labels
a region $\mathcal{O}_{2}$ in the causal complement i.e. $\mathcal{O}%
_{2}\subset\mathcal{O}_{1}^{\prime}.$ Whereas the tensor factorization in QM
for disjoint regions at a fixed time is automatic, the tensor product vacuum
in QFT is a highly energetic thermal state whose energy diverges in the limit
when the closure of $\mathcal{O}_{2}$ touches $\mathcal{O}_{1}^{\prime}.$ The
difference with respect to tensor factorization can be traced back to the
conceptual difference between Born- and modular- localization. The tensor
factorization in QM leads to an information theoretical entanglement whereas
the entanglement for the QFT tensor factorization is related to thermal
properties \cite{interface}. Confusing the two is the main cause for the
"information paradox". \
The image of the bubbling vacuum shows that metaphors are not limited to
string theory. But those used in QFT do usually not lead to serious misunderstandings.
The origin of the formfactor crossing property (\ref{cross}) lies in the strip
analyticity of wedge localized states and correlation functions. For wedge
localized wave functions this was explained above (\ref{pol}, \ref{an}). For
simplicity let us limit the interacting situation to the simplest case
\begin{align}
\left\langle 0\left\vert A\right\vert p\right\rangle & =\left\langle
-\bar{p}\left\vert A\right\vert 0\right\rangle \\
\left\langle 0\left\vert AB\right\vert 0\right\rangle & =\left\langle
0\left\vert B\Delta A\right\vert 0\right\rangle \nonumber
\end{align}
where in the second line we have written the KMS property for the wedge
algebra which is a general consequence of modular operator theory and for the
special case of wedge localization agrees with Unruh's observations about
thermal aspects of Rindler localization ($\Delta^{it}=U_{W}^{boost}(\chi=-2\pi
t)$). But how to view the first relation as a consequence of the second? The
secret is that although the intersection of the space of one-particle states
with that obtained from applying compactly localized algebras to the vacuum (and
closing in the modular graph norm) is trivial, the intersection with the noncompact
wedge-localized algebra is not; it is even dense in the one-particle Hilbert
space. Once it is understood that there exists a wedge-affiliated operator $B$
which, applied to the vacuum, generates the one-particle state, one can
apply the KMS relation in the second line. The rest follows from
transporting $B$ on the left side as $B^{\ast}$ to the bra vacuum and
rewriting $B^{\ast}\left\vert 0\right\rangle $ as $SB\left\vert
0\right\rangle $ using modular operator theory and (\ref{J}). The
resulting $\Delta^{\frac{1}{2}}JB\left\vert 0\right\rangle =\Delta^{\frac
{1}{2}}J_{0}B\left\vert 0\right\rangle $ (since the $S_{scat}$ matrix acts
trivially on one-particle states) leads to the desired result\footnote{The
plane wave relation should be understood in the sense of wave packets from the
dense set of strip-analytic wave functions.} $\Delta^{\frac{1}{2}}%
J_{0}\left\vert p\right\rangle =\left\vert -p\right\rangle .$ The general form
(\ref{cross}) would follow if we could generalize the KMS relation to include
operators from the wedge localized in and out free field algebras. They share
with $\mathcal{A}(W)$ the same unitary Lorentz boost as the modular group but
their modular inversions $J$ are not equal and hence additional arguments are
required. We will leave the completion of the derivation of crossing to a
future publication \cite{M-S}.
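For the reader's convenience, the chain of steps described above may be condensed into one schematic display (a sketch only: domain questions are glossed over, $B$ is the wedge-affiliated operator with $B\left\vert 0\right\rangle =\left\vert p\right\rangle ,$ and the plane wave relations are understood in the sense of wave packets):
\begin{align*}
\left\langle 0\left\vert A\right\vert p\right\rangle  & =\left\langle
0\left\vert AB\right\vert 0\right\rangle \overset{\text{KMS}}{=}\left\langle
0\left\vert B\Delta A\right\vert 0\right\rangle ,~~~B^{\ast}\left\vert
0\right\rangle =SB\left\vert 0\right\rangle \\
\Delta^{\frac{1}{2}}JB\left\vert 0\right\rangle  & =\Delta^{\frac{1}{2}}%
J_{0}\left\vert p\right\rangle =\left\vert -\bar{p}\right\rangle
~~\Rightarrow~~\left\langle 0\left\vert A\right\vert p\right\rangle
=\left\langle -\bar{p}\left\vert A\right\vert 0\right\rangle
\end{align*}
where the second line uses (\ref{J}) together with the triviality of $S_{scat}$ on one-particle states.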
To criticize the string theory interpretation of the canonically quantized
Nambu-Goto model we do not have to go into subtle details. For such bilinear
Lagrangians (leading to linear Euler-Lagrange equations) the connection
between localization of states and locality of operators is the same as in free field
theory. In this case it is possible to pass from the "first quantized" version
directly to its "second quantization" i.e. to the N-G "string field theory".
Since the physical content consists of an infinite tower of massive particles
(with one layer of finite helicity massless representations), the only
question is whether the original classical parametrization leads to fields which
are decomposable strings or whether they are point-localized. In the first case one
could resolve the composite string in terms of a "stringy spread" of
underlying pointlike fields\footnote{A (composite) string from a Lagrangian
setting would be surprising.} whereas in the second case the string
terminology would only refer to the classical origin and lack any intrinsic
quantum meaning. The suspense will be left to the next section.
\section{A turn with grave consequences}
Although the partisans of the S-matrix bootstrap program placed the new and
important crossing property into the center of their S-matrix setting, they
failed to come up with a constructive proposal which could implement this new
requirement. Other older requirements, such as Stueckelberg's macrocausality, were
not mentioned in the bootstrap program; they were probably forgotten in the
maelstrom of time. The important question in what way (on-shell) crossing is
related to the causality principles of QFT did not receive the attention it
merits; it has no place in an ideology which set out to cleanse particle
physics from the dominance of QFT. In fact most of the efforts were focussed
on the elastic scattering amplitude, for which Mandelstam's conjecture
(concerning the validity of a certain double spectral representation) provided the
central object in terms of which crossing had a simple natural formulation.
The crossing property was verified in the Feynman perturbation theory where a
certain analytic continuation from momenta on the forward to the backward
shell changes the in/out association of the external legs of Feynman graphs
within the same perturbative order. As a result all physical channels with the
same total number of external lines are just different boundary values of one
analytic master function. Since the analytic properties of Feynman graphs are
established without much effort, it appears at first sight that the
perturbative version of crossing is an easy matter. However the condition to
perform this analytic continuation inside the complex mass shell adds some
nontrivial aspects. By looking at the S-matrix of 2-dimensional factorizing
models one can see that crossing involves a delicate interplay between
one-particle poles and the multi-particle scattering continuum \cite{Ba-Ka}.
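In the 2-dimensional factorizing models just mentioned, the two-particle S-matrix $S(\theta)$, written in the rapidity variable $\theta$, satisfies the standard crossing and unitarity relations of the formfactor program:
\begin{align*}
S(\theta)  & =S(i\pi-\theta)~~~~\text{(crossing)}\\
S(\theta)S(-\theta)  & =1~~~~~~~~~~~\text{(unitarity)}
\end{align*}
the one-particle bound state poles are located on the imaginary axis of the "physical strip" $0<\operatorname{Im}\theta<\pi,$ which is where the interplay with the scattering continuum referred to above becomes visible.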
From the motivation in the original papers it is quite evident that Veneziano
\cite{Venez} had this kind of crossing in mind when he set out to construct an
explicit implementation within the Mandelstam setting for 2-2 elastic
scattering amplitudes. But being guided by the properties of $\Gamma
$-functions he arrived at a crossing in terms of an infinite set of particles
forming a mass/spin tower. This formal version of crossing, in which the higher
particle scattering states (multiparticle "cuts") are replaced by infinitely many one-particle poles, is not consistent with the
principles underlying QFT. The "Veneziano duality" marked the beginning of a
new setting in particle physics which was defined in terms of metaphoric
recipes but for which one lacked an understanding about the intrinsic physical
meaning \footnote{The realization of duality with the help of identities
between Gamma functions is the only successful "by hand" construction within a
pure S-matrix scheme. Ironically it did not lead to a model for the crossing
property, but it created a new concept of "duality" which is not supported by
any established principle in particle physics and marks the beginning of the
metaphoric ascent into string theory.}. This point will be elaborated in the
context of its string theoretic formulation in the next section.
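For concreteness we display the explicit formula (standard, though not written out in the text above): with linear trajectories $\alpha(s)=\alpha(0)+\alpha^{\prime}s,$ Veneziano's amplitude is the Euler Beta function
\begin{align*}
A(s,t)  & =\frac{\Gamma(-\alpha(s))\Gamma(-\alpha(t))}{\Gamma(-\alpha
(s)-\alpha(t))}=\int_{0}^{1}dz~z^{-\alpha(s)-1}(1-z)^{-\alpha(t)-1}\\
& =\sum_{n=0}^{\infty}\frac{(\alpha(t)+1)(\alpha(t)+2)\cdots(\alpha
(t)+n)}{n!}\frac{1}{n-\alpha(s)}
\end{align*}
The expansion contains only an infinite sequence of s-channel one-particle poles at $\alpha(s)=n$ (with maximal spin growing with $n$), and by the manifest $s\leftrightarrow t$ symmetry of the first line the same function can be written entirely in terms of t-channel poles; this is the "duality" which replaces the multiparticle cuts of genuine crossing.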
At the time of its discovery the difference between duality and crossing did
not cause much headache since the idea of an infinite mass/spin spectrum
needed in order to implement duality was favored from a phenomenological point
of view. There is of course nothing unusual in exploring all phenomenological
consequences and leaving the more conceptual problems to times after the
phenomenological success has been secured beyond any doubt. The Regge-pole
dominance gained a lot of popularity before it was ruled out by experiments
which favored the quantum field theory of QCD as the appropriate description
of strong interactions.
Veneziano's remarkable observation about the existence of a rather simple
mathematically attractive idea, which is capable of generating an interesting
mass/spin spectrum from a suitably formulated duality requirement, continued
to attract attention even after the Regge pole idea lost its attraction.
Although not noticed at the time, the quest for a field equation for an
infinite component wave function\footnote{The motivation originated from the
successful algebraization of the hydrogen spectrum in terms of
representations of the noncompact $O(4,2)$ group. In the relativistic case
the group theoretical Ansatz did not lead to interesting results. String
theory replaces group theory by an infinite collection of oscillators which
maintain the pointlike localization (section 6).} with an infinite mass/spin
content, which remained an unfulfilled dream in previous attempts at infinite
component relativistic field equations, found a successful implementation in
Veneziano's dual model and in the subsequent string theory. The Veneziano
Ansatz did not only create a mass/spin tower. It also led to explicit analytic
expressions for amplitudes which, leaving aside problems of unitarity, had the
formal appearance of an approximation to the elastic part of a "could be" S-matrix.
Extending the search for an implementation of duality based on properties of
the Gamma function, Virasoro \cite{Vir} arrived at a model with a different and
somewhat more realistic looking particle content. The duality setting became
more complete and acquired some additional mathematical charm after it was
extended to n particles \cite{DHS}. The resulting "dual resonance model" was
the missing link from the phenomenological use of Gamma function properties to
a conceptually and mathematically attractive formulation in terms of known
concepts in chiral conformal QFT, the new idea being that Minkowski spacetime
should be envisaged as the "target space"\footnote{Note that the notion of
target space is well defined only in classical field theories (where fields
have numerical values) whereas in QFT its meaning is metaphorical.} of a
suitably defined chiral model.
It is worthwhile to look at the mathematical formulation and the associated
concepts in some detail. The conformal model which fits the dual resonance
model are the charge creating fields of a multi-component abelian chiral
current which are customarily described in the setting of bosonization
\begin{equation}
\Psi(z,p)=:e^{ip\phi(z)}:
\end{equation}
where the d-component $\phi(z)$ is the formal potential of a d-component
chiral current $j(z)=\frac{d}{dz}\phi(z),$ and $p$ is a d-component numerical
vector whose components describe (up to a shared factor) the value of the
charge which the $\Psi$ transfers. The fact that the Hilbert space for
$\Psi$ is larger than that of the current $j$ is the place where the language
of bosonization becomes somewhat metaphoric; this point is taken care of by a
proper quantum mechanical treatment of zero modes which appear in the Fourier
decomposition of $\phi(z).$ The n-point functions of the $\Psi$ define the
integrands of the scattering amplitudes of the dual resonance model; the
latter result from the former by z-integration after multiplication with
$z_{i}$ dependent factors \cite{Vec}. Hence the energy-momentum conservation
of the target theory results from the charge conservation of the conformal
source theory. Even the cluster property is preempted on the level of the
conformal current model. The verification of the causal one-particle structure
is trickier; it too can be traced back to a property of the charge
composition structure, though it is a very special property which has no direct
relevance within the logic of a current algebra.
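The mechanism by which the target space conservation law descends from the source theory can be made explicit in the textbook correlation function of these charge-creating fields; apart from normalization and zero mode conventions one has
\begin{equation*}
\left\langle \Psi(z_{1},p_{1})\cdots\Psi(z_{n},p_{n})\right\rangle
=\delta_{\sum_{i}p_{i},0}\prod_{i<j}(z_{i}-z_{j})^{p_{i}\cdot p_{j}}
\end{equation*}
the charge conservation factor $\delta_{\sum p_{i},0}$ turns into the energy-momentum conservation of the target theory, and the subsequent $z$-integrations with the $z_{i}$ dependent factors reproduce the dual resonance amplitudes (for $n=4$ essentially the Beta function).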
The crucial additional step which secured the survival of the mathematical
formalism was the streamlining it received by connecting it to the
\textit{Lagrangian of a Nambu-Goto string}. It is this step which facilitated
the formulation of prescriptions for higher order interactions; but these
recipes are, unlike in the Lagrangian setting of QFT, not consequences of a
Lagrangian formulation. Since the prescription for interactions is on-shell
i.e. directly formulated in terms of the particle/spin tower and therefore
should be interpreted as a contribution to the S-matrix, it is difficult to
judge whether it is physically acceptable since physical properties for the
S-matrix are hard to formulate and even harder to check. One could of course
try to verify the macrocausality properties. This has not been done; but
assuming that it works, one can ask further questions about unitarity.
There are conjectures how to unitarize amplitudes with the help of geometric
concepts involving Riemann surfaces of higher genus, but again there is no
tangible result which comes close to unitarity checks in the Feynman setting.
The same holds for the question whether the duality on the one-particle level
persists to higher orders.
One of its most curious aspects is that string theory only exists in the
positive energy setting in d=10 in the presence of supersymmetry. The origin
of this severe restriction is the identification of the physical spacetime
with the target dimension of a multicomponent chiral current model. It turns
out that this is only possible with 10 target dimensions. However even this
weird looking possibility represents a pyrrhic victory for the conformal
embedding construction: instead of a stringlike object one obtains an infinite
component pointlike generated wave function. This wave function is not
string-localized since all the degrees of freedom of the chiral theory have
become "inner" i.e. live over one localization point.
Truly spacetime localized strings exist in any dimension $\geq$ 3, so what
does this distinction of 10 dimensions mean in physical terms? The resolution
of this paradoxical situation is that the objects associated with the
embedding of chiral theories or from the quantization of the Nambu-Goto
Lagrangian are not strings in the spacetime sense as their protagonists
imagine and present them; rather they describe nontrivial solutions of a
slightly older problem of constructing infinite component wave
functions/fields which requires that special spacetime dimension in order to
exist at all. The oscillators from the Fourier decomposition of a string
indeed play an important role in setting the mass/spin spectrum, but they have
nothing to do with a stringlike localization in spacetime but "live" in a
Hilbert space which is attached to a point in an analogous sense as the spin
components are attached to a pointlike localized field or wave function.
To the extent that one calls degrees of freedom which do not affect the
localization of objects of local quantum physics "inner", the harmonic
oscillator variables are indeed inner, in contrast to the variable $e$ which
characterises the string direction. It is however interesting to note that
these inner oscillator variables with their infinite number of degrees of freedom accomplish
the construction of a nontrivial 10-dimensional infinite component wave
function where previous attempts based on noncompact groups (in analogy to the
dynamical $O(4,2)~$group which describes the hydrogen spectrum) failed (in any
dimension). This raises the curious question why the infinite component
program fell out of favor, but not the metaphoric string theory program, or to
phrase it in a more specific way: why is the formulation of interactions in a
metaphoric setting more acceptable than in an intrinsic formulation? A
somewhat ironic answer may be that only in the metaphoric setting do the
graphical tube rules for the transition amplitudes have some intuitive appeal.
The proofs of the infinite component wave function content of the Nambu-Goto
model will be presented in the next section.
In the aforementioned conformal field theoretic description based on the use
of abelian currents, the reason why most spacetime dimensions are excluded has
some plausibility. In that setting the spacetime symmetry is connected to the
inner symmetry of abelian charges. Inner symmetries are from experience
related to compact groups; for QFT of at least 3 spacetime dimensions this can
be rigorously established \cite{Ha}. On this basis one would conclude that
there is no possibility of having vector-valued conformal fields on which a
Lorentz transformation can act. This argument is for several reasons too
naive\footnote{One reason being that there is no such theorem in low
dimensional theories.}. But as the existence of the N-G model shows, it is
only possible under extremely special circumstances.
The dual resonance model and its extension to string theory was,
for the same reasons as the Regge pole model, soon contradicted by
experiments; despite all its curious and in some cases surprising properties,
nature, as far as is known, had no use for the scattering amplitudes coming
from string S-matrix prescriptions. Like the Regge pole-model it could have
ended in that dustbin of history reserved for curious but not useful observations.
The sociological situation changed drastically when, as a result of containing
all possible spins, some physicists had the audacity to propose superstring
theory as a theory of everything (TOE). By identifying the quantum aspects of
gravity with the zero mass s=2 component in the mass/spin tower and
emphasizing that it promises to lead to a totally convergent S-matrix
essentially without free parameter (this is apparently true in lowest
nontrivial order) it acquired a hegemonic status above QFT.
By its very nature an S-matrix cannot have ultraviolet divergencies, but the
claim that it is free of any additional parameters beyond the string tension
would be remarkable, always assuming that the tube geometry (which constitutes
the graphical visualization of interacting strings) admits an interpretation
which avoids the metaphoric reading of a string in spacetime (see next
section). What is surprising about this proposal is not that it was made; the
speculative dispositions of theoretical physicists lead to many fantastic
suggestions, some of them even enter journals. The surprising aspect is rather
that the string idea was embraced quite uncritically by leading theoreticians
so that the formation of a superstring community around this idea became
inevitable. The resulting sociological setting was very different from
previous situations in which the established members of the particle physics
community provided the crucial balancing criticism. In this way the march into
a theory which has a highly technical mathematical formalism and a totally
metaphoric physical interpretation took its course.
Modern string theory has moved a far distance beyond the N-G model. In
textbooks and reviews this model only serves to illustrate the underlying idea
as well as its historical origin. Since our critical view is not directed
towards concrete calculations but aims at their metaphoric interpretation, it
suffices to exemplify our point in this model. All post N-G advancements of
string theory, as sophisticated as they may be, are afflicted with one
original sin: \textit{the incorrect metaphor of strings in spacetime.} The
crucial question, namely what results really mean which are derived from the
incorrect metaphor of a string in spacetime, remained unanswered. The only way
in which one can rescue the string theoretic calculations is to find out what
remains if this metaphor is replaced by its true meaning. This is a problem
which will be analyzed in the concrete context of the N-G model in the next section.
\section{The ascent of the metaphoric approach to particle physics: string
theory}
When it became clear that QCD was the more appropriate theory for the
description of strong interactions, a completely new and physically more
relevant playground offered new challenges for particle theorists and
phenomenologists. As a result the era of string theory which started with the
dual model came to an abrupt but only temporary end. In this way string theory
became an "orphan" of particle physics; its modest mathematical charm, which
consisted in Euler type of identities between gamma functions, lost its
physical attraction.
If it were not for a small community of string theorists who remained unshaken
in their belief that hidden behind the many unexpected properties there ought
to exist a deep new kind of quantum physics, string theory, as we presently
know it, would not be around today. Although those few years when string
theory was out of the limelight would have been the right time to look at its
conceptual foundations, this is not what actually happened. Rather than a
foundational critical review, the main attraction was the idea that a theory,
which by its "stringy" nature contains a mass/spin tower in which all possible
spin values occur (including that of a spin=2 graviton), could lay claim to
represent a TOE i.e. a unique theory of all quantum matter in the universe.
Its apparent uniqueness in form of a superstring supported such a belief, at
least in the beginning. In this way the chance to remove from string theory
the metaphoric casing of its birth was lost and the delicate critical
distinction between the autonomous conceptual content of a theory and the
metaphoric presentation of its computational rules became increasingly
blurred. It is our aim in this section to elaborate on this point.
In the previous section it was pointed out that the duality property, which
led to string theory, came into being by mathematical expediency; as a result
its content has little in common with the crossing property in QFT which is a
consequence of physical principles. It is not uncommon in particle theory to
substitute a problem which one cannot solve by a similar looking one which is
more susceptible to solution. One gets into very unsafe waters if the
construction has a high degree of mathematical consistency within a
fundamentally flawed physical concept.
As will be seen in the sequel the idea that string theory has to do with
string-like extended objects in spacetime is fundamentally flawed even though
no geometric-mathematical property of strings is violated. Geometry does not
care about its physical realization\footnote{A similar but less extreme
situation is met with the mathematics of Riemann surfaces which occurs at
different places which have nothing to do with surfaces in a physical sense
e.g. Fuchsian groups. In chiral conformal theory one often finds the
expression "chiral theory on Riemann surfaces" when the analytic continuation
of the correlation functions yields a Riemann surface, but this should not be
confused with the physical living (localization) space which remains always
one-dimensional.}. If one does not notice that the real autonomous content
contradicts the metaphor, the problem becomes compounded. The problem arises
from confusing geometric properties of string theory which are correct in
their own geometric-mathematical right, with physical-material localization in
spacetime. For precisely this reason the Atiyah-Witten geometrization of
particle physics which started at the end of the 70s was a double-edged sword.
Since the Lorentz-Einstein episode we know that the physical interpretation is
not an automatic consequence of mathematics.
In fact string theory was constructed with an
excess of sophisticated "tinkering" and a lack of guiding principles, which in
many theoreticians, especially those with a rich experience with conceptual
problems of QFT, nourished the suspicion that, even leaving aside the problem
of predictive power, there is something deeply surreal about this theory.
Instead of entering a point for point critique of the extensive and
technically laborious content of string theory, I will focus my critical
remarks on what I consider the Achilles heel of string theory, namely its
metaphoric relation with those localization concepts which are central for the
formulation and interpretation of particle physics.
We know from Wigner's representation theoretical classification (section 4)
that the indecomposable constituents of positive energy matter come in
three families: the massive family which is labeled by a continuous mass
parameter and a discrete spin, a discrete massless family with discrete
helicity, and finally a continuous zero mass family with an infinite spin
(helicity) tower. Whereas theories involving the first two families have
generating pointlike localized fields or field strengths (with possibly
stringlike potentials), there are no pointlike covariant generators within the
last family; rather the sharpest localized generators in that case are
semiinfinite strings localized along the spacelike half-line $x+\mathbb{R}%
_{+}e,$ where $x$ is the starting point of the string and $e$ is the spacelike
direction in which it extends to spacelike infinity. Their localization shows
up in their commutation relation (\ref{string}).
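For orientation, these three families may be characterized by the representation of Wigner's little group (the stabilizer of a reference momentum), a standard fact added here for the reader's convenience:
\begin{align*}
&  m>0:~~SU(2),~~\text{spin }s=0,\frac{1}{2},1,...\\
&  m=0~\text{(finite helicity)}:~~\tilde{E}(2)~\text{with trivially
represented "translations"},~~h=0,\pm\frac{1}{2},\pm1,...\\
&  m=0~\text{(infinite spin)}:~~\text{faithful }\tilde{E}(2)~\text{%
representation with Pauli-Lubanski invariant }\kappa>0
\end{align*}
it is the faithfulness of the euclidean "translations" in the last case which obstructs pointlike generators and enforces the semiinfinite string localization described above.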
Stringlike localized objects can of course also be constructed in pointlike
QFTs; one only has to spread a physical pointlike field along a string where
the spreading has to be done in the sense of distribution theory since the
resulting stringlike objects is still singular. Such a string will be referred
to as composite or \textit{decomposable; }it plays no role in structural
investigations\textit{.} Neither these strings nor their indecomposable
counterpart arise directly from Lagrangians, but their presence is important
for the understanding of the physical content of a theory. An excellent
illustration of an indecomposable string\footnote{Only by using unphysical
fields the DJM formula has the appearance of a decomposable string.} is
provided by the nonlocal aspects of electrical charge-carrying fields in QED
\cite{infra} whose sharpest localization is that of a Dirac-Jordan-Mandelstam
(DJM) semiinfinite spacelike string. There are also situations in which only
stringlike semiclassical solutions (whose quantum status remained unknown)
play a role.
Even in cases where the full physical content can be expressed in terms of
pointlike localized fields, there may be important physical reasons for their
introduction which are related to \textit{improvements in the short distance
behavior}, thus permitting an extension of the family of renormalizable
interactions. It turns out that covariant string-like generating free fields
exist for all spins. Whereas the short distance dimension of point-like
fields increases with their spin, the short distance dimension (sdd) of their
string-like counterpart $\Phi(x,e)$ remains at $sdd\,\Phi=1$ \cite{MSY}. Hence
the formal power-counting criterion is fulfilled within the setting of
maximally quartic interactions, and the remaining hard problem consists
in generalizing the rules of the perturbative Epstein-Glaser iteration from
point-like to string-like fields. This problem, which has obvious implication
for a radical reformulation of gauge theory\footnote{The present formulation
only contains locally gauge invariant observables within its formalism,
nonlocal gauge invariant objects have to be introduced "by hand"
\cite{infra}.} is presently being studied \cite{M-S}.
As explained in section 4 (\ref{string}), the representations of the third
kind of positive energy matter (Wigner's famous infinite spin representations)
are \textit{indecomposable strings }\cite{MSY}; in fact these free string
fields, unlike those charge-carrying QED strings, do not even lead to
pointlike composites \cite{invisible}. These representations are therefore
excellent illustrations for the meaning of \textit{indecomposable stringlike
localized fields.}
The important lesson from section 4 is that \textit{localization is an
autonomous quantum theoretical concept} i.e. there is no general
correspondence to classical localization, although for pointlike localized
fields the Lagrangian quantization shows that a classical field at a point
remains also pointlike in the autonomous notion of quantum causal
localization. The latter comes with its own \textit{modular localization
formalism} which is of a representation theoretic kind and bears no trace of
any quantization parallelism (section 4). There is no denying that
the coincidence of the two was one of the luckiest moments in the history
of particle physics; QFT could and did start with Jordan and Dirac and did not
have to wait for the arrival of Wigner's representation theory.
The conceptual autonomy of quantum localization is underlined by the rather
involved arguments which are necessary to show the equivalence of the
quantization- with the representation-method \cite{Wei}. The representation
theoretical setting together with modular localization is more flexible, since
it immediately leads to infinitely many covariant field realizations for the
unique $(m,s)$ representation; from the quantization viewpoint this would be
hard to see. In fact the confusing situation during the 30s (when there were a
large number of proposals for field equations which looked different but
nevertheless were physically equivalent) was Wigner's motivation for taking
the unique representation theoretical route.
The correspondence breaks down in two cases which both turn out to be
string-localized. The more spectacular of the two is the mentioned infinite
spin representation where there is no relation at all to the classical spinor
formalism. The zero mass finite helicity case on the other hand is pointlike,
but not all spinorial objects which relate the physical spin with the formal
dotted/undotted spinorial; spin are admissible in the sense of
(\ref{admissible}); it is well-known that the physical photon representation
leads to field strength but not to a vectorpotential. Allowing string
localized fields one recovers all undotted/dotted spinorial possibilities
which were admissible in the massive case.
The long time it took to find a covariant field description for the infinite
spin representations finds its explanation in the nonexistence of any
pointlike generating field. More recently, with a better understanding of
modular localization (section 4), it became clear that the generating fields
with the best localization are covariant string-like fields \cite{MSY} which
generate indecomposable string states. It serves as an excellent illustration
of what string-localization in a non-metaphoric intrinsic sense really means.
This intrinsicness stands in an ironic contrast to the fact that the
classical relativistic Nambu-Goto Lagrangian does not lead to quantum objects
which are string localized; its full content is rather described by a
point-like localized \textit{infinite component field}, a special case of a
generalized free field. The main topic for the rest of this section will be to
demonstrate that the meaning of "string" in connection of "string theory" is
purely metaphoric.
Metaphors in particle theory are often helpful as intermediate crutches
because they facilitate finding the correct concepts and their appropriate
geometric-mathematical implementation. It is not important that they already
contain the correct physical interpretation of the object which has been
constructed; it suffices that they helped to get there. After the object has
been constructed it unfolds its own intrinsic interpretation. QFT is an
excellent illustration; no matter if one gets to it either by quantization or
representation theory, it always unfolds its intrinsic logic which rejects any
outside metaphoric interpretation that does not agree with its autonomous
meaning. This internal strength it owes to its \textit{quantum causal
localization} (which only theories with a maximal velocity but not QM are able
to possess). If one gets into a situation where metaphors contradict the
intrinsic properties and one fails to notice this lack of balance, one runs
into a very serious problem.
A good starting point for the problem of localization in string theory is to
remind oneself of the form of the most general pointlike field which is free
in the sense of leading to a c-number (graded) commutator. In case of a
discrete mass spectrum, a "masterfield" which contains all spins as well as
all possibilities to intertwine a given physical spin $s$ with all admissible
covariant representations looks as follows
\begin{align}
& \Psi(x)=\sum_{\left( A\dot{B},s\right) ,i}\Psi^{(A\dot{B},s)}%
(x,m_{\left( A\dot{B},s\right) ,i})\label{admissible}\\
& \Psi^{(A\dot{B},s)}(x,m_{\left( A\dot{B},s\right) ,i})=\frac{1}{\left(
2\pi\right) ^{\frac{3}{2}}}\int e^{-ipx}\sum_{s_{3}=-s}^{s}u^{\left(
A\dot{B},s\right) ,i}(s;p,s_{3})\times\\
& \times a(s;p;s_{3})\frac{d^{n-1}p}{2\sqrt{\vec{p}^{2}+m_{\left( A\dot
{B},s\right) ,i}^{2}}}+c.c.\nonumber\\
& \left[ \Psi(x),\Psi^{\ast}(y)\right] _{grad}=\sum_{s}\sum_{\left( A\dot
{B},s\right) ,i}\mathbf{\Delta}^{(A\dot{B},s)}(x-y;m_{\left( A\dot{B}%
,s\right) ,i},s)
\end{align}
The meaning of the notation is as follows:
\begin{description}
\item The $\Psi^{(A\dot{B},s)}(x,m_{\left( A\dot{B},s\right) ,i})$ are free
fields of mass $m_{\left( A\dot{B},s\right) ,i}$ and spin $s$ which transform
according to a $\left( 2A+1\right) (2\dot{B}+1)$ dimensional irreducible
representation of the two-fold covering $\widetilde{O(1,3)}$ of the Lorentz
group, characterized by the SL(2,C)-"spin" $A$ and its conjugate $\dot{B}%
$\footnote{These are the famous undotted-dotted spinorial representations in
van der Waerden's notation.}. For a given spin $s$ there exists an infinite
set of pairs $(A,\dot{B})$. The only restriction which characterizes
admissible triples $(A\dot{B},s)$ is $\left\vert A-\dot{B}\right\vert \leq
s\leq A+\dot{B}$ for $m_{i}>0$, whereas for $m_{i}=0$ it sharpens to
$s=\left\vert A-\dot{B}\right\vert$.
\item For each admissible triple $(A\dot{B},s)$ and each mass
$m_{\left( A\dot{B},s\right) ,i}$ there exists one intertwiner, namely
$u^{\left( A\dot{B},s\right) }(s;p,s_{3})$, i.e. a rectangular matrix of
width $2s+1$ and height $\left( 2A+1\right) (2\dot{B}+1)$ (with suppressed
column indices) which converts the unitary $p$-dependent $\left( 2s+1\right)
\times\left( 2s+1\right) $ Wigner representation matrix (which appears in
Wigner's unitary transformation law of the irreducible $(m_{i},s)$
representation of the little group) into the $\left( 2A+1\right) (2\dot
{B}+1)\times\left( 2A+1\right) (2\dot{B}+1)$ matrix of the spinorial
$(A,\dot{B})$ representation of $\widetilde{O(1,3)}$. One Wigner
representation is associated with infinitely many admissible covariant
representations. The $c.c.$ denotes the antiparticle creation contribution,
whose $v$-intertwiner converts the unitarily equivalent conjugate Wigner
representation matrix into the $(A,\dot{B})$ representation of the Lorentz
group. In all formulas $p$ is to be taken on the appropriate $m_{\left( A\dot
{B},s\right) ,i}$ mass shell.
\item The two-point function of two spinorial fields is only nonvanishing if
they are mutually charge conjugate (with selfconjugate Bosons as a special
case), and hence the c-number graded commutator has the form in the third line
with $\mathbf{\Delta}^{(A\dot{B},s)}(x-y;m_{\left( A\dot{B},s\right) ,i},s)$
being a matrix-valued covariant polynomial acting on the scalar two-point
commutator function $\Delta(x-y,m_{\left( A\dot{B},s\right) ,i})$. Since all
the $u$ and $v$ intertwiners for all admissible triples $(A\dot{B},s)$ have
been computed \cite{Wei}, one also knows all covariant graded commutator
functions. The latter can also be computed directly, thus avoiding the rather
complicated intertwiners \cite{Tod}.
\end{description}
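As a schematic summary (suppressing the triple labels and row/column indices; conventions may differ by normalization from \cite{Wei}), the defining intertwiner property converts the $p$-dependent Wigner rotation into the $p$-independent spinorial representation:
\begin{align*}
D^{(A,\dot{B})}(\Lambda)\,u(p)=u(\Lambda p)\,D^{(s)}\!\left( R(\Lambda
,p)\right) ,\qquad R(\Lambda,p)=\text{Wigner rotation}.
\end{align*}
For the photon ($m=0$, $s=1$) the admissible possibilities $(A,\dot{B}%
)=(1,0)\oplus(0,1)$ correspond to the field strength $F_{\mu\nu}$, whereas the
vector potential candidate $(\frac{1}{2},\frac{1}{2})$ is not admissible as a
pointlike field; as mentioned above, it only becomes available in the
string-localized setting.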
The field (\ref{admissible}) is the most general covariant free field with a
discrete mass spectrum; if the mass spectrum is continuous, the field is
called a generalized free field. The shared property is that the (graded)
commutator is a c-number commutator function. Guided by the SO(4,2) spectrum
of the hydrogen atom, people in the 60s tried to find a distinguished infinite
component covariant wave function, or equivalently a "natural" infinite
component free field. We will see in a moment that string theory produces a
nontrivial model solution.
The irreducible components have an analog in any spacetime dimension $n>4,$
but the analog of the undotted/dotted formalism is more involved since the
Wigner rotations now refer to the rotation group $\widetilde{O(n-1)}$ which
has more than one Casimir invariant.
Leaving the physical interpretation aside, one may ask the mathematical
question whether there exist quantum mechanical models which can be used in
the construction of such an infinite component local field with a mass/spin
tower which is fixed by the rules of quantum mechanics. The infinite component
one-particle Hilbert space of such a model must be a subspace $H_{sub}\subset
L^{2}(\mathbb{R}^{n})\otimes H_{QM}$. For simplicity we restrict to a bosonic
situation in which the auxiliary QM is generated by a system of vector-valued
bosonic operators. The Nambu-Goto model offers a solution. One starts with the
oscillators of a quantum mechanical string and defines as $H_{QM}$ the Hilbert
space generated by the oscillator Fourier components leaving out the zero mode.
There is no unitary representation of the Lorentz group in this Hilbert space
in which each oscillator transforms according to its vector index. In fact
there is no quantum mechanical space which can support a unitary covariant
representation of the Poincar\'{e} group in this way.
To see the way out it suffices to remember how one handles the finite
dimensional case with say $H_{QM}=V^{(n)}$ an n-dimensional vector space. To
obtain the correct spin one unitary representation, one passes to a subspace
in which the Lorentz group representation is isomorphic to the homogenous part
of the unitary Wigner representation. The following two relations indicate
this procedure for operators in the present case
\begin{align}
U(a,\Lambda)\left\vert p;\varphi\right\rangle & =e^{ipa}\left\vert \Lambda
p;u(\Lambda)\varphi\right\rangle ,~\varphi\in H_{QM}\\
U(a,\Lambda)\left\vert p;\varphi\right\rangle _{H_{sub}} & =e^{ipa}%
\left\vert \Lambda p;u(\Lambda)\varphi\right\rangle _{H_{sub}}+\mathrm{nullvector}
\end{align}
where $u(\Lambda)$ denotes the natural action on the multivector indices of
the quantum mechanical states. Since the natural L-invariant inner product in
the full tensor product space is indefinite, one looks (as in the case of
finitely many vector indices) for a subspace $H_{sub}$ on which it becomes at
least positive semidefinite, i.e. after passing to the subspace the
Poincar\'{e} transformations commute with the condition which defines this
subspace up to a vector in $H$ of vanishing norm. The more general
transformation up to a nullvector is necessary, as evidenced by the
Gupta-Bleuler formalism. The last step of passing to a positive definite
inner product is canonical: one identifies equivalence classes with respect
to nullvectors.
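Schematically, this last step is the quotient construction familiar from the Gupta-Bleuler formalism (a sketch; $N$ denotes the nullspace of the semidefinite inner product):
\begin{align*}
N &  =\left\{ \psi\in H_{sub}\,:\,\left\langle \psi,\psi\right\rangle
=0\right\} ,\qquad\hat{H}_{sub}=H_{sub}/N,\\
\hat{U}(a,\Lambda)\left[ \psi\right]  &  =\left[ U(a,\Lambda)\psi\right] ,
\end{align*}
which is well defined precisely because the Poincar\'{e} transformations
preserve $H_{sub}$ up to nullvectors; on the quotient $\hat{H}_{sub}$ the
induced representation is unitary.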
The remaining problem is to characterize such a subspace. But this is
precisely what the $u$-intertwiners accomplish.
In the infinite dimensional setting of the N-G model there are two
conditions which these oscillators have to obey: the string boundary
conditions and the reparametrization invariance condition. The corresponding
quantum requirements are well-known. In the present tensor product setting
they mix the momentum of the sought object with its "internal" quantum
mechanical degrees of freedom and in this way one gets to the physical states.
The two conditions provide the additional knowledge for a master-intertwiner
which intertwines between the original covariant transformation law and a
positive metric subspace $H_{sub}$ on which the representation is
semi-unitary. As mentioned the formation of a factor space $\hat{H}_{sub}$
leads to a bona fide unitary representation.
Since all unitary representations are completely reducible and a free field in
a positive energy representation is fully determined by its two-point
function, it is clear that the resulting object is an infinite component
pointlike field and not a string in physical spacetime. The string has not
disappeared; it is encoded in the mass/spin spectrum as well as in the
irreducible component $u^{(A\dot{B},s)}$ intertwiners. Those transformations in
the oscillator space $H_{QM}$ which leave the subspace invariant and do not
implement Poincar\'{e} transformations mix the irreducible components of the
infinite component field. They correspond to the "wiggling" of the string but
are totally unphysical since they flip the masses and spins between different
multiplets. It is not a string in spacetime which wiggles but rather a tower
over one spacetime point.
There is no reason to go into explicit computations, they can be found in
\cite{Dim}; similar calculations are also known to string theorists
\cite{Mar}. The $u^{(A\dot{B},s)}$-content of the N-G intertwiner is not
contained in those papers but it can be computed if needed. One should also
add the remark that in order to show the pointlike nature of the
localization\footnote{This means that there exists a N-G wave function-valued
distribution $u(x,..)$ which generates via test function smearing a dense set
of normalizable N-G states.} of the N-G wave functions one needs the validity
of the positive energy condition. This requires the removal of a tachyonic
component. Passing to the 10 dimensional superstring, this step becomes unnecessary.
The metaphoric problem starts with declaring the localization point of the
infinite component field as the c.m. point of a string. In other words, in
order to account for the quantum mechanical string one invents a string in
spacetime of which only the c.m. point is "visible". It is clear that this is
not a simple slip of the pen or a coincidental multi-slip of multi-pens;
\textit{here we are confronted with a deep misunderstanding of concepts of
local quantum physics.} Quantum theories based on causal locality, the
quantum counterpart of a finite propagation speed, possess, as opposed to the
Born probability localization of QM, a completely intrinsic localization (the
modular localization of section 4), and any attempt to impose an
interpretation from the outside (as that of a string of which only the c.m.
is visible) will sooner or later lead to the creation of a surreal parallel
world.
The only explanation I have for the present situation is that the admittedly
very subtle localization concepts, which underlie local quantum physics, have
not been mentally digested; in view of the fact that popular textbooks on QFT
are limited to commented computational recipes and obviously consider any
conceptual discussion as a waste of paper, this is lamentable but hardly
surprising. The most subtle of all structural properties of QFT is
localization which finds its most intrinsic formulation in the concept of
modular localization, which is intimately connected to unitary positive energy
representations of the Poincar\'{e} group. Perhaps at this point it becomes a
bit clearer why a section on localization (section 4) was added in a paper
which has string theory in its sight.
The next step, namely to formulate a string theoretic interaction, did not
contribute anything in the direction of a conceptual reassessment of string
theory; to the contrary, it reinforced the metaphoric tendencies. What are
those splitting and recombining tube pictures worth, if there are no material
strings in spacetime to start with? In order to view this situation in a
historical perspective, let us briefly look back at previous struggles for
conceptual clarity in fundamental physics.
A famous episode of conceptual reassessment which immediately comes to one's
mind is the ether-relativity transition. A closer look at how this dispute at
the turn of the 19th century evolved shows that the problem of the ether and
the Fitzgerald-Lorentz contractions was quite innocuous as compared to that of
the conceptual status of string theory. Even if Einstein had not discovered
special relativity in 1905, sooner or later somebody would have been able to
do it. The theory had a solid observational basis, and to go beyond Lorentz it
was only necessary, permitting a philosophical simplification, to apply
Ockham's razor.
None of the preconditions for applying Ockham's razor to string theory are met.
There are no observational facts to refer to, and on the theoretical side
there is not even a consistent concept which could serve as an ether analog,
i.e. a temporary conceptual vessel free of contradiction, which could serve as
an intermediate storage for an obviously valuable observation.
But what is there to be stored from string theory? Could it be that behind the
metaphoric camouflage there is a new idea about consistently interacting
infinite component fields? The most realistic expectation after 40 years of
research without any tangible physical result is that Ockham's razor will
leave nothing; more specifically it will convert what was considered to be a
setting of particle physics into a tool in mathematics. For mathematicians the
metaphoric content is no obstacle, as long as the geometrical data are
correctly processed. The sensation to have access to a miraculous gift from
physicists discharges their imagination so that the future of string theory as
an extremely efficient factory for mathematical conjectures would be secured.
Mathematicians do not have to be concerned with the subtle relation between
geometry and material localization in local quantum physics. Apart from a very
few individuals ($<5$), they do not know the intrinsic nature of modular
localization, nor does it play any role in the pursuit of their problems.
In physics the consequences are much more serious. For the first time in the
history of particle theory a whole community has entered a region in which the
capacity to distinguish between the real and the surreal has been lost. On the
one hand the idea of a string in spacetime is totally metaphoric, but on the
other hand one needs precisely this picture in order to make sense out of
interactions. This is a perfect catch 22 situation.
By having taken the old N-G model to illustrate our point of the metaphoric
versus the intrinsic, we do not want to give the impression that superstring
theory has remained on such a simple-minded old-fashioned level. Modern string
theory is a complex subject and requires a large amount of mathematical
sophistication. The use of the N-G model in this essay is only for
illustrative purposes; this is pretty much the role it plays in the first
sections in books on superstring theory. But the localization problem under
discussion is not affected by these kinds of extensions to more sophisticated
implementations of the string idea. The situation is a bit like the biblical
story about Adam and Eve and the original sin which is inherited despite all
cultural enrichments.
The acceptance of the metaphoric interpretation of string theory and the kind
of thinking resulting from it has spread into parts of particle theory and
taken its toll. An example is the Klein-Kaluza idea of extra spacetime
dimensions which underwent its quantum renaissance in the entourage of string
theory. Again the metaphoric trap which prevents people from recognizing the
deep link between spacetime and the localization of local quantum physics
took its toll. Since inner symmetries in local quantum physics arise from the
classification of possibilities of realizing a local net of observable
algebras, there is no way in which a spacetime dimension can "roll up" and
become an inner symmetry index. Even without knowing the theory of how inner
symmetries evolve from local nets \cite{Ha}, the horse-sense of somebody who
knows about the ubiquitous presence of vacuum polarization through
localization should alert him that there is something fishy about
transporting this idea into QFT without an in-depth study of how to reconcile
it with structural properties of local quantum physics.
Many physicists, especially those who have been around before string theory
took the headlines, have noticed its almost surreal appearance, even if not
all of them have been able to express their uneasy feeling as forcefully as
Phil Anderson, Robert Laughlin or Burt Richter. But the point of departure
into the surreal is subtle, as this essay attempts to demonstrate. What
weighs even more is that the metaphors have been sanctioned by several
generations of particle theorists of the highest intellectual caliber, a very
bad precondition, certainly much worse than that at the time of the ether.
Research in particle physics is not taking place in an ivory tower; it remains
under the influence of the Zeitgeist. Can one think of a better analog to the
reign of a rampant post cold war capitalism, which for decades with its false
promises managed to suffocate all ambitions for a more rational and equitable
world, than the reign of a TOE built around a metaphor?
It is no longer necessary to speculate about possible connections between
metaphors in physics and their philosophical manifestations. String theorists
not only do not deny this relation, they even take pride in spreading its
surreal content. More recently several string theorists or physicists
influenced by string theory have come out with several articles and books
in which they presented their string theory supported Weltanschauung.
These articles re-enforce the take on the metaphoric surreal consequences of
string theory in this essay and they also justify the uneasy feelings which
the particle physics community outside of string theory has about what is
going on in the midst of their science. There is Susskind's world of anthropic
reasoning leading to a world of multiverses \cite{Suss}, a kind of last ditch
attempt to rescue the uniqueness of superstring theory in its role as a TOE.
An even more fantastic view of the physical world can be found in articles
by Tegmark \cite{Teg} in which every mathematical theorem finds its physical
incarnation in some corner of the multiverse. It is inconceivable that
articles such as \cite{Sche} would have been written without prior
preparation of a metaphor-friendly area in particle theory.
It would be totally incorrect to dismiss these articles as outings of
individual oddballs of the kind previous TOEs have attracted. The philosophy
contained in those articles is the logical bottom line of four decades under
the reign of an apparently mathematically consistent theory whose illusionary
aspect is not the result of an affectation of the authors with science
fiction. Rather it is an unfortunate consequence of a misunderstanding of the
most central issue of local quantum physics: the autonomous meaning of
localization. A string in QM does not know anything about placement in
spacetime; only causal theories with a maximal velocity come with this notion,
and string theory belongs to this category of theories.
In this context one should not outright dismiss the suggestion that the
immense popularity of geometry, especially differential geometry in the 70s
and 80s may have contributed to the somewhat frivolous handling of the issue
of localization. Whereas in quasiclassical approximations, e.g. soliton or
monopole solutions, this has no effect, there is a potential danger of
metaphoric trespassing when it comes to interpretations of exact solutions.
An illustration of this point is provided by the interpretation of a chiral
observable algebra on the circle in a temperature state with respect to the
conformal Hamiltonian $L_{0}$. The thermal correlations of such a theory
fulfill a highly nontrivial duality relation in which the upper boundary
circle of the KMS analyticity region becomes the localizing circle of the
dual temperature chiral theory. Both the original theory and its dual "live"
(are localized) on a circle. To say that this theory lives on the torus is a
metaphoric statement. The torus is an analyticity domain which relates two
models on a circle. In order to convert the upper boundary of the torus into
the living space of a chiral theory, the boundary value has to be taken in a
particular way in order to obtain the expectations of products of fields.
These (modular\footnote{Here the word modular stands for identities between
modular forms in complex function theory.}) dual temperature relations are
the analogs of the Nelson-Symanzik duality relation for the correlation
functions of massive two-dimensional QFT on a finite interval. In these
relations the spatial variable in correlation functions is interchanged with
the euclidean time. Whereas the proof of these relations depends on the
validity of a strong form of the Osterwalder-Schrader theorems, the chiral
case is much more complicated since space and time have been combined to a
lightray and the duality becomes a selfduality.
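In standard conformal field theory notation the chiral duality is, schematically, the modular transformation property of the Gibbs functional (a sketch; the central charge shift follows the usual CFT conventions):
\begin{align*}
Z(\tau)=\operatorname{Tr}e^{2\pi i\tau\left( L_{0}-\frac{c}{24}\right)
},\qquad Z(-1/\tau)=Z(\tau)\ \ \text{(for modular invariant models),}
\end{align*}
so that the thermal (euclidean time) circle of one chiral theory is exchanged
with the localization circle of the dual temperature theory; the torus itself
remains an analyticity domain, not a localization region of quantum matter.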
Evidence suggests that behind this more complicated chiral temperature duality
relation there is a noncommutative euclideanization which is related to the
modular localization structures of chiral theories on a circle, but the
mathematics which converts this evidence into a solid proof is still missing
\cite{Schrader}. Also in this case it is important to distinguish between
geometric pictures and physical localization in spacetime. Obviously there is
a Riemann surface in form of a torus associated with such a thermal chiral
theory. But this torus is not the localization region of quantum matter; the
latter only exists on the circular physical boundary. String theorists tend to
view conformal theories as worldsheet embeddings of this torus.
Geometrically this is of course correct, but from the viewpoint of physical
localization in spacetime it is a disaster.
String theorists want it both ways: on the one hand one should think in terms
of spacetime strings; on the other hand, especially if they feel hard pressed
on this point by their critical opponents, they take recourse to the excuse
that what they really had in mind is a pure S-matrix theory. In any theory which
admits objects in spacetime one needs the interceding of time-dependent
scattering theory in order to get to an S-matrix; this is the only rational
way in which "on- and off-shell" can be connected. Working on this issue with
cooking recipes would offer metaphors another breeding ground. A geometric
picture which only serves as a cooking recipe for something which one calls an
S-matrix, without having even checked those general properties with which
Stueckelberg challenged Heisenberg's S-matrix, would make both Feynman and
Stueckelberg rotate in their graves.
Having to face the surreal physical world of string theory, it is interesting
to remember that at the beginning of quantum mechanics Heisenberg introduced
the concept of quantum mechanical observables (from which important
restrictions on measurability and a new notion of physical reality emerged)
as a precaution
against metaphoric classical traps. Suppose for a moment that QM had been
discovered in Feynman's path integral setting. In such a situation
Heisenberg's notion of observables would have been essential for preventing a
journey into a world paved with metaphors. But once the problems of
interpretation of QM became clarified, Feynman's path integral was an
important enrichment with no danger of evoking misleading metaphors.
Unfortunately an operator version similar to the one in QM exists presently in
QFT only in a fragmentary form. Hence possible misleading interpretations
extracted from the functional integral representation are not yet banned.
It may happen that string theory will disappear through the exhaustion of its
proponents or as a result of significant changes from experiments and
observations. A move away from the Zeitgeist of the post cold war cultural
dominance of global capitalism and a change of directions away from
confrontation and exploitation may also lead to a loss of interest in grand
designs as TOEs and deplete the chances of their protagonists to obtain high
social status within the particle physics community. But considering the
enormous amount of manpower and material resources which have gone into that
project during four decades, there should be a detailed account of what
precisely kept a large community of highly skilled people working on this
project. It is hard to imagine that particle theory can have a post string
future without accounting for the question whether string theory was more than
a collection of recipes with a misleading spacetime interpretation. A closure
only as a result of difficulties in reconciling string theory with
observations, without a critical theoretical appraisal of its conceptual
content, would be unsatisfactory.
Apart from the introduction, the presentation has been kept on the track of a
scientifically oriented critical review. But the history as well as the
present status of string theory raises many questions whose answer has to be
looked for outside particle physics, in the realm of the sociology of the
string community and its coupling to the Zeitgeist. The following last section
attempts to address such questions.
\section{TOE, or particle theory in times of crisis}
Although there is general agreement among experts that particle physics is in
the midst of a crisis, there are diverging opinions about the underlying causes.
An often heard opinion is that the ascent of metaphoric ideas and the
increasing popularity of theories of everything (TOE)\footnote{As far as I
know the first TOE came with a German name; it was the Heisenberg Weltformel
(a nonlinear spinor theory). Pauli supported it at the beginning but later
(after Feynman's criticism) turned against it. My later Brazilian collaborator
visited Munich at the end of the 50s and got so depressed about the circus
around this Weltformel that he had doubts about his decision to go into
particle physics. Fortunately that TOE remained a local event.} is the result
of stagnation of the research on the SM caused by a lack of experimental data.
If this were the only explanation, one could expect that forthcoming
experimental data from LHC would change directions in fundamental particle
theory research away from the TOE project towards a better understanding of QFT.
However there is no guarantee that a mere change of subject will automatically
alter the conceptual framework in which research is conducted. Whatever will
be revealed by the LHC experiments, the metaphoric style of discourse which
has been supported by string theory will not disappear only because it lost
another round against nature.
Experiments can add new facts, but the present situation will continue as
long as physical principles and conceptual clarity do not regain the
importance they had in the first four decades of quantum theory, and as long
as the sociological preferences of theoretical research do not favor new
conceptual investments in the hugely successful, but still incomplete
QFT\footnote{The level of knowledge in present publications on such central
matters as the particle-field relation \cite{infra} has fallen below what it
was decades ago.}.
It is not a coincidence that the new, less physical-conceptual and more
geometric-mathematical way of doing particle theory is a post SM phenomenon.
Looking at the SM discovery, one cannot help being impressed by the fact that
a relatively modest conceptual investment, which, different from the
discovery of QED two decades before, did not add much to the principles of
QFT, has led to an impressive extension of our description of nature which
has withstood the test of time for four decades.
Instead of accepting this as a result of unmerited luck, particle theorists
became intellectually arrogant and forgot that the great theoretical conquests
of the past were followed by a long struggle about problems of interpretation
which often led to heated disputes between different schools of thought, such
as the famous Einstein-Bohr controversy. Nothing of this kind happened in the
post SM era, even though laying claim to a TOE would require a deep
conceptual analysis more than in any previous situation. The strange proximity of
detailed and occasionally sophisticated computations and crude metaphoric
interpretations has become the hallmark of string theory and related ideas.
Most of the ongoing foundational work is concerned with interpretational
problems of QM. This led to an imbalance with the more fundamental QFT.
Whereas the foundational discussion in QM arrived at nit-picking issues such
as the "many world interpretation" and other forms of intellectual
masturbation, a conceptual penetration of the fundamental localization
concept in QFT, which rules all issues of physical interpretation, has not
even started. It is not easy to dismiss the thesis
that with better understanding of foundational aspects of local quantum
physics, particle theory would have shown more resistance against metaphoric
arguments and temptations to succumb to a siren's call of a TOE.
One cannot realistically expect that new experiments will change the style of
research of particle theorists whose thinking has been formed in the shadow
of a TOE. Too many careers have been built around ideas of a final theory and
too many prominent people have supported it in order to expect any rapid
change. A person who dedicated a good part of his/her scientific life to the
pursuit of a TOE, and in this way became a renowned member of the large and
influential superstring community, will find it difficult to muster the
intellectual modesty it takes to leave the limelight and start a different path.
In this respect science is not different from what happens in the
eco-political realm of society; a change of direction towards more modesty
without going through a substantial crash is not very probable.
A theory whose intrinsic properties are unknown and in which concrete
calculations, metaphoric physical arguments and subtle mathematics form an
entangled mix, presents a fertile ground for ill-defined conjectures leading
to inconclusive publications. Perhaps the most impressive illustration of how
the principal message of an interesting theoretical discovery gets lost in the
conceptual labyrinth of a TOE is the fate of the AdS-CFT correspondence. It is
quite instructive to look at this still ongoing discourse in some detail.
Already in the 60s the observation that the 15-parametric conformal symmetry
group is shared between conformal QFT on 3+1-dimensional compactified
Minkowski spacetime and QFT on the 4+1-dimensional Anti-de Sitter spacetime
(the opposite constant curvature as compared to the cosmologically important
de Sitter spacetime) brought a possible field theoretic relation between
these theories into the foreground; in fact already Fronsdal \cite{Fron}
suspected that a 5-dim. QFT on AdS and a 4-dim. conformal QFT share more than
the spacetime symmetry groups. But an intrinsic localization concept detached
from the
chosen point-like field generators, which could have helped to convert the
shared group symmetry into a relation between two \textit{different spacetime
ordering devices} for the \textit{same abstract quantum matter substrate}, was
not yet in place. Therefore a verification of the suggestion that not only the
symmetry groups, but even the local structure of QFTs in different spacetimes
(even in the extreme case that one spacetime is a boundary of the other) may
be related, remained out of reach.
For several decades the unphysical aspect of closed timelike world lines in
the AdS\footnote{Its universal covering is however globally causal.} solution
of the Einstein-Hilbert equations was used as an argument that these equations
require the imposition of additional restrictions.
The AdS spacetime began to play an important role in particle physics when the
string theory community placed it into the center of a conjecture about a
correspondence between a particular maximally supersymmetric massless
conformally covariant Yang-Mills model in d=1+3 and a supersymmetric
gravitational model on AdS. The first paper was by J. Maldacena \cite{Ma}, who
started from a particular compactification of 10-dim. superstring theory, with
5 uncompactified coordinates forming the AdS spacetime. Since the conceptual
structure as well as the mathematics of string theory is poorly understood,
the string side was tentatively identified with one of the supersymmetric
gravity models which, in spite of its being non-renormalizable, admitted a
more manageable Lagrangian formulation and is believed to have the same
particle content. On the CFT side string theorists placed a maximally
supersymmetric gauge theory for which calculations verifying the vanishing
of the low-order beta function already existed. The vanishing of the
beta function is a \textit{necessary} prerequisite for conformal
invariance. As in all Yang-Mills theories, the perturbative approach to the
correlation functions is seriously impeded by severe infrared divergencies.
The more than 5,000 follow-up papers to Maldacena's work left the conceptual
and mathematical status of the conjecture essentially unchanged. But they
elevated the Maldacena conjecture to the most important result of string
theory and its claimed connection with the still elusive quantum gravity.
The conceptual situation became somewhat more palatable after Witten
\cite{Witten} and Polyakov et al. \cite{Polya} exemplified the ideas using a
d-dimensional Euclidean functional integral setting and paying particular
attention to the $\phi^{4}$ interaction for the scalar component of the quite
involved supersymmetric Lagrangian. In this way the Maldacena conjecture
became converted into Feynman-like graphical rules in terms of vertices and
propagators, both for the AdS bulk-to-bulk and for the conformal boundary-to-bulk case.
The model-independent \textit{structural properties of the AdS-CFT
correspondence} came out very clearly in Rehren's \cite{Rehren}
\textit{algebraic holography}. The setting of local quantum physics (LQP) is
particularly suited for questions concerning "holography", i.e. situations in
which a theory is assumed as given and one wants to construct its corresponding
model on a lower-dimensional spacetime associated with a boundary. Using
methods of local quantum physics one can solve such problems of isomorphisms
between models in a purely structural way, i.e. without being forced to
explicitly construct the models on either side of the correspondence. QFT in
its present imperfect state of development is not capable of addressing
detailed properties of concrete models, but it is very efficient on structural
properties for classes of models, such as correspondences or holographic projections.
Since generating pointlike fields are coordinatizations of spacetime-indexed
operator algebras and as such (like numerical coordinates in geometry) are
highly nonunique and certainly not preserved under holographic changes, an
algebraic formulation, which replaces fields by the net of local algebras which
they generate, is more appropriate.
One interesting property which came out in Rehren's proof was a statement
concerning the degrees of freedom on both sides of the correspondence.
Intuitively one expects that if one starts from a Lagrangian QFT on AdS side
and the holography to the lower dimensional QFT is really a correspondence
(i.e. not a holographic \textit{projection} as in the case of lightfront
holography), the resulting conformal theory should \textit{not} be of the kind
"as one knows it". A similar situation should arise in the opposite direction;
this time because there are too few degrees of freedom.
This mismatch of degrees of freedom was indeed a corollary of Rehren's
correspondence theorem. It permits the following simple computational
illustration (which does not require the more demanding mathematical setting
in \cite{Rehren}). A standard pointlike free quantum field on
AdS\footnote{Here "standard" means originating from a Lagrangian or, in more
intrinsic terms, fulfilling the time-slice property of causal propagation. A
free field is standard in this sense, a generalized free field with an
increasing Kallen-Lehmann spectral function fails to have this property.}
passes under the correspondence to a conformal generalized free field with a
continuous distribution of masses whose anomalous dimension varies with the
AdS mass parameter \cite{Du-Re}. Generalized free fields with unbounded mass
distributions (in particular this conformal generalized field as well as the
N-G generalized free field of the previous section) have a number of
undesirable properties.
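To make the statement about "undesirable properties" concrete, recall the
standard Kallen-Lehmann form (the notation here is generic and not taken from
the cited papers): a generalized free field $\phi$ is completely determined by
its two-point function
\[
\left\langle 0\right\vert \phi(x)\phi(y)\left\vert 0\right\rangle
=\int_{0}^{\infty}d\mu^{2}\,\rho(\mu^{2})\,\Delta^{(+)}(x-y;\mu^{2}),
\]
where $\Delta^{(+)}$ denotes the two-point function of a free field of mass
$\mu$ and $\rho\geq0$ is the spectral measure. The choice
$\rho(\mu^{2})=\delta(\mu^{2}-m^{2})$ returns the standard free field, whereas
the fields arising in the AdS-CFT illustration come with a continuous,
increasing $\rho$; it is this growth of the spectral measure which signals too
many degrees of freedom.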
Since such fields already appeared in the previous section in connection with
the N-G model (section 6), some informative historical remarks about
generalized free fields may be helpful. Such fields were introduced in the
late 50s by W. Greenberg \cite{Gre}. Their main purpose at the time was to
\textit{test the physical soundness of axioms of QFT} in the sense that if a
system of axioms allowed unphysical solutions, it needed to be further
restricted. In \cite{Ha-Sc} it was shown that generalized free fields with too
many degrees of freedom (as they arise in the mentioned illustration) lead to
a breakdown of the causal shadow (also called time-slice) property \cite{C-F}
which is the QFT analog of the classical Cauchy propagation. A related
phenomenon is the occurrence of problems in defining thermal states (a maximal
"Hagedorn" temperature or worse).
Rehren's structural argument was later adapted \cite{Du-Re2}\cite{ReLec} to
the more intuitive functional integral setting (sacrificing rigor in favor of
easier communicability within the particle theory community) in order to allow
a comparison with the work of Witten \cite{Witten}, with the result that the
perturbation theory in terms of vertices and propagators agrees\footnote{In yet
another implementation of the correspondence called "projective holography"
Rehren considers a formulation which uses pointlike generating fields instead
of algebras \cite{ReLec}.}. The trick which did it was a functional identity
which was very specific for AdS models; it showed that fixing functional
sources on a boundary on the one hand, and forcing the field to take a
boundary value via delta function in the functional field space on the other
hand, leads to the same result. But this clear indication in favor of
\textit{one} kind of correspondence did not make any impression on the string
community. They continued to insist that the correspondence in Rehren's
theorem is not what they had in mind; occasionally they referred to it as the
"German AdS-CFT correspondence", thus dismissing the content of the AdS-CFT
correspondence as something which depends on geography.
The question of how such a strange situation could arise in the midst of
particle physics, the beacon of rationality, presently has no definite answer.
In the absence of any other hint, one guess would be that the mismatch of
degrees of freedom on both sides of the correspondence, which contravened what
string theorists expected, led to the conceptual blackout. The subordination
of a theorem to the metaphoric setting of a TOE, i.e. the acceptance of
mathematical rigor and conceptual cohesion only as long as they do not get in
the way of a TOE, is an event without precedent in the history of particle physics.
This development should be deeply worrisome to the particle physics community.
Never before have there been so many (far beyond 5,000) publications with
inconclusive results on what appears to be an interesting, but in the broader
context of particle physics also somewhat narrow, subject. In fact even
nowadays, more than one decade after the gold-digger's rush to the gold mine
of the AdS-CFT correspondence started, there is still a sizable number of
papers every month, by people looking for nuggets at the same place, but
without bringing the gravity-gauge conjecture any closer to a resolution.
Since commentaries like this run the risk of being misunderstood, let me make
perfectly clear that particle physics always was a speculative subject and it
is important that it remains this way. Therefore there is no problem
whatsoever with Maldacena's paper; it is in the best tradition of particle
physics which was always a delicate blend of a highly imaginative and
innovative contribution from one author followed by a critical analysis of
others. One should however be worried about the almost complete loss of
balance in thousands of papers trying to support a conjecture at a place which
is already occupied by a theorem without its authors even being aware of the
situation they are in.
Many of the young high energy theorists have gone into string theory in good
faith, believing that they are working at an epoch-forming paradigmatic
problem because their advisers made them think this way. Joining a globalized
community dedicated to unraveling the answers to the ultimate problems of
matter and the universe with the help of a TOE is simply not a good
prerequisite for starting critical reflections.
String theory is the first theory which succeeded in dominating particle
theory without observational credentials and conceptual coherence, solely on
the claim of being a TOE. The question of how this was possible in a science
which is considered to be the bastion of rationality and experimental
verifiability is not easy to answer.
Human activities, even in the exact sciences, were never completely independent
of the Zeitgeist. In fact, if there is any sociological phenomenon to which the
frantic chase for a TOE finds its analog, it is the post-cold war reign of
globalized capitalism with its "end of history" frame of mind \cite{Fuku} and
its ideological support for insatiable greed and exploitation of natural
resources. It is hard to imagine any other project in physics which would fit
the post-cold war millennium spirit of power and glory and its hegemonic
claims in the pursuit of these goals better than superstring theory: shock
and awe of a TOE against the soft conceptual power of critical thinking.
Whereas the post cold war social order has, contrary to its promises of a
better life for everybody, accentuated social differences and caused avoidable
wars and deep political divisions, the three decades long reign of the project
of a TOE in particle physics has eradicated valuable knowledge about QFT and
considerably weakened chances of finding one's way out of the present crisis.
In a science, in which the discourse at the frontier of research is as
speculative as in particle theory, one needs a solid conceptual platform from
which one can start and to which one can return if, as is often the case,
the foray into the unknown gets stuck in the blue yonder. QFT, with its
step-by-step way of accumulating knowledge and its extremely strong inherent
physical principles, was able to create such a platform. The present status of
the SM almost certainly does not reflect the last word on how to formulate
renormalizable interactions in the presence of spin $\geq1$, which rather
remains one of the great future challenges of QFT.
String theory in its almost 40-year history was not able to create such a
platform; in case of failure there is nothing to fall back on. Physicists who
got directly into the string theory fray without having had the chance to
acquire a solid background in QFT will not be able to get out of the blue
yonder, since there is no conceptually secured region from which one could look for other directions.
Many physicists entered the monoculture of string theory having had only a
brush with a string-theoretical caricature of QFT through some computational
recipes. Hence the failure of their string theory project will not necessarily
strengthen particle theory, since the greatest burden of string theory, its
metaphoric style of discourse and the ability of string theorists (in
Feynman's words) to "replace arguments by excuses", will be carried on for some
time to come. Even a decisive message from the forthcoming LHC experiments
will not be able to change this situation in the short run.
String theory is the first proposal which, as the result of its Planck scale
interpretation, was effectively exempt from observational requirements. It
reached this unique status of observational invulnerability qua birth as the
result of a gigantic jump in scales of more than 15 orders of magnitude
applied to its previous setting as a phenomenological description of certain
aspects of strong interactions. In this situation of absence of direct
observability, a fundamental theoretical discussion about its conceptual basis
would have been of the highest importance and priority. Whereas at the time of
its existence as a phenomenological dual model for strong interactions there
was as yet no compelling reason to do this, when it laid claim to being the
first TOE which incorporates gravity, the time was ripe for such a foundational discussion.
This chance was missed. Unlike in a similar situation around the S-matrix
bootstrap during the 60s, when renowned physicists \cite{Jost} criticized some
of the more outrageous claims and later results showed that the S-matrix
principles of Poincar\'{e} invariance, unitarity, and the crossing analyticity
property admitted infinitely many "factorizing model" solutions in two
dimensions, there was no critical discussion when string theory acquired a
Planck scale interpretation. So neither the change from crossing to DHS
duality, nor the string-theoretic implementation of duality and its later
claim at incorporating all known interactions (including gravity), received the
necessary critical attention.
The later geometrical enrichment rendered certain aspects of string theory
attractive to mathematicians, but did not improve observational aspects nor
its conceptual setting within relativistic QT; it mainly added a mathematical
cordon of "shock and awe" which made it more difficult even for experienced
particle theorists to get to its physical conceptual roots. This explains
perhaps why recent criticism against its hegemonic pretensions was almost
never directly aimed at the conceptual basis but instead focused attention on
sociological and philosophical implications. A common point of attention in
all critical articles is the gross disparity between theoretical pretenses and
the total absence of observational support. Indeed superstring theory and its
claim to be a TOE created for the first time a situation in which string
theorists could acquire a high social status and have easy access to funds
without delivering tangible physical results.
Our main criticism of string theory (sections 5 and 6) was directed toward its
somewhat frivolous manner of ignoring intrinsically defined quantum concepts
such as localization, and the propagandistic style through which it replaces
them by metaphoric interpretations. Normally the terminology in particle
physics is set by the intrinsic properties of the construct and not by
metaphors which may have been helpful in the construction. The origin of the
name is inextricably related to the first model of a relativistic string, the Nambu-Goto model.
But it was demonstrated in those two sections that the quantum one-string
states and their second quantization lifting are in fact point-localized
generalized free fields. The (classical) string localization is a property of
the integrand of the N-G action and does not lead to quantum strings; rather,
the "stringy" looking spectrum is encoded into an infinite particle/spin tower
which "sits" over one point, not a bad achievement after the futile attempts
to find interesting infinite component fields one decade before. Contrary to
statements in the string literature, the localization point is not the c.m.
of a string in spacetime; as we have seen, the only role the classical
string configuration plays on the quantum level is that it sets the relative
normalization of the infinitely many particle components in the Kallen-Lehmann
spectral function.
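The last point admits a schematic formulation (again an illustrative formula,
not one taken from the string literature): the pointlike tower corresponds to
a Kallen-Lehmann measure supported on the discrete mass spectrum,
\[
\rho(\mu^{2})=\sum_{n}c_{n}\,\delta(\mu^{2}-m_{n}^{2}),\qquad c_{n}>0,
\]
so that the quantized N-G object is a generalized free field localized at a
point, with the classical string configuration entering only through the
relative weights $c_{n}$ of the infinitely many mass/spin components.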
Unfortunately the geometrization of recipes to implement string interactions
had a two-edged effect: it helped to organize the mathematics but was
counterproductive on problems of interpretation. As we have emphasized, the
historically important parallelism between classical and quantum localization
theory is limited to points. Without this unique stroke of luck, QFT could not
have been accessed by Lagrangian quantization and the many differential
geometric applications would not have been possible. It breaks down for
stringlike localization. To avoid confusion on this point, it is of course
possible to smear a pointlike physical field into a stringlike configuration.
But such "composite strings" are not Euler-Lagrange objects.
The lack of understanding of the intrinsic meaning\footnote{What is at issue
here is Heisenberg's insistence on "intrinsicness" by introducing the notion of
quantum \textit{observables}.} of "quantum localization" was the reason why it
took such a long time to realize that the third kind of Wigner's irreducible
positive energy representations (the zero mass "infinite spin"
representations) are localized along semi-infinite spacelike strings
\cite{MSY}. These indecomposable representations are also not associated with a
Lagrangian. On the other side of the coin, we know from sections 5 and 6 that
classical N-G string Lagrangians describe upon quantization an infinite
component field whose mass/spin tower "sits" over one point; and the picture of
interacting strings in spacetime is a metaphor whose true meaning in terms of
the pointlike object has remained mysterious (probably only for the reason
that it has not been investigated).
The geometric setting of string theory which in the hands of Ed Witten led to
many mathematical insights, unfortunately takes one away from an intrinsic
quantum physical understanding. It permits the creation of a mathematically
consistent, but physically metaphoric world.
Even the insistence on a pure S-matrix interpretation, which is strongly
suggested by the dual model origin of string theory, does not help to get it
out of a conceptual "catch 22" situation. The infinite component pointlike
quantum physical nature of the string wave function remains incompatible with
the tube picture for calculating transition amplitudes even if the latter are
"on-shell" recipes. No wonder that outside the community of TOE followers,
string theory has a profound surreal appearance; even without the more
detailed critical analysis in this essay, a particle physicist who has lived
through more healthy times senses that there is something fishy.
A pure S-matrix theory cannot be the end of theoretical research since it
contains no information about vacuum polarization; the latter is one of the
most characteristic observationally confirmed properties of QFT. Instead of the
lucid picture of a theory in spacetime in which particles and the S-matrix
arise only at asymptotically large times, string theory as an S-matrix setting
would denigrate particle physics to the result of a cooking recipe and advance
the metaphorization of particle theory.
With such a big pot of conceptual hodgepodge at hand, the forcible retrieval
of uniqueness by interpreting superstring theory as a theory of a multiverse,
in which each 4-dimensional universe represents one superstring solution with
differently compactified extra dimensions, seems to be only a small additional
extension into the surreal.
Apart from particle physics, the idea of a TOE also brings additional problems
of a logical-philosophical nature. Something which is unique cannot be
characterized by its properties, as shown by the futility of medieval attempts
to characterize God. Characterizations of properties of objects in QT emerge
from relations (quantum correlations) between different objects or different
parts of the same object. We have a good understanding of a theory when these
relations lead to invariants which characterize what all the members of a set
have in common, whereas their concrete values distinguish the different
members. This viewpoint is the basis of Mermin's relational interpretation of
QM \cite{Mer} as well as of the presentation of QFT as arising from the
relative positioning of a finite number of "monads" in a joint Hilbert
space \cite{interface}.
This characterization was achieved in QFT; the result is best characterized by
an allegory: \textit{something which looks, moves, smells and sounds like an
elephant is really an elephant}; or in QFT: an object which fulfills
a certain list of well-known properties is really a QFT, and there is no place
for a metaphor which permits a different identification. A TOE would not fit
into the Alice and Bob world of exchange of information.
But when metaphors keep hanging on for decades and the saga about the little
wiggling strings and their big cosmic counterparts even enter videos
\cite{nova}, particle physics starts to have serious problems. Almost four
decades after its inception and a subsequent confusing history, and in spite
of worldwide attention and an enormous number of publications, the conceptual
status of string theory has remained as obscure as it was at the beginning.
This conceptual opaqueness, in particular the misleading information on the
intrinsic properties of localization and the lack of interest of string theory
in critically confronting its own past, coupled with its ability to seduce a new
generation of physicists, may well derail particle physics for some time to come.
It would be naive to believe that an essay like this or any other kind of
critique can change the course of events. Its only value may be that it
facilitates the task of future historians and philosophers of science to
understand what really happened to particle physics at the end of the
millennium. One can be sure that there will be a lot of public interest in
work dedicated to explanations about what really happened during this strange
episode in the midst of the exact sciences.
If string theory comes to an end, the reason will most probably be its
inability to make predictions and/or the exhaustion of its proponents
resulting from frantic efforts to keep the theory in the headlines. It would be
regrettable if it fades away without a final resum\'{e}, because having had a
project running for almost four decades with an enormous expenditure of mental
and material resources, one would at least like to know the precise reason why
it was abandoned and what quantum physical concepts, if any, were behind its
rich metaphoric constructions.
In the previous section we alluded to a parallelism between the post-cold war
Zeitgeist of global capitalism, with its claim at ideological hegemony, and
the increasing receptivity to the idea of a TOE in the form of string theory. As
in all previous chapters of human history, the spirit of the times finds its
vocal presenters in a few individuals who on the one hand appear to be in
command of events but at the same time are impelled by them.
Let us take notice of some of the statements coming from leading figures in
string theory.
The following statement (which is attributed to Ed Witten) underlines this
point: string theory is "\textit{the gift to the 21}$^{st}$\textit{ century
which fell by luck already into the 20}$^{th}$\textit{ century}". It is representative of
several other similar statements whose purpose is to lift the spirit in times
of theoretical doubts and absence of any help from the observations. Such
statements have a seductive effect on newcomers and play an important role in
community building.
To see the dependence on the Zeitgeist in sharper focus, compare this with
important statements from a previous epoch, such as the following one:
"\textit{Henceforth space by itself and time by itself shall become degraded
to mere shadows and only some kind of union of them shall remain
independent}". Everybody will immediately notice that this is a famous
quotation from Hermann Minkowski, at a time when special relativity had already
acquired the status of a theory. This was an illuminating aphorism in poetic
form which condensed the most important message of relativity. The idea of
rallying support behind a totally ill-defined, incomplete project would have
been incompatible with the spirit of the first half of the previous century.
The uncritical reception can be seen from a Wikipedia excerpt which again is
representative of many other similar statements, some even to be found in
scientific publications:
\textit{In the 1990s, Edward Witten and others found strong evidence that the
different superstring theories were different limits of an unknown
11-dimensional theory called M-theory. These discoveries sparked the second
superstring revolution. When Witten named M-theory, he didn't specify what the
"M" stood for, presumably because he didn't feel he had the right to name a
theory which he hadn't been able to fully describe. Guessing what the "M"
stands for has become a kind of game among theoretical physicists. The "M"
sometimes is said to stand for Mystery, or Magic, or Mother. More serious
suggestions include Matrix or Membrane. Sheldon Glashow has noted that the "M"
might be an upside down "W", standing for Witten. Others have suggested that
the "M" in M-theory should stand for Missing, Monstrous or even Murky.
According to Witten himself, as quoted in the PBS documentary based on Brian
Greene's "The Elegant Universe", the "M" in M-theory stands for "magic,
mystery, or matrix according to taste."}
In this case the hegemonic pretense is veiled in the form of a playful name-coquetry.
In contrast, the opening mantra of most introductory talks/articles on string
theory usually starts with an apparent matter-of-fact statement such as:
\textit{String theory is a model of fundamental physics whose building blocks
are one-dimensional extended objects (strings) rather than zero-dimensional
objects (point particles).}
But what is the meaning of this statement and where is its proof? A speaker
who makes such introductory remarks would probably be surprised if somebody
from the audience questioned this; or worse, the questioner would perhaps
be exposed to the mocking laughter of the audience.
String theorists, being no less intelligent than other particle physicists,
know about the weakness of some of their points. But they have invested too
many years in their project to be able to abandon it. This created a
very different situation from earlier times, when the important role of a
critical balance was considered paramount to keeping such a highly
speculative science as particle physics in a healthy state.
Erroneous research directions are not uncommon in a speculative science such as
particle theory. Even some famous physicists got lost for some time in
high-flying, but in the end worthless, projects. The criticism of
their colleagues and their own strong awareness that without a critical
balance particle physics would go astray brought them back to earth. It is
well known that Pauli engaged himself for more than a year in a project which
started with Heisenberg: the (now forgotten)\footnote{One useful relic of this
otherwise forgotten attempt is the two-dimensional Thirring model.}
\textit{Weltformel} in the veil of a "nonlinear spinor theory". When he went
on a lecture tour through the US in order to spread the new idea, he was
severely criticized by Feynman, to the effect that he recognized the flaw and
abandoned the project; losing time with finding excuses was not Pauli's style.
It is certainly true that Pauli was often abrasive with his colleagues and
even wrong (however never "not even wrong") on several occasions, albeit
mostly in an interesting and, for particle physics, profitable way. Certainly
neither he nor his contemporaries would have used Winston Churchill's
endurance-rallying speech\footnote{"Never, never, ... give up".} in the
defense of a questionable theory.
The strongest illustration of the loss of this corrective critical balance is
the manner in which David Gross, representing the thinking of a large part of
the string community, has kept critics at bay by stating that superstring
theory is "\textit{the only game in town}".
There is a certain grain of (perverse) truth in string theorists' self-defense
in hard-pressed situations at panel discussions or interviews, when they take
recourse to the argument of David Gross that, notwithstanding all criticism,
superstring theory is the only worthwhile project. Similar to the words of a
character in a short story by the late Kurt Vonnegut (which Peter Woit
\cite{Wo} used in a similar context):
\textit{A guy with the gambling sickness loses his shirt every night in a
poker game. Somebody tells him that the game is crooked, rigged to send him to
the poorhouse. And he says, haggardly, I know, I know. But it's the only game
in town.} (Kurt Vonnegut, The Only Game in Town \cite{Vo})
the situation in string theory is at least partially self-inflicted, although
its defenders make it appear as the result of an inevitable development in
particle physics. Self-fulfilling prophesies are not uncommon in the political
realm; a long-term derailment of particle physics as a result of the
uncritical pursuit of a TOE is however unprecedented.
For more than three decades considerable intellectual and material resources,
in the form of funding for research laboratories and university institutions,
have been going into the advancement of string theory, and this has led to a
marginalization of other promising areas. So to the extent that the "no other
game in town" claim is an assessment of the present sociological state of
particle physics, it describes, probably unintentionally, a factually true but
unhealthy situation in particle theory.
At no time before has a proposal which for more than four decades did not
contribute any conceptual enrichment or observable prediction to particle
physics received that much uncritical and propagandistic support from a
worldwide community. As a result superstrings and extra dimensions became part
of the popular culture \cite{nova}\cite{Green}. Their increasing importance
for the entertainment industry contrasts with their ill-defined scientific status.
A successful seduction does not only require a skilled seducer, but also a
sufficient number of people who, despite all their knowledge and intellectual
capacities, permit themselves to be seduced by a TOE's glamour. Individuals
can only successfully direct tendencies which are already latently present.
The image of a TOE has always fascinated people, and there were attempts such
as the S-matrix bootstrap which preceded superstring theory. But they had a
lower degree of complexity, and the critical stabilizing power was strong
enough to contain them and finally send them to the dustbin of history.
Perhaps this time a gigantic failure is necessary in order to recover the
lost critical balance and return it to the same important status which it had
in the past.
A return to the project of the SM caused by new data from the LHC may remind
particle physicists that a conceptual unification cannot succeed without
starting from a solid platform of experimental data and theoretical
principles. As in successful times in the past unification should come as a
conceptual gratification at the end, but not enter as a modus operandi in the
construction of a theory.
In the last section I tried to connect the existence of the superstring
community, and its claim for domination of particle physics with the help of
the ideology of a TOE, to the Zeitgeist of the rule of globalized capitalism
and its ideological subordination of all areas of human life to the
maximization of profit. Globalized capitalism has passed its apogee and is
presently in the midst of a terrifying downward spiral. The message proclaimed
by its ideological defenders about the end of history \cite{Fuku} was proven
incorrect by events, and the promise of the beginning of a new epoch free of
wars and social conflicts and of the universal reign of democracy with a happy
life for everybody is beginning to turn sour. Less than two decades after the
doom of communism, it is the victorious capitalism which is throwing the world
into its deepest crisis.
Science has been a very important part of the presentation of the power
and glory of the social system of globalized capitalism. A theory of
everything with the promise of a glorious closure of fundamental physics at
the turn of the millennium fell on extremely fertile ground. Superstring theory
does not only enjoy strong support in the US; the European Union together with
other states is spending billions of dollars on the LHC accelerator and its
five detectors, which among other things are designed for the task of finding
traces of two of string theory's ``predictions'', namely supersymmetry and
extra dimensions. TV series on string theory such as \cite{nova} would be
unthinkable without the embedding into the millennium's power-and-glory Zeitgeist.
It is inconceivable that metaphoric ideas without experimental support and
with no clear conceptual position with respect to QFT could have been supported
under another Zeitgeist. Nevertheless the cause of the loss of critical judgement
in a central part of what is considered to represent the most rational science
remains somewhat of a mystery, and the present attempt to link it to the spirit
of the time is more descriptive than explanatory.
For a few philosophers and sociologists regressive developments in the social
history of mankind do not come unexpected. Especially those of the
\textit{Frankfurt school of critical theory} anticipated dialectic changes
from enlightenment into irrationality. According to a dictum of
Horkheimer\footnote{In Horkheimer's words: \textquotedblleft If by
enlightenment and intellectual progress we mean the freeing of man from
superstitious belief in evil forces, in demons and fairies, in blind fate --
in short, the emancipation from fear -- then denunciation of what is currently
called reason is the greatest service reason can render.\textquotedblright\
Cited in M. Jay, The Dialectical Imagination. A History of the Frankfurt
School and the Institute of Social Research, 1923-1950, Univ. of California
Pr., 1996, p. 253.} and
Adorno: \textit{enlightenment must convert into mythology}.
Indeed the metaphoric nature of the scientific discourse, which gained
acceptability through string theory, has presented the ideal projection screen
for mystical beliefs. No other idea coming from science has had such a profound
impact on the media and on popular culture. Physics departments at renowned
universities have become the home for a new type of scientist who spends
most of her/his time moving around spreading the message of extra dimensions,
landscapes of multiverses etc. This has had the effect that people outside of
science think of intergalactic journeys, star wars, UFOs, poltergeists from
extra dimensions etc. whenever they hear the word ``superstring'' \cite{Kaku}.
For a long time physicists were critical of suggestions that there may be a
link between the content of their science and the prevalent Zeitgeist. Indeed
the interpretation of Einstein's relativity theory in connection with the
``relativism of values'' at the turn of the 20$^{th}$ century is a
misunderstanding caused by terminology; relativity is the theory of the
absolute, i.e. of the observer-independent invariants.
A book by P. Forman \cite{Fo} proposes the daring thesis that a theory in
which \textit{classical certainty} is replaced by \textit{quantum
probability} could only have been discovered in war-ridden Germany, where
Spengler's book \textit{The Decline of the West}, which represented the post
world war I Zeitgeist, had its strongest impact. There are reasons to be
sceptical of Forman's arguments; I think the more palatable explanation is
that the high level of German science, especially on theoretical subjects, was
not at all affected by the destruction of the war and the subsequent social upheavals.
Certainly there is no \textit{direct} way in which the scientific content of
fundamental research can be influenced by processes within a society. The
probability interpretation of QT had no direct relation to the post world war
I doom and gloom, inasmuch as Einstein's special relativity was not influenced
by discussions about the relativism of values, which had been a fashionable
topic at the beginning of the 20$^{th}$ century.
The relation between science and society takes on a slightly different
perspective if one looks at the way protagonists communicated among themselves
and with the public. It is mainly in this indirect way that the Zeitgeist can
have some influence on the direction and the conceptual level of the
scientific discourse.
In particle theory the feedback is more subtle. Whereas in earlier times
leading particle physicists who were the protagonists of speculative new ideas
were also their fiercest critics, it would be difficult to imagine the
protagonists of superstrings playing this double role. Of course even things
which one cannot imagine do happen once in a while. Who would have thought a
couple of years ago that one day Alan Greenspan would come forward and
declare the post-cold-war kind of deregulated capitalism a failed system? Who
might be the Alan Greenspan of a failed TOE in particle physics?
\begin{acknowledgement}
I thank Fritz Coester for sharing with me his recollection about the
Stueckelberg-Heisenberg S-matrix dispute. I also recollect with pleasure
several encounters I had with a young string theorist named Oswaldo Zapata
whose growing critical attitude and disenchantment with this area of research
led him to the previous version of this article and away from string theory
into the philosophy, sociology and history of science.
\end{acknowledgement}
\section{An epilog, reminiscences about Juergen Ehlers}
My visit to Berlin at the end of May 2008 was overshadowed by the sad news
about Juergen Ehlers's sudden death. As every year, I was looking forward to
continue the interesting discussions from my previous encounter with Juergen
at the AEI in Golm.
I met Juergen for the first time at the University of Hamburg in 1956 when,
initially out of curiosity while still an undergraduate, I started to frequent
Pascual Jordan's seminar on General Relativity. Juergen's philosophical
background and his deep grasp of conceptual points left a lasting impression.
This was the time when DESY was founded and the University of Hamburg became
the center of experimental particle physics in Germany. With Harry Lehmann
succeeding Wilhelm Lenz and Kurt Symanzik representing the DESY theory, this
was matched on the side of particle theory. These significant events and the
fact that General Relativity at that time did not enjoy much
support\footnote{This situation only changed many decades later (at least
outside of German universities) when Juergen was able to take an active role
in the formation of a MPI-supported Relativity group in Munich which, under
his leadership, became the nucleus for the AEI in Golm.} finally changed my
mind in favor of doing my graduate work under Harry Lehmann's guidance. After
I graduated and took a position in the US I lost contact with Juergen. Several
of the relativists from the Jordan seminar, including Juergen Ehlers and
Engelbert Schuecking, had gone to the University of Texas in order to work with
others in a scientific program on General Relativity initiated by Alfred Schild.
Juergen's return to Germany at the beginning of the 70s and the formation of a
research group at the MPI in Munich secured the survival of General Relativity
after Pascual Jordan's retirement. The group he formed at the MPI Muenchen was
an important link between the beginnings of post-war General Relativity in
Hamburg and the present AEI in Golm, which the Max Planck society founded in
1995 as part of its extension after the German unification. Juergen Ehlers was
its founding director up to his retirement in 1998. Under his leadership the
AEI became one of Germany's most impressive post-unification Max Planck
institutes. Since my own area of research, QFT, has in an act of extreme
shortsightedness been closed down at all universities of Berlin, the AEI is
the only nearby place where one can meet people with similar interests.
From that time on I met Juergen on each of my yearly visits to the FU-Berlin,
from where I retired in 1998. It was easy to find topics of joint interest
because after his retirement Juergen became increasingly attracted to
foundational aspects of QM and QFT. Only after his death did I learn that he
had also developed quite intense personal contacts with many of the algebraic
field theorists in Germany.
Part of his interest certainly originated from his desire to understand
Jordan's role as the protagonist of QFT in more detail. When Juergen worked
under Jordan in Hamburg, quantum theory and quantum field theory were side
issues; Jordan focussed all his activities on General Relativity, in particular
he was looking for geophysical manifestations of a 5-dimensional extension of
Einstein's theory in which the gravitational constant became a field variable.
For this reason most participants in Jordan's seminar had little knowledge
about quantum field theory, and Jordan never talked about his pathbreaking
contributions; it was almost as if he did not want to be reminded of his
glorious scientific beginnings. This was a somewhat odd situation because in
Sam Schweber's words ``Jordan is the unsung hero of QFT'', whereas on the other
hand he never contributed anything of comparable significance to General Relativity.
Juergen probably saw the post-retirement years (as the acting director of the
AEI) as a chance to finally understand the conceptual content of QFT by
following its history. I shared the interest in the history; although I am a
quantum field theorist, I had never taken the time to look at its early history
either. Juergen gave me a list of Jordan's important publications and I began
to read some. We both learned to appreciate the subtle distinction between
Jordan's and Dirac's viewpoints on relativistic Quantum Theory.
In 2004 there was a conference with international participation in Mainz
\cite{Mainz} about Jordan's contributions to the foundations of quantum
physics, supported by the Academy of Mainz, in the organization of
which Juergen played a leading role. The talks added a lot of unknown or lost
details about the beginnings of quantum field theory.
Some time afterwards Juergen asked me about my opinion on Jordan's algebraic
construction of magnetic monopoles. I was somewhat surprised because I did not
know that Jordan had published a paper on monopole quantization in the same
year as Dirac. Shortly before, I had seen a purely algebraic derivation by
Roman Jackiw. When I wrote to Roman Jackiw he was probably as surprised as I
to find his full argument with all details in Jordan's three-page paper; even
the tetrahedral drawing depicting the (in modern parlance) cohomological
aspect of the argument was there.
Another matter of common interest was to understand the fine points in
Jordan's and Dirac's work on ``transformation theory''. This was the name for
the formalism by which the structural equivalence between the Heisenberg and
Schroedinger formulations of quantum mechanics was established. At the
beginning of Jordan's paper, which we both read, he thanks Fritz London for
sending his results on this issue before publication and strongly praises
London's work for the clarity of his presentation. I viewed this as a
condescending remark in accordance with the social etiquette of the times, but
Juergen went to the library and really read London's article. He convinced me
that, apart from any politeness, Jordan really had profound scientific reasons
to be impressed by Fritz London's article; it is the first article which
connects the new quantum theory with the appropriate mathematical tools, such
as the concept of Hilbert space and ``rotations'' therein (unitary operators).
Usually one attributes the first connection of operators in Hilbert space with
QT to John von Neumann. When Fritz London wrote this impressive article he was an
assistant at the Technische Hochschule in Stuttgart; unlike Jordan he was not
part of the great quantum dialog between Goettingen, Copenhagen and Hamburg.
After the Mainz conference \cite{Mainz} Juergen was engaged in what I think
was a book project about Jordan, since he was compiling a selected publication
list. In this context he asked me for some advice about whether Jordan's
series of papers on what he called the \textit{neutrino theory of light} have
the same quality as his other work and hence should also enter the selected
list. Since Jordan's contemporaries made fun of this project\footnote{The
critical reaction against the metaphoric ``neutrino theory of light'' title of
Jordan's papers caused his contemporaries to overlook their very interesting
bosonization/fermionization content.} (in \cite{Pais} one even finds a very
funny mocking song), Juergen had doubts about its content and was in favor of
ignoring all articles which had this title.
I looked at several of these articles and was quite surprised about their
actual content behind their metaphoric title. I finally convinced Juergen to
keep at least two of them. For somebody with a knowledge of modern concepts of
QFT these articles had nothing to do with real neutrinos or light; rather,
Jordan discovered what is nowadays called the \textit{bosonization of
Fermions} (or fermionization of Bosons), which is a typical structural property
of 2-dimensional conformal field theories. Obviously Jordan saw the potential
relevance of this property but, unlike Luttinger almost 3 decades later, he
found no physical application in the context of solid state physics, where the
formalism of low-dimensional QFTs in certain circumstances turns out to have
useful applications. In order to ``sell'' his nice field theoretic result, he
used the very metaphoric ``neutrino theory of light'' title.
An attention-attracting title which has only a metaphoric relation to the
content would have gone down well in present times, but in those days it only
led to taunts by his contemporaries. Unfortunately the ability to distinguish
between intrinsic aspects and metaphoric presentations has suffered serious
setbacks in contemporary particle physics.
An interesting episode (in which Juergen really impressed me with his astute
critical awareness of ongoing discussions on rather complex matters)
developed when Juergen asked me about Maldacena's conjecture on the AdS-CFT
correspondence. He wanted to understand how there can be a conjecture about a
property which is already covered by a theorem (Rehren's theorem). I told him
something similar to what I commented on this problem
in section 5 of this essay.
With some differences in the interpretation of Jordan's legacy having been
resolved in previous visits, I was looking forward to returning to previously
unfinished discussions on the fundamental conceptual differences between QFT
and relativistic QM which result from differences in localization and
entanglement. This time I was much better prepared than last year
\cite{interface}. I also wanted to explain the rather simple argument why the
states of string theory are pointlike generated and the name ``string'' is a
metaphor lacking any intrinsic meaning, i.e. the arguments contained in
section 6 of this essay.
With Juergen Ehlers, the theoretical physics community in Germany loses one
of its most knowledgeable and internationally renowned members. It is hard to
think of anybody else who was able to combine the traditional virtues of an
analytic critical mind with a still very present curiosity about fundamental
aspects of contemporary problems.
math/0603294
\section{Introduction}
Completely integrable multidimensional Partial Differential
Equations (PDEs) have been an attractive subject of intensive study during
the last decades, following the paper \cite{GGKM}. This popularity is due to
their remarkable mathematical properties and the variety of physical
applications which may be found in the literature.
The investigation approach considered in this paper is, in some sense,
associated with
the so-called $S$-integrable PDEs \cite{C_int0},
i.e. nonlinear PDEs which may be ``linearized'' using special
techniques, such as the Inverse Spectral Transform (IST)
\cite{ZMNP,AKNSB,AC}.
It is well known that the IST is not the only method to study
$S$-integrable PDEs. One may refer to
the Sato Theory \cite{DKJM,DKJM2,SS,OSTT},
the Symmetry Approach \cite{MShS,KM},
and the Dressing Method \cite{ZSh1,ZSh2,ZM,BM}. The latter, in turn, has
several formulations: the Zakharov-Shabat method \cite{ZSh1},
the local Riemann problem \cite{ZSh2}, and the nonlocal Riemann and
$\bar\partial$-problems \cite{ZM,BM,K}.
Classical $S$-integrable systems are basically (1+1)- and
(2+1)-dimensional. Only special types of multidimensional
$S$-integrable examples are known, such as
self-dual Yang-Mills equations \cite{YM,BPST,BZ,ADHM,DM} and the
Plebanski heavenly equation \cite{P,BK,MS}.
Recently a new type of multidimensional {\it partially} integrable
system has been found \cite{ZS}, for which the integration algorithm is based
on an integral operator with nontrivial kernel, which is a variant of the dressing method.
This recent result encourages us to search for other improvements of the dressing method.
It is well known that the dressing method was originally developed
to construct nonlinear PDEs together with their solutions.
The variant of the dressing method suggested here does not allow one to find analytic solutions of nonlinear PDEs. However,
\begin{enumerate}
\item
it gives an alternative representation of a largely arbitrary nonlinear PDE as a nonlinear system of Integro-Differential Equations (IDEs). In a particular case, this system becomes a single linear PDE whose potentials are expressed through the spectral function on the one hand and through the field of the original nonlinear PDE on the other;
\item
it relates a {\it single} linear spectral evolution equation (written for some spectral function) to a largely arbitrary nonlinear PDE.
\end{enumerate}
This is an interesting result of the paper. However, the fact that one has a single linear equation associated with a given nonlinear PDE (instead of an overdetermined linear system, as in the $S$-integrable case) results in a system of nonlinear IDEs (or PDEs) defining the evolution of the dressing functions, which is a disadvantage of our representation. Recall that the dressing functions of an $S$-integrable PDE satisfy linear PDEs. As a consequence, our (largely arbitrary) PDE may not be derived as the compatibility condition of an overdetermined linear system.
In some sense, a similar purpose (but with a different approach)
was pursued in a series of papers generalizing known
(2+1)- and (1+1)-dimensional completely integrable equations.
These are
a generalization of the Kadomtsev-Petviashvili
equation (KP) using a deformation of the classical Inverse Spectral
Transform \cite{BB},
generalizations of the Korteweg-de Vries equation (KdV) and of the
Nonlinear Schr\"odinger equation (NLS) \cite{FA},
and a generalization of the Benjamin-Ono
equation (BO) \cite{KLM}. In these papers the
evolution of the spectral data is defined by nonlinear nonlocal
equations (the spectral data are replaced by dressing functions in our case).
Here we start with the dressing method based on the integral
equation in the form of
\cite{SAF,Z2}, where we introduce an integral operator
with a different type of kernel,
allowing us to increase the dimensionality of the PDE.
As a consequence, an arbitrary function of the $x_i$ (the independent variables of the nonlinear PDE) appears in the dressing algorithm (see the function $\hat \Phi(\lambda_1;x)$ in Sec.\ref{IDE}), forcing us to introduce an extra constraint in the form of a largely arbitrary nonlinear IDE for $\hat \Phi(\lambda_1;x)$, see eq.(\ref{condition_f}).
Once the function $\hat \Phi(\lambda_1;x)$ is fixed, this constraint makes it possible to write a single nonlinear PDE for a single field $u$ expressible in terms of the dressing
and spectral functions. Note that a similar extra constraint has been introduced in \cite{ZS}, but the arbitrary function there has a quite different origin.
Below we concentrate on multidimensional
generalizations of the dressing algorithm
for the (2+1)-dimensional $N$-wave equation and the Davey-Stewartson equation (DS). However, the generalized version is applicable to largely arbitrary nonlinear PDEs.
In the next section (Sec.\ref{Sec4}) we give a general algorithm
for deriving a nonlinear $N$-wave type PDE. We introduce an
extra constraint allowing us to write a single nonlinear PDE for a single field. A characterization of the solution space of the derived nonlinear PDE is
given in Sec.\ref{IDE}. Sec.\ref{DR_DS} considers a similar generalization of the dressing method for the DS equation.
Finally, we present some conclusions in Sec.\ref{conclusions}.
\section{Derivation of multidimensional nonlinear $N$-wave equation}
\label{Sec4}
\label{DR_nl}
We start with the usual integral equation
\begin{eqnarray}
\label{Sec4:U}
\Phi(\lambda) = \int\Psi(\lambda,\nu;x) U(\nu;x)
d\nu = \Psi(\lambda,\nu;x)*U(\nu;x)=\Psi*U,
\end{eqnarray}
where $*$ denotes integration over the spectral parameter appearing
in both functions.
There are two types of parameters in this equation: first, the already
mentioned {\it spectral parameters},
denoted by Greek letters $\lambda$, $\mu$, $\nu$ (for instance
$\lambda=(\lambda_1,\dots,\lambda_{\dim \lambda})$), and, second, the {\it additional
parameters} denoted by $x$,
$x= (x_1,\dots,x_{\dim x})$. These additional parameters are the
{\it independent variables} of the resulting nonlinear PDE. Besides, we reserve
$k$ for a scalar Fourier-type parameter appearing in the
integral representations of some functions.
All functions are $Q\times Q$ matrices.
We always assume $\dim x = M$ and $ \dim \lambda= \dim \mu= \dim \nu=M+1$, where $M$ is the dimensionality of the resulting nonlinear PDE.
Eq.(\ref{Sec4:U}) is a linear equation for
the {\it spectral function} $U(\lambda;x)$, where the operator
$\Psi(\lambda,\mu;x)*$ is required to be uniquely invertible and
$\Phi$ is a diagonal matrix function
specified below.
Integration is over the whole space of the vector spectral parameter $\nu$.
The function $\Psi(\lambda,\mu;x)$ is defined by the following
formulae,
which introduce the $x$-dependence:
\begin{eqnarray}\label{Sec6:Fx}
\partial_{x_n}\Psi_{\alpha\beta}(\lambda,\mu;x) +
\Big(h^n_\alpha(\lambda) +g^n_{\alpha\beta}(\mu)\Big)
\Psi_{\alpha\beta}(\lambda,\mu;x) &=&
\Phi_\alpha(\lambda;x) B^{n}_\alpha
C_{\alpha\beta}(\mu;x),\\\nonumber
&&1\le n \le M.
\end{eqnarray}
Here $C(\mu;x)$ is a new function, which will be characterized
below; $h^n(\lambda)$ are diagonal
and $g^n(\mu)$ are arbitrary matrix
functions of their arguments; $B^n$ are constant diagonal
matrices.
The short form of eq.(\ref{Sec6:Fx}) reads:
\begin{eqnarray}\label{Sec6:Fx2}
&& L^{n}_{\alpha\beta}(\lambda,\mu) \Psi_{\alpha\beta}(\lambda,\mu;x)=
\Phi_\alpha(\lambda;x) B^{n}_\alpha
C_{\alpha\beta}(\mu;x),\;\;\\\nonumber
&& L^{n}_{\alpha\beta}(\lambda,\mu) (*) =
\Big(\partial_{x_{n}} +
h^n_\alpha(\lambda)+ g^n_{\alpha\beta}(\mu) \Big)(*),\;\;1\le n
\le M.
\end{eqnarray}
Remark that the derivatives $\partial_{x_i}\Psi(\lambda,\mu;x)$
are not separated functions
of the spectral parameters, unlike in the $S$-integrable case
\cite{ZS,Z2}.
Being an overdetermined system of PDEs for the function
$\Psi(\lambda,\mu;x)$, eqs.(\ref{Sec6:Fx2}) imply
compatibility conditions, which are the following:
\begin{eqnarray}\label{Sec4:comp1}
L^{n}_{\alpha\beta}(\lambda,\mu) \Big(\Phi_\alpha(\lambda;x)
B^{j}_\alpha C_{\alpha\beta}(\mu;x)\Big)&=&
L^{j}_{\alpha\beta}(\lambda,\mu) \Big(\Phi_\alpha(\lambda;x) B^{n}_\alpha
C_{\alpha\beta}(\mu;x)\Big),\;\;n\neq j.
\end{eqnarray}
Without loss of generality,
we put $j=1$,
and
\begin{eqnarray}
\label{Sec1:simplification}
B^1=I,\;\; h^1(\lambda)=g^1(\lambda)=0,
\end{eqnarray}
where $I$ is the unit matrix.
Since each term in the expanded form of eqs.(\ref{Sec4:comp1}) is a separated
function of the
parameters $\lambda$ and $\mu$, these equations with $j=1$
are equivalent to two sets of
equations:
\begin{eqnarray}\label{Sec6:Phi_x}\label{Phi_x}
&&
\partial_{x_n}\Phi(\lambda;x) + h^n(\lambda) \Phi(\lambda;x) -
\partial_{x_1}\Phi (\lambda;x)
B^{n}=0,\;\;1<n\le M,
\\\label{c_x}
&& \partial_{x_n}C(\mu;x) +
C^{1n}(\mu;x) -
B^{n}\partial_{x_1}C(\mu;x) =
0,\;\;1<n\le M,\;\;
\end{eqnarray}
where
\begin{eqnarray}
\label{c^n}
C^{1n}_{\alpha\beta}(\mu;x) = C_{\alpha\beta}(\mu;x)
g^n_{\alpha\beta}(\mu).
\end{eqnarray}
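To make the separation argument explicit, here is a sketch of the intermediate step (componentwise, with the arguments $\lambda$, $\mu$ and $x$ suppressed): using the simplifications (\ref{Sec1:simplification}), the expanded form of eqs.(\ref{Sec4:comp1}) with $j=1$ reads
\begin{eqnarray*}
&&\Big(\partial_{x_n}\Phi_\alpha + h^n_\alpha\Phi_\alpha -
\partial_{x_1}\Phi_\alpha B^n_\alpha\Big) C_{\alpha\beta} +
\Phi_\alpha\Big(\partial_{x_n}C_{\alpha\beta} +
g^n_{\alpha\beta}C_{\alpha\beta} -
B^n_\alpha\partial_{x_1}C_{\alpha\beta}\Big)=0.
\end{eqnarray*}
The first bracket depends only on $(\lambda;x)$ while $C_{\alpha\beta}$ depends on $(\mu;x)$, and conversely for the second term; for generic dressing functions both brackets must therefore vanish separately, which reproduces eqs.(\ref{Sec6:Phi_x}) and (\ref{c_x}) with (\ref{c^n}).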
Eqs.(\ref{Sec6:Phi_x}-\ref{c^n}) define
$\Phi$ and $C$. We refer to the functions $\Phi$, $C$ and $\Psi$ as the {\it dressing functions}, where $\Psi$ is expressed in terms of $\Phi$ and $C$ due to eqs.(\ref{Sec6:Fx}).
Thus we have specified all functions appearing in
eqs.(\ref{Sec4:U}) and (\ref{Sec6:Fx}). Now we demonstrate how the
linear integral equation (\ref{Sec4:U}) is related to an
appropriate multidimensional nonlinear PDE written for
fields expressible in terms of the spectral function $U(\lambda;x)$ and the
dressing functions.
The system of nonlinear equations is generated by eq.(\ref{Sec6:Phi_x}).
The derivation is very similar to the derivation of the classical
integrable equations \cite{ZS,Z2}. First of all, we use the
representation of $\Phi$ as $\Psi*U$, see eq.(\ref{Sec4:U}).
Then, using equations
(\ref{Sec6:Fx})
for the derivatives $\Psi_{x_n}$,
we end up with homogeneous equations
of the form
\begin{eqnarray}\label{Sec1:PsiEn}
&&\Psi(\lambda,\mu;x)* E_n(\mu;x) =0,\;\;\\\nonumber
&& E_n(\lambda,x)=
U_{x_{n}}(\lambda,x) - U_{x_{1}}(\lambda,x) B^n +
U(\lambda,x) [B^{n},u(x)]
- {\cal{G}}^n(\lambda,x),\;\;1<n\le M,
\end{eqnarray}
where the function $u$ is related to the spectral function by
the formula
\begin{eqnarray}\label{Sec6:u}
u(x)= C(\lambda,x)* U(\lambda,x)
\end{eqnarray}
and functions ${\cal{G}}^n$ satisfy the following equations:
\begin{eqnarray}\label{G}
\Psi(\lambda,\mu;x)* {\cal{G}}^n(\mu;x) &=&
\Psi^n(\lambda,\mu;x)*
U(\mu;x),\;\;1<n\le M,\\\label{Psi^n}
\Psi^n_{\alpha\beta}(\lambda,\mu;x)&=&
\Psi_{\alpha\beta}(\lambda,\mu;x) \; g^n_{\alpha\beta}(\mu).
\end{eqnarray}
Later, the function $u$ will be the field of the
nonlinear PDE.
Eqs.(\ref{G},\ref{Psi^n}), along with eq.(\ref{Sec4:U}), will be used in
Sec.\ref{IDE} to analyse the solution space of the nonlinear system.
Inverting the operator $\Psi*$ in
eqs.(\ref{Sec1:PsiEn}) one gets
\begin{eqnarray}\label{Sec6:U2}
E_n(\lambda;x):= U_{x_{n}}(\lambda;x) - U_{x_{1}}(\lambda;x) B^n +
U(\lambda;x) [B^{n},u(x)]
-{\cal{G}}^n(\lambda;x) =0, \;\;1<n \le M.
\end{eqnarray}
In the case of the classical dressing method,
a nonlinear integrable PDE can be obtained for the function $u$ by
applying $C(\lambda;x)*$ to (\ref{Sec6:U2})
and using eq.(\ref{c_x}) for
$C_{x_n}$, $n>1$. Doing the same, one gets in our case:
\begin{eqnarray}\label{eq:u}
E^u_n(x)&:=& u_{x_{n}}(x) - u_{x_{1}}(x) B^n +u(x) [B^{n},u(x)]
=\tilde
{\cal{H}}^n(x) ,\\\nonumber
&& \tilde
{\cal{H}}^n(x)=[B^n,u^1(x)] -C^{1n}(\mu;x)
*U(\mu;x)
+C(\mu;x)*{\cal{G}}^n(\mu;x), \;\;1<n\le M,
\end{eqnarray}
where the function $u^1$ is related to the spectral function by a
formula similar to eq.(\ref{Sec6:u}):
\begin{eqnarray}
u^1(x)=C_{x_1}(\lambda;x)*U(\lambda;x).
\end{eqnarray}
The functions $u^1$ and $\tilde{\cal{H}}^n$ are ``intermediate''
functions which will be eliminated from the final system of
nonlinear PDEs.
System (\ref{eq:u}) has an obvious limit to the classical
$(2+1)$-dimensional $S$-integrable $N$-wave equation. In fact,
if $g^n=0$ for all $n$ (i.e. ${\cal{G}}^n(\lambda;x)=0$,
$\tilde{\cal{H}}^n(x)=[B^n,u^1(x)]$),
then we may eliminate $u^1$
using two of the equations (\ref{eq:u}), $E^u_n$ and $E^u_m$, $n\neq m$:
\begin{eqnarray}\label{Sec0:Nw1}
[u_{x_n},B^m] - [u_{x_m},B^n] + B^m u_{x_1} B^n - B^n u_{x_1}
B^m
-[[u,B^m],[u,B^n]]=0.
\end{eqnarray}
This is the classical (2+1)-dimensional completely integrable $N$-wave
equation, which admits the reduction $u_{\beta\alpha}=\bar u_{\alpha\beta}$,
where the bar means complex conjugation;
see, for instance, \cite{AC}.
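The elimination of $u^1$ behind eq.(\ref{Sec0:Nw1}) can be sketched as follows. Since the matrices $B^n$ are diagonal, one has, componentwise,
\begin{eqnarray*}
[[B^n,u^1],B^m]_{\alpha\beta} =
-(B^n_\alpha-B^n_\beta)(B^m_\alpha-B^m_\beta)\,u^1_{\alpha\beta},
\end{eqnarray*}
which is symmetric under $n\leftrightarrow m$. Hence the combination $[E^u_n(x),B^m]-[E^u_m(x),B^n]$ of eqs.(\ref{eq:u}) with $\tilde{\cal{H}}^n=[B^n,u^1]$ is free of $u^1$, and a direct computation (using that the diagonal matrices $B^n$ and $B^m$ commute) reproduces eq.(\ref{Sec0:Nw1}).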
System
(\ref{Sec6:U2}) with ${\cal{G}}^n=0$ becomes a linear overdetermined system for eq.(\ref{Sec0:Nw1}),
where $U(\lambda;x)$ is a spectral function, i.e. eq.(\ref{Sec0:Nw1})
is the compatibility condition for $E_n$ and $E_m$. This is a well-known common
feature of $S$-integrable models: they may be derived both
algebraically, through the compatibility condition of an overdetermined
linear system, and using the dressing method.
However, if $g^n \neq 0$ for all $n$, then ${\cal{G}}^n\neq 0$ and system (\ref{Sec6:U2})
may not be considered as a linear
overdetermined system, since it involves a set of spectral functions,
such as $U(\lambda;x)$ and ${\cal{G}}^n(\lambda;x)$.
As a consequence, the nonlinear eqs.(\ref{eq:u}) contain the extra
functions
$\tilde{\cal{H}}^n(x)$ and may not be obtained
as the compatibility condition of the system (\ref{Sec6:U2})
through commutation of the linear operators appearing in (\ref{Sec6:U2}).
So, similarly to \cite{ZS}, the only way to derive system (\ref{eq:u}) from
eq.(\ref{Sec6:U2}) is the dressing
method.
The derived system (\ref{eq:u}) consists of ($M-1$)
equations for $M$ fields, which are $u$ and $\tilde {\cal{H}}^n$,
$1<n\le M$. In other words, it is not complete.
In order to write a single nonlinear PDE for the field $u$
we involve another important deviation from the classical
approach.
Let us split $C(\lambda;x)$ into two factors:
\begin{eqnarray}\label{CGG}
&&C_{\alpha\beta}(\mu;x) =G^1_\alpha(\mu_1;x)
G^2_{\alpha\beta}(\mu;x), \\\nonumber
&& \partial_{x_n}G^1(\mu_1;x) -
B^{n}\partial_{x_1}G^1(\mu_1;x) =
0,\;\;1<n\le M,\;\; \\\nonumber
&& \partial_{x_n}G^2(\mu;x) +
G^{1n}(\mu;x) =
0,\;\;1<n\le M,\;\;G^{1n}_{\alpha\beta}(\mu;x) =
G^2_{\alpha\beta}(\mu;x) g^n_{\alpha\beta}(\mu), \\\nonumber
&& \partial_{x_1}G^2(\mu;x) =0
\end{eqnarray}
where eqs.(\ref{CGG}b-d) appear due to eq.(\ref{c_x}).
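As a consistency check, the factorization (\ref{CGG}) indeed reproduces eq.(\ref{c_x}); componentwise, with $G^{1n}_{\alpha\beta}=G^2_{\alpha\beta}\,g^n_{\alpha\beta}$, one has
\begin{eqnarray*}
\partial_{x_n}C_{\alpha\beta} &=& (\partial_{x_n}G^1_\alpha)G^2_{\alpha\beta}
+G^1_\alpha\,\partial_{x_n}G^2_{\alpha\beta} =
B^n_\alpha(\partial_{x_1}G^1_\alpha)G^2_{\alpha\beta}
-G^1_\alpha G^{1n}_{\alpha\beta},\\
B^n_\alpha\,\partial_{x_1}C_{\alpha\beta} &=&
B^n_\alpha(\partial_{x_1}G^1_\alpha)G^2_{\alpha\beta},
\end{eqnarray*}
where $\partial_{x_1}G^2=0$ has been used. Subtracting these and noting that $C^{1n}_{\alpha\beta}=C_{\alpha\beta}\,g^n_{\alpha\beta}=G^1_\alpha G^{1n}_{\alpha\beta}$ gives $\partial_{x_n}C+C^{1n}-B^n\partial_{x_1}C=0$, i.e. eq.(\ref{c_x}).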
Multiplying eq.(\ref{Sec6:U2}) by $G^2(\lambda;x)$ from the left and
integrating over
$\tilde \lambda= (\lambda_2,\dots,\lambda_{M+1})$,
one gets
\begin{eqnarray}\label{U2}
\tilde E_n(\lambda_1;x)&:=& \hat U_{x_{n}}(\lambda_1;x) - \hat
U_{x_{1}}(\lambda_1;x) B^n +
\hat U(\lambda_1;x) [B^{n},u(x)]
-\hat {\cal{F}}^n(\lambda_1;x) =0, \;\;\\\nonumber
&& 1<n\le M,
\end{eqnarray}
where
\begin{eqnarray}\label{hatU:def}
\hat U(\lambda_1;x)&=& \int G^2(\lambda;x) U(\lambda;x) d\tilde
\lambda,\\\nonumber
\hat U^{1n}(\lambda_1;x)&=&
-\int G^2_{x_n}(\lambda;x) U(\lambda;x) d\tilde\lambda=
\int G^{1n}(\lambda;x) U(\lambda;x) d\tilde\lambda
,
\\\nonumber
\hat {\cal{G}}^n(\lambda_1;x)&=& \int G^2(\lambda;x)
{\cal{G}}^n(\lambda;x)d\tilde \lambda,\;\;
\hat {\cal{F}}^n(\lambda_1;x) = \hat {\cal{G}}^n(\lambda_1;x)-
\hat U^{1n}(\lambda_1;x),\\\nonumber
&&
1<n\le M.
\end{eqnarray}
We will see in the next section that the off-diagonal parts of
$\hat U(\lambda_1;x)$ and
$\hat {\cal{G}}^n(\lambda_1;x)$ have an arbitrary dependence on $x$. Thus we are able to introduce
one more relation among them. For instance,
let
\begin{eqnarray}\label{condition}
\sum_{i=2}^{M}S^i_{\alpha\beta}\Big(\hat
{\cal{F}}^i_{\alpha\beta}(\lambda_1;x)
-\lambda_1\hat U_{\alpha\beta}(\lambda_1;x)
(B^i_\alpha-B^i_\beta)
\Big)
=0,\;\;\alpha\neq\beta,
\end{eqnarray}
where $S^i_{\alpha\beta}$ are constants.
Then eq.(\ref{U2}) gives ($\alpha\neq \beta$):
\begin{eqnarray}\label{lin:U}
\sum_{i=2}^{M}S^i_{\alpha\beta}
\Big( \partial_{x_i} \hat U_{\alpha\beta}
(\lambda_1;x) - \partial_{x_1}\hat U_{\alpha\beta}(\lambda_1;x)
B^i_\beta +
\sum_{\gamma=1}^Q\hat U_{\alpha\gamma}(\lambda_1;x)
u_{\gamma\beta}(x) (B^{i}_\gamma-B^{i}_\beta) -&&
\\\nonumber
\lambda_1\hat U_{\alpha\beta}(\lambda_1;x)
(B^i_\alpha-B^i_\beta)\Big) &=&0, \\\label{lin:Ub}
\sum_{i=2}^{M}
S^i_{\alpha\beta}(B^i_\alpha-B^i_\beta) &=&0.
\end{eqnarray}
Eq.(\ref{lin:U}) is a linear equation for the spectral function $\hat U^{of}$; the additional relation (\ref{lin:Ub}) is introduced in order to eliminate the diagonal part of $\hat U$ from the nonlinear term of (\ref{lin:U}).
Multiplying this equation by $G^1_\alpha(\lambda_1;x)$ from the left, integrating over
$\lambda_1$ and assuming that $G^1_{x_1}(\lambda_1;x)=\lambda_1 G^1(\lambda_1;x)$, we obtain:
\begin{eqnarray}\label{nl}
\sum_{i=2}^{M}S^i_{\alpha\beta}
\Big( \partial_{x_i} u_{\alpha\beta}
(x) - \partial_{x_1}u_{\alpha\beta}(x)
B^i_\beta +
\sum_{{\gamma=1}\atop{\gamma\neq\alpha\neq\beta}}^Q u_{\alpha\gamma}(x)
u_{\gamma\beta}(x) (B^{i}_\gamma-B^{i}_\beta) \Big) =0,\;\;\alpha\neq\beta
\end{eqnarray}
which becomes the $N$-wave equation if, along with (\ref{lin:Ub}), one requires
\begin{eqnarray}
S^i_{\alpha\beta} =
S^i_{\beta\alpha},\;\;
u_{\beta\alpha}=\bar u_{\alpha\beta}.
\end{eqnarray}
Thus, the nonlinear eq.(\ref{nl}) is equivalent to the linear eq.(\ref{lin:U}), where the spectral function $\hat U^{of}(\lambda_1;x)$ is related to the dressing functions by eqs.(\ref{Sec4:U}-\ref{c^n},\ref{CGG},\ref{hatU:def}). A detailed discussion of this relation is given in the next subsection.
\subsection{Analysis of the system (\ref{Sec4:U}-\ref{c^n},\ref{CGG},\ref{hatU:def},\ref{lin:U})}
\label{IDE}
\label{Sec63}
In this section we characterize the solution space of the nonlinear equation
(\ref{nl}) in terms of the dressing functions $\Psi$, $\Phi$ and $C$.
The first step is to solve equations
(\ref{Sec6:Fx},\ref{Sec6:Phi_x},\ref{c_x})
for $\Psi(\lambda,\mu;x)$, $\Phi(\lambda;x)$ and $C(\mu;x)$.
Eq.(\ref{Sec6:Fx}) is a nonhomogeneous equation
for $\Psi(\lambda,\mu;x)$, so we take the following solution:
\begin{eqnarray}\label{Sec5:F}
\Psi_{\alpha\beta}(\lambda,\mu;x) &=& \partial_{x_1}^{-1}
\Big(\Phi_\alpha(\lambda;x)
C_{\alpha\beta}(\mu;x)\Big) +
\delta_{\alpha\beta}\delta(\lambda-\mu)
e^{-\sum\limits_{j=2}^M \Big(h^j_\alpha(\lambda) +
g^j_{\alpha\beta}(\mu)\Big)},\\\nonumber
&&
\delta(\lambda-\mu)= \prod_{i=1}^{M+1}\delta(\lambda_i-\mu_i)
\end{eqnarray}
(recall that the spectral parameters are $(M+1)$-dimensional),
where $\delta_{\alpha\beta}$ is the Kronecker delta; the first term is a particular solution of the nonhomogeneous equation, while the second term is a particular solution of the homogeneous equation associated with eq.(\ref{Sec6:Fx}).
Function (\ref{Sec5:F}) is not the general solution of (\ref{Sec6:Fx}), but it is sufficient for
our algorithm.
The solutions of eqs.(\ref{Sec6:Phi_x},\ref{c_x}),
in view of (\ref{CGG}), read
\begin{eqnarray}\label{Sec4:chi_sol_t}
\label{Phi_def}
\Phi_{\alpha}(\lambda;x)&=&
\int \Phi^0_{\alpha}(\lambda,k)
e^{K^\Phi_{\alpha}(\lambda,k;x)} d k ,\;\;
K^\Phi_{\alpha}(\lambda,k;x)=k x_1+ \sum_{j=2}^{M}
\left(k B^{j}_\alpha - h^j_\alpha(\lambda) \right)
x_j,\\\label{Sec4:c_sol}
C_{\alpha\beta}(\mu;x)&=& G^1_\alpha(\mu_1;x) G^2_{\alpha\beta}(\mu;x)
,\\\nonumber
G^1_\alpha(\mu_1;x) &=&
e^{K^{G^1}_\alpha(\mu_1;x)},\;\;
G^2_{\alpha\beta}(\mu;x)=
e^{K^{G^2}_{\alpha\beta}(\mu;x)} C^0_{\alpha\beta}(\mu)
,\\\nonumber
&&
K^{G^1}_\alpha(\mu_1;x)=\mu_1\left(x_1 + \sum_{i=2}^{M} B^i_\alpha
x_i \right) ,\;\;
K^{G^2}_{\alpha\beta}(\mu;x)=-\sum_{j=2}^{M}
g^j_{\alpha\beta}(\mu) x_{j},
\end{eqnarray}
where the parameter $k$ is a scalar.
Hereafter we take
\begin{eqnarray}\label{Sec4:cphi}
\Phi^0(\lambda,k) =
\delta(\lambda_2-k) I.
\end{eqnarray}
Thus expression
(\ref{Sec5:F}) may be written in the explicit
form:
\begin{eqnarray}\label{Sec6:Fexpl}
\Psi_{\alpha\beta}(\lambda,\mu;x) &=&
\frac{e^{K^\Phi_{\alpha}(\lambda,\lambda_2;x)+
K^{G^1}_\alpha(\mu_1;x)+K^{G^2}_{\alpha\beta}(\mu;x)} C^0_{\alpha\beta}(\mu)
}{\lambda_2+\mu_1}
+
\delta_{\alpha\beta}\delta(\lambda-\mu)
e^{-\sum\limits_{j=2}^M \Big(h^j_\alpha(\lambda) +
g^j_{\alpha\beta}(\mu)\Big)}.
\end{eqnarray}
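The origin of the denominator in (\ref{Sec6:Fexpl}) is worth spelling out (this step is implicit in the text): with the choice (\ref{Sec4:cphi}), the only $x_1$-dependence of the product $\Phi_\alpha(\lambda;x) C_{\alpha\beta}(\mu;x)$ enters through the factor $e^{(\lambda_2+\mu_1)x_1}$ (the terms $\lambda_2 x_1$ from $K^\Phi_\alpha$ and $\mu_1 x_1$ from $K^{G^1}_\alpha$), so that
\begin{eqnarray*}
\partial_{x_1}^{-1}\Big(e^{(\lambda_2+\mu_1)x_1}\, f\Big)=
\frac{e^{(\lambda_2+\mu_1)x_1}}{\lambda_2+\mu_1}\, f,
\end{eqnarray*}
where $f$ denotes the $x_1$-independent part of the product.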
Due to the last term in eq.(\ref{Sec6:Fexpl}), eq.(\ref{Sec4:U}) contains the term $e^{-\sum\limits_{j=2}^M\Big( h^j_\alpha(\lambda)+
g^j_{\alpha\alpha}(\lambda)\Big) x_j} U_{\alpha\beta}(\lambda;x)$. We would like to eliminate the factor in front of $U$ in this term for the convenience of subsequent constructions.
To this end, we
multiply
eqs.(\ref{Sec4:U},\ref{G}) by $e^{\sum\limits_{j=2}^M\Big( h^j_\alpha(\lambda) +
g^j_{\alpha\alpha}(\lambda)\Big) x_j}$:
\begin{eqnarray}\label{Sec4:E0^U}
E^U(\lambda;x)
&:=& U(\lambda;x)= -
\partial^{-1}_{x_1} \Big(\Phi^1(\lambda;x) C(\mu;x)\Big)*U(\mu;x) +
\Phi^1(\lambda;x), \\
\label{G2_2}
E^{G^n}(\lambda;x) &:=& {\cal{G}}^n(\lambda;x) = -
\partial^{-1}_{x_1} \Big(\Phi^1(\lambda;x)
C(\mu;x)\Big)*{\cal{G}}^n(\mu;x)
+
\\\nonumber
&&
\partial^{-1}_{x_1} \Big(\Phi^1(\lambda;x)
C^{1n}(\mu;x)\Big)*
U(\mu;x) +
U^n(\lambda;x),\;\;n>1 ,
\end{eqnarray}
where
\begin{eqnarray}\label{Phi^1}
\Phi^1_{\alpha}(\lambda;x)&=&e^{\sum\limits_{j=2}^{M}
\Big(h^j_\alpha(\lambda) + g^j_{\alpha\alpha}(\lambda)\Big)}
\Phi_{\alpha}(\lambda;x)=
e^{K^{\Phi^1}_{\alpha}(\lambda;x)} ,\;\;\\\nonumber
&&
K^{\Phi^1}_{\alpha}(\lambda;x)= \lambda_2 x_1+ \sum_{j=2}^{M}
\left(\lambda_2 B^{j}_\alpha + g^j_{\alpha\alpha}(\lambda) \right)
x_j,\\
\label{U^n}
U^n_{\alpha\beta} (\lambda;x)&=&
g^n_{\alpha\alpha}(\lambda) U_{\alpha\beta}(\lambda;x).
\end{eqnarray}
Below we will need the function
\begin{eqnarray}
G^{1n}_{\alpha\beta}(\lambda;x) = G^2_{\alpha\beta}(\lambda;x) g^n_{\alpha\beta}(\lambda).
\end{eqnarray}
Applying $\int d\tilde \lambda G^2(\lambda;x) \cdot$ to eqs.
(\ref{Sec4:E0^U},\ref{G2_2}) and $\int d\tilde \lambda G^{1n}(\lambda;x) \cdot$
to (\ref{Sec4:E0^U}), one gets equations for $\hat U$, $\hat {\cal{G}}^n$ and
$\hat U^n$:
\begin{eqnarray}\label{hatU}
\hat U(\lambda_1;x)&=& -
\int\partial^{-1}_{x_1} \Big(\hat\Phi(\lambda_1;x)
G^1(\mu_1;x)\Big)\hat U(\mu_1;x)d\mu_1 +
\hat\Phi(\lambda_1;x),
\\\label{hatG2_2}
\hat{\cal{G}}^n(\lambda_1;x) &=& -
\int\partial^{-1}_{x_1} \Big(\hat\Phi(\lambda_1;x)
G^1(\mu_1;x)\Big)\Big(\hat{\cal{G}}^n(\mu_1;x)
-
\hat U^{1n}(\mu_1;x)\Big) d\mu_1 +
\\\nonumber
&&
\hat U^{2n}(\lambda_1;x),\;\;1<n\le M ,
\\
\label{hatG^n}
\hat U^{1n}(\lambda_1;x)&=& -
\int\partial^{-1}_{x_1} \Big(\hat\Phi^{1n}(\lambda_1;x)
G^1(\mu_1;x)\Big)\hat U(\mu_1;x) d\mu_1 +
\hat\Phi^{1n}(\lambda_1;x),\;\; 1<n\le M,
\end{eqnarray}
where
\begin{eqnarray}
&&
\hat \Phi(\lambda_1;x)=\int G^2(\lambda;x) \Phi^1(\lambda;x)
d\tilde\lambda,\;\;\hat \Phi^{1n}(\lambda_1;x)=\int G^{1n}(\lambda;x) \Phi^1(\lambda;x)
d\tilde\lambda,\\\nonumber
&&
\hat U^{2n}(\lambda_1;x)=
\int G^2(\lambda;x) U^{n}(\lambda;x) d\tilde\lambda=
\int G^{2n}(\lambda;x) U(\lambda;x) d\tilde\lambda
,
\\\nonumber
&&
G^{2n}_{\alpha\beta}(\lambda;x)= G^2_{\alpha\beta}(\lambda;x) g^n_{\beta\beta}(\lambda).
\end{eqnarray}
Equation for $\hat U^{2n}$ follows from eq.(\ref{Sec4:E0^U}) after applying
$\int d\tilde \lambda G^{2n}(\lambda;x) \cdot$:
\begin{eqnarray}\label{hatU^2n}
\hat U^{2n}(\lambda_1;x)&=& -
\int\partial^{-1}_{x_1} \Big(\hat\Phi^{2n}(\lambda_1;x) G^1(\mu_1;x)\Big) \hat U(\mu_1;x)d\mu_1 +
\hat\Phi^{2n}(\lambda_1;x),\;\;n>1,\\\nonumber
&&
\hat \Phi^{2n}(\lambda_1;x)=\int G^{2n}(\lambda;x) \Phi^1(\lambda;x) d\tilde\lambda.
\end{eqnarray}
By construction,
the function $\hat \Phi(\lambda_1;x)$ has an arbitrary dependence on the variables $x$ if, for instance, $g^i_{\alpha\beta}(\lambda)= \lambda_{i+1} \hat g^i_{\alpha\beta}$, where
$\hat g^i_{\alpha\beta}$ are constants, $i=2,\dots,M$. Owing to this fact, $\hat \Phi(\lambda_1;x)$ may solve
equation (\ref{condition}). Let us transform eq.(\ref{condition})
by substituting eqs.(\ref{hatU}-\ref{hatU^2n}):
\begin{eqnarray}
\label{condition_f}
\sum_{i=2}^{M}S^i_{\alpha\beta}\Big\{
\partial_{x_i}\hat \Phi_{\alpha\beta}(\lambda_1;x)-\partial_{x_1} \hat \Phi_{\alpha\beta}(\lambda_1;x) B^i_\beta - \lambda_1 \hat \Phi_{\alpha\beta}(\lambda_1;x)(B^i_\alpha- B^i_\beta)-&&\\\nonumber
\int\sum_{\gamma=1}^Q\Big[
\partial_{x_1}^{-1} \Big(\hat
\Phi_{\alpha\gamma}(\lambda_1;x) G^1_{\gamma}(\mu_1;x)\Big) \hat{\cal{F}}^i_{\gamma\beta}(\mu_1;x)+&&\\\nonumber
\partial_{x_1}^{-1} \Big(\big(\partial_{x_i}\hat \Phi_{\alpha\gamma}(\lambda_1;x)-\partial_{x_1} \hat \Phi_{\alpha\gamma}(\lambda_1;x) B^i_\gamma\big)G^1_{\gamma}(\mu_1;x)\Big) \hat U_{\gamma\beta}(\mu;x)-
&&\\\nonumber
\lambda_1 \partial_{x_1}^{-1} \Big(\hat \Phi_{\alpha\gamma}(\lambda_1;x) G^1_{\gamma}(\mu_1;x)\Big) \hat U_{\gamma\beta}(\mu_1;x)
(B^i_\alpha- B^i_\beta)
\Big]d\mu_1 \Big\} &=& 0
\end{eqnarray}
where $\alpha\neq\beta$
and eqs.(\ref{hatU}-\ref{hatU^2n}) give us
\begin{eqnarray}\label{U^23n}
\hat{\cal{F}}^n(\lambda_1;x) &=& -
\int\partial^{-1}_{x_1} \Big(\hat\Phi(\lambda_1;x)
G^1(\mu_1;x)\Big)\hat{\cal{F}}^n(\mu_1;x) d\mu_1
- \\\nonumber
&&
\int\partial^{-1}_{x_1} \Big(\hat\Phi_{x_n}(\lambda_1;x)-\hat\Phi_{x_1}(\lambda_1;x)B^n
\Big) G^1(\mu_1;x) \hat U(\mu_1;x) d\mu_1
+ \\\nonumber
&&
\hat\Phi_{x_n}(\lambda_1;x)-\hat\Phi_{x_1}(\lambda_1;x)B^n,\;\;
1<n\le M.
\end{eqnarray}
Deriving eqs.(\ref{condition_f},\ref{U^23n}), we took into account the obvious relation
\begin{eqnarray}
\hat\Phi^{2n}_{\alpha\beta} - \hat \Phi^{1n}_{\alpha\beta} =
\partial_{x_n} \hat \Phi_{\alpha\beta}-\partial_{x_1}
\hat \Phi_{\alpha\beta} B^n_\beta, \;\;\alpha\neq \beta, \;\; 1<n\le M .
\end{eqnarray}
Note that the diagonal elements of $\hat\Phi$,
\begin{eqnarray}
\label{diag}
\hat \Phi_{\alpha\alpha}(\lambda_1;x) =\int C^0_{\alpha\alpha}(\lambda) e^{\lambda_2 \Big(x_1 + \sum_{i=2}^M B^i_\alpha x_i\Big)}d\tilde \lambda,
\end{eqnarray}
may be arbitrary functions of a single independent variable, and
$\hat\Phi^{2n}_{\alpha\alpha} - \hat \Phi^{1n}_{\alpha\alpha} =0$.
System (\ref{hatU},\ref{condition_f},\ref{U^23n})
represents a complete nonlinear system of equations allowing one to find $\hat U(\lambda_1;x)$ and $\hat\Phi(\lambda_1;x)$. Since $u(x) = \int G^1(\lambda_1;x) \hat U(\lambda_1;x) d\lambda_1$, this system is an alternative form of the nonlinear equation (\ref{nl}).
In the particular case $C^0(\mu)=\delta(\mu_1) \tilde C^0(\tilde \mu)$, one has $G^1(0;x)=I$, and this system reduces to a PDE for $\varphi(\lambda_1;x)= \partial_{x_1}^{-1} \hat \Phi(\lambda_1;x)$ (below $\alpha\neq\beta$):
\begin{eqnarray}\label{RhatU}
\hat U(\lambda_1;x)= -
\varphi(\lambda_1;x)
u(x) +
\partial_{x_1}\varphi(\lambda_1;x),\;\;\; u(x)=\hat U(0;x),\hspace{3cm}&& \\
\label{condition_f2}
\sum_{i=2}^{M}S^i_{\alpha\beta}\Big\{\partial_{x_1}\Big(
\partial_{x_i}\varphi_{\alpha\beta}(\lambda_1;x)-\partial_{x_1} \varphi_{\alpha\beta}(\lambda_1;x) B^i_\beta-\lambda_1\varphi_{\alpha\beta}(\lambda_1;x)(B^i_\alpha- B^i_\beta)\Big)-&&\\\nonumber
\sum_{\gamma=1}^Q\Big[
\varphi_{\alpha\gamma}(\lambda_1;x) \hat{\cal{F}}^i_{\gamma\beta}(0;x)+&&\\\nonumber \Big(\partial_{x_i}\varphi_{\alpha\gamma}(\lambda_1;x)-\partial_{x_1} \varphi_{\alpha\gamma}(\lambda_1;x) B^i_\gamma\Big) u_{\gamma\beta}(x)-
\lambda_1 \varphi_{\alpha\gamma}(\lambda_1;x) u_{\gamma\beta}(x)
(B^i_\alpha- B^i_\beta)
\Big] \Big\} &=& 0,
\end{eqnarray}
\begin{eqnarray}
\label{RU^23n}
\hat{\cal{F}}^n(\lambda_1;x)&=& -
\varphi(\lambda_1;x)
\hat{\cal{F}}^n(0;x)
-\\\nonumber
&&
\Big(\varphi_{x_n}(\lambda_1;x)-
\varphi_{x_1}(\lambda_1;x)B^n\Big) u(x)
+ \varphi_{x_1x_n}(\lambda_1;x)-
\varphi_{x_1x_1}(\lambda_1;x)B^n
,\\\nonumber
&&
1<n\le M,
\end{eqnarray}
Eqs. (\ref{RhatU},\ref{RU^23n}) with $\lambda_1=0$ give us
\begin{eqnarray}\label{potentials}
u(x)&=&\Big(I+\varphi(0;x)\Big)^{-1}\varphi_{x_1}(0;x),
\\\nonumber
\hat{\cal{F}}^n(0;x)
&=&
\Big(I+\varphi(0;x)\Big)^{-1}
\Big[\varphi_{x_1x_n}(0;x)-
\varphi_{x_1x_1}(0;x)B^n-\\\nonumber
&&
\Big(\varphi_{x_n}(0;x)-
\varphi_{x_1}(0;x)B^n\Big) u(x)
\Big]
\end{eqnarray}
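The first of eqs.(\ref{potentials}) simply solves the linear relation (\ref{RhatU}) at $\lambda_1=0$: since $u(x)=\hat U(0;x)$, one has $u=-\varphi(0;x)u+\varphi_{x_1}(0;x)$, i.e. $(I+\varphi(0;x))u=\varphi_{x_1}(0;x)$. A minimal numerical sanity check of this algebraic step (not part of the original derivation; random matrices stand in for $\varphi(0;x)$ and $\varphi_{x_1}(0;x)$ at a fixed point $x$):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 2  # matrix size, as in the 2x2 examples of the text

# Stand-ins for phi(0;x) and phi_{x_1}(0;x) at a fixed point x.
phi0 = 0.1 * rng.standard_normal((Q, Q))
phi0_x1 = rng.standard_normal((Q, Q))

# Eq.(potentials)a: u = (I + phi(0;x))^{-1} phi_{x_1}(0;x)
u = np.linalg.solve(np.eye(Q) + phi0, phi0_x1)

# Consistency with eq.(RhatU) at lambda_1 = 0, where hat U(0;x) = u(x):
# u must equal -phi(0;x) u + phi_{x_1}(0;x).
assert np.allclose(u, -phi0 @ u + phi0_x1)
```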
We see that eq.(\ref{condition_f2}) is a linear PDE for $\varphi^{of}(\lambda_1;x)$, $\lambda_1\neq 0$, with ``boundary'' function $\varphi(0;x)$ satisfying (\ref{potentials}a).
By construction, if $\lambda_1=0$, then eq.(\ref{condition_f2}) is projected onto (\ref{nl}), i.e. calculating the evolution of $\varphi^{of}(0;x)$
is equivalent to solving the original nonlinear PDE (\ref{nl}). From another point of view, however, this evolution may be found as $\lim\limits_{\lambda_1\to 0} \varphi^{of}(\lambda_1;x)$.
The simplest algorithm for the numerical construction of particular solutions to (\ref{nl}) is the following.
For a given arbitrary $\varphi(\lambda_1;x)|_{x_M=0}$ we find
$u(x)|_{x_M=0}$ and $\hat{\cal{G}}^n(0;x) |_{x_M=0}
- \hat U^{1n}(0;x)|_{x_M=0}$ using (\ref{potentials}). Then we solve (\ref{condition_f2}) for $\varphi^{of}_{x_M}(\lambda_1;x)|_{x_M=0}$.
Using the Taylor formula we approximate $\varphi^{of}(\lambda_1;x)|_{x_M=\Delta t}$:
\begin{eqnarray}
\varphi^{of}(\lambda_1;x)|_{x_M=\Delta t} \approx
\varphi^{of}(\lambda_1;x)|_{x_M=0} + \Delta t \; \varphi^{of}_{x_M}(\lambda_1;x)|_{x_M=0}.
\end{eqnarray}
The evolution of the diagonal elements $\varphi_{\alpha\alpha}(\lambda_1;x)$ is fixed by $\varphi_{\alpha\alpha}(\lambda_1;x)|_{x_M=0}$ due to (\ref{diag}).
Substituting this result into (\ref{potentials}), we find
$u(x)|_{x_M=\Delta t}$ and $\hat{\cal{G}}^n(0;x) |_{x_M=\Delta t}
- \hat U^{1n}(0;x)|_{x_M=\Delta t}$.
Then eq.(\ref{condition_f2}) gives
$\varphi^{of}_{x_M}(\lambda_1;x)|_{x_M=\Delta t}$, and so on.
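The marching procedure just described is a first-order explicit (Euler/Taylor) scheme in the evolution variable $x_M$. A schematic, self-contained illustration of this type of stepping (all names here are placeholders; the toy right-hand side merely stands in for solving (\ref{condition_f2}) for the $x_M$-derivative at each step):

```python
import numpy as np

def euler_march(phi0, rhs, dt, n_steps):
    """First-order Taylor (explicit Euler) marching:
    phi_{k+1} = phi_k + dt * rhs(phi_k).
    `rhs` plays the role of solving the linear equation for the
    x_M-derivative at the current step; here it is a placeholder."""
    phi = np.asarray(phi0, dtype=float)
    for _ in range(n_steps):
        phi = phi + dt * rhs(phi)
    return phi

# Toy check: for rhs(phi) = -phi the exact evolution is exp(-t) * phi0;
# the first-order iterates approach it as dt -> 0.
phi0 = np.array([1.0, 2.0])
approx = euler_march(phi0, lambda p: -p, dt=1e-4, n_steps=10_000)
```

As expected for a first-order scheme, the error of `approx` relative to the exact value decreases linearly with the step size.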
Solving the Initial Value Problem (IVP) (i.e. the construction of $u^{of}(x)$ for given initial data $u^{of}(x)|_{x_M=0}$) is more complicated and will not be considered here, since it appears to be no simpler than directly solving the IVP for (\ref{nl}) numerically.
Let us remark, at the end of this section, that eq.(\ref{condition}) is not the only admissible constraint. Instead of zero on the rhs of this equation one might use an expression $L\big(\hat U^{of}(\lambda_1;x); u^{of}(x)\big)$, i.e. a linear differential operator applied to $\hat U^{of}(\lambda_1;x)$, whose coefficients depend on the field $u^{of}(x)$ and its derivatives. Then the expression $L\big(\hat U^{of}(\lambda_1;x); u^{of}(x)\big)$ appears on the rhs of (\ref{lin:U}). The only requirement on $L$ is that,
after multiplying eq.(\ref{lin:U}) by $G^1_\alpha(\lambda_1;x)$
from the left and integrating over $\lambda_1$, one gets a nonlinear PDE for $u^{of}$. This new PDE (which replaces eq.(\ref{nl})) may be a
largely arbitrary nonlinear PDE for $u^{of}$. Thus, at present, the multidimensional version of the dressing method presented here is not a method for solving nonlinear PDEs; rather, it gives a new representation of nonlinear PDEs. This situation is analogous to the one arising when the Fourier method is applied to PDEs other than linear PDEs with constant coefficients.
\section{Derivation of multidimensional Nonlinear Schr\"odinger Equation}
\label{DR_DS}
In the previous section we demonstrated that a (largely) arbitrary nonlinear PDE can be represented using a variant of the multidimensional generalization of the dressing method for the (2+1)-dimensional $N$-wave equation.
In this section we show that a similar construction may be performed starting from the dressing method for the (2+1)-dimensional DS equation. We use the notations of Sec.\ref{DR_nl}. For simplicity, we take $Q=2$, i.e. we consider $2\times 2$ matrix equations.
The variables $x_i$ are introduced through the following system:
\begin{eqnarray}\label{DS_Sec6:Fx}
\partial_{x_n}\Psi_{\alpha\beta}(\lambda,\mu;x) +
\Big(h^n_\alpha(\lambda) +g^n_{\alpha\beta}(\mu)\Big)
\Psi_{\alpha\beta}(\lambda,\mu;x) &=&
\Phi_\alpha(\lambda;x) B^{n}_\alpha
C_{\alpha\beta}(\mu;x),\\\nonumber
&& 1\le n < M\\\nonumber
\partial_{x_M}\Psi_{\alpha\beta}(\lambda,\mu;x) +
\Big(h^M_\alpha(\lambda) +g^M_{\alpha\beta}(\mu)\Big)
\Psi_{\alpha\beta}(\lambda,\mu;x) &=&
\partial_{x_1}\Phi_\alpha(\lambda;x) B^{M}_\alpha
C_{\alpha\beta}(\mu;x)-\\\nonumber
&&
\Phi_\alpha(\lambda;x) B^{M}_\alpha
\partial_{x_1}C_{\alpha\beta}(\mu;x),
\end{eqnarray}
where the first equation is identical to (\ref{Sec6:Fx}).
Since $Q=2$, only two $B^i$ are linearly independent, so we may put $B^i=0$, $i>2$ without loss of generality. Let, in addition, $B^1=I$, $B^M=B^2={\mbox{diag}}(1,-1)$, $h^1=g^1=0$.
The compatibility of (\ref{DS_Sec6:Fx}) results in (compare with Sec.\ref{DR_nl}):
\begin{eqnarray}\label{DS_Sec6:Phi_x}\label{DS_Phi_x}
&&
\partial_{x_{2}}\Phi(\lambda;x) + h^2(\lambda) \Phi(\lambda;x) -
\partial_{x_1}\Phi (\lambda;x) B^2=0,\\\nonumber
&&
\partial_{x_{n}}\Phi(\lambda;x) + h^{n}(\lambda) \Phi(\lambda;x) =0,\;\;2<n<M
\\\nonumber
&&
\partial_{x_{M}}\Phi(\lambda;x) + h^{M}(\lambda) \Phi(\lambda;x) -
\partial^2_{x_1}\Phi (\lambda;x)B^2
=0,
\\\nonumber\\
\label{DS_c_x}
&&\partial_{x_{2}}C(\mu;x) +
C^{12}(\mu;x) -
B^2\partial_{x_1}C(\mu;x) =
0,\\\nonumber
&&
\partial_{x_{n}}C(\mu;x) +
C^{1n}(\mu;x) =
0,\;\;2<n<M,\\\nonumber
&&
\partial_{x_{M}}C(\mu;x) +
C^{1M}(\mu;x) +
B^2\partial^2_{x_1}C(\mu;x) =
0,
\end{eqnarray}
where
\begin{eqnarray}
\label{DS_c^n}
C^{1n}_{\alpha\beta}(\mu;x) = C_{\alpha\beta}(\mu;x)
g^{n}_{\alpha\beta}(\mu), \;\;1<n \le M.
\end{eqnarray}
Eqs.(\ref{DS_Sec6:Phi_x}-\ref{DS_c^n}) define
$\Phi$ and $C$.
The system of nonlinear equations is generated by eq.(\ref{DS_Sec6:Phi_x}).
The derivation is very similar to that carried out in Sec.\ref{DR_nl}. First of all, we use the
representation of $\Phi$ as $\Psi*U$, see eq.(\ref{Sec4:U}).
Then, using equations
(\ref{DS_Sec6:Fx})
for the derivatives $\Psi_{x_n}$ and inverting $\Psi*$,
we end up with a system of linear equations
of the form
\begin{eqnarray}\label{DS_U2_1}
E_2(\lambda;x)&:=& U_{x_{2}}(\lambda;x) - U_{x_{1}}(\lambda;x) B^2 +
U(\lambda;x) [B^{2},u(x)]
-{\cal{G}}^2(\lambda;x) =0, \\\label{DS_U2_2}
E_n(\lambda;x)&:=& U_{x_{n}}(\lambda;x)
-{\cal{G}}^n(\lambda;x) =0, \\\nonumber
&&
2<n < M\\\label{DS_U2_3}
E_{M}(\lambda,x)&:=&
U_{x_{M}}(\lambda,x) - U_{x_1 x_{1}}(\lambda,x) B^2 +
U(\lambda,x) \Big(u(x)[B^2,u(x)] -
\\\nonumber
&& 2 u_{x_1}(x) B^2 +[u^1,B^2]\Big) + U_{x_1}(\lambda,x) [B^2,u] - {\cal{G}}^{M}(\lambda;x)=0
\end{eqnarray}
where the functions $u$ and $u^1$ are related to the spectral functions by
the formulae
\begin{eqnarray}\label{DS_Sec6:u}
u(x)= C(\lambda,x)* U(\lambda,x),\;\; u^1(x)= C_{x_1}(\lambda,x)* U(\lambda,x)
\end{eqnarray}
and functions ${\cal{G}}^n$ satisfy the following equations:
\begin{eqnarray}\label{DS_G}
\Psi(\lambda,\mu;x)* {\cal{G}}^n(\mu;x) &=&
\Psi^n(\lambda,\mu;x)*
U(\mu;x),\\\label{DS_Psi^n}\nonumber
\Psi^n_{\alpha\beta}(\lambda,\mu;x)&=&
\Psi_{\alpha\beta}(\lambda,\mu;x) \; g^n_{\alpha\beta}(\mu),\;\;1<n\le M.
\end{eqnarray}
Later, the function $u$ will be the field in the
nonlinear PDE.
In the case of the classical dressing method,
a nonlinear integrable PDE for the function $u$ can be obtained by
applying $C(\lambda;x)*$ and $C_{x_1}(\lambda;x)*$ to (\ref{DS_U2_1}), applying $C(\lambda;x)*$ to (\ref{DS_U2_3}),
and using eqs.(\ref{DS_c_x}) for
$C_{x_n}$, $n>1$. Doing the same in our case, one gets:
\begin{eqnarray}\label{DS_eq:u}
E^u_{02}(x)&:=& u_{x_{2}}(x) - u_{x_{1}}(x) B^2 +u(x) [B^{2},u(x)]
=\tilde
{\cal{H}}^{02}(x) ,\\\label{DS_eq:u1}
E^u_{12}(x)&:=& u^1_{x_{2}}(x) - u^1_{x_{1}}(x) B^2 +u^1(x) [B^{2},u(x)]
=\tilde
{\cal{H}}^{12}(x) ,\\\label{DS_eq:uM}
E^u_M(x)&:=& u_{x_{M}}(x) - u_{x_1 x_{1}}(x) B^2 +u(x) \Big(u(x)[B^{2},u(x)]-2u_{x_1} B^2\Big) + \\\nonumber
&&u_{x_1}(x) [B^2,u(x)]
=\tilde
{\cal{H}}^{0M}(x) ,
\\\nonumber
&& \tilde
{\cal{H}}^{02}(x)=[B^2,u^1(x)] -C^{12}(\mu;x)
*U(\mu;x)
+C(\mu;x)*{\cal{G}}^2(\mu;x),\\\nonumber
&& \tilde
{\cal{H}}^{12}(x)=[B^2,u^2(x)] -C^{12}_{x_1}(\mu;x)
*U(\mu;x)
+C_{x_1}(\mu;x)*{\cal{G}}^2(\mu;x),\\\nonumber
&& \tilde
{\cal{H}}^{0M}(x)=- 2u^1_{x_1}(x) B^2 -u(x)[u^1(x),B^2] + u^1(x)[B^2,u(x)]-[B^2,u^2(x)] -\\\nonumber
&&
C^{1M}(\mu;x)
*U(\mu;x)
+C(\mu;x)*{\cal{G}}^M(\mu;x),
\end{eqnarray}
where the function $u^2$ is related to the spectral function by a
formula similar to eq.(\ref{Sec6:u}):
\begin{eqnarray}
u^2(x)=C_{x_1x_1}(\lambda;x)*U(\lambda;x).
\end{eqnarray}
Note that eqs.(\ref{DS_eq:u}) coincide with (\ref{eq:u})
for $n=2$ and $\tilde{\cal{H}}^2=\tilde{\cal{H}}^{02}$.
The functions $u^1$, $u^2$ and $\tilde{\cal{H}}^{in}$ are ``intermediate''
functions which will be eliminated from the final
nonlinear PDE.
System (\ref{DS_eq:u}-\ref{DS_eq:uM}) has an obvious limit to the classical
$(2+1)$-dimensional $S$-integrable
DS equation. In fact,
if $g^2=g^M=0$ (i.e. ${\cal{G}}^2(\lambda;x)={\cal{G}}^M(\lambda;x)=0$,
$\tilde{\cal{H}}^{02}(x)=[B^2,u^1(x)]$,
$\tilde{\cal{H}}^{12}(x)=[B^2,u^2(x)]$, $\tilde{\cal{H}}^{0M}(x)=- 2u^1_{x_1}(x) B^2 -u(x)[u^1(x),B^2] + u^1(x)[B^2,u(x)]-[B^2,u^2(x)]$),
then we may eliminate $u^1$ and $u^2$ from eq.(\ref{DS_eq:uM})
using equations (\ref{DS_eq:u}) and (\ref{DS_eq:u1}):
\begin{eqnarray}\label{Sec0:DS}
{\cal{E}}&:=&[u^{of}_{x_{M}},\sigma]-u^{of}_{x_1x_1}
-u^{of}_{x_2x_2}
-8 u_{12} u_{21} u^{of} -4 \varphi u^{of}
=0\\\label{Sec1:varphi}
&&\varphi_{x_2 x_2} - \varphi_{x_1x_1} = 4
(u_{12} u_{21})_{x_1x_1},\;\;
\varphi=(u_{11}+u_{22})_{x_1},
\end{eqnarray}
where
\begin{eqnarray}
u=\left(
\begin{array}{cc}
u_{11} & u_{12} \cr
u_{21} & u_{22}
\end{array}
\right),
\end{eqnarray}
which is the DS equation if $x_{M}=i t$, $i^2=-1$, $u_{21}=\bar u_{12}$.
Eqs.
(\ref{DS_U2_1}) and (\ref{DS_U2_3})
with ${\cal{G}}^2={\cal{G}}^M=0$ become a linear overdetermined system for
this equation, with spectral function $U(\lambda;x)$, i.e. eq.(\ref{Sec0:DS})
is the compatibility condition for $E_2$ and $E_M$.
However, if $g^2 \neq 0$ and $g^M \neq 0$, then system (\ref{DS_U2_1},\ref{DS_U2_3})
may no longer be considered as a linear
overdetermined system, since it involves a set of spectral functions,
namely $U(\lambda;x)$ and ${\cal{G}}^n(\lambda;x)$.
As a consequence, the nonlinear eqs. (\ref{DS_eq:u}-\ref{DS_eq:uM}) involve the extra
fields
$\tilde{\cal{H}}^{in}(x)$, $i=0,1$, and cannot be obtained
as the compatibility condition of the system (\ref{DS_U2_1},\ref{DS_U2_3})
through commutation of linear operators.
Thus, similarly to Sec.\ref{DR_nl}, the only way to derive the nonlinear system (\ref{DS_eq:u}-\ref{DS_eq:uM}) from
eqs.(\ref{DS_U2_1},\ref{DS_U2_3}) is the dressing
method.
Similarly to Sec.\ref{IDE}, we can take a largely arbitrary equation for $\hat U^{of}(\lambda_1;x)$, resulting in a largely arbitrary nonlinear PDE for the field $u^{of}$.
For example, suppose we want to construct a linear equation for $\hat U^{of}(\lambda_1;x)$ such that, after multiplying it by $G^1(\lambda_1;x)$ and integrating over $\lambda_1$, one gets
\begin{eqnarray}\label{DS}
u^{of}_{x_M} - \Delta u^{of} B^2 + u^{of}u_{12}u_{21} B^2 =0,\;\;\Delta = \sum_{i=1}^{M-1} \partial_{x_i}^2,
\end{eqnarray}
which becomes the multidimensional NLS equation if $x_M=i t$, $i^2=-1$, $u_{21}=\bar u_{12}$. Let $G^1_{x_1}(\lambda_1;x) = \lambda_1 G^1(\lambda_1;x)$.
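For orientation, the NLS reduction may be checked componentwise (a computation not spelled out in the text): with $B^2={\mbox{diag}}(1,-1)$, right multiplication by $B^2$ gives $(AB^2)_{12}=-A_{12}$, so the $(1,2)$ entry of eq.(\ref{DS}) reads
\begin{eqnarray*}
\partial_{x_M}u_{12}+\Delta u_{12}-u_{12}u_{21}\,u_{12}=0,
\end{eqnarray*}
which, under $x_M=it$, $u_{21}=\bar u_{12}$, takes the NLS form $i\,\partial_t u_{12}=\Delta u_{12}-|u_{12}|^2 u_{12}$.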
The appropriate linear equation is the following:
\begin{eqnarray}\label{DS_lin}
\hat U^{of}_{x_M}(\lambda_1;x) - \Delta \hat U^{of}(\lambda_1;x) B^2 + \hat U^{of}(\lambda_1;x) u_{12}u_{21} - B^2 \lambda_1^2 \hat U^{of}(\lambda_1;x)- &&\\\nonumber
2 \Big(\lambda_1 \hat U^{of}_{x_1}(\lambda_1;x)+
\lambda_1 B^2 \hat U^{of}_{x_2}(\lambda_1;x)+ \lambda_1^2 \hat U^{of}(\lambda_1;x)\Big) B^2&=&0
\end{eqnarray}
Thus the nonlinear eq.(\ref{DS}) is equivalent to the linear eq.(\ref{DS_lin}),
where $\hat U$ is expressed in terms of the dressing functions by the system (\ref{Sec4:U},\ref{DS_Sec6:Fx}-\ref{DS_c^n}). A detailed discussion of this relation is given in the next subsection.
\subsection{Analysis of the system (\ref{Sec4:U},\ref{DS_Sec6:Fx}-\ref{DS_c^n},\ref{DS_lin} )}
\label{DS_IDE_DS}
In this section we characterize the solution space of the nonlinear equation
(\ref{DS}) in terms of the dressing functions.
The first step is to solve equations
(\ref{DS_Sec6:Fx}-\ref{DS_c_x})
for $\Psi(\lambda,\mu;x)$, $\Phi(\lambda;x)$ and $C(\mu;x)$.
Eqs. (\ref{DS_Sec6:Fx}) represent a nonhomogeneous system
for $\Psi(\lambda,\mu;x)$, so, similarly to Sec.\ref{IDE}, we take the following solution:
\begin{eqnarray}\label{DS_Sec5:F}
\Psi_{\alpha\beta}(\lambda,\mu;x) = \partial_{x_1}^{-1}
\Big(\Phi_\alpha(\lambda;x)
C_{\alpha\beta}(\mu;x)\Big) +
\delta_{\alpha\beta}\delta(\lambda-\mu)
e^{-\sum\limits_{j=2}^M \Big(h^j_\alpha(\lambda) +
g^j_{\alpha\beta}(\mu)\Big)}.
\end{eqnarray}
The solutions of eqs.(\ref{DS_Phi_x},\ref{DS_c_x}),
in view of (\ref{CGG}), read
\begin{eqnarray}\label{DS_Sec4:chi_sol_t}
\label{DS_Phi_def}
\Phi_{\alpha}(\lambda;x)&=&
e^{K^\Phi_{\alpha}(\lambda;x)},\;\;
K^\Phi_{\alpha}(\lambda,x)=\lambda_2 (x_1 + x_2 B^2_\alpha) + x_M \lambda_2^2 B^2_\alpha -\sum_{j=2}^{M}
h^j_\alpha(\lambda)
x_j,\\\label{DS_Sec4:c_sol}
C_{\alpha\beta}(\mu;x)&=& G^1_\alpha(\mu_1;x) G^2_{\alpha\beta}(\mu;x)
,\\\nonumber
G^1_\alpha(\mu_1;x) &=&
e^{K^{G^1}_\alpha(\mu_1;x)} ,\;\;
G^2_{\alpha\beta}(\mu;x)=
e^{K^{G^2}_{\alpha\beta}(\mu;x)} C^0_{\alpha\beta}(\mu)
,\;\;\\\nonumber
&& K^{G^1}_\alpha(\mu_1;x)= \mu_1(x_1 + x_2 B^2_\alpha) - x_M \mu_1^2 B^2_\alpha,\;\;
K^{G^2}_{\alpha\beta}(\mu;x)=-\sum_{j=2}^{M}
g^j_{\alpha\beta}(\mu) x_{j}.
\end{eqnarray}
Thus expression
(\ref{DS_Sec5:F}) may be written in the explicit
form:
\begin{eqnarray}\label{DS_Sec6:Fexpl}
\Psi_{\alpha\beta}(\lambda,\mu;x) &=&
\frac{e^{K^\Phi_{\alpha}(\lambda;x)+
K^{G^1}_{\alpha}(\mu_1;x)+
K^{G^2}_{\alpha\beta}(\mu;x)} C^0_{\alpha\beta}(\mu)
}{\lambda_2+\mu_1}
+
\delta_{\alpha\beta}\delta(\lambda-\mu)
e^{-\sum\limits_{j=2}^M \Big(h^j_\alpha(\lambda) +
g^j_{\alpha\beta}(\mu)\Big)}.
\end{eqnarray}
Equations (\ref{Sec4:E0^U}-\ref{hatU^2n}) have the same form,
now with
\begin{eqnarray}\label{DS_Phi^1}
\Phi^1_{\alpha\beta}(\lambda;x)&=&
e^{K^{\Phi^1}_{\alpha}(\lambda;x)} ,
\;\;
K^{\Phi^1}_{\alpha}(\lambda;x)= \lambda_2 (x_1+x_2 B^2_\alpha) + x_M\lambda_2^2 B^2_\alpha+ \sum_{j=2}^{M}
g^j_{\alpha\alpha}(\lambda)
x_j.
\end{eqnarray}
The function $\hat \Phi^{of}$ satisfies equation (\ref{DS_lin}), where $\hat U$ is related to $\hat \Phi$ by eq.(\ref{hatU}).
Note that the diagonal elements of $\hat\Phi$ may be arbitrary functions of a single independent variable, similarly to Sec.\ref{DR_nl}.
In the particular case $C^0(\mu)=\delta(\mu_1) \tilde C^0(\tilde \mu)$, eq.(\ref{hatU})
reduces to the PDE (\ref{RhatU}), so that $u$ is defined by the formula (\ref{potentials}a).
We see that eq.(\ref{DS_lin}), in view of (\ref{RhatU}), is a linear PDE for $\varphi^{of}(\lambda_1;x)$ with ``boundary'' function $\varphi(0;x)$ satisfying (\ref{potentials}a).
The remark made at the end of Sec.\ref{IDE} regarding the numerical construction of
particular solutions is relevant for this section as well.
\section{Conclusions}
\label{conclusions}
We have applied a variant of the dressing method to derive a special representation for largely arbitrary multidimensional
nonlinear PDEs that are nonintegrable in the classical sense.
Although we have considered only the $N$-wave and NLS equations, obtained from the linear eqs.(\ref{lin:U}) and (\ref{DS_lin}) respectively, a different linear equation for the spectral function $\hat U(\lambda_1;x)$ may be used. The only requirement is that, after multiplying this equation by $G^1(\lambda_1;x)$ and integrating over $\lambda_1$, one gets a nonlinear PDE for $u^{of}$.
We introduced several modifications in the classical dressing
method:
\begin{enumerate}
\item
Eqs.(\ref{Sec6:Fx}) (or (\ref{DS_Sec6:Fx})) with the functions
$h^n(\lambda)$ and $g^n(\mu)$, showing that the derivatives
$\Psi_{x_j}(\lambda,\mu;x)$ are not
separated functions of the spectral parameters.
\item
Eq.(\ref{CGG}), splitting $C(\lambda;x)$ into two factors.
\item
The extra constraint
(\ref{condition}) (or (\ref{DS_lin}) together with (\ref{hatU})) defining the structure of
PDE (\ref{nl}) (or (\ref{DS})). This constraint is an equation for the function $\hat
\Phi^{of}(\lambda_1,x)$ (see, for instance, eqs.(\ref{condition_f},\ref{condition_f2}) of Sec.\ref{IDE})
and has no spectral
origin.
\end{enumerate}
In its present form, the multidimensional dressing method does not give explicit
solutions of nonlinear PDEs, but represents them in a different form. We expect promising developments of the ideas outlined in this paper.
The author thanks Prof. P.M.Santini for useful discussions of some
aspects of this paper.
The work was supported by INTAS Young Scientists Fellowship Nr. 04-83-2983,
RFBR grant 04-01-00508 and grant NSh 1716-2003.
\section{Introduction}\label{sec:intro}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{Example_bitype_tree}
\caption{Normal game on a bi-type tree}
\label{normal_illus}
\end{figure}
\sloppy This paper is dedicated to the analysis of three well-known games -- the \emph{normal}, the \emph{mis\`{e}re} and the \emph{escape} games -- on \emph{rooted multi-type Galton-Watson trees}. These games are played on directed, acyclic graphs. Given a realization of a rooted random tree, we assign the following notion of direction to each of its edges: edge $\{u, v\}$ is directed from $u$ to $v$ if $u$ is the parent of $v$. Each of these games involves two players and a token. The vertex on which the token is placed at the beginning of the game is known as the \emph{initial vertex}. The players take turns to move the token along the directed edges, conforming to the rules that we describe in detail in Definition~\ref{defn:games}.
The (extremely broad class of) \emph{combinatorial games} (see, for example, \cite{survey_games, complexity_appeal} for a general introduction to these games as well as a discussion of the vast literature devoted to this topic) are two-player games with perfect information, no chance moves, and the possible outcomes being victory for one player (and loss for the other) and draw for both players. Countless intriguing and natural mathematical problems that belong to complexity classes harder than \emph{NP} constitute two-player combinatorial games. Moreover, these games have applications and connections to disciplines such as mathematical logic, automata theory, complexity theory, graph and matroid theory, networks, error-correcting codes, online algorithms, and even, outside of mathematics, to biology, psychology, economics, insurance, actuarial studies and political sciences.
\subsection{The (simple) Galton-Watson branching process} A \emph{rooted Galton-Watson} (GW) branching process $\mathcal{T}_{\chi}$, introduced in \cite{galton_watson} (independently studied in \cite{bienayme}) as a model to investigate the extinction of ancestral family names, begins with the root $\phi$ giving birth to a random number $X$ of children where $X$ follows the \emph{offspring distribution} $\chi$ (a probability distribution supported on $\mathbb{N}_{0}$). If $X = 0$, we stop the process. If $X = k \in \mathbb{N}$, the children of $\phi$ are named $v_{1}, \ldots, v_{k}$ in some order, and $v_{i}$ gives birth to $X_{i}$ children with $X_{1}, \ldots, X_{k}$ i.i.d.\ $\chi$. This process continues, and it survives (i.e.\ continues forever) with positive probability iff the expectation of $\chi$ exceeds $1$. We refer the reader to \cite{athreya_vidyashankar}, \cite{athreya_jagers} and \cite{athreya_ney} for further reading on GW trees.
\subsection{The games when played on a simple GW tree}\label{subsec:simple_version} We call the players \emph{P1} and \emph{P2} when they play the normal and mis\`{e}re games, and \emph{Stopper} and \emph{Escaper} when they play the escape game. A realization $T$ of $\mathcal{T}_{\chi}$ is fixed, the token placed on an initial vertex $v$ of $T$, the players take turns to move the token along directed edges, and the outcomes of the games are decided as follows:
\begin{enumerate}
\item \textbf{Normal game:} Whoever fails to make a move for the first time in the game, loses. If the token never reaches a leaf vertex throughout the game, it results in a draw. See Figure~\ref{normal_illus}.
\item \textbf{Mis\`{e}re game:} Whoever fails to make a move for the first time in the game, wins. If the token never reaches a leaf vertex throughout the game, then the game results in a draw.
\item \textbf{Escape game:} If either player fails to make a move, Stopper wins. Else Escaper wins. This game never results in a draw.
\end{enumerate}
We are concerned with \emph{optimal play}, i.e.\ when the game does not end in a draw, the player destined to win tries to win as quickly as possible, while her opponent tries to prolong it as much as possible. Analysis of these games on rooted simple GW trees has been carried out in \cite{holroyd_martin}.
\subsection{Motivation for studying such combinatorial games} We dwell here on several combinatorial games that have been studied on random structures, along with their myriad theoretical applications and connections to other areas of mathematics. \cite{percolation_games} studies normal and a variant of mis\`{e}re games on percolation clusters of oriented Euclidean lattices. This \emph{percolation game} assigns one of the labels ``trap", ``target" and ``open" to each site of $\mathbb{Z}^{2}$ with probabilities $p$, $q$ and $1-p-q$ respectively, and the players take turns to move a token from its current position $(x,y)$ to either $(x+1,y)$ or $(x,y+1)$. If a player moves to a target, she wins immediately, and if she moves to a trap, she loses immediately. The game's outcome can be interpreted in terms of the evolution of a one-dimensional discrete-time probabilistic cellular automaton (PCA) -- specifically, the game having no chance of ending in a draw is shown to be equivalent to the ergodicity of this PCA. \cite{percolation_games} also establishes a connection between the percolation game with $q = 0$ (called the \emph{trapping game}) on directed graphs in higher dimensions and the hard-core model on related undirected graphs with reduced dimensions. \cite{trapping_games} studies the trapping game on undirected graphs. The players take turns to move the token from the vertex of its current position to an adjacent vertex that has never been visited before. The player unable to make a move loses. The outcome of this game is shown to have close ties with maximum-cardinality matchings, and a draw in this game relates to the sensitivity of such matchings to boundary conditions. \cite{wastlund} studies a related, two-person zero-sum game called \emph{exploration} on a \emph{rooted distance model}, to analyze minimum-weight matchings in edge-weighted graphs.
In a related game called \emph{slither} (\cite{slither}), the players take turns to claim yet-unclaimed edges of a simple, undirected graph, such that the chosen edges, at all times, form a path, and whoever fails to move, loses. This too serves as a tool for understanding maximum matchings in graphs.
The \emph{maker-breaker positional games} (\cite{positional_games_book}) involve a set $X$, a collection $\mathcal{F}$ of subsets of $X$, and $a, b \in \mathbb{N}$. \emph{Maker} and \emph{Breaker} take turns to claim yet-unclaimed elements of $X$, with Maker choosing $a$ elements at a time and Breaker $b$ elements at a time, until all elements of $X$ are exhausted. Maker wins if she claims all elements of a subset in $\mathcal{F}$. When this game is played on a graph, the players take turns to claim yet-unclaimed edges, and Maker wins if the subgraph induced by her claimed edges satisfies a desired property (e.g.\ it is connected, it forms a clique of a given size, a Hamiltonian cycle, a perfect matching or a spanning tree). The game is \emph{unbiased} when $a = b$, and \emph{biased} otherwise. This game has intimate connections with existential fragments of first order and monadic second order logic on graphs. \cite{milos_thesis} and \cite{milos_tibor} study the threshold probability $p_{c}$ beyond which Maker has a winning strategy when this game is played on Erd\H{o}s-R\'{e}nyi random graphs $G(n,p)$; \cite{hamiltonian_maker_breaker} studies the game for Hamiltonian cycles on the complete graph $K_{n}$; \cite{maker_breaker_geometric} studies the game on random geometric graphs; \cite{biased_random_boards} studies the \emph{critical bias} $b^{*}$ of the $(1 : b)$ biased game on $G(n, p(n))$ for $p(n) = \Theta\left(\ln n/n\right)$. In addition, \cite{milos_thesis} studies the game where Maker wins if she can claim a non-planar graph or a non-$k$-colourable graph. \cite{biased_positional} indicates a deep connection between positional games on complete graphs and the corresponding properties being satisfied by a random graph. This follows from \emph{Erd\H{o}s' probabilistic intuition}, which states that the course of a combinatorial game between two players playing optimally often resembles the evolution of a purely random process.
Finally, the \emph{Ehrenfeucht-Fra\"{i}ss\'{e} games} play a pivotal role in our understanding of first and monadic second order logic on random rooted trees and random graphs (see, for example, \cite{spencer_threshold, spencer_thoma, spencer_stjohn, kim, bohman, pikhurko, verbitsky, maksim_1, maksim_2, maksim_3, maksim_4, maksim_5, maksim_6, podder_1, podder_2, podder_3, podder_5}).
\subsection{Novelty of the games studied in this paper} To the best of our knowledge, the normal, mis\`{e}re and escape games have not been studied in the setting of graphs with coloured vertices. The rules to be followed when the games are played on rooted multi-type GW trees, as described in Definition~\ref{defn:games}, are significantly different from those studied in \cite{holroyd_martin} (see also \S\ref{subsec:simple_version}). In particular, the directed edges along which one player is allowed to move are no longer the same as those along which the other player is allowed to move, thereby breaking the symmetry and making the analysis far more complicated. This paper thus serves as a broad generalization of the work in \cite{holroyd_martin}. It no longer suffices for the players to take note of the leaf vertices of the tree and the paths that lead down to those vertices -- they are now required to use a \emph{look-ahead strategy} that must take into account the colours of the endpoints of every directed edge they may have to traverse. Moreover, much of the analysis in this paper involves multivariable functions defined on subsets of $[0,1]^{r}$ for some $r \in \mathbb{N}$, and the calculus used to draw conclusions about corresponding functions defined on subsets of $[0,1]$ in \cite{holroyd_martin} no longer applies to our set-up.
There are \textbf{several sharp points of contrast} between some of the results in this paper and those of \cite{holroyd_martin}. First, we note that despite the far more complicated set-up of this paper compared to that of \cite{holroyd_martin}, the inequalities in Theorem~\ref{thm:main_2} (some of which are analogous to those in [\cite{holroyd_martin}, Theorem 2]) require very few, and intuitively very reasonable, assumptions (see, for example, Remark~\ref{rem_intuitive} in relation to Theorem~\ref{thm:main_2}, part \ref{main_2_part_3}). The inequalities in \ref{main_2_part_3} are, rather surprisingly, \emph{very} different from [\cite{holroyd_martin}, Theorem 2, part (iii)], and those in \ref{main_2_part_2} are, in some sense, stronger than the inequalities in [\cite{holroyd_martin}, Theorem 2, part (ii)]. We take into account the possibility of offspring distributions having infinite expectations in Theorem~\ref{thm:main_3}, which is not the case with [\cite{holroyd_martin}, Theorem 3]. The analysis of the games on bi-type binary GW trees in Theorem~\ref{thm:main_example_1} yields a strikingly different picture from that obtained while studying them on the simple binary GW tree (see [\cite{holroyd_martin}, Proposition 3, part (i)]), specifically in terms of phase transitions for draw probabilities in the normal and mis\`{e}re games and probabilities of Escaper winning the escape game. In some sense, Theorem~\ref{thm:main_example_1} gives us results reminiscent of $0-1$ laws. Theorem~\ref{thm:main_example_2} shows that on bi-type Poisson GW trees, where each vertex, irrespective of its colour, has $\poi(\lambda)$ children in total, the draw probabilities of the normal and mis\`{e}re games and the probabilities of Escaper winning the escape game all approach $1$ as $\lambda \rightarrow \infty$.
\subsection{Inspirations and possible applications of the games studied in this paper} One of the primary inspirations for studying these particular versions of the games lies in our interest in understanding \emph{illegal moves} in mathematical games. As seen in Definition~\ref{defn:mult_GW}, a move made by one of the players along an edge not \emph{permissible} for her is considered illegal and is not allowed. One may think of more ``relaxed" versions of these games where a player is allowed to make illegal moves at most $k$ times in the game, for some $k \in \mathbb{N}$, or the total number of illegal moves made by both players is allowed to be at most $k$. Such games are reminiscent of rules employed by the World Chess Federation (FIDE): when in a timed game, the first illegal move by a player awards a certain number of minutes' worth of extra time to her opponent, while a second illegal move forfeits the game. They also resemble \emph{liar} and \emph{half-liar games} where one of the players, Carole, is allowed to lie a certain number of times at the most when answering questions asked by her opponent, Paul.
As far as \textbf{applications} are concerned, we mention here a curious connection -- one that could be exploited for investigating a generalized version of the PCA studied in \cite{percolation_games} -- between the games analyzed in this paper and the percolation games (in particular, the trapping games). Although this paper concerns itself with games played on rooted trees, let us for a moment imagine the games being played on an oriented lattice $\mathbb{Z}^{2}$ with each vertex coloured either blue (denoted $b$) or red (denoted $r$). Any directed edge in this lattice is either of the form $((x,y), (x+1,y))$ or of the form $((x,y), (x,y+1))$. Suppose P1 is allowed to move only along monochromatic directed edges and P2 only along non-monochromatic directed edges. When it comes to the normal game, we may think of each site $u = (x,y)$ being assigned one of the following labels according to the fate P1 meets with if she moves the token to $u$:
\begin{itemize}
\item if $\sigma(u) = b$ with no red child, or $\sigma(u) = r$ with no blue child, $u$ is a target for P1;
\item if $\sigma(u) = b$ and $u$ has at least one red child $v$ with no red child of its own, or if $\sigma(u) = r$ and $u$ has at least one blue child $v$ with no blue child of its own, then $u$ is a trap for P1;
\item in all other cases, $u$ is marked open for P1.
\end{itemize}
Likewise, when it comes to deciding P2's fate upon reaching $u$,
\begin{itemize}
\item if $\sigma(u) = b$ with no blue child, or $\sigma(u) = r$ with no red child, $u$ is a target for P2;
\item if $\sigma(u) = b$ and $u$ has at least one blue child $v$ with no red child of its own, or if $\sigma(u) = r$ and $u$ has at least one red child $v$ with no blue child of its own, then $u$ is a trap for P2.
\end{itemize}
Following the notation used in \cite{percolation_games}, for $i = 1, 2$, we let $\eta_{i}(u) = W$ if the game that begins with the token at $u$ and P$i$ playing the first round is won by P$i$, $\eta_{i}(u) = L$ if it is lost by P$i$, and $\eta_{i}(u) = D$ if it ends in a draw. If $u = (x,y)$ is a trap for P2, then $\eta_{1}(u) = W$, and if $u$ is a target for P2, then $\eta_{1}(u) = L$. Setting $\out(u) = \{(x+1,y), (x,y+1)\}$, if $\sigma(u) = b$ and $u$ is open for P2, then
\begin{itemize}
\item $\eta_{1}(u) = W$ if there exists at least one $v \in \out(u)$ with $\sigma(v) = b$ and $\eta_{2}(v) = L$;
\item $\eta_{1}(u) = L$ if $\sigma(v) = b$ implies that $\eta_{2}(v) = W$ for every $v \in \out(u)$;
\item $\eta_{1}(u) = D$ otherwise.
\end{itemize}
We analogously derive the recursions when $\sigma(u) = r$ and $u$ is open for P2. Although this yields a more complicated set-up than that in \cite{percolation_games}, whether we should consider $\eta_{1}(u)$ or $\eta_{2}(u)$ for any $u = (x,y)$ ought to depend on whether the parities of $x_{0}+y_{0}$ and $x+y$ are the same and on whether P1 or P2 plays the first round, where $(x_{0},y_{0})$ is the site from which the game begins. It seems likely that this version of the percolation games will allow us to study a broader, more general class of probabilistic cellular automata -- one where we may have to consider not one but two states $\eta_{t}^{(1)}(n)$ and $\eta_{t}^{(2)}(n)$ of any site $n \in \mathbb{Z}$ at any point of time $t$, and the alphabet needs to be extended from $\{0,1\}$ to $\{0b, 0r, 1b, 1r\}$. This may also aid us in studying a statistical mechanical model, with hard constraints, that is a generalization of the hard-core model studied in \cite{percolation_games} and that involves the assignment of a label from $\{0,1\}$ \emph{and} a colour from $\{b, r\}$ to each vertex of the graph.
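The site classification sketched above is entirely mechanical. The following Python sketch, which is not part of the formal development, implements the target / trap / open labels for P1 on a hypothetical finite window of coloured sites, with the (hypothetical) convention that sites outside the window are treated as absent, so boundary sites simply have fewer children.

```python
# Classify a site u of the oriented lattice as "target", "trap" or "open"
# for P1, following the rules above. sigma maps sites to 'b' / 'r'; the
# children of (x, y) are (x + 1, y) and (x, y + 1). Sites outside the
# (hypothetical) finite coloured window are simply treated as absent.

def children(u, sigma):
    x, y = u
    return [v for v in [(x + 1, y), (x, y + 1)] if v in sigma]

def label_for_P1(u, sigma):
    other = 'r' if sigma[u] == 'b' else 'b'
    kids = children(u, sigma)
    # target: u has no child of the opposite colour, so P2 cannot move on
    if all(sigma[v] != other for v in kids):
        return 'target'
    # trap: some opposite-coloured child v of u has no child of its own
    # colour, so P2 can move to v and leave P1 without a monochromatic move
    for v in kids:
        if sigma[v] == other and all(sigma[w] != other for w in children(v, sigma)):
            return 'trap'
    return 'open'

# Hypothetical colouring of a small window:
sigma = {(0, 0): 'b', (1, 0): 'r', (0, 1): 'b', (2, 0): 'r', (1, 1): 'b'}
```

With this colouring, for instance, $(0,1)$ is a target for P1 (it has no red child), while $(1,0)$ is a trap (its blue child $(1,1)$ has no blue child of its own).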
We strongly suspect that the games we study here are also applicable in extending the results connecting trapping games with maximum matchings in \cite{trapping_games}. As an example, let us consider the trapping game on a graph each of whose vertices has been assigned one of two colours: blue and red. P1, as above, is allowed to move the token along monochromatic edges and P2 along non-monochromatic edges, making sure to never re-visit a vertex. Inspired by the conclusion of [\cite{trapping_games}, Proposition 4], we surmise that in order for P1 to win the game starting from a vertex $v$, it is perhaps necessary for $v$ to be contained in every \emph{colour-coordinated maximum matching} of the graph, i.e.\ every maximum-cardinality matching in which each matched vertex and its partner are of the same colour.
We allude here to a related, and analytically very similar, version of each of these games that may also be studied. Fixing a non-empty proper subset $S$ of $[m]$, P1 (in the normal and mis\`{e}re games) and Stopper (in the escape game) are allowed to move the token along a directed edge $(u,v)$ as long as $\sigma(v) \in S$, whereas P2 and Escaper must move only along directed edges $(u, v)$ with $\sigma(v) \in [m] \setminus S$. An even more generalized version involves two non-empty subsets $S_{1}$ and $S_{2}$ of $[m]$ with $S_{1} \cup S_{2} \subsetneq [m]$. P1 and Stopper are allowed to move along $(u,v)$ with $\sigma(v) \in [m] \setminus S_{2}$, and P2 and Escaper along $(u,v)$ with $\sigma(v) \in [m] \setminus S_{1}$. Consequently, the sets of permissible edges for the two players now overlap. Careful analysis is required to understand the impact of such generalizations on the outcome probabilities of these games.
\subsection{Main definitions and illustrative examples}\label{subsec:defns}
Here, we define the rooted multi-type Galton-Watson branching process and the games we study on it, followed by a couple of motivating examples.
\begin{defn}\label{defn:mult_GW}
Given a finite set of colours $[m]$, a probability vector $\mathbf{p} = (p_{1}, \ldots, p_{m})$ and probability distributions $\chi_{1}, \ldots, \chi_{m}$ with each $\chi_{j}$ supported on $\mathbb{N}_{0}^{m}$, the rooted multi-type Galton-Watson branching process $\mathcal{T} = \mathcal{T}_{[m] , \mathbf{p}, \bm{\chi}}$, with $\bm{\chi} = \left(\chi_{j}: j \in [m]\right)$, is generated as follows. The root $\phi$ is assigned a colour $\sigma(\phi)$ from $[m]$ according to $\mathbf{p}$. From there onward, every vertex $v$ of the tree, provided that its colour $\sigma(v)$ equals $i$ for some $i \in [m]$, gives birth, independent of all else, to $X_{v,j}$ many offspring of colour $j$ for all $j \in [m]$, where $(X_{v,1}, \ldots, X_{v,m}) \sim \chi_{i}$, i.e.\ for all $n_{1}, \ldots, n_{m} \in \mathbb{N}_{0}$,
\begin{equation}\label{eq:mult_GW}
\Prob\left[X_{v,j} = n_{j} \text{ for all } j \in [m]\big|\sigma(v) = i\right] = \chi_{i}(n_{1}, \ldots, n_{m}).
\end{equation}
\end{defn}
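A realization of $\mathcal{T}_{[m], \mathbf{p}, \bm{\chi}}$, truncated at a fixed depth so that sampling always terminates, can be generated directly from Definition~\ref{defn:mult_GW}. The Python sketch below does this; the nested-dictionary tree encoding and the concrete bi-type offspring laws are hypothetical choices made purely for illustration.

```python
import random

# Sample a depth-truncated realization of the multi-type GW tree of
# Definition (colours are 1..m). offspring[i]() returns a tuple
# (n_1, ..., n_m) drawn from chi_i. The encoding and the concrete
# offspring laws below are hypothetical.

def sample_tree(root_colour, offspring, depth):
    """Return a nested dict {'colour': i, 'children': [...]}."""
    node = {'colour': root_colour, 'children': []}
    if depth == 0:
        return node
    counts = offspring[root_colour]()          # (n_1, ..., n_m) ~ chi_i
    for j, n_j in enumerate(counts, start=1):
        for _ in range(n_j):
            node['children'].append(sample_tree(j, offspring, depth - 1))
    return node

# Hypothetical bi-type example (m = 2): a colour-1 vertex has one child
# of each colour; a colour-2 vertex has two colour-1 children or none.
offspring = {
    1: lambda: (1, 1),
    2: lambda: (2, 0) if random.random() < 0.5 else (0, 0),
}
tree = sample_tree(root_colour=1, offspring=offspring, depth=3)
```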
We refer the reader to [\cite{athreya_ney}, Chapter V, Pages 181-228] and [\cite{karlin_taylor}, Chapter 8, Pages 392-442] for further reading.
\begin{defn}\label{defn:games}
For each $j \in [m]$, we fix a non-empty, proper subset $S_{j}$ of $[m]$. As previously mentioned, every edge $(u,v)$ of the rooted tree is directed from $u$ to $v$ where $u$ is the parent of $v$.
\begin{enumerate}
\item We define a directed edge $(u,v)$ to be permissible for P1 / Stopper if $\sigma(v) \in S_{\sigma(u)}$.
\item We define a directed edge $(u,v)$ to be permissible for P2 / Escaper if $\sigma(v) \in [m] \setminus S_{\sigma(u)}$.
\end{enumerate}
In each of these games, in every round, the player makes sure to move along a directed edge permissible for her. The outcome of each game is decided via the same rules as outlined in \S\ref{subsec:simple_version}.
\end{defn}
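On a finite realization every play of the normal game terminates (the token only ever moves away from the root), so its outcome is determined by the standard backward recursion: the player to move wins if and only if she has a permissible move to a position that is losing for her opponent. The Python sketch below implements this recursion for the normal game of Definition~\ref{defn:games}; the tree encoding and the example are hypothetical.

```python
# Outcome of the normal game on a finite coloured rooted tree under the
# permissibility rule of the definition: P1 may move along (u, v) with
# sigma(v) in S[sigma(u)], and P2 along (u, v) with sigma(v) outside
# S[sigma(u)]. On a finite tree every play terminates, so the outcome is
# a win or a loss. The tree encoding and the example are hypothetical.

def p1_wins(v, S):
    """True iff P1, moving first with the token at v, wins the normal game."""
    moves = [c for c in v['children'] if c['colour'] in S[v['colour']]]
    # P1 wins iff she has a permissible move to a position losing for P2
    return any(not p2_wins(c, S) for c in moves)

def p2_wins(v, S):
    moves = [c for c in v['children'] if c['colour'] not in S[v['colour']]]
    return any(not p1_wins(c, S) for c in moves)

def leaf(colour):
    return {'colour': colour, 'children': []}

# Example with m = 2, S_1 = {1}, S_2 = {2} (monochromatic moves for P1):
S = {1: {1}, 2: {2}}
root = {'colour': 1, 'children': [leaf(1)]}      # P1 moves to the leaf,
assert p1_wins(root, S)                          # and P2 is then stuck
```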
We consider a couple of examples. In the first, we set $S_{j} = \{j\}$ for each $j \in [m]$. This means that P1 and Stopper are allowed to move only along \emph{monochromatic} directed edges, i.e.\ $(u,v)$ with $\sigma(u) = \sigma(v)$, whereas P2 and Escaper are allowed to move only along \emph{non-monochromatic} directed edges, i.e.\ $(u,v)$ with $\sigma(u) \neq \sigma(v)$. Note that swapping these rules for Stopper and Escaper yields a different escape game altogether.
The second example can be thought of as a generalization of the first. Fix a non-empty, proper subset $S$ of $[m]$. For each $j \in S$, we set $S_{j} = S$, and for each $j \in [m] \setminus S$, we set $S_{j} = [m] \setminus S$. Thus P1 and Stopper are allowed to move only along edges $(u,v)$ such that \emph{either} $\sigma(u)$ and $\sigma(v)$ both belong to $S$, \emph{or} they both belong to $[m] \setminus S$, whereas P2 and Escaper are allowed to move along edges $(u,v)$ such that \emph{either} $\sigma(u) \in S$ and $\sigma(v) \in [m] \setminus S$, \emph{or} $\sigma(u) \in [m] \setminus S$ and $\sigma(v) \in S$.
\subsection{Notation}\label{subsec:notation}
Given a rooted tree $T$, we let $V(T)$ indicate the vertex set of $T$ and $\phi$ its root. Given $v \in V(T)$, we denote by $T(v)$ the subtree of $T$ comprising $v$ and all its descendants. For $n \in \mathbb{N}$, we let $[n]$ denote the set $\{1, \ldots, n\}$.
Given tuples $(x_{1}, \ldots, x_{n})$ and $(y_{1}, \ldots, y_{n})$ in $\mathbb{R}^{n}$, for some $n \in \mathbb{N}$, we write $(x_{1}, \ldots, x_{n}) \preceq (y_{1}, \ldots, y_{n})$ if $x_{i} \leqslant y_{i}$ for each $i \in [n]$. Given a function $f: [0,1]^{n} \rightarrow [0,1]^{n}$, we denote by $\FP(f) = \left\{\mathbf{x} \in [0,1]^{n}: f(\mathbf{x}) = \mathbf{x}\right\}$ the set of all fixed points of $f$ in $[0,1]^{n}$. We define $\min \FP(f)$, if it exists, to be the unique $(x_{1}, \ldots, x_{n}) \in \FP(f)$ such that $(x_{1}, \ldots, x_{n}) \preceq (y_{1}, \ldots, y_{n})$ for all $(y_{1}, \ldots, y_{n}) \in \FP(f)$. Likewise, we define $\max \FP(f)$, if it exists, to be the unique $(x_{1}, \ldots, x_{n}) \in \FP(f)$ such that $(y_{1}, \ldots, y_{n}) \preceq (x_{1}, \ldots, x_{n})$ for all $(y_{1}, \ldots, y_{n}) \in \FP(f)$. We also denote by $f^{(n)}$ the $n$-fold composition of $f$ with itself. Given any real-valued $f$ defined on some domain $E$ of $\mathbb{R}^{n}$, we denote by $\partial_{i} f(x_{1}, \ldots, x_{n})$ the partial derivative $\frac{\partial}{\partial x_{i}} f(x_{1}, \ldots, x_{n})$ for each $i \in [n]$.
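When $f$ is coordinatewise non-decreasing and continuous, $\min \FP(f)$ is the limit of the iterates $f^{(n)}(0, \ldots, 0)$, and $\max \FP(f)$ the limit of $f^{(n)}(1, \ldots, 1)$; this is how such fixed points can be approximated numerically. A Python sketch follows, with a hypothetical monotone map whose fixed points in $[0,1]^{2}$ are $(0.5, 0.5)$ and $(1,1)$.

```python
# For a coordinatewise non-decreasing, continuous f : [0,1]^n -> [0,1]^n,
# iterating f from the all-zeros vector converges upward to min FP(f),
# and from the all-ones vector converges downward to max FP(f).

def iterate_to_fixed_point(f, start, tol=1e-12, max_iter=100000):
    x = start
    for _ in range(max_iter):
        y = f(x)
        if max(abs(a - b) for a, b in zip(x, y)) < tol:
            return y
        x = y
    return x

# Hypothetical monotone map; its fixed points are (0.5, 0.5) and (1, 1).
f = lambda x: (0.25 + 0.25 * x[1] + 0.5 * x[1] ** 2,
               0.25 + 0.25 * x[0] + 0.5 * x[0] ** 2)
min_fp = iterate_to_fixed_point(f, (0.0, 0.0))   # approximates min FP(f)
max_fp = iterate_to_fixed_point(f, (1.0, 1.0))   # approximates max FP(f)
```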
Given a non-empty, proper subset $S$ of $[n]$ and tuples $\mathbf{x}_{S} = (x_{i}: i \in S) \in \mathbb{R}^{|S|}$ and $\mathbf{y}_{[n] \setminus S}= (y_{i}: i \in [n] \setminus S) \in \mathbb{R}^{n-|S|}$, we let $\left(\mathbf{x}_{S} \vee \mathbf{y}_{[n] \setminus S}\right)$ denote their concatenation, i.e.\ the tuple $(z_{1}, \ldots, z_{n})$ where $z_{i} = x_{i}$ for each $i \in S$ and $z_{j} = y_{j}$ for each $j \in [n] \setminus S$. We denote by $\mathbf{1}_{S}$ the $|S|$-tuple in which each coordinate equals $1$, and by $\mathbf{0}_{S}$ the $|S|$-tuple in which each coordinate equals $0$.
For each $j \in [m]$, we let $G_{j}$ denote the pgf of $\chi_{j}$ (see Definition~\ref{defn:mult_GW}), i.e.\
\begin{equation}\label{eq:pgf}
G_{j}(x_{1}, \ldots, x_{m}) = \sum_{n_{1}, \ldots, n_{m} \in \mathbb{N}_{0}} \prod_{i=1}^{m} x_{i}^{n_{i}} \chi_{j}(n_{1}, \ldots, n_{m}), \text{ for all } (x_{1}, \ldots, x_{m}) \in [0,1]^{m}.
\end{equation}
Given any subset $S$ of $[m]$ and any $\mathbf{n}_{S} = \left(n_{k}: k \in S\right) \in \mathbb{N}_{0}^{|S|}$, we define the probability distribution
\begin{equation}\label{eq:dist_partial}
\chi_{j,S}\left(\mathbf{n}_{S}\right) = \Prob\left[X_{v,k} = n_{k} \text{ for all } k \in S\big|\sigma(v) = j\right],
\end{equation}
where recall from Definition~\ref{defn:mult_GW} that $X_{v,k}$ indicates the number of children of $v$ that are of colour $k$ for any $k \in [m]$. We denote by $G_{j, S}$ the pgf corresponding to $\chi_{j,S}$.
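For finitely supported offspring laws, $\chi_{j,S}$ and its pgf $G_{j,S}$ can be computed directly from \eqref{eq:dist_partial} and \eqref{eq:pgf}. The Python sketch below does this for a hypothetical bi-type offspring law; it is an illustration only, not part of the formal development.

```python
# Compute the marginal distribution chi_{j,S} and its pgf G_{j,S} from a
# finitely supported chi_j. chi_j is a dict mapping (n_1, ..., n_m) to
# probabilities; S is a list of (1-based) colour indices. The concrete
# offspring law below is hypothetical.

def marginal(chi, S):
    """chi_{j,S}: law of (X_{v,k} : k in S)."""
    out = {}
    for counts, prob in chi.items():
        key = tuple(counts[k - 1] for k in S)
        out[key] = out.get(key, 0.0) + prob
    return out

def pgf(chi_S):
    """G(x) = sum over n of prod_k x_k^{n_k} * chi_S(n)."""
    def G(x):
        total = 0.0
        for counts, prob in chi_S.items():
            term = prob
            for xk, nk in zip(x, counts):
                term *= xk ** nk
            total += term
        return total
    return G

# Hypothetical bi-type offspring law for a colour-1 vertex:
chi_1 = {(0, 0): 0.25, (2, 0): 0.25, (1, 1): 0.5}
G_1_full = pgf(chi_1)                  # G_1(x_1, x_2)
G_1_blue = pgf(marginal(chi_1, [1]))   # pgf of the number of colour-1 children
```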
\subsection{Main results and organization of the paper}\label{subsec:main_results}
The first of our main results expresses the probabilities of win / loss for each player, in each of the three games, as fixed points of suitable multivariable functions. These functions turn out to be compositions of translates of the pgfs $G_{j,S_{j}}$ and $G_{j,[m] \setminus S_{j}}$ for $j \in [m]$, where recall the subsets $S_{j}$ from Definition~\ref{defn:games}.
Before we state the theorem, we introduce the definitions of certain subsets of vertices of the tree $\mathcal{T} = \mathcal{T}_{[m] , \mathbf{p}, \bm{\chi}}$ described in Definition~\ref{defn:mult_GW}. For ease of exposition, we let `N', `M' and `E' indicate the normal, mis\`{e}re and escape games respectively, whereas `W', `L' and `D' denote, respectively, the outcomes of win, loss and draw for any particular player. For each $i = 1, 2$ and $j \in [m]$,
\begin{enumerate}
\item let $\NW_{i,j}$ comprise all $v \in V(\mathcal{T})$ with $\sigma(v) = j$, such that if $v$ is the initial vertex and P$i$ plays the first round of the normal game, then P$i$ wins;
\item let $\NL_{i,j}$ comprise all $v \in V(\mathcal{T})$ with $\sigma(v) = j$, such that if $v$ is the initial vertex and P$i$ plays the first round of the normal game, then P$i$ loses;
\item let $\ND_{i,j}$ comprise all $v \in V(\mathcal{T})$ with $\sigma(v) = j$, such that if $v$ is the initial vertex and P$i$ plays the first round, then the normal game results in a draw.
\end{enumerate}
We analogously define $\MW_{i,j}$, $\ML_{i,j}$ and $\MD_{i,j}$ for the mis\`{e}re game. For the escape game, we let
\begin{enumerate}
\item $\ESW_{j}$ comprise all $v$ with $\sigma(v) = j$, such that if $v$ is the initial vertex and Stopper plays the first round, she wins;
\item $\EEL_{j}$ comprise all $v$ with $\sigma(v) = j$, such that if $v$ is the initial vertex and Escaper plays the first round, she loses.
\end{enumerate}
We define $\nw_{i,j}$ to be the probability that the root $\phi$ of $\mathcal{T}$ belongs to $\NW_{i,j}$. Analogously, we define $\nl_{i,j}$, $\nd_{i,j}$, $\mw_{i,j}$, $\ml_{i,j}$, $\md_{i,j}$, $\esw_{j}$ and $\eel_{j}$. We let $\bnw_{i} = (\nw_{i,j}: j \in [m])$, and likewise, $\bnl_{i}$, $\bmw_{i}$, $\bml_{i}$, $\besw$ and $\beel$. We also let $\alpha_{j}$ (respectively $\beta_{j}$) denote the probability that $\phi$ has no child of colour $k$ for any $k \in S_{j}$ (respectively any $k \in [m] \setminus S_{j}$), conditioned on $\sigma(\phi) = j$.
\begin{theorem}\label{thm:main_1}
In case of the normal game, we have
\begin{equation}
\bnw_{1} = \min \FP(F_{N}) \quad \text{and} \quad \bnl_{1} = \mathbf{1}_{[m]} - \max \FP(F_{N}),
\end{equation}
\begin{equation}
\bnw_{2} = \min \FP(\F_{N}) \quad \text{and} \quad \bnl_{2} = \mathbf{1}_{[m]} - \max \FP(\F_{N}),
\end{equation}
where the functions $F_{N}(x_{1}, \ldots, x_{m}) = \left(F_{N,j}(x_{1}, \ldots, x_{m}): j \in [m]\right)$ and $\F_{N}(x_{1}, \ldots, x_{m}) = \left(\F_{N,j}(x_{1}, \ldots, x_{m}): j \in [m]\right)$, mapping from $[0,1]^{m}$ to itself, are given by
\begin{multline}\label{F_{N,j}_script_F_{N,j}_defns}
F_{N,j}(x_{1}, \ldots, x_{m}) = 1 - G_{j, S_{j}}\left(1 - G_{k, [m] \setminus S_{k}}\left(x_{\ell}: \ell \in [m] \setminus S_{k}\right): k \in S_{j}\right) \\ \text{ and } \F_{N,j}(x_{1}, \ldots, x_{m}) = 1 - G_{j, [m] \setminus S_{j}}\left(1 - G_{k,S_{k}}\left(x_{\ell}: \ell \in S_{k}\right): k \in [m] \setminus S_{j}\right).
\end{multline}
In case of the mis\`{e}re game, we have
\begin{equation}
\bmw_{1} = \min \FP(F_{M}) \quad \text{and} \quad \bml_{1} = \mathbf{1}_{[m]} - \max \FP(F_{M}),
\end{equation}
\begin{equation}
\bmw_{2} = \min \FP(\F_{M}) \quad \text{and} \quad \bml_{2} = \mathbf{1}_{[m]} - \max \FP(\F_{M}),
\end{equation}
where the functions $F_{M}(x_{1}, \ldots, x_{m}) = \left(F_{M,j}(x_{1}, \ldots, x_{m}): j \in [m]\right)$ and $\F_{M}(x_{1}, \ldots, x_{m}) = \left(\F_{M,j}(x_{1}, \ldots, x_{m}): j \in [m]\right)$, mapping from $[0,1]^{m}$ to itself, are given by
\begin{multline}\label{F_{M,j}_script_F_{M,j}_defns}
F_{M,j}(x_{1}, \ldots, x_{m}) = \alpha_{j} + 1 - G_{j,S_{j}}\left(1 - G_{k,[m] \setminus S_{k}}\left(x_{\ell}: \ell \in [m] \setminus S_{k}\right) + \beta_{k}: k \in S_{j}\right) \text{ and }\\ \F_{M,j}(x_{1}, \ldots, x_{m}) = \beta_{j} + 1 - G_{j,[m] \setminus S_{j}}\left(1 - G_{k,S_{k}}\left(x_{\ell}: \ell \in S_{k}\right) + \alpha_{k}: k \in [m] \setminus S_{j}\right).
\end{multline}
In case of the escape game, we have
\begin{equation}\label{escape_min_fixed_point}
\besw = \min \FP(F_{E}) \quad \text{and} \quad \beel = \mathbf{1}_{[m]} - \max \FP(\F_{E}),
\end{equation}
where the functions $F_{E}(x_{1}, \ldots, x_{m}) = \left(F_{E,j}(x_{1}, \ldots, x_{m}): j \in [m]\right)$ and $\F_{E}(x_{1}, \ldots, x_{m}) = \left(\F_{E,j}(x_{1}, \ldots, x_{m}): j \in [m]\right)$, mapping from $[0,1]^{m}$ to itself, are given by
\begin{multline}\label{F_{E,j}_script_F_{E,j}_defns}
F_{E,j}(x_{1}, \ldots, x_{m}) = \alpha_{j} + 1 - G_{j,S_{j}}\left(1 - G_{k, [m] \setminus S_{k}}\left(x_{\ell}: \ell \in [m] \setminus S_{k}\right): k \in S_{j} \right) \text{ and} \\ \F_{E,j}(x_{1}, \ldots, x_{m}) = 1 - G_{j, [m] \setminus S_{j}}\left(\alpha_{k} + 1 - G_{k,S_{k}}\left(x_{\ell}: \ell \in S_{k}\right): k \in [m] \setminus S_{j}\right).
\end{multline}
\end{theorem}
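As a numerical illustration of Theorem~\ref{thm:main_1}, consider the bi-type set-up with $S_{b} = \{b\}$, $S_{r} = \{r\}$ on the bi-type binary GW tree described below. Specializing \eqref{F_{N,j}_script_F_{N,j}_defns} gives $F_{N,b}(x_{b}, x_{r}) = 1 - G_{b,\{b\}}(1 - G_{b,\{r\}}(x_{r}))$ and $F_{N,r}(x_{b}, x_{r}) = 1 - G_{r,\{r\}}(1 - G_{r,\{b\}}(x_{b}))$, and $(\nw_{1,b}, \nw_{1,r}) = \min \FP(F_{N})$ is approximated by iterating $F_{N}$ from $(0,0)$. The Python sketch below does this; the parameter values are hypothetical.

```python
# Approximate (nw_{1,b}, nw_{1,r}) = min FP(F_N) for the normal game on
# the bi-type binary GW tree with S_b = {b}, S_r = {r}, where
#   F_{N,b}(x_b, x_r) = 1 - G_{b,{b}}(1 - G_{b,{r}}(x_r)),
#   F_{N,r}(x_b, x_r) = 1 - G_{r,{r}}(1 - G_{r,{b}}(x_b)),
# by iterating F_N from (0, 0). Parameter values below are hypothetical.

def marginal_pgf(none, one, two):
    """pgf of a {0, 1, 2}-valued law with the given probabilities."""
    return lambda x: none + one * x + two * x * x

def normal_game_win_probs(p0, pbb, prr, pbr, q0, qbb, qrr, qbr, n_iter=10000):
    G_b_blue = marginal_pgf(p0 + prr, pbr, pbb)  # blue children of a blue vertex
    G_b_red = marginal_pgf(p0 + pbb, pbr, prr)   # red children of a blue vertex
    G_r_red = marginal_pgf(q0 + qbb, qbr, qrr)   # red children of a red vertex
    G_r_blue = marginal_pgf(q0 + qrr, qbr, qbb)  # blue children of a red vertex
    xb = xr = 0.0
    for _ in range(n_iter):
        xb, xr = (1 - G_b_blue(1 - G_b_red(xr)),
                  1 - G_r_red(1 - G_r_blue(xb)))
    return xb, xr

nw1 = normal_game_win_probs(p0=0.2, pbb=0.4, prr=0.2, pbr=0.2,
                            q0=0.3, qbb=0.2, qrr=0.3, qbr=0.2)
```

Since $F_{N}$ is coordinatewise non-decreasing, the iterates from $(0,0)$ increase to the minimal fixed point.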
Our next two results pertain to two instructive examples: the \emph{bi-type binary GW tree} and the \emph{bi-type Poisson GW tree}. In each of these set-ups, the vertices are assigned one of the colours blue (indicated by $b$) and red (indicated by $r$). We set $S_{b} = \{b\}$ and $S_{r} = \{r\}$, whereby P1 and Stopper are allowed to move only along monochromatic directed edges, and P2 and Escaper only along non-monochromatic directed edges.
The offspring distributions in case of the bi-type binary tree are described as follows:
\begin{enumerate}
\item given $\sigma(v) = b$, $v$ has no child with probability $p_{0}$, two blue children with probability $p_{\blbl}$, two red children with probability $p_{\rr}$, and one red and one blue child with probability $p_{\br}$,
\item given $\sigma(v) = r$, $v$ has no child with probability $q_{0}$, two blue children with probability $q_{\blbl}$, two red children with probability $q_{\rr}$, and one red and one blue child with probability $q_{\br}$.
\end{enumerate}
In the bi-type Poisson tree, each vertex $v$, irrespective of its colour, gives birth to $\poi(\lambda)$ offspring in total, and
\begin{enumerate}
\item conditioned on $\sigma(v) = b$, a child of $v$ is assigned, independent of all else, colour $b$ with probability $p_{b}$, and colour $r$ with probability $p_{r} = 1-p_{b}$;
\item conditioned on $\sigma(v) = r$, a child of $v$ is assigned, independent of all else, colour $b$ with probability $q_{b}$, and colour $r$ with probability $q_{r} = 1-q_{b}$.
\end{enumerate}
\begin{theorem}\label{thm:main_example_1}
When the normal game is played on the bi-type binary GW tree, for each $i = 1, 2$, the draw probabilities $\nd_{i,b}$ and $\nd_{i,r}$ both equal $1$ if $p_{\br} = q_{\br} = 1$. Otherwise, $\nd_{i,b} = \nd_{i,r} = 0$. Analogous conclusions hold for the mis\`{e}re game. In case of the escape game, $\esl_{b} = \esl_{r} = 1$ when $p_{\br} = q_{\br} = 1$, and $\esl_{b} = \esl_{r} = 0$ otherwise.
\end{theorem}
\begin{theorem}\label{thm:main_example_2}
When $p_{b}$ and $q_{r}$ are constants in $(0,1)$, the draw probabilities $\nd_{i,b}$ and $\nd_{i,r}$, for $i = 1, 2$, of the normal game on the bi-type Poisson GW tree can be brought arbitrarily close to $1$ by making $\lambda$ sufficiently large. Analogous conclusions hold for $\md_{i,b}$ and $\md_{i,r}$ in case of the mis\`{e}re game and for $\esl_{b}$ and $\esl_{r}$ in case of the escape game. If
\begin{equation}\label{poisson_second_cond}
p_{b} p_{r} q_{b} q_{r} \leqslant \lambda^{-4} \exp\left\{\lambda p_{r} e^{-\lambda q_{r}} + \lambda p_{b} \exp\left\{-\lambda p_{r} \exp\left\{-\lambda q_{r} e^{-\lambda q_{b}}\right\}\right\} + \lambda q_{r} e^{-\lambda q_{b}}\right\},
\end{equation}
then the above probabilities are all $0$. If
\begin{equation}\label{normal_poisson_cond}
\lambda q_{r} e^{-\lambda q_{b}} \geqslant 1 \text{ and } \lambda p_{b} e^{-\lambda p_{r} e^{-\lambda q_{r} e^{-\lambda q_{b}}}} \geqslant 1,
\end{equation}
then $\nd_{1,b} = 0$. Analogous results hold for $\nd_{1,r}$, $\nd_{2,b}$ and $\nd_{2,r}$. If
\begin{equation}\label{misere_poisson_cond}
\lambda q_{r} e^{-\lambda q_{b}} \geqslant 1 \text{ and } \lambda p_{b} e^{-\lambda p_{r} \left(1 - e^{-\lambda q_{r}}\right)} \geqslant 1,
\end{equation}
then $\md_{1,b} = 0$. Analogous results hold for $\md_{1,r}$, $\md_{2,b}$ and $\md_{2,r}$. If
\begin{equation}\label{escape_poisson_cond}
\lambda q_{r} e^{-\lambda q_{b}} \geqslant 1 \text{ and } \lambda p_{b} e^{-\lambda p_{r} \left(e^{-\lambda q_{r} e^{-\lambda q_{b}}} - e^{-\lambda q_{r}}\right)} \geqslant 1,
\end{equation}
then $\esl_{b} = 0$. An analogous result holds for $\esl_{r}$.
\end{theorem}
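Sufficient conditions such as \eqref{normal_poisson_cond} are explicit enough to be checked numerically for given parameter values. The Python sketch below does so for the normal game; the parameter values tried are hypothetical.

```python
import math

# Check the sufficient condition under which nd_{1,b} = 0 for the normal
# game on the bi-type Poisson GW tree:
#   lam * q_r * exp(-lam * q_b) >= 1, and
#   lam * p_b * exp(-lam * p_r * exp(-lam * q_r * exp(-lam * q_b))) >= 1.
# Parameter values tried below are hypothetical.

def normal_no_draw_condition(lam, p_b, q_b):
    p_r, q_r = 1 - p_b, 1 - q_b
    first = lam * q_r * math.exp(-lam * q_b)
    second = lam * p_b * math.exp(-lam * p_r * math.exp(-first))
    return first >= 1 and second >= 1
```

For instance, the condition holds for $\lambda = 3$, $p_{b} = 0.8$, $q_{b} = 0.1$, but fails for large $\lambda$ with the same colour frequencies, consistent with the draw probabilities tending to $1$ as $\lambda \rightarrow \infty$.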
The next main result compares the probabilities of outcomes of different games with each other.
\begin{theorem}\label{thm:main_2}
The following are true:
\begin{enumerate}
\item \label{main_2_part_1} For each $j \in [m]$, we have $\nw_{1,j} \leqslant \esw_{j}$, $\nl_{2,j} \leqslant \eel_{j}$, $\mw_{1,j} \leqslant \esw_{j}$ and $\ml_{2,j} \leqslant \eel_{j}$.
\item \label{main_2_part_2} When $G_{j,S_{j}}$ and $G_{j,[m] \setminus S_{j}}$ are differentiable for any $j \in [m]$, we have
\begin{multline}\label{main_2_part_2_eq_1}
\esw_{j} \geqslant \alpha_{j} + \min\left\{\eel_{t}: t \in S_{j}\right\}\left(1 - \alpha_{j}\right) + \sum_{k \in S_{j}} \left(\eel_{k} - \min\left\{\eel_{t}: t \in S_{j}\right\}\right)\\ \partial_{k} G_{j,S_{j}}\left(1 - \eel_{t}: t \in S_{j}\right) \geqslant \min\left\{\eel_{t}: t \in S_{j}\right\};
\end{multline}
\begin{multline}
\mw_{1,j} \geqslant \alpha_{j} + \min\left\{\ml_{2,t}: t \in S_{j}\right\}\left(1 - \alpha_{j}\right) + \sum_{k \in S_{j}} \left(\ml_{2,k} - \min\left\{\ml_{2,t}: t \in S_{j}\right\}\right)\\ \partial_{k} G_{j,S_{j}}\left(1 - \ml_{2,t}: t \in S_{j}\right) \geqslant \min\left\{\ml_{2,t}: t \in S_{j}\right\};
\end{multline}
\begin{multline}
\mw_{2,j} \geqslant \beta_{j} + \min\left\{\ml_{1,t}: t \in [m] \setminus S_{j}\right\}\left(1 - \beta_{j}\right) + \sum_{k \in [m] \setminus S_{j}} \left(\ml_{1,k} - \min\left\{\ml_{1,t}: t \in [m] \setminus S_{j}\right\}\right)\\ \partial_{k} G_{j,[m] \setminus S_{j}}\left(1 - \ml_{1,t}: t \in [m] \setminus S_{j}\right) \geqslant \min\left\{\ml_{1,t}: t \in [m] \setminus S_{j}\right\}.
\end{multline}
\item \label{main_2_part_3} Let $G_{j,S_{j}}$ be convex and continuously differentiable, and let $G_{j,[m] \setminus S_{j}}$ be continuous. If
\begin{equation}\label{main_2_part_3_cond_1}
\alpha_{j} \leqslant \sum_{i \in S_{j}} \beta_{i} \Prob\left[X_{v,i} = 1, X_{v,k} = 0 \text{ for all } k \in S_{j} \setminus \{i\}\big|\sigma(v) = j\right],
\end{equation}
for each $j \in [m]$, then $\bnl_{1} \preceq \bml_{1}$. If
\begin{equation}\label{main_2_part_3_cond_2}
\alpha_{j} \geqslant \sum_{i \in S_{j}} \beta_{i} \E\left[X_{v,i}\big|\sigma(v) = j\right],
\end{equation}
then $\bnw_{1} \preceq \bmw_{1}$. Analogous inequalities hold for $\bnl_{2}$, $\bml_{2}$, $\bnw_{2}$ and $\bmw_{2}$.
\end{enumerate}
\end{theorem}
\begin{remark}\label{rem_intuitive}
Note that $\nl_{1,j}^{(2)} = \nl_{1,j}^{(1)} = \alpha_{j}$, since the only way that a normal game, with P1 playing the first round, ends in less than $2$ rounds \emph{and} in P1's loss, is if she fails to make the very first move, i.e.\ the root $\phi$, with $\sigma(\phi) = j$, has no child of colour $k$ for any $k \in S_{j}$. On the other hand, $\ml_{1,j}^{(2)} \geqslant \sum_{i \in S_{j}} \beta_{i} \Prob\left[X_{v,i} = 1, X_{v,k} = 0 \text{ for all } k \in S_{j} \setminus \{i\}\big|\sigma(v) = j\right]$, since if $\phi$ has one child $v$ of colour $k$ for $k \in S_{j}$, no child of colour $k'$ for any $k' \in S_{j} \setminus \{k\}$, and $v$ has no child of colour $\ell$ for any $\ell \in [m] \setminus S_{k}$, then P1 is forced to move the token to $v$ in the first round, and P2 wins immediately. Thus, \eqref{main_2_part_3_cond_1} guarantees $\bnl_{1}^{(2)} \preceq \bml_{1}^{(2)}$, which in turn, surprisingly, guarantees $\bnl_{1} \preceq \bml_{1}$.
Likewise, we note that $\mw_{1,j}^{(2)} = \alpha_{j}$, since the root $\phi$ having no child of colour $k$ for any $k \in S_{j}$ and P1 playing the first round of the mis\`{e}re game implies that she fails to move in the first round and thus wins. On the other hand, if $\phi$ has at least one child $u$ with $\sigma(u) = k \in S_{j}$ and $u$ has no child of colour $\ell$ for any $\ell \in [m] \setminus S_{k}$, then in the normal game, P1 moves the token to such a $u$, and P2 is unable to move in the second round, thus yielding
\begin{align}
\nw_{1,j}^{(2)} &= \sum_{\mathbf{n}_{S_{j}} = \left(n_{k}: k \in S_{j}\right) \in \mathbb{N}_{0}^{|S_{j}|} \setminus \left\{\mathbf{0}_{S_{j}}\right\}} \left\{1 - \prod_{i \in S_{j}}\left(1 - \beta_{i}\right)^{n_{i}}\right\} \chi_{j}\left(\mathbf{n}_{S_{j}}\right),\nonumber
\end{align}
which is bounded above by $\sum_{i \in S_{j}} \beta_{i} \E\left[X_{v,i}\big|\sigma(v) = j\right]$, since $\left\{1 - \prod_{i \in S_{j}}\left(1 - \beta_{i}\right)^{n_{i}}\right\} \leqslant \sum_{i \in S_{j}} \beta_{i} n_{i}$. Thus, \eqref{main_2_part_3_cond_2} guarantees $\bnw_{1}^{(2)} \preceq \bmw_{1}^{(2)}$, which in turn ensures that $\bnw_{1} \preceq \bmw_{1}$.
\end{remark}
Our next main result pertains to the behaviour of the probabilities of the game outcomes as functions of the law of $\mathcal{T}$. Given $\mathcal{T}_{\bm{\chi}} = \mathcal{T}_{[m], \mathbf{p}, \bm{\chi}}$, as defined in Definition~\ref{defn:mult_GW}, for a \emph{fixed} $m$, we denote its law by $\mathcal{L}_{\bm{\chi}}$. We define the distance between two such laws $\mathcal{L}_{\bm{\chi}}$ and $\mathcal{L}_{\bm{\eta}}$ as
\begin{equation}\label{metric_d_{0}}
d_{0}\left(\mathcal{L}_{\bm{\chi}}, \mathcal{L}_{\bm{\eta}}\right) = \max\left\{||\chi_{j} - \eta_{j}||_{\tv}: j \in [m]\right\},
\end{equation}
where $\tv$ indicates the total variation distance between two probability measures.
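When the offspring laws have finite support, the metric $d_{0}$ is straightforward to compute directly from its definition. The following Python sketch evaluates the total variation distance colour by colour and takes the maximum; the two-colour laws below, and their probabilities, are purely illustrative assumptions.

```python
def tv_distance(chi_j, eta_j):
    # Total variation distance between two finitely supported pmfs,
    # each given as a dict mapping an outcome to its probability.
    support = set(chi_j) | set(eta_j)
    return 0.5 * sum(abs(chi_j.get(x, 0.0) - eta_j.get(x, 0.0)) for x in support)

def d0(chi, eta):
    # The metric d_0: the maximum over colours j of ||chi_j - eta_j||_TV.
    return max(tv_distance(c, e) for c, e in zip(chi, eta))

# Hypothetical example with m = 2 colours; an outcome (n_1, n_2) records
# the numbers of children of colours 1 and 2 of a vertex.
chi = [{(0, 0): 0.25, (1, 0): 0.50, (0, 1): 0.25},
       {(0, 0): 0.50, (1, 1): 0.50}]
eta = [{(0, 0): 0.25, (1, 0): 0.25, (0, 1): 0.50},
       {(0, 0): 0.50, (1, 1): 0.50}]
print(d0(chi, eta))  # 0.25
```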
We introduce a slight change of notation in Theorem~\ref{thm:main_3}, to emphasize the dependence of our objects of interest on the law of the GW process under consideration. For instance, on $\mathcal{T}_{\bm{\chi}}$, we replace $\nw_{i,j}$ by $\nw_{i,j,\bm{\chi}}$, $G_{j,S}$ by $G_{j,S,\bm{\chi}}$, $F_{N,j}$ by $F_{N,j,\bm{\chi}}$, and $\alpha_{j}$ and $\beta_{j}$ by $\alpha_{j,\bm{\chi}}$ and $\beta_{j,\bm{\chi}}$ respectively, and so on. Let $\Prob_{\mathcal{L}_{\bm{\chi}}}$ be the probability measure induced by $\mathcal{L}_{\bm{\chi}}$, and let $\E_{\mathcal{L}_{\bm{\chi}}}$ denote expectation under $\Prob_{\mathcal{L}_{\bm{\chi}}}$.
\begin{theorem}\label{thm:main_3}
Keeping $m$ fixed, define the following subsets of laws $\mathcal{L}_{\bm{\chi}}$:
\begin{equation}
\mathcal{D}_{1} = \left\{\mathcal{L}_{\bm{\chi}}: \alpha_{j,\bm{\chi}} > 0 \text{ for each } j \in [m]\right\}, \quad \mathcal{D}_{2} = \left\{\mathcal{L}_{\bm{\chi}}: \beta_{j,\bm{\chi}} > 0 \text{ for each } j \in [m]\right\};\nonumber
\end{equation}
\begin{multline}
\mathcal{D}_{3} = \left\{\mathcal{L}_{\bm{\chi}}: \mathcal{E}_{j,S_{j},\bm{\chi}} = \E_{\mathcal{L}_{\bm{\chi}}}\left[\sum_{k \in S_{j}} X_{v,k}\big|\sigma(v) = j\right] < \infty \text{ for each } j \in [m]\right\}, \\ \mathcal{D}_{4} = \left\{\mathcal{L}_{\bm{\chi}}: \mathcal{E}_{j,[m] \setminus S_{j},\bm{\chi}} = \E_{\mathcal{L}_{\bm{\chi}}}\left[\sum_{k \in [m] \setminus S_{j}} X_{v,k}\big|\sigma(v) = j\right] < \infty \text{ for each }j \in [m]\right\},\nonumber
\end{multline}
where recall from Definition~\ref{defn:mult_GW} that $X_{v,k}$ is the number of children coloured $k$ of a vertex $v$, for $k \in [m]$. Finally, let
\begin{multline}
\mathcal{C}_{1} = \left\{\mathcal{L}_{\bm{\chi}}: G_{j,S_{j},\bm{\chi}}\left(\beta_{k,\bm{\chi}}: k \in S_{j}\right) > \alpha_{j,\bm{\chi}} \text{ for each } j \in [m]\right\} \text{ and } \\ \mathcal{C}_{2} = \left\{\mathcal{L}_{\bm{\chi}}: G_{j,[m] \setminus S_{j},\bm{\chi}}\left(\alpha_{k,\bm{\chi}}: k\in [m] \setminus S_{j}\right) > \beta_{j,\bm{\chi}} \text{ for each } j \in [m]\right\}.\nonumber
\end{multline}
Then, the following are true, for each $i = 1, 2$ and $j \in [m]$:
\begin{enumerate}
\item \label{main_3_part_1} If $\mathcal{L}_{\bm{\chi}} \in \left(\mathcal{D}_{1} \cup \mathcal{D}_{4}\right) \cap \left(\mathcal{D}_{2} \cup \mathcal{D}_{3}\right)$, then $\nw_{i,j,\bm{\chi}}$ and $\nl_{i,j,\bm{\chi}}$ are lower semicontinuous functions of $\mathcal{L}_{\bm{\chi}}$ with respect to the metric $d_{0}$. If, moreover, $\nd_{i,j,\bm{\chi}} = 0$, then $\nw_{i,j,\bm{\chi}}$ and $\nl_{i,j,\bm{\chi}}$ are continuous functions of $\mathcal{L}_{\bm{\chi}}$.
\item \label{main_3_part_2} If $\mathcal{L}_{\bm{\chi}} \in \left(\mathcal{C}_{1} \cup \mathcal{D}_{4}\right) \cap \left(\mathcal{C}_{2} \cup \mathcal{D}_{3}\right)$, then $\mw_{i,j,\bm{\chi}}$ and $\ml_{i,j,\bm{\chi}}$ are lower semicontinuous functions of $\mathcal{L}_{\bm{\chi}}$ with respect to the metric $d_{0}$. If, moreover, $\md_{i,j,\bm{\chi}} = 0$, then $\mw_{i,j,\bm{\chi}}$ and $\ml_{i,j,\bm{\chi}}$ are continuous functions of $\mathcal{L}_{\bm{\chi}}$.
\item \label{main_3_part_3} If $\mathcal{L}_{\bm{\chi}} \in \mathcal{D}_{3} \cap \mathcal{D}_{4}$, then $\esw_{j,\bm{\chi}}$ and $\eel_{j,\bm{\chi}}$ are lower semicontinuous functions of $\mathcal{L}_{\bm{\chi}}$ with respect to the metric $d_{0}$.
\end{enumerate}
\end{theorem}
Our final main result provides a sufficient condition that guarantees a positive probability for Escaper winning the escape game.
\begin{theorem}\label{thm:main_4}
For every $i \in [m]$ and $j \in S_{i}$, let us define the probability $\gamma_{i,j} = \Prob\left[X_{v,j} = 1, X_{v,k} = 0 \text{ for all } k \in S_{i} \setminus \{j\}\big|\sigma(v) = i\right]$, and for $i, j \in [m]$, let $m_{i,j} = \E[X_{v,j}|\sigma(v) = i]$. If there exists a function $f: [m] \rightarrow [m]$ such that $f(i) \in S_{i}$ for every $i \in [m]$, and the matrix $M'' = \left(\left(m''_{i,j}\right)\right)_{i,j \in [m]}$, with $m''_{i,j} = \sum_{k \in [m] \setminus S_{i}: f(k) = j} m_{i,k} \gamma_{k,j}$, has its largest eigenvalue strictly greater than $1$, then Escaper has a positive probability of winning if she plays the first round, i.e.\ $\eew_{j} > 0$ for each $j \in [m]$. Moreover, as long as $\alpha_{j} < 1$ for each $j \in [m]$, we have $\eew_{j} > 0$ for all $j \in [m]$ iff $\esl_{j} > 0$ for all $j \in [m]$.
\end{theorem}
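The spectral condition of Theorem~\ref{thm:main_4} is easy to check numerically for a given law. Below is a minimal Python sketch for a hypothetical two-colour instance; the means $m_{i,j}$, the probabilities $\gamma_{i,j}$ and the choice of $f$ are all illustrative assumptions, not quantities taken from this paper.

```python
import math

m_colours = 2
S = {0: {0}, 1: {1}}            # S_i, with the colours indexed 0 and 1
f = {0: 0, 1: 1}                # f(i) must lie in S_i
mean = [[2.5, 3.0],             # mean[i][j] = E[X_{v,j} | sigma(v) = i]
        [3.0, 2.5]]             # (hypothetical values)
gamma = [[0.5, 0.0],            # gamma[i][j], meaningful only for j in S_i
         [0.0, 0.5]]            # (hypothetical values)

# m''_{i,j} = sum over k outside S_i with f(k) = j of mean[i][k] * gamma[k][j]
M2 = [[sum(mean[i][k] * gamma[k][j]
           for k in range(m_colours) if k not in S[i] and f[k] == j)
       for j in range(m_colours)]
      for i in range(m_colours)]

# Spectral radius of the 2x2 matrix via the quadratic formula; the
# discriminant is nonnegative here since the entries are nonnegative.
(a, b), (c, d) = M2
disc = (a - d) ** 2 + 4 * b * c
rho = max(abs((a + d + s * math.sqrt(disc)) / 2) for s in (1, -1))
print(rho, rho > 1)             # Theorem 4 requires rho > 1
```

For these illustrative parameters $\rho = 1.5 > 1$, so the sufficient condition of the theorem holds.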
We now describe the organization of this paper. \S\ref{sec:proof_theorem_1} is dedicated to the proof of Theorem~\ref{thm:main_1}, with \S\ref{subsec:normal_recursions}, \S\ref{subsec:misere_recursions} and \S\ref{subsec:escape_recursions} respectively addressing the normal, mis\`{e}re and escape games. The proofs of Theorems~\ref{thm:main_example_1} and \ref{thm:main_example_2} are contained, respectively, in \S\ref{subsec:main_example_1_proof} and \S\ref{subsec:main_example_2_proof} of \S\ref{sec:main_examples_proof}. The proofs of parts \ref{main_2_part_1}, \ref{main_2_part_2} and \ref{main_2_part_3} of Theorem~\ref{thm:main_2} are given in three separate subsections \S\ref{subsec:main_2_part_1}, \S\ref{subsec:main_2_part_2} and \S\ref{subsec:main_2_part_3} of \S\ref{sec:main_2_proof}. Theorems~\ref{thm:main_3} and \ref{thm:main_4} are respectively proved in \S\ref{sec:main_3_proof} and \S\ref{sec:main_4_proof}.
\section{Proof of Theorem~\ref{thm:main_1}}\label{sec:proof_theorem_1}
\subsection{The normal games}\label{subsec:normal_recursions}
For every $n \in \mathbb{N}$, every $i = 1, 2$ and each $j \in [m]$, let
\begin{enumerate}
\item $\NW_{i,j}^{(n)} \subset \NW_{i,j}$ comprise vertices $v$ such that if $v$ is the initial vertex and P$i$ plays the first round, then P$i$ wins in less than $n$ rounds;
\item $\NL_{i,j}^{(n)} \subset \NL_{i,j}$ comprise vertices $v$ such that if $v$ is the initial vertex and P$i$ plays the first round, then P$i$ loses in less than $n$ rounds;
\item $\ND_{i,j}^{(n)}$ comprise all vertices $v$ of $\mathcal{T}$ with $\sigma(v) = j$, such that $v \notin \NW_{i,j}^{(n)} \cup \NL_{i,j}^{(n)}$.
\end{enumerate}
In other words, if $v \in \ND_{i,j}^{(n)}$ is the initial vertex and P$i$ plays the first round, the outcome of the game cannot be decided in less than $n$ rounds. We set $\NW_{i,j}^{(0)} = \NL_{i,j}^{(0)} = \emptyset$. We define $\nw_{i,j}^{(n)}$, $\nl_{i,j}^{(n)}$ and $\nd_{i,j}^{(n)}$ to be the probabilities that the root $\phi$ belongs to $\NW_{i,j}^{(n)}$, $\NL_{i,j}^{(n)}$ and $\ND_{i,j}^{(n)}$ respectively, conditioned on $\sigma(\phi) = j$.
The following compactness result establishes that if a player is destined to win the normal game, she is able to do so in a finite number of rounds:
\begin{lemma}\label{lem:normal_compactness}
For each $i = 1, 2$ and $j \in [m]$, the subsets $\widetilde{\NW}_{i,j} = \NW_{i,j} \setminus \left(\bigcup_{n=1}^{\infty} \NW_{i,j}^{(n)}\right)$ and $\widetilde{\NL}_{i,j} = \NL_{i,j} \setminus \left(\bigcup_{n=1}^{\infty} \NL_{i,j}^{(n)}\right)$ are empty.
\end{lemma}
\begin{proof}
We only present the proof for $\widetilde{\NW}_{1,j}$, since the arguments to prove the rest of the claim are very similar. We show that, since P1 cannot guarantee to win the game in a finite number of rounds, P2 can actually ensure that the game results in a draw. For the initial vertex $v_{1}$ to be in $\widetilde{\NW}_{1,j}$, the following must be true:
\begin{itemize}
\item $\sigma(v_{1}) = j$;
\item $v_{1}$ cannot have any child $u$ in $\NL_{2,k}^{(n)}$ for any $k \in S_{j}$ and $n \in \mathbb{N}$, because in that case, P1 wins in less than $n+1$ rounds;
\item $v_{1}$ must have at least one child $v_{2}$ in $\widetilde{\NL}_{2,k}$ for some $k \in S_{j}$, and it is to such a $v_{2}$ that P1 moves the token in the first round, under optimal play.
\end{itemize}
Next, for $v_{2}$ to be in $\widetilde{\NL}_{2,k}$, the following must be true:
\begin{itemize}
\item $\sigma(v_{2}) = k$;
\item for every $\ell \in [m] \setminus S_{k}$, every child $u$ of $v_{2}$ with $\sigma(u) = \ell$ must be in $\NW_{1,\ell}$, since otherwise, P2 would not lose the game;
\item $v_{2}$ must have at least one child $v_{3}$ in $\widetilde{\NW}_{1,\ell}$ for some $\ell \in [m] \setminus S_{k}$. This is because, for every $\ell \in [m] \setminus S_{k}$ and every child $u$ of $v_{2}$ with $\sigma(u) = \ell$, if we have $u \in \NW_{1,\ell}^{(n)}$ for some $n \in \mathbb{N}$, then no matter where P2 moves the token in the second round, she would lose the game in a finite number of rounds. Moreover, assuming optimal play, P2 would move the token to some $v_{3}$ in $\widetilde{\NW}_{1,\ell}$, for some $\ell \in [m] \setminus S_{k}$, in the second round.
\end{itemize}
These observations reveal a pattern: the token starts at $v_{1}$ in $\widetilde{\NW}_{1,j}$, and for each $n \in \mathbb{N}$,
\begin{itemize}
\item before the $2n$-th round, it is at a vertex $v_{2n} \in \widetilde{\NL}_{2,\ell}$ for some $\ell \in S_{\sigma(v_{2n-1})}$;
\item before the $(2n+1)$-st round, it is at a vertex $v_{2n+1} \in \widetilde{\NW}_{1,\ell'}$ for some $\ell' \in [m] \setminus S_{\sigma(v_{2n})}$.
\end{itemize}
It is clear that the game never comes to an end, thereby resulting in a draw. This contradicts $v_{1} \in \widetilde{\NW}_{1,j} \subseteq \NW_{1,j}$, and hence $\widetilde{\NW}_{1,j}$ is empty.
\end{proof}
The next lemma states a crucially used consequence of Lemma~\ref{lem:normal_compactness}:
\begin{lemma}\label{lem:normal_compactness_consequence}
For each $i = 1, 2$ and $j \in [m]$, we have $\nw_{i,j}^{(n)} \uparrow \nw_{i,j}$ and $\nl_{i,j}^{(n)} \uparrow \nl_{i,j}$ as $n \rightarrow \infty$.
\end{lemma}
\begin{proof}
By definition, it is immediate that $\NW_{i,j}^{(n)} \subseteq \NW_{i,j}^{(n+1)}$ and $\NL_{i,j}^{(n)} \subseteq \NL_{i,j}^{(n+1)}$. Consequently, $\left\{\NW_{i,j}^{(n)}\right\}_{n}$ and $\left\{\NL_{i,j}^{(n)}\right\}_{n}$ are increasing sequences of subsets. This, combined with Lemma~\ref{lem:normal_compactness}, gives us the conclusion of Lemma~\ref{lem:normal_compactness_consequence}.
\end{proof}
The rest of \S\ref{subsec:normal_recursions} is dedicated to deriving recursive relations and using these to prove the first part of Theorem~\ref{thm:main_1}. For a vertex $v$ to be in $\NW_{1,j}^{(n+1)}$, for any $j \in [m]$ and $n \in \mathbb{N}$, we must have $\sigma(v) = j$, and $v$ must have at least one child $u \in \NL_{2,k}^{(n)}$ for some $k \in S_{j}$. Therefore, we have
\begin{align}\label{normal_recur_1}
\nw_{1,j}^{(n+1)} &= \sum_{n_{r} \in \mathbb{N}_{0}: r \in [m]} \left\{1 - \prod_{k \in S_{j}} \left(1 - \nl_{2,k}^{(n)}\right)^{n_{k}}\right\} \chi_{j}(n_{1}, \ldots, n_{m}) = 1 - G_{j, S_{j}}\left(1 - \nl_{2,k}^{(n)}: k \in S_{j}\right).
\end{align}
We now verify that this recursion holds when $n = 0$. Since $\nl_{2,k}^{(0)} = 0$ for all $k \in S_{j}$, we have $G_{j, S_{j}}\left(1 - \nl_{2,k}^{(0)}: k \in S_{j}\right) = 1$, so that the right side of \eqref{normal_recur_1} equals $0$. For the fate of the game to be decided in less than $1$ round, P1 must be unable to make her very first move, which is impossible if P1 is destined to win the game. Consequently, $\nw_{1,j}^{(1)} = 0$, showing that \eqref{normal_recur_1} holds for $n = 0$ as well. Likewise, for all $n \in \mathbb{N}_{0}$ and $j \in [m]$, we have
\begin{equation}\label{normal_recur_2}
\nw_{2,j}^{(n+1)} = 1 - G_{j, [m] \setminus S_{j}}\left(1 - \nl_{1,k}^{(n)}: k \in [m] \setminus S_{j}\right).
\end{equation}
For $v$ to be in $\NL_{1,j}^{(n+1)}$, for any $n \in \mathbb{N}$, we must have $\sigma(v) = j$, and every child $u$ of $v$ with $\sigma(u) = k$ for any $k \in S_{j}$ must be in $\NW_{2,k}^{(n)}$. Therefore
\begin{align}\label{normal_recur_3}
\nl_{1,j}^{(n+1)} &= \sum_{n_{r} \in \mathbb{N}_{0}: r \in [m]} \prod_{k \in S_{j}} \left(\nw_{2,k}^{(n)}\right)^{n_{k}} \chi_{j}(n_{1}, \ldots, n_{m}) = G_{j,S_{j}}\left(\nw_{2,k}^{(n)}: k \in S_{j}\right).
\end{align}
Once again, we verify this recursion for $n = 0$. For the fate of the game to be decided in less than $1$ round, P1 must be unable to make her very first move, which happens only if $v$ does not have any child $u$ with $\sigma(u) = k$ for any $k \in S_{j}$. The probability of this event is equal to $G_{j,S_{j}}\left(\mathbf{0}_{S_{j}}\right) = G_{j,S_{j}}\left(\nw_{2,k}^{(0)}: k \in S_{j}\right)$, thus showing that \eqref{normal_recur_3} holds for $n = 0$. Likewise, for each $j \in [m]$ and $n \in \mathbb{N}_{0}$, we have
\begin{equation}\label{normal_recur_4}
\nl_{2,j}^{(n+1)} = G_{j, [m] \setminus S_{j}}\left(\nw_{1,k}^{(n)}: k \in [m] \setminus S_{j}\right).
\end{equation}
Using the above recursions, defining $F_{N}$, $\F_{N}$, $F_{N,j}$ and $\F_{N,j}$ as in \eqref{F_{N,j}_script_F_{N,j}_defns}, and setting $\bnw_{i}^{(n)} = \left(\nw_{i,j}^{(n)}: j \in [m]\right)$ for $i = 1, 2$, we get
$\bnw_{1}^{(n+2)} = F_{N}\left(\bnw_{1}^{(n)}\right)$ and $\bnw_{2}^{(n+2)} = \F_{N}\left(\bnw_{2}^{(n)}\right)$ for all $n \in \mathbb{N}_{0}$.
From Lemma~\ref{lem:normal_compactness_consequence}, we get
\begin{equation}
\bnw_{1} = \lim_{n \rightarrow \infty} \bnw_{1}^{(2n)} = \lim_{n \rightarrow \infty} F_{N}^{(n)}\left(\bnw_{1}^{(0)}\right) = \lim_{n \rightarrow \infty} F_{N}^{(n)}\left(\mathbf{0}_{[m]}\right).\nonumber
\end{equation}
Thus, $\bnw_{1}$ is a fixed point of $F_{N}$. Likewise, $\bnw_{2}$ is a fixed point of $\F_{N}$ with $\bnw_{2} = \lim_{n \rightarrow \infty} \F_{N}^{(n)}\left(\mathbf{0}_{[m]}\right)$.
It can be easily verified that $G_{j,S}(x_{i}: i \in S) \leqslant G_{j,S}(y_{i}: i \in S)$ for any $S \subset [m]$ and tuples $(x_{i}: i \in S)$ and $(y_{i}: i \in S)$ in $[0,1]^{|S|}$ with $(x_{i}: i \in S) \preceq (y_{i}: i \in S)$. Thus, whenever $(x_{1}, \ldots, x_{m}) \preceq (y_{1}, \ldots, y_{m})$, we have $F_{N}(x_{1}, \ldots, x_{m}) \preceq F_{N}(y_{1}, \ldots, y_{m})$ and $\F_{N}(x_{1}, \ldots, x_{m}) \preceq \F_{N}(y_{1}, \ldots, y_{m})$. Therefore
\begin{equation}\label{normal_min_fixed_point}
\bnw_{1} = \min \FP(F_{N}) \quad \text{and} \quad \bnw_{2} = \min \FP(\F_{N}).
\end{equation}
The above recursions also yield $\bnl_{1}^{(n+2)} = \mathbf{1}_{[m]} - F_{N}\left(\mathbf{1}_{[m]} - \bnl_{1}^{(n)}\right)$ and $\bnl_{2}^{(n+2)} = \mathbf{1}_{[m]} - \F_{N}\left(\mathbf{1}_{[m]} - \bnl_{2}^{(n)}\right)$, where we set $\bnl_{i}^{(n)} = \left(\nl_{i,j}^{(n)}: j \in [m]\right)$ for $i = 1, 2$. Lemma~\ref{lem:normal_compactness_consequence} then yields
\begin{align}\label{normal_loss_fixed_point_1}
\bnl_{1} = \lim_{n \rightarrow \infty} \bnl_{1}^{(2n)} = \lim_{n \rightarrow \infty} \mathbf{1}_{[m]} - F_{N}^{(n)}\left(\mathbf{1}_{[m]} - \bnl_{1}^{(0)}\right) = \mathbf{1}_{[m]} - \lim_{n \rightarrow \infty} F_{N}^{(n)}\left(\mathbf{1}_{[m]}\right),
\end{align}
and likewise, $\bnl_{2} = \mathbf{1}_{[m]} - \lim_{n \rightarrow \infty} \F_{N}^{(n)}\left(\mathbf{1}_{[m]}\right)$. The observations made above on the monotonically increasing nature of $F_{N}$ and $\F_{N}$ allow us to conclude that
\begin{equation}\label{normal_max_fixed_point}
\bnl_{1} = \mathbf{1}_{[m]} - \max \FP(F_{N}) \quad \text{and} \quad \bnl_{2} = \mathbf{1}_{[m]} - \max \FP(\F_{N}).
\end{equation}
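The characterizations of $\bnw_{1}$ and $\bnl_{1}$ as extreme fixed points suggest a direct numerical scheme: iterate the one-round recursions starting from $\mathbf{0}_{[m]}$ (yielding $\bnw_{1}$) and from $\mathbf{1}_{[m]}$ (yielding $\mathbf{1}_{[m]} - \bnl_{1}$). The following self-contained Python sketch does this for a hypothetical two-colour law, in which a vertex of either colour has no children with probability $1/2$ and exactly one blue and one red child otherwise, with $S_{b} = \{b\}$ and $S_{r} = \{r\}$; the law and the iteration count are illustrative assumptions.

```python
def G(chi_j, S, x):
    # G_{j,S}(x_k : k in S): generating function of the offspring law
    # chi_j in the coordinates of S, with all other coordinates set to 1.
    total = 0.0
    for counts, p in chi_j.items():
        term = p
        for k in S:
            term *= x[k] ** counts[k]
        total += term
    return total

def iterate_normal(chi, S, start, rounds=200):
    # Iterate the one-round recursions of the normal game:
    # start = 0.0 converges to nw_1 (the min fixed point of F_N),
    # start = 1.0 converges to 1 - nl_1 (the max fixed point of F_N).
    m = len(chi)
    comp = [[k for k in range(m) if k not in S[j]] for j in range(m)]
    nw1 = [start] * m
    for _ in range(rounds):
        nl2 = [G(chi[j], comp[j], nw1) for j in range(m)]
        nw1 = [1 - G(chi[j], S[j], [1 - t for t in nl2]) for j in range(m)]
    return nw1

# Hypothetical toy law: colours 0 (blue) and 1 (red).
chi = [{(0, 0): 0.5, (1, 1): 0.5}, {(0, 0): 0.5, (1, 1): 0.5}]
S = [{0}, {1}]
lo = iterate_normal(chi, S, 0.0)   # nw_1: approaches [1/3, 1/3]
hi = iterate_normal(chi, S, 1.0)   # 1 - nl_1: also [1/3, 1/3]
print(lo, hi)
```

For this toy law the two limits coincide, so $\nd_{1,j} = 0$ and $\nl_{1,j} = 2/3$ for both colours.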
\subsection{The mis\`{e}re games}\label{subsec:misere_recursions}
We define the subsets $\MW_{i,j}^{(n)}$, $\ML_{i,j}^{(n)}$ and $\MD_{i,j}^{(n)}$ analogous to $\NW_{i,j}^{(n)}$, $\NL_{i,j}^{(n)}$ and $\ND_{i,j}^{(n)}$ respectively, and the conditional probabilities $\mw_{i,j}^{(n)}$, $\ml_{i,j}^{(n)}$ and $\md_{i,j}^{(n)}$ analogous to $\nw_{i,j}^{(n)}$, $\nl_{i,j}^{(n)}$ and $\nd_{i,j}^{(n)}$ respectively, for $i = 1, 2$, $j \in [m]$ and $n \in \mathbb{N}_{0}$.
We outline the proof of Lemma~\ref{lem:misere_compactness} since it is very similar to that of Lemma~\ref{lem:normal_compactness}:
\begin{lemma}\label{lem:misere_compactness}
For each $i = 1, 2$ and $j \in [m]$, the subsets $\widetilde{\MW}_{i,j} = \MW_{i,j} \setminus \left(\bigcup_{n=1}^{\infty} \MW_{i,j}^{(n)}\right)$ and $\widetilde{\ML}_{i,j} = \ML_{i,j} \setminus \left(\bigcup_{n=1}^{\infty} \ML_{i,j}^{(n)}\right)$ are empty.
\end{lemma}
\begin{proof}
We show that $\widetilde{\MW}_{1,j}$ is empty by proving that P2 is able to ensure a draw instead of her loss. For the initial vertex $v_{1}$ to be in $\widetilde{\MW}_{1,j}$, it must have at least one child $u$ in $\ML_{2,k}$ for some $k \in S_{j}$, and no child of $v_{1}$ should be in $\ML_{2,k}^{(n)}$ for any $n \in \mathbb{N}$ and any $k \in S_{j}$. Thus, $v_{1}$ must have at least one child $v_{2}$ in $\widetilde{\ML}_{2,k}$ for some $k \in S_{j}$, and under optimal play, P1 moves the token to such a $v_{2}$ in the first round.
Next, for $v_{2}$ to be in $\widetilde{\ML}_{2,k}$, every child $u$ of $v_{2}$ with $\sigma(u) = \ell$ for any $\ell \in [m] \setminus S_{k}$ has to be in $\MW_{1,\ell}$, and $v_{2}$ must have at least one child $v_{3}$ in $\widetilde{\MW}_{1,\ell}$ for some $\ell \in [m] \setminus S_{k}$. Again, it is to such a $v_{3}$ that P2 moves the token under optimal play. These observations reveal the pattern along which the game proceeds, showing that the game never comes to an end, thus resulting in a draw.
\end{proof}
Similar to Lemma~\ref{lem:normal_compactness_consequence}, we conclude from Lemma~\ref{lem:misere_compactness} that as $n \rightarrow \infty$,
\begin{equation}\label{eq:misere_compactness_consequence}
\mw_{i,j}^{(n)} \uparrow \mw_{i,j} \text{ and } \ml_{i,j}^{(n)} \uparrow \ml_{i,j} \text{ for each } i = 1, 2 \text{ and } j \in [m].
\end{equation}
The derivation of recursion relations is quite similar to those of \S\ref{subsec:normal_recursions}. For $j \in [m]$ and $n \in \mathbb{N}$, an initial vertex $v$ is in $\MW_{1,j}^{(n+1)}$ if either $v$ has no child $u$ with $\sigma(u) = k$ for any $k \in S_{j}$, or $v$ has at least one child $u$ in $\ML_{2,k}^{(n)}$ for some $k \in S_{j}$. Recalling $\alpha_{j} = G_{j,S_{j}}\left(\mathbf{0}_{S_{j}}\right)$ from \S\ref{subsec:main_results}, we get
\begin{multline}\label{misere_recur_1}
\mw_{1,j}^{(n+1)} = \alpha_{j} + \sum_{n_{r} \in \mathbb{N}_{0}: r \in [m]} \left\{1 - \prod_{k \in S_{j}}\left(1 - \ml_{2,k}^{(n)}\right)^{n_{k}}\right\} \chi_{j}(n_{1}, \ldots, n_{m}) \\= \alpha_{j} + 1 - G_{j,S_{j}}\left(1 - \ml_{2,k}^{(n)}: k \in S_{j}\right).
\end{multline}
We now verify this recursion for $n = 0$. For the outcome to be decided in less than $1$ round, P1 must be unable to make her very first move, thus winning the game immediately. This happens only if $v$ has no child $u$ with $\sigma(u) = k$ for any $k \in S_{j}$, and this event has probability $\alpha_{j}$. Thus $\mw_{1,j}^{(1)} = \alpha_{j}$. On the other hand, since $\ml_{2,k}^{(0)} = 0$ for each $k \in S_{j}$, we get $G_{j,S_{j}}\left(1 - \ml_{2,k}^{(0)}: k \in S_{j}\right) = 1$, making the right side of \eqref{misere_recur_1} equal $\alpha_{j}$ for $n = 0$. This concludes the verification. Likewise, for each $j \in [m]$ and $n \in \mathbb{N}_{0}$, we deduce that
\begin{equation}
\mw_{2,j}^{(n+1)} = \beta_{j} + 1 - G_{j,[m] \setminus S_{j}}\left(1 - \ml_{1,k}^{(n)}: k \in [m] \setminus S_{j}\right). \nonumber
\end{equation}
For an initial vertex $v$ to be in $\ML_{1,j}^{(n+1)}$ for $n \in \mathbb{N}$, it must have at least one child $u$ with $\sigma(u) = k$ for some $k \in S_{j}$, and every child $u$ with $\sigma(u) = k$ for any $k \in S_{j}$ must be in $\MW_{2,k}^{(n)}$. Thus,
\begin{align}\label{misere_recur_3}
\ml_{1,j}^{(n+1)} &= \sum_{\substack{n_{r} \in \mathbb{N}_{0}: r \in [m]\\(n_{k}: k \in S_{j}) \neq \mathbf{0}_{S_{j}}}} \prod_{k \in S_{j}}\left(\mw_{2,k}^{(n)}\right)^{n_{k}} \chi_{j}(n_{1}, \ldots, n_{m}) = G_{j,S_{j}}\left(\mw_{2,k}^{(n)}: k \in S_{j}\right) - \alpha_{j}.
\end{align}
Once again, we verify this recursion for $n = 0$. For the outcome to be decided in less than $1$ round, P1 must be unable to make her first move, but that would allow her to win. Since P1 is to lose, this is not an option, and hence $\ml_{1,j}^{(1)} = 0$. On the other hand, since $\mw_{2,k}^{(0)} = 0$ for each $k \in S_{j}$, we have $G_{j,S_{j}}\left(\mw_{2,k}^{(0)}: k \in S_{j}\right) = \alpha_{j}$, making the right side of \eqref{misere_recur_3} equal $0$ as well. Likewise, for each $j \in [m]$ and $n \in \mathbb{N}_{0}$, we have
\begin{equation}
\ml_{2,j}^{(n+1)} = G_{j,[m] \setminus S_{j}}\left(\mw_{1,k}^{(n)}: k \in [m] \setminus S_{j}\right) - \beta_{j}.\nonumber
\end{equation}
Using these recursions, defining $F_{M}$, $\F_{M}$, $F_{M,j}$ and $\F_{M,j}$ as in \eqref{F_{M,j}_script_F_{M,j}_defns}, and setting $\bmw_{i}^{(n)} = \left(\mw_{i,j}^{(n)}: j \in [m]\right)$ for $i = 1, 2$, we get $\bmw_{1}^{(n+2)} = F_{M}\left(\bmw_{1}^{(n)}\right)$ and $\bmw_{2}^{(n+2)} = \F_{M}\left(\bmw_{2}^{(n)}\right)$. Using \eqref{eq:misere_compactness_consequence}, we get $\bmw_{1} = \lim_{n \rightarrow \infty} F_{M}^{(n)}\left(\mathbf{0}_{[m]}\right)$ and $\bmw_{2} = \lim_{n \rightarrow \infty} \F_{M}^{(n)}\left(\mathbf{0}_{[m]}\right)$, and utilizing the monotonically increasing nature of both $F_{M}$ and $\F_{M}$, we get
\begin{equation}\label{misere_min_fixed_point}
\bmw_{1} = \min \FP(F_{M}) \quad \text{and} \quad \bmw_{2} = \min \FP(\F_{M}).
\end{equation}
The above recursions also yield $\bml_{1}^{(n+2)} = \mathbf{1}_{[m]} - F_{M}\left(\mathbf{1}_{[m]} - \bml_{1}^{(n)}\right)$ and $\bml_{2}^{(n+2)} = \mathbf{1}_{[m]} - \F_{M}\left(\mathbf{1}_{[m]} - \bml_{2}^{(n)}\right)$, where $\bml_{i}^{(n)} = \left(\ml_{i,j}^{(n)}: j \in [m]\right)$ for $i = 1, 2$, so that \eqref{eq:misere_compactness_consequence} leads to
\begin{equation}\label{misere_max_fixed_point}
\bml_{1} = \mathbf{1}_{[m]} - \max \FP(F_{M}) \quad \text{and} \quad \bml_{2} = \mathbf{1}_{[m]} - \max \FP(\F_{M}).
\end{equation}
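The mis\`{e}re iteration differs from the normal one only through the offsets $\alpha_{j}$ and $\beta_{j}$. A scalar Python sketch for a hypothetical symmetric two-colour law (a vertex of either colour has no children with probability $1/2$ and one blue plus one red child otherwise, so that $\alpha_{j} = \beta_{j} = 1/2$ and, by symmetry, $\mw_{1,b} = \mw_{1,r}$):

```python
# One-round misere recursions for the symmetric toy law, where by the
# blue-red symmetry mw_{1,b} = mw_{1,r} =: mw1 and ml_{2,b} = ml_{2,r} =: ml2.
# Here G_{j,S_j}(x) = G_{j,[m]\S_j}(x) = 1/2 + x/2 and alpha = beta = 1/2.
mw1 = 0.0                       # start at 0: converges to min FP(F_M)
for _ in range(200):
    ml2 = 0.5 * mw1             # G_{j,[m]\S_j}(mw1) - beta_j
    mw1 = 0.5 + 0.5 * ml2       # alpha_j + 1 - G_{j,S_j}(1 - ml2)
print(mw1)                      # converges to 2/3
```

Starting the same iteration from $1$ instead of $0$ converges to the same limit for this toy law, so $\md_{1,j} = 0$ and $\ml_{1,j} = 1/3$.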
\subsection{The escape games}\label{subsec:escape_recursions}
For $j \in [m]$ and $n \in \mathbb{N}$, we let $\ESW_{j}^{(n)} \subset \ESW_{j}$ comprise vertices $v$ such that if $v$ is the initial vertex and Stopper plays the first round, she wins in less than $n$ rounds. We let $\EEL_{j}^{(n)} \subset \EEL_{j}$ comprise vertices $v$ such that if $v$ is the initial vertex and Escaper plays the first round, she loses in less than $n$ rounds. We set $\ESW_{j}^{(0)} = \EEL_{j}^{(0)} = \emptyset$. We let $\ESL_{j}^{(n)}$ and $\EEW_{j}^{(n)}$ be the complements of $\ESW_{j}^{(n)}$ and $\EEL_{j}^{(n)}$ respectively, within the set of vertices of $\mathcal{T}$ of colour $j$.
\begin{lemma}\label{lem:escape_compactness}
For each $j \in [m]$, the subsets $\widetilde{\ESW}_{j} = \ESW_{j} \setminus \left(\bigcup_{n=1}^{\infty}\ESW_{j}^{(n)}\right)$ and $\widetilde{\EEL}_{j} = \EEL_{j} \setminus \left(\bigcup_{n=1}^{\infty}\EEL_{j}^{(n)}\right)$ are empty.
\end{lemma}
\begin{proof}
We prove the claim for $\widetilde{\ESW}_{j}$. Given a realization $T$ of $\mathcal{T}$, we construct a corresponding rooted tree $T'$ so as to draw a parallel between the escape game played on $T$ and a suitable normal game on $T'$. Let $w$ be the initial vertex for the escape game on $T$. We denote the root of $T'$ by $w_{e}$. For every vertex $v$ of $T(w) \setminus \{w\}$, we create two copies of $v$ in $T'$ -- the odd copy $v_{o}$ and the even copy $v_{e}$, and we assign colours $\sigma'(v_{o}) = \sigma'(v_{e}) = \sigma(v)$, where $\sigma(v)$ is the colour of $v$ in $T$. If $u$ is the parent of $v$ in $T(w)$, then in $T'$, we include only the directed edges $(u_{e}, v_{o})$ and $(u_{o}, v_{e})$. If $v$ in $T(w)$ is at an even distance from $w$ and $v$ has no child of colour $k$ for any $k \in S_{\sigma(v)}$, we call $v$ \emph{special}. In this case, we choose some $k \in S_{\sigma(v)}$, add a vertex $\nu_{v}$ to $T'$ with $\sigma'(\nu_{v}) = k$, add the edge $\left(v_{e}, \nu_{v}\right)$, and keep $\nu_{v}$ childless.
The normal game on $T'$ begins at $w_{e}$ and P1 plays the first round. The permissible edges for P1 are the same as those for Stopper, and the permissible edges for P2 are the same as those for Escaper. Stopper executes the following strategy. If on $T'$, P1 moves the token from $u_{e}$ to $v_{o}$, and $u$ is not a special vertex in $T$, then in the corresponding round of the escape game, Stopper moves the token from $u$ to $v$. If $u$ is a special vertex, then P1 is able to move the token from $u_{e}$ to $\nu_{u}$, but Stopper fails to make a move and the escape game comes to an end.
We argue that Stopper wins the escape game on $T$ iff P1 wins the normal game on $T'$. If Stopper wins because of being unable to make a move, then the escape game must have reached, at the end of an even round, a special vertex $u$ in $T(w)$. By our construction, P1 moves the token, in that round, from $u_{e}$ to $\nu_{u}$, but in the next round, P2 gets stuck since $\nu_{u}$ is childless. If Stopper wins because of Escaper being unable to make a move, then the escape game must have reached, at the end of an odd round, a vertex $u$ in $T(w)$ such that $u$ has no child $v$ with $\sigma(v) \in [m] \setminus S_{\sigma(u)}$. In this case, the normal game, at the end of the corresponding round, must have reached $u_{o}$, and there exists no child $v_{e}$ of $u_{o}$ with $\sigma'(v_{e}) \in [m] \setminus S_{\sigma'(u_{o})}$. Consequently, P2 fails to move in this round and loses. Thus $\widetilde{\ESW}_{j}$ on $T$ corresponds to $\widetilde{\NW}_{1,j}$ on $T'$. By Lemma~\ref{lem:normal_compactness}, the conclusion follows.
\end{proof}
Letting $\esw_{j}^{(n)}$ and $\eel_{j}^{(n)}$ denote the probabilities, conditioned on $\sigma(\phi) = j$, that $\phi$ belongs to $\ESW_{j}^{(n)}$ and $\EEL_{j}^{(n)}$ respectively, for each $n \in \mathbb{N}_{0}$, we conclude, as in Lemma~\ref{lem:normal_compactness_consequence}, that as $n \rightarrow \infty$,
\begin{equation}\label{eq:escape_compactness_consequence}
\esw_{j}^{(n)} \uparrow \esw_{j} \text{ and } \eel_{j}^{(n)} \uparrow \eel_{j} \text{ for each } j \in [m].
\end{equation}
For an initial vertex $v$ to be in $\ESW_{j}^{(n+1)}$, for $n \in \mathbb{N}$, either $v$ has no child of colour $k$ for any $k \in S_{j}$, or $v$ has at least one child $u$ in $\EEL_{k}^{(n)}$ for some $k \in S_{j}$. Thus,
\begin{multline}\label{escape_recur_1}
\esw_{j}^{(n+1)} = \alpha_{j} + \sum_{n_{r} \in \mathbb{N}_{0}: r \in [m]} \left\{1 - \prod_{k \in S_{j}}\left(1 - \eel_{k}^{(n)}\right)^{n_{k}}\right\} \chi_{j}(n_{1}, \ldots, n_{m}) \\= \alpha_{j} + 1 - G_{j,S_{j}}\left(1 - \eel_{k}^{(n)}: k \in S_{j}\right).
\end{multline}
We now verify this recursion for $n = 0$. For Stopper to win the game in less than $1$ round, she must fail to make a move in the very first round, which happens only if $v$ has no child of colour $k$ for any $k \in S_{j}$. The probability of this event is $\alpha_{j}$. Hence, $\esw_{j}^{(1)} = \alpha_{j}$. On the other hand, $1 - G_{j,S_{j}}\left(1 - \eel_{k}^{(0)}: k \in S_{j}\right) = 0$ since $\eel_{k}^{(0)} = 0$ for each $k \in S_{j}$. This concludes the verification. For $v$ to be in $\EEL_{j}^{(n+1)}$ for some $n \in \mathbb{N}$, either $v$ has no child of colour $k$ for any $k \in [m] \setminus S_{j}$, or, for every $k \in [m] \setminus S_{j}$, every child $u$ of $v$ with $\sigma(u) = k$ is in $\ESW_{k}^{(n)}$. Thus
\begin{multline}\label{escape_recur_2}
\eel_{j}^{(n+1)} = \beta_{j} + \sum_{\substack{n_{r} \in \mathbb{N}_{0}: r \in [m]\\\left(n_{k}: k \in [m] \setminus S_{j}\right) \neq \mathbf{0}_{[m] \setminus S_{j}}}} \prod_{k \in [m] \setminus S_{j}}\left(\esw_{k}^{(n)}\right)^{n_{k}} \chi_{j}(n_{1}, \ldots, n_{m})\\ = G_{j, [m] \setminus S_{j}}\left(\esw_{k}^{(n)}: k \in [m] \setminus S_{j}\right).
\end{multline}
Once again, we verify this recursion for $n = 0$. For Escaper to lose the game in less than $1$ round, she must be unable to make her very first move, which means that $v$ has no child of colour $k$ for any $k \in [m] \setminus S_{j}$. This event happens with probability $\beta_{j}$. Thus, $\eel_{j}^{(1)} = \beta_{j}$. On the other hand, $G_{j, [m] \setminus S_{j}}\left(\esw_{k}^{(0)}: k \in [m] \setminus S_{j}\right) = G_{j, [m] \setminus S_{j}}\left(\mathbf{0}_{[m] \setminus S_{j}}\right) = \beta_{j}$.
Using these recursions, defining $F_{E}$, $\F_{E}$, $F_{E,j}$ and $\F_{E,j}$ as in \eqref{F_{E,j}_script_F_{E,j}_defns}, we get $\besw^{(n+2)} = F_{E}\left(\besw^{(n)}\right)$ and $\beel^{(n+2)} = \mathbf{1}_{[m]} - \F_{E}\left(\mathbf{1}_{[m]} - \beel^{(n)}\right)$, where $\besw^{(n)} = \left(\esw_{j}^{(n)}: j \in [m]\right)$ and $\beel^{(n)} = \left(\eel_{j}^{(n)}: j \in [m]\right)$. Using the monotonically increasing nature of $F_{E}$ and $\F_{E}$ and \eqref{eq:escape_compactness_consequence},
\begin{equation}\label{escape_min_fixed_point}
\besw = \lim_{n \rightarrow \infty} \besw^{(2n)} = \lim_{n \rightarrow \infty} F_{E}^{(n)}\left(\mathbf{0}_{[m]}\right) = \min \FP(F_{E}),
\end{equation}
\begin{equation}\label{escape_max_fixed_point}
\beel = \lim_{n \rightarrow \infty} \beel^{(2n)} = \mathbf{1}_{[m]} - \lim_{n \rightarrow \infty} \F_{E}^{(n)}\left(\mathbf{1}_{[m]} - \beel^{(0)}\right) = \mathbf{1}_{[m]} - \max \FP(\F_{E}).
\end{equation}
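The monotone convergence in \eqref{escape_min_fixed_point} and \eqref{escape_max_fixed_point} is easy to visualize numerically. The sketch below is purely illustrative: it iterates a hypothetical one-type pgf $G(x) = 0.12 + 0.88x^{2}$ (not one of the generating functions of this paper) from $0$ and from $1$, recovering the least and greatest fixed points respectively.

```python
# Illustration of min/max fixed points of a monotone map, in the spirit of
# (escape_min_fixed_point)/(escape_max_fixed_point). The pgf
# G(x) = 0.12 + 0.88 x^2 is a hypothetical one-type example: offspring 0
# with probability 0.12 and offspring 2 with probability 0.88.

def G(x):
    return 0.12 + 0.88 * x * x

def iterate(f, x0, n=200):
    """Apply f to x0 n times."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

lo = iterate(G, 0.0)   # increases to the least fixed point, 3/22 ~ 0.136364
hi = iterate(G, 1.0)   # stays at the greatest fixed point, 1
print(lo, hi)
```

Starting from $0$, the iterates increase to the least fixed point $3/22$ of $0.88x^{2} - x + 0.12 = 0$; starting from $1$, they remain at the greatest fixed point $1$.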
\section{Proofs of Theorems~\ref{thm:main_example_1} and \ref{thm:main_example_2}}\label{sec:main_examples_proof}
\subsection{Proof of Theorem~\ref{thm:main_example_1}}\label{subsec:main_example_1_proof} The generating functions involved in this example are
\begin{align}
& G_{b,S_{b}}(x_{k}: k \in S_{b}) = G_{b,\{b\}}(x_{b}) = (p_{0}+p_{\rr}) + p_{\br} x_{b} + p_{\blbl} x_{b}^{2};\nonumber\\
& G_{r,S_{r}}(x_{k}: k \in S_{r}) = G_{r,\{r\}}(x_{r}) = (q_{0}+q_{\blbl}) + q_{\br} x_{r} + q_{\rr} x_{r}^{2}; \nonumber\\
& G_{b, [m] \setminus S_{b}}(x_{k}: k \in [m] \setminus S_{b}) = G_{b,\{r\}}(x_{r}) = (p_{0} + p_{\blbl}) + p_{\br} x_{r} + p_{\rr} x_{r}^{2};\nonumber\\
& G_{r, [m] \setminus S_{r}}(x_{k}: k \in [m] \setminus S_{r}) = G_{r,\{b\}}(x_{b}) = (q_{0}+q_{\rr}) + q_{\br} x_{b} + q_{\blbl} x_{b}^{2}. \nonumber
\end{align}
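For concreteness, these four quadratics can be evaluated numerically. The sketch below uses an arbitrary, illustrative choice of the probabilities (the names `pbr`, `pbb`, `prr` are stand-ins for $p_{\br}$, $p_{\blbl}$, $p_{\rr}$, and similarly for the $q$'s) and merely confirms that each function is a pgf, i.e., evaluates to $1$ at $x = 1$.

```python
# Illustrative evaluation of the four generating functions above.
# The probabilities are an arbitrary choice; any valid distribution works.
p0, pbr, pbb, prr = 0.1, 0.4, 0.3, 0.2
q0, qbr, qbb, qrr = 0.25, 0.25, 0.25, 0.25

def G_b_Sb(x):   return (p0 + prr) + pbr * x + pbb * x**2
def G_r_Sr(x):   return (q0 + qbb) + qbr * x + qrr * x**2
def G_b_Sbc(x):  return (p0 + pbb) + pbr * x + prr * x**2
def G_r_Src(x):  return (q0 + qrr) + qbr * x + qbb * x**2

for G in (G_b_Sb, G_r_Sr, G_b_Sbc, G_r_Src):
    print(G(1.0))  # each equals 1 up to rounding, as a pgf must
```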
From these and the recursions derived in \S\ref{subsec:normal_recursions}, we get
\begin{align}
\nw_{1,b} &= p_{\br} + p_{\blbl} - p_{\br}\left(p_{\br} + p_{\rr} - p_{\br} \nw_{1,r} - p_{\rr} \nw_{1,r}^{2}\right) - p_{\blbl} \left(p_{\br} + p_{\rr} - p_{\br} \nw_{1,r} - p_{\rr} \nw_{1,r}^{2}\right)^{2} \nonumber\\
&= (p_{0}+p_{\blbl})\left\{p_{\br} + 2p_{\blbl} - p_{\blbl}(p_{0}+p_{\blbl})\right\} + \left(p_{\br}^{2} + 2p_{\br}^{2}p_{\blbl} + 2p_{\br}p_{\blbl}p_{\rr}\right)\nw_{1,r} \nonumber\\& + \left(p_{\br}p_{\rr} - p_{\blbl}p_{\br}^{2} + 2p_{\blbl}p_{\rr}p_{\br} + 2p_{\blbl}p_{\rr}^{2}\right)\nw_{1,r}^{2} - 2p_{\blbl}p_{\br}p_{\rr}\nw_{1,r}^{3} - p_{\blbl}p_{\rr}^{2}\nw_{1,r}^{4}. \nonumber
\end{align}
On the other hand, we have
\begin{multline}
1 - \nl_{1,b} = (p_{0}+p_{\blbl})\left\{p_{\br} + 2p_{\blbl} - p_{\blbl}(p_{0}+p_{\blbl})\right\} + \left(p_{\br}^{2} + 2p_{\br}^{2}p_{\blbl} + 2p_{\br}p_{\blbl}p_{\rr}\right)\left(1 - \nl_{1,r}\right) +\\ \left(p_{\br}p_{\rr} - p_{\blbl}p_{\br}^{2} + 2p_{\blbl}p_{\rr}p_{\br} + 2p_{\blbl}p_{\rr}^{2}\right)\left(1 - \nl_{1,r}\right)^{2} - 2p_{\blbl}p_{\br}p_{\rr}\left(1 - \nl_{1,r}\right)^{3} - p_{\blbl}p_{\rr}^{2}\left(1 - \nl_{1,r}\right)^{4}.\nonumber
\end{multline}
Combined, these yield
\begin{multline}\label{nd_{1,b}_recursion_example_4}
\nd_{1,b} = \nd_{1,r}\Big[p_{\br}^{2} + 2p_{\br}^{2}p_{\blbl} + 2p_{\br}p_{\blbl}p_{\rr} + \left(p_{\br}p_{\rr} - p_{\blbl}p_{\br}^{2} + 2p_{\blbl}p_{\rr}p_{\br} + 2p_{\blbl}p_{\rr}^{2}\right) \left(1 - \nl_{1,r} + \nw_{1,r}\right) -\\ 2p_{\blbl}p_{\br}p_{\rr}\left\{\left(1 - \nl_{1,r}\right)^{2} + \nw_{1,r}^{2} + \nw_{1,r}\left(1 - \nl_{1,r}\right)\right\} - p_{\blbl}p_{\rr}^{2}\left(1 - \nl_{1,r} + \nw_{1,r}\right)\left\{\left(1 - \nl_{1,r}\right)^{2} + \nw_{1,r}^{2}\right\}\Big].
\end{multline}
By symmetry, we have
\begin{multline}\label{nd_{1,r}_recursion_example_4}
\nd_{1,r} = \nd_{1,b}\Big[\left(q_{\br}^{2} + 2q_{\br}^{2}q_{\rr} + 2q_{\br}q_{\rr}q_{\blbl}\right) + \left(q_{\br}q_{\blbl} - q_{\rr}q_{\br}^{2} + 2q_{\rr}q_{\blbl}q_{\br} + 2q_{\rr}q_{\blbl}^{2}\right) \left(1 - \nl_{1,b} + \nw_{1,b}\right) \\- 2q_{\rr}q_{\br}q_{\blbl}\left\{\left(1 - \nl_{1,b}\right)^{2} + \nw_{1,b}^{2} + \nw_{1,b}\left(1 - \nl_{1,b}\right)\right\} - q_{\rr}q_{\blbl}^{2}\left(1 - \nl_{1,b} + \nw_{1,b}\right)\left\{\left(1 - \nl_{1,b}\right)^{2} + \nw_{1,b}^{2}\right\}\Big].
\end{multline}
We now split our analysis into two parts: the first is where $A = p_{\br}p_{\rr} - p_{\blbl}p_{\br}^{2} + 2p_{\blbl}p_{\rr}p_{\br} + 2p_{\blbl}p_{\rr}^{2}$ from \eqref{nd_{1,b}_recursion_example_4} is non-negative, and the second is where $A$ is negative. When $A \geqslant 0$, from \eqref{nd_{1,b}_recursion_example_4} and using $p_{\rr} \leqslant 1 - p_{\br} - p_{\blbl}$, we get
\begin{align}\label{A_nonnegative_upper_bound}
\nd_{1,b} &\leqslant \nd_{1,r}\left[\left(p_{\br}^{2} + 2p_{\br}^{2}p_{\blbl} + 2p_{\br}p_{\blbl}p_{\rr}\right) + 2\left(p_{\br}p_{\rr} - p_{\blbl}p_{\br}^{2} + 2p_{\blbl}p_{\rr}p_{\br} + 2p_{\blbl}p_{\rr}^{2}\right)\right] \nonumber\\
&\leqslant \nd_{1,r}\left(p_{\br}^{2} + 6p_{\blbl}p_{\br}p_{\rr} + 2p_{\br}p_{\rr} + 4p_{\blbl}p_{\rr}^{2}\right) \nonumber\\
&\leqslant \nd_{1,r}\left(2p_{\br} + 4p_{\blbl} - p_{\br}^{2} - 8p_{\blbl}^{2} - 4p_{\br}p_{\blbl} - 2p_{\br}^{2}p_{\blbl} + 2p_{\br}p_{\blbl}^{2} + 4p_{\blbl}^{3}\right).
\end{align}
Define the domain $\mathcal{D} = \left\{(x,y): x \in [0,1], y \in [0,1], x+y \leqslant 1\right\}$ and the function $f: \mathcal{D} \rightarrow \mathbb{R}$ as
\begin{equation}
f(x,y) = 2x + 4y - x^{2} - 8y^{2} - 4xy - 2x^{2}y + 2xy^{2} + 4y^{3}.\nonumber
\end{equation}
The partial derivatives of this function are $\frac{\partial}{\partial x} f(x,y) = 2 - 2x - 4y - 4xy + 2y^{2}$ and $\frac{\partial}{\partial y}f(x,y) = 4 - 16y - 4x - 2x^{2} + 4xy + 12y^{2}$. From the equation $\frac{\partial}{\partial x}f(x,y) = 0$, we have
\begin{equation}\label{x_value_example_4_normal}
2x + 4xy = 2 - 4y + 2y^{2} \implies x = \frac{(1-y)^{2}}{1+2y}.
\end{equation}
Substituting this in the equation $\frac{\partial}{\partial y}f(x,y) = 0$, we get the quartic polynomial equation $54y^{4} - 28y^{3} - 36y^{2} + 12y = 2$, with real roots $y_{1} = 1$ and $y_{2} \approx -0.77985$, of which only $y_{1}$ lies in our domain of interest. The corresponding value of $x$, from \eqref{x_value_example_4_normal}, is $x_{1} = 0$, and the value of $f$ at this critical point is $f(0,1) = 0$. On the boundary of $\mathcal{D}$:
\begin{itemize}
\item When $x+y = 1$, we have $f(x,y) = x^{2}$ strictly increasing on $[0,1]$ with maximum $f(1,0) = 1$.
\item When $x = 0$, we have $f(0,y) = 4y - 8y^{2} + 4y^{3} = 4y(1-y)^{2}$, so that $f'(0,y) = 4 - 16y + 12y^{2} = 4(1-y)(1-3y)$ is strictly positive only if $y < \frac{1}{3}$. Thus $f(0,y)$ is strictly increasing on $\left[0, \frac{1}{3}\right)$ and strictly decreasing on $\left(\frac{1}{3}, 1\right]$, and the maximum is $f\left(0, \frac{1}{3}\right) = \frac{16}{27}$.
\item When $y = 0$, we have $f(x,0) = 2x - x^{2}$, so that $f'(x,0) = 2 - 2x = 2(1-x)$ is strictly positive for $x < 1$, and the maximum, attained at $x = 1$, is $f(1,0) = 1$.
\end{itemize}
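The interior and boundary analysis above can be cross-checked numerically; the following sketch (an illustration, not part of the proof) evaluates $f$ on a grid over $\mathcal{D}$ and confirms that its maximum is $1$, attained at $(x,y) = (1,0)$, i.e., only at $p_{\br} = 1$.

```python
# Grid check (not a proof) that
# f(x,y) = 2x + 4y - x^2 - 8y^2 - 4xy - 2x^2 y + 2xy^2 + 4y^3
# stays <= 1 on D = {x, y >= 0, x + y <= 1}, with maximum 1 at (1, 0).

def f(x, y):
    return 2*x + 4*y - x**2 - 8*y**2 - 4*x*y - 2*x**2*y + 2*x*y**2 + 4*y**3

N = 400
best, arg = -1.0, None
for i in range(N + 1):
    x = i / N
    for j in range(N + 1 - i):      # enforce x + y <= 1
        y = j / N
        v = f(x, y)
        if v > best:
            best, arg = v, (x, y)

print(best, arg)  # 1.0 at (1.0, 0.0)
```

The grid also reproduces the boundary maximum $f\left(0,\frac{1}{3}\right) = \frac{16}{27}$ noted above.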
These observations, together with \eqref{A_nonnegative_upper_bound}, imply that when $A \geqslant 0$, we have $p_{\br}^{2} + 6p_{\blbl}p_{\br}p_{\rr} + 2p_{\br}p_{\rr} + 4p_{\blbl}p_{\rr}^{2} < 1$ for all $p_{\br}$, $p_{\blbl}$ and $p_{\rr}$, except when $p_{\br} = 1$. When $A$ is negative, \eqref{nd_{1,b}_recursion_example_4} yields
\begin{multline}\label{A_negative_upper_bound}
\nd_{1,b} \leqslant \nd_{1,r}\left(p_{\br}^{2} + 2p_{\br}^{2}p_{\blbl} + 2p_{\br}p_{\blbl}p_{\rr}\right) = \nd_{1,r}\left[p_{\br}^{2} + 2p_{\br} p_{\blbl}(p_{\br} + p_{\rr})\right] \\ \leqslant \nd_{1,r}\left[p_{\br}^{2} + 2p_{\br} p_{\blbl}\right] \leqslant \nd_{1,r}\left(p_{\br} + p_{\blbl}\right)^{2} \leqslant \nd_{1,r},
\end{multline}
and the only scenario in which this inequality is an equality is where $p_{\br} = 1$.
We conclude that unless $p_{\br} = 1$, we have $\nd_{1,b} < \nd_{1,r}$ if we assume that $\nd_{1,r}$ is strictly positive. A similar analysis on \eqref{nd_{1,r}_recursion_example_4} shows that unless $q_{\br} = 1$, we have $\nd_{1,r} < \nd_{1,b}$ if we assume that $\nd_{1,b}$ is strictly positive. Clearly, these two inequalities cannot hold simultaneously. Therefore, we have:
\begin{itemize}
\item $\nd_{1,b} = \nd_{1,r} = 1$ when $p_{\br} = q_{\br} = 1$,
\item and in all other cases, $\nd_{1,b} = \nd_{1,r} = 0$.
\end{itemize}
For the mis\`{e}re game, $\alpha_{b} = p_{0} + p_{\rr}$ and $\beta_{b} = p_{0} + p_{\blbl}$. From the generating functions above and the recursions in \S\ref{subsec:misere_recursions} we get
\begin{multline}
\mw_{1,b} = p_{0} + p_{\rr} + \left(p_{\br}^{2} + 2p_{\blbl}p_{\br}\right) \mw_{1,r} + \left(p_{\br}p_{\rr} + 2p_{\blbl}p_{\rr} - p_{\blbl}p_{\br}^{2}\right) \mw_{1,r}^{2} - 2p_{\blbl}p_{\br}p_{\rr}\mw_{1,r}^{3}\\ - p_{\blbl}p_{\rr}^{2}\mw_{1,r}^{4}. \nonumber
\end{multline}
Likewise, we deduce that
\begin{multline}
1 - \ml_{1,b} = p_{0} + p_{\rr} + \left(p_{\br}^{2} + 2p_{\blbl}p_{\br}\right) \left(1 - \ml_{1,r}\right) + \left(p_{\br}p_{\rr} + 2p_{\blbl}p_{\rr} - p_{\blbl}p_{\br}^{2}\right) \left(1 - \ml_{1,r}\right)^{2} \\- 2p_{\blbl}p_{\br}p_{\rr}\left(1 - \ml_{1,r}\right)^{3} - p_{\blbl}p_{\rr}^{2}\left(1 - \ml_{1,r}\right)^{4}.\nonumber
\end{multline}
Combining the above, we get
\begin{multline}\label{md_{1,b}_recursion_example_4}
\md_{1,b} = \md_{1,r}\Big[p_{\br}^{2} + 2p_{\blbl}p_{\br} + \left(p_{\br}p_{\rr} + 2p_{\blbl}p_{\rr} - p_{\blbl}p_{\br}^{2}\right)\left(1 - \ml_{1,r} + \mw_{1,r}\right) - 2p_{\blbl}p_{\br}p_{\rr}\\\left\{\left(1 - \ml_{1,r}\right)^{2} + \mw_{1,r}^{2} + \mw_{1,r}(1 - \ml_{1,r})\right\} - p_{\blbl}p_{\rr}^{2} \left(1 - \ml_{1,r} + \mw_{1,r}\right)\left\{\left(1 - \ml_{1,r}\right)^{2} + \mw_{1,r}^{2}\right\}\Big].
\end{multline}
We analyze the cases where $B = p_{\br}p_{\rr} + 2p_{\blbl}p_{\rr} - p_{\blbl}p_{\br}^{2}$ is non-negative and where $B$ is negative, separately. When $B \geqslant 0$, from \eqref{md_{1,b}_recursion_example_4} and using $p_{\rr} \leqslant 1 - p_{\blbl} - p_{\br}$, we have
\begin{multline}\label{B_nonnegative_example_4}
\md_{1,b} \leqslant \md_{1,r}\left(p_{\br}^{2} + 2p_{\blbl}p_{\br} + 2\left(p_{\br}p_{\rr} + 2p_{\blbl}p_{\rr} - p_{\blbl}p_{\br}^{2}\right)\right) \\
\leqslant \md_{1,r}\left(2p_{\br} + 4p_{\blbl} - p_{\br}^{2} - 4p_{\blbl}^{2} - 4p_{\br}p_{\blbl} - 2p_{\br}^{2}p_{\blbl}\right).
\end{multline}
Defining $\mathcal{D}$ as above, and letting $f: \mathcal{D} \rightarrow \mathbb{R}$ be $f(x,y) = 2x + 4y - x^{2} - 4y^{2} - 4xy - 2x^{2}y$, we get the partial derivatives $\frac{\partial}{\partial x}f(x,y) = 2 - 2x - 4y - 4xy$ and $\frac{\partial}{\partial y}f(x,y) = 4 - 8y - 4x - 2x^{2}$. From the equation $\frac{\partial}{\partial y}f(x,y) = 0$, we have
\begin{equation}\label{y_value_example_4_misere}
8y = 4 - 4x - 2x^{2} \implies y = \left(2 - 2x - x^{2}\right)/4.
\end{equation}
Substituting this in the equation $\frac{\partial}{\partial x}f(x,y) = 0$, we get the cubic polynomial equation $x^{3} + 3x^{2} - 2x = 0$, with roots $x_{1} = 0$, $x_{2} = -\frac{3}{2} - \frac{\sqrt{17}}{2}$ and $x_{3} = \frac{\sqrt{17}}{2} - \frac{3}{2}$, of which $x_{1}$ and $x_{3}$ lie in our domain of interest. The corresponding values of $y$, from \eqref{y_value_example_4_misere}, are $y_{1} = \frac{1}{2}$ and $y_{3} = \frac{\sqrt{17}}{8} - \frac{3}{8}$. The values of $f$ at these critical points are $f(x_{1}, y_{1}) = f\left(0, \frac{1}{2}\right) = 1$ and $f(x_{3}, y_{3}) = f\left(\frac{\sqrt{17}}{2} - \frac{3}{2}, \frac{\sqrt{17}}{8} - \frac{3}{8}\right) \approx 0.8866$.
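The critical values quoted above are easy to confirm numerically; the sketch below (illustration only) checks that $x_{1}$ and $x_{3}$ are roots of the cubic and evaluates $f$ at the corresponding critical points, with $y$ recovered from \eqref{y_value_example_4_misere}.

```python
# Sanity check (not a proof) of the critical points of
# f(x,y) = 2x + 4y - x^2 - 4y^2 - 4xy - 2x^2 y found from
# x^3 + 3x^2 - 2x = 0, with y = (2 - 2x - x^2)/4.
import math

def f(x, y):
    return 2*x + 4*y - x**2 - 4*y**2 - 4*x*y - 2*x**2*y

def y_of(x):
    return (2 - 2*x - x**2) / 4

x1 = 0.0
x3 = math.sqrt(17) / 2 - 1.5
for x in (x1, x3):
    assert abs(x**3 + 3*x**2 - 2*x) < 1e-10   # both are roots of the cubic

print(f(x1, y_of(x1)))  # f(0, 1/2) = 1
print(f(x3, y_of(x3)))  # ~0.8866
```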
We examine the critical point $(x_{1}, y_{1})$ carefully. We have $p_{\br} = 0$ and $p_{\blbl} = \frac{1}{2}$. If $p_{\rr} < \frac{1}{2}$, then \eqref{B_nonnegative_example_4} shows that we end up with a strictly smaller value of $p_{\br}^{2} + 2p_{\blbl}p_{\br} + 2p_{\br}p_{\rr} + 4p_{\blbl}p_{\rr} - 2p_{\blbl}p_{\br}^{2}$. So we need only focus on the case where $p_{\rr} = \frac{1}{2}$ and $p_{0} = 0$. We then have, from \eqref{md_{1,b}_recursion_example_4}:
\begin{align}
\md_{1,b} &= \md_{1,r}\Big[2\left(\frac{1}{2}\right)^{2}\left(1 - \ml_{1,r} + \mw_{1,r}\right) - \left(\frac{1}{2}\right)^{3} \left(1 - \ml_{1,r} + \mw_{1,r}\right)\left\{\left(1 - \ml_{1,r}\right)^{2} + \mw_{1,r}^{2}\right\}\Big] \nonumber\\
&= \md_{1,r}\left(1 - \ml_{1,r} + \mw_{1,r}\right)\left[\frac{1}{2} - \frac{1}{8}\left\{\left(1 - \ml_{1,r}\right)^{2} + \mw_{1,r}^{2}\right\}\right] \nonumber
\end{align}
and $\left(1 - \ml_{1,r} + \mw_{1,r}\right)\left[\frac{1}{2} - \frac{1}{8}\left\{\left(1 - \ml_{1,r}\right)^{2} + \mw_{1,r}^{2}\right\}\right]$ equals $1$ iff $1 - \ml_{1,r} + \mw_{1,r} = 2$ and $\left(1 - \ml_{1,r}\right)^{2} + \mw_{1,r}^{2} = 0$, which cannot happen simultaneously.
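The impossibility noted above can also be seen quantitatively: writing $a = 1 - \ml_{1,r}$ and $b = \mw_{1,r}$, a grid search (illustration only, not part of the proof) indicates that the factor $(a+b)\left[\frac{1}{2} - \frac{1}{8}\left(a^{2}+b^{2}\right)\right]$ never exceeds roughly $0.545$ on $[0,1]^{2}$, comfortably below $1$.

```python
# Grid check (illustration only) that
# g(a, b) = (a + b) * (1/2 - (a^2 + b^2)/8), with a = 1 - ml and b = mw,
# stays strictly below 1 on [0,1]^2, as claimed above.

def g(a, b):
    return (a + b) * (0.5 - (a*a + b*b) / 8)

N = 500
best = max(g(i / N, j / N) for i in range(N + 1) for j in range(N + 1))
print(best)  # ~0.5443 < 1
```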
On the boundary of $\mathcal{D}$:
\begin{itemize}
\item When $y = 1-x$, we have $f(x,1-x) = 2x^{3} - 3x^{2} + 2x$, and $f'(x,1-x) = 6x^{2} - 6x + 2 = 6\left(x - \frac{1}{2}\right)^{2} + \frac{1}{2}$ is strictly positive for all $x$, so that $f(x,1-x)$ is strictly increasing on $[0,1]$ and the maximum is $f(1,0) = 1$.
\item When $x = 0$, we have $f(0,y) = 4y - 4y^{2} = 4y(1-y)$, so that $f'(0,y) = 4 - 8y = 8\left(\frac{1}{2} - y\right)$ is strictly positive for $y < \frac{1}{2}$. Thus $f(0,y)$ is strictly increasing on $\left[0, \frac{1}{2}\right)$ and strictly decreasing on $\left(\frac{1}{2}, 1\right]$, and the maximum is $f\left(0, \frac{1}{2}\right) = 1$.
\item When $y = 0$, we have $f(x,0) = 2x - x^{2} = x(2 - x)$, so that $f'(x,0) = 2 - 2x = 2(1-x) > 0$ for $x < 1$. Thus, $f(x,0)$ is strictly increasing on $[0,1)$, with maximum $f(1,0) = 1$.
\end{itemize}
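As before, this case analysis admits a quick numerical cross-check (an illustration, not part of the proof): on a grid over $\mathcal{D}$, the maximum of $f$ is $1$, attained at $(1,0)$ and at $(0,\frac{1}{2})$, matching the critical-point and boundary computations above.

```python
# Grid check (not a proof) that f(x,y) = 2x + 4y - x^2 - 4y^2 - 4xy - 2x^2 y
# is at most 1 on D = {x, y >= 0, x + y <= 1}.

def f(x, y):
    return 2*x + 4*y - x**2 - 4*y**2 - 4*x*y - 2*x**2*y

N = 400
best = max(f(i / N, j / N)
           for i in range(N + 1) for j in range(N + 1 - i))
print(best)  # 1.0, attained at (1, 0) and at (0, 1/2)
```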
When $B < 0$, from \eqref{md_{1,b}_recursion_example_4}, we have
\begin{multline}\label{B_negative_example_4}
\md_{1,b} \leqslant \md_{1,r}\left[p_{\br}^{2} + 2p_{\blbl}p_{\br}\right] \leqslant \md_{1,r}\left[p_{\br}^{2} + 2p_{\blbl}p_{\br} + p_{\blbl}^{2}\right] = \md_{1,r}\left(p_{\br} + p_{\blbl}\right)^{2} \leqslant \md_{1,r},
\end{multline}
and the only scenario under which equality holds in \eqref{B_negative_example_4} is if we have $p_{\br} = 1$. Thus, unless $p_{\br} = 1$, we have $\md_{1,b} < \md_{1,r}$ if we assume that $\md_{1,r}$ is strictly positive. A similar analysis yields that unless $q_{\br} = 1$, we have $\md_{1,r} < \md_{1,b}$ if $\md_{1,b}$ is assumed strictly positive. Clearly, these two inequalities cannot hold simultaneously. Therefore, we conclude that:
\begin{itemize}
\item if $p_{\br} = q_{\br} = 1$, then $\md_{1,b} = \md_{1,r} = 1$,
\item and in all other cases, $\md_{1,b} = \md_{1,r} = 0$.
\end{itemize}
For the escape game, using the generating functions above and recursions from \S\ref{subsec:escape_recursions}, we get
\begin{multline}\label{esl_{b}_esl_{r}_relationship_1}
\esl_{b} = \esl_{r}\Big\{\left(p_{\br}^{2} + 2p_{\br}p_{\rr}\right) + \left(p_{\br}^{2}p_{\blbl} + 4p_{\br}p_{\rr}p_{\blbl} + 4p_{\blbl}p_{\rr}^{2} - p_{\br}p_{\rr}\right) \esl_{r} -\\ \big\{2p_{\br}p_{\rr}p_{\blbl} + 4p_{\blbl}p_{\rr}^{2}\big\}\esl_{r}^{2} + p_{\blbl}p_{\rr}^{2} \esl_{r}^{3}\Big\},
\end{multline}
so that, if $C = p_{\br}^{2}p_{\blbl} + 4p_{\br}p_{\rr}p_{\blbl} + 4p_{\blbl}p_{\rr}^{2} - p_{\br}p_{\rr}$ is non-negative, using $p_{\rr} \leqslant 1 - p_{\br} - p_{\blbl}$,
\begin{multline}
\esl_{b} \leqslant \esl_{r}\left(p_{\br}^{2} + p_{\br}p_{\rr} + p_{\br}^{2}p_{\blbl} + 4p_{\br}p_{\rr}p_{\blbl} + 5p_{\blbl}p_{\rr}^{2}\right) \\ \leqslant \esl_{r}\left(2 p_{\br}^{2}p_{\blbl} + 6p_{\br}p_{\blbl}^{2} - 7p_{\br}p_{\blbl} + p_{\br} + 5p_{\blbl}^{3} - 10p_{\blbl}^{2} + 5p_{\blbl}\right).\nonumber
\end{multline}
With the domain $\mathcal{D}$ as defined above, consider the function $f:\mathcal{D} \rightarrow \mathbb{R}$ with $f(x,y) = 2x^{2}y + 6xy^{2} - 7xy + x + 5y^{3} - 10y^{2} + 5y$. The partial derivatives of $f$ are given by $\frac{\partial}{\partial x}f(x,y) = 4xy + 6y^{2} - 7y + 1$ and $\frac{\partial}{\partial y}f(x,y) = 2x^{2} + 12xy - 7x + 15y^{2} - 20y + 5$. The equation $\frac{\partial}{\partial x}f(x,y) = 0$ gives
\begin{align}\label{x_value_escape_example_3}
4xy = 7y - 1 - 6y^{2} \implies x = \frac{7y - 1 - 6y^{2}}{4y},
\end{align}
and substituting this in the equation $\frac{\partial}{\partial y}f(x,y) = 0$, we get the quartic polynomial equation $24 y^{4} + 16y^{3} - 42y^{2} + 2 = 0$, with roots $y_{1} = 1$, $y_{2} \approx -1.6868$, $y_{3} \approx -0.21244$ and $y_{4} \approx 0.23255$, of which $y_{1}$ and $y_{4}$ lie in our domain of interest. From \eqref{x_value_escape_example_3}, the corresponding values of $x$ are $x_{1} = 0$ and $x_{4} \approx 0.32613$, and the corresponding values of $f$ are $f(x_{1}, y_{1}) = 0$ and $f(x_{4}, y_{4}) \approx 0.635365$. On the boundary of the domain $\mathcal{D}$, we have:
\begin{itemize}
\item When $y = 1-x$, we have $f(x,1-x) = x^{2} + x^{2}(1-x) = 2x^{2} - x^{3}$, and $f'(x,1-x) = x(4 - 3x)$ is strictly positive for all $x \in (0,1]$, thus showing that $f(x,1-x)$ is strictly increasing on $[0,1]$ and its maximum is $f(1,0) = 1$.
\item When $x = 0$, we have $f(0,y) = 5y^{3} - 10y^{2} + 5y = 5y(1-y)^{2}$, and $f'(0,y) = 15y^{2} - 20y + 5 = 5(1-y)(1-3y)$ is strictly positive for $y < \frac{1}{3}$. Therefore, $f(0,y)$ is strictly increasing on $\left[0,\frac{1}{3}\right)$ and strictly decreasing on $\left(\frac{1}{3}, 1\right]$, and its maximum is $f\left(0,\frac{1}{3}\right) = \frac{20}{27}$.
\item When $y = 0$, we have $f(x,0) = x$, with maximum $f(1,0) = 1$.
\end{itemize}
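The numerical values quoted for the quartic above can be reproduced as follows; the sketch (illustration only) refines the root $y_{4}$ by bisection and recovers the stated values of $x_{4}$ and $f(x_{4}, y_{4})$.

```python
# Sanity check (not a proof) of the stated roots of
# 24 y^4 + 16 y^3 - 42 y^2 + 2 = 0 and the corresponding critical point of
# f(x,y) = 2x^2 y + 6xy^2 - 7xy + x + 5y^3 - 10y^2 + 5y.

def q(y):
    return 24*y**4 + 16*y**3 - 42*y**2 + 2

def f(x, y):
    return 2*x**2*y + 6*x*y**2 - 7*x*y + x + 5*y**3 - 10*y**2 + 5*y

assert q(1.0) == 0                       # y1 = 1 is an exact root

# refine y4 ~ 0.23255 by bisection on [0.2, 0.3], where q changes sign
lo, hi = 0.2, 0.3
for _ in range(60):
    mid = (lo + hi) / 2
    if q(lo) * q(mid) <= 0:
        hi = mid
    else:
        lo = mid
y4 = (lo + hi) / 2
x4 = (7*y4 - 1 - 6*y4**2) / (4*y4)       # from the stationarity equation

print(y4, x4, f(x4, y4))                 # ~0.23255, ~0.32613, ~0.635365
```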
When $C$ is negative, from \eqref{esl_{b}_esl_{r}_relationship_1}, we get
\begin{multline}
\esl_{b} \leqslant \esl_{r}\Big\{\left(p_{\br}^{2} + 2p_{\br}p_{\rr}\right) + p_{\blbl}p_{\rr}^{2} \esl_{r}^{3}\Big\} \leqslant \esl_{r} \Big\{p_{\br}^{2} + 2p_{\br}p_{\rr} + p_{\blbl}p_{\rr}^{2}\Big\} \\ \leqslant \esl_{r} \left\{p_{\br}^{2} + 2p_{\br}p_{\rr} + p_{\rr}^{2}\right\} = \esl_{r}\left(p_{\br} + p_{\rr}\right)^{2}, \nonumber
\end{multline}
and the only way equality holds is if $p_{\br} = 1$. These observations tell us that unless $p_{\br} = 1$, we have $\esl_{b} < \esl_{r}$ if we assume that $\esl_{r}$ is strictly positive. Likewise, unless $q_{\br} = 1$, we have $\esl_{r} < \esl_{b}$ if we assume that $\esl_{b}$ is strictly positive. As these inequalities cannot hold simultaneously, we conclude that
\begin{itemize}
\item when $p_{\br} = q_{\br} = 1$, we have $\esl_{b} = \esl_{r} = 1$,
\item and in all other cases, we have $\esl_{b} = \esl_{r} = 0$.
\end{itemize}
\subsection{Proof of Theorem~\ref{thm:main_example_2}}\label{subsec:main_example_2_proof} We prove the theorem only for normal games, as the argument goes through \emph{mutatis mutandis} for mis\`{e}re and escape games. For any vertex $v$, let $X_{v}$ be the number of blue children and $Y_{v}$ the number of red children of $v$. Conditioned on $\sigma(v) = b$, we have $X_{v} \sim \poi(\lambda p_{b})$ and $Y_{v} \sim \poi(\lambda p_{r})$; conditioned on $\sigma(v) = r$, we have $X_{v} \sim \poi(\lambda q_{b})$ and $Y_{v} \sim \poi(\lambda q_{r})$; and in either case, $X_{v}$ and $Y_{v}$ are independent. All of this follows from Poisson thinning.
The relevant generating functions in this example are
\begin{align}
& G_{b,\{b\}}(x_{b}) = \exp\left\{\lambda p_{b} (x_{b} - 1)\right\}, \quad G_{r,\{r\}}(x_{r}) = \exp\left\{\lambda q_{r} (x_{r} - 1)\right\};\nonumber\\
&G_{b,\{r\}}(x_{r}) = \exp\left\{\lambda p_{r} (x_{r}-1)\right\}, \quad G_{r,\{b\}}(x_{b}) = \exp\left\{\lambda q_{b} (x_{b} - 1)\right\}.\nonumber
\end{align}
These generating functions and the recursions of \S\ref{subsec:normal_recursions} yield $1 - \nw_{1,b} = f_{1} \circ f_{2}\left(1 - \nw_{1,b}\right)$ and $\nl_{1,b} = f_{1} \circ f_{2}\left(\nl_{1,b}\right)$, where $f_{1}(x) = \exp\left\{-\lambda p_{b} \exp\left\{-\lambda p_{r} x\right\}\right\}$ and $f_{2}(x) = \exp\left\{-\lambda q_{r} \exp\left\{-\lambda q_{b} x\right\}\right\}$. As $f_{1}$ and $f_{2}$ are strictly increasing, $1 - \nw_{1,b} = \max \FP\left(f_{1} \circ f_{2}\right)$ and $\nl_{1,b} = \min \FP\left(f_{1} \circ f_{2}\right)$. Therefore, $\nd_{1,b}$ is positive if and only if $f_{1} \circ f_{2}$ has at least two distinct fixed points in $[0,1]$. Let $x_{1}$ be a positive real with $f_{2}(x_{1}) = \frac{\ln(\lambda p_{b})}{2\lambda p_{r}}$, and let $0 < x_{2} < 1$ be \emph{any} constant. We now examine the signs of $f_{1} \circ f_{2}(x) - x$ at $x_{1}$ and $x_{2}$. Note that, given any $\epsilon > 0$, however small, we have
\begin{align}
\lim_{\lambda \rightarrow \infty} \lambda q_{r} e^{-\lambda q_{b} x_{2}} = 0 \implies f_{2}(x_{2}) = e^{-\lambda q_{r} e^{-\lambda q_{b} x_{2}}} \geqslant 1-\epsilon \text{ for all sufficiently large } \lambda.\nonumber
\end{align}
Therefore, we have
\begin{align}
\lim_{\lambda \rightarrow \infty} f_{1} \circ f_{2}(x_{2}) = \exp\left\{-p_{b} \lim_{\lambda \rightarrow \infty} \left(\lambda e^{-\lambda p_{r} f_{2}(x_{2})}\right)\right\} \geqslant \exp\left\{-p_{b} \lim_{\lambda \rightarrow \infty} \left(\lambda e^{-\lambda p_{r} (1-\epsilon)}\right)\right\} = 1, \nonumber
\end{align}
so that for all $\lambda$ sufficiently large, we have $f_{1} \circ f_{2}(x_{2}) \geqslant x_{2}$ as $x_{2} < 1$. On the other hand,
\begin{align}
f_{2}(x_{1}) = \frac{\ln(\lambda p_{b})}{2\lambda p_{r}} \implies
& x_{1} = \frac{\ln \lambda + \ln q_{r} - \ln\left(\ln 2 + \ln \lambda + \ln p_{r} - \ln\left(\ln \lambda + \ln p_{b}\right)\right)}{\lambda q_{b}},\nonumber
\end{align}
so that
\begin{align}
f_{1}\circ f_{2}(x_{1}) - x_{1}
&= \exp\left\{-\sqrt{\lambda p_{b}}\right\} - \frac{\ln \lambda}{\lambda q_{b}} - \frac{\ln q_{r}}{\lambda q_{b}} + \frac{\ln\left(\ln 2 + \ln \lambda + \ln p_{r} - \ln\left(\ln \lambda + \ln p_{b}\right)\right)}{\lambda q_{b}}.\nonumber
\end{align}
Noting that
\begin{equation}
\frac{\ln\left(\ln 2 + \ln \lambda + \ln p_{r} - \ln\left(\ln \lambda + \ln p_{b}\right)\right)}{\lambda q_{b}} = o\left(\frac{\ln \lambda}{\lambda q_{b}}\right) \text{ and } \exp\left\{-\sqrt{\lambda p_{b}}\right\} = o\left(\frac{\ln \lambda}{\lambda}\right) \text{ as } \lambda \rightarrow \infty,\nonumber
\end{equation}
we conclude that $f_{1} \circ f_{2}(x_{1}) - x_{1} < 0$ for all sufficiently large $\lambda$. Finally, for every $\lambda > 0$, we have $f_{1} \circ f_{2}(0) > 0$ and $f_{1} \circ f_{2}(1) < 1$. Thus, for all sufficiently large $\lambda$, there is at least one root of $f_{1} \circ f_{2}(x) - x$ in the interval $(0, x_{1})$, at least one in $(x_{1}, x_{2})$, and a third in $(x_{2}, 1)$, guaranteeing that $\nd_{1,b} > 0$. Moreover, given \emph{any} $\epsilon > 0$, by choosing $x_{2} > 1 - \epsilon/2$ and then $\lambda$ sufficiently large so that $f_{1} \circ f_{2}(x_{2}) > x_{2}$ and $x_{1} < \epsilon/2$, we conclude that $\nl_{1,b} = \min \FP\left(f_{1} \circ f_{2}\right) < \epsilon/2$ and $1 - \nw_{1,b} = \max \FP\left(f_{1} \circ f_{2}\right) > 1 - \epsilon/2$, thus making $\nd_{1,b} > 1 - \epsilon$. Analogous conclusions hold for $\nd_{1,r}$, $\nd_{2,b}$ and $\nd_{2,r}$.
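The emergence of three fixed points of $f_{1} \circ f_{2}$ for large $\lambda$ can be observed numerically. The sketch below (an illustration, not part of the proof) uses the hypothetical choice $p_{b} = p_{r} = q_{b} = q_{r} = \frac{1}{2}$ with $\lambda = 10$ and counts sign changes of $f_{1} \circ f_{2}(x) - x$ on a fine grid.

```python
# Numerical illustration that f1(f2(x)) - x changes sign three times on [0,1]
# once lambda is large, so f1 o f2 has (at least) three fixed points there.
# Parameter values are illustrative: p_b = p_r = q_b = q_r = 1/2, lambda = 10.
import math

lam, pb, pr, qb, qr = 10.0, 0.5, 0.5, 0.5, 0.5

def f1(x):
    return math.exp(-lam * pb * math.exp(-lam * pr * x))

def f2(x):
    return math.exp(-lam * qr * math.exp(-lam * qb * x))

def h(x):
    return f1(f2(x)) - x

N = 10000
signs = [h(i / N) > 0 for i in range(N + 1)]
crossings = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
print(crossings)  # 3
```

The three crossings here sit near $0.008$, $0.27$ and $0.96$, so $\nl_{1,b}$ is small and $\nw_{1,b}$ is large, in line with the $\epsilon$-argument above.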
We now establish the remaining claims made in Theorem~\ref{thm:main_example_2}. If \eqref{poisson_second_cond} holds, then
\begin{align}
(f_{1} \circ f_{2}(x) - x)'
&= \lambda^{4} p_{b} p_{r} q_{b} q_{r} \exp\left\{-\lambda p_{r} f_{2}(x) - \lambda p_{b} \exp\left\{-\lambda p_{r} f_{2}(x)\right\} - \lambda q_{b} x -\lambda q_{r} \exp\left\{-\lambda q_{b} x\right\}\right\} - 1 \nonumber\\
&\leqslant \lambda^{4} p_{b} p_{r} q_{b} q_{r} \exp\left\{-\lambda p_{r} e^{-\lambda q_{r}} - \lambda p_{b} \exp\left\{-\lambda p_{r} \exp\left\{-\lambda q_{r} e^{-\lambda q_{b}}\right\}\right\} - \lambda q_{r} e^{-\lambda q_{b}}\right\} - 1, \nonumber
\end{align}
which is strictly negative. Thus, $f_{1} \circ f_{2}(x) - x$ is strictly decreasing on $[0,1]$ and has precisely one root in $[0,1]$. Hence $\nd_{1,b} = 0$ in this case. In particular, since $p_{b}p_{r} = p_{b}(1-p_{b}) \leqslant \frac{1}{4}$, and likewise, $q_{b} q_{r} \leqslant \frac{1}{4}$, the draw probabilities are $0$ whenever $\lambda \leqslant 2$.
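In the regime $\lambda \leqslant 2$, a direct numerical check (again with the illustrative choice $p_{b} = p_{r} = q_{b} = q_{r} = \frac{1}{2}$, for which $f_{1} = f_{2}$) confirms that $f_{1} \circ f_{2}$ has a unique fixed point.

```python
# With lambda = 2 and p_b = p_r = q_b = q_r = 1/2 (an illustrative choice),
# f1(f2(x)) - x crosses zero exactly once on [0,1], consistent with the
# zero-draw regime established above.
import math

lam = 2.0

def f(x):                      # here f1 = f2, since all four parameters are 1/2
    return math.exp(-lam * 0.5 * math.exp(-lam * 0.5 * x))

N = 10000
signs = [f(f(i / N)) - i / N > 0 for i in range(N + 1)]
crossings = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
print(crossings)  # 1
```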
Setting $f_{3}(x) = e^{-\lambda p_{r} x}$ and $f_{4}(x) = e^{-\lambda q_{b} x}$, we have $(f_{1} \circ f_{2}(x))'' = \lambda^{5} p_{b} p_{r} q_{b}^{2} q_{r} f_{3}(f_{2}(x)) f_{1}(f_{2}(x)) f_{4}(x) f_{2}(x) \left[\lambda^{2} p_{r} q_{r} f_{4}(x) f_{2}(x) \left\{\lambda p_{b} f_{3}(f_{2}(x)) - 1\right\} + \left\{\lambda q_{r} f_{4}(x) - 1\right\}\right]$, so that the sign of $(f_{1} \circ f_{2}(x))''$ is the same as that of the function
\begin{equation}
f_{5}(x) = \lambda^{2} p_{r} q_{r} f_{4}(x) f_{2}(x) \left\{\lambda p_{b} f_{3}(f_{2}(x)) - 1\right\} + \left\{\lambda q_{r} f_{4}(x) - 1\right\}. \nonumber
\end{equation}
Both $\lambda p_{b} f_{3}(f_{2}(x)) - 1$ and $\lambda q_{r} f_{4}(x) - 1$ are strictly decreasing in $x$ on $[0,1]$, so that their minima are attained at $x = 1$; in particular, both expressions are non-negative on all of $[0,1]$ as soon as they are non-negative at $x = 1$. If \eqref{normal_poisson_cond} holds, both $f_{1}$ and $f_{2}$ are strictly convex on $[0,1)$. Since $f_{1}$ and $f_{2}$ are also strictly increasing, $f_{1} \circ f_{2}$ is strictly convex on $[0,1)$. Since $f_{1} \circ f_{2}(0) > 0$, $f_{1} \circ f_{2}(1) < 1$ and $f_{1} \circ f_{2}$ is convex, there is precisely one fixed point of $f_{1} \circ f_{2}$ in $[0,1]$, and therefore $\nd_{1,b} = 0$.
\section{Proof of Theorem~\ref{thm:main_2}}\label{sec:main_2_proof}
\subsection{Proof of Theorem~\ref{thm:main_2}, part \ref{main_2_part_1}}\label{subsec:main_2_part_1}
For every $j \in [m]$, we show that $\NW_{1,j} \subseteq \ESW_{j}$, $\NL_{2,j} \subseteq \EEL_{j}$, $\MW_{1,j} \subseteq \ESW_{j}$ and $\ML_{2,j} \subseteq \EEL_{j}$, which immediately gives us the desired conclusion.
Fix a realization $T$ of $\mathcal{T}$. If $v \in \NW_{1,j}$, the normal game with $v$ as the initial vertex and P1 playing the first round is won by P1. Consider an escape game on $T(v)$ with $v$ as the initial vertex and Stopper playing the first round. Stopper employs the exact same strategy against Escaper as P1 does against P2 in the aforementioned normal game. If the normal game culminates in the token reaching a vertex $u$, at an odd distance from $v$, such that no child of $u$ has colour in $[m] \setminus S_{\sigma(u)}$, then the escape game also reaches $u$ in the corresponding round, and Escaper, whose turn it is to move, is unable to do so as there are no outgoing edges from $u$ that are permissible for her. Thus, Stopper wins the escape game, establishing that $v \in \ESW_{j}$ as well. Likewise, $\NL_{2,j} \subseteq \EEL_{j}$.
Now consider $v$ in $\MW_{1,j}$. The mis\`{e}re game with $v$ as the initial vertex and P1 playing the first round is won by P1. Consider an escape game on $T(v)$ with $v$ as the initial vertex and Stopper playing the first round. Stopper employs the exact same strategy against Escaper as P1 does against P2 in the aforementioned mis\`{e}re game. If the mis\`{e}re game culminates in the token reaching a vertex $u$, at an even distance from $v$, such that no child of $u$ has colour in $S_{\sigma(u)}$, then the escape game also reaches $u$ in the corresponding round, and Stopper, whose turn it is to move, is unable to do so since there are no outgoing edges from $u$ that are permissible for her. Thus, Stopper wins the escape game, proving that $v \in \ESW_{j}$. Likewise, $\ML_{2,j} \subseteq \EEL_{j}$.
\subsection{Proof of Theorem~\ref{thm:main_2}, part \ref{main_2_part_2}}\label{subsec:main_2_part_2} Given $x_{1}, \ldots, x_{r} \in [0,1]$ and $n_{1}, \ldots, n_{r} \in \mathbb{N}_{0}$, not all zero, with $x_{1} = \min\left\{x_{i}: i \in [r]\right\}$, we can write
\begin{multline}
1 - \prod_{t=1}^{r} (1 - x_{t})^{n_{t}}
= 1 - \left(1 - x_{1}\right)^{\sum_{t=1}^{r} n_{t}} + \sum_{i=2}^{r} \left(1 - x_{1}\right)^{\sum_{t=1}^{i-1}n_{t}} \left\{\left(1-x_{1}\right)^{n_{i}} - \left(1 - x_{i}\right)^{n_{i}}\right\} \prod_{t=i+1}^{r} \left(1 - x_{t}\right)^{n_{t}}
\\= x_{1} \sum_{i=0}^{\sum_{t=1}^{r} n_{t}-1} \left(1 - x_{1}\right)^{i} + \sum_{i=2}^{r} \left(1 - x_{1}\right)^{\sum_{t=1}^{i-1}n_{t}} \left(x_{i} - x_{1}\right) \left\{\sum_{j=0}^{n_{i}-1}\left(1-x_{1}\right)^{j}\left(1 - x_{i}\right)^{n_{i}-1-j}\right\} \prod_{t=i+1}^{r} \left(1 - x_{t}\right)^{n_{t}}
\\ \geqslant x_{1} + \sum_{i=2}^{r} (x_{i}-x_{1}) n_{i} \left(1 - x_{i}\right)^{n_{i}-1} \prod_{t \in [r] \setminus \{i\}} \left(1 - x_{t}\right)^{n_{t}}. \nonumber
\end{multline}
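The elementary inequality above can be stress-tested numerically; the following sketch (an illustration, not a proof) samples random $x_{i}$ and $n_{i}$ and checks that no violations occur.

```python
# Randomized check (illustration only) of the inequality
# 1 - prod_t (1 - x_t)^{n_t}
#   >= x_1 + sum_{i>=2} (x_i - x_1) n_i (1 - x_i)^{n_i - 1}
#                       prod_{t != i} (1 - x_t)^{n_t},
# where x_1 = min_i x_i and not all n_i are zero.
import random

def sides(xs, ns):
    """Return (LHS, RHS) of the inequality, for sorted xs (xs[0] = min)."""
    r = len(xs)
    lhs = 1.0
    for x, n in zip(xs, ns):
        lhs *= (1 - x) ** n
    lhs = 1.0 - lhs
    rhs = xs[0]
    for i in range(1, r):
        if ns[i] == 0:
            continue                       # such terms vanish
        prod = 1.0
        for t in range(r):
            if t != i:
                prod *= (1 - xs[t]) ** ns[t]
        rhs += (xs[i] - xs[0]) * ns[i] * (1 - xs[i]) ** (ns[i] - 1) * prod
    return lhs, rhs

random.seed(0)
violations = 0
for _ in range(10000):
    r = random.randint(1, 5)
    xs = sorted(random.random() for _ in range(r))
    ns = [random.randint(0, 4) for _ in range(r)]
    if all(n == 0 for n in ns):
        ns[0] = 1                          # ensure not all n_i are zero
    lhs, rhs = sides(xs, ns)
    if lhs < rhs - 1e-12:
        violations += 1
print(violations)  # 0
```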
For $v$ to be in $\ESW_{j}$, either $v$ has no child of colour $k$ for any $k \in S_{j}$, or $v$ has at least one child $u$ in $\EEL_{k}$ for some $k \in S_{j}$. Letting $k_{0} \in S_{j}$ be such that $\eel_{k_{0}} = \min\left\{\eel_{k}: k \in S_{j}\right\}$, we get
\begin{align}
\esw_{j} &= \alpha_{j} + \sum_{\substack{n_{r} \in \mathbb{N}_{0}: r \in [m]\\\left(n_{k}: k \in S_{j}\right) \neq \mathbf{0}_{S_{j}}}}\left\{1 - \prod_{k \in S_{j}}\left(1 - \eel_{k}\right)^{n_{k}}\right\} \chi_{j}(n_{1}, \ldots, n_{m}) \nonumber\\
&= \alpha_{j} + \sum_{\mathbf{n}_{S_{j}} = \left(n_{k}: k \in S_{j}\right) \in \mathbb{N}_{0}^{S_{j}} \setminus \left\{\mathbf{0}_{S_{j}}\right\}}\Bigg\{\eel_{k_{0}} + \sum_{k \in S_{j} \setminus \{k_{0}\}} \left(\eel_{k} - \eel_{k_{0}}\right) n_{k} \left(1 - \eel_{k}\right)^{n_{k}-1} \nonumber\\&\prod_{t \in S_{j} \setminus \{k\}}\left(1 - \eel_{t}\right)^{n_{t}}\Bigg\} \chi_{j,S_{j}}\left(\mathbf{n}_{S_{j}}\right) \nonumber\\
&= \alpha_{j} + \eel_{k_{0}}\left(1 - \alpha_{j}\right) + \sum_{k \in S_{j}} \left(\eel_{k} - \eel_{k_{0}}\right) \sum_{\mathbf{n}_{S_{j}} = \left(n_{k}: k \in S_{j}\right) \in \mathbb{N}_{0}^{S_{j}} \setminus \left\{\mathbf{0}_{S_{j}}\right\}} n_{k} \left(1 - \eel_{k}\right)^{n_{k}-1} \nonumber\\& \prod_{t \in S_{j} \setminus \{k\}}\left(1 - \eel_{t}\right)^{n_{t}} \chi_{j,S_{j}}\left(\mathbf{n}_{S_{j}}\right) \nonumber\\
&= \alpha_{j} + \eel_{k_{0}}\left(1 - \alpha_{j}\right) + \sum_{k \in S_{j}} \left(\eel_{k} - \eel_{k_{0}}\right) \partial_{k} G_{j,S_{j}}\left(1 - \eel_{t}: t \in S_{j}\right). \nonumber
\end{align}
Observing that $\alpha_{j} + \eel_{k_{0}}\left(1 - \alpha_{j}\right) \geqslant \alpha_{j}\eel_{k_{0}} + \eel_{k_{0}}\left(1 - \alpha_{j}\right) = \eel_{k_{0}}$ gives the second inequality of \eqref{main_2_part_2_eq_1}. The remaining two claims of \ref{main_2_part_2} are established via very similar computations.
\subsection{Proof of Theorem~\ref{thm:main_2}, part \ref{main_2_part_3}}\label{subsec:main_2_part_3} Assume the hypothesis of \ref{main_2_part_3}, and suppose that $\bnl_{1}^{(n)} \preceq \bml_{1}^{(n)}$ for some $n \in \mathbb{N}_{0}$ (the base case, $n = 0$, is immediate as $\bml_{1}^{(0)} = \bnl_{1}^{(0)} = \mathbf{0}_{[m]}$). Then $\bml_{1}^{(n+2)} - \bnl_{1}^{(n+2)}$ equals
\begin{align}
\F_{N}\left(\mathbf{1}_{[m]} - \bnl_{1}^{(n)}\right) - \F_{M}\left(\mathbf{1}_{[m]} - \bml_{1}^{(n)}\right) \geqslant \F_{N}\left(\mathbf{1}_{[m]} - \bnl_{1}^{(n)}\right) - \F_{M}\left(\mathbf{1}_{[m]} - \bnl_{1}^{(n)}\right), \nonumber
\end{align}
where the inequality follows from the induction hypothesis and the fact that $\F_{M}$ is monotonically increasing. Now, for each $j \in [m]$,
\begin{multline}
\F_{N,j}\left(\mathbf{1}_{[m]} - \bnl_{1}^{(n)}\right) - \F_{M,j}\left(\mathbf{1}_{[m]} - \bnl_{1}^{(n)}\right) = - G_{j, S_{j}}\left(1 - G_{k, [m] \setminus S_{k}}\left(1 - \nl_{1,\ell}^{(n)}: \ell \in [m] \setminus S_{k}\right): k \in S_{j}\right)\\+ G_{j,S_{j}}\left(1 - G_{k,[m] \setminus S_{k}}\left(1 - \nl_{1,\ell}^{(n)}: \ell \in [m] \setminus S_{k}\right) + \beta_{k}: k \in S_{j}\right) - \alpha_{j} \geqslant - \alpha_{j} + \\ \sum_{i \in S_{j}} \beta_{i} \partial_{i} G_{j,S_{j}}\left(1 - G_{k,[m] \setminus S_{k}}\left(1 - \nl_{1,\ell}^{(n)}: \ell \in [m] \setminus S_{k}\right): k \in S_{j}\right) = \sum_{i \in S_{j}} \beta_{i} \partial_{i} G_{j,S_{j}}\left(\nw_{2,k}^{(n+1)}: k \in S_{j}\right) - \alpha_{j}, \nonumber
\end{multline}
where the inequality holds as $G_{j,S_{j}}$ is convex (see, for example, \cite[Theorem 21.3]{simon_blume}) and the last equality follows from \eqref{normal_recur_2}. Given the continuously differentiable pgf $G: [0,1]^{r} \rightarrow [0,1]$ corresponding to a probability distribution $\chi$ supported on $\mathbb{N}_{0}^{r}$ for $r \in \mathbb{N}$, we have, for $i \in [r]$,
\begin{align}
\partial_{i} G(x_{1}, \ldots, x_{r}) &= \sum_{n_{k} \in \mathbb{N}_{0}: k \in [r]} \left(\prod_{k \in [r] \setminus \{i\}} x_{k}^{n_{k}}\right) n_{i} x_{i}^{n_{i}-1} \chi(n_{1}, \ldots, n_{r}), \nonumber
\end{align}
which is a power series with non-negative coefficients. If $(x_{1}, \ldots, x_{r})$ and $(y_{1}, \ldots, y_{r})$ in $[0,1]^{r}$ satisfy $(x_{1}, \ldots, x_{r}) \preceq (y_{1}, \ldots, y_{r})$, then $\partial_{i} G(x_{1}, \ldots, x_{r}) \leqslant \partial_{i} G(y_{1}, \ldots, y_{r})$. Since $0 = \nw_{2,k}^{(1)} \leqslant \nw_{2,k}^{(n+1)}$ for all $n \in \mathbb{N}_{0}$ and all $k \in S_{j}$, hence $\partial_{i} G_{j,S_{j}}\left(\mathbf{0}_{S_{j}}\right) \leqslant \partial_{i} G_{j,S_{j}}\left(\nw_{2,k}^{(n+1)}: k \in S_{j}\right)$. Moreover, $\partial_{i} G_{j,S_{j}}\left(\mathbf{0}_{S_{j}}\right) = \Prob\left[X_{v,i} = 1, X_{v,k} = 0 \text{ for all } k \in S_{j} \setminus \{i\}\big|\sigma(v) = j\right]$. Thus, if
\eqref{main_2_part_3_cond_1} holds, we have $\bnl_{1}^{(n+2)} \preceq \bml_{1}^{(n+2)}$, which completes the inductive step. Taking limits as $n \rightarrow \infty$ gives the rest.
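As a quick numerical sanity check of the monotonicity property just used (the offspring law below is a hypothetical toy, chosen only so its probabilities sum to $1$), the partial derivatives of a pgf are coordinatewise non-decreasing on $[0,1]^{r}$:

```python
# Toy bivariate offspring distribution chi on a finite support (hypothetical
# numbers, chosen only so the probabilities sum to 1).
chi = {(0, 0): 0.2, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 0.25,
       (2, 0): 0.15, (0, 2): 0.1, (2, 2): 0.1}

def partial_G(i, x):
    """Partial derivative of the pgf G(x1,x2) = sum x1^n1 x2^n2 chi(n1,n2)
    with respect to coordinate i (0-indexed)."""
    total = 0.0
    for n, p in chi.items():
        if n[i] == 0:
            continue  # monomial constant in x_i: derivative vanishes
        other = 1 - i
        total += n[i] * x[i] ** (n[i] - 1) * x[other] ** n[other] * p
    return total

# coordinatewise x <= y implies partial_i G(x) <= partial_i G(y),
# since each partial derivative is a power series with non-negative coefficients
x, y = (0.3, 0.4), (0.7, 0.9)
for i in (0, 1):
    assert partial_G(i, x) <= partial_G(i, y)
```

This is exactly the mechanism behind the comparison $\partial_{i} G_{j,S_{j}}(\mathbf{0}_{S_{j}}) \leqslant \partial_{i} G_{j,S_{j}}(\nw_{2,k}^{(n+1)}: k \in S_{j})$ above.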
The second part of \ref{main_2_part_3} is also proved via induction. Assuming $\bnw_{1}^{(n)} \prec \bmw_{1}^{(n)}$, for each $j \in [m]$,
\begin{align}
\mw_{1,j}^{(n+2)} - \nw_{1,j}^{(n+2)} &= F_{M,j}\left(\bmw_{1}^{(n)}\right) - F_{N,j}\left(\bnw_{1}^{(n)}\right) \geqslant F_{M,j}\left(\bmw_{1}^{(n)}\right) - F_{N,j}\left(\bmw_{1}^{(n)}\right) \nonumber\\
&= \alpha_{j} - G_{j,S_{j}}\left(1 - G_{k,[m] \setminus S_{k}}\left(\mw_{1,\ell}^{(n)}: \ell \in [m] \setminus S_{k}\right) + \beta_{k}: k \in S_{j}\right) \nonumber\\&+ G_{j,S_{j}}\left(1 - G_{k,[m] \setminus S_{k}}\left(\mw_{1,\ell}^{(n)}: \ell \in [m] \setminus S_{k}\right): k \in S_{j}\right) \nonumber\\
&\geqslant \alpha_{j} - \sum_{i \in S_{j}} \beta_{i} \partial_{i} G_{j,S_{j}}\left(1 - G_{k,[m] \setminus S_{k}}\left(\mw_{1,\ell}^{(n)}: \ell \in [m] \setminus S_{k}\right) + \beta_{k}: k \in S_{j}\right) \nonumber\\
&= \alpha_{j} - \sum_{i \in S_{j}} \beta_{i} \partial_{i} G_{j,S_{j}}\left(1 - \ml_{2,k}^{(n+1)}: k \in S_{j}\right). \nonumber
\end{align}
From $0 = \ml_{2,k}^{(1)} \leqslant \ml_{2,k}^{(n+1)}$ for all $n \in \mathbb{N}_{0}$ and the monotonically increasing $\partial_{i} G_{j,S_{j}}$, it suffices to have $\alpha_{j} \geqslant \sum_{i \in S_{j}} \beta_{i} \partial_{i} G_{j,S_{j}}\left(\mathbf{1}_{S_{j}}\right)$, which is equivalent to \eqref{main_2_part_3_cond_2}.
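The key inequality invoked twice in this proof is the supporting-hyperplane bound $G(\mathbf{x} + \bm{\beta}) - G(\mathbf{x}) \geqslant \sum_{i} \beta_{i} \partial_{i} G(\mathbf{x})$ for a pgf $G$. A numerical sanity check (with a hypothetical toy offspring law; this is an illustration, not part of the proof):

```python
# Check the supporting-hyperplane inequality used in the proof,
#   G(x + beta) - G(x) >= sum_i beta_i * dG/dx_i (x),
# for a pgf (a power series with non-negative coefficients).
chi = {(0, 0): 0.3, (1, 1): 0.3, (2, 1): 0.2, (0, 3): 0.2}  # toy law, sums to 1

def G(x):
    return sum(p * x[0] ** n1 * x[1] ** n2 for (n1, n2), p in chi.items())

def grad_G(x):
    g0 = sum(p * n1 * x[0] ** (n1 - 1) * x[1] ** n2
             for (n1, n2), p in chi.items() if n1 > 0)
    g1 = sum(p * n2 * x[0] ** n1 * x[1] ** (n2 - 1)
             for (n1, n2), p in chi.items() if n2 > 0)
    return (g0, g1)

x, beta = (0.5, 0.6), (0.2, 0.1)          # x and x + beta both lie in [0,1]^2
lhs = G((x[0] + beta[0], x[1] + beta[1])) - G(x)
rhs = beta[0] * grad_G(x)[0] + beta[1] * grad_G(x)[1]
assert lhs >= rhs
```

The inequality holds because each $\partial_{i} G$ is itself non-decreasing along increasing directions, so the first-order term underestimates the increment.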
\section{Proof of Theorem~\ref{thm:main_3}}\label{sec:main_3_proof}
The proof of Theorem~\ref{thm:main_3} proceeds via several lemmas, as follows. Recall the law $\mathcal{L}_{\bm{\chi}}$ of $\mathcal{T}_{[m], \mathbf{p}, \bm{\chi}}$ defined in \S\ref{subsec:main_results}, and the metric $d_{0}$ from \eqref{metric_d_{0}}.
\begin{lemma}\label{main_3_lem_1}
The probabilities $\alpha_{j,\bm{\chi}}$ and $\beta_{j,\bm{\chi}}$ are continuous functions of $\mathcal{L}_{\bm{\chi}}$ with respect to $d_{0}$.
\end{lemma}
\begin{proof}
We show the proof only for $\alpha_{j, \bm{\chi}}$. Given laws $\mathcal{L}_{\bm{\chi}}$ and $\mathcal{L}_{\bm{\eta}}$, we have
\begin{align}
\left|\alpha_{j,\bm{\chi}} - \alpha_{j,\bm{\eta}}\right| &= \left|\chi_{j,S_{j}}\left(\mathbf{0}_{S_{j}}\right) - \eta_{j,S_{j}}\left(\mathbf{0}_{S_{j}}\right)\right| \nonumber\\
&\leqslant \sum_{\mathbf{n}_{[m] \setminus S_{j}} = \left(n_{k}: k \in [m] \setminus S_{j}\right) \in \mathbb{N}_{0}^{|[m] \setminus S_{j}|}} \left|\chi_{j}\left(\mathbf{0}_{S_{j}} \vee \mathbf{n}_{[m] \setminus S_{j}}\right) - \eta_{j}\left(\mathbf{0}_{S_{j}} \vee \mathbf{n}_{[m] \setminus S_{j}}\right)\right| \leqslant 2d_{0}\left(\mathcal{L}_{\bm{\chi}}, \mathcal{L}_{\bm{\eta}}\right).\nonumber \qedhere
\end{align}
\end{proof}
\begin{lemma}\label{main_3_lem_2}
Fix $j \in [m]$, $S \subset [m]$ and $\alpha \in (0,1)$. Set $S_{\alpha} = \sum_{n \in \mathbb{N}} n \alpha^{n-1} = (1-\alpha)^{-2}$. Fix $\mathbf{x}_{S} = \left(x_{k}: k \in S\right)$ and $\mathbf{y}_{S} = \left(y_{k}: k \in S\right)$ in $[0,\alpha]^{S}$. Then $\mathcal{L}_{\bm{\chi}}$ satisfies the following inequality:
\begin{equation}\label{main_3_lem_2_eq_1}
\left|G_{j,S,\bm{\chi}}\left(\mathbf{x}_{S}\right) - G_{j,S,\bm{\chi}}\left(\mathbf{y}_{S}\right)\right| \leqslant \max_{k \in S} \left|x_{k} - y_{k}\right| S_{\alpha}.
\end{equation}
If $\mathcal{E}_{j,S,\bm{\chi}} = \E_{\mathcal{L}_{\bm{\chi}}}\left[\sum_{k \in S} X_{v,k}\big|\sigma(v) = j\right]$ is finite, for all $\mathbf{x}_{S}$ and $\mathbf{y}_{S}$ in $[0,1]^{S}$, we have
\begin{equation}\label{main_3_lem_2_eq_2}
\left|G_{j,S,\bm{\chi}}\left(\mathbf{x}_{S}\right) - G_{j,S,\bm{\chi}}\left(\mathbf{y}_{S}\right)\right| \leqslant \max_{k \in S} \left|x_{k} - y_{k}\right|\mathcal{E}_{j,S,\bm{\chi}}.
\end{equation}
\end{lemma}
\begin{proof}
Given tuples $(u_{1}, \ldots, u_{r})$ and $(v_{1}, \ldots, v_{r})$ in $[0,\alpha]^{r}$ and $(n_{1}, \ldots, n_{r}) \in \mathbb{N}_{0}^{r}$, we can write
\begin{align}
\left|\prod_{k \in [r]} u_{k}^{n_{k}} - \prod_{k \in [r]} v_{k}^{n_{k}}\right| &\leqslant \sum_{i \in [r]} \left(\prod_{j=1}^{i-1} u_{j}^{n_{j}}\right) \left|u_{i}^{n_{i}} - v_{i}^{n_{i}}\right| \left(\prod_{j=i+1}^{r} v_{j}^{n_{j}}\right)\nonumber\\
&= \sum_{i \in [r]} \left(\prod_{j=1}^{i-1} u_{j}^{n_{j}}\right) \left|u_{i}-v_{i}\right| \left(\sum_{t=0}^{n_{i}-1} u_{i}^{t} v_{i}^{n_{i}-1-t}\right) \left(\prod_{j=i+1}^{r} v_{j}^{n_{j}}\right)\nonumber\\
&\leqslant \sum_{i \in [r]} \alpha^{\sum_{j=1}^{i-1} n_{j}} \left|u_{i}-v_{i}\right| n_{i} \alpha^{n_{i}-1} \alpha^{\sum_{j=i+1}^{r} n_{j}} \leqslant \max_{k \in [r]} |u_{k} - v_{k}| \left(\sum_{i \in [r]} n_{i}\right) \alpha^{\sum_{j \in [r]} n_{j}-1}.\nonumber
\end{align}
This allows us to bound $\left|G_{j,S,\bm{\chi}}\left(\mathbf{x}_{S}\right) - G_{j,S,\bm{\chi}}\left(\mathbf{y}_{S}\right)\right|$ by
\begin{align}
&\sum_{\mathbf{n}_{S} = (n_{k}: k \in S) \in \mathbb{N}_{0}^{S}} \left|\prod_{k \in S} x_{k}^{n_{k}} - \prod_{k \in S} y_{k}^{n_{k}}\right| \chi_{j,S}\left(\mathbf{n}_{S}\right) \leqslant \sum_{\mathbf{n}_{S} = (n_{k}: k \in S) \in \mathbb{N}_{0}^{S}} \max_{k \in S}|x_{k} - y_{k}| \left(\sum_{k \in S} n_{k}\right) \alpha^{\sum_{k \in S} n_{k} - 1}\chi_{j,S}\left(\mathbf{n}_{S}\right) \nonumber\\
&= \max_{k \in S}|x_{k} - y_{k}| \sum_{M \in \mathbb{N}_{0}} M \alpha^{M-1} \sum_{\mathbf{n}_{S} = (n_{k}: k \in S) \in \mathbb{N}_{0}^{S}: \sum_{k \in S} n_{k} = M} \chi_{j,S}\left(\mathbf{n}_{S}\right) \nonumber\\
&= \max_{k \in S}|x_{k} - y_{k}| \sum_{M \in \mathbb{N}} M \alpha^{M-1} \Prob_{\mathcal{L}_{\bm{\chi}}}\left[\sum_{k \in S} X_{\phi,k} = M\big|\sigma(\phi) = j\right] \leqslant \max_{k \in S}|x_{k} - y_{k}| S_{\alpha},\nonumber
\end{align}
which gives us \eqref{main_3_lem_2_eq_1}. To deduce \eqref{main_3_lem_2_eq_2}, similar computations lead to the general inequality
\begin{equation}
\left|\prod_{k \in [r]} u_{k}^{n_{k}} - \prod_{k \in [r]} v_{k}^{n_{k}}\right| \leqslant \max_{k \in [r]} |u_{k} - v_{k}| \left(\sum_{i \in [r]} n_{i}\right), \nonumber
\end{equation}
and using this, we bound $\left|G_{j,S,\bm{\chi}}\left(\mathbf{x}_{S}\right) - G_{j,S,\bm{\chi}}\left(\mathbf{y}_{S}\right)\right|$ by
\begin{multline}
\sum_{\mathbf{n}_{S} = (n_{k}: k \in S) \in \mathbb{N}_{0}^{S}} \max_{k \in S}|x_{k} - y_{k}| \left(\sum_{k \in S} n_{k}\right)\chi_{j,S}\left(\mathbf{n}_{S}\right) = \max_{k \in S}|x_{k} - y_{k}| \sum_{M \in \mathbb{N}_{0}} M \sum_{\substack{n_{k} \in \mathbb{N}_{0}: k \in S\\ \sum_{k \in S} n_{k} = M}} \chi_{j,S}\left(\mathbf{n}_{S}\right) \\= \max_{k \in S}|x_{k} - y_{k}| \sum_{M \in \mathbb{N}_{0}} M \Prob_{\mathcal{L}_{\bm{\chi}}}\left[\sum_{k \in S} X_{\phi,k} = M\big|\sigma(\phi) = j\right] = \max_{k \in S}|x_{k} - y_{k}| \mathcal{E}_{j,S,\bm{\chi}}.\nonumber \qedhere
\end{multline}
\end{proof}
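As a numerical illustration of \eqref{main_3_lem_2_eq_1} (with a hypothetical randomly generated offspring law on a finite support; this is a sanity check, not part of the proof), note that $S_{\alpha} = (1-\alpha)^{-2}$:

```python
import random

# Check |G(x) - G(y)| <= max_k |x_k - y_k| * S_alpha on [0, alpha]^S,
# with S_alpha = sum_{n>=1} n alpha^{n-1} = 1/(1-alpha)^2.
random.seed(0)
alpha = 0.8
S_alpha = 1.0 / (1.0 - alpha) ** 2

# hypothetical trivariate offspring law on {0,...,3}^3, normalized to sum to 1
support = [(n1, n2, n3) for n1 in range(4) for n2 in range(4) for n3 in range(4)]
weights = [random.random() for _ in support]
Z = sum(weights)
chi = {n: w / Z for n, w in zip(support, weights)}

def G(x):
    return sum(p * x[0] ** n[0] * x[1] ** n[1] * x[2] ** n[2]
               for n, p in chi.items())

for _ in range(1000):
    x = tuple(random.uniform(0, alpha) for _ in range(3))
    y = tuple(random.uniform(0, alpha) for _ in range(3))
    gap = max(abs(a - b) for a, b in zip(x, y))
    assert abs(G(x) - G(y)) <= gap * S_alpha + 1e-12
```

The restriction of the arguments to $[0,\alpha]^{S}$ is what makes the geometric weight $\alpha^{\sum n_{k} - 1}$ in the proof available.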
\begin{lemma}\label{main_3_lem_3}
Given laws $\mathcal{L}_{\bm{\chi}}$ and $\mathcal{L}_{\bm{\eta}}$, subset $S \subset [m]$ and $\mathbf{x}_{S} = (x_{k}: k \in S) \in [0,1]^{S}$, we have
\begin{equation}
\left|G_{j,S,\bm{\chi}}\left(\mathbf{x}_{S}\right) - G_{j,S,\bm{\eta}}\left(\mathbf{x}_{S}\right)\right| \leqslant 2d_{0}\left(\mathcal{L}_{\bm{\chi}}, \mathcal{L}_{\bm{\eta}}\right).
\end{equation}
\end{lemma}
\begin{proof}
$\begin{aligned}[t]
&\left|G_{j,S,\bm{\chi}}\left(\mathbf{x}_{S}\right) - G_{j,S,\bm{\eta}}\left(\mathbf{x}_{S}\right)\right| = \left|\sum_{\mathbf{n}_{S} = (n_{k}: k \in S) \in \mathbb{N}_{0}^{S}}\prod_{k \in S} x_{k}^{n_{k}} \chi_{j,S}(\mathbf{n}_{S}) - \sum_{\mathbf{n}_{S} = (n_{k}: k \in S) \in \mathbb{N}_{0}^{S}}\prod_{k \in S} x_{k}^{n_{k}} \eta_{j,S}(\mathbf{n}_{S})\right| \nonumber\\
&\leqslant \sum_{\mathbf{n}_{S} = (n_{k}: k \in S) \in \mathbb{N}_{0}^{S}}\prod_{k \in S} x_{k}^{n_{k}} \left|\chi_{j,S}(\mathbf{n}_{S}) - \eta_{j,S}(\mathbf{n}_{S})\right| \nonumber\\
&\leqslant \sum_{\mathbf{n}_{[m] \setminus S} = (n_{k}: k \in [m] \setminus S) \in \mathbb{N}_{0}^{[m] \setminus S}} \sum_{\mathbf{n}_{S} = (n_{k}: k \in S) \in \mathbb{N}_{0}^{S}} \left|\chi_{j}\left(\mathbf{n}_{S} \vee \mathbf{n}_{[m] \setminus S}\right) - \eta_{j}\left(\mathbf{n}_{S} \vee \mathbf{n}_{[m] \setminus S}\right)\right| = 2||\chi_{j} - \eta_{j}||_{\tv}. \nonumber \qedhere
\end{aligned}$
\end{proof}
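The bound of Lemma~\ref{main_3_lem_3} can likewise be checked numerically (both offspring laws below are hypothetical toys; the $\ell^{1}$ distance between probability vectors equals twice the total-variation distance):

```python
# Check |G_chi(x) - G_eta(x)| <= sum_n |chi(n) - eta(n)| = 2 * d_TV(chi, eta)
# for x in [0,1]^S, with two toy bivariate offspring laws.
chi = {(0, 0): 0.4, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.1}
eta = {(0, 0): 0.25, (1, 0): 0.25, (0, 1): 0.25, (1, 1): 0.25}

def G(dist, x):
    return sum(p * x[0] ** n1 * x[1] ** n2 for (n1, n2), p in dist.items())

l1 = sum(abs(chi[n] - eta[n]) for n in chi)   # = 2 * total-variation distance

for x in [(0.0, 0.0), (0.3, 0.9), (0.5, 0.5), (1.0, 1.0)]:
    assert abs(G(chi, x) - G(eta, x)) <= l1 + 1e-12
```

The bound is uniform in $\mathbf{x}_{S}$ precisely because every monomial $\prod_{k} x_{k}^{n_{k}}$ lies in $[0,1]$.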
\begin{lemma}\label{main_3_lem_5}
Given laws $\mathcal{L}_{\bm{\chi}}$ and $\mathcal{L}_{\bm{\eta}}$ with $d_{0}\left(\mathcal{L}_{\bm{\chi}}, \mathcal{L}_{\bm{\eta}}\right) \leqslant \epsilon_{1}$, and $\mathbf{x} = (x_{k}: k \in [m])$ and $\mathbf{y} = (y_{k}: k \in [m])$ in $[0,\alpha]^{m}$, for some $\alpha \in (0,1)$, with $\max_{k \in [m]}|x_{k} - y_{k}| \leqslant \epsilon_{2}$, we have
\begin{equation}\label{main_3_lem_5_eq_1}
\left|G_{j,S,\bm{\chi}}\left(x_{k}: k \in S\right) - G_{j,S,\bm{\eta}}\left(y_{k}: k \in S\right)\right| \leqslant 2\epsilon_{1} + S_{\alpha} \epsilon_{2},
\end{equation}
for any subset $S$ of $[m]$. If $\mathcal{E}_{j,S,\bm{\chi}} = \E_{\mathcal{L}_{\bm{\chi}}}\left[\sum_{k \in S} X_{v,k}\big|\sigma(v) = j\right]$ is finite, for $\mathbf{x} = (x_{k}: k \in [m])$ and $\mathbf{y} = (y_{k}: k \in [m])$ in $[0,1]^{m}$ with $\max_{k \in [m]}|x_{k} - y_{k}| \leqslant \epsilon_{2}$, we have
\begin{equation}\label{main_3_lem_5_eq_2}
\left|G_{j,S,\bm{\chi}}\left(x_{k}: k \in S\right) - G_{j,S,\bm{\eta}}\left(y_{k}: k \in S\right)\right| \leqslant 2\epsilon_{1} + \mathcal{E}_{j,S,\bm{\chi}} \epsilon_{2}.
\end{equation}
\end{lemma}
\begin{proof}
The left sides of both \eqref{main_3_lem_5_eq_1} and \eqref{main_3_lem_5_eq_2} can be bounded above as follows:
\begin{multline}
\left|G_{j,S,\bm{\chi}}\left(x_{k}: k \in S\right) - G_{j,S,\bm{\eta}}\left(y_{k}: k \in S\right)\right| \leqslant \left|G_{j,S,\bm{\chi}}\left(x_{k}: k \in S\right) - G_{j,S,\bm{\chi}}\left(y_{k}: k \in S\right)\right| \\+ \left|G_{j,S,\bm{\chi}}\left(y_{k}: k \in S\right) - G_{j,S,\bm{\eta}}\left(y_{k}: k \in S\right)\right|.\nonumber
\end{multline}
Under the hypothesis of the first part of Lemma \ref{main_3_lem_5}, we use \eqref{main_3_lem_2_eq_1} to get
\begin{equation}
\left|G_{j,S,\bm{\chi}}\left(x_{k}: k \in S\right) - G_{j,S,\bm{\chi}}\left(y_{k}: k \in S\right)\right| \leqslant \max_{k \in S}|x_{k} - y_{k}| S_{\alpha} \leqslant \epsilon_{2} S_{\alpha},\nonumber
\end{equation}
and by Lemma~\ref{main_3_lem_3}, we have $\left|G_{j,S,\bm{\chi}}\left(y_{k}: k \in S\right) - G_{j,S,\bm{\eta}}\left(y_{k}: k \in S\right)\right| \leqslant 2d_{0}\left(\mathcal{L}_{\bm{\chi}}, \mathcal{L}_{\bm{\eta}}\right) \leqslant 2\epsilon_{1}$. Combining everything, we get \eqref{main_3_lem_5_eq_1}. To deduce \eqref{main_3_lem_5_eq_2}, we apply \eqref{main_3_lem_2_eq_2} instead of \eqref{main_3_lem_2_eq_1}.
\end{proof}
\begin{lemma}\label{main_3_lem_4}
For each $j \in [m]$ and $n \in \mathbb{N}$, we have $\nw_{1,j,\bm{\chi}}^{(n)} \leqslant 1 - \nl_{1,j,\bm{\chi}}^{(n)} \leqslant 1 - \alpha_{j,\bm{\chi}}$, whereas $\nw_{2,j,\bm{\chi}}^{(n)} \leqslant 1 - \nl_{2,j,\bm{\chi}}^{(n)} \leqslant 1 - \beta_{j,\bm{\chi}}$. We also have $\mw_{1,j,\bm{\chi}}^{(n)} \leqslant 1 - \ml_{1,j,\bm{\chi}}^{(n)} \leqslant 1 + \alpha_{j,\bm{\chi}} - G_{j,S_{j},\bm{\chi}}\left(\beta_{k,\bm{\chi}}: k \in S_{j}\right)$, whereas $\mw_{2,j,\bm{\chi}}^{(n)} \leqslant 1 - \ml_{2,j,\bm{\chi}}^{(n)} \leqslant 1 + \beta_{j,\bm{\chi}} - G_{j,[m] \setminus S_{j},\bm{\chi}}\left(\alpha_{k,\bm{\chi}}: k \in [m] \setminus S_{j}\right)$.
\end{lemma}
\begin{proof}
If the initial vertex $v$, with $\sigma(v) = j$, has no child of colour $k$ for any $k \in S_{j}$, and P1 plays the first round, she loses. Thus $\nl_{1,j}^{(n)} \geqslant \alpha_{j,\bm{\chi}}$ for each $n \in \mathbb{N}$. This yields $\nw_{1,j,\bm{\chi}}^{(n)} \leqslant 1 - \nl_{1,j,\bm{\chi}}^{(n)} \leqslant 1 - \alpha_{j,\bm{\chi}}$, as desired. The claim about $\nw_{2,j,\bm{\chi}}^{(n)}$ and $1 - \nl_{2,j,\bm{\chi}}^{(n)}$ follows similarly.
Let $v$, with $\sigma(v) = j$, be the initial vertex. Suppose that every child $u$ of $v$ whose colour $k$ lies in $S_{j}$ has no child of colour $\ell$ for any $\ell \in [m] \setminus S_{k}$. If P1 plays the first round of the mis\`{e}re game, she is forced to move the token to such a $u$, and P2, unable to move in the second round, wins the game. Thus, for each $n \in \mathbb{N}$,
\begin{align}
\ml_{1,j,\bm{\chi}}^{(n)} &\geqslant \sum_{(n_{k}: k \in S_{j}) \in \mathbb{N}_{0}^{|S_{j}|} \setminus \left\{\mathbf{0}_{S_{j}}\right\}} \prod_{k \in S_{j}} \beta_{k,\bm{\chi}}^{n_{k}} \chi_{j,S_{j}}\left(n_{k}: k \in S_{j}\right) = G_{j,S_{j},\bm{\chi}}\left(\beta_{k,\bm{\chi}}: k \in S_{j}\right) - \alpha_{j,\bm{\chi}}.\nonumber
\end{align}
Consequently, both $\mw_{1,j,\bm{\chi}}^{(n)}$ and $1 - \ml_{1,j,\bm{\chi}}^{(n)}$ are bounded above by $1 + \alpha_{j,\bm{\chi}} - G_{j,S_{j},\bm{\chi}}\left(\beta_{k,\bm{\chi}}: k \in S_{j}\right)$. The claim about $\mw_{2,j,\bm{\chi}}^{(n)}$ and $1 - \ml_{2,j,\bm{\chi}}^{(n)}$ follows similarly.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main_3}]
Proof of \ref{main_3_part_1}: We establish the claim for $\nw_{1,j,\bm{\chi}}$, for any fixed $j \in [m]$. Lemma~\ref{lem:normal_compactness_consequence} implies that $\nw_{1,j,\bm{\chi}}$ is the limit of the increasing sequence $\left\{\nw_{1,j,\bm{\chi}}^{(n)}\right\}_{n}$. It thus suffices to show that $\nw_{1,j,\bm{\chi}}^{(n)}$ is a continuous function of $\mathcal{L}_{\bm{\chi}}$ for each $n \in \mathbb{N}_{0}$, with respect to $d_{0}$.
We assume, for some $n \in \mathbb{N}_{0}$, that $\nw_{1,j,\bm{\chi}}^{(n)}$ is continuous in $\mathcal{L}_{\bm{\chi}}$, and show that $\nw_{1,j,\bm{\chi}}^{(n+2)} = F_{N,j,\bm{\chi}}\left(\bnw_{1,\bm{\chi}}^{(n)}\right)$ is continuous in $\mathcal{L}_{\bm{\chi}}$ as well. The base case of $n = 0$ is immediate since $\nw_{1,j,\bm{\chi}}^{(0)} = 0$.
First, consider $\mathcal{L}_{\bm{\chi}}$ in $\mathcal{D}_{1}$. From Lemma~\ref{main_3_lem_1} and Lemma~\ref{main_3_lem_4}, if we choose $\mathcal{L}_{\bm{\eta}}$ such that
\begin{equation}\label{dist_laws_main_3_part_1}
d_{0}\left(\mathcal{L}_{\bm{\eta}}, \mathcal{L}_{\bm{\chi}}\right) \leqslant \epsilon_{1} < \frac{1}{4} \min\left\{\alpha_{j,\bm{\chi}}: j \in [m]\right\},
\end{equation}
then $\max\left\{\nw_{1,j,\bm{\chi}}^{(n)}, \nw_{1,j,\bm{\eta}}^{(n)}\right\} \leqslant 1 - \frac{1}{2} \alpha_{j,\bm{\chi}} \text{ for each } j \in [m]$. Setting $\alpha = \max\left\{1 - \frac{\alpha_{j,\bm{\chi}}}{2}: j \in [m]\right\}$, we use \eqref{main_3_lem_5_eq_1} of Lemma~\ref{main_3_lem_5} and \eqref{dist_laws_main_3_part_1} to write, for each $k \in [m]$,
\begin{multline}\label{main_3_part_1_upper_bound_1}
\left|G_{k,[m] \setminus S_{k},\bm{\chi}}\left(\nw_{1,\ell,\bm{\chi}}^{(n)}: \ell \in [m] \setminus S_{k}\right) - G_{k,[m] \setminus S_{k},\bm{\eta}}\left(\nw_{1,\ell,\bm{\eta}}^{(n)}: \ell \in [m] \setminus S_{k}\right)\right|\\ \leqslant 2\epsilon_{1} + S_{\alpha}\max_{j \in [m]}\left|\nw_{1,j,\bm{\chi}}^{(n)} - \nw_{1,j,\bm{\eta}}^{(n)}\right|.
\end{multline}
If $\mathcal{L}_{\bm{\chi}} \in \mathcal{D}_{4}$, then for $\mathcal{L}_{\bm{\eta}}$ satisfying \eqref{dist_laws_main_3_part_1}, using \eqref{main_3_lem_5_eq_2} of Lemma~\ref{main_3_lem_5}, we have, for each $k \in [m]$,
\begin{multline}\label{main_3_part_1_upper_bound_2}
\left|G_{k,[m] \setminus S_{k},\bm{\chi}}\left(\nw_{1,\ell,\bm{\chi}}^{(n)}: \ell \in [m] \setminus S_{k}\right) - G_{k,[m] \setminus S_{k},\bm{\eta}}\left(\nw_{1,\ell,\bm{\eta}}^{(n)}: \ell \in [m] \setminus S_{k}\right)\right|\\ \leqslant 2\epsilon_{1} + \mathcal{E}_{k,[m] \setminus S_{k},\bm{\chi}}\max_{j \in [m]}\left|\nw_{1,j,\bm{\chi}}^{(n)} - \nw_{1,j,\bm{\eta}}^{(n)}\right|.
\end{multline}
We let $\epsilon_{3}$ denote the upper bound in \eqref{main_3_part_1_upper_bound_1} when $\mathcal{L}_{\bm{\chi}} \in \mathcal{D}_{1}$, and we let
\begin{equation}\label{epsilon_{3}_defn}
\epsilon_{3} = 2\epsilon_{1} + \mathcal{E}_{\bm{\chi}}\max_{j \in [m]}\left|\nw_{1,j,\bm{\chi}}^{(n)} - \nw_{1,j,\bm{\eta}}^{(n)}\right|, \text{ with } \mathcal{E}_{\bm{\chi}} = \max_{k \in [m]}\mathcal{E}_{k,[m] \setminus S_{k},\bm{\chi}},
\end{equation}
when $\mathcal{L}_{\bm{\chi}} \in \mathcal{D}_{4}$. Next, from Lemma~\ref{main_3_lem_4} and \eqref{normal_recur_4}, we have $1 - G_{k,[m] \setminus S_{k},\bm{\chi}}\left(\nw_{1,\ell,\bm{\chi}}^{(n)}: \ell \in [m] \setminus S_{k}\right) = 1 - \nl_{2,k,\bm{\chi}}^{(n+1)} \leqslant 1 - \beta_{k,\bm{\chi}}$. If $\mathcal{L}_{\bm{\chi}}$ is in $\mathcal{D}_{2}$ and $\mathcal{L}_{\bm{\eta}}$ satisfies
\begin{equation}\label{dist_cond_main_3_part_1}
d_{0}\left(\mathcal{L}_{\bm{\eta}}, \mathcal{L}_{\bm{\chi}}\right) \leqslant \epsilon_{1} < \frac{1}{4} \min\left\{\beta_{j,\bm{\chi}}: j \in [m]\right\},
\end{equation}
then from Lemma~\ref{main_3_lem_1} and Lemma~\ref{main_3_lem_4}, we have, for each $k \in [m]$,
\begin{equation}
\max\left\{1 - G_{k,[m] \setminus S_{k},\bm{\chi}}\left(\nw_{1,\ell,\bm{\chi}}^{(n)}: \ell \in [m] \setminus S_{k}\right), 1 - G_{k,[m] \setminus S_{k},\bm{\eta}}\left(\nw_{1,\ell,\bm{\eta}}^{(n)}: \ell \in [m] \setminus S_{k}\right)\right\} \leqslant 1 - \frac{1}{2} \beta_{k,\bm{\chi}}.\nonumber
\end{equation}
Setting $\beta = \max\left\{1 - \frac{\beta_{j,\bm{\chi}}}{2}: j \in [m]\right\}$, we use \eqref{main_3_lem_5_eq_1} of Lemma~\ref{main_3_lem_5} and \eqref{dist_cond_main_3_part_1} to write
\begin{multline}\label{main_3_part_1_upper_bound_3}
\left|F_{N,j,\bm{\chi}}\left(\bnw_{1,\bm{\chi}}^{(n)}\right) - F_{N,j,\bm{\eta}}\left(\bnw_{1,\bm{\eta}}^{(n)}\right)\right| \leqslant 2\epsilon_{1} + S_{\beta} \max_{k \in [m]}\Big|G_{k,[m] \setminus S_{k},\bm{\chi}}\left(\nw_{1,\ell,\bm{\chi}}^{(n)}: \ell \in [m] \setminus S_{k}\right) \\- G_{k,[m] \setminus S_{k},\bm{\eta}}\left(\nw_{1,\ell,\bm{\eta}}^{(n)}: \ell \in [m] \setminus S_{k}\right)\Big| \leqslant 2\epsilon_{1} + S_{\beta} \epsilon_{3}.
\end{multline}
When $\mathcal{L}_{\bm{\chi}}$ is in $\mathcal{D}_{3}$, for any $\mathcal{L}_{\bm{\eta}}$ we have, using \eqref{main_3_lem_5_eq_2} of Lemma~\ref{main_3_lem_5}:
\begin{multline}\label{main_3_part_1_upper_bound_4}
\left|F_{N,j,\bm{\chi}}\left(\bnw_{1,\bm{\chi}}^{(n)}\right) - F_{N,j,\bm{\eta}}\left(\bnw_{1,\bm{\eta}}^{(n)}\right)\right| \leqslant 2\epsilon_{1} + \mathcal{E}_{j,S_{j},\bm{\chi}} \max_{k \in [m]}\Big|G_{k,[m] \setminus S_{k},\bm{\chi}}\left(\nw_{1,\ell,\bm{\chi}}^{(n)}: \ell \in [m] \setminus S_{k}\right) \\- G_{k,[m] \setminus S_{k},\bm{\eta}}\left(\nw_{1,\ell,\bm{\eta}}^{(n)}: \ell \in [m] \setminus S_{k}\right)\Big| \leqslant 2\epsilon_{1} + \mathcal{E}_{j,S_{j},\bm{\chi}} \epsilon_{3}.
\end{multline}
Note that $S_{\alpha}$, $S_{\beta}$, $\mathcal{E}_{\bm{\chi}}$ and $\mathcal{E}_{j,S_{j},\bm{\chi}}$ are constants dependent only on $\bm{\chi}$. By the induction hypothesis, given $\epsilon_{2} > 0$, there exists $\epsilon_{1} > 0$ such that
\begin{equation}\label{main_3_part_1_upper_bound_5}
d_{0}\left(\mathcal{L}_{\bm{\chi}}, \mathcal{L}_{\bm{\eta}}\right) \leqslant \epsilon_{1} \implies \max_{j \in [m]} \left|\nw_{1,j,\bm{\chi}}^{(n)} - \nw_{1,j,\bm{\eta}}^{(n)}\right| \leqslant \epsilon_{2}.
\end{equation}
Combining \eqref{main_3_part_1_upper_bound_1}, \eqref{main_3_part_1_upper_bound_2}, \eqref{epsilon_{3}_defn}, \eqref{main_3_part_1_upper_bound_3}, \eqref{main_3_part_1_upper_bound_4} and \eqref{main_3_part_1_upper_bound_5}, we get:
\begin{itemize}
\item For $\mathcal{L}_{\bm{\chi}} \in \mathcal{D}_{1} \cap \mathcal{D}_{2}$ and $d_{0}\left(\mathcal{L}_{\bm{\chi}}, \mathcal{L}_{\bm{\eta}}\right) \leqslant \epsilon_{1}$ with $\epsilon_{1}$ satisfying \eqref{dist_laws_main_3_part_1}, \eqref{dist_cond_main_3_part_1} and \eqref{main_3_part_1_upper_bound_5}, we have
\begin{equation}\label{case_1_main_3_part_1}
\left|F_{N,j,\bm{\chi}}\left(\bnw_{1,\bm{\chi}}^{(n)}\right) - F_{N,j,\bm{\eta}}\left(\bnw_{1,\bm{\eta}}^{(n)}\right)\right| \leqslant 2\epsilon_{1} + S_{\beta} \left(2\epsilon_{1} + S_{\alpha}\epsilon_{2}\right).
\end{equation}
\item For $\mathcal{L}_{\bm{\chi}} \in \mathcal{D}_{1} \cap \mathcal{D}_{3}$ and $d_{0}\left(\mathcal{L}_{\bm{\chi}}, \mathcal{L}_{\bm{\eta}}\right) \leqslant \epsilon_{1}$ with $\epsilon_{1}$ satisfying \eqref{dist_laws_main_3_part_1} and \eqref{main_3_part_1_upper_bound_5}, we have
\begin{equation}\label{case_1_main_3_part_2}
\left|F_{N,j,\bm{\chi}}\left(\bnw_{1,\bm{\chi}}^{(n)}\right) - F_{N,j,\bm{\eta}}\left(\bnw_{1,\bm{\eta}}^{(n)}\right)\right| \leqslant 2\epsilon_{1} + \mathcal{E}_{j,S_{j},\bm{\chi}} (2\epsilon_{1} + S_{\alpha}\epsilon_{2}).
\end{equation}
\item For $\mathcal{L}_{\bm{\chi}} \in \mathcal{D}_{4} \cap \mathcal{D}_{2}$ and $d_{0}\left(\mathcal{L}_{\bm{\chi}}, \mathcal{L}_{\bm{\eta}}\right) \leqslant \epsilon_{1}$ with $\epsilon_{1}$ satisfying \eqref{dist_cond_main_3_part_1} and \eqref{main_3_part_1_upper_bound_5}, we have
\begin{equation}\label{case_1_main_3_part_3}
\left|F_{N,j,\bm{\chi}}\left(\bnw_{1,\bm{\chi}}^{(n)}\right) - F_{N,j,\bm{\eta}}\left(\bnw_{1,\bm{\eta}}^{(n)}\right)\right| \leqslant 2\epsilon_{1} + S_{\beta} \left(2\epsilon_{1} + \mathcal{E}_{\bm{\chi}}\epsilon_{2}\right).
\end{equation}
\item For $\mathcal{L}_{\bm{\chi}} \in \mathcal{D}_{4} \cap \mathcal{D}_{3}$ and $d_{0}\left(\mathcal{L}_{\bm{\chi}}, \mathcal{L}_{\bm{\eta}}\right) \leqslant \epsilon_{1}$ with $\epsilon_{1}$ satisfying \eqref{main_3_part_1_upper_bound_5}, we have
\begin{equation}\label{case_1_main_3_part_4}
\left|F_{N,j,\bm{\chi}}\left(\bnw_{1,\bm{\chi}}^{(n)}\right) - F_{N,j,\bm{\eta}}\left(\bnw_{1,\bm{\eta}}^{(n)}\right)\right| \leqslant 2\epsilon_{1} + \mathcal{E}_{j,S_{j},\bm{\chi}} \left(2\epsilon_{1} + \mathcal{E}_{\bm{\chi}}\epsilon_{2}\right).
\end{equation}
\end{itemize}
From \eqref{main_3_part_1_upper_bound_5}, choosing $\epsilon_{1}$ arbitrarily small, we can make $\epsilon_{2}$ arbitrarily small, and thereby make the upper bound in each of \eqref{case_1_main_3_part_1}, \eqref{case_1_main_3_part_2}, \eqref{case_1_main_3_part_3} and \eqref{case_1_main_3_part_4} arbitrarily small. This concludes the proof.
The proof of \ref{main_3_part_2} is almost entirely the same as that of \ref{main_3_part_1}. We only point out that, when $\mathcal{L}_{\bm{\chi}} \in \mathcal{C}_{1}$, we bound $\mw_{1,\ell,\bm{\chi}}^{(n)}$ using Lemma~\ref{main_3_lem_4}. When $\mathcal{L}_{\bm{\chi}} \in \mathcal{C}_{2}$, Lemma~\ref{main_3_lem_4} and the recursions in \S\ref{subsec:misere_recursions} yield
\begin{multline}
1 - G_{k,[m] \setminus S_{k},\bm{\chi}}\left(\mw_{1,\ell,\bm{\chi}}^{(n)}: \ell \in [m] \setminus S_{k}\right) + \beta_{k,\bm{\chi}} = 1 - \ml_{2,k}^{(n+1)}\\ \leqslant 1 - G_{k,[m] \setminus S_{k},\bm{\chi}}\left(\alpha_{\ell,\bm{\chi}}: \ell \in [m] \setminus S_{k}\right) + \beta_{k,\bm{\chi}} < 1.\nonumber
\end{multline}
We skip the proof of \ref{main_3_part_3} entirely, as the argument follows exactly the same lines as those of \ref{main_3_part_1} and \ref{main_3_part_2}.
When $\nd_{1,j,\bm{\chi}} = 0$, we have $\nw_{1,j,\bm{\chi}} = 1 - \nl_{1,j,\bm{\chi}}$. By \ref{main_3_part_1}, we know that $\nl_{1,j,\bm{\chi}}$ is lower semicontinuous, so that $1 - \nl_{1,j,\bm{\chi}}$ is upper semicontinuous in $\mathcal{L}_{\bm{\chi}}$. This makes $\nw_{1,j,\bm{\chi}}$, again by \ref{main_3_part_1}, both a lower and an upper semicontinuous function. Hence $\nw_{1,j,\bm{\chi}}$ is a continuous function of $\mathcal{L}_{\bm{\chi}}$.
\end{proof}
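The continuity phenomenon of Theorem~\ref{thm:main_3} can be illustrated numerically in a schematic single-type setting. The recursion below is a toy stand-in for the two-step recursions of this paper (Poisson offspring pgf $G(s) = e^{\lambda(s-1)}$, a hypothetical choice), not the exact recursion used above:

```python
import math

# Schematic single-type analogue of a two-step recursion:
#   w_{n+2} = 1 - G(1 - G(w_n)),  w_0 = 0,  G(s) = exp(lam*(s-1)).
# The two-step map is increasing in w, so the iterates increase to a fixed
# point; a small perturbation of lam moves the limit only slightly, which is
# the kind of continuity statement proved in this section.
def limit(lam, steps=200):
    G = lambda s: math.exp(lam * (s - 1.0))
    w = 0.0
    ws = [w]
    for _ in range(steps):
        w = 1.0 - G(1.0 - G(w))
        ws.append(w)
    assert all(a <= b + 1e-15 for a, b in zip(ws, ws[1:]))  # monotone increase
    return w

w1, w2 = limit(1.7), limit(1.7 + 1e-6)
assert abs(w1 - w2) < 1e-4   # limit varies continuously in the offspring law
```

The stepping by two mirrors the passage from $\bnw_{1}^{(n)}$ to $\bnw_{1}^{(n+2)}$ in the proof: a single player-step reverses the order, while the composition of two steps is monotone.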
\section{Proof of Theorem~\ref{thm:main_4}}\label{sec:main_4_proof}
We use a \emph{forcing strategy} somewhat similar to that of [\cite{holroyd_martin}, Proposition 13]. The root $\phi$ of $\mathcal{T}$ is the initial vertex, and Escaper plays the first round. Let $\mathcal{T}'$ be the subtree comprising paths $\mathcal{P} = (u_{0}, u_{1}, u_{2}, \ldots)$ that satisfy the following conditions (recall $f$ from Theorem~\ref{thm:main_4}):
\begin{itemize}
\item $u_{0} = \phi$, i.e.\ the root belongs to each such path,
\item $\sigma(u_{2n+1}) \in [m] \setminus S_{\sigma(u_{2n})}$ and $\sigma(u_{2n+2}) \in S_{\sigma(u_{2n+1})}$ for every $n \in \mathbb{N}_{0}$,
\item each $u_{2n+1}$ has precisely one child (\emph{viz}.\ $u_{2n+2}$) whose colour is in $S_{\sigma(u_{2n+1})}$, and $\sigma(u_{2n+2}) = f\left(\sigma(u_{2n+1})\right)$ (note: $u_{2n+1}$ can have any number of children with colours in $[m] \setminus S_{\sigma(u_{2n+1})}$).
\end{itemize}
If $\mathcal{T}'$ is infinite, and Escaper keeps the game confined to $\mathcal{T}'$, which she can, then each player is \emph{always} able to make a move, leading to a win for Escaper. We remove all the odd-leveled vertices from $\mathcal{T}'$, resulting in another rooted tree $\mathcal{T}''$, so that $\mathcal{T}'$ is infinite iff $\mathcal{T}''$ is infinite. We now seek a criterion that guarantees the survival of $\mathcal{T}''$ with positive probability.
Suppose $u_{2n} \in V\left(\mathcal{T}''\right)$ with $\sigma(u_{2n}) = i$ for some $i \in [m]$, and $v$ is a child of $u_{2n}$ in $\mathcal{T}$ with $\sigma(v) = k$ for some $k \in [m] \setminus S_{i}$. Call $v$ \emph{special} if, of all the children of $v$, there is precisely one, say $w$, whose colour belongs to the set $S_{k}$, and $\sigma(w) = f(k)$. The event that $v$ is special is independent of the subtrees $\mathcal{T}(v')$ of all other children $v'$ of $u_{2n}$, and the probability of this event is given by $\gamma_{k,f(k)}$. Thus, for each $k \in [m] \setminus S_{i}$, the number of special children $v$ of $u_{2n}$ with $\sigma(v) = k$ is given by $\bin\left(X_{u_{2n},k}, \gamma_{k,f(k)}\right)$. The total number of children of colour $j$ of $u_{2n}$ in $\mathcal{T}''$ is distributed as $\sum_{k \in [m] \setminus S_{i}: f(k) = j} \bin\left(X_{u_{2n},k}, \gamma_{k,j}\right)$, which yields
\begin{equation}
m''_{i,j} = \sum_{k \in [m] \setminus S_{i}: f(k) = j} \E\left[\bin\left(X_{u_{2n},k}, \gamma_{k,j}\right)\big|\sigma(u_{2n}) = i\right] = \sum_{k \in [m] \setminus S_{i}: f(k) = j} m_{i,k} \gamma_{k,j}.\nonumber
\end{equation}
By [\cite{athreya_ney}, Chapter V, \S3, Theorem 2], we conclude that if, for some choice of $f$, the largest eigenvalue of the mean matrix $M'' = \left(\left(m''_{i,j}\right)\right)_{i,j \in [m]}$ is strictly bigger than $1$, then $\mathcal{T}''$ survives with positive probability, thus implying that $\eew_{\sigma(\phi)} > 0$.
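As a concrete illustration (with hypothetical toy values for $m_{i,k}$ and $\gamma_{k,j}$, ignoring the colour-set bookkeeping of the $S_{i}$), the supercriticality criterion can be checked numerically:

```python
import numpy as np

# Build the thinned mean matrix m''_{i,j} = sum_{k: f(k)=j} m_{i,k} * gamma_{k,j}
# for two colours, and check whether its largest eigenvalue exceeds 1.
m = np.array([[0.0, 3.0],
              [2.5, 0.0]])          # mean offspring counts m_{i,k} (toy values)
gamma = np.array([[0.7, 0.0],
                  [0.0, 0.6]])      # gamma_{k,j}: P(a colour-k child is special
                                    # with its unique S_k-child of colour j)
f = {0: 0, 1: 1}                    # a choice of the map f with f(k) = j

m2 = np.zeros((2, 2))
for i in range(2):
    for k in range(2):
        j = f[k]
        m2[i, j] += m[i, k] * gamma[k, j]

rho = max(abs(np.linalg.eigvals(m2)))
print(rho > 1)                      # True here: T'' is supercritical
```

Here $\rho \approx 1.77 > 1$, so for these toy parameters the pruned tree $\mathcal{T}''$ survives with positive probability.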
Assume $\eew_{j} > 0$ for all $j \in [m]$, and $\sigma(\phi) = i$. Since $\alpha_{i} < 1$, there exist non-negative integers $n_{k}$ for every $k \in S_{i}$, not all $0$ simultaneously, such that $\Prob\left[X_{\phi,k} = n_{k} \text{ for all } k \in S_{i}\big|\sigma(\phi) = i\right] > 0$. Name the children of $\phi$ of colour $k$ as $v_{k,1}, \ldots, v_{k,n_{k}}$. The probability that for every $k \in S_{i}$, every $v_{k,t}$ is in $\EEW_{k}$ for $1 \leqslant t \leqslant n_{k}$, equals $\Prob\left[X_{\phi,k} = n_{k} \text{ for all } k \in S_{i}\big|\sigma(\phi) = i\right] \prod_{k \in S_{i}}\left(\eew_{k}\right)^{n_{k}}$, which by our hypothesis is strictly positive. If $\phi$ is the initial vertex and Stopper plays the first round, Stopper is forced to move the token to some child $v_{k,t}$ of $\phi$ for some $k \in S_{i}$, thus allowing Escaper to win. This yields $\esl_{i} > 0$. A similar argument yields the converse.
1412.7517
\section{Introduction} \label{sec:intro}
According to the definition in the Handbook \cite{FouqueLangsam2013aa}, systemic risk is the risk of a disruption of the proper functioning of the market that results in a reduction of the growth of the world's Gross Domestic Product (GDP). In economics, a system such as a market can be described by a game, i.e.\ a set of agents endowed with strategies (and possibly other attributes) that they play to maximize their utility functions. In a game, each agent's utility depends on the other agents' strategies. The proper functioning of a market is associated with a Nash equilibrium of this game, i.e.\ a set of strategies such that no agent can improve his utility by changing his own strategy while the other agents' strategies are held fixed. At the market scale, the number and diversity of agents is huge, and it is more effective to use games with a continuum of players; such games have been widely explored \cite{Aumann1966aa,Mas-Colell1984aa,Schmeidler1973aa,ShapiroShapley1978aa}.
To study systemic risk and its induced catastrophic changes in the economy, it is of primary importance to incorporate the time dimension into the description of the system. A possible framework to achieve this is a control-theoretic approach, where the object of interest is not a single Nash equilibrium, but a whole set of optimal trajectories of the agents in the strategy space. Such an approach has been formalized in the seminal work of \cite{LasryLions2007aa} and popularized under the name of 'Mean-Field Game (MFG)'. It has given rise to an abundant literature, among which (to cite only a few) \cite{BlanchetCarlier2014aa,BensoussanFrehseYam2014aa,BensoussanFrehseYam2013aa,GueantLasryLions2011aa,Cardaliaguet2010aa}. The MFG approach offers a promising route to investigate systemic risk. For instance, in the recent work \cite{CarmonaFouqueSun2013aa}, the MFG framework has been proposed to model systemic risk associated with inter-bank borrowing and lending strategies.
However, the fact that the agents are able to optimize their trajectory over a large time horizon, in the spirit of physical particles subjected to the least-action principle, can be seen as somewhat unrealistic. A related but different approach has been proposed in \cite{DegondLiuRinghofer2012aa} and builds on earlier work on pedestrian dynamics \cite{DegondAppert-RollandMoussaid2013aa}. It consists in assuming that agents perform the so-called 'Best-Reply Strategy' (BRS): they determine a local (in time) direction of steepest descent of the cost function (i.e. minus the utility function) and evolve their strategy variable in this direction. This approach has been applied to models describing the evolution of the wealth distribution among interacting economies, in the case of conservative \cite{DegondLiuRinghofer2014ac} and nonconservative economies \cite{DegondLiuRinghofer2014aa}. However, the link between MFG and BRS remained to be elaborated, and this is the object of the present paper. We show that the BRS can be obtained as a MFG over a short interval of time which recedes as time evolves. This type of control is known as Model Predictive Control (MPC) or Receding Horizon Control. The assumption that the agents are able to optimize their trajectories in the strategy space over a small but finite interval of time is certainly reasonable, and this MPC strategy can be viewed as an improvement over the BRS and as a compromise between the BRS and a fully optimal but fairly unrealistic MFG strategy. We believe that MPC can lead to a promising route to model systemic risk. In this paper, though, we propose a general framework to connect BRS to MFG through MPC and defer its application to specific models of systemic risk to future work.
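To fix ideas, here is a minimal sketch of a BRS-type dynamics, with a hypothetical quadratic cost coupling each agent to the population mean; it illustrates the local steepest-descent idea only, and is not a model taken from the works cited above:

```python
import random

# Best-reply strategy sketch: each agent moves its strategy x_i a small step
# down the gradient of its own (hypothetical) cost, which couples agents
# through the population mean.
random.seed(1)
N = 50
x = [random.uniform(-1.0, 1.0) for _ in range(N)]
dt = 0.05

def grad_cost(xi, mean):
    # cost_i ~ (x_i - mean)^2 / 2, gradient taken with the mean frozen
    return xi - mean

for _ in range(400):
    mean = sum(x) / N
    x = [xi - dt * grad_cost(xi, mean) for xi in x]   # local steepest descent

spread = max(x) - min(x)
assert spread < 1e-3   # agents nearly reach consensus at the mean strategy
```

Each update uses only the current state of the population: there is no anticipation of future states, in contrast to a full MFG optimization.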
Recently, many contributions on mean-field games and control mechanisms for particle systems have been made. For more details on mean-field games we refer to \cite{BlanchetCarlier2014aa,BensoussanFrehseYam2014aa,BensoussanFrehseYam2013aa,GueantLasryLions2011aa,Cardaliaguet2010aa,LasryLions2007aa}. Among the
many possible mean-field games to consider, we are interested in differential (Nash) games of possibly infinitely many particles (also called players).
Most of the literature in this respect
treats theoretical and numerical approaches for solving the Hamilton--Jacobi--Bellman (HJB) equation for the value
function of the underlying game, see e.g. \cite{Cardaliaguet2010aa} for an overview.
Solving the HJB equation allows one to determine the optimal control for the particle game. However, the associated
HJB equation poses several theoretical and numerical difficulties, among which the need to solve it backwards in time is the most severe one, at least from
a numerical perspective. Therefore, model predictive control (MPC) concepts at the level of particles or of the associated kinetic equation have recently been proposed \cite{DegondAppert-RollandMoussaid2013aa,CouzinKrauseFranks2005aa,FornasierSolombrino2013aa,CaponigroFornasierPiccoli2013aa,AlbiHertyPareschi2014aa,DegondLiuRinghofer2014aa,DegondLiuRinghofer2014ac}.
While MPC is well established for finite--dimensional problems \cite{GrunePannek2011ab,Sontag1998aa,MayneMichalska1990aa}, also known in the engineering literature as receding horizon control, contributions to systems of infinitely many interacting particles and/or game theoretic questions related to infinitely many particles are rather recent. It has been shown that MPC concepts applied to problems of infinitely many interacting particles have the advantage of allowing for efficient computation \cite{AlbiHertyPareschi2014aa,DegondLiuRinghofer2014ac}. However, by construction MPC only leads to suboptimal solutions; see for example \cite{HertySteffensenPareschi2014aa} for a comparison in the case of a simple opinion formation model. Also, the existing approaches, mostly for alignment models, do not necessarily treat game theoretic concepts but focus,
for example, on sparse global controls \cite{CouzinKrauseFranks2005aa,FornasierSolombrino2013aa,CaponigroFornasierPiccoli2013aa}, time--scale separation and local mean--field controls
\cite{DegondLiuRinghofer2014aa}, called the best--reply strategy, or MPC on very short time--scales \cite{AlbiHertyPareschi2014aa}, called instantaneous control.
Typically the MPC strategy is obtained by solving an auxiliary problem
(implicitly or explicitly), and the resulting expression for the control is substituted back into
the original dynamics, leading to a possibly modified and new dynamics. Then, a meanfield description is derived using a Boltzmann-type or macroscopic approximation.
This requires the action of the control to be {\em local} in time and independent of future states of the system, contrary to solutions of the HJB equation.
Usually, in MPC approaches, independent optimal control problems are solved in which particles do not anticipate the optimal control choices
of other particles, contrary to meanfield games \cite{LasryLions2007aa}.
\par
In this paper we contribute to the recent discussion with formal computations establishing a link between meanfield games and the MPC concepts
proposed on the level of particle games and associated kinetic equations. The relationship we plan to establish is highlighted
in Figure \ref{fig1}. More precisely, we show that the MPC concept of the best--reply strategy \cite{DegondLiuRinghofer2014ac}
may, at least formally, be derived from a meanfield games context.
\begin{figure}[htb]\center
\includegraphics[width=.75\textwidth]{relation.jpg}
\caption{ Relation between MPC concepts and meanfield games. The starting point is the finite--dimensional differential game with $N$ players in the top left part (Section \ref{sec:setting}).
The connection for $N\to\infty$ of these games has been investigated for example in \cite{LasryLions2007aa,Cardaliaguet2010aa}
and leads to the HJB equation for meanfield games in the bottom left part of the figure (Section \ref{sec:meanfield}). Applying MPC concepts to the differential game,
as for example the best-reply strategy, we obtain a controlled dynamics for $N$ particles in the top right part \cite{DegondLiuRinghofer2014ac} (Section \ref{top-left-to-top-right}).
The meanfield limit for $N\to \infty$
leads to a kinetic equation in the bottom right part (Section \ref{top-right-to-bottom-right}). Those results are summarized in Lemma \ref{lemma1}.
This paper also investigates the link between the meanfield game and the kinetic equation indicated
by a question mark. The result is summarized in Proposition \ref{lemma2}.
}
\label{fig1}
\end{figure}
\section{Setting of the problem }\label{sec:setting}
\newcommand{\mathbb{R}}{\mathbb{R}}
We consider $N$ particles labeled by $i=1,\dots,N$ where each particle has a state
$x_i \in \mathbb{R}.$ We denote by $X=(x_i)_{i=1}^N$ the state of all particles
and by $X_{-i}=(x_j)_{j=1, j\not =i }^N$ the states of all particles except $i.$ Further,
we assume that each particle's dynamics is governed by a smooth function $f_i:\mathbb{R}^N\to \mathbb{R}$
depending on the state $X$ and we assume that each particle
may control its dynamics by a control $u_i.$ The dynamics for the particles $i=1,\dots,N$
is then given by
\begin{equation}
\label{eq:full dynamics}
\frac{d}{dt} x_i(t) = f_i(X(t)) + u_i(t), \; i=1,\dots,N,
\end{equation}
and initial conditions
\begin{equation}\label{eq:IC}
x_i(0)=\bar{x_i}.
\end{equation}
We will drop the time--dependence of the variables whenever the intention is clear.
Examples of models of the type \eqref{eq:full dynamics}
are alignment models in a socio--ecological context, microscopic traffic flow models, production models, and many more; see e.g.
the recent
surveys \cite{MotschTadmor2013aa,NaldiPareschiToscani2010aa,VicsekZafeiris2012aa}.
In recent contributions to control theory for equation \eqref{eq:full dynamics}
the case of a {\em single} control variable $u_i \equiv u$ for all $i$ has been considered \cite{AlbiHertyPareschi2014aa,AlbiPareschiZanella2014aa,CaponigroFornasierPiccoli2013aa}.
Here, we allow each particle to choose its own control strategy $u_i$. We suppose a control horizon $T>0$ is given.
As in \cite{Cardaliaguet2010aa} we suppose that particle $i$ minimizes its own objective functional
and determines therefore the optimal $u_i^{*}$ by
\begin{equation}
\label{eq:general min particle}
u^{*}_i(\cdot) = \mbox{ argmin }_{ u_i: [0,T] \to \mathbb{R} } \int_0^T \left( \frac{ \alpha_i(s) }2 u_i^2(s) + h_i(X(s)) \right) ds, \; i=1,\dots,N.
\end{equation}
Herein, $X(s)$ is the solution to \eqref{eq:full dynamics} and equation \eqref{eq:IC}. The optimal control and the corresponding
optimal trajectory will from now on be denoted with superscript $\ast.$
The minimization is performed over all sufficiently smooth functions $u_i:[0,T] \to \mathbb{R}$. Similar to \cite{LasryLions2007aa}, there is no restriction on the control $u_i$.
The objective $h_i:\mathbb{R}^{N}\to \mathbb{R}$ related
to particle $i$ is also supposed to be sufficiently smooth. The weights of the control fulfill $\alpha_i(t) >0$ for all $i$ and $t\geq 0$, and under additional conditions convexity
of each optimization problem \eqref{eq:general min particle} is guaranteed. As seen in Section \ref{sec:meanfield}, a challenge in solving the problem \eqref{eq:general min particle}
lies in the fact that the associated HJB equation has to be solved backwards in time. Contrary to \cite{AlbiHertyPareschi2014aa,CaponigroFornasierPiccoli2013aa},
problem \eqref{eq:general min particle} is in fact a set of $N$ optimization problems that need to be solved {\em simultaneously} due to the dependence
of $X$ on $U=(u_i)_{i=1}^{N}$ through equation \eqref{eq:full dynamics}. This implies that each particle $i$ {\em anticipates the optimal} strategy
of all other particles $U^{*}_{-i}$ when determining its optimal control $u^{*}_i$. Obviously, the problem \eqref{eq:general min particle} simplifies
when each particle $i$ {\em anticipates an a priori fixed strategy} of all other particles $U_{-i}.$ Then, the problem \eqref{eq:general min particle}
decouples (in $i$) and the optimal strategy $u_i$ is determined independently of the optimal strategies $U^{*}_{-i}.$ It has been argued that
this is the case for reactions in pedestrian motion \cite{DegondLiuRinghofer2014aa}.
In fact, therein the following best--reply strategy has been proposed as a substitute for problem \eqref{eq:general min particle}:
\begin{equation}\label{eq:best reply}
u_i(t) = - \partial_{x_i} h_i(X(t)), \; t \in [0,T].
\end{equation}
As in the meanfield theory presented in \cite{LasryLions2007aa,Cardaliaguet2010aa} we need to impose assumptions {\bf (A)} on $f_i(X)$
and $h_i(X)$ before passing to the limit $N\to\infty.$ The assumption {\bf (B)} will be used in Section \ref{sec:meanfield}.
\begin{itemize}
\item[{\bf (A)}] For all $i=1,\dots,N$ and any permutation $\sigma_i : \{1,\dots,N\} \backslash \{ i \} \to \{1,\dots,N\} \backslash \{ i \} $
we have
$$f_i(X) = f(x_i,X_{-i}) \mbox{ and } f(x_i,X_{-i}) = f(x_i, X_{\sigma_i} )$$
for a smooth function $f:\mathbb{R} \times \mathbb{R}^{N-1} \to \mathbb{R}$ and where $X_{\sigma_i} = (x_{\sigma_i(j)})_{j=1, j\not = i}^{N}.$
Further we assume that for each $i$ the function $h_i(X)$ enjoys the same properties as stated for $f_i(X).$
\item[ {\bf (B)}] We assume that $\alpha_i(t) = \alpha(t)$ for all $t \in [0,T]$ and all $i=1,\dots,N.$
\end{itemize}
Under additional growth conditions sequences of symmetric functions in many variables
have a limit in the space of functions defined on probability measures, see e.g.
\cite[Theorem 2.1]{Cardaliaguet2010aa}, \cite[Theorem 4.1]{BlanchetCarlier2014aa}.
The corresponding result is recalled as Theorem~\ref{Theorem2.1Card} in the appendix
for convenience.
\par
To exemplify computations later
on we will use a basic wealth model
\cite{BouchaudMezard2000aa,CordierPareschiToscani2005aa,DegondLiuRinghofer2012aa,DegondLiuRinghofer2014aa}
where
\begin{equation}
\label{eq:ex:opinion}
f_i(X) = \frac{1}N \sum\limits_{j=1}^N P(x_i,x_j) ( x_j - x_i)
\end{equation}
for some bounded, non--negative and smooth function $P(x,\tilde{x}).$ Clearly, $f$ fulfills {\bf(A)}.
As objective function we use a measure depending only on aggregated quantities as in \cite{DegondLiuRinghofer2014ac}.
An example fulfilling {\bf (A)} is
\begin{equation}
\label{eq:ex:objective}
h_i(X) = \frac{1}{N-1} \sum\limits_{j=1, j\not = i}^{N} \phi(x_i,x_j)
\end{equation}
for some smooth function $\phi:\mathbb{R}\times \mathbb{R} \to\mathbb{R}.$
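The exchangeability required by assumption {\bf (A)} for the toy model \eqref{eq:ex:opinion} and \eqref{eq:ex:objective} can also be checked numerically. The following sketch uses the hypothetical choices $P(x,y)=1/(1+(x-y)^2)$ and $\phi(x,y)=(x-y)^2/2$ (any bounded, smooth, non-negative $P$ and smooth $\phi$ would do) and verifies that $f_i$ and $h_i$ are invariant under permutations of $X_{-i}$:

```python
import numpy as np

# Hypothetical instances of the toy wealth model: a bounded, smooth,
# non-negative kernel P and a smooth pairwise objective phi.
def P(x, y):
    return 1.0 / (1.0 + (x - y) ** 2)

def phi(x, y):
    return 0.5 * (x - y) ** 2

def f_i(i, X):
    # f_i(X) = (1/N) * sum_j P(x_i, x_j) * (x_j - x_i); the j = i term vanishes
    return np.sum(P(X[i], X) * (X - X[i])) / len(X)

def h_i(i, X):
    # h_i(X) = 1/(N-1) * sum_{j != i} phi(x_i, x_j)
    mask = np.arange(len(X)) != i
    return np.sum(phi(X[i], X[mask])) / (len(X) - 1)

rng = np.random.default_rng(0)
X = rng.normal(size=8)

# Assumption (A): f_i and h_i depend on X_{-i} only through its empirical
# distribution, so permuting all entries except x_i must leave them unchanged.
perm = np.concatenate(([0], rng.permutation(np.arange(1, 8))))
Xp = X[perm]
print(abs(f_i(0, X) - f_i(0, Xp)), abs(h_i(0, X) - h_i(0, Xp)))  # both ~0
```

The same check applies verbatim to any other symmetric choice of $P$ and $\phi$.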
\par
Finally, we introduce some additional notation. We denote by $\mathcal{P}(\mathbb{R})$ the space of Borel probability measures over $\mathbb{R}$.
The empirical discrete probability measure $m^{N} \in \mathcal{P}(\mathbb{R})$ concentrated at the positions $X \in \mathbb{R}^N$ is denoted by
$$ m^{N}_X = \frac{1}N \sum\limits_{i=1}^N \delta(x-x_i).$$
We also use this notation if $X$ is time dependent, i.e., $X=X(t)$, leading to the family of probability measures $m^{N}_X = m^{N}_X(t)=\frac{1}N \sum\limits_{i=1}^N \delta( x- x_i(t)).$
If the intention is clear we do not explicitly denote the dependence on $x$ of the measure $m^{N}_X$ (and on time $t$ if $X=X(t)$ is time-dependent).
\par
Based on assumption {\bf (A)} we will frequently use \cite[Theorem 2.1]{Cardaliaguet2010aa}.
This theorem is repeated for convenience in the appendix as Theorem \ref{Theorem2.1Card}: let $g:\mathbb{R}^{N}\to \mathbb{R}$ be symmetric, i.e., $g(X)=g(X_\sigma)$
where $X_\sigma=(x_{\sigma(i)})_{i=1}^{N}$ for any permutation $\sigma: \{ 1,\dots, N\} \to \{1,\dots,N\}$. We may extend $g$ to a function $g^{N}:\mathcal{P}(\mathbb{R}) \to \mathbb{R}$ such that
$g^{N}(m^{N}_X) = g(X).$ Under the assumptions given in Theorem \ref{Theorem2.1Card} the
family $(g^{N})_{N=1}^\infty$ is equicontinuous and there exists a limit ${\bf g}:\mathcal{P}(\mathbb{R})\to \mathbb{R}$ such that, up to a subsequence,
$\lim\limits_{N\to \infty} \sup_{X} | g(X) - {\bf g}( m^{N}_X ) | = 0.$
The result can be extended to a family of functions $f$ fulfilling assumption {\bf (A)}, see \cite[Theorem 4.1]{BlanchetCarlier2014aa}.
We obtain an equicontinuous family $(f^{N})_{N=1}^\infty$ with $f^{N}:\mathbb{R} \times \mathcal{P}(\mathbb{R}) \to \mathbb{R}$ such that
$f^{N}(\xi, m^{N-1}_{X_{-i}}) = f(\xi, X_{-i})$ with limit $ {\bf f}:\mathbb{R} \times \mathcal{P}(\mathbb{R}) \to \mathbb{R}$ and such that for any $i \in \{ 1,\dots,N \}$ and a compact set $Q \subset \mathbb{R}^{N-1}$
we have for any fixed $R>0$
\begin{equation}
\label{def:con} \lim\limits_{N\to \infty} \sup_{ | \xi | < R, X_{-i} \subset Q } | f(\xi,X_{-i}) - {\bf f}(\xi,m^{N}_X) | = 0.
\end{equation}
In equation \eqref{def:con} we have the empirical measure on $N$ points in the argument of ${\bf f}$ even though $f^{N}$ is defined on the empirical measure $m^{N-1}_{X_{-i}}$, i.e.,
we have $$\lim\limits_{N\to \infty} \sup_{ | \xi | < R, X_{-i} \subset Q } | f^{N}(\xi,m^{N-1}_{X_{-i}}) - {\bf f}(\xi,m^{N}_X) | =0.$$
This is true since in the definition of $f^{N}$ the contribution of the empirical measure is $\frac{1}N$ for each point $x_i.$ More details are given in \cite{BlanchetCarlier2014aa}
and \cite[Section 7]{Cardaliaguet2010aa}. Since in the following it is often important to highlight the dependence on the empirical measure $m^{N}_X$
we introduce the following notation: we write
$$ f(\xi,X_{-i}) = f^{N}(\xi, m^{N-1}_{X_{-i}} ) \sim {\bf f}(\xi,m^{N}_X), \; $$
whenever equation \eqref{def:con} holds true.
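The relation \eqref{def:con} can be illustrated numerically for the toy interaction \eqref{eq:ex:opinion}: the gap between $f^{N}(\xi,m^{N-1}_{X_{-i}})$ and ${\bf f}(\xi,m^{N}_X)$ is exactly one $\frac1N$-weighted term and hence of order $1/N$. A minimal sketch, assuming the hypothetical kernel $P(x,y)=1/(1+(x-y)^2)$, for which $|P(x,y)(y-x)| \le 1/2$:

```python
import numpy as np

# Hypothetical bounded kernel: |P(x,y) * (y - x)| = |d|/(1 + d^2) <= 1/2.
def P(x, y):
    return 1.0 / (1.0 + (x - y) ** 2)

def f_N(xi, X_minus_i):
    # f^N(xi, m^{N-1}_{X_{-i}}) = ((N-1)/N) * int P(xi,y)(y - xi) dm^{N-1}
    N = len(X_minus_i) + 1
    return (N - 1) / N * np.mean(P(xi, X_minus_i) * (X_minus_i - xi))

def f_lim(xi, X):
    # bold f(xi, m^N_X) = int P(xi,y)(y - xi) dm^N_X
    return np.mean(P(xi, X) * (X - xi))

rng = np.random.default_rng(1)
xi = 0.3
errs = []
for N in (10, 100, 1000):
    X = rng.normal(size=N)
    # X[1:] plays the role of X_{-i} for the first particle
    errs.append(abs(f_N(xi, X[1:]) - f_lim(xi, X)))
print(errs)  # each gap is a single (1/N)-weighted term, hence <= 1/(2N)
```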
\subsection{From differential games to controlled particle dynamics } \label{top-left-to-top-right}
The best--reply strategy \eqref{eq:best reply} can also be obtained from an MPC approach \cite{MayneMichalska1990aa}
applied to equations \eqref{eq:full dynamics} and \eqref{eq:general min particle}. In order to derive the best--reply
strategy we consider the following problem: suppose we are given the state $X(t)$ of the system \eqref{eq:full dynamics}
at time $t>0$. We consider an MPC control horizon $\Delta t>0$, assumed to be small, and we assume
that the applied control $u_i(s)$ on $(t, t+\Delta t)$ is constant. For particle $i$ we
denote the unknown constant by $\tilde{u}_i.$ Instead of solving the problem
\eqref{eq:general min particle} on the full time interval $(t,T)$
we consider the objective function only on the receding time horizon $(t,t+\Delta t).$
Further, we discretize the dynamics \eqref{eq:full dynamics} on $(t,t+\Delta t)$
using an explicit Euler discretization with initial value $\bar{X} = X(t).$ We discretize the objective function
by a Riemann sum. A naive discretization leads to a penalization of the control of the type
$\frac{\alpha_i(t+\Delta t)}2 \tilde{u}^{2}.$ Since the explicit Euler discretization in equation \eqref{eq:MPC 1} is only accurate up to order $O( (\Delta t)^{2} )$,
we additionally require $\tilde{u}_i = O(1)$ to obtain a meaningful control in the discretization \eqref{eq:MPC 1} and also in the limit $\Delta t \to 0$.
Therefore, in the MPC problem we need to scale the penalization of the control by $\Delta t.$
This leads to an MPC problem associated with equation \eqref{eq:general min particle} and given by
\begin{align}
\label{eq:MPC 1} x_i(t+\Delta t) = \bar{x}_i +\Delta t \left( f_i( \bar{X} ) + \tilde{u}_i \right), \;& & i=1,\dots, N, \\
\label{eq:MPC 2} \tilde{u}_i = \mbox{ argmin }_{ \tilde{u} \in \mathbb{R}} \Delta t \left( h_i\left( X(t+\Delta t) \right) + \Delta t \frac{ \alpha_i( t+ \Delta t) }2 \tilde{u}^2 \right) , \;& & i=1,\dots, N.
\end{align}
Solving the minimization problem \eqref{eq:MPC 2} leads to
\begin{equation*}
\alpha_i(t+\Delta t) \; \tilde{u}_i = - \partial_{x_i} h_i( \bar{X} ), \; i=1,\dots,N.
\end{equation*}
A $\tilde{u}_i$ of order $O(1)$ is now obtained by Taylor expansion of $\alpha_i$ at time $t.$
Within the MPC approach the control for the time interval $(t,t+\Delta t)$ is therefore given by equation \eqref{eq:MPC best reply}:
\begin{equation}\label{eq:MPC best reply}
\tilde{u}_i = - \frac{1}{\alpha_i(t)} \partial_{x_i} h_i( \bar{X} ), \; i=1,\dots,N.
\end{equation}
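The closed form \eqref{eq:MPC best reply} can be checked against a direct numerical minimization of \eqref{eq:MPC 2}. The sketch below uses a hypothetical two-particle instance ($N=2$, $P\equiv 1$, $\phi(x,y)=(x-y)^2/2$, $\alpha_i\equiv 1$, with the other particle's control fixed a priori) and a fine grid search over the scalar control; the gap between the exact minimizer and \eqref{eq:MPC best reply} shrinks like $O(\Delta t)$, consistent with the Taylor expansion argument above.

```python
import numpy as np

# Hypothetical two-particle instance: P == 1, phi(x,y) = (x-y)^2/2, alpha == 1,
# so h_0(X) = (x_0 - x_1)^2/2 and f_0(X) = (x_1 - x_0)/2.
x0, x1, alpha = 1.0, 0.0, 1.0
f0, f1 = 0.5 * (x1 - x0), 0.5 * (x0 - x1)
u1 = -(x1 - x0) / alpha            # the other particle's (a priori fixed) reply

def mpc_minimizer(dt):
    u = np.linspace(-3.0, 1.0, 400001)           # fine grid over controls
    x0_new = x0 + dt * (f0 + u)                   # explicit Euler step (eq:MPC 1)
    x1_new = x1 + dt * (f1 + u1)
    J = dt * (0.5 * (x0_new - x1_new) ** 2 + dt * 0.5 * alpha * u ** 2)
    return u[np.argmin(J)]                        # minimizer of (eq:MPC 2)

u_brs = -(x0 - x1) / alpha                        # closed form (eq:MPC best reply)
diffs = [abs(mpc_minimizer(dt) - u_brs) for dt in (0.1, 0.01)]
print(diffs)  # gap shrinks like O(dt)
```

For this quadratic instance the exact minimizer can be computed by hand as $\tilde{u}^* = (-1+2\Delta t)/(1+\Delta t)$, so the gap is $3\Delta t/(1+\Delta t)$, matching the printed values up to the grid resolution.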
Usually, the dynamics \eqref{eq:MPC 1} is then evolved with the computed control up to time $t+\Delta t$, and the process is repeated
using the new state $X(t+\Delta t).$ Substituting \eqref{eq:MPC best reply} into \eqref{eq:MPC 1} and letting $\Delta t \to 0$
we obtain
\begin{equation}\label{eq:MPC controlled dynamics}
\frac{d}{dt} x_i(t) = f_i(X(t)) - \frac{1}{\alpha_i(t)} \partial_{x_i} h_i( X(t) ), \; i=1,\dots, N, t \in [0,T].
\end{equation}
These dynamics coincide with the dynamics generated by the best--reply strategy \eqref{eq:best reply} provided that $\alpha_i(t) \equiv 1.$
Therefore, on the particle level the controlled dynamics \eqref{eq:MPC controlled dynamics} of the best--reply strategy \cite{DegondLiuRinghofer2014ac} is equivalent to an MPC
formulation of the problem \eqref{eq:general min particle}. For the toy example we obtain
\begin{equation}
\label{eq:ex:MPC} \frac{d}{dt} x_i = \frac{1}N \sum\limits_{j=1}^N P(x_i,x_j) ( x_j - x_i) - \frac{1}{ (N-1) \alpha_i(t)} \sum \limits_{j=1, j\not = i}^N \partial_{x_i} \phi(x_i, x_j).
\end{equation}
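For the hypothetical choices $P\equiv 1$ and $\phi(x,y)=(x-y)^2/2$, the right-hand side of \eqref{eq:ex:MPC} reduces to $c\,(\overline{x}-x_i)$ with $c = 1 + \frac{N}{(N-1)\alpha}$ and $\overline{x}$ the empirical mean, so the mean is conserved while the empirical variance decays at rate $2c$. A short explicit Euler simulation confirming this behaviour:

```python
import numpy as np

# Explicit Euler simulation of (eq:ex:MPC) with the hypothetical choices
# P == 1 and phi(x,y) = (x-y)^2/2.
rng = np.random.default_rng(2)
N, alpha, dt, T = 50, 1.0, 1e-3, 1.0
X = rng.normal(size=N)
m0, v0 = X.mean(), X.var()

for _ in range(int(round(T / dt))):
    drift = X.mean() - X                         # f_i(X) = mean(X) - x_i
    # sum_{j != i} (x_i - x_j) = N*x_i - sum_j x_j (the j = i term vanishes)
    ctrl = -(N * X - X.sum()) / ((N - 1) * alpha)
    X = X + dt * (drift + ctrl)

c = 1.0 + N / ((N - 1) * alpha)                  # total contraction rate
print(abs(X.mean() - m0))                        # mean conserved (~0)
print(X.var() / v0, np.exp(-2 * c * T))          # variance decays at rate 2c
```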
\subsection{From controlled particle dynamics \eqref{eq:MPC controlled dynamics} to kinetic equation} \label{top-right-to-bottom-right}
The considerations herein have essentially been carried out for the best--reply strategy in the series of papers \cite{DegondLiuRinghofer2012aa,DegondLiuRinghofer2014ac,DegondLiuRinghofer2014aa} and are only repeated for
convenience. The starting point is the controlled dynamics given by equation \eqref{eq:MPC controlled dynamics}, which slightly extends
the best--reply
strategy. In order to pass to the meanfield limit we assume that {\bf (A)} and {\bf (B)} hold true. Then the particles are governed by
\begin{equation}
\label{eq:controlled dynamics}
\frac{d}{dt} x_i(t) = f(x_i(t), X_{-i}(t)) - \frac{1}{\alpha(t)} \partial_{x_i} h(x_i(t), X_{-i}(t) ), \; i=1,\dots, N.
\end{equation}
Associated with the trajectories $X=X(t)$ the discrete probability measure $m^N_X$ is given by $m^{N}_X=\frac{1}N \sum\limits_{j=1}^N \delta(x-x_j(t)).$
Using the weak formulation for a test function $\psi:\mathbb{R} \to \mathbb{R}$ we compute the dynamics of $m^N_X$ over time as
\begin{align*}
\frac{d}{dt} \int \psi(x) m^N_X dx = \frac{1}N \sum\limits_{i=1}^N \int \psi'(x) \left( f(x,X_{-i}) - \frac{1}{\alpha} \partial_x h(x,X_{-i}) \right) \delta(x-x_i(t)) dx
\end{align*}
Using \cite[Theorem 2.1]{Cardaliaguet2010aa} and denoting by $m^{N-1}_{X_{-j}}(t) = \frac{1}{ N-1} \sum\limits_{k=1, k\not =j }^{N} \delta(x-x_k(t))$
a family of empirical measures on $\mathbb{R}$
we obtain from the previous equation
\begin{align*}
\frac{d}{dt} \int \psi(x) m^N_X dx = \frac{1}N \sum\limits_{i=1}^N \int \psi'(x) \left( f^N(x, m^{N-1}_{X_{-i}} ) -
\frac{1}{\alpha} \partial_x h^N(x,m^{N-1}_{X_{-i}} ) \right) \delta(x-x_i(t)) dx
\end{align*}
for functions $f^N, h^N:\mathbb{R} \times \mathcal{P}(\mathbb{R}) \to \mathbb{R}$. Assume $f$ and $h$
fulfill the assertions of \cite[Theorem 4.1]{BlanchetCarlier2014aa}. Then, we obtain in
the sense of equation \eqref{def:con} that for any $i$ and $N$ sufficiently large
$ {\bf f}(x,m^{N}_X) \sim f^{N}(x,m^{N-1}_{X_{-i}})$ and $ {\bf h}(x,m^{N}_X) \sim h^{N}(x,m^{N-1}_{X_{-i}})$, and therefore,
in the sense of equation \eqref{def:con},
\begin{align*}
\frac{d}{dt} \int \psi(x) m^N_X dx = \frac{1}N \sum\limits_{i=1}^N \int \psi'(x)\left( {\bf f}(x, m^N_X ) -
\frac{1}{\alpha} \partial_x {\bf h}(x,m^N_X ) \right) \delta(x-x_i(t)) dx = \\
\int \psi'(x) m^N_X \left( {\bf f}(x, m^N_X ) - \frac{1}{\alpha} \partial_x {\bf h}(x,m^N_X ) \right) dx
\end{align*}
This is the weak form of the kinetic equation for a probability measure $m=m(t,x)$
\begin{equation}\label{eq:best reply kinetic}
\partial_t m + \partial_x \left( m \left( {\bf f}(x,m) - \frac{1}\alpha \partial_x {\bf h } (x,m) \right) \right) = 0
\end{equation}
\par
\noindent
For the toy example the corresponding function $f^N$ is given
by $$f^N(x,m^{N-1}_{X_{-i}} ) = \frac{N-1}N \left( \int P(x,y) (y-x) m^{N-1}_{X_{-i}}(t,y) dy \right) $$
and ${\bf f}(x,m^N_X) = \int P(x,y) (y-x) m^N_X(t,y) dy.$
Therefore, we obtain
\begin{equation*}
\partial_t m(t,x) + \partial_x \left( \int \left( P(x,y)(y-x) - \frac{1}\alpha \partial_x \phi(x,y) \right) m(t,x) m(t,y) dy \right) = 0.
\end{equation*}
We summarize the previous findings in the following lemma.
\begin{lemma}\label{lemma1}
Consider a fixed time horizon $T>0$ and consider $N$ particles governed by the dynamics \eqref{eq:full dynamics} and initial conditions \eqref{eq:IC}. Assume {\bf (A)} and {\bf (B)}
hold true. Assume each particle $i=1,\dots,N$, chooses its control $u_i$ at time $t \in [0,T]$ by
\begin{equation}\label{lem:ctrl} u_i(t) = - \frac{1}{\alpha_i(t)} \partial_{x_i} h_i(X(t)).\end{equation}
Assume that $m^{N}_{\bar{X}} \to {\overline{m}} \in \mathcal{P}(\mathbb{R})$ for $N\to\infty.$ Then,
the meanfield limit of the particle dynamics \eqref{eq:full dynamics} and \eqref{lem:ctrl} is given by
\begin{equation}
\label{lem:mf}
\partial_t m + \partial_x \left( m \left( {\bf f}(x,m) - \frac{1}\alpha \partial_x {\bf h } (x,m) \right) \right) = 0
\end{equation}
for initial data $m(t=0,x)=\overline{m}.$
\end{lemma}
Finally, we summarize the MPC approach. Consider a time step $\Delta t >0$, an equidistant discretization $t_\ell = \ell \Delta t$, $\ell=0,\dots, N_T$, such that $N_T \Delta t =T$, and
a first--order numerical discretization of the $N$ particle dynamics \eqref{eq:full dynamics} given by
\begin{equation}\label{lem:disc}
x_{\ell,i} = x_{\ell-1,i} + \Delta t \left( f_i(X(t_{\ell-1})) + \tilde{u}_{\ell,i} \right), \; \ell = 1,\dots, N_T \mbox{ and } \; x_{0,i}=\bar{x}_i,
\end{equation}
where $x_{\ell,i} = x_i(t_\ell)$ for $i=1,\dots, N.$ In equation \eqref{lem:disc} the control $u_i(t) = \sum\limits_{\ell=1}^{N_T} \chi_{ \left[ (\ell-1)\Delta t, \ell \Delta t \right) }(t) \tilde{u}_{\ell,i}$ is piecewise constant. The constants $\tilde{u}_{\ell,i}$, obtained as a discretization in time of equation \eqref{lem:ctrl}, are given by
\begin{equation}\label{lem:discctl} \tilde{u}_{\ell,i} = - \frac{1}{\alpha_i(t_{\ell-1})} \partial_{x_i} h_i(X(t_{\ell-1})), \; \ell=1,\dots,N_T.
\end{equation}
Then, each $\tilde{u}_{\ell,i}$ coincides up to $O(\Delta t)$ with the optimal control on the time interval $\left[ (\ell-1)\Delta t, \ell \Delta t \right)$ of the
minimization problems \eqref{lem:min pb}: for each $\ell=1,\dots,N_T$ and each fixed value $X(t_{\ell-1})$ we have, up to $O(\Delta t)$,
\begin{equation}
\label{lem:min pb}
\tilde{u}_{\ell,i} = \mbox{argmin}_{u \in \mathbb{R}} \left( h_i(X(t_\ell)) + \Delta t \; \frac{\alpha_i(t_\ell)}2 u^2 \right), i=1,\dots, N,
\end{equation}
where $X(t_\ell)$ is given by equation \eqref{lem:disc}.
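The receding-horizon scheme \eqref{lem:disc} and \eqref{lem:discctl} can be exercised on the same hypothetical two-particle instance as before ($P\equiv 1$, $\phi(x,y)=(x-y)^2/2$, $\alpha\equiv 1$), for which the gap $d=x_1-x_2$ obeys $\dot d = -3d$; halving $\Delta t$ should roughly halve the error against $d(T)=d(0)e^{-3T}$, in line with the $O(\Delta t)$ accuracy of the scheme.

```python
import numpy as np

# Receding-horizon loop (lem:disc)+(lem:discctl) for N = 2 with the
# hypothetical data P == 1, phi(x,y) = (x-y)^2/2, alpha == 1.
def mpc_run(dt, T=1.0):
    x = np.array([0.5, -0.5])                    # initial gap d(0) = 1
    for _ in range(int(round(T / dt))):
        f = x.mean() - x                          # f_i(X) = mean(X) - x_i
        u = -(x - x[::-1])                        # u_i = -d/dx_i h_i(X), alpha = 1
        x = x + dt * (f + u)                      # first-order step (lem:disc)
    return x[0] - x[1]

exact = np.exp(-3.0)                              # d(T) = d(0) exp(-3T), T = 1
errs = [abs(mpc_run(dt) - exact) for dt in (0.01, 0.005)]
print(errs[0] / errs[1])  # ~2: first-order convergence in dt
```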
\section{Results related to meanfield games }\label{sec:meanfield}
In this section we consider the limit of problem \eqref{eq:general min particle}
for a large number of particles. This has been investigated for example in \cite{LasryLions2007aa},
and derivations (in a slightly different setting) have been detailed in \cite[Section 7]{Cardaliaguet2010aa}. In order to show the links presented in Figure \ref{fig1}
we repeat the formal computations of \cite{Cardaliaguet2010aa}.
\par
A notion of solution to the $N$ competing optimization problems \eqref{eq:general min particle}
is the concept of a Nash equilibrium. If it exists, it may be computed for the differential game using the HJB equation. We briefly present the computations leading to the HJB equation. Then, we discuss
the large particle limit of the HJB equation and derive the best--reply strategy.
\subsection{Derivation of the finite--dimensional HJB equation }
The HJB equation describes the evolution of a value function $V_i=V_i(t,Y)$
of particle $i.$ The value function is defined as the future cost along a
particle trajectory governed by equation \eqref{eq:full dynamics}, starting
at time $t \in (0,T)$ at position $Y$, with controls $u_i:(t,T)\to \mathbb{R}$, $i=1,\dots,N$:
\begin{equation}
\label{eq:value fct i}
V_i(t, Y ) = \int_t^T \left( \frac{ \alpha_i(s)}2 u_i^2(s) + h_i(X(s)) \right)ds,
\end{equation}
where $X(s)=(x_i(s))_{i=1}^N$ is the solution to equation \eqref{eq:full dynamics} with control
$U$ and initial condition
\begin{equation}
\label{eq:ic value i}
X(t) = Y.
\end{equation}
Among all possible controls $u_i$ we denote by $u_i^{*}$ the
optimal control minimizing $V_i(t,Y)$. We investigate the relation of the value function of particle $i$ to
the optimal control $u_i^*$. To this end, assume that the coupled problem \eqref{eq:general min particle}
has a unique solution denoted by $U^*=(u^*_i)_{i=1}^N$.
Each $u_i^* : [t,T]\to \mathbb{R}$, $i=1,\dots,N$, is hence a solution to
$$ u^{*}_i = \mbox{ argmin }_{ u_i(\cdot):[t,T]\to\mathbb{R} } \{ V_i(t,Y): X \mbox{ solves } \eqref{3:state} \}, \; i=1,\dots,N. $$
The corresponding particle trajectories are denoted by $X^* = (x_i^*)_{i=1}^N$
and are obtained through \eqref{eq:full dynamics} for an initial condition $X^*(0)=\bar{X}.$
\par
Since $X^*(\cdot)$ depends on $U^*,$ minimizing the value function \eqref{eq:value fct i}
leads to the computation of formal derivatives of $V_i$ with respect to $u_i.$ The optimal
control $u_i^*$ is then found as the formal point (in function space) where the derivative
of $V_i$ with respect to $u_i$ vanishes. The Gateaux derivative of $V_i$ in an arbitrary direction $v:[0,T]\to \mathbb{R}$ reads
\begin{align}\label{3:reduced cost}
\frac{d}{d u_i } V_i(t,Y)[v] = \int_t^T \left( \alpha_i(s) u_i^*(s) + \sum\limits_{k=1}^N \partial_{x_k} h_i(X^*(s)) \partial_{u_i} \left( x_k^{*}(s) \right) \right) v(s) ds.
\end{align}
This derivative is not easily computed due to the unknown derivative of each state $x_k^{*}$ with respect to the acting control $u_i^{*}.$
However, choosing a set of suitable co-states $\phi_j^\ell:[0,T] \to \mathbb{R}$ for $\ell=1,\dots,N$ and $j=1,\dots,N,$ we may simplify
equation \eqref{3:reduced cost}: we test equation \eqref{eq:full dynamics} with functions $\phi_j^\ell :[0,T] \to \mathbb{R}$
for $\ell,j=1,\dots,N$ such that
$\phi_j^{\ell}(T)=0$, and integrate on $(t,T)$ with $0\leq t<T$ and initial data $X^*(t)=Y$ to obtain
\begin{align}\label{3:weak state}
\sum\limits_{j=1}^N \left\{ \int_t^T - \frac{d}{ds} (\phi_j^\ell(s) ) x_j^*(s) - \phi_j^\ell(s) \left( f_j(X^*(s)) + u_j^*(s) \right) ds - \phi_j^\ell(t)y_j \right\} = 0, \; \ell=1,\dots,N.
\end{align}
The derivative with respect to $u_i$ in an arbitrary direction $v$ is then
\begin{align} \nonumber
\int_t^T \left\{ \sum\limits_{j=1}^N \left( - \frac{d}{ds} (\phi_j^\ell(s) ) \partial_{u_i} \left( x_j^*(s) \right) - \phi_j^\ell(s) \left( \sum\limits_{k=1}^N \partial_{x_k} \left( f_j(X^*(s)) \right) \partial_{u_i} (x_k^*(s)) \right) \right) - \phi_i^\ell(s) \right\} \\
\times v(s) ds = 0.\label{3:weak derivative}
\end{align}
The previous equation can be equivalently rewritten as
\begin{align}
\int_t^T \sum\limits_{k=1}^N \left( - \frac{d}{ds} \phi^\ell_k(s) - \sum\limits_{j=1}^N \phi_j^\ell(s) \partial_{x_k} f_j(X^*(s)) \right) \partial_{u_i} ( x_k^*(s) ) v(s)\,
ds = \int_t^T \phi_i^\ell(s) v(s) ds.\label{eq:p-ast}
\end{align}
Let $\phi^i_j$ for $i,j=1,\dots,N$ fulfill the coupled linear system of adjoint equations (or co-state equations), solved backwards in time,
\begin{align}\label{3:adjoint}
- \frac{d}{dt} \phi^i_j(t) - \sum\limits_{k=1}^N \phi^i_k (t) \partial_{x_j} f_k(X^*(t)) = \partial_{x_j} h_i(X^*(t)), \; \phi^i_j(T)=0.
\end{align}
Then, formally, for every $s \in (t,T)$ we have
\begin{align*}
\sum\limits_{j=1}^N \partial_{x_j} h_i(X^*(s)) \partial_{u_i} (x_j^*(s) ) =
\sum\limits_{k=1}^N
- \frac{d}{ds} \phi^i_k(s) \partial_{u_i} (x^{*}_k(s)) - \sum\limits_{j=1}^N \sum\limits_{k=1}^N \phi^i_j (s) \partial_{x_k} f_j(X^*(s)) \partial_{u_i} (x^{*}_k(s)).
\end{align*}
Since for all $v$ we have
\begin{align*}
\sum\limits_{j=1}^N \int_t^T \partial_{x_j} h_i(X^*(s)) \partial_{u_i} \left( x_j^{*}(s) \right) v(s) ds = \int_t^T \phi_i^{i}(s) v(s) ds
\end{align*}
it follows that $$\sum\limits_{j=1}^N \partial_{x_j} h_i(X^*(s)) \partial_{u_i} \left( x_j^{*}(s) \right) = \phi^{i}_i(s)$$ for $s \in (t,T).$ At
optimality the necessary condition is
$$\frac{d}{du_i} V_i(t,Y)[v] = 0$$
for all $v$, which, thanks to equation \eqref{3:reduced cost}, implies for a.e. $s\in (t,T)$
$$
\alpha_i(s) u_i^*(s) + \sum\limits_{j=1}^N \partial_{x_j} h_i(X^*(s)) \partial_{u_i} \left( x_j^{*}(s) \right) =0.
$$
This leads to the following equation a.e. in $s \in (t,T)$
\begin{align}
\label{3:gradient}
\alpha_i(s) u_i^*(s) + \phi_i^i(s) = 0.
\end{align}
Equations
\begin{equation}\label{3:state}
\frac{d}{ds} x_j^*(s) = f_j(X^*(s)) + u_j^*(s), \; X^*(t)=Y.
\end{equation}
and \eqref{3:adjoint} for $j=1,\dots,N$, together with equation \eqref{3:gradient}, comprise the optimality
conditions for the minimization of the value function $V_i(t,Y)$ given by equation \eqref{eq:value fct i}. Due to
equation \eqref{3:state} this is a coupled system of ordinary differential and algebraic equations
in the unknowns $\mathcal{S}:=(x_j^*, u_j^*, (\phi_j^i)_{i=1}^N )_{j=1}^N.$ Solving for all those unknowns yields
in particular the optimal control $u_i^*$ for the value function $V_i(t,Y)$ for all $i=1,\dots,N.$ The adjoint equation \eqref{3:adjoint}
is posed backwards in time, so that the system is a two--point boundary value problem, and due to the strong coupling
of $x_j^*$ and $\phi^i_j$ it is not easy to solve. The derived system is a version of Pontryagin's maximum principle (PMP)
for sufficiently regular and unique controls. We refer to \cite{Friedman1974aa,Sontag1998aa,Bressan2011aa} for more details.
From now on we assume that equation \eqref{3:gradient}, where $\phi_j^i$ solves equation \eqref{3:adjoint}, is
necessary and sufficient for the optimality of $u_i^*$ in minimizing the value function $V_i(t,Y).$ The corresponding
optimal trajectory and co-state are denoted by $\mathcal{S}$ as introduced above. We formally
derive the HJB equation based on the previous PMP equations and refer to \cite[Chapter 8]{Friedman1974aa}
for a careful theoretical discussion.
\par
Consider the function $V_i(t,Y)$ evaluated along the optimal trajectory $\mathcal{S},$ i.e.,
let $ {\mathcal{V}}_i(t)=V_i(t, X^*(t)).$ Then, by definition of $V_i$ and $\mathcal{S}$ we have
\begin{align*}
- \frac{\alpha_i(t)}2 (u_i^*)^2(t) - h_i(X^*(t)) = \frac{d}{dt} {\mathcal{V}}_i(t) \\
= \partial_t V_i(t,X^*(t)) + \sum\limits_{k=1}^{N} \partial_{x_k} V_i(t,X^*(t)) \left( f_k(X^*(t))+u_k^*(t) \right).
\end{align*}
Using the necessary condition \eqref{3:gradient} we obtain
\begin{align}\label{3:temp}
- \frac{1}{ 2\alpha_i(t)} (\phi^i_i)^2(t) - h_i(X^*(t)) \\
= \partial_t V_i(t,X^*(t)) + \sum\limits_{k=1}^{N} \partial_{x_k} V_i(t,X^*(t)) \left( f_k(X^*(t))-\frac{1}{\alpha_k(t)} \phi_k^k(t) \right). \nonumber
\end{align}
The trajectory $X^*(s)$ depends on the initial condition $Y=(y_i)_{i=1}^N$.
Computing the variation of $V_i(t,Y)$ with respect to $y_o$ for $o \in \{1,\dots,N\}$ and evaluating at $\mathcal{S}$ yields (since $u_i^{*}$ does not depend
on $Y$):
\begin{align*}
\partial_{y_o} V_i(t,Y) = \int_t^T \left( \sum\limits_{k=1}^N \partial_{x_k} h_i(X^*(s)) \partial_{ y_o } (x_k^*(s)) \right) ds.
\end{align*}
From the weak form of the state equation \eqref{3:weak state} we obtain after differentiation with respect to the initial condition $y_o$ for $\ell=1,\dots,N$
\begin{align*}
\sum\limits_{j=1}^N \int_t^T \left( -\frac{d}{ds} \phi^\ell_j(s) \partial_{y_o} (x_j^*(s) ) - \phi_j^\ell(s) \sum\limits_{k=1}^N (\partial_{x_k} f_j) (X^*(s) ) \partial_{y_o} ( x_k^*(s) ) \right)
ds = \phi^\ell_o(t).
\end{align*}
Similarly to the computations above, we use the equation for $\phi^i_j$ given by equation \eqref{3:adjoint}
to express
\begin{align*}
\partial_{y_o} V_i(t,Y) = \int_t^T \left( \sum\limits_{k=1}^N (\partial_{x_k} h_i) (X^*(s)) \partial_{y_o} ( x_k^*(s) ) \right) ds = \\
\int_t^T \left( \sum\limits_{k=1}^N \left(
- \frac{d}{ds} \phi_k^i(s) - \sum\limits_{j=1}^N \phi_j^i(s) (\partial_{x_k} f_j) (X^*(s) ) \right) \partial_{y_o} ( x_k^*(s) ) \right) ds = \phi_o^i(t).
\end{align*}
Therefore, $\nabla_Y V_i(t,Y) = (\phi^{i}_k)_{k=1}^{N}$ provided that $\phi^{i}_k$ is a solution to equation \eqref{3:adjoint}.
Now, along $\mathcal{S}$ we may express in equation \eqref{3:temp} the co--state by the derivative of $V_i$ with respect to $Y.$ Replacing $Y=X^{*}(t)$
we obtain
\begin{align*}
- \frac{1}{2\alpha_i(t)} \left( \partial_{x_i} V_i(t,X^{*}(t)) \right)^{2} - h_i(X^*(t)) = \partial_t V_i(t,X^{*}(t)) + \\
\sum\limits_{k=1}^{N} \partial_{x_k} V_i(t,X^{*}(t)) \left( f_k(X^{*}(t)) - \frac{1}{\alpha_k(t)}
\partial_{x_k} V_k(t,X^{*}(t)) \right).
\end{align*}
By definition we have $V_i(T,X)=0$ for all $X.$ Therefore, instead of solving the PMP equation we may ask to solve the $N$ HJB equations for $V_i=V_i(t,X)$ on
$[0,T] \times \mathbb{R}^{N}$ for $i=1,\dots,N$ given by the reformulation of the previous equation:
\begin{align}
\label{3:HJB}
\partial_t V_i(t,X) + \sum\limits_{k=1, k\not = i}^{N} \partial_{x_k} V_i(t,X) \left( f_k(X) - \frac{1}{\alpha_k(t)} \partial_{x_k} V_k(t,X) \right) + \partial_{x_i} V_i(t,X) f_i(X) = \\
- h_i(X) + \frac{1}{2 \alpha_i(t)} (\partial_{x_i} V_i(t,X) )^{2},\nonumber
\end{align}
with terminal condition
\begin{align}
\label{3:terminal cond}
V_i(T,X) = 0, \; i=1,\dots, N.
\end{align}
\begin{remark}
Since $V_i(t,Y)$ does not contain terminal costs of the type $g_i(X(T))$ the terminal condition
for $V_i$ is zero. In case of terminal costs we obtain
$V_i(T,X) = g_i(X(T))$
and terminal constraints for the co-state $\phi^{i}_j$ as
$\phi^{i}_j(T)= \partial_{x_j} g_i(X^{*}(T))$ in equation \eqref{3:adjoint}.
\par
The aspect of the game theoretic concept is seen in the HJB equation \eqref{3:HJB}
in the mixed terms $\partial_{x_k} V_i.$ If we model particles $i$ that do not
anticipate the optimal choice of the control of other particles $j\not =i,$
then the $N$
minimization problems for $V_i$ in equation \eqref{eq:value fct i} are independent.
Therefore the corresponding HJB for $V_i$
and $V_j$ with $j\not =i$ decouple and all mixed terms vanish. In a different setting this situation
has been studied in \cite{AlbiHertyPareschi2014aa,AlbiPareschiZanella2014aa} where only a single
control for all particles is present.
\end{remark}
Assume we have a (sufficiently regular)
solution $(V_i)_{i=1}^{N}$ with $V_i : [0,T] \times \mathbb{R}^{N} \to \mathbb{R}$. Then, we obtain the optimal control $u_i^{*}(t)$ and the optimal trajectory
$X^{*}(t)$ for minimizing $V_i$ by
\begin{equation}\label{3:ctrl thu hjb}
u_i^{*}(t) = -\frac{1}{\alpha_i(t)} \partial_{x_i} V_i(t,X^{*}(t)), \; i=1,\dots,N,
\end{equation}
where $X^{*}$ fulfills equation \eqref{3:state}. This is an implicit definition of $u_i^{*}.$ However, in view of the dynamics \eqref{3:state}
it is not necessary to solve equation \eqref{3:ctrl thu hjb} for $u_i^{*}.$ Similarly to the discussion in Section \ref{top-left-to-top-right} we obtain
controlled system dynamics provided we have a solution to the HJB equation. The associated controlled dynamics are given by
\begin{equation}\label{3:ctrl HJB}
\frac{d}{dt} x_i(t) = f_i(X(t)) - \frac{1}{\alpha_i(t)} \partial_{x_i} V_i(t,X(t)), \; i=1,\dots,N,
\end{equation}
and initial conditions \eqref{eq:IC}. Comparing the HJB controlled dynamics with equation \eqref{eq:MPC controlled dynamics}
we observe that in the best--reply strategy the full solution to the HJB is not required. Instead, $\partial_{x_i}
V_i(t,X)$ is approximated by $\partial_{x_i} h_i(X(t)).$ This approximation is also obtained using a discretization
of equation \eqref{3:HJB} in an MPC framework. Since the equation for $V_i$ is backward in time we may use
a semi-discretization in time on the interval $(T- \Delta t, T)$ given by
\begin{align*}
\frac{ V_i(T,X) - V_i(T-\Delta t,X) }{\Delta t} + \sum\limits_{k=1, k\not = i}^{N} \partial_{x_k} V_i(T,X) \left( f_k(X) - \frac{1}{\alpha_k(t)} \partial_{x_k} V_k(T,X) \right) \\
+ \partial_{x_i} V_i(T,X) f_i(X) = - h_i(X) + \frac{1}{2 \alpha_i(t)} (\partial_{x_i} V_i(T,X) )^{2} +O(\Delta t), \\
V_i(T,X)=0.
\end{align*}
Using the terminal condition we obtain that $V_i(T-\Delta t,X ) = h_i(X)$ for all $X \in \mathbb{R}^N.$
\par
The derivation of the HJB equation for $V_i(t,Y)$ allows for an arbitrary choice of $T>t.$
Hence we may also set the terminal time to $T:=t+\Delta t.$ This amounts to considering the value function
$$ V_i^{\Delta t}(t,Y) = \int_t^{t+\Delta t} \left( \frac{\alpha_i(s)}2 u_i^{2}(s) + h_i(X(s)) \right) ds$$
where $X(s), s \in (t, t+\Delta t)$ fulfills \eqref{3:state} and where we indicate the dependence on $\Delta t$
by a superscript on $V_i.$ Applying the explicit Euler discretization
as shown before leads therefore to
$$V^{\Delta t}_i(t,Y) = h_i(Y), \; Y=X(t).$$
Hence, the best--reply strategy applied at time $t$ for a finite--dimensional problem of $N$ interacting particles
coincides with an explicit Euler discretization of the HJB equation for a value function given by $V_i^{\Delta t}(t,Y)$
where $Y=X(t)$ is the state of the particle system at time $t.$
\subsection{ Meanfield limit of the HJB equation \eqref{3:HJB} }
Next, we turn to the meanfield limit of equation \eqref{3:HJB} for $i=1,\dots, N.$ To this end
we assume that {\bf (A)} and {\bf (B)} hold. We further recall and introduce some notation:
\begin{align*}
X=(x_i)_{i=1}^{N}, \; Z=(z_i)_{i=1}^{N}, \; {\mathbb{Z}} =(\eta,z_1,\dots,z_{N-1}), \; {\mathbb{Z}} _k:=\left(z_k, \eta, z_1,\dots, z_{k-1}, z_{k+1}, \dots, z_{N-1} \right).
\end{align*}
\par
We obtain the following set of equations for $V_i(t,X)$ and $i=1,\dots,N,$
\begin{align}\label{eq:sym HJB}
\partial_t V_i(t,X) + \sum\limits_{ k=1, k\not = i}^N \partial_{x_k} V_i(t,X) \left( f(x_k,X_{-k}) - \frac{1}{\alpha(t)} \partial_{x_k} V_k(t,X) \right) \\
+ \partial_{x_i} V_i(t,X) f(x_i, X_{-i}) = - h(x_i,X_{-i}) + \frac{1}{2\alpha(t)} \left( \partial_{x_i} V_i(t,X) \right)^2, \qquad V_i(T,X)=0.\nonumber
\end{align}
We show that a solution $(V_i)_{i=1}^{N}$ to the previous set of equations is obtained by considering the
equation \eqref{eq:sol gen} below.
Suppose that a function $W=W(t, {\mathbb{Z}} ): [0,T]\times \mathbb{R} \times \mathbb{R}^{N-1}\to \mathbb{R}$ fulfills
\begin{align}\label{eq:sol gen}
\partial_t W(t, {\mathbb{Z}} ) + \sum\limits_{k=1}^{N-1} \partial_{z_k} W(t, {\mathbb{Z}} ) \left( f( {\mathbb{Z}} _k) - \frac{1}{\alpha(t)} \partial_{\eta} W(t, {\mathbb{Z}} ) \right) + \partial_\eta W(t, {\mathbb{Z}} ) f( {\mathbb{Z}} ) \\
= - h( {\mathbb{Z}} ) + \frac{1}{2\alpha(t)} \left( \partial_\eta W(t, {\mathbb{Z}} ) \right)^{2}, \nonumber
\end{align}
and terminal condition $W(T, {\mathbb{Z}} )= 0.$ Suppose a solution $W$ to equation \eqref{eq:sol gen} exists and fulfills the previous equation pointwise a.e. $(t, {\mathbb{Z}} )\in [0,T]\times \mathbb{R}^{N}.$
Then, we define
\begin{equation}\label{eq:sol spc}
V_i(t,X) := W(t,x_i,X_{-i}), \; i=1,\dots,N.
\end{equation}
By definition $W=W(t, {\mathbb{Z}} ),$ therefore the partial derivatives
of $V_i$ are computed as follows where $$(x_i,X_{-i}) = (\eta,z_1,\dots,z_{N-1}):$$
\begin{align*}
\partial_t V_i(t,X) &=\partial_t W(t,x_i,X_{-i}), \; \partial_{x_i} V_i(t,X) = \partial_\eta W(t,x_i,X_{-i}), \\
\partial_{x_k} V_k(t,X) &= \partial_{x_k} W(t,x_k,X_{-k}) = \partial_\eta W(t,x_k,X_{-k}), \\
\partial_{x_k} V_i(t,X) &= \partial_{z_{k}} W(t, {\mathbb{Z}} ) \; \mbox{ for } k\in \{1, \dots, i-1 \}, \\
\partial_{x_k} V_i(t,X) &= \partial_{z_{k-1}} W(t, {\mathbb{Z}} ) \; \mbox{ for } k \in \{ i+1, \dots, N\}.
\end{align*}
Due to assumption {\bf (A)} we have that
$$f( {\mathbb{Z}} _k) = f(z_k, z_1,\dots, z_{k-1},z_{k+1}, \dots, z_{i-1}, \eta, z_{i}, \dots, z_{N-1})$$
for any $i \in \{1,\dots,N-1 \}.$ The same is true for the argument of $h.$ Therefore,
$$f(x_k, X_{-k}) = f( {\mathbb{Z}} _k) $$
and $h(x_i,X_{-i})=h( {\mathbb{Z}} ).$ Hence, $V_i(t,X)= W(t,x_i,X_{-i})$ fulfills equation
\eqref{eq:sym HJB}. Hence, instead of studying equation \eqref{eq:sym HJB} we may study the limit for $N\to \infty$
of equations \eqref{eq:sol gen}. In view of Theorem \ref{Theorem2.1Card} a limit exists provided
$W$ is symmetric (and fulfills uniform bound and uniform continuity estimates).
\par
Note that, $W$ as a solution to equation \eqref{eq:sol gen} is symmetric with respect to the argument $(z_1,\dots,z_{N-1}).$
This holds true, since $f$ and $h$ are symmetric with respect to $X_{-i}$ for any $i \in \{ 1,\dots, N \}.$ Hence, in the following we may assume that we have a solution $W$
to equation \eqref{eq:sol gen} with the property that for any permutation $\sigma:\{1,\dots,N-1\}\to \{1,\dots,N-1\}$ we have
\begin{equation}\label{eq:sym of sol}
W(t, {\mathbb{Z}} ) = W(t,\eta, z_{\sigma_1}, \dots, z_{\sigma_{N-1}}).
\end{equation}
In view of Theorem \ref{Theorem2.1Card} we expect $W(t, {\mathbb{Z}} )$ to converge for $N\to\infty$
to a limit function $ {\bf W}:[0,T] \times \mathbb{R} \times \mathcal{P}(\mathbb{R}) \to \mathbb{R}$ in the sense of
Theorem \ref{Theorem2.1Card}, i.e., up to a subsequence and for $Z \in \mathbb{R}^{N}$
\begin{align*}
\lim\limits_{N\to \infty} \sup\limits_{ |\eta|\leq R, \, t \in [0,T], \, Z_{-N} \in \mathbb{R}^{N-1} }
| W(t, {\mathbb{Z}} ) - {\bf W}(t,\eta,m^{N-1}_{ Z_{-N} } ) | =0.
\end{align*}
Similar to equation \eqref{def:con} we obtain that the limit
$ {\bf W}:[0,T] \times \mathbb{R} \times \mathcal{P}(\mathbb{R}) \to \mathbb{R}$ fulfills
the convergence if the measure $m^{N-1}_{Z_{-N}}$ is replaced
by the empirical measure $m^{N}_Z$ for any $Z \in \mathbb{R}^{N}.$
Using the introduced notation in Section \ref{sec:setting} we may therefore write
$$ W(t, {\mathbb{Z}} )=W^{N}(t,\eta,m^{N-1}_{Z_{-N}}) \sim {\bf W}(t,\eta,m^{N}_{ Z } ).$$
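This identification can be illustrated numerically. In the sketch below the symmetric function is, by assumption for this illustration only, the integral of a fixed function $g$ against the empirical measure; evaluating it on $m^{N-1}_{Z_{-N}}$ or on the full empirical measure $m^{N}_Z$ then changes the value only by $O(1/N)$:

```python
import numpy as np

# Toy symmetric function (illustrative assumption): W depends on
# (z_1, ..., z_{N-1}) only through the average of g, i.e. it is the
# integral of g against the empirical measure m^{N-1}_{Z_{-N}}.
g = lambda z: z ** 2

def W_sym(eta, others):
    # W^N(t, eta, m^{N-1}_{Z_{-N}}): integral of g against m^{N-1}
    return np.mean(g(others))

def W_full(eta, particles):
    # the same functional evaluated on the full empirical measure m^N_Z
    return np.mean(g(particles))

rng = np.random.default_rng(0)
gaps = {}
for N in (10, 100, 10_000):
    Z = rng.normal(size=N)
    gaps[N] = abs(W_sym(Z[-1], Z[:-1]) - W_full(Z[-1], Z))
print(gaps)  # the discrepancy shrinks like O(1/N)
```

The two evaluations differ exactly by the weight $1/N$ that the additional atom $\delta_{z_N}$ carries in $m^{N}_Z$, which is the formal content of the relation $W^{N}(t,\eta,m^{N-1}_{Z_{-N}}) \sim {\bf W}(t,\eta,m^{N}_Z)$.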
\par
Similarly, we obtain the following meanfield limits for $N$ sufficiently large (and provided the assumptions
of Theorem \ref{Theorem2.1Card} and \cite[Theorem 4.1]{BlanchetCarlier2014aa} are fulfilled):
\begin{align*}
\partial_t V_i(t,X) &= \partial_t W(t,x_i,X_{-i}) = \partial_t W^{N}(t,x_i,m^{N-1}_{X_{-i}}) & \sim& \partial_t {\bf W}(t,x_i,m^{N}_X), \\
h_i(X) &=h(x_i,X_{-i}) = h^N(x_i,m^{N-1}_{X_{-i}}) & \sim & {\bf h}(x_i,m^{N}_X), \\
(\partial_{x_i} V_i(t,X))^2 &= (\partial_{x_i} W(t,x_i,X_{-i}))^2 = (\partial_{x_i} W^N(t,x_i,m^{N-1}_{X_{-i}}) )^2 & \sim & (\partial_{x_i} {\bf W}(t,x_i,m^{N}_X))^2.
\end{align*}
It remains to discuss the limit of the mixed term
in equations \eqref{eq:sym HJB} and \eqref{eq:sol gen}, respectively.
\begin{equation}\label{eq:ttemp}
\sum\limits_{k=1}^{N-1} \partial_{z_k} W(t, {\mathbb{Z}} ) \left( f( {\mathbb{Z}} _k) - \frac{1}{\alpha(t)} \partial_\eta W(t, {\mathbb{Z}} ) \right).
\end{equation}
In order to derive the meanfield limit for equation \eqref{eq:ttemp} we require $f$ to be symmetric
in {\em all} variables, i.e.,
\begin{itemize}
\item[ {\bf (C)} ] We assume $ f(Z) = f( (z_{\sigma_i})_{i=1}^{N} )$ for any permutation $\sigma:\{1,\dots,N\} \to \{1,\dots, N\}$ and for all $Z \in \mathbb{R}^{N}.$
\end{itemize}
Under assumption {\bf (C)} we have in particular for all $k \in \{1,\dots,N\}$ and a permutation
$\sigma:\{1,\dots,N-1\} \to \{1,\dots,N-1\}$
$$ f( {\mathbb{Z}} _k) = f(\eta, z_1,\dots, z_{N-1}) = f(\eta, z_{\sigma_1}, \dots, z_{\sigma_{N-1}} ).$$
Therefore, $f( {\mathbb{Z}} ) = f_N(\eta, m^{N-1}_{ Z_{-N} }).$ In the sense of equation \eqref{def:con}
we further obtain $f_N(\eta,m^{N-1}_{ Z_{-N} }) \sim {\bf f}(\eta,m^{N}_Z)$ for any $(\eta,Z).$
However under assumption {\bf (C)} we also obtain $f(Z)=f_N(m^{N}_Z) \sim {\bf f}(m^{N}_Z).$
Assuming the limit in Theorem \ref{Theorem2.1Card} is unique we obtain that ${\bf f}$ is therefore
{\em independent } of $\eta.$
\par
Now, consider the discrete measure $m^{N}_Z=\frac{1}{N} \sum\limits_{j=1}^N m_{z_j}$ with
$m_{z_j} = \delta(x-z_j) \in \mathcal{P}(\mathbb{R}).$
For each $j$ we denote by $m_{z_j}( \zeta) = \mathcal{Z}(\zeta) \# m_{z_j}$ the push forward
of the discrete measure under the flow $\mathcal{Z}$ generated by the field $c:(t,t+a) \times \mathbb{R} \times \mathcal{P}(\mathbb{R}) \to\mathbb{R},$ with $m_{z_j}(t)=m_{z_j}.$
Let the characteristic equations for $\mathcal{Z}$ for fixed $\eta$ be given by the flow field
\begin{equation}
\label{eq:psuh c}
\frac{d}{d\zeta} \mathcal{Z}(\zeta) = c(\zeta, \eta, m^{N}_Z(\zeta) ) := {\bf f}( m^{N}_Z(\zeta) )-\frac{1}{\alpha(\zeta)} \partial_\eta {\bf W}(\zeta,\eta,m^{N}_Z(\zeta)).
\end{equation}
Similarly to equation \eqref{def:ms}, we obtain the directional derivative
of $ {\bf W}(t,\eta,m^{N}_Z)$ with respect to the measure $m^{N}_Z$ in the direction of the vector field $c$ at $\zeta=t$
as
\begin{align*}
& \sum\limits_{k=1}^{N-1} \partial_{z_k} W(t, {\mathbb{Z}} ) \left( f( {\mathbb{Z}} ) - \frac{1}{\alpha(t)} \partial_\eta W(t, {\mathbb{Z}} ) \right) \sim
\\
\qquad & \langle \partial_m {\bf W}(t,\eta,m^{N}_Z), {\bf f}( m^{N}_Z)-\frac{1}{\alpha(t)} \partial_\eta {\bf W}(t,\eta,m^{N}_Z) \rangle_{L^2_{m^{N}_Z}},
\end{align*}
where $L^2_{m^{N}_Z}$ denotes the space of square integrable functions for the measure $m^{N}_Z$.
Performing the limits for $N\to \infty$ in the sense of equation \eqref{def:con}, replacing $\eta$ by $x$, we obtain finally the meanfield
equation for $ {\bf W}= {\bf W}(t,x,m):[0,T]\times \mathbb{R} \times \mathcal{P}(\mathbb{R}) \to \mathbb{R}$ given by
\begin{align}\label{eq:meanfield W}
\partial_t {\bf W}(t,x,m) + \langle \partial_m {\bf W}(t,x,m), {\bf f}(m)-\frac{1}{\alpha(t)} \partial_x {\bf W}(t,x,m) \rangle_{L^2_m} + \partial_x {\bf W}(t,x,m) {\bf f}(m) \\
= - {\bf h}(x,m) + \frac{1}{2\alpha(t)} \left( \partial_x {\bf W}(t,x,m) \right)^{2}, \;
{\bf W}(T,x,m) = 0. \nonumber
\end{align}
The previous equation is reformulated using the concept of directional derivatives of measures $m$ outlined in Appendix \ref{appendix}.
Denote by $c(t,x,m) = {\bf f}(m) - \frac{1}{\alpha(t)} \partial_x {\bf W}(t,x,m)$ the corresponding vector field.
If $m_{x_j}(t) \in \mathcal{P}(\mathbb{R})$ for each $t$ is obtained as push forward with the vector field $c,$ then, $m_{x_j}$ fulfills in a weak sense the
continuity equation \eqref{continuity eq}. Therefore, $m^{N}_X= \frac{1}N \sum\limits_{j=1}^{N} m_{x_j}$ fulfills
\begin{equation}\label{eq:mf t1}
\partial_t m^{N}_X(t,x) + \partial_x \left( c(t,x,m^{N}_X) m^{N}_X(t,x) \right) = 0.
\end{equation}
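The weak form of equation \eqref{eq:mf t1} can be checked numerically along the particle characteristics: if the atoms of $m^{N}_X$ move with the field $c$, then $\frac{d}{dt}\langle \varphi, m^{N}_X\rangle = \langle \varphi', c \rangle_{L^2_{m^{N}_X}}$ for any test function $\varphi$. The field $c(x)=-x$ below is an assumed stand-in for ${\bf f}(m)-\frac{1}{\alpha(t)}\partial_x {\bf W}$:

```python
import numpy as np

# Weak continuity equation for an empirical measure: if each particle
# follows dx_j/dt = c(x_j), then for every smooth test function phi,
#   d/dt (1/N) sum phi(x_j) = (1/N) sum phi'(x_j) c(x_j).
c = lambda x: -x            # assumed stand-in for f(m) - (1/alpha) d_x W
phi = lambda x: np.sin(x)   # test function
dphi = lambda x: np.cos(x)  # its derivative

rng = np.random.default_rng(1)
x = rng.normal(size=1_000)  # particle positions carrying m^N_X
dt = 1e-6

# finite difference of <phi, m^N_X> after one Euler step of the characteristics
lhs = (np.mean(phi(x + dt * c(x))) - np.mean(phi(x))) / dt
# right-hand side of the weak continuity equation
rhs = np.mean(dphi(x) * c(x))
print(abs(lhs - rhs))  # O(dt)
```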
As seen from the previous equations and the computations in equation \eqref{def:ms} we therefore have
\begin{align*}
\partial_t {\bf W}(t,x,m^{N}_X(t,x)) + \langle \partial_m {\bf W}(t,x,m^{N}_X(t,x)), {\bf f}(m^{N}_X(t,x)) - \frac{1}{\alpha(t)} \partial_x {\bf W}(t,x,m^{N}_X(t,x)) \rangle_{L^2_{m^{N}_X}} = \\
\frac{d}{dt} {\bf W}(t,x,m^{N}_X(t,x)).
\end{align*}
This motivates the following definition. For a family of measures $(m(t))_{t \in [0,T]}$ with $m(t,\cdot) \in \mathcal{P}(\mathbb{R})$,
define $\v:[0,T]\times \mathbb{R} \to \mathbb{R}$ by
\begin{equation}
\label{eq:def v}
\v(t,x) := {\bf W}(t,x,m(t,x)).
\end{equation}
Then, from equation \eqref{eq:meanfield W} we obtain
\begin{equation}\label{eq:mf 1}
\partial_t \v(t,x) + \left( \partial_x \v(t,x) \right) {\bf f}(m) = - {\bf h}(x,m) + \frac{1}{2\alpha(t)} \left( \partial_x \v(t,x) \right)^{2}
\end{equation}
and from equation \eqref{eq:mf t1} we obtain using the definition \eqref{eq:def v}
\begin{equation}\label{eq:mf 2}
\partial_t m(t,x) + \partial_x \left( \left( {\bf f}(m) - \frac{1}{\alpha(t)} \partial_x \v(t,x) \right) m(t,x) \right) = 0.
\end{equation}
Provided we may solve the meanfield equations \eqref{eq:mf 1} and \eqref{eq:mf 2} for $(\v,m)$ we obtain
a solution $ {\bf W}$ along the characteristics in $m-$space by the implicit relation \eqref{eq:def v}.
In this sense and under the assumptions ${\bf (A)}$ to ${\bf (C)}$ the meanfield limit of equation \eqref{eq:sym HJB} or respectively equation \eqref{eq:sol gen} is given
by the system of the following equations \eqref{eq:final mf} and \eqref{eq:final mf2}
for $\v:[0,T]\times \mathbb{R} \to \mathbb{R}$
and $m(t) \in \mathcal{P}(\mathbb{R})$ for all $t\in [0,T].$ The terminal condition for $\v$ is given by $\v(T,x)=0.$
\begin{align}
\label{eq:final mf}
\partial_t \v(t,x) + \partial_x \left( \v(t,x) \right) {\bf f}(m(t,x)) - \frac{1}{2\alpha(t)} (\partial_x \v(t,x) )^2 &= - {\bf h}(x,m(t,x)), \\
\partial_t m(t,x) + \partial_x \left( \left( {\bf f}(m(t,x)) - \frac{1}{\alpha(t)} \partial_x \v(t,x) \right) m(t,x) \right) &= 0.
\label{eq:final mf2}
\end{align}
We may also express the control $u_i^{*}$ given by equation \eqref{3:ctrl thu hjb}, i.e.,
$$ u_i^{*}(t) = - \frac{1}{\alpha(t)} \partial_{x_i} V_i(t,x_i(t),X_{-i}(t)),$$
in the meanfield limit. Under assumption {\bf (B)} and using equations \eqref{eq:sol spc} and \eqref{eq:def v},
for any $X$ we have
\begin{align*}
- \frac{1}{\alpha(t)} \partial_{x_i} V_i(t,X) = -\frac{1}{\alpha(t)} \partial_x W(t,x_i,X_{-i}) \sim -\frac{1}{\alpha(t)} \partial_x {\bf W}(t,x,m^{N}_X) = -\frac{1}{\alpha(t)} \partial_x \v(t,x).
\end{align*}
\subsection{ MPC and best reply strategy for the meanfield equation \eqref{eq:final mf}--\eqref{eq:final mf2} }
We obtain the best--reply strategy through an MPC approach. Note that the calculations leading to equation \eqref{eq:final mf}
are independent of the terminal time $T.$ Now, let a time $\tau \in [0,T]$ be fixed
and let $\Delta t>0$ be sufficiently small. Consider the value function on the receding horizon
$(\tau, \tau+\Delta t)$ with initial conditions given at $\tau$ and where we, as before,
add $\Delta t$ as a superscript
to indicate the dependence on the short time horizon:
\begin{equation}\label{eq:vbar}
V^{\Delta t}_i(\tau,Y) = \int_\tau^{\tau+\Delta t} \left( \frac{\alpha_i(s)}2 u_i^{2}(s) + h_i(X(s)) \right) ds.
\end{equation}
Repeating the derivation of the meanfield limit computations for $V^{\Delta t}_i$
we obtain equation \eqref{eq:final mf} defined only for $t \in [\tau,\tau+\Delta t].$
Also, we obtain $\v(\tau+\Delta t,x) = 0.$ A first--order in $\Delta t$ approximation of the solution $\v(\tau,x)$ to the (backward in time)
equation \eqref{eq:final mf} is therefore given by
\begin{equation}\label{eq:mf app}
\v(\tau,x) = {\bf h}(x,m(\tau,x)) + O(\Delta t).
\end{equation}
Substituting this relation in the equation for $m$ in \eqref{eq:final mf2}
we obtain the MPC meanfield equation as
\begin{align}
\label{eq:final mf superfinal}
\partial_t m(t,x) + \partial_x \left( \left( {\bf f}(m(t,x)) - \frac{1}{\alpha(t)} \partial_x {\bf h}(x,m(t,x)) \right) m(t,x) \right) &= 0.
\end{align}
This equation is precisely the same as we had obtained for the
controlled dynamics using the best--reply strategy
derived in the previous section and given by equation \eqref{eq:best reply kinetic}.
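As a numerical sketch only, the MPC meanfield equation \eqref{eq:final mf superfinal} can be discretized with a conservative upwind scheme. The choices ${\bf f}\equiv 0$, $\alpha\equiv 1$ and ${\bf h}(x,m)=x^{2}/2$ are assumptions made for this illustration; they give the best--reply velocity $-\partial_x {\bf h}(x)=-x$:

```python
import numpy as np

# Conservative upwind discretization of the MPC meanfield equation
#   d_t m + d_x( (f(m) - (1/alpha) d_x h(x,m)) m ) = 0,
# with the illustrative (assumed) choices f = 0, alpha = 1, h(x,m) = x^2/2,
# so the best-reply velocity is c(x) = -x (relaxation toward the origin).
L, nx = 4.0, 200
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
m = np.exp(-(x - 1.0) ** 2)      # initial profile centered at x = 1
m /= m.sum() * dx                # normalize to a probability density
c = -x                           # best-reply velocity -d_x h(x)

dt = 0.4 * dx / np.abs(c).max()  # CFL condition
for _ in range(500):
    # upwind flux at the interior cell interfaces, zero flux at the boundary
    ci = 0.5 * (c[:-1] + c[1:])
    flux = np.where(ci > 0, ci * m[:-1], ci * m[1:])
    flux = np.concatenate(([0.0], flux, [0.0]))
    m = m - dt / dx * (flux[1:] - flux[:-1])

print(m.sum() * dx)       # total mass is conserved by the scheme
print(np.sum(x * m) * dx) # the mean relaxes toward the minimizer of h
```

The scheme conserves mass exactly (telescoping fluxes with zero-flux boundaries), and the mean of $m$ decays toward the minimizer of ${\bf h}$, as expected from the best--reply drift.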
\par
\begin{remark}\label{rem1}
The best--reply strategy for a meanfield game corresponds therefore to considering at each time $\tau$
a value function measuring only the costs for a small next time step. Those costs may depend
on the optimal choices of the other agents. However, for a small time horizon the derivative of the running costs
(i.e. $ {\bf h}$) is a sufficient approximation to the otherwise intractable solution to the full system of meanfield
equations~\eqref{eq:final mf}-\eqref{eq:final mf2}.
\end{remark}
We summarize the findings in the following Proposition.
\begin{prop}\label{lemma2}
Assume {\bf (A)} to {\bf (C)} hold true and let $\Delta t>0$ be given. Denote by $ {\bf f}(m)$ and $ {\bf h}(x,m)$
the meanfield limit for $N\to\infty$ of $ f(X)$ and $h(X),$ respectively.
Assume that $m:[0,T] \times \mathbb{R} \to \mathbb{R}$ is such that $m(t,\cdot) \in \mathcal{P}(\mathbb{R})$ and fulfills the equation
\begin{align}\label{lem:mf 2}
\partial_t m(t,x) + \partial_x \left( \left( {\bf f}(m(t,x)) - \frac{1}{\alpha(t)} \partial_x {\bf h}(x,m(t,x)) \right) m(t,x) \right) &= 0.
\end{align}
and let
\begin{align*}
\v(t,x) = {\bf h}(x,m(t,x)).
\end{align*}
Then, for any $t\in [0,T]$ and up to an error of order $O(\Delta t)$ the function $ {\bf W}:[t,t+\Delta t] \times \mathbb{R} \times \mathcal{P}(\mathbb{R}) \to \mathbb{R}$ implicitly defined by
\begin{align*}
{\bf W}(s,x,m(t,x)) = \v(s,x), \; x \in \mathbb{R}, s \in [t,t+\Delta t],
\end{align*}
is a solution to the meanfield equation
\begin{align*}
\partial_s {\bf W}(s,x,m) + \langle \partial_m {\bf W}(s,x,m), {\bf f}(m)-\frac{1}{\alpha(s)} \partial_x {\bf W}(s,x,m) \rangle_{L^2_m} + \partial_x {\bf W}(s,x,m) {\bf f}(m) \\
= - {\bf h}(x,m) + \frac{1}{2\alpha(s)} \left( \partial_x {\bf W}(s,x,m) \right)^{2}, \; {\bf W}(t+\Delta t,x,m)=0.
\end{align*}
The meanfield equation is the formal limit for $N\to\infty$ of an $N$ particle game on the time interval $(t,t+\Delta t)$ and described
by equation \eqref{eq:full dynamics} for $i=1,\dots, N,$ i.e.,
\begin{align*}
\frac{d}{ds} x_i(s) = f_i(X(s)) + u_i(s), \\
u_i(s) = \mbox{ argmin }_{ u:[t,t+\Delta t] \to \mathbb{R} } \int_t^{t+\Delta t} \left( \frac{\alpha_i(r)}2 u^{2}(r) + h_i(X(r)) \right) dr.
\end{align*}
Solutions to the associated HJB equations for $V_i:[t,t+\Delta t] \times \mathbb{R}^{N} \to \mathbb{R}$ are given by $V_i(s,X):= {\bf W}(s,x_i,m^{N-1}_{X_{-i}})$ for $i=1,\dots,N,$
and the optimal control is $u_i^{*}(s) = -\frac{1}{\alpha_i(s)} \partial_{x_i} V_i(s,X(s)).$
\par
Under assumption {\bf (C)} the meanfield equation \eqref{lem:mf 2} coincides with the
formal meanfield equation obtained using the best reply strategy \eqref{eq:best reply kinetic}.
\end{prop}
\newpage
\section{Introduction}
\subsection{Background}
A measure-preserving transformation is an automorphism of a standard Lebesgue space. Formally, it is a quadruple $(X, \mathcal{B}, \mu, T)$, where
\begin{enumerate}
\item $(X, \mathcal{B}, \mu)$ is a measure space isomorphic to the unit interval with the Lebesgue measure on all Borel sets,
\item $T$ is a bijection from $X$ to $X$ such that $T$ and $T^{-1}$ are both $\mu$-measurable and preserve the measure $\mu$.
\end{enumerate}
When the algebra of measurable sets is clear, we will refer to the transformation $(X, \mathcal{B}, \mu, T)$ by $(X, \mu, T)$. If $(X, \mathcal{B}, \mu, T)$ is a measure-preserving transformation, then so is its inverse, $(X, \mathcal{B}, \mu, T^{-1})$.
Two measure-preserving transformations $(X, \mathcal{B}, \mu, T)$ and $(X^\prime, \mathcal{B}^\prime, \mu^\prime, T^\prime)$ are isomorphic if there is a measure isomorphism $\phi$ from $(X, \mathcal{B}, \mu)$ to $(X^\prime, \mathcal{B}^\prime, \mu^\prime)$ such that $\mu$ almost everywhere, $\phi \circ T = T^\prime \circ \phi $.
One of the central problems of ergodic theory, originally posed by von Neumann, is the isomorphism problem: How can one determine whether two measure-preserving transformations are isomorphic? The inverse problem is one of its natural restrictions: How can one determine whether a measure-preserving transformation is isomorphic to its inverse?
In the early 1940s, Halmos and von Neumann \cite{HalmosvonNeumann} showed that ergodic measure-preserving transformations with pure point spectrum are isomorphic iff they have the same spectrum. It immediately follows from this that every ergodic measure-preserving transformation with pure point spectrum is isomorphic to its inverse.
About a decade later, Anzai \cite{Anzai} gave the first example of a measure-preserving transformation not isomorphic to its inverse. Later, Fieldsteel \cite{Fieldsteel} and del Junco, Rahe, and Swanson \cite{delJuncoRaheSwanson} independently showed that Chacon's transformation--one of the earliest examples of what we now call rank-1 transformations--is not isomorphic to its inverse. In the late 1980s, Ageev \cite{Ageev3} showed that a generic measure-preserving transformation is not isomorphic to its inverse.
In 2011, Foreman, Rudolph, and Weiss \cite{ForemanRudolphWeiss} showed that the set of ergodic measure-preserving transformations of a fixed standard Lebesgue space that are isomorphic to their inverse is a complete analytic subset of all measure-preserving transformations on that space. In essence, this result shows that there is no simple (i.e., Borel) condition which is satisfied if and only if an ergodic measure-preserving transformation is isomorphic to its inverse. However, in the same paper they show that the isomorphism relation becomes much simpler when restricted to the generic class of rank-1 transformations. It follows from their work that there exists a simple (i.e., Borel) condition which is satisfied if and only if a rank-1 measure-preserving transformation is isomorphic to its inverse. Currently, however, no such condition is known. In this paper we give a simple condition that is sufficient for a rank-1 transformation to be isomorphic to its inverse and show that for canonically bounded rank-1 transformations, the condition is also necessary.
\subsection{Rank-1 transformations}
\label{comments}
In this subsection we state the definitions and basic facts pertaining to rank-1 transformations that will be used in our main arguments.
We mostly follow the symbolic presentation in \cite{GaoHill1} and \cite{GaoHill2}, but also provide comments that hopefully will be helpful to those more familiar with a different approach to rank-1 transformations. Additional information about the connections between different approaches to rank-1 transformations can be found in the survey article \cite{Ferenczi}.
We first remark that by $\N$ we mean the set of all finite ordinals, including zero: $\{0, 1, 2, \ldots \}$.
Our main objects of study are symbolic rank-1 measure-preserving transformations. Each such transformation is a measure-preserving transformation $(X, \mathcal{B}, \mu, \sigma)$, where $X$ is a closed, shift-invariant subset of $\{0,1\}^\Z$, $\mathcal{B}$ is the collection of Borel sets that $X$ inherits from the product topology on $\{0,1\}^\Z$, $\mu$ is an atomless, shift-invariant (Borel) probability measure on $X$, and $\sigma$ is the shift. To be precise, the shift $\sigma$ is the bijection from $\{0,1\}^\Z$ to $\{0,1\}^\Z$, where $\sigma (x) (i) = x (i+1)$. Since the measure algebra of a symbolic measure-preserving transformation comes from the topology on $\{0,1\}^\Z$, we will omit the reference to that measure algebra and simply refer to a symbolic measure-preserving transformation as $(X, \mu, \sigma)$.
Symbolic rank-1 measure-preserving transformations are usually described by {\em cutting and spacer parameters}. The cutting parameter is a sequence $(r_n : n \in \N)$ of integers greater than 1. The spacer parameter is a sequence of tuples $(s_n : n \in \N)$, where formally $s_n$ is a function from $\{1, 2, \ldots, r_n -1\}$ to $\N$ (note that $s_n$ is allowed to take the value zero). Given
such cutting and spacer parameters, one defines the symbolic rank-1 system $(X, \sigma)$ as follows. First define a sequence of finite words $(v_n : n \in \N)$ by $v_0 =0$ and $$v_{n+1} = v_n 1^{s_n(1)} v_n 1^{s_n(2)}v_n \ldots v_n 1^{s_n(r_n-1)} v_n.$$ The sequence $(v_n: n \in \N)$ is called a {\em generating sequence}. Then let $$X = \{x \in \{0,1\}^\Z: \text{ every finite subword of $x$ is a subword of some $v_n$}\}.$$ It is straightforward to check that $X$ is a closed, shift-invariant subset of $\{0,1\}^\Z$. These symbolic rank-1 systems are treated extensively--as topological dynamical systems--in \cite{GaoHill1}. In order to introduce a nice measure $\mu$ and thus obtain a measurable dynamical system, we make two additional assumptions on the cutting and spacer parameters.
\begin{enumerate}
\item For every $N \in \N$ there exist $n, n^\prime \geq N$ and $0 < i < r_n$ and $0 < i^\prime < r_{n^\prime}$ such that $$s_n(i) \neq s_{n^\prime} (i^\prime).$$
\item $\displaystyle \sup_{n \in \N} \frac{ \text{\# of 1s in $v_n$}}{|v_n|} < 1.$
\end{enumerate}
It is straightforward to show that there is a unique shift-invariant measure on $X$ which assigns measure 1 to the set $\{x \in X: x (0) = 0\}$. As long as the first condition above is satisfied, that measure is atomless. As long as the second condition above is satisfied, that measure is finite. Assuming both conditions are satisfied, the normalization of that measure is called $\mu$ and then $(X, \mu, \sigma)$ is a measure-preserving transformation. We call such an $(X, \mu, \sigma)$ a symbolic rank-1 measure-preserving transformation.
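The generating sequence is easy to compute explicitly. The sketch below uses the well-known parameters of Chacon's transformation, $r_n = 3$ and $s_n = (0,1)$, chosen here only as a familiar example, and checks the two conditions above along the way:

```python
# Build v_{n+1} = v_n 1^{s_n(1)} v_n ... v_n 1^{s_n(r_n - 1)} v_n from the
# cutting and spacer parameters. Chacon's parameters r_n = 3, s_n = (0, 1)
# give v_1 = 0010, v_2 = 0010 0010 1 0010, and so on.
def next_word(v, s):
    # r = len(s) + 1 copies of v, separated by the spacer blocks 1^{s(i)}
    out = v
    for si in s:
        out += "1" * si + v
    return out

v = "0"
for _ in range(5):
    v = next_word(v, (0, 1))   # Chacon: r_n = 3, s_n(1) = 0, s_n(2) = 1

print(v[:13])                  # v_2 = "0010001010010" is a prefix of every later v_n
print(v.count("1") / len(v))   # fraction of 1s in v_5
```

Condition (1) holds since $s_n(1) = 0 \neq 1 = s_n(2)$ for every $n$, and the computed fraction of 1s stays bounded away from 1, in line with condition (2).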
Below are several important remarks about symbolic rank-1 measure-preserving transformations that will be helpful in understanding the arguments in Section 2.
\begin{itemize}
\item {\em Bounded rank-1 transformations:} Suppose $(r_n: n \in \N)$ and $(s_n: n \in \N)$ are cutting and spacer parameters for $(X, \mu, \sigma)$. We say the cutting parameter is bounded if there is some $R \in \N$ such that for all $n \in \N$, $r_n \leq R$. We say that the spacer parameter is bounded if there is some $S \in \N$ such that for all $n \in \N$ and all $0<i < r_n$, $s_n(i) \leq S$.
Let $(X, \mu, \sigma)$ be a symbolic rank-1 measure-preserving transformation. We say that $(X, \mu, \sigma)$ is bounded if there are cutting and spacer parameters $(r_n: n \in \N)$ and $(s_n: n \in \N)$ that give rise to $(X, \mu, \sigma)$ that are both bounded.
\item {\em Canonical cutting and spacer parameters:} There is an obvious bijective correspondence between cutting and spacer parameters and generating sequences, but there are many different generating sequences that give rise to the same symbolic rank-1 system. For example, any proper subsequence of a generating sequence will be a different generating sequence that gives rise to the same symbolic rank-1 system. There is a way, however, described in \cite{GaoHill1}, to associate to each symbolic rank-1 system a unique canonical generating sequence, which in turn gives rise to the canonical cutting and spacer parameters of that symbolic system. The canonical generating sequence was used in \cite{GaoHill1} to fully understand topological isomorphisms between symbolic rank-1 systems; it was also used in \cite{GaoHill2} to explicitly describe when a bounded rank-1 measure-preserving transformation has trivial centralizer.
There is only one fact about canonical generating sequences that is used in our argument. It is this: If $(r_n: n \in \N)$ and $(s_n: n \in \N)$ are the canonical cutting and spacer parameters for $(X, \mu, \sigma)$, then for all $n \in \N$, there are $0<i<r_n$ and $0<j<r_{n+1}$ such that $s_n(i) \neq s_{n+1}(j)$. (See the definition of canonical generating sequence in sections 2.3.2 and 2.3.3 of \cite{GaoHill1}.)
\item {\em Expected occurrences:} Let $(v_n: n \in \N)$ be a generating sequence giving rise to the symbolic system $(X, \sigma)$. Then for each $n \in \N$, there is a unique way to view each $x \in X$ as a disjoint collection of occurrences of $v_n$ separated only by 1s. Such occurrences of $v_n$ in $x$ are called {\em expected} and the following all hold.
\begin{enumerate}
\item For all $x \in X$ and $n \in \N$, every occurrence of $0$ in $x$ is contained in a unique expected occurrence of $v_n$.
\item For all $x \in X$ and $n \in \N$, $x$ has an expected occurrence of $v_n$ beginning at position $i$ iff $\sigma (x)$ has an expected occurrence of $v_n$ beginning at position $(i-1)$.
\item If $x \in X$ has an expected occurrence of $v_n$ beginning at position $i$, and $n^\prime > n$, then the unique expected occurrence of $v_{n^\prime}$ that contains the 0 at position $i$ completely contains the expected occurrence of $v_n$ that begins at $i$.
\item If $x \in X$ has expected occurrences of $v_n$ beginning at positions $i$ and $j$, with $|i - j| < |v_n|$, then $i=j$. In other words, distinct expected occurrences of $v_n$ cannot overlap.
\item If $n>m$ and $x\in X$ has an expected occurrence of $v_n$ beginning at $i$ which completely contains an expected occurrence of $v_m$ beginning at $i + l$, then whenever $j$ is such that $x$ has an expected occurrence of $v_n$ beginning at $j$, that occurrence completely contains an expected occurrence of $v_m$ beginning at $j + l$.
\end{enumerate}
For $n \in \N$ and $i \in \Z$ we define $E_{v_n,i}$ to be the set of all $x \in X$ that have an expected occurrence of $v_n$ beginning at position $i$.
\item {\em Relation to cutting and stacking constructions:} Let $(v_n: n \in \N)$ be a generating sequence giving rise to the symbolic rank-1 measure-preserving system $(X, \mu, \sigma)$. One can take the cutting and spacer parameters associated to $(v_n: n \in \N)$ and build, using a cutting and stacking construction, a rank-1 measure-preserving transformation. This construction involves a sequence of Rokhlin towers. There is a direct correspondence between the base of the $n$th tower in the cutting and stacking construction and the set $E_{v_n, 0}$ in the symbolic system. The height of the $n$th tower in the cutting and stacking construction then corresponds to (i.e., is equal to) the length of the word $v_n$. If the reader is more familiar with rank-1 transformations as cutting and stacking constructions, one can use this correspondence to translate the arguments in Section 2 to that setting.
\item {\em Expectedness and the measure algebra:} Let $(v_n: n \in \N)$ be a generating sequence giving rise to the symbolic rank-1 measure-preserving system $(X, \mu, \sigma)$. If $\mathbb{M}$ is any infinite subset of $\N$, then the collection of sets $\{E_{v_n, i}: n \in \mathbb{M}, i \in \Z\}$ is dense in the measure algebra of $(X, \mu)$. Thus if $A$ is any positive measure set and $\epsilon >0$, there is some $n \in \mathbb{M}$ and $i \in \Z$ such that $$\frac{\mu (E_{v_n, i} \cap A)}{\mu(E_{v_n, i}) } > 1 - \epsilon.$$
\item {\em Rank-1 Inverses:} Let $(r_n: n \in \N)$ and $(s_n: n \in \N)$ be cutting and spacer parameters for the symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$. It is straightforward to check that a simple modification of the parameters results in a symbolic rank-1 measure-preserving transformation that is isomorphic to $(X, \mu, \sigma^{-1})$. For each tuple $s_n$ in the spacer parameter, let $\overline{s_n}$ be the reverse tuple, i.e., for $0 < i < r_n$, $\overline{s_n } (i) = s_n (r_n -i)$. It is easy to check that the cutting and spacer parameters $(r_n: n \in \N)$ and $(\overline{s_n}: n \in \N)$ satisfy the two measure conditions necessary to produce a symbolic rank-1 measure-preserving transformation. If one denotes that transformation by $(\overline{X}, \overline{\mu}, \sigma)$ and defines $\psi : X \rightarrow \overline{X}$ by $\psi (x) (i) = x(-i)$, then it is straightforward to check that $\psi$ is an isomorphism between $(X, \mu, \sigma^{-1})$ and $(\overline{X}, \overline{\mu}, \sigma)$. Thus to check whether a given symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$ is isomorphic to its inverse, one need only check whether it is isomorphic to the symbolic rank-1 measure-preserving transformation $(\overline{X}, \overline{\mu}, \sigma)$.
\end{itemize}
\subsection{The condition for isomorphism and the statement of the theorem}
Let $(r_n: n \in \N)$ and $(s_n: n \in \N)$ be cutting and spacer parameters for the symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$. Suppose that there is an $N \in \N$ such that for all $n \geq N$, $s_n = \overline{s_n}$. Let $\phi : X \rightarrow \overline{X}$ be defined so that $\phi (x)$ is obtained from $x$ by replacing every expected occurrence of $v_N$ by $\overline{v_N}$ (the reverse of $v_N$). It is straightforward to check that $\phi$ is an isomorphism between $(X, \mu, \sigma)$ and $(\overline{X}, \overline{\mu}, \sigma)$, thus showing that $(X, \mu, \sigma)$ is isomorphic to its inverse $(X, \mu, \sigma^{-1})$.
As an example, Chacon2 is the rank-one transformation that can be defined by $v_{n+1} = v_n 1^n v_n$. (In the cutting and stacking setting, Chacon2 is usually described by $B_{n+1} = B_n B_n 1$, but that is easily seen to be equivalent to $B_{n+1} = B_n 1^n B_n$.) In this case $r_n = 2$ and $s_n(1)=n$, for all $n$. Since $s_n = \overline{s_n}$ for all $n$, Chacon2 is isomorphic to its inverse.
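As an informal aside (not part of the formal development), the recursion defining Chacon2 can be carried out mechanically; the sketch below builds the words $v_n$ from $v_0 = 0$ using $v_{n+1} = v_n 1^n v_n$.

```python
# Build the Chacon2 generating words v_0, v_1, ..., v_n from the
# recursion v_{n+1} = v_n 1^n v_n, starting with v_0 = "0".
def chacon2_words(n):
    words = ["0"]
    for m in range(n):
        v = words[-1]
        words.append(v + "1" * m + v)
    return words

# v_1 = v_0 1^0 v_0 = "00" and v_2 = v_1 1^1 v_1 = "00100".
# Here r_n = 2 and s_n = (n), which trivially equals its reverse.
```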
\begin{theorem}
\label{theorem}
Let $(r_n: n \in \N)$ and $(s_n: n \in \N)$ be the canonical cutting and spacer parameters for the symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$. If those parameters are bounded, then $(X, \mu, \sigma)$ is isomorphic to $(X, \mu, \sigma^{-1})$ if and only if there is an $N \in \N$ such that for all $n \geq N$, $s_n = \overline{s_n}$.
\end{theorem}
We remark that in \cite{GaoHill1}, the author and Su Gao have completely characterized when two symbolic rank-1 systems are {\em topologically} isomorphic, and as a corollary have a complete characterization of when a symbolic rank-1 system is {\em topologically} isomorphic to its inverse. A topological isomorphism between symbolic rank-1 systems is a homeomorphism between the underlying spaces that commutes with the shift. Since the underlying space of each symbolic rank-1 system admits at most one atomless, shift-invariant probability measure, every topological isomorphism between symbolic rank-1 systems is also a measure-theoretic isomorphism. On the other hand, there are symbolic rank-1 systems that are measure-theoretically isomorphic, but not topologically isomorphic.
We note here the main difference between these two settings. Suppose $\phi$ is an isomorphism--either a measure-theoretic isomorphism or a topological isomorphism--between two symbolic rank-1 systems $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$.
Let $(v_n : n \in \N)$ and $(w_n : n \in \N)$ be generating sequences that give rise to $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$, respectively.
One can consider a set $E_{w_m, 0} \subseteq Y$ and its pre-image, call it $A$, under $\phi$. If $\phi$ is a measure-theoretic isomorphism then one can find some $E_{v_n, i}$ so that $$\frac{\mu (E_{v_n, i} \cap A)}{\mu(E_{v_n, i}) } > 1 - \epsilon.$$
However, if $\phi$ is in fact a topological isomorphism, then one can find some $E_{v_n, i}$ so that $$E_{v_n, i} \subseteq A.$$
The stronger condition in the case of a topological isomorphism is what makes possible the analysis done by the author and Gao in \cite{GaoHill1}. In this paper, we are able to use the weaker condition, together with certain ``bounded'' conditions on the generating sequences $(v_n : n \in \N)$ and $(w_n: n \in \N)$, to achieve our results.
\section{Arguments}
We begin with a short subsection introducing two new pieces of notation. Then we prove a general proposition that can be used to show that certain symbolic rank-1 measure-preserving transformations are not isomorphic. Finally, we show how to use the general proposition to prove the non-trivial direction of Theorem \ref{theorem}.
\subsection{New notation}
The first new piece of notation is $*$, a binary operation on all finite sequences of natural numbers. The second is $\perp$, a relation (signifying incompatibility) between finite sequences of natural numbers that have the same length.
{\bf The notation $*$:} We will first describe the reason for introducing this new notation. We will then give the formal definition of $*$ and illustrate that definition with an example. Suppose $(r_n :n \in \N)$ and $(s_n: n \in \N)$ are cutting and spacer parameters for the symbolic system $(X, \mu, \sigma)$ and that $(v_n: n \in \N)$ is the generating sequence corresponding to those parameters. Fix $n_0 > 0$ and consider the generating sequence $(w_n : n \in \N)$, defined as follows.
$$w_n =
\begin{cases}
v_n, \quad &\text{ if } n < n_0 \\
v_{n+1}, \quad &\text{ if } n \geq n_0\\
\end{cases}
$$
It is clear that $(w_n: n \in \N)$ is a subsequence of $(v_n: n \in \N)$, missing only the element $v_{n_0}$; thus, $(w_n: n \in \N)$ gives rise to the same symbolic system $(X, \mu, \sigma)$. We would like to be able to easily describe the cutting and spacer parameters that correspond to the generating sequence $(w_n:n \in \N)$. Let $(r_n^\prime :n \in \N)$ and $(s_n^\prime : n \in \N)$ be those cutting and spacer parameters. It is clear that for $n< n_0$ we have $r_n^\prime = r_n$ and $s_n^\prime = s_n$. It is also clear that for $n>n_0$ we have $r_n^\prime = r_{n+1}$ and $s_n^\prime = s_{n+1}$. It is straightforward to check that $r_{n_0}^\prime = r_{n_0+1} \cdot r_{n_0}$. The definition below for $*$ is precisely what is needed so that $s_{n_0}^\prime = s_{n_0+1} * s_{n_0}$.
Here is the definition. Let $s_1$ be any function from $\{1, 2, \ldots, r_1 -1\}$ to $\N$ and let $s_2$ be any function from $\{1, 2, \ldots, r_2 -1\}$ to $\N$. We define $s_2 * s_1$, a function from $\{1, 2, \ldots, r_2 \cdot r_1 -1\}$ to $\N$, as follows.
$$(s_2 *s_1 )(i) =
\begin{cases}
s_1(k), \quad &\text{ if } 0 < k < r_1 \text{ and } i \equiv k \mod r_1 \\
s_2(i/r_1), \quad &\text{ if } i \equiv 0 \mod r_1 \\
\end{cases}
$$
It is important to note, and straightforward to check, that the operation $*$ is associative.
To illustrate, suppose that $s_1$ is the function from $\{1,2,3\}$ to $\N$ with $s_1 (1) = 0$, $s_1 (2) = 1$, and $s_1 (3) = 0$ and that $s_2$ is the function from $\{1,2\}$ to $\N$ such that $s_2 (1) = 5$ and $s_2(2) = 6$; we abbreviate this by simply saying that $s_1 = (0,1,0)$ and $s_2 = (5,6)$. Then $s_2 * s_1 = (0,1,0, 5, 0,1,0,6,0,1,0)$.
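To make the bookkeeping concrete, here is a minimal computational sketch of the operation $*$, with a spacer function on $\{1, \ldots, r-1\}$ represented as the tuple of its values; it reproduces the worked example above and checks associativity on an instance.

```python
def star(s2, s1):
    """Compute s2 * s1, where a spacer s with domain {1,...,r-1}
    is represented as the tuple (s(1), ..., s(r-1))."""
    r1, r2 = len(s1) + 1, len(s2) + 1
    out = []
    for i in range(1, r2 * r1):
        if i % r1 != 0:
            out.append(s1[i % r1 - 1])   # i = k mod r1 with 0 < k < r1
        else:
            out.append(s2[i // r1 - 1])  # i = 0 mod r1
    return tuple(out)

# The example from the text: s1 = (0,1,0), s2 = (5,6).
assert star((5, 6), (0, 1, 0)) == (0, 1, 0, 5, 0, 1, 0, 6, 0, 1, 0)
# Associativity, checked on one instance:
assert star(star((5, 6), (0, 1, 0)), (2,)) == star((5, 6), star((0, 1, 0), (2,)))
```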
{\bf The notation $\perp$:} Suppose $s$ and $s^\prime$ are both functions from $\{1, 2, \ldots, r-1\}$ to $\N$. We say that $s$ is {\em compatible} with $s^\prime$ if there exists a function $c$ from $\{1\}$ to $\N$ so that $s$ is a subsequence of $c*s^\prime$. Otherwise we say that $s$ is incompatible with $s^\prime$ and write $s\perp s^\prime$.
To illustrate, consider $s = (0,1,0)$ and $s^\prime = (0,0,1)$. Then $s$ is compatible with $s^\prime$ because if $c = 0$, then $c * s^\prime = (0,0,1,0,0,0,1)$ and $(0,1,0)$ does occur as a subsequence of $(0,{ 0,1,0},0,0,1)$. If $s^{\prime\prime} = (0,1,2)$, then $s^{\prime}$ is compatible with $s^{\prime\prime}$ (again let $c=0$), but $s \perp s^{\prime\prime} $, because $(0,1,0)$ can never be a subsequence of $(0,1,2,c,0,1,2)$.
Though not used in our arguments, it is worth noting, and is straightforward to check, that $s \perp s^\prime$ iff $s^\prime \perp s$. (It is important here that $s$ and $s^\prime$ have the same length.)
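The compatibility check is likewise finitary and can be sketched as follows (again with spacer functions as tuples): the single spacer $c$ occupies exactly one position of $c * s^\prime$, and since $c$ may be chosen freely, that position is treated as a slot matching anything.

```python
def compatible(s, sp):
    """Decide whether s is compatible with sp (both of length r - 1).
    s is compatible with sp iff, for some choice of the single spacer
    value c, s occurs as a block of consecutive entries of c * sp.
    The entry of c * sp at position r is c itself, so it is a free slot."""
    r = len(s) + 1
    for k in range(r + 1):  # possible offsets of s inside c * sp
        if all(s[l - 1] == sp[(k + l) % r - 1]
               for l in range(1, r) if (k + l) % r != 0):
            return True
    return False

# The examples from the text:
assert compatible((0, 1, 0), (0, 0, 1))      # s is compatible with s'
assert compatible((0, 0, 1), (0, 1, 2))      # s' is compatible with s''
assert not compatible((0, 1, 0), (0, 1, 2))  # s and s'' are incompatible
# Incompatibility is symmetric for sequences of the same length:
assert not compatible((0, 1, 2), (0, 1, 0))
```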
We now state the main point of this definition of incompatibility. This fact will be crucial in the proof of Proposition 2.1. Suppose $(r_n^\prime : n \in \N)$ and $(s_n^\prime : n \in \N)$ are cutting and spacer parameters associated to the symbolic rank-1 measure-preserving transformation $(Y, \nu, \sigma)$ and that $(w_n : n \in \N)$ is the generating sequence associated to those parameters.
If $n$ is such that $r_n = r_n^\prime$ and $s_n \perp s_n^\prime$, then no element $y \in Y$ contains an occurrence of $$w_n 1^{s_n (1)} w_n 1^{s_n(2)} \ldots 1^{s_n(r_n -1)} w_n$$ where each of the demonstrated occurrences of $w_n$ is expected.
Indeed, suppose that, beginning at position $i$, some $y \in Y$ did have such an occurrence of $w_n 1^{s_n (1)} w_n 1^{s_n(2)} \ldots 1^{s_n(r_n -1)} w_n$. The expected occurrence of $w_n$ beginning at $i$ must be completely contained in some expected occurrence of $w_{n+1}$, say one beginning at position $j$. We know that the expected occurrence of $w_{n+1}$ beginning at position $j$ contains exactly $r_n$-many expected occurrences of $w_n$. Let $1 \leq l \leq r_n$ be such that the expected occurrence of $w_n$ beginning at position $i$ is the $l$th expected occurrence of $w_n$ in the expected occurrence of $w_{n+1}$ beginning at position $j$. If $l=1$, then $s_n = s_n^\prime$, which implies that $s_n$ is a subsequence of $c*s_n^\prime$ for any $c$. If, on the other hand, $1< l \leq r_n$, then letting $c = s_n (r_n - l +1)$, we have that $s_n$ is a subsequence of $c*s_n^\prime$. In either case $s_n$ would be compatible with $s_n^\prime$, a contradiction.
\subsection{A general proposition guaranteeing non-isomorphism}
\begin{proposition}
\label{prop}
Let $(r_n: n \in \N)$ and $(s_n: n \in \N)$ be the cutting and spacer parameters for a symbolic rank-1 system $(X, \mu, \sigma)$ and let $(r^\prime_n: n \in \N)$ and $(s^\prime_n: n \in \N)$ be the cutting and spacer parameters for a symbolic rank-1 system $(Y, \nu, \sigma)$. Suppose the following hold.
\begin{enumerate}
\item For all $n$, $r_n = r^\prime_n$ and $\displaystyle \sum_{0 < i < r_n} s_{n}(i) = \sum_{0 < i < r_n} s^\prime_{n}(i)$.
\item There is an $S \in \N$ such that for all $n$ and all $0 < i < r_n$, $$s_n(i) \leq S \textnormal{ and } s^\prime_n(i) \leq S.$$
\item There is an $R \in \N$ such that for infinitely many $n$, $$r_n \leq R \textnormal{ and } s_n \perp s^\prime_n.$$
\end{enumerate}
Then $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$ are not measure-theoretically isomorphic.
\end{proposition}
\begin{proof}
Let $(v_n: n \in \N)$ be the generating sequence associated to the cutting and spacer parameters $(r_n: n \in \N)$ and $(s_n: n \in \N)$. Let $(w_n: n \in \N)$ be the generating sequence associated to the cutting and spacer parameters $(r^\prime_n: n \in \N)$ and $(s^\prime_n: n \in \N)$. Condition (1) implies that for all $n$, $|v_n| = |w_n|$. Now suppose, towards a contradiction, that $\phi$ is an isomorphism between $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$.
First, choose $m \in \N$ so that $|v_m| = |w_m|$ is greater than the $S$ from condition (2). Next, consider the positive $\mu$-measure set $$\phi^{-1} (E_{w_m, 0}) = \{x \in X: \phi (x) \text{ has an expected occurrence of $w_m$ at 0}\}.$$ Let $\mathbb{M} = \{n \in \N: \textnormal{$r_n \leq R $ and $s_n \perp s^\prime_n$}\}$, where $R$ is from condition (3), and note that $\mathbb{M}$ is infinite. We can then find $n \in \mathbb{M}$ and $k \in \N$ such that $$\displaystyle \frac{\mu (E_{v_n, k} \cap \phi^{-1} (E_{w_m, 0}))}{\mu(E_{v_n, k}) } > 1- \frac{1}{R}.$$
One can loosely describe the above inequality by saying: Most $x \in X$ that have an expected occurrence of $v_n$ beginning at position $k$ are such that $\phi (x)$ has an expected occurrence of $w_m$ beginning at position 0.
We say that an expected occurrence of $v_n$ in any $x \in X$ (say it begins at $i$) is a {\em good} occurrence of $v_n$ if $\phi (x)$ has an expected occurrence of $w_m$ beginning at position $i-k$. In this case we say the good occurrence of $v_n$ beginning at $i$ {\em forces} the expected occurrence of $w_m$ beginning at position $i-k$. Note that an expected occurrence of $v_n$ beginning at position $i$ in $x \in X$ is good iff $\sigma^{i} (x) \in E_{v_n, k} \cap \phi^{-1} (E_{w_m, 0})$, since $\phi$ commutes with $\sigma$. A simple application of the ergodic theorem shows that $\mu$ almost every $x \in X$ satisfies $$\displaystyle \lim_{N \rightarrow \infty} \frac{|\{i \in [-N,N] : \textnormal{ $x$ has a good occurrence of $v_n$ at $i$}\}|}{|\{i \in [-N,N] : \textnormal{ $x$ has an expected occurrence of $v_n$ at $i$}\}|} > 1- \frac{1}{R}.$$
Since $ r_n \leq R$, this implies that $\mu$ almost every $x \in X$ contains an expected occurrence of $v_{n+1}$ such that each of the $r_n$-many expected occurrences of $v_n$ that it contains is good. We say that such an occurrence of $v_{n+1}$ is {\em totally good}.
Let $x\in X$ and $i \in \Z$ be such that $x$ has a totally good occurrence of $v_{n+1}$ beginning at $i$. There are $r_n$-many expected occurrences of $v_n$ in the expected occurrence of $v_{n+1}$ beginning at $i$ and each of them forces an expected occurrence of $w_m$ in $\phi(x)$. The first of these forced expected occurrences of $w_m$ in $\phi (x)$ begins at position $i - k$ and must be part of some expected occurrence of $w_n$, say beginning at position $i^\prime$. We claim that, in fact, $\phi (x)$ must have an occurrence of $$w_n 1^{s_n (1)} w_n 1^{s_n(2)} \ldots 1^{s_n(r_n -1)} w_n$$ beginning at $i^\prime$, where each of the demonstrated occurrences of $w_n$ is expected. This will contradict the fact that $s_n \perp s_n^\prime$.
Proving the claim involves an argument that is repeated $r_n -1$ times. The next paragraph contains the first instance of that argument, showing that the expected occurrence of $w_n$ beginning at $i^\prime$ in $\phi (x)$ is immediately followed by $1^{s_n(1)}$ and then another expected occurrence of $w_n$, this one containing the second forced occurrence of $w_m$. The next instance of the argument would show that the expected occurrence of $w_n$ beginning at $i^\prime + |w_n| + s_n(1)$ is immediately followed by $1^{s_n(2)}$ and then another expected occurrence of $w_n$, this one containing the third forced occurrence of $w_m$. After the $r_n -1$ instances of that argument, the claim would be proven.
Here is the first instance of the argument: We know that $\phi (x)$ has an expected occurrence of $w_n$ beginning at $i^\prime$ and that this expected occurrence of $w_n$ contains the expected occurrence of $w_m$ beginning at position $$i-k = (i^\prime) + (i - k - i^\prime).$$ Thus, by point (5) of the remark about expected occurrences in Section \ref{comments}, if $j \in \Z$ is such that $\phi (x)$ has an expected occurrence of $w_n$ beginning at position $j$, that occurrence completely contains an expected occurrence of $w_m$ beginning at position $j + (i - k - i^\prime)$.
The expected occurrence of $w_n$ beginning at position $i^\prime$ must be followed by $1^t$ and then another expected occurrence of $w_n$, for some $0 \leq t \leq S$. The expected occurrence of $w_n$ beginning at position $i^\prime + |w_n| + t$ must contain an expected occurrence of $w_m$ beginning at position $$(i -k) + |w_n| + t = (i^\prime + |w_n| + t) + (i - k - i^\prime).$$
But we also know that the expected occurrence of $v_n$ beginning at position $i + |v_n| + s_n(1)$ in $x$ forces an expected occurrence of $w_m$ beginning at $(i-k) + |w_n| + s_n(1)$ in $\phi (x)$. Since $0 \leq s_n(1),t \leq S$ it must be that $0 \leq |t - s_n(1)| \leq S$ and thus, since $|w_m| > S$, the expected occurrences of $w_m$ beginning at positions $(i-k) + |w_n| + s_n(1)$ and $(i-k) + |w_n| + t$ must overlap. Since distinct expected occurrences of $w_m$ cannot overlap, it must be the case that $s_n(1)=t$ (see point (4) of the remark about expected occurrences in Section \ref{comments}). Thus, the expected occurrence of $w_n$ beginning at $i^\prime$ in $\phi (x)$ is immediately followed by $1^{s_n(1)}$ and then another expected occurrence of $w_n$, this one containing the second forced occurrence of $w_m$.
\end{proof}
\subsection{Proving the theorem}
We start this subsection with a comment and a simple lemma. The comment is that if $(r_n: n \in \N)$ and $(s_n: n \in \N)$ are the canonical cutting and spacer parameters for a symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$, then for all $n \in \N$, $s_{n+1}*s_n$ is not constant. This is simply a restatement, using the notation $*$, of the last sentence in the remark about canonical generating sequences in Section \ref{comments}.
\begin{lemma}
\label{lemma}
Let $s_1$ be a function from $\{1, 2, \ldots, r_1-1\}$ to $\N$ such that $s_1 \neq \overline{s_1}$. Let $s_2$ be a function from $\{1, 2, \ldots, r_2-1\}$ to $\N$ that is not constant. Then $s_2 * s_1 \perp \overline{s_2 * s_1}$.
\end{lemma}
\begin{proof}
Suppose, towards a contradiction, that $s_2 * s_1$ is compatible with $\overline{s_2 * s_1}$. Then there is a function $c$ from $\{1\}$ to $\N$ so that $s_2 * s_1$ is a subsequence of $$c * (\overline{s_2 * s_1}) = c * (\overline{s_2} * \overline{s_1}) = (c * \overline{s_2}) * \overline{s_1}.$$ In other words, there is some $0 \leq k \leq r_2 \cdot r_1$ such that for all $0 < l < r_2 \cdot r_1$, $(s_2 * s_1)(l) = ((c * \overline{s_2}) * \overline{s_1})(k+l) $. We now have two cases.
Case 1: $k \equiv 0 \mod r_1$. Then for all $0<i<r_1$, $$s_1(i) = (s_2 * s_1) (i) = ((c * \overline{s_2}) * \overline{s_1}) (k+i) = \overline{s_1} (i).$$ Thus $s_1 = \overline{s_1}$, which is a contradiction.
Case 2: There is some $0<m<r_1$ such that $k+m \equiv 0 \mod r_1$. For all $0 \leq d < r_2$, we have $(s_2 * s_1)(m + dr_1) = s_1 (m) $. But also, $$(s_2 * s_1)(m + dr_1) = ((c * \overline{s_2}) * \overline{s_1}) (k + m + dr_1) = (c * \overline{s_2}) \left(\frac{k + m}{r_1} + d\right) .$$ This implies that the function $c*\overline{s_2}$ is constant (taking the value $s_1 (m)$) for $r_2$-many consecutive inputs. This implies that $\overline{s_2}$ must be constant, which is a contradiction.
\end{proof}
We will now prove the non-trivial direction of the theorem.
\begin{proof}
Let $(\tilde{r}_n: n \in \N)$ and $(\tilde{s}_n: n \in \N)$ be the canonical cutting and spacer parameters for a symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$. Suppose both parameters are bounded; let $\tilde{R}$ be such that for all $n \in \N$, $\tilde{r}_n \leq \tilde{R}$ and let $S$ be such that for all $n \in \N$ and all $0<i<\tilde{r}_n$, $\tilde{s}_n(i) \leq S$. Also, assume that for infinitely many $n$, $\tilde{s}_n \neq \overline{\tilde{s}_n}$. To prove the non-trivial direction of the theorem, we need to show that $(X, \mu, \sigma)$ is not isomorphic to its inverse.
Let $(u_n :n \in \N)$ be the generating sequence corresponding to the parameters $(\tilde{r}_n: n \in \N)$ and $(\tilde{s}_n: n \in \N)$. We will now describe a subsequence $(v_n: n \in \N)$ of $(u_n:n \in \N)$ and let $(r_n:n \in \N)$ and $(s_n: n \in \N)$ be the cutting and spacer parameters corresponding to the generating sequence $(v_n: n \in \N)$ (which also gives rise to $(X, \mu, \sigma)$). First, let $v_0 = u_0 =0$. Now, suppose $v_{2n}$ has been defined as $u_{k}$. Let $m>k$ be as small as possible so that $\tilde{s}_m \neq \overline{\tilde{s}_m}$, and define $v_{2n+1} = u_m$ and $v_{2n+2} = u_{m+3}$. It is very important to note here that $$r_{2n+1} = \tilde{r}_{m+2} \cdot \tilde{r}_{m+1} \cdot \tilde{r}_{m} $$
and that $$s_{2n+1} = \tilde{s}_{m+2} * \tilde{s}_{m+1} * \tilde{s}_{m}. $$ This has two important consequences. First, we have that for $n \in \N$, $r_{2n+1} \leq \tilde{R}^3$. By the remark before Lemma \ref{lemma}, we also have that $\tilde{s}_{m+2} * \tilde{s}_{m+1} $ is not constant and thus, by Lemma \ref{lemma} (applied with $s_2 = \tilde{s}_{m+2} * \tilde{s}_{m+1}$ and $s_1 = \tilde{s}_m$, using associativity of $*$), $\tilde{s}_{m+2} * \tilde{s}_{m+1} * \tilde{s}_{m} \perp \overline{\tilde{s}_{m+2} * \tilde{s}_{m+1} * \tilde{s}_{m} }$; put another way, $s_{2n+1} \perp \overline{s_{2n+1}}$.
Now for each $n$, let $r_n^\prime = r_n$ and $s_n^\prime = \overline{s_n}$. Let $(Y, \nu, \sigma)$ be the symbolic rank-1 transformation corresponding to the cutting and spacer parameters $(r_n^\prime : n \in \N)$ and $(s_n^\prime : n \in \N)$. As mentioned in the remark on rank-1 inverses at the end of Section \ref{comments}, the transformation $(Y, \nu, \sigma)$ is isomorphic to the inverse of $(X, \mu, \sigma)$. Thus to show that $(X, \mu, \sigma)$ is not isomorphic to its inverse, we can show that $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$ are not isomorphic.
To do this we will apply Proposition \ref{prop}. We need to check that the following three conditions hold.
\begin{enumerate}
\item For all $n$, $r_n = r^\prime_n$ and $\displaystyle \sum_{0 < i < r_n} s_{n}(i) = \sum_{0 < i < r_n} s^\prime_{n}(i)$.
\item There is an $S \in \N$ such that for all $n$ and all $0 < i < r_n$, $$s_n(i) \leq S \textnormal{ and } s^\prime_n(i) \leq S.$$
\item There is an $R \in \N$ such that for infinitely many $n$, $$r_n \leq R \textnormal{ and } s_n \perp s^\prime_n.$$
\end{enumerate}
Condition (1) above follows immediately from the fact that for all $n \in \N$, $r_n^\prime = r_n$ and $s_n^\prime = \overline{s_n}$. Condition (2) follows from the fact that each $s_n(i)$ is equal to some $\tilde{s}_m (j)$, which is less than or equal to $S$. Finally, to verify condition (3), let $R=\tilde{R}^3$ and note that, as remarked above, for all $n \in \N$ we have that $r_{2n+1} \leq \tilde{R}^3$ and $s_{2n+1} \perp \overline{s_{2n+1}}$. We now apply Proposition \ref{prop} and conclude that $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$ are not isomorphic. Thus $(X, \mu, \sigma)$ is not isomorphic to its inverse.
\end{proof}
\bibliographystyle{amsalpha}
\section{Introduction}
In model theory, strong types and Lascar types are important objects for understanding invariant equivalence relations in a theory. Let $T=T^{eq}$ be a theory and ${\mathcal C}$ be a monster model of $T$. Let $A$ be a small set in ${\mathcal C}$. We say that an equivalence relation $E$ is {\em finite} if it has finitely many $E$-classes, {\em bounded} if the number of $E$-classes is less than the cardinality of ${\mathcal C}$, and {\em $A$-invariant} if for all $a,b$ and $f\in\operatorname{Aut}_A({\mathcal C})$, $(a,b)\in E$ implies $(f(a),f(b))\in E$. We say finite tuples $a,b$ have the same {\em strong (or Shelah) type over $A$}, written $a\equiv_A^s b$, if for any $A$-definable finite equivalence relation $E$ over ${\mathcal C}^{|a|}$, $(a,b)\in E$. It is well known that $a\equiv_A^s b$ if and only if $a\equiv_{\operatorname{acl}(A)} b$. Thus we take $\operatorname{stp}(a/A)$, the strong type of $a$ over $A$, to be $\operatorname{tp}(a/\operatorname{acl}(A))$, and it is the orbit of $a$ under the action of $\operatorname{Aut}_{\operatorname{acl}(A)}({\mathcal C})$. So, it makes sense to consider a strong type of an infinite tuple over $A$. The Lascar types over $A$ are defined as follows: We say tuples $a,b$, possibly of infinite length, have the same {\em Lascar type over $A$}, written $a\equiv_A^L b$, if for any $A$-invariant bounded equivalence relation $E$ over ${\mathcal C}^{|a|}$, $(a,b)\in E$. As with strong types, a well-known fact is that $a\equiv_A^L b$ if and only if there is a finite sequence $(a_0,a_1,\ldots,a_n)$ in ${\mathcal C}^{|a|}$ such that $a_0=a$, $a_n=b$, and $a_i\equiv_{M_i} a_{i+1}$ for a submodel $M_i$ of ${\mathcal C}$ containing $A$ for each $i=0,\ldots,n-1$. 
So $\operatorname{Ltp}(a/A)$, the Lascar type of $a$ over $A$, is the orbit of $a$ under the action of $\operatorname{Autf}_{A}({\mathcal C})$, the group generated by automorphisms fixing some submodel containing $A$ pointwise, and $\operatorname{Autf}_{A}({\mathcal C})$ is a normal subgroup of $\operatorname{Aut}_A({\mathcal C})$. Hence the group $\operatorname{Gal}_L({\mathcal C};A)=\operatorname{Aut}_A({\mathcal C})/\operatorname{Autf}_A({\mathcal C})$, called the Lascar group over $A$, is well defined. One asks when $\operatorname{stp}=\operatorname{Ltp}$ in $T$, and this is determined by $\operatorname{Gal}_L({\mathcal C};A)$ for $A=\operatorname{acl}(A)$.
The two notions of strong and Lascar types are also important in the context of classification theory. In stable theories, each type over a model $A$ is stationary; it has a unique non-forking extension over each $B$ containing $A$. This stationarity over models characterizes stable theories: when a theory has a ternary invariant independence relation satisfying symmetry, transitivity, extension, finite and local characters, and stationarity over models, it is well known that the theory is stable. In simple theories, stationarity over models is replaced by type amalgamation over a model (or $3$-amalgamation): for a model $A$ and $a_1,a_2,b_1,b_2$ with $a_1\equiv_A a_2$, if $\{a_i,b_i\}$ for $i=1,2$ and $\{b_1,b_2\}$ are independent over $A$, then there is $a_3\equiv_{Ab_i} a_i$ for $i=1,2$ such that $\{a_3,b_1,b_2\}$ is independent over $A$. One generalizes $3$-amalgamation to $n$-amalgamation for $n\ge 3$; these are called generalized amalgamation properties. We assume $T=T^{heq}$ in the case that $T$ is simple. It is well known that each strong type over $A$, which is a type over $\operatorname{acl}(A)$, is stationary in a stable theory $T$, and that if $T$ is simple, $a\equiv_A^L b$ if and only if $a\equiv_{\operatorname{bdd}(A)}b$, and each type over $\operatorname{bdd}(A)$ satisfies $3$-amalgamation, where $\operatorname{bdd}(A)$ is the set of elements in ${\mathcal C}^1$ whose orbit under $\operatorname{Aut}_A({\mathcal C})$ has cardinality less than $|{\mathcal C}|$. In \cite{GKK}, J. Goodrick, B. Kim, and A. Kolesnikov introduced homology groups related to generalized amalgamation properties for strong types in the context of rosy theories. They computed the first homology groups in some cases in \cite{GKK} (they were all zero), and in stable theories (where, of course, $3$-amalgamation holds), they gave an explicit description of the higher homology groups in \cite{GKK1}\cite{GKK2}. 
Not much was known about first homology groups in general. We expected that the first homology group of a strong type $p$ in a rosy theory is related to a Lascar group. Indeed, in \cite{KKL}, B. Kim, S. Kim, and the author showed that for a Lascar strong type $p$, the first homology group of $p$ is always zero; so even though a Lascar strong type in a rosy theory need not have $3$-amalgamation, it still satisfies a more complicated form of amalgamation.
In this paper, instead of the original definition of the first homology group in \cite{GKK}, we work with a restricted one, called the first homology group {\em over $A$} of a strong type over $A$ for $A=\operatorname{acl}(A)$, and we give a canonical surjective homomorphism from the Lascar group over $A$ onto these first homology groups of strong types over $A$ in rosy theories. We describe its kernel via an invariant bounded equivalence relation whose classes are described by a certain subgroup of the automorphism group. From this description of the kernel, we deduce that the cardinality of the first homology group of a strong type is always either one or at least $2^{\aleph_0}$. Finally, we give two examples of rosy theories having a strong type over $\operatorname{acl}^{eq}(\emptyset)$ with a non-trivial first homology group, exactly isomorphic to their Lascar groups over $\operatorname{acl}^{eq}(\emptyset)$. From the known examples in \cite{GKK}\cite{KKL} and our two examples, we conjecture that these first homology groups of strong types over $A$ are isomorphic to the abelianization of the Lascar group over $A=\operatorname{acl}(A)$, under the assumption that the algebraic closures of non-empty small sets are again models in a rosy theory $T$.\\
\medskip
We review some notions and facts from
\cite{GKK1},\cite{GKK} and \cite{KKL}. First we recall the definitions
of simplices and the corresponding homology groups introduced in
\cite{GKK1},\cite{GKK}. {\bf Throughout we work with a large saturated model ${\mathcal C}(={\mathcal C}^{\operatorname{eq}})$ whose theory $T(=T^{eq})$ is rosy with the thorn-independence relation} $\mathop{\smile \hskip -0.9em ^| \ }$ {\bf on the small sets of ${\mathcal C}$. For a small $A\subset {\mathcal C}$, we denote the algebraic closure and definable closure in the home sort as $\operatorname{acl}_{{\mathcal C}}(A)$ and $\operatorname{dcl}_{{\mathcal C}}(A)$, and in the imaginary sort as $\operatorname{acl}_{{\mathcal C}}^{eq}(A)$ and $\operatorname{dcl}_{{\mathcal C}}^{eq}(A)$. If there is no risk of confusion, we shall write $\operatorname{acl}(A)$ and $\operatorname{dcl}(A)$ in both cases. We fix a small algebraically closed set $A=\operatorname{acl}(A)$ and $p(x)\in S(A)$ (with possibly infinite $x$)}. When we say $T$ is simple, we consider $T=T^{heq}$.
\medskip
Let ${\mathcal S}_A$ denote the category, where
\begin{enumerate}
\item The objects are small subsets of ${\mathcal C}$ containing $A$, and
\item The morphisms are elementary maps which fix $A$ pointwise.
\end{enumerate}
And for a finite $s \subset \omega$, the power set ${\mathcal P}(s)$ of $s$ forms a category as an ordered set:
\begin{enumerate}
\item $\operatorname{Ob}({\mathcal P}(s))={\mathcal P}(s)$, and
\item For $u, v \in {\mathcal P}(s)$, $\operatorname{Mor}(u,v)=\{\iota_{u,v}\}$, where $\iota_{u,v}$ is the single inclusion map for $u\subseteq v$, or $=\emptyset$ otherwise.
\end{enumerate}
For a functor $f:{\mathcal P}(s) \to {\mathcal S}_A$ and $u\subseteq v \in {\mathcal P}(s)$, we
write $f^u_v:=f(\iota_{u,v})\in \operatorname{Mor}(f(u),f(v))$ and $f^u_v(u):=f^u_v(f(u))\subseteq f(v)$.
\begin{definition}
A functor $f:{\mathcal P}(s) \to {\mathcal S}_A$ for some finite $s \subset \omega$ is
said to be a {\em closed independent (regular) $n$}-{\em simplex} in $p$ if
\begin{enumerate}
\item $|s|=n+1$
\item $f(\emptyset) \supseteq A$; and for $i\in s$, $f(\{i\})$ is of the form $\operatorname{acl}(Ca)$ where $a\models p$ is independent from $C=f^{\emptyset}_{ \{i\} }(\emptyset)$ over $A$.
\item
For all non-empty $u\in {\mathcal P}(s)$, we have
$$f(u) = \operatorname{acl}(A \cup \bigcup_{i\in u} f^{\{i\}}_u(\{i\}));$$
and $\{f^{\{i\}}_u(\{i\})|\ i\in u \}$ is independent over $f^{\emptyset}_u(\emptyset)$.
\end{enumerate}
We say $f$ is {\em over $A$} if $f(\emptyset)=A$ (so for any $u\subset s$, $f^{\emptyset}_u(\emptyset)=A$). We shall call a closed independent $n$-simplex simply an {\em $n$-simplex}. The set $s$ is called the {\em support of $f$}, denoted by $\operatorname{supp}(f)$.
\end{definition}
In this paper, we only consider simplices over $A$. {\bf We fix an enumeration of $\operatorname{acl}(aA)$ for each $a\in {\mathcal C}^{|x|}$} such that for $a,b\in {\mathcal C}^{|x|}$, $a\equiv_A b$ if and only if $\operatorname{acl}(aA)\equiv\operatorname{acl}(bA)$; this is because in \cite{KKT}, there is a counterexample that fails the generalized amalgamation properties when enumerations of bounded closed sets are not fixed.
\begin{definition}
Let $S_{n}(p;A)$ denote the collection of all $n$-simplices over $A$ in $p$ and
$C_{n}(p;A)$ the free abelian group generated by $n$-simplices in $S_{n}(p;A)$; its elements are called $n$-{\em chains over $A$} in $p$.
A non-zero $n$-chain $c$ is uniquely written (up to permutation of $i$'s) as $c=\sum_{1\le i\le
k}\limits n_i f_i$, where $n_i$ is a non-zero integer and
$f_1,\ldots,f_k$ are distinct $n$-simplices. We call $|c|:=|n_1|+\cdots+|n_k|$ the {\em length} of the chain $c$, and
define the {\em support} of
$c$ as the union of $\operatorname{supp}(f_i)$'s.
\end{definition}
\noindent We use $a,b,c,\ldots,f,g,h,\ldots,\alpha,\beta,\ldots$ to denote
simplices and chains. Now we define the boundary operators, and using them we define the homology groups.
\begin{definition}
Let $n \geq 1$ and $0 \leq i \leq n$. The $i$-th {\em boundary
operator} $\partial_n ^i : C_n(p;A) \rightarrow C_{n-1}(p;A)$ is defined so
that if $f$ is an $n$-simplex with domain ${\mathcal P}(s)$ with
$s = \{s_0<\cdots<s_n \}$, then
\begin{center}
$\partial_n^i(f)=f\upharpoonright {\mathcal P}(s\setminus\{s_i\})$
\end{center}
and extended linearly to all $n$-chains in $C_n(p;A)$.
The {\em boundary map} $\partial_n
: C_n(p;A)\rightarrow C_{n-1}(p;A)$ is defined by the rule
\begin{center}
$\partial_n(c)=\sum_{0\leq i\leq n}\limits (-1)^i \partial_n^i(c)$.
\end{center}
We write $\partial^i$ and $\partial$ for $\partial_n^i$ and $\partial_n$,
respectively, if $n$ is clear from context.
\end{definition}
\begin{definition}
The kernel of $\partial_n$ is denoted $Z_{n}(p;A)$, and its elements
are called ($n$-){\em cycles over $A$}. The image of $\partial_{n+1}$ in $C_n(p;A)$
is denoted by $B_{n}(p;A)$ and its elements are called ($n$-){\em boundaries over $A$}.
\end{definition}
Since $\partial_n\circ\partial_{n+1}=0$, $B_n(p;A)\subseteq Z_n(p;A)$
and we can define simplicial homology groups in $p$.
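As a sanity check, the identity $\partial_n\circ\partial_{n+1}=0$ can be verified directly in the smallest case; the following is the standard computation for a $2$-simplex $f$ with support $s=\{s_0<s_1<s_2\}$ (nothing here is specific to our setting):
\begin{align*}
\partial_2 f &= f\upharpoonright {\mathcal P}(\{s_1,s_2\})
  - f\upharpoonright {\mathcal P}(\{s_0,s_2\})
  + f\upharpoonright {\mathcal P}(\{s_0,s_1\}),\\
\partial_1(\partial_2 f)
 &= \bigl(f\upharpoonright {\mathcal P}(\{s_2\})-f\upharpoonright {\mathcal P}(\{s_1\})\bigr)
  - \bigl(f\upharpoonright {\mathcal P}(\{s_2\})-f\upharpoonright {\mathcal P}(\{s_0\})\bigr)\\
 &\quad + \bigl(f\upharpoonright {\mathcal P}(\{s_1\})-f\upharpoonright {\mathcal P}(\{s_0\})\bigr)=0.
\end{align*}
Each restriction to a singleton occurs twice with opposite signs, which is the mechanism behind $\partial\circ\partial=0$ in general.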
\begin{definition}
The $n$-th ({\em simplicial}) {\em homology group over $A$} in $p$ is
\[H_{n}(p;A):= Z_{n}(p;A)/B_{n}(p;A).\]
\end{definition}
\noindent In \cite{GKK}, the original simplicial homology groups in $p$ were defined using not only simplices over $A$ in $p$ but also other simplices in $p$. In this paper, we consider only the homology groups over $A$ in $p$.
\begin{notation}
We shall abbreviate $S_n(p;A),C_n(p;A),\ldots$ as $S_n(p),C_n(p),\ldots$ and we shall also abbreviate $H_n(p;A)$ simply as $H_n(p)$.
\end{notation}
\begin{definition}
For $n \geq 1$, an $n$-chain $c$ is called an $n$-{\em shell} if it is in the
form
$$c= \pm\sum_{0\leq i\leq n+1}\limits (-1)^i f_i,$$
where $f_0,\cdots,f_{n+1}$ are $n$-simplices such that whenever
$0\leq i < j \leq n+1$, we have $\partial^i f_j = \partial^{j-1} f_i$. In particular, a $1$-shell $c$ is of the form $$c=f_{0}-f_{1}+f_{2}.$$
\end{definition}
\begin{remark} The boundary of a $2$-simplex is a $1$-shell, and the boundary of any $1$-shell is $0$.
\end{remark}
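For the reader's convenience, the $1$-shell case can be checked directly from the compatibility condition: writing $c=f_0-f_1+f_2$ and using $\partial^0 f_1=\partial^0 f_0$, $\partial^0 f_2=\partial^1 f_0$, and $\partial^1 f_2=\partial^1 f_1$ (the instances of $\partial^i f_j=\partial^{j-1}f_i$ for $(i,j)=(0,1),(0,2),(1,2)$), we obtain
$$\partial c = (\partial^0 f_0-\partial^1 f_0)-(\partial^0 f_1-\partial^1 f_1)+(\partial^0 f_2-\partial^1 f_2)
 = (\partial^0 f_0-\partial^1 f_0)-(\partial^0 f_0-\partial^1 f_1)+(\partial^1 f_0-\partial^1 f_1)=0.$$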
\begin{definition}
For $n\geq 0$, we say $p$ has $(n+2)$-{\em amalgamation} if any $n$-shell in $p$ is the boundary of some $(n+1)$-simplex in $p$, and $p$ has $(n+2)$-{\em complete amalgamation} (or simply $(n+2)$-CA) if $p$ has $k$-amalgamation for every $2 \leq k \leq n+2$. By the extension axiom of the independence relation, whenever $f : {\mathcal P}(s) \rightarrow
{\mathcal C}_A$ and $g : {\mathcal P}(t) \rightarrow {\mathcal C}_A$ are simplices in $p$ with $f\upharpoonright
{\mathcal P}(s\cap t) = g \upharpoonright {\mathcal P}(s\cap t)$, $f$ and $g$ can be extended to a simplex $h : {\mathcal P}(s\cup t) \rightarrow {\mathcal C}_A$ in $p$. This property is called {\em strong} $2$-{\em amalgamation}.
\end{definition}
The following fact shows why the notion of shells is important.
\begin{fact}\label{funfact}\cite{GKK1},\cite{GKK}
If $p$ has $(n + 1)$-CA for some $n \geq 1$, then
\begin{center}
$H_n(p) = \{ [c] : c \mbox{ is an } n\mbox{-shell over }A \mbox{
with }\operatorname{supp}(c)=\{0,\ldots,n+1\}\;\}$.
\end{center}
Thus the first homology group in $p$ is generated by $1$-shells in $p$ with support $\{0,1,2\}$.
\end{fact}
\noindent So $H_1(p)$ is trivial if and only if any 1-shell with support $\{0,1,2\}$ in $p$ is the boundary of some 2-chain in $p$. Therefore, if $T$ is simple, $H_1(p)$ is trivial due to 3-amalgamation. The following shows
that the same result holds in any rosy theory.
\begin{fact}\label{h1=0}\cite{KKL}
Suppose that $p$ is any Lascar strong type in a rosy theory. Then $H_1(p)=0$.
\end{fact}
We recall the notion of type homology from \cite{GKK}. We call types with possibly infinite sets of variables $*$-types. We fix a set ${\mathcal V}$ of variables which is large enough so that all variables in $*$-types come from the set ${\mathcal V}$ and $|{\mathcal C}|>2^{|{\mathcal V}|}$. For any $X\subset {\mathcal V}$, any injective function $\sigma:\ X\rightarrow {\mathcal V}$, and any $*$-type $p(\bar{x})$ with $\bar{x}\subset X$, we let $\sigma_* p:=\{\phi(\sigma(\bar{x})):\ \phi(\bar{x})\in p\}$. For $A=\operatorname{acl}(A)$, let ${\mathcal T}_A$ be the category, where
\begin{enumerate}
\item The objects of ${\mathcal T}_A$ are all the complete $*$-types in $T$ over $A$, including a single distinguished type $p_{\emptyset}$ with no free variables;
\item $\operatorname{Mor}_{{\mathcal T}_A}(p(\bar{x}),q(\bar{y}))$ is the set of all injective maps $\sigma:\ \bar{x}\rightarrow \bar{y}$ such that $\sigma_* p\subset q$.
\end{enumerate}
\begin{definition}
Let $A=\operatorname{acl}(A)$ and $p\in S(A)$. A {\em closed independent type-$n$-simplex in $p$} is a functor $f:\ {\mathcal P}(s)\rightarrow {\mathcal T}_A$ for $s\subset \omega$ such that
\begin{enumerate}
\item $|s|=n+1$.
\item Let $w\subset s$ and $u,v\subset w$. Set $f_w^u:=f(\iota_{u,w})$. Let $\bar{x}_w$ be the variable set of $f(w)$. Then whenever $\bar{a}$ realizes the type $f(w)$ and $\bar{a}_u$, $\bar{a}_v$, and $\bar{a}_{u\cap v}$ denote the subtuples corresponding to the variable sets $f_w^u(\bar{x}_u)$, $f_w^v(\bar{x}_v)$, and $f_w^{u\cap v}(\bar{x}_{u\cap v})$, then $$\bar{a}_u\mathop{\smile \hskip -0.9em ^| \ }_{A\cup \bar{a}_{u\cap v}} \bar{a}_v.$$
\item For all non-empty $u\subset s$ and any $\bar{a}$ realizing $f(u)$, we have $\bar{a}=\operatorname{acl}(A\cup\bigcup_{i\in u}\bar{a}_{\{i\}})$.
\item For $i\in s$, $f(\{i\})$ is the complete $*$-type of $\operatorname{acl}(AC\cup\{b\})$ over $A$, where $C$ is some realization of $f(\emptyset)$ and $b$ is some realization of a nonforking extension of $p$ to $AC$.
\end{enumerate}
We say $f$ is {\em over $A$} if $f(\emptyset)=A$.
\end{definition}
Using closed independent type-simplices in $p$, one defines the $n$-th type homology group over $A$ in $p$, denoted by $H_n^{t}(p;A)$; we shall write it as $H_n^t(p)$. For each $n$, the homology groups $H_n(p;A)$ and $H_n^t(p;A)$ are non-canonically isomorphic, the isomorphism depending on the choice of enumerations of the $*$-types of closed independent simplices in $p$.
We recall the notion of a chain-walk, motivated by directed walks in graph theory, from \cite{KKL}\cite{KL}. Chain-walks were used to reduce a given $2$-chain with $1$-shell boundary to a $2$-chain of simple form having the same $1$-shell boundary, and this is useful for computing the first homology group of a strong type. Two fundamental operations are used in reducing $2$-chains to chain-walk form: the {\em crossing} and {\em renaming support} operations. We refer the reader to \cite{KKL} for the definitions of these operations and to \cite{KKL}\cite{KL} for the details of the classification of $2$-chains. In \cite{KL}, the chain-walk was defined using the notion of a directed walk in graph theory; here we give the definition of a chain-walk in terms of simplices.
\begin{definition}\label{chainwalk}
Let $\alpha$ be a 2-chain having the boundary $f_{12} -f_{02} +
f_{01}$.
A subchain $\beta=\sum_{i=0}^m\limits \epsilon_i b_i$ of $\alpha$ (where $\epsilon_i = \pm 1$ and $b_i$ is a $2$-simplex, for each $i$) is called
a {\em chain-walk in $\alpha$ from $f_{01}$ to $-f_{02}$} if
\begin{enumerate}\item there are non-zero numbers $k_0,\ldots,k_{m+1}$ (not necessarily distinct) such that $k_0=1$, $k_{m+1}=2$, and for $ i\leq m$, $\operatorname{supp}(b_i)=\{k_i,k_{i+1},0\}$;
\item $(\partial \epsilon_0 b_0)^{0,1} = f_{01}$, $(\partial \epsilon_m b_m)^{0,2}=- f_{02}$; and
\item for $0 \le i < m$,
$$(\partial \epsilon_i b_i)^{0, k_{i+1}}+ (\partial \epsilon_{i+1} b_{i+1})^{0,k_{i+1}}=0.$$
\end{enumerate}
\end{definition}
\noindent Any $2$-chain having a $1$-shell boundary can be reduced to a chain-walk $2$-chain having the same boundary and support $\{0,1,2\}$.
\begin{fact}\label{supp=3}{\cite{KKL}\cite{KL}} By applying the crossing and renaming support operations to a $2$-chain $\alpha$ with the $1$-shell boundary $f_{12} -f_{02}
+f_{01}$, it is reduced to a $2$-chain $\alpha'=\sum_{i=0}^{2n}\limits (-1)^i a_i$ with $|\alpha'|\le |\alpha|$, which is itself a chain-walk from $f_{01}$ to $-f_{02}$ with $\operatorname{supp}(\alpha')=\{0,1,2\}$.
\end{fact}
\section{Lascar groups and the first homology groups}
In this section we show that there is a canonical epimorphism from the Lascar group $\operatorname{Gal}_L({\mathcal C};A)$ over $A$ onto the first homology group $H_1(p)$ in $p$.
Let $f\colon {\mathcal P}(s) \rightarrow {\mathcal C}_A$ be an $n$-simplex in $p$. For $u\subset s$ with $u = \{ i_0 <\ldots <i_k \}$, we shall write $f(u)=[a_0 \ldots a_k]_u$ when $a_j\models p$,
$f(u)=\operatorname{acl}(A,a_0\ldots a_k)$, and $\operatorname{acl}(a_j A)=f^{ \{ i_j \} } _u (\{i_j\})$; or we write $f(u)\equiv [a_0 \ldots a_k]_u$, obtained by changing '$=$' to '$\equiv$'. In both cases, $\{a_0,\ldots,a_k\}$ is independent over $A$.
\subsection{Representations of 1-shells}
Given two $1$-shells $s_k=f^k_{01}+f^k_{12}-f^k_{02}$ with $k=0,1$, if $f^0_{ij}(\{i,j\})\equiv_A f^1_{ij}(\{i,j\})$ for $0\le i<j\le 2$, then the homology classes of $s_0$ and $s_1$ are the same in $H_1(p)$, since $H_1^t(p)\cong H_1(p)$. Motivated by this, we introduce the notion of a representation of a $1$-shell and describe the first homology group in $p$ using this notion.
\begin{def/rem}
Let $s=f_{01}+f_{12}-f_{02}$ be a $1$-shell such that $\operatorname{supp}(f_{ij})=\{i,j\}$ for $0\le i<j\le 2$. There is a quadruple $(a_0,a_1,a_2,a_3)$ in $p({\mathcal C})^4$ such that $f_{01}(\{0,1\})\equiv[a_0 a_1]_{\{0,1\}}$, $f_{12}(\{1,2\})\equiv[a_1 a_2]_{\{1,2\}}$, and $f_{02}(\{0,2\})\equiv[a_3 a_2]_{\{0,2\}}$. We call this quadruple {\em a representation of $s$}.
Note that a representation of a $1$-shell need not be unique, and the same quadruple may represent different $1$-shells even when they have the same support, because given $a,b\models p$, the enumeration of $\operatorname{acl}(aA)$ is fixed but the enumeration of $\operatorname{acl}(abA)$ is not.
\end{def/rem}
\begin{definition}
Let $s$ be a $1$-shell and $(a,b,c,a')$ be a representation of $s$. We call $a$ {\em an initial point}, $a'$ {\em a terminal point}, $(a,a')$ {\em an endpoint pair} of this representation.
\end{definition}
In the next theorem, we will see that the endpoint pairs of representations determine the classes of $1$-shells in $H_1(p)$, and that the group structure of $H_1(p)$ can be described by endpoint pairs.
\begin{theorem}\label{endpt_class}
Let $s_0$ and $s_1$ be $1$-shells with support $\{0,1,2\}$. Suppose they have representations with the same endpoint pair. Then $s_0-s_1$ is the boundary of a $2$-chain, that is, they are in the same homology class in $H_1(p)$.
\end{theorem}
\begin{proof}
We may assume that $A=\emptyset$. Consider two $1$-shells $s_k=f_{12}^k-f_{02}^k+f_{01}^k$ for $k=0,1$. Suppose $s_0$ and $s_1$ have representations $(a,b_0,c_0,a')$ and $(a,b_1,c_1,a')$ respectively. Take two independent elements $b,c\models p$ such that $bc\mathop{\smile \hskip -0.9em ^| \ } ab_0b_1c_0c_1a'$, and consider a $1$-shell $s$ with support $\{0,3,4\}$ represented by $(a,b,c,a')$. Then there is a $2$-chain $\alpha=(a^0_{01}+a^0_{12}-b^0-a^0_{02})-(a^1_{01}+a^1_{12}-b^1-a^1_{02})$ where for each $k=0,1$ and $0\le i<j\le 2$, $a^k_{ij}$ and $b^k$ are $2$-simplices satisfying the following:
\begin{enumerate}
\item If $j-i=1$, then $\operatorname{supp}(a^k_{ij})=\{i,j,3\}$; otherwise $\operatorname{supp}(a^k_{02})=\{0,2,4\}$; and $\operatorname{supp}(b^k)=\{2,3,4\}$;
\item $a^k_{ij}\upharpoonright {\mathcal P}(\{i,j\})=f^k_{ij}$; and
\item $a^k_{01}(\{0,1,3\})=[a b_k b]_{\{0,1,3\}}$, $a^k_{12}(\{1,2,3\})=[b_k c_k b]_{\{1,2,3\}}$, $a^k_{02}(\{0,2,4\})=[a'c_k c]_{\{0,2,4\}}$; and $b^k(\{2,3,4\})=[c_k bc]_{\{2,3,4\}}$.
\end{enumerate}
\noindent Then $\partial(a^k_{01}+a^k_{12}-b^k-a^k_{02})=s_k-s$. So $\partial\alpha=(s_0-s)-(s_1-s)=s_0-s_1$, and $s_0$ and $s_1$ are in the same homology class.
\end{proof}
\begin{theorem}\label{endpt_gpstr}
Let $s_0$ and $s_1$ be $1$-shells with support $\{0,1,2\}$, and let $a,a',a''\models p$ be such that $(a,a')$ and $(a',a'')$ are endpoint pairs of representations of $s_0$ and $s_1$ respectively. Then there is a $1$-shell $s$ with support $\{0,1,2\}$ having a representation with endpoint pair $(a,a'')$ such that $[s]=[s_0]+[s_1]$ in $H_1(p)$.
\end{theorem}
\begin{proof}
Assume $A=\emptyset$. Consider two $1$-shells with support $\{0,1,2\}$, $s_0=f^0_{01}+f^0_{12}-f^0_{02}$ and $s_1=f^1_{01}+f^1_{12}-f^1_{02}$. Suppose there are representations of $s_0,s_1$ such that the terminal point of one of $s_0$ and the initial point of one of $s_1$ are the same. Let $a'\models p$ be the common element and $a,a''\models p$ be elements so that $(a,a')$ and $(a',a'')$ are endpoint pairs of $s_0$ and $s_1$ respectively. Let $b_0, b_1, c_0, c_1\models p$ be elements such that the quadruples $(a,b_0,c_0,a')$ and $(a',b_1,c_1,a'')$ are representations of $s_0$ and $s_1$ respectively. Consider two independent elements $d,e\models p$ with $de\mathop{\smile \hskip -0.9em ^| \ } aa'a''b_0 b_1 c_0 c_1$. Then there is a $2$-chain $\alpha=(a^0_{01}+a^0_{12}-a^0_{02})-b+(a^1_{01}+a^1_{12}-a^1_{02})$, where for $k=0,1$ and $0\le i<j\le 2$, $a^k_{ij}$ and $b$ are $2$-simplices satisfying the following:
\begin{enumerate}
\item $\operatorname{supp}(a^k_{ij})=\{i,j,3+k\}$ and $\operatorname{supp}(b)=\{0,3,4\}$;
\item $a^k_{ij}\upharpoonright {\mathcal P}(\{i,j\})=f^k_{ij}$;
\item $a^0_{01}(\{0,1,3\})=[a,b_0,d]_{\{0,1,3\}}$, $a^0_{12}(\{1,2,3\})=[b_0,c_0,d]_{\{1,2,3\}}$, $a^0_{02}(\{0,2,3\})=[a',c_0,d]_{\{0,2,3\}}$, $a^1_{01}(\{0,1,4\})=[a',b_1,e]_{\{0,1,4\}}$, $a^1_{12}(\{1,2,4\})=[b_1,c_1,e]_{\{1,2,4\}}$, $a^1_{02}(\{0,2,4\})=[a'',c_1,e]_{\{0,2,4\}}$, and $b(\{0,3,4\})=[a',d,e]_{\{0,3,4\}}$.
\end{enumerate}
\noindent Then $\partial(\alpha)=s_0+s_1-s'$, where $s'=a^0_{01}\upharpoonright {\mathcal P}(\{0,3\})+b\upharpoonright {\mathcal P}(\{3,4\})-a^1_{02}\upharpoonright {\mathcal P}(\{0,4\})$ is a $1$-shell with support $\{0,3,4\}$ represented by $(a,d,e,a'')$. Using the proof of Theorem \ref{endpt_class}, we get a $1$-shell $s$ with support $\{0,1,2\}$ having endpoint pair $(a,a'')$ and $[s]=[s']$ in $H_1(p)$. Thus, there is a 2-chain $\alpha'$ having the $1$-chain $s_0+s_1-s$ as its boundary, and so $[s]=[s_0]+[s_1]$.
\end{proof}
Next we consider an action of $\operatorname{Aut}_A({\mathcal C})$ on each $C_n(p)$; this action induces an action of $\operatorname{Aut}_A ({\mathcal C})$ on $H_n(p)$. By Theorem \ref{endpt_class}, this induced action is trivial on $H_1(p)$, and this triviality is crucial in finding a connection between the Lascar group over $A$ and the first homology group in $p$.
\begin{def/rem}
We define an action of $\operatorname{Aut}_A({\mathcal C})$ on each $C_n(p)$. Let $\sigma\in \operatorname{Aut}_A({\mathcal C})$. For an $n$-chain $c=\sum_{i=0}^k\limits n_i f_i$, we define $$\sigma(c):=\sum_{i=0}^k\limits n_i \sigma(f_i),$$
where for an $n$-simplex $f:{\mathcal P}(s)\rightarrow {\mathcal C}_A$ with $s=\{s_0<s_1<\cdots<s_n\}$, the $n$-simplex $\sigma(f)$ is defined as follows:
\begin{enumerate}
\item $\sigma(f)(u):=\sigma(f(u))$ for each $u\subset s$; and
\item $\sigma(f)(\iota_{u,v}):=\sigma\circ f(\iota_{u,v})\circ \sigma^{-1}$ for each inclusion map $\iota_{u,v}$.
\end{enumerate}
Furthermore, this action commutes with $\partial$, i.e.,
$$\partial(\sigma(c))=\sigma(\partial(c)).$$
So this action induces an action of $\operatorname{Aut}_A({\mathcal C})$ on $H_1(p)$ as follows: for each $[s]\in H_1(p)$, $\sigma([s]):=[\sigma(s)]$.
\end{def/rem}
\begin{note}
Let $s$ be a $1$-shell in $p$ and let $(a,b)$ be an endpoint pair of $s$. For each $\sigma\in\operatorname{Aut}_A({\mathcal C})$, $(\sigma(a),\sigma(b))$ is an endpoint pair of $\sigma(s)$.
\end{note}
\noindent Since the $n$-th type-homology group and the $n$-th homology group in $p$ are isomorphic, the action of $\operatorname{Aut}_A({\mathcal C})$ on $H_1(p)$ is trivial.
\begin{corollary}\label{auto_action_triviality}
Let $s$ be a $1$-shell and let $\sigma \in \operatorname{Aut}_A({\mathcal C})$. Then there is a $2$-chain $\alpha$ with boundary $s-\sigma(s)$.
\end{corollary}
\noindent For $a,b\models p$, we write the ordered bracket $[a,b]$ for the class in $H_1(p)$ of a $1$-shell $s$ which has an endpoint pair $(a,b)$. By Theorem \ref{endpt_class}, this bracket notation is well-defined. We can summarize Theorems \ref{endpt_class}, \ref{endpt_gpstr}, and Corollary \ref{auto_action_triviality} as follows: for $a,b,c\in p({\mathcal C})$ and $\sigma\in\operatorname{Aut}_A({\mathcal C})$, in $H_1(p)$,
\begin{enumerate}
\item $[a,b]+[b,c]=[a,c]$;
\item $[a,a]$ is the identity element;
\item $-[a,b]=[b,a]$; and
\item $\sigma([a,b])=[\sigma(a),\sigma(b)]=[a,b]$.
\end{enumerate}
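We remark that (3) follows formally from (1) and (2):
$$[a,b]+[b,a]\overset{(1)}{=}[a,a]\overset{(2)}{=}0,$$
so $-[a,b]=[b,a]$.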
\subsection{Lascar group and the first homology groups}
Here, using the ordered bracket notation for endpoint pairs, we define a map $\psi_a$ from the automorphism group over $A$ into the first homology group in $p$ for each $a\models p$. This map is shown to be a surjective homomorphism (i.e., an epimorphism) and does not depend on the choice of $a\models p$. Thus we get a canonical epimorphism from $\operatorname{Aut}_A({\mathcal C})$ onto $H_1(p)$, and we study its kernel.
For each $a\models p$, we define a map $\psi_a$ from $\operatorname{Aut}_A({\mathcal C})$ to $H_1(p)$ by sending $\sigma$ to $[a,\sigma(a)]$.
\begin{theorem}\label{canonical_epi}
\begin{enumerate}
\item Each $\psi_a$ is an epimorphism;
\item For $a,b\models p$, $\psi_a=\psi_b$. So we get a canonical map $\psi$ from $\operatorname{Aut}_A({\mathcal C})$ into $H_1(p)$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Fix $a\models p$. First, the surjectivity of $\psi_a$ follows from the fact that for any $b\models p$, there is $\sigma\in \operatorname{Aut}_A({\mathcal C})$ such that $\sigma(a)=b$. So it suffices to show that $\psi_a$ is a homomorphism. For $\sigma,\tau\in \operatorname{Aut}_A({\mathcal C})$,
$$\begin{array}{c c l}
\psi_a(\sigma\tau)&=&[a,\sigma\tau(a)]\\
&=&[a,\sigma(a)]+[\sigma(a),\sigma\tau(a)]\\
&=&[a,\sigma(a)]+\sigma[a,\tau(a)]\\
&=&[a,\sigma(a)]+[a,\tau(a)]\\
&=&\psi_a(\sigma)+\psi_a(\tau).
\end{array}$$
So $\psi_a$ is a homomorphism.\\
\smallskip
(2) Choose $a,b\models p$. Then there is $\tau\in \operatorname{Aut}_A({\mathcal C})$ such that $b=\tau(a)$. For $\sigma\in\operatorname{Aut}_A({\mathcal C})$,
$$\begin{array}{c c l}
\psi_b(\sigma)&=&\psi_a(\tau^{-1}\sigma\tau)\\
&=&\psi_a(\tau^{-1})+\psi_a(\sigma)+\psi_a(\tau)\\
&=&\psi_a(\sigma).
\end{array}$$
Thus $\psi_a=\psi_b$, and we get a canonical epimorphism $\psi(=\psi_a):\ \operatorname{Aut}_A({\mathcal C})\rightarrow H_1(p)$, independent of the choice of $a\models p$.
\end{proof}
\noindent So $H_1(p)$ is isomorphic to $\operatorname{Aut}_A({\mathcal C})/\operatorname{Ker}(\psi)$, and we need to understand the kernel of $\psi$. In \cite{KKL}, it was shown that if $p$ is a Lascar strong type, then the first homology group is zero. This fact can be restated using the endpoint notion as follows:
\begin{fact}\label{Lascar_zero}{\cite{KKL}}
Let $a,b\models p$ be such that $a\equiv_A^L b$. Then any $1$-shell having endpoint pair $(a,b)$ is a boundary of a $2$-chain, i.e., $[a,b]=0$ in $H_1(p)$.
\end{fact}
\begin{definition}
\begin{enumerate}
\item Let $\operatorname{Aut}_B({\mathcal C})$ be the set of elements $\sigma\in \operatorname{Aut}({\mathcal C})$ fixing $B$ pointwise.
\item For a group $G$, the commutator subgroup of $G$ is the subgroup generated by $\{ghg^{-1}h^{-1}|\ g,h\in G\}$, denoted by $[G,G]$; it is the smallest normal subgroup $N$ of $G$ such that $G/N$ is abelian.
\end{enumerate}
\end{definition}
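For completeness, the minimality claim in (2) is the standard computation: if $N$ is a normal subgroup of $G$ with $G/N$ abelian, then for any $g,h\in G$,
$$ghg^{-1}h^{-1}N=(gN)(hN)(gN)^{-1}(hN)^{-1}=N,$$
so every commutator lies in $N$ and hence $[G,G]\le N$; conversely, $G/[G,G]$ is abelian since every commutator is trivial in the quotient.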
\begin{theorem}\label{kernel_canonical_epi}
Let $N$ be the normal subgroup of $\operatorname{Aut}_A({\mathcal C})$ generated by the automorphisms in $\operatorname{Autf}_A({\mathcal C})$ or in $\operatorname{Aut}_{\operatorname{acl}(Aa)}({\mathcal C})$ for some $a\models p$, let $G=\operatorname{Aut}_A({\mathcal C})/N$, and consider the canonical quotient map $\Psi':\operatorname{Aut}_A({\mathcal C})\rightarrow G$. Let $\psi:\operatorname{Aut}_A({\mathcal C})\rightarrow H_1(p)$ be the canonical map. Then the kernel of $\psi$ contains the following:
\begin{enumerate}
\item $\operatorname{Aut}_{\operatorname{acl}(Aa)}({\mathcal C})$ for each $a\models p$;
\item $\operatorname{Autf}_A({\mathcal C})$; and
\item $(\Psi')^{-1}([G,G])$.
\end{enumerate}
In particular, by the second item, $\psi$ induces a canonical epimorphism $\Psi$ from $\operatorname{Gal}_L({\mathcal C};A)$ onto $H_1(p)$.
\end{theorem}
\begin{proof}
(1) For any $a\models p$, $[a,a]$ is the identity element in $H_1(p)$, and since we fix an enumeration of $\operatorname{acl}(aA)$, $\operatorname{Aut}_{\operatorname{acl}(Aa)}({\mathcal C})$ is contained in the kernel of $\psi$.\\
\smallskip
(2) This follows from Fact \ref{Lascar_zero}.\\
\smallskip
(3) This follows from the fact that $H_1(p)$ is always abelian.
\end{proof}
We define an $A$-invariant equivalence relation on $p({\mathcal C})$, and using this equivalence relation we describe the kernel of $\psi$. By Fact \ref{supp=3}, we can describe the $1$-shells which are boundaries of $2$-chains as follows:
\begin{theorem}\label{chainwalk_representation}
A $1$-shell $s$ is the boundary of a $2$-chain if and only if there is a representation $(a,b,c,a')$ such that for some $n$ there is a finite sequence $(d_i)_{0\le i\le 2n+1}$ of elements in $p({\mathcal C})$ satisfying the following conditions:
\begin{enumerate}
\item $d_0=a,d_{2n+1}=c$ and $d_{2i_0}=a'$ for some $0<i_0 \le n$;
\item $\{d_{2i},d_{2i+1},b\}$ is independent for each $0\le i\le n$; and
\item There is a bijection $m$ from $\{0,2,\cdots,2n\}\setminus \{2i_0\}$ to $\{1,3,\cdots,2n-1\}$ such that $d_{2i} d_{2i+1}\equiv d_{m(2i)+1}d_{m(2i)}$ for $0\le i\neq i_0 \le n$, and $d_{2i_0}d_{2i_0 +1}\equiv a'c$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $s=f_{01}+f_{12}-f_{02}$ be a $1$-shell and let $\alpha=\sum_{i=0}^{2n}\limits (-1)^i a_i$ be a $2$-chain with $\operatorname{supp}(\alpha)=\{0,1,2\}$ and $\partial(\alpha)=s$, which is a chain-walk from $f_{01}$ to $-f_{02}$. Then there are a representation $(a,b,c,a')$ and a finite sequence $(d_i)_{0\le i\le 2n+1}$ of elements in $p({\mathcal C})$ satisfying the following conditions:
\begin{enumerate}
\item $d_0=a,d_{2n+1}=c$ and $d_{2i_0}=a'$ for some $0<i_0 \le n$;
\item $\{d_{2i},d_{2i+1},b\}$ is independent for each $0\le i\le n$; and
\item There is a bijection $m$ from $\{0,2,\cdots,2n\}\setminus \{2i_0\}$ to $\{1,3,\cdots,2n-1\}$ such that $d_{2i} d_{2i+1}\equiv d_{m(2i)+1}d_{m(2i)}$ for $0\le i\neq i_0 \le n$, and $d_{2i_0}d_{2i_0 +1}\equiv a'c$.
\end{enumerate}
\end{proof}
\noindent Now we define an $A$-invariant equivalence relation on $p({\mathcal C})$ representing the kernel of $\psi$. For $m\ge 1$, define a partial type $p^{\odot m}(x_1,\ldots,x_m)$ over $A$ as
$$\bigwedge_{i\le m} p(x_i)\wedge \bigwedge_{j<m} x_j\mathop{\smile \hskip -0.9em ^| \ } x_{j+1}.$$
\noindent Next define a relation $\sim$ on $p^{\odot 4}({\mathcal C})$ as follows: for $(a_i,b_i,c_i,a'_i)\in p^{\odot 4}({\mathcal C})$ and $i=0,1$, $(a_0,b_0,c_0,a'_0)\sim (a_1,b_1,c_1,a'_1)$ if $a_0b_0\equiv_A a_1b_1$, $b_0c_0\equiv_A b_1c_1$, and $a'_0c_0\equiv_A a'_1c_1$. Note that for $(a_0,b_0,c_0,a'_0)$ and $(a_1,b_1,c_1,a'_1)$ in $p^{\odot 4}({\mathcal C})$, $(a_0,b_0,c_0,a'_0)\sim (a_1,b_1,c_1,a'_1)$ if and only if both quadruples represent the same $1$-shell. This relation is an $A$-type-definable equivalence relation. Moreover, for each $n\ge 0$, conditions (1), (2), and (3) on $(a,b,c,a')$ and $(d_0,d_1,\ldots,d_{2n+1})$ in Theorem \ref{chainwalk_representation} are $A$-type-definable. We define a partial type $F_n(x,y,z,w;v_0,v_1,\ldots,v_{2n+1})$ as
$$\begin{array}{c l}
&\bigwedge_{0\le i\le 2n+1}\limits p(v_i) \wedge v_0=x \wedge v_{2n+1}=w\\
\wedge & \bigwedge_{0\le j\le n}\limits \{v_{2j},v_{2j+1},z\}\mbox{ is independent over }A\\
\wedge & \bigvee_{0\le i_0\le n}\limits [\ v_{2i_0}=y\\
\wedge & \bigvee_m\limits\ (\bigwedge_{0\le i\neq i_0\le n}\limits v_{2i}v_{2i+1}\equiv_A v_{m(2i)+1}v_{m(2i)})\\
&\mbox{where } m\mbox{ ranges over bijections } \{0,2,\cdots,2n\}\setminus \{2i_0\}\rightarrow \{1,3,\cdots,2n-1\}\\
\wedge & v_{2i_0}v_{2i_0+1}\equiv_A yw\ ].
\end{array}
$$
At last, for each $n\ge 0$, we define a partial type $E'_n(x,y)$ over $A$ as
$$\begin{array}{c l}
&p(x)\wedge p(y)\\
\wedge &\exists zwx'y'z'w'\ [( p^{\odot 4}(x,z,w,y)\wedge p^{\odot 4}(x',z',w',y')\wedge (x,z,w,y)\sim (x',z',w',y'))\\
\wedge & \exists v_0 v_1\ldots v_{2n+1}\ F_n(x',y',z',w';v_0,v_1,\ldots,v_{2n+1})].
\end{array}$$
The relation $E'_n(x,y)$ says that $(x,y)$ is an endpoint pair of a $1$-shell which is the boundary of a $2$-chain that is a chain-walk of length $2n+1$. Take $E_n(x,y)\equiv E'_n(x,y)\wedge E'_n(y,x)$. Then, for each $n\ge 0$, $E_n$ is an $A$-type-definable symmetric relation. Finally, define the binary relation $E(x,y)$ as $$x=y\vee \bigvee_{n\ge 0}E_n(x,y).$$
This relation is $A$-invariant, reflexive, and symmetric. By Theorem \ref{endpt_gpstr}, it is transitive, and by Theorem \ref{chainwalk_representation}, $E(a,b)$ holds if and only if $[a,b]=0$ in $H_1(p)$ for $a,b\models p$. So this relation $E$ is the desired $A$-invariant equivalence relation.\\
\smallskip
Next, we define a distance-like notion on $p({\mathcal C})$ as follows: for $a,b\models p$,
$$
d_E(a,b):=
\begin{cases}
\min\{n|E_n(a,b)\} & \mbox{if } E(a,b)\\
\infty & \mbox{otherwise.}
\end{cases}
$$
This distance-like notion need not satisfy the triangle inequality $d_E(a,b)\le d_E(a,c)+d_E(c,b)$ for $a,b,c\models p$. But it does hold that $d_E(a,b)\le d_E(a,c)+d_E(c,b)+8$ for $a,b,c\models p$, since in the proof of Theorem \ref{endpt_gpstr}, for two $1$-shells $s_0$ and $s_1$ there is a $1$-shell $s$ such that $s_0+s_1-s$ is the boundary of a $2$-chain of length $15(=2\times 8-1)$. So we can apply the results in \cite{N} and conclude that if $E$ is not type-definable, then the cardinality of $H_1(p)$ is at least $2^{\aleph_0}$. In Appendix A, we prove that for any bounded (type-)definable equivalence relation on a strong type, the possible cardinality of the set of equivalence classes in the strong type is one or at least $2^{\aleph_0}$. In \cite{KrT}, an invariant equivalence relation $E$ on a type-definable set $X$ is called an {\em orbital equivalence relation} if there is a subgroup $\Gamma$ of $\operatorname{Aut}({\mathcal C})$ which preserves the classes of $E$ setwise and acts transitively on each class. By Theorem \ref{canonical_epi}, $E(a,b)$ holds if and only if there is $\sigma\in \operatorname{Ker}(\psi)$ such that $\sigma(a)=b$ for $a,b\models p$. So our equivalence relation $E$ is an orbital equivalence relation.
\begin{theorem}
\begin{enumerate}
\item $E$ is an orbital equivalence relation.
\item The cardinality of $H_1(p)$ is one or at least $2^{\aleph_0}$.
\end{enumerate}
\end{theorem}
\noindent In the next section, we give two examples in which we compute the first homology groups; they are non-trivial and their cardinalities are exactly $2^{\aleph_0}$.
\section{Examples}
In simple theories, including stable theories, the first homology group of a strong type is always zero by $3$-amalgamation. In \cite{GKK}, the first homology groups of strong types were computed in some cases and were all zero, and it was shown there that in o-minimal theories, the first homology group of a strong 1-type is always trivial. Here, we give two examples of rosy theories having a non-trivial first homology group of a strong type; these are the first such examples. In \cite{KKL}, B. Kim, S. Kim, and the author considered the structures in \cite{CLPZ}, ${\mathcal M}_{1,n}=(M;S;g_{1/n})$ for each $n\in {\mathbb N}\setminus \{0\}$, where
\begin{enumerate}
\item $M$ is a saturated circle;
\item $g_{1/n}$ is the clockwise rotation by $2\pi/n$ radians; and
\item $S$ is a ternary relation such that $S(a,b,c)$ holds if $a,b,c$ are distinct and $b$ comes before $c$ going around the circle clockwise starting at $a$.
\end{enumerate}
and it was shown that for every $n$, the unique strong 1-type $p_n$ in $S_1(\emptyset)$, which is actually a Lascar strong type, has trivial first homology group. Here we consider two structures. The first is ${\mathcal M}_1=(M;S;g_{1/n} : n\in {\mathbb N}\setminus\{0\})$, expanding the structures ${\mathcal M}_{1,n}$ by adding the rotations by $2\pi/n$ radians for all $n\in {\mathbb N}\setminus\{0\}$ simultaneously; when we write $g_r$ for $r=m/n$ in ${\mathbb Q}\cap[0,1)$, it means $g_{1/n}^m$. The second is ${\mathcal M}_2=(M;U_{<r},U_{=r}|r\in (0,1/2]\cap {\mathbb Q})$, where $U_{<r}(x,y)$ says that the smaller arc length between $x$ and $y$ is less than $2\pi r$, and $U_{=r}(x,y)$ says that the smaller arc length between $x$ and $y$ is exactly $2\pi r$.
\subsection{Rosiness of $\operatorname{Th}({\mathcal M}_1)$ and $\operatorname{Th}({\mathcal M}_2)$}In this subsection, we show that the theories of ${\mathcal M}_1$ and ${\mathcal M}_2$ are rosy. In \cite{EO}, C. Ealy and A. Onshuus gave a sufficient condition for a theory to be rosy.
\begin{Fact}\label{character_rosy}
Any theory $T$ which geometrically eliminates imaginaries and for which algebraic closure defines a pregeometry is rosy of thorn $U$-rank $1$.
\end{Fact}
For the rosiness of $\operatorname{Th}({\mathcal M}_i)$ ($i=1,2$), we show that $\operatorname{Th}({\mathcal M}_1)$ has weak elimination of imaginaries and $\operatorname{Th}({\mathcal M}_2)$ has geometric elimination of imaginaries. In \cite{P}, B. Poizat defined a theory $T$ to have {\em weak elimination of imaginaries} if every definable set has a smallest algebraically closed set over which it is definable. And $T$ has {\em geometric elimination of imaginaries}, a notion weaker than weak elimination of imaginaries, if for each imaginary $e\in {\mathcal M}^{eq}\models T$, there is a real tuple $\bar{a}\subset M$ such that $e\in\operatorname{acl}^{eq}(\bar{a})$ and $\bar{a}\in \operatorname{acl}^{eq}(e)$. We give a sufficient condition for weak elimination of imaginaries for $\aleph_0$-categorical theories, used in \cite{KKL}:
\begin{theorem}\label{wei_omegacat}
Let $T$ be $\aleph_0$-categorical and let ${\mathcal M}=(M,\ldots)$ be a saturated model of $T$. Suppose that for all $A$, $\operatorname{acl}(A)=\operatorname{dcl}(A)$. Suppose for a subset $X$ of $M^1$, if $X$ is $A_0(=\operatorname{acl}(A_0))$-definable and $A_1(=\operatorname{acl}(A_1))$-definable, then $X$ is $B(=A_0\cap A_1)$-definable. Then for a subset $Y$ of $M^n$, if $Y$ is $A_0$-definable and $A_1$-definable, then $Y$ is $B$-definable. Furthermore, in this case, $T$ has weak elimination of imaginaries.
\end{theorem}
\begin{proof}
Let $A_0=\operatorname{acl}(A_0)$, $A_1=\operatorname{acl}(A_1)$, and $B=A_0\cap A_1$. We use induction on $n$. If $n=1$, the statement holds by assumption. We show that it holds for the case $n+1$ under the inductive hypothesis for the case $n$. We may assume $A_0$ and $A_1$ are finite, and so is $B$. Let $Y\subset M^{n+1}$ be $A_i$-definable, defined by a formula $\phi_i(x_0,\ldots,x_n;\bar{a}_i)$ with $\bar{a}_i\subset A_i$, respectively. Then for each $c\in M$, the fiber of $Y$ over $c$, $Y_c:=\{ \bar{x} \in M^n|\ \phi_i(\bar{x},c;\bar{a}_i)\}$, is $cB$-definable by induction. By $\aleph_0$-categoricity, there are only finitely many formulas over $\emptyset$ modulo $T$, and it easily follows that each fiber $Y_c$ is defined by one of finitely many formulas over $B$. Thus $Y$ is $B$-definable.
Since there is no infinite descending chain of algebraically closed sets generated by finitely many elements, every definable set has a smallest algebraically closed set over which it is definable. Thus $T$ weakly eliminates imaginaries.
\end{proof}
\noindent As a corollary of Theorem \ref{wei_omegacat}, it was shown that for each $n\ge 2$, $\operatorname{Th}({\mathcal M}_{1,n})$ has weak elimination of imaginaries.
\begin{fact}\label{wei_M_1_reduct}\cite{KKL}
For each $n\ge 2$, $\operatorname{Th}({\mathcal M}_{1,n})$ weakly eliminates imaginaries.
\end{fact}
Next we will see that the theory of ${\mathcal M}_1$ has quantifier-elimination.
\begin{definition}\label{rotation_closure}
Let $M$ be the underlying set of ${\mathcal M}_1$ and ${\mathcal M}_2$, so it is a saturated circle. For $A\subset M$, let $\operatorname{cl}(A):=\{g_r(a)|\ a\in A,\ r\in {\mathbb Q}\cap [0,1) \}$. Later, we will see that $\operatorname{cl}(A)=\operatorname{dcl}_{{\mathcal M}_1}(A)=\operatorname{acl}_{{\mathcal M}_1}(A)$ in the home sort of ${\mathcal M}_1$ and $\operatorname{cl}(A)=\operatorname{acl}_{{\mathcal M}_2}(A)$ in the home sort of ${\mathcal M}_2$. It is also easy to see that $\operatorname{cl}(A)$ is a substructure of both ${\mathcal M}_1$ and ${\mathcal M}_2$.
\end{definition}
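For concrete intuition, the closure operator $\operatorname{cl}$ can be modelled on the rational circle ${\mathbb Q}/{\mathbb Z}$, where $g_r$ is rotation by $r$. The following Python sketch uses a fixed denominator $N$ as a finite stand-in for ${\mathbb Q}\cap[0,1)$; the names \texttt{g}, \texttt{cl} and the parameter \texttt{N} are illustrative assumptions, not part of the formal development.

```python
from fractions import Fraction

def g(r, a):
    # the rotation g_r: a point a of the circle Q/Z moved by a rational angle r
    return (a + r) % 1

def cl(A, N=12):
    # rotation closure of a finite set A under the rotations g_{i/N}, i < N;
    # a finite stand-in for cl(A) = { g_r(a) : a in A, r in Q cap [0,1) }
    return {g(Fraction(i, N), a) for a in A for i in range(N)}
```

Under this reading, $\operatorname{cl}$ is idempotent and $\operatorname{cl}(\{a\})$ is the orbit of $a$ under the chosen group of rotations.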
\begin{theorem}\label{QE_M_1}
The theory of ${\mathcal M}_1$ has quantifier-elimination.
\end{theorem}
\begin{proof}
Take two small subsets $A,B\subset M$ such that $A=\operatorname{cl}(A)$ and $B=\operatorname{cl}(B)$ in $M$, and let $f$ be an isomorphism from $A$ onto $B$. Take $a\in M\setminus A$. We will find $b\in M\setminus B$ such that the map $f\cup\{(a,b)\}$ extends to an embedding from $\operatorname{cl}(Aa)$ to $\operatorname{cl}(Bb)$ in ${\mathcal M}_1$. Then the quantifier elimination of $\operatorname{Th}({\mathcal M}_1)$ follows from a standard back-and-forth argument. We divide $A$ into two parts $A_0 :=\{x\in A|\ S(a,x,g_{1/2}(a))\}$ and $A_1 :=\{x\in A|\ S(g_{1/2}(a),x,a)\}$. Then $B$ is also divided into two parts $B_0=f(A_0)$ and $B_1=f(A_1)$. Take an arbitrary $b\in M$ such that for all $y_0\in B_0,\ y_1\in B_1$, $S(y_1,b,y_0)$. Then $b$ is a desired element.
\end{proof}
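The back-and-forth step above splits $A$ by the half-circle determined by $a$. On the concrete circle, $S(x,y,z)$ can be read as ``$y$ lies strictly inside the arc from $x$ to $z$ in the positive direction''. A minimal sketch of this reading (the orientation convention and the function name \texttt{S} are our illustrative assumptions):

```python
from fractions import Fraction

def S(x, y, z):
    # y lies strictly inside the positively oriented arc from x to z on Q/Z
    # (the orientation convention is chosen for illustration)
    return 0 < (y - x) % 1 < (z - x) % 1
```

With $a=0$, the points of the first half-circle satisfy $S(a,x,g_{1/2}(a))$ and those of the second satisfy $S(g_{1/2}(a),x,a)$, as in the proof.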
\begin{theorem}\label{wei_M_1}
The theory of ${\mathcal M}_1$ weakly eliminates imaginaries, and is rosy of thorn $U$-rank $1$.
\end{theorem}
\begin{proof}
In the structure ${\mathcal M}_1$, by quantifier elimination, there is no infinite descending chain of algebraic closures of finite sets. Thus it is enough to show that if $X\subset M^m$ is $A_0(=\operatorname{acl}(A_0))$- and $A_1(=\operatorname{acl}(A_1))$-definable, then $X$ is $A_0\cap A_1(=B)$-definable. Then $X$ has a smallest algebraically closed set over which it is definable, and $\operatorname{Th}({\mathcal M}_1)$ has weak elimination of imaginaries.
Let $A_i=\operatorname{acl}(A_i)=\operatorname{cl}(A_i)$ for $i=0,1$ and let $B=A_0\cap A_1$. Let $X\subset M^m$ be $A_i$-definable in ${\mathcal M}_1$ for $i=0,1$. Then $X$ is definable over $A_i$ for $i=0,1$ in some reduct ${\mathcal M}_{1,n}$. Since ${\mathcal M}_{1,n}$ weakly eliminates imaginaries, $X$ is definable over $B$ in ${\mathcal M}_{1,n}$, defined by a formula $\psi(\bar{x},\bar{b})$ for some $\bar{b}\subset B$. Then by the same formula $\psi(\bar{x},\bar{b})$, $X$ is $B$-definable in ${\mathcal M}_1$.
By quantifier elimination, it is easily verified that the algebraic closure in ${\mathcal M}_1$ gives a trivial pregeometry. Thus by Fact \ref{character_rosy}, $\operatorname{Th}({\mathcal M}_1)$ is a rosy theory having thorn $U$-rank $1$.
\end{proof}
\noindent There is only one $1$-strong type over the empty set, $p_0(x)=\{x=x\}$, in ${\mathcal M}_1$.\\
Next we show that $\operatorname{Th}({\mathcal M}_2)$ has geometric elimination of imaginaries. We consider an expansion ${\mathcal N}_2=(M,U_{<r},U_{=r},g_{1/n}|\ r\in (0,1/2]\cap {\mathbb Q},\ n>0)$ of ${\mathcal M}_2$, obtained by adding the rotation functions $g_{1/n}$, and the reducts ${\mathcal N}_{2,k}=(M,U_{<r},U_{=r},g_{1/k}|\ r\in (0,1/2]_k)$ of ${\mathcal N}_2$, where $(0,1/2]_k=\{i/k|\ 0< i/k\le 1/2, i\in {\mathbb N}\}$, for each $k\ge 1$.
\begin{theorem}\label{wei_N_2}
\begin{enumerate}
\item For each $k\ge 3$, $\operatorname{Th}({\mathcal N}_{2,k})$ has quantifier elimination and weak elimination of imaginaries, and it is $\omega$-categorical.
\item $\operatorname{Th}({\mathcal N}_2)$ has quantifier elimination and weak elimination of imaginaries.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Fix $k\ge 3$. The binary relations $U_{<r}$ and $U_{=r}$ for $r\in (0,1/2]_k$ are $\emptyset$-definable in ${\mathcal M}_{1,k}$ by quantifier-free formulas. So, by Fact \ref{wei_M_1_reduct}, it is enough to show that the ternary relation $S(x,y,z)$ is $\emptyset$-definable in ${\mathcal N}_{2,k}$ by a quantifier-free formula. Write $g_{i/k}(x)<y<g_{(i+1)/k}(x)$ for the formula $U_{<1/k}(g_{i/k}(x),y)\wedge U_{<1/k}(g_{(i+1)/k}(x),y)$. Consider the following quantifier-free formula
$$
\begin{array}{c c l}
S_k'(x,y,z)&\equiv&\bigvee_{i=1}^{k-1}\limits [ z=g_{i/k}(x)\rightarrow (\bigvee_{j=1}^{i-1}\limits y=g_{j/k}(x)\\
&\vee& \bigvee_{j=0}^{i-1}\limits g_{j/k}(x)<y<g_{(j+1)/k}(x))]\\
&\vee& \bigvee_{i=0}^{k-1}\limits[ g_{i/k}(x)<z<g_{(i+1)/k}(x)\rightarrow\\
&&( g_{i/k}(x)<y<g_{(i+1)/k}(x)\wedge g_{-1/k}(z)<y<z)\\
&\vee&\bigvee_{j=0}^{i-1}\limits g_{j/k}(x)<y<g_{(j+1)/k}(x)],
\end{array}
$$
and this formula defines $S(x,y,z)$ in ${\mathcal N}_{2,k}$.\\
(2) Consider the ternary relation $S'_k$ for some $k\ge 3$. Then $S'_k$ also defines the ternary relation $S$ in ${\mathcal N}_2$. Thus, just as in the relationship between ${\mathcal N}_{2,k}$ and ${\mathcal M}_{1,k}$, by Theorems \ref{QE_M_1} and \ref{wei_M_1}, the theory of ${\mathcal N}_2$ has quantifier elimination and weakly eliminates imaginaries.
\end{proof}
\begin{theorem}\label{wei_M_2}
The theory of ${\mathcal M}_2$ geometrically eliminates imaginaries.
\end{theorem}
\begin{proof}
First, we define equivalence relations $E_n$ on $M^n$, which are $\emptyset$-definable in ${\mathcal M}_2$. Let $n\ge 1$. Define a formula $$C_n(z_1,\ldots,z_n)\equiv \bigwedge_{1\le i<j\le n}\limits z_i\neq z_j\wedge \bigwedge_{1\le i<j\le n}\limits \neg U_{<1/n}(z_i,z_{j})$$ so that it defines the set $\{ z,g_{1/n}(z),\ldots, g_{(n-1)/n}(z) \}$. Next, we define the following formula $$\begin{array}{c c l}
I_n(x,y;z_1,\ldots,z_n)&\equiv&C_n(\bar{z})\\
&\wedge&[\bigvee_{i=1}^{n}\limits (y=z_i)\rightarrow(U_{<1/n}(z_i,x)\wedge U_{<1/n}(z_{i+1},x))]\\
&\vee&[\bigvee_{i=1}^{n}\limits (U_{<1/n}(z_i,x)\wedge U_{<1/n}(z_{i+1},x))\rightarrow\\
&&\exists y_2\cdots y_n(C_n(y,y_2,\ldots,y_n)\\
&\wedge&(U_{<1/n}(z_{i+1},y_2)\wedge U_{<1/n}(z_{i+2},y_2))\\
&\wedge&(U_{<1/n}(y,x)\wedge U_{<1/n}(y_2,x)))]
\end{array}
$$
where $z_{n+1}=z_1$ and $z_{n+2}=z_2$; this defines the sets $\{(a,b)|\ b<a<g_{1/n}(b)\}$ or $\{(a,b)|\ g_{-1/n}(b)<a<b\}$ according as $z_i=g_{(i-1)/n}(z_1)$ or $z_i=g_{-(i-1)/n}(z_1)$ respectively. Finally, consider the equivalence relation $E_n$ on $M^n$ defined as follows : $$E_n(\bar{z},\bar{z}')\equiv \forall xy(I_n(x,y;\bar{z})\leftrightarrow I_n(x,y;\bar{z}')).$$ Then $|M^n/E_n|=3$ and each class is represented by one of the following kinds of tuples :
$$
\begin{cases}
(z,g_{1/n}(z),\ldots,g_{(n-1)/n}(z))\\
(z,g_{-1/n}(z),\ldots,g_{-(n-1)/n}(z))\\
\mbox{otherwise}
\end{cases}
$$
Let ${\mathcal L}({\mathcal M}_2)=\{U_{<r},U_{=r}\}$ be the language of ${\mathcal M}_2$ and ${\mathcal L}({\mathcal N}_2)={\mathcal L}({\mathcal M}_2)\cup\{g_{1/n}\}_{n\ge 1}$ be the language of ${\mathcal N}_2$. Expand ${\mathcal M}_2$ to ${\mathcal M}_2'=(M,U_{<r},U_{=r},a_n,b_n,c_n)_{n\ge 1}$ by adding imaginary elements such that $\{a_n,b_n,c_n\}=M^n/E_n$, where $a_n=[z,g_{1/n}(z),\ldots,g_{(n-1)/n}(z)]_{E_n}$ and $b_n=[z,g_{-1/n}(z),\ldots,g_{-(n-1)/n}(z)]_{E_n}$ for each $n\ge 1$. The language ${\mathcal L}({\mathcal M}_2')$ of ${\mathcal M}_2'$ is the union of ${\mathcal L}({\mathcal M}_2)$ and $\{S_{E_n},f_{E_n}\}_{n\ge 1}$, where $S_{E_n}$ is interpreted as the sort for $M^n/E_n$ and $f_{E_n}$ as the canonical function from $M^n$ into $S_{E_n}$ such that for $\bar{a},\bar{b}\in M^n$, $f_{E_n}(\bar{a})=f_{E_n}(\bar{b})$ if and only if $E_n(\bar{a},\bar{b})$.
\begin{claim}\label{g_n_definalbe_M_2'}
Each function $g_{1/n}$ is definable in ${\mathcal M}_2'$.
\end{claim}
\begin{proof}
The two functions $g_1$ and $g_{1/2}$ are already definable in ${\mathcal M}_2$, and so in ${\mathcal M}_2'$ as well. Let $n\ge 3$. For each $a\in M$, there is exactly one element $a'$ in $M$ such that $\exists x_3,\ldots,x_n (C_n(a,a',x_3,\ldots,x_n)\wedge f_{E_n}(a,a',x_3,\ldots,x_n)=a_n)$, and this $a'$ is $g_{1/n}(a)$. Therefore, the graph of $g_{1/n}$ is defined by the formula $\exists x_3,\ldots,x_n (C_n(x,y,x_3,\ldots,x_n)\wedge f_{E_n}(x,y,x_3,\ldots,x_n)=a_n)$.
\end{proof}
\begin{claim}\label{equiv_M_2'eq_N_2eq}
$({\mathcal M}_2')^{eq}={\mathcal N}_2^{eq}$.
\end{claim}
\begin{proof}
For each $e\in ({\mathcal M}_2')^{eq}$, since $M^n/E_n\subset {\mathcal M}_2^{eq}$, $e$ is in $({\mathcal M}_2^{eq})^{eq}={\mathcal M}_2^{eq}$. On the other hand, ${\mathcal M}_2$ is a reduct of ${\mathcal N}_2$, so $e\in {\mathcal N}_2^{eq}$. Conversely, each $e'\in {\mathcal N}_2^{eq}$ is clearly in $({\mathcal N}_2')^{eq}$, where ${\mathcal N}_2'=(M,U_{<r},U_{=r},g_{1/n},a_n,b_n,c_n)$. By Claim \ref{g_n_definalbe_M_2'}, $({\mathcal N}_2')^{eq}=({\mathcal M}_2')^{eq}$ and $e'\in ({\mathcal M}_2')^{eq}$.
\end{proof}
Take an arbitrary $e\in {\mathcal M}_2^{eq}$. Since ${\mathcal M}_2$ is a reduct of ${\mathcal N}_2$, $e$ is in ${\mathcal N}_2^{eq}$. By weak elimination of imaginaries of $\operatorname{Th}({\mathcal N}_2)$, there is a finite tuple $\bar{b}\subset M$ such that $$e\in \operatorname{dcl}_{{\mathcal N}_2}^{eq}(\bar{b})\mbox{ and } \bar{b}\in\operatorname{acl}_{{\mathcal N}_2}^{eq}(e).$$ By Claim \ref{equiv_M_2'eq_N_2eq}, $$e\in \operatorname{dcl}_{{\mathcal M}_2'}^{eq}(\bar{b})\mbox{ and } \bar{b}\in\operatorname{acl}_{{\mathcal M}_2'}^{eq}(e).$$ Since $M^n/E_n\subset \operatorname{acl}_{{\mathcal M}_2}^{eq}(\emptyset)$, $$e\in \operatorname{acl}_{{\mathcal M}_2}^{eq}(\bar{b})\mbox{ and } \bar{b}\in\operatorname{acl}_{{\mathcal M}_2}^{eq}(e).$$ Therefore, each imaginary in ${\mathcal M}_2^{eq}$ is interalgebraic with a finite tuple in the home sort of ${\mathcal M}_2$, and $\operatorname{Th}({\mathcal M}_2)$ has geometric elimination of imaginaries.
\end{proof}
\noindent Now we show that $\operatorname{Th}({\mathcal M}_2)$ is a rosy theory having thorn-$U$ rank $1$.
\begin{theorem}
The theory of ${\mathcal M}_2$ is a rosy theory of thorn-$U$ rank $1$.
\end{theorem}
\begin{proof}
By Fact \ref{character_rosy} and Theorem \ref{wei_M_2}, it is enough to show that the algebraic closure in the home sort gives a trivial pregeometry in ${\mathcal M}_2$. For any $A\subset M$, it is clear that $\operatorname{cl}(A)\subset \operatorname{acl}_{{\mathcal M}_2}(A)$. On the other hand, from Theorem \ref{wei_N_2}, $\operatorname{acl}_{{\mathcal N}_2}(A)=\operatorname{cl}(A)$. Since ${\mathcal M}_2$ is a reduct of ${\mathcal N}_2$, $\operatorname{acl}_{{\mathcal M}_2}(A)\subset \operatorname{acl}_{{\mathcal N}_2}(A)$, and thus $\operatorname{cl}(A)=\operatorname{acl}_{{\mathcal M}_2}(A)$. So, the algebraic closure in ${\mathcal M}_2$ gives a trivial pregeometry.
\end{proof}
\noindent The theory of ${\mathcal M}_2$ does not have quantifier elimination, but there is only one $1$-type over $\operatorname{acl}^{eq}(\emptyset)(\neq \emptyset)$, namely $q_0(x)=\{x=x\}$.
\subsection{Computation of $H_1$ in ${\mathcal M}_1$}
In Section 2, we saw that the first homology groups are determined by the endpoint pairs of $1$-shells. In ${\mathcal M}_1$, for a fixed $a\in M$, we observe that $S_1(a)$ looks like a circle with a rotation. From this observation, we compute the first homology group of $p_0$ in ${\mathcal M}_1$ :
\begin{theorem}\label{h1_in_M_1}
In ${\mathcal M}_1$, the first homology group of $p_0$ is isomorphic to ${\mathbb R}/{\mathbb Z}$.
\end{theorem}
We start by defining a distance-like notion between two points of $M$. For a subset $A$ of ${\mathbb R}$, we write $A_{{\mathbb Q}}$ for $A\cap {\mathbb Q}$.
\begin{definition}\label{distance}
Let $a,b\in M$ be two elements. We define the {\em $S$-distance} of $b$ from $a$, denoted by $\operatorname{Sd}(a,b)$, as follows : For $r\in {\mathbb Q}$ and $s<t\in [0,1)_{{\mathbb Q}}$,
\begin{enumerate}
\item $\operatorname{Sd}(a,b)=r$ if $b=g_r(a)$;
\item $s<\operatorname{Sd}(a,b)<1$ if ${\mathcal M}_1 \models S(g_s(a),b,a)$;
\item $0<\operatorname{Sd}(a,b)<t$ if ${\mathcal M}_1 \models S(a,b,g_t(a))$; and
\item $s<\operatorname{Sd}(a,b)<t$ if ${\mathcal M}_1 \models S(g_s(a),b,g_t(a))$.
\end{enumerate}
\noindent For $r\in [0,1)\setminus {\mathbb Q}$, we write $\operatorname{Sd}(a,b)=r$ if for all $s<t\in [0,1)_{{\mathbb Q}}$ with $s<r<t$, $s<\operatorname{Sd}(a,b)<t$. Let $r \in (0,1)_{{\mathbb Q}}$. We write $\operatorname{Sd}(a,b)=r-\epsilon$ if for all $s\in (0,1)_{{\mathbb Q}}$ with $s<r$, $s<\operatorname{Sd}(a,b)<r$. We write $\operatorname{Sd}(a,b)=r+\epsilon$ if for all $t\in (0,1)_{{\mathbb Q}}$ with $r< t$, $r<\operatorname{Sd}(a,b)<t$. We write $\operatorname{Sd}(a,b)=\epsilon$ if for all $s\in (0,1)_{{\mathbb Q}}$, $0<\operatorname{Sd}(a,b)<s$. We write $\operatorname{Sd}(a,b)=1-\epsilon$ if for all $s\in (0,1)_{{\mathbb Q}}$, $s<\operatorname{Sd}(a,b)<1$.
\end{definition}
\noindent For a subset $A\subset {\mathbb Q}$, we define $A^*:=A\cup\{x\pm\epsilon| x\in A \}$. The $S$-distance takes values in $[0,1)\cup[0,1)^*_{{\mathbb Q}}\cup\{1-\epsilon\}$. In Appendix B, using Dedekind cuts, we develop multivalued operations $+^*,\times^*,-^*$ making ${\mathbb R}\cup{\mathbb Q}^*$ a ring-like structure. Now we extend the values of the $S$-distance to ${\mathbb R}\cup{\mathbb Q}^*$. Since $g_k=\operatorname{id}$ for all $k\in {\mathbb Z}$, we write $\operatorname{Sd}(a,b)=r$ for $r\in {\mathbb R}\cup{\mathbb Q}^*$ if $\operatorname{Sd}(a,b)=r'$, where $r'$ is the unique number in $[0,1)\cup [0,1)_{{\mathbb Q}}^*$ such that $r\in r'+^*n$ for some $n\in {\mathbb Z}$. This value depends only on the type of $(a,b)$, that is, for $a_0,a_1,b_0,b_1\in M$, if $a_0b_0\equiv a_1b_1$, then $\operatorname{Sd}(a_0,b_0)=\operatorname{Sd}(a_1,b_1)$ as values in $[0,1)\cup[0,1)_{{\mathbb Q}}^*$. Then the following fact is easily verified :
\begin{fact}\label{direct-distance}
Let $a,b,c\in M$.
\begin{enumerate}
\item $\operatorname{Sd}(b,a)=1-^*\operatorname{Sd}(a,b)$.
\item $\operatorname{Sd}(a,c)=\operatorname{Sd}(a,b)+^*\operatorname{Sd}(b,c)$ modulo ${\mathbb Z}^*$, that is, $\operatorname{Sd}(a,b)+^*\operatorname{Sd}(b,c)-^*\operatorname{Sd}(a,c)\subset {\mathbb Z}^* $.
\end{enumerate}
\noindent By (1), $\operatorname{Sd}$ is not symmetric, that is, in general $\operatorname{Sd}(a,b)\neq\operatorname{Sd}(b,a)$ for $a,b\in M$, and so it is called a directed distance.
\end{fact}
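For exact points on the concrete circle the infinitesimal cases do not arise, and the $S$-distance is just the directed arc length. A minimal computational sketch under this concrete reading (the function name \texttt{Sd} mirrors the notation; the identification with ${\mathbb Q}/{\mathbb Z}$ is our assumption):

```python
from fractions import Fraction

def Sd(a, b):
    # directed S-distance on the concrete circle Q/Z: the arc length from a to b
    # in the positive direction (exact points only, so no epsilon cases arise)
    return (b - a) % 1
```

Fact \ref{direct-distance} then reads as the identities $\operatorname{Sd}(b,a)=1-\operatorname{Sd}(a,b)$ and $\operatorname{Sd}(a,b)+\operatorname{Sd}(b,c)\equiv\operatorname{Sd}(a,c)\ (\mathrm{mod}\ 1)$.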
\noindent Now we assign to each $1$-simplex $f$ a value $n_f$ in ${\mathbb R}\cup{\mathbb Q}^*$ as follows : There are $a,b\in M$ such that $[a,b]=f$, and we define $n_f$ to be $\operatorname{Sd}(a,b)$. Then $n_f$ is well-defined, that is, it does not depend on the choice of $a,b$ : if $a_i,b_i\in M$ satisfy $[a_0,b_0]=[a_1,b_1]=f$, then $a_0 b_0\equiv a_1 b_1$ and $\operatorname{Sd}(a_0,b_0)=\operatorname{Sd}(a_1,b_1)$. We also assign to each $1$-shell $s=f_{01}+f_{12}-f_{02}$ a multivalue $n_s$ in ${\mathbb R}\cup{\mathbb Q}^*$ as follows : $n_s=n_{f_{01}}+^*n_{f_{12}}-^* n_{f_{02}}$. This value is also related to the distance between endpoints : if $(a,a')$ is an endpoint pair of $s$, then $\operatorname{Sd}(a,a')=n_s$ modulo ${\mathbb Z}^*$. Using this assignment of $1$-shells, we give a necessary and sufficient condition for a $1$-shell to be the boundary of a $2$-chain :
\begin{theorem}\label{NSfor[s]=0inM}
A 1-shell $s=f_{12}-f_{02}+f_{01}$ is a boundary of a 2-chain in $p$ if and only if
\begin{center}
$n_s=n_{01}+^*n_{12}+^*n_{20}\subset{\mathbb Z}^*$,
\end{center}
where $n_{01}=n_{f_{01}},\ n_{12}=n_{f_{12}},\ n_{20}=-^* n_{f_{02}}$. Moreover, this is equivalent to the two endpoints of $s$ being Lascar equivalent over $\emptyset$.
\end{theorem}
\begin{proof}
($\Rightarrow$) Let $\alpha$ be a 2-chain with boundary $s$. By Fact \ref{supp=3}, we may assume that $\alpha=\sum_{i=0}^{2n}\limits (-1)^i a_i$ is a chain-walk from $f_{01}$ to $-f_{02}$ with $\operatorname{supp}(\alpha)=\{0,1,2\}$. Let $[3]=\{0,1,2\}$. By Theorem \ref{chainwalk_representation}, there are independent elements $d_0,d_1,\cdots,d_{2n+2}$ such that
\begin{itemize}
\item $a_i([3])\equiv [d_0,d_{2i+1},d_{2i+2}]_{[3]}$ if $i$ is even, and $a_i([3])=[d_0,d_{2i+2},d_{2i+1}]_{[3]}$ if $i$ is odd;
\item For some even number $0 \le i_0\le 2n$, $[d_{2i_0+1},d_{2i_0+2}]_{\{1,2\}}=f_{12}(\{1,2\})$; and
\item For each even number $0 \le j_0\neq i_0 \le 2n$, there is an odd number $0\le j_1\le 2n$ such that $[d_{2j_0+1},d_{2j_0+2}]_{\{1,2\}}=[d_{2j_1+2},d_{2j_1+1}]_{\{1,2\}}$.
\end{itemize}
\noindent Then $\operatorname{Sd}(d_1,d_0)+^* \operatorname{Sd}(d_0,d_{2n+2})=-^* n_{01}-^* n_{20}$ and by Fact \ref{direct-distance} (1), $\operatorname{Sd}(d_1,d_2)+^* \operatorname{Sd}(d_2,d_3)+^*\cdots+^* \operatorname{Sd}(d_{2n+1},d_{2n+2})=n+^* n_{12}$. By Fact \ref{direct-distance} (2), $\operatorname{Sd}(d_1,d_{2n+2})\in (-^* n_{01}-^* n_{20})\cap (n+^* n_{12})$. Since $\{0\}^*=(-^* n_{01}-^* n_{20})+^*(n_{01}+^*n_{20})$ and $(n+^* n_{12})+^*(n_{01}+^*n_{20})=n+^*(n_{01}+^*n_{12}+^*n_{20})$, these two equations imply $n+^*(n_{01}+^*n_{12}+^*n_{20})\subset \{0\}^*$. Therefore, $n_{01}+^*n_{12}+^*n_{20}\subset \{0\}^*-^* \{n\}^*=\{-n\}^*$, where $n\in {\mathbb N}\subset {\mathbb Z}$.\\
($\Leftarrow$) Suppose $n_{01}+^*n_{12}+^*n_{20}\subset\{n\}^*$ for some $n\in {\mathbb Z}$. Then there are independent elements $a,b,c,a'$ such that
$$[a,b]_{\{0,1\}}=f_{01}(\{0,1\}),\ [b,c]_{\{1,2\}}=f_{12}(\{1,2\}),\ [a',c]_{\{0,2\}}=f_{02}(\{0,2\}).$$
So, $\operatorname{Sd}(a,b)=n_{01},\ \operatorname{Sd}(b,c)=n_{12},\ \operatorname{Sd}(c,a')=n_{20}$ modulo ${\mathbb Z}^*$, and $\operatorname{Sd}(a,a')\in n_{01}+^*n_{12}+^*n_{20}$ modulo ${\mathbb Z}^*$. Thus $\operatorname{Sd}(a,a')\in \{n\}^*$ modulo ${\mathbb Z}^*$, and hence $\operatorname{Sd}(a,a')\in \{0\}^*$. Since $\{a,a'\}$ is independent, $\operatorname{Sd}(a,a')\in \{0\}^*\setminus \{0\}$, that is, $\operatorname{Sd}(a,a')=\epsilon$ or $\operatorname{Sd}(a,a')=1-\epsilon$.
We will find $d\in M$ such that $a\equiv_d a'$ and $d\mathop{\smile \hskip -0.9em ^| \ } abca'$. Consider the partial type $\Sigma(x)=\{s<\operatorname{Sd}(x,a)<t\leftrightarrow s<\operatorname{Sd}(x,a')<t\}_{s<t\in [0,1]}$. Consider finitely many pairs $(s_i,t_i)$ with $s_i<t_i$ and the formula $$\bigwedge (s_i<\operatorname{Sd}(x,a)<t_i\leftrightarrow s_i<\operatorname{Sd}(x,a')<t_i).$$
We may assume $s_i \le s_0 < t_0 \le t_i$. It is enough to show that
$$s_0<\operatorname{Sd}(x,a)<t_0\leftrightarrow s_0<\operatorname{Sd}(x,a')<t_0$$
is satisfiable. Suppose $s_0<\operatorname{Sd}(x,a)<t_0$ is satisfiable. Then there is a pair $(s,t)$ such that $s_0<s<t<t_0$ and $s<\operatorname{Sd}(x,a)<t$ is satisfiable. Let $e\in M$ be independent from $a$ such that $s<\operatorname{Sd}(e,a)<t$ holds. Since $\operatorname{Sd}(a,a')\in \{0\}^*\setminus \{0\}$, there is a pair $(s',t')$ such that $s'<t'$, $s_0<s+s'<t+t'<t_0$, and $s'<\operatorname{Sd}(a,a')<t'$. Then, $s<\operatorname{Sd}(e,a)<t$ and $s'<\operatorname{Sd}(a,a')<t'$ imply $s+s'<\operatorname{Sd}(e,a')<t+t'$. Since
$s_0<s+s'<t+t'<t_0$, we get $s_0<\operatorname{Sd}(e,a')<t_0$, so $s_0<\operatorname{Sd}(x,a')<t_0$ is satisfiable. In the same way, $s_0<\operatorname{Sd}(x,a')<t_0\rightarrow s_0<\operatorname{Sd}(x,a)<t_0$.
Therefore, there is $d\in M$ realizing $\Sigma(x)$, so that $\operatorname{Sd}(d,a)=\operatorname{Sd}(d,a')$. Moreover, we may assume that $\{a,b,c,a',d\}$ is independent by taking $d\mathop{\smile \hskip -0.9em ^| \ }_{aa'}bc$. Then, there is a 2-chain $\alpha=a_0+a_1-a_2$, where
\begin{itemize}
\item $\operatorname{supp}(a_0)=\{0,1,3\}$, $\operatorname{supp}(a_1)=\{1,2,3\}$, and $\operatorname{supp}(a_2)=\{0,2,3\}$;
\item $a_0(\{0,1,3\})=[a ,b,d]_{\{0,1,3\}}$, $a_1(\{1,2,3\})=[b,c,d]_{\{1,2,3\}}$, and $a_2(\{0,2,3\})=[a' ,c,d]_{\{0,2,3\}}$;
\item $a_0^{01}=f_{01}$, $a_1^{12}=f_{12}$, and $a_2^{02}=f_{02}$; and
\item $a_0^{03}=a_2^{03}$, $a_0^{13}=a_1^{13}$, and $a_1^{23}=a_2^{23}$.
\end{itemize}
\noindent Then $\partial\alpha=f_{01}+f_{12}-f_{02}+(a_2^{03}-a_0^{03})$, and since $a_0^{03}=a_2^{03}$, $\partial \alpha=f_{01}+f_{12}-f_{02}=s$.\\
Now we show the `moreover' part. Let $a,a'$ be the endpoints of $s$. If $a\equiv^L a'$, then $s$ is the boundary of a $2$-chain and $n_s\subset \{n\}^*$ for some $n\in{\mathbb Z}$. Conversely, assume that $n_s\subset \{n\}^*$ for some $n\in {\mathbb Z}$. In the proof of right-to-left, we found $d\in M$ such that $a\equiv_d a'$. Consider the substructure generated by $d$ : $\operatorname{cl}(d)=\operatorname{dcl}(d)=\operatorname{acl}(d)$. Then $a\equiv_{\operatorname{cl}(d)} a'$ and $a\equiv^L a'$.
\end{proof}
Now we are ready to prove Theorem \ref{h1_in_M_1}. Define a map $\Phi:\ H_1(p_0)\rightarrow ({\mathbb R}\cup{\mathbb Q}^*)/{\mathbb Z}^*$ sending $[s]$ to $n_s+^*{\mathbb Z}^*$, where $({\mathbb R}\cup{\mathbb Q}^*)/{\mathbb Z}^*\cong {\mathbb R}/{\mathbb Z}$ is shown in Appendix B. It is easy to see that this map is surjective. Since for an endpoint pair $(a,b)$ of $s$, $n_s+^*{\mathbb Z}^*=\operatorname{Sd}(a,b)+^*{\mathbb Z}^*$, the map $\Phi$ depends only on the endpoint pairs of $1$-shells. By Theorem \ref{endpt_gpstr}, given $1$-shells $s_0$ and $s_1$ with endpoint pairs $(a,b)$ and $(b,c)$ respectively, there is a $1$-shell $s$ such that $[s]=[s_0]+[s_1]$ and $(a,c)$ is an endpoint pair of $s$; thus $\Phi$ is a group homomorphism. Moreover, by Theorem \ref{NSfor[s]=0inM}, it is injective, and therefore it is an isomorphism.
We have thus shown that the first homology group of $p_0$ is isomorphic to ${\mathbb R}/{\mathbb Z}$, and it is interesting that, by Theorem \ref{NSfor[s]=0inM}, this first homology group is exactly isomorphic to the Lascar group $\operatorname{Gal}_L ({\mathcal M}_1;\emptyset)$.
\subsection{Computation of $H_1$ in ${\mathcal M}_2$}
In this subsection, we compute the first homology group of $q_0$. Since $q_0$ is over $\operatorname{acl}^{eq}(\emptyset)$, we work in ${\mathcal M}_2^{eq}$ with constants for the elements of $\operatorname{acl}^{eq}(\emptyset)$. Since each element of $M^n/E_n$ is already in $\operatorname{acl}^{eq}(\emptyset)$, and by Theorem \ref{wei_N_2}, we may work in $({\mathcal M}_2')^{eq}={\mathcal N}_2^{eq}$. But as already noted in the proof of Theorem \ref{wei_N_2} (2), the ternary relation $S(x,y,z)$ is definable in ${\mathcal N}_2$, and thus we may work in ${\mathcal M}_1^{eq}$. So, by the previous subsection, the first homology group of $q_0$ is the same as that of $p_0$ and is isomorphic to ${\mathbb R}/{\mathbb Z}$, which is also the Lascar group over $\operatorname{acl}^{eq}(\emptyset)$ in ${\mathcal M}_2$.\\
\medskip
We conjecture that the kernel of the canonical epimorphism in Theorem \ref{canonical_epi} is generated by exactly the automorphisms described in Theorem \ref{kernel_canonical_epi}.
\begin{question}\label{qeustion_kernel}
Let $T=T^{eq}$ be a rosy theory, and ${\mathcal C}\models T$. For $A=\operatorname{acl}(A)$ and a strong type $p$ over $A$, let $\Psi:\ \operatorname{Aut}_A({\mathcal C})\rightarrow H_1(p)$ be the canonical epimorphism. Is the kernel of $\Psi$ exactly generated by the automorphisms in the following :
\begin{enumerate}
\item $\operatorname{Aut}_{\operatorname{acl}(aA)}({\mathcal C})$ for $a\models p$;
\item $\operatorname{Autf}({\mathcal C})$; and
\item $(\Psi')^{-1}([G,G])$, where $G$ and $\Psi'$ are in Theorem \ref{kernel_canonical_epi},
\end{enumerate}
That is, is $H_1(p)\cong (\operatorname{Gal}_L({\mathcal C};A)/<\bar{\sigma}:\ \sigma\in \operatorname{Aut}_{\operatorname{acl}(Aa)}({\mathcal C}),\ a\models p>)^{ab}$? Furthermore, if every algebraic closure is again a substructure, is $H_1(p)\cong \operatorname{Gal}_L({\mathcal C};A)^{ab}$?
\end{question}
\noindent Fortunately, the answer to Question \ref{qeustion_kernel} is yes for the known examples in \cite{GKK}, \cite{KKL} and for our two examples.
\section{Appendix}
\subsection{Appendix A} We show that the number of classes of a bounded type-definable equivalence relation on a strong type is $1$ or at least $2^{\aleph_0}$. Let $T(=T^{eq})$ be any theory in a language ${\mathcal L}$ and let ${\mathcal C}$ be a monster model of $T$. Fix a small subset $A=\operatorname{acl}(A)$ and choose a strong type $p(\bar{x})$ over $A$ (with $\bar{x}$ of possibly infinite length). We write $x$ for $\bar{x}$ by convention.
\begin{theorem}
Let $E(x,y)$ be a bounded $A$-type-definable equivalence relation on $p(x)$, and write $p/E$ for the set of $E$-classes on $p$. Then, \begin{center}
$|p/E|=1$ or $\ge 2^{\aleph_0}$.
\end{center}
\end{theorem}
\begin{proof}
By convention, we assume $A=\emptyset$. We divide into two cases : $p/E$ finite and $p/E$ infinite.\\
Case 1. $p/E$ is finite : Suppose $p/E$ is finite. Let $a_0,\cdots, a_n\models p$ be representatives of all the distinct classes in $p/E$, and let $\bar{a}=(a_0,a_1,\ldots ,a_n)$. First, we show that $E$ is relatively definable on $p$. Consider the two type-definable conditions $E(x,a_0)$ and $\bigvee_{i>0}E(x,a_i)$, which partition $p$; by compactness, $p(x)\models E(x,a_0)\leftrightarrow \phi(x,a_0)$ for some formula $\phi(x,z)$ such that $E(x,a_0)\models \phi(x,a_0)$. Since $a_0\equiv a_i$, $p(x)\models E(x,a_i)\leftrightarrow \phi(x,a_i)$ for all $i\le n$. Thus, $p(x)\wedge p(y)\models E(x,y)\leftrightarrow \psi(x,y;\bar{a})$, where $\psi(x,y;\bar{z})=\bigvee_i [\phi(x,z_i)\wedge\phi(y,z_i )]$. Since $E$ is invariant, $p(x)\wedge p(y)\wedge \psi(x,y;\bar{z})\wedge \operatorname{tp}(\bar{a})(\bar{z})\models \psi(x,y;\bar{a})(\leftrightarrow E(x,y))$. By compactness, there is a formula $\psi'(\bar{z})$ in $\operatorname{tp}(\bar{a})(\bar{z})$ such that $p(x)\wedge p(y)\wedge \psi(x,y;\bar{z})\wedge \psi'(\bar{z})\models \psi(x,y;\bar{a})$. Take $\theta(x,y)\equiv \exists \bar{z} (\psi'(\bar{z})\wedge \psi(x,y;\bar{z}))$. Then $p(x)\wedge p(y)\models \theta(x,y)\leftrightarrow \psi(x,y;\bar{a})$. Therefore $E$ is relatively definable on $p$ by the formula $\theta$. Moreover, we may assume that $\theta(x,y)$ is reflexive and symmetric by replacing it with $x=y\vee (\theta(x,y)\wedge \theta(y,x))$.
Next, we find a finite $\emptyset$-definable equivalence relation $E'$ such that $p(x)\wedge p(y)\models E(x,y)\leftrightarrow E'(x,y)$. Since $E$ is an equivalence relation,
$$\begin{array}{c c l}
p(x)\wedge p(y)\wedge p(z)&\models& \bigvee_i\limits \theta(x,a_i)\wedge\bigvee_i\limits \theta(y,a_i)\wedge\bigvee_i\limits \theta(z,a_i)\\
&\wedge& \bigwedge_i \limits (\theta(x,a_i)\rightarrow \bigwedge_{i\neq j}\limits \neg \theta(x,a_j))\\
&\wedge&\bigwedge_i \limits (\theta(y,a_i)\rightarrow \bigwedge_{i\neq j}\limits \neg \theta(y,a_j))\\
&\wedge&\bigwedge_i \limits (\theta(z,a_i)\rightarrow \bigwedge_{i\neq j}\limits \neg \theta(z,a_j)) \\
&\wedge&(\theta(x,y)\wedge\theta(y,z)\rightarrow \theta(x,z)).\ (*)
\end{array}$$
Again by compactness, there is $\delta(x)\in p(x)$ such that $$\delta(x)\wedge\delta(y)\wedge\delta(z)\models (*).$$ Define a definable equivalence relation $E'(x,y)\equiv [\neg \delta(x)\wedge \neg \delta(y)]\vee [\delta(x)\wedge \delta(y)\wedge \forall z(\delta(z)\rightarrow (\theta(z,x)\leftrightarrow \theta(z,y)))]$.
\begin{claim}\label{fin_equiv_relation}
The equivalence relation $E'$ is finite.
\end{claim}
\begin{proof}
First, $\neg\delta(x)$ defines an $E'$-class. We show that on $\delta$, the $E'$-classes are exactly of the form $\theta(x,a_i)\wedge \delta(x)$. By the choice of $\delta$, $\delta({\mathcal C})$ is partitioned by $\{\theta(x,a_i)\wedge \delta(x)\}_{i\le n}$.\\
1) We show that $\models \theta(x,a_i)\wedge \delta(x)\rightarrow E'(x,a_i)$ : Choose $b\models \theta(x,a_i)\wedge \delta(x)$ and take $c\models \delta(x)\wedge \theta(x,a_i)$. Since $\theta$ is transitive on $\delta$ and $\theta(b,a_i)$ holds, $\theta(c,b)$ holds. Conversely, if $d\models \delta(x)\wedge \theta(x,b)$, then by the transitivity of $\theta$ on $\delta$, $\theta(d,a_i)$ holds. Therefore, $E'(b,a_i)$ holds.\\
2) For $i\neq j$, $\neg E'(a_i,a_j)$ : Suppose that for some $i\neq j$, $E'(a_i,a_j)$ holds. Then $\theta(a_i,a_j)$ holds, which is impossible since $a_i,a_j\models p$ and $\theta$ and $E$ coincide on $p\times p$.\\
\noindent By 1) and 2), the $E'$-classes are of the form $\theta(x,a_i)\wedge \delta(x)$ or $\neg\delta(x)$, and $E'$ is a finite equivalence relation.
\end{proof}
\noindent From the proof of Claim \ref{fin_equiv_relation}, $E'$ and $E$ are the same equivalence relation on $p\times p$. Since $E'$ is finite and $p$ is a strong type, $p/E=p/E'$ and there is only one $E$-class in $p$.\\
Case 2. $p/E$ is infinite : Suppose $p/E$ is infinite. Let $\kappa=|p/E|$. If $E$ is definable, then by compactness, $|p/E|\ge \kappa'$ for any $\kappa'<|{\mathcal C}|$, so $E$ is not bounded. Hence $E$ is properly type-definable and $E(x,y)\equiv \bigwedge_{i<\lambda}\limits \phi_i(x,y)$, where each $\phi_i(x,y)$ is a formula and $\lambda$ is an infinite cardinal. We may assume that each $\phi_i(x,y)$ is reflexive and symmetric by taking $x=y\vee(\phi_i(x,y)\wedge \phi_i(y,x))$ instead of $\phi_i(x,y)$ for each $i<\lambda$, and that for $i<j<\lambda$, $\models \phi_j(x,y)\rightarrow\phi_i(x,y)$ $(\dagger)$, by taking $\phi_j(x,y)\wedge\phi_i(x,y)$. Moreover, by compactness, we may assume that for each $i<\lambda$, $\models \exists z(\phi_{i+1}(x,z)\wedge \phi_{i+1}(z,y))\rightarrow \phi_i(x,y)$ $(\ddagger)$. Let $\{a_k\models p\}_{k<\kappa}$ be a set of representatives of the $E$-classes.
\begin{claim}\label{inf_classes_in_phi}
For each $i<\lambda$ and $k<\kappa$, $\phi_i(x,a_k)({\mathcal C})$ contains infinitely many $E$-classes.
\end{claim}
\begin{proof}
Fix $i<\lambda$. By compactness, there are finitely many $k_0<k_1<\cdots<k_n$ such that $p\models \bigvee_j \phi_i(x,a_{k_j})$. By the pigeonhole principle, some $\phi_i(x,a_{k_l})$ contains infinitely many of the $a_k$'s. By $(\dagger)$ and $(\ddagger)$, $\phi_i(x,a_{k_l})$ contains infinitely many $E$-classes. Since $a_n\equiv a_m$ for $n,m<\kappa$ and $E$ is invariant, each $\phi_i(x,a_k)$ contains infinitely many $E$-classes.
\end{proof}
\begin{claim}\label{disj_phi_in_phi}
For each $i<\lambda$ and $k<\kappa$, there are $i<j<\lambda$ and $k_0,k_1<\kappa$ such that $$\models [(\phi_j(x,a_{k_0})\vee\phi_j(x,a_{k_1}))\rightarrow \phi_i(x,a_k)]\wedge [\neg \exists x (\phi_j(x,a_{k_0})\wedge \phi_j(x,a_{k_1}))].$$
\end{claim}
\begin{proof}
Fix $i<\lambda$ and $k<\kappa$. By Claim \ref{inf_classes_in_phi}, $\phi_i(x,a_k)$ contains infinitely many $E$-classes. Choose two $E$-classes in $\phi_i(x,a_k)$ and let $a_{k_0}$ and $a_{k_1}$ be representatives of these two classes respectively. Since $E(x,a_{k_0})({\mathcal C})$ and $E(x,a_{k_1})({\mathcal C})$ are disjoint, by compactness, for some $j>i$, $\phi_j(x,a_{k_0})({\mathcal C})$ and $\phi_j(x,a_{k_1})({\mathcal C})$ are disjoint, and we are done.
\end{proof}
\noindent From Claims \ref{inf_classes_in_phi} and \ref{disj_phi_in_phi} and the fact that the cofinality of $\lambda$ is at least $\aleph_0$, we get a binary tree ${\mathcal B} :\ 2^{< \omega} \rightarrow \lambda\times \kappa$ such that for each $b\in 2^{<\omega}$, ${\mathcal B}(b^{\frown}0)=(j,k_0)$ and ${\mathcal B}(b^{\frown}1)=(j,k_1)$, where if ${\mathcal B}(b)=(i,k)$, then $j<\lambda$ and $k_0,k_1<\kappa$ satisfy Claim \ref{disj_phi_in_phi} for $(i,k)$. Then for each $\tau \in 2^{\omega}$, we get a set of formulas $\{\phi_{i(\tau\upharpoonright n )}(x,a_{k(\tau\upharpoonright n)})\}$, where ${\mathcal B}(\tau\upharpoonright n)=(i(\tau\upharpoonright n ),k(\tau\upharpoonright n))$ for each $n\in \omega$. By the choice of ${\mathcal B}$, for $\tau_0\neq \tau_1 \in 2^{\omega}$, $\bigcap_n \phi_{i(\tau_0\upharpoonright n )}(x,a_{k(\tau_0 \upharpoonright n)})({\mathcal C})$ and $\bigcap_n \phi_{i(\tau_1 \upharpoonright n )}(x,a_{k(\tau_1\upharpoonright n)})({\mathcal C})$ are disjoint, and each contains an $E$-class. Thus, $p/E$ has at least $2^{\aleph_0}$ many elements.
\end{proof}
\subsection{Appendix B}
We show how to recover the real ordered group $({\mathbb R},+)$ from a dense linear order extending $({\mathbb Q},<)$ using Dedekind cuts. Consider a language ${\mathcal L}_{od,{\mathbb Q}}=\{<\}\cup\{r\}_{r\in{\mathbb Q}}$ and an ${\mathcal L}_{od,{\mathbb Q}}$-structure ${\mathcal U}=(U,<,r : r\in {\mathbb Q})$ which is a saturated dense linear order extending $({\mathbb Q},<)$. Then $\operatorname{Th}({\mathcal U})$ has quantifier elimination.
Consider the $1$-types over the empty set, $S_1(\emptyset)(=S_1)$. By quantifier elimination, any $1$-type $p$ has one of the following forms, for $r\in {\mathbb Q}$ and $r'\in {\mathbb R}\setminus{\mathbb Q}$:
\begin{enumerate}
\item $\{x=r\}$;
\item $\{l<x<r|\ l<r\}$;
\item $\{r<x<u|\ r<u\}$; and
\item $\{l<x<u|\ l<r'<u\}$.
\end{enumerate}
For a subset $S\subset {\mathbb Q}$, we write $S^*:=S\cup\{s\pm\epsilon|\ s\in S\}$, where $\epsilon$ is regarded as an infinitesimal. We can then identify $S_1$ with the set ${\mathbb R}\cup {\mathbb Q}^*$ in the following way:
For $r\in {\mathbb Q}$ and $r'\in {\mathbb R}\setminus{\mathbb Q}$,
\begin{enumerate}
\item $\{x=r\}\leftrightarrow r$;
\item $\{l<x<r|\ l<r\}\leftrightarrow (r-\epsilon)$;
\item $\{r<x<u|\ r<u\}\leftrightarrow (r+\epsilon)$; and
\item $\{l<x<u|\ l<r'<u\}\leftrightarrow r'$.
\end{enumerate}
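For illustration (this specific instance is not part of the construction above, but follows directly from it): around the rational point $1$ the identification distinguishes three types, while an irrational cut such as the one at $\sqrt{2}$ determines a single type:
\[
\{x=1\}\leftrightarrow 1,\quad \{l<x<1|\ l<1\}\leftrightarrow 1-\epsilon,\quad \{1<x<u|\ 1<u\}\leftrightarrow 1+\epsilon,\quad \{l<x<u|\ l<\sqrt{2}<u\}\leftrightarrow \sqrt{2}.
\]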
Next we define a group-like structure on $S_1$. Define a plus-like operation $+^* : S_1\times S_1 \rightarrow {\mathcal P}(S_1)$ as follows :
$$p_1 +^* p_2:=\{p|\ p\models (l_1+l_2<x<u_1+u_2),\ p_i\models l_i<x<u_i\},$$
and define a minus-like operation $-^* : S_1 \rightarrow S_1$ as follows :
$$(-^* p):=\{-u<x<-l|\ p\models l<x<u\}.$$
We define a composition of plus operation as follows :
$$(p_1 +^* p_2 )+^* p_3:=\bigcup\limits_{p\in p_1 +^* p_2} p+^* p_3,$$
and
$$p_1 +^* (p_2 +^* p_3):=\bigcup\limits_{p\in p_2 +^* p_3} p_1+^* p.$$
Then $+^*$ and $-^*$ are commutative, associative, and distributive. Moreover, for any $p_1,\cdots,p_k\in S_1$ with $k\ge 1$,
$$ |p_1+^* \cdots +^* p_k|\le 3.$$
We write $p_1 -^* p_2 $ for $p_1+^* (-^*p_2)$. These two operations carry over naturally to ${\mathbb R}\cup{\mathbb Q}^*$, where they are given as follows:
\begin{enumerate}
\item
\begin{enumerate}
\item If both $r_1$ and $r_2$ are in ${\mathbb R}$, set $r=r_1+r_2$; then
\[r_1+^* r_2:= \left \{
\begin{array}{ll}
\{r\} & \mbox{if $r\in {\mathbb R}\setminus{\mathbb Q}$}\\
\{r\} & \mbox{if $r\in {\mathbb Q}$ and $r_1,r_2\in {\mathbb Q}$}\\
\{r-\epsilon,\ r,\ r+\epsilon\} & \mbox{if $r\in {\mathbb Q}$ and $r_1,r_2\notin {\mathbb Q}$}
\end{array}
\right.
\];
\item If $r_1\in {\mathbb R}\setminus {\mathbb Q}$ and $r_2=q\pm\epsilon \in {\mathbb Q}^*$, then $r_1+^* r_2:=\{r_1+q\}$;
\item If $r_1\in {\mathbb Q}$ and $r_2=q\pm\epsilon \in {\mathbb Q}^*$, then $r_1+^* r_2:=\{(r_1+q)\pm\epsilon\}$;
\item If $r_1=p\pm\epsilon$ and $r_2=q\pm\epsilon\in {\mathbb Q}^*$, then $r_1+^* r_2:=\{(p+q)\pm\epsilon\}$;
\item If $r_1=p\pm\epsilon$ and $r_2=q\mp\epsilon\in {\mathbb Q}^*$, then $r_1+^* r_2:=\{(p+q)-\epsilon,(p+q),(p+q)+\epsilon\}$.
\end{enumerate}
\item
\begin{enumerate}
\item If $r_1\in {\mathbb R}$, then $-^*r_1:=-r_1$;
\item If $r_1=p\pm\epsilon \in {\mathbb Q}^*$, then $-^*r_1:=-p\mp\epsilon$.
\end{enumerate}
\end{enumerate}
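For example, computing directly from the cases just listed (a worked illustration, not new content): for the irrational points $\sqrt{2}$ and $1-\sqrt{2}$, whose sum is the rational $1$,
$$\operatorname{tp}(\sqrt{2})+^*\operatorname{tp}(1-\sqrt{2})=\{1-\epsilon,\ 1,\ 1+\epsilon\},$$
while for elements of ${\mathbb Q}^*$,
$$(1+\epsilon)+^*(2+\epsilon)=\{3+\epsilon\}\quad\mbox{and}\quad (1+\epsilon)+^*(2-\epsilon)=\{3-\epsilon,\ 3,\ 3+\epsilon\},$$
in accordance with the bound $|p_1+^*\cdots+^*p_k|\le 3$.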
Now we induce a group structure from $(S_1,+^*,-^*)$. Define an equivalence relation $\equiv_0$ on $S_1$ by
$$p_1\equiv_0 p_2\ \mbox{iff}\ p_1-^* p_2 \subset \{0-\epsilon,\ 0,\ 0+\epsilon\},$$
and denote by $[p]_0$ the equivalence class of $p\in S_1$.
Since $\{0-\epsilon,\ 0,\ 0+\epsilon\}$ is closed under $+^*$ and $-^*$, the operations $+^*$ and $-^*$ descend to $S_1/\equiv_0$. Then $(S_1/\equiv_0,+^*,-^*,[\operatorname{tp}(0)]_0 )$ is a group; in fact it is isomorphic to $({\mathbb R},+,-,0)$.
\begin{theorem}
$(S_1/\equiv_0,+^*,-^*,[\operatorname{tp}(0)]_0)\cong ({\mathbb R},+,-,0)$.
\end{theorem}
\noindent Similarly, define an equivalence relation $\equiv_{{\mathbb Z}}$ on $S_1$ by
$$p_1\equiv_{{\mathbb Z}} p_2\ \mbox{iff}\ p_1 -^* p_2 \subset {\mathbb Z}^*,$$
and denote by $[p]_{{\mathbb Z}}$ the equivalence class.
As with $\equiv_0$, the set ${\mathbb Z}^*$ is closed under $+^*$ and $-^*$, so $+^*$ and $-^*$ descend to $S_1/\equiv_{{\mathbb Z}}$, and $(S_1/\equiv_{{\mathbb Z}},+^*,-^*,[\operatorname{tp}(0)]_{{\mathbb Z}} )$ is isomorphic to $({\mathbb R}/{\mathbb Z},+,-,0)$ as a group.
\begin{theorem}
$(S_1/\equiv_{{\mathbb Z}},+^*,-^*,[\operatorname{tp}(0)]_{{\mathbb Z}} )\cong ({\mathbb R}/{\mathbb Z},+,-,0)$.
\end{theorem}
These two equivalence relations $\equiv_0$ and $\equiv_{{\mathbb Z}}$ are likewise defined on ${\mathbb R}\cup {\mathbb Q}^*$, and
$$({\mathbb R}\cup {\mathbb Q}^*)/\equiv_0 \cong {\mathbb R}\ \mbox{and}\ ({\mathbb R}\cup {\mathbb Q}^*)/\equiv_{{\mathbb Z}} \cong {\mathbb R}/{\mathbb Z}.$$
Moreover, we can define a multiplication-like operation $\times^*$ on $S_1$ analogously to the plus-like operation $+^*$. Note that for $r_0\in {\mathbb Q}$ and $r\in {\mathbb R}\cup{\mathbb Q}^*$,
$$\operatorname{tp}(r_0)\times^* \operatorname{tp}(r) := \left\{
\begin{array}{ll}
\operatorname{tp}(r_0r) & \mbox{if } r\in {\mathbb R}, \\
\operatorname{tp}(r_0q+\epsilon) & \mbox{if } r=q+\epsilon,\ q\in {\mathbb Q}, \\
\operatorname{tp}(r_0q-\epsilon) & \mbox{if } r=q-\epsilon,\ q\in {\mathbb Q}.
\end{array}
\right.$$
These plus-, minus-, and multiplication-like operations yield the isomorphisms
$$(S_1/\equiv_0,[\operatorname{tp}(0)]_0,[\operatorname{tp}(1)]_0,+^*,-^*,\times^*) \cong ({\mathbb R},0,1,+,-,\times),$$
and
$$(S_1/\equiv_{{\mathbb Z}},[\operatorname{tp}(0)]_{{\mathbb Z}},[\operatorname{tp}(1)]_{{\mathbb Z}},+^*,-^*,\times^*) \cong ({\mathbb R}/{\mathbb Z},0,1,+,-,\times).$$
\section{Introduction}
It is well known that the class $\mathcal{B}(H)$ of all bounded linear operators on a Hilbert space $H$ can be organized as a $C^*$~-~algebra, and any $C^*$~-~algebra embeds isometrically in such an operator algebra. At the same time, because the above algebra $\mathcal{B}(H)$ is the dual of the trace-class $\mathscr{C}_{1}(H)$, it follows that it is a $W^*$~-~algebra and conversely, any $W^*$~-~algebra can be identified up to an algebraic-topological isomorphism with a weak operator closed $*$~-~subalgebra in $\mathcal{B}(H)$. Hence, the
spectral theory of bounded linear operators on a Hilbert space is developed in close connection with the theory of $C^*$~-~algebras. More recently, a more general theory, namely that of locally $C^*$~-~algebras (\cite{In.1971}) and of locally $W^*$~-~algebras (\cite{Fr.1986}, \cite{Ma.1986}), has been developed. Since such a locally convex $*$~-~algebra embeds in an algebra of continuous linear operators on a so-called locally Hilbert space, and since the most important concepts (such as self-adjointness, normality, positivity, etc.) of the theory of $C^*$~-~ and $W^*$~-~algebras extend to the frame of locally $C^*$~-~ and $W^*$~-~algebras, a self-adjoint spectral theory on locally Hilbert spaces can be developed without difficulty (we refer to \cite{Ap.1971, Fr.1988, Sc.1975}).
This paper is intended to be an introduction to the non-selfadjoint spectral theory in this frame. More precisely, after completing some results on linear operators between locally Hilbert spaces (adjoint, isometries, partial isometries, contractions, unitary operators), we introduce reproducing kernel locally Hilbert spaces, we give a general dilation theorem for positive definite locally Hilbert space operator valued maps and, as consequences, we obtain dilation results for locally semi-spectral measures, locally ($\rho$~-)~contractions, semi-groups of locally contractions, as well as extensions for isometries and subnormal operators in the setting of locally Hilbert spaces.
Let us now recall the basic definitions and results regarding a general locally $C^*$~-~algebra, a locally Hilbert space and the associated locally $C^*$~-~algebra of continuous operators on it.
If $A$ is a $*$~-~algebra (over $\mathbb{C}$), then a $C^*$~-~\textit{seminorm} $p$ on $A$ is a seminorm satisfying:
\begin{equation*}
p(a^* a) = p(a)^2 ,\ \ a \in A.
\end{equation*}
It was proved, first by C. Apostol in \cite{Ap.1971}, that for a complete locally convex $*$~-~algebra $A$ and for a continuous $C^*$~-~seminorm $p$ on $A$, the quotient $*$~-~algebra
$A_p :\ = A/ N(p)$, where $N(p) = \ker p$, is a $C^*$~-~algebra. The set of all such $p$'s will be denoted by $S(A)$. Let us also note that such a $C^*$~-~seminorm $p$ satisfies (see \cite{Se.1979})
\begin{equation*}
p(ab) \le p(a) p(b) ,\ \ a, b \in A
\end{equation*}
(i.e. it is submultiplicative), and
\begin{equation*}
p(a) = p(a^*) , \ a \in A
\end{equation*}
(i.e. it is an $m^*$~-~seminorm).
Complete locally convex $*$~-~algebras endowed with the topology generated by a calibration consisting of all continuous $C^*$~-~seminorms were first studied by C. Apostol \cite{Ap.1971} and A. Inoue \cite{In.1971} in 1971. The latter, as well as M. Fragoulopoulou \cite{Fr.1988} later on, called these objects \emph{locally $C^*$~-~algebras}, whereas other authors \cite{Ap.1971, Sc.1975} called them \emph{$LMC^*$~-~algebras (locally multiplicative $*$~-~algebras)} or even \emph{pro~-~$C^*$~-~algebras} (see \cite{Am.2002, Ma.1986}). We shall adopt here the terminology \emph{$LC^*$~-~algebra} (see also \cite{Zh.Sh.2001}). We shall also suppose that such an $A$ has a unit. Note also that to each $LC^*$~-~algebra $A$ (endowed with the calibration $S(A)$) an inverse system of $C^*$~-~algebras can be attached (for example $\{ A_p , \pi_{p,q} \}_{p \le q}$, where $\pi_{p, q}$ is the natural embedding of $A_p$ into $A_q$), such that $A$ is the inverse (projective) limit of such a system ($A = \varprojlim\limits_{p \in S(A)} A_p$).
In fact, the inverse limit of any inverse system of $C^*$~-~algebras can serve to define $LC^*$~-~algebras. Analogously, inverse limits of $W^*$~-~algebras are called \emph{locally $W^*$~-~algebras} or \emph{$LW^*$~-~algebras} (see \cite{Fr.1986, In.1971, Ma.1986, Sc.1975} a.o.). This is why many aspects of the self-adjoint spectral theory can be transposed from $C^*$~-~ and $W^*$~-~algebras to $LC^*$~-~ and $LW^*$~-~algebras, respectively (see \cite{Ap.1971} for the commutative case and \cite{Fr.1986, Fr.1988, In.1971, Jo.2006, Sc.1975} for the non-commutative one). So for an element $a$ of an $LC^*$~-~algebra $A$ we can define, as usual, its spectrum $Sp(a)$, and the following assertions hold (see \cite{Am.2002, Ap.1971, Fr.1986, Fr.1988, In.1971, Ph.1988, Sc.1975}):
\begin{itemize}
\item[(i)] $a$ is (locally) self-adjoint (i.e. $a=a^*$), iff $Sp(a)\subset\mathbb{R}$; we shall denote that
$a\in A_h$;
\item[(ii)] $a$ is (locally) positive (i.e. $a=b^*b$, for some $b\in A$), iff $Sp(a)\subset [0,\infty)$; we denote that by $a\in A_+$; $A_+$ is a closed cone in $A$ and $A_+\cap - A_+=\{0\}$;
\item[(iii)] for a (locally) projection $a$ (i.e. $a=a^2\in A_h$), we denote $a\in\mathcal{P}(A)$; we easily obtain that $\mathcal{P}(A)\subset A_+\subset A_h$;
\item[(iv)] $a$ is (locally) normal (i.e. $aa^*=a^*a$), we denote briefly $a\in A_n$; it, evidently, holds $A_h\subset A_n$;
\item[(v)] if $a$ is a (locally) isometry (i.e. $a^*a=e$, where $e$ is the unit element of $A$), then $aa^*\in\mathcal{P}(A)$ ($aa^*$ being the ``range'' projection);
\item[(vi)] if $a$ and $a^*$ are simultaneously (locally) isometries, then $a$ will be called a (locally) unitary element ($a\in\mathcal{U}(A)$); evidently $Sp(a)\subset\mathbb{T}$, where $\mathbb{T}$ is the torus and $\mathcal{U}(A)\subset A_n$;
\item[(vii)] $a\in A$ is a bounded element in $A$ (i.e. $\|a\|_b:=\sup\{p(a),p\in\mathcal{S}(A)\}<+\infty$, denoted by $a\in A_b$), iff $Sp(a)$ is bounded in $\mathbb{C}$; evidently $\mathcal{U}(A)\subset A_b$; moreover $A_b$ endowed with the above mentioned sup-norm is a $C^*$~-~algebra, which is dense in $A$;
\item[(viii)] any normal element has an integral representation with respect to a spectral measure (locally projection valued) on $\mathbb{C}$ (supported on the spectrum).
\end{itemize}
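A standard commutative example may help fix ideas (we record it only as an illustration; it is folklore in the pro~-~$C^*$~-~algebra literature): take $A=C({\mathbb R})$, all continuous complex-valued functions on ${\mathbb R}$, with the calibration of $C^*$~-~seminorms
$$p_n(f):=\sup_{|t|\le n}|f(t)|,\qquad n\in{\mathbb N}.$$
Each quotient $A_{p_n}$ is identified with $C([-n,n])$, the connecting maps between the quotients correspond to restriction of functions, and $A=\varprojlim\limits_n C([-n,n])$ is an $LC^*$~-~algebra which is not a $C^*$~-~algebra; its bounded part $A_b$ consists of the bounded continuous functions on ${\mathbb R}$, with $\|f\|_b=\sup_{t\in{\mathbb R}}|f(t)|$.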
Now, having in view the above mentioned embedding of any $LC^*$~-~ or $LW^*$~-~algebra in a special $*$~-~algebra of continuous linear operators on a locally Hilbert space, and the fact that, in the Hilbert space frame, a non-self-adjoint spectral theory can be developed with the aid of dilation theory, we shall work with operators on locally Hilbert spaces, but also with operators between such spaces. Let us recall the precise definitions.
A \emph{locally Hilbert space} is a strict inductive limit of some ascending family (indexed after a directed set) of Hilbert spaces.
More precisely, given a directed (index) set $\Lambda$ and a family $\{H_\lambda\}_{\lambda\in\Lambda}$ of Hilbert spaces such that
\begin{equation}
\label{eq12} H_\lambda\subset H_\mu \text{ and } \langle\cdot,\cdot\rangle_\lambda=\langle\cdot,\cdot\rangle_\mu \text{ on } H_\lambda,
\end{equation}
(i.e. for $\lambda\leq\mu$, the natural embedding $J_{\lambda\mu}$ of $H_\lambda$ into $H_\mu$ is an isometry), we endow the linear space $\mathbb{H}=\bigcup\limits_{\lambda\in\Lambda}H_\lambda$ with the inductive limit topology of the $H_\lambda$ ($\lambda\in\Lambda$). Such an $\mathbb{H}$ will be called a \emph{locally Hilbert space} (associated to the ``inductive'' family $\{H_\lambda\}_{\lambda\in\Lambda}$). Recall that the \emph{inductive limit topology} on $\mathbb{H}$ is the finest one for which the embeddings $J_\lambda$ of $H_\lambda$ into $\mathbb{H}$ are continuous, for all $\lambda\in\Lambda$.
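A minimal example (included only for illustration): take $\Lambda={\mathbb N}$ and $H_n:=\operatorname{span}\{e_0,\ldots,e_n\}\subset\ell^2$, where $\{e_k\}_{k\ge 0}$ is the standard orthonormal basis. The condition \eqref{eq12} clearly holds, and
$$\mathbb{H}=\bigcup_{n\in{\mathbb N}}H_n$$
is the space of finitely supported sequences, endowed with the (strict) inductive limit topology. Note that $\mathbb{H}$ is a locally Hilbert space without being a Hilbert space: it is not complete with respect to the $\ell^2$~-~norm.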
Now we define the associated $LC^*$-algebra. Let $\{T_\lambda\}_{\lambda\in\Lambda}$ be an inductive system of bounded linear operators on $\{H_\lambda\}_{\lambda\in\Lambda}$ (i.e. $T_\lambda\in\mathcal{B}(H_\lambda)$ and
\begin{equation}\label{eq13}
T_\mu J_{\lambda\mu}=J_{\lambda\mu}T_\lambda,
\end{equation}
for each $\lambda,\mu\in\Lambda,$ $\lambda\leq\mu$), such that
\begin{equation}\label{eq14}
T_\mu P_{\lambda\mu}=P_{\lambda\mu}T_\mu,\ \lambda,\mu\in\Lambda,\ \lambda\leq\mu,
\end{equation}
where $P_{\lambda\mu}=J_{\lambda\mu}J_{\lambda\mu}^*$ is the
self-adjoint projection on $H_\mu$ with range $H_\lambda$. Then, by putting $Th:=T_\lambda h$, $h\in H_\lambda$, $\lambda\in\Lambda$, we obtain a well-defined linear operator $T$ on $\mathbb{H}$, which is moreover continuous (relative to the inductive limit topology on $\mathbb{H}$). We use the notation
$T=\varinjlim\limits_\lambda T_\lambda$, and $\mathcal{L}(\mathbb{H})$ for the algebra of all operators $T$ as above.
Let us also note that a given linear operator $T$ on $\mathbb{H}=\varinjlim\limits_{\lambda\in\Lambda} H_\lambda$ is defined by an inductive system of linear operators, $T=\varinjlim\limits_\lambda T_\lambda$, iff it leaves each $H_{\lambda}$, $\lambda\in\Lambda$, invariant, i.e. it satisfies $range(TJ_\lambda)\subset H_\lambda$, $\lambda\in\Lambda$; the linear operator $T_\lambda$ from $H_\lambda$ into $H_\lambda$ is then given by $T_{\lambda}h:=Th$, $h\in H_\lambda$, $\lambda\in\Lambda$. We also add that, in this case, $T$ is continuous on $\mathbb{H}$ iff $T_\lambda\in\mathcal{B}(H_\lambda)$, $\lambda\in\Lambda$. Consequently, the following assertion holds:
\begin{itemize}
\item[(ix)] $ \mathcal{L}(\mathbb{H})$ consists exactly of those continuous linear operators on $\mathbb{H}$ which leave each $H_\lambda$, $\lambda\in\Lambda$, invariant and whose ``restrictions'' satisfy \eqref{eq14}.
\end{itemize}
Let us now remark that if $T=\varinjlim\limits_\lambda T_\lambda\in\mathcal{L}(\mathbb{H})$, then $\{T^*_\lambda\}_{\lambda\in\Lambda}$ is an inductive system of
operators on $\bigcup\limits_\lambda H_\lambda$, satisfying (\ref{eq14}). Indeed,
$T^*_\lambda\in\mathcal{B}(H_\lambda)$,
($\lambda\in\Lambda$) and (\ref{eq14}) is equivalent (by passing to
the adjoint) to
\begin{equation}
\label{eq15} T^*_\mu P_{\lambda\mu}=P_{\lambda\mu}T^*_\mu ,\quad\lambda,\mu\in\Lambda,\, \lambda\leq\mu.
\end{equation}
Now (\ref{eq13}) and (\ref{eq14}) for the system $\{T_\lambda\}_{\lambda\in\Lambda}$ imply (\ref{eq13}) for $\{T^*_\lambda\}_{\lambda\in\Lambda}$ in the following manner.
For an arbitrary $h_\lambda\in H_\lambda$, by applying (\ref{eq15}), it holds that $T^*_\mu h_\lambda=T^*_\mu P_{\lambda\mu}h_\lambda=P_{\lambda\mu}T^*_\mu h_\lambda$, $\lambda\leq\mu$, and hence $T^*_\mu h_\lambda\in H_\lambda$.
By a straightforward computation, we have $(T^*_\mu h_\lambda-T^*_\lambda h_\lambda,h_\lambda')_\lambda=(h_\lambda,T_\mu h'_\lambda-T_\lambda h'_\lambda)_\lambda$, $h'_\lambda\in H_\lambda$, where, by (\ref{eq13}), the right-hand side vanishes.
Hence $T^*_\mu h_\lambda-T^*_\lambda h_\lambda=0$ for all $h_\lambda\in H_\lambda$, which gives (\ref{eq13}) for $\{T^*_\lambda\}_{\lambda\in\Lambda}$. In this way we may define $T^*:=\varinjlim\limits_{\lambda\in\Lambda} T^*_\lambda$ as an operator on $\mathbb{H}$, and $T\to T^*$ is an involution on $\mathcal{L}(\mathbb{H})$. It is now
a simple matter to check that $\mathcal{L}(\mathbb{H})$ is an $LC^*$~-~algebra
with the calibration $\{\|\cdot\|_\lambda\}_{\lambda\in\Lambda}$
defined by
\begin{equation}
\label{eq16} \|T\|_\lambda:=\|T_\lambda\|_{\mathcal{B}(H_\lambda)},\quad\lambda\in\Lambda.
\end{equation}
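To see that $\mathcal{L}(\mathbb{H})$ may contain genuinely unbounded elements, consider (as an illustrative sketch, not part of the general construction) the locally Hilbert space $\mathbb{H}=\varinjlim\limits_n H_n$ with $H_n=\operatorname{span}\{e_0,\ldots,e_n\}\subset\ell^2$, and the diagonal operator $T=\varinjlim\limits_n T_n$, $T_n\in\mathcal{B}(H_n)$, determined by
$$T e_k:=k\,e_k,\qquad k\ge 0.$$
Each $T_n$ leaves the subspaces $H_m$ ($m\le n$) invariant and commutes with the corresponding projections, so conditions \eqref{eq13} and \eqref{eq14} hold, and by \eqref{eq16}
$$\|T\|_n=\max_{0\le k\le n}k=n,$$
so $T\in\mathcal{L}(\mathbb{H})$, while $\sup_n\|T\|_n=\infty$, i.e. $T$ is not a bounded element of the $LC^*$~-~algebra $\mathcal{L}(\mathbb{H})$.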
\section{The space $\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)$}
Now, since for the dilation theorems the ``isometric'' embedding of a locally Hilbert space into another one is necessary, we extend the definition of $\mathcal{L}(\mathbb{H})$ to $\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)$ with two different locally Hilbert spaces $\mathbb{H}^1$ and $\mathbb{H}^2$, define the involution in this case and the locally partial isometries.
Given two families of Hilbert spaces $\{H^1_\lambda\}_{\lambda\in\Lambda}$ and $\{H^2_\lambda\}_{\lambda\in\Lambda}$,
indexed by the same directed set $\Lambda$ and satisfying (each of them) the condition (\ref{eq12}), we denote by $J_{\lambda\mu}^k$ the natural embeddings of $ H_\lambda^k$ into $H_\mu^k$, $\lambda\leq\mu$, and consider the corresponding inductive limits $\mathbb{H}^k=\varinjlim\limits_{\lambda}H_\lambda^k=\bigcup\limits_{\lambda\in\Lambda}H_\lambda^k$, $k=1,2$.
Take also an inductive system of bounded linear operators $\{T_\lambda\}_{\lambda\in\Lambda}$ from $\bigcup\limits_{\lambda\in\Lambda}H^1_\lambda$ into $\bigcup\limits_{\lambda\in\Lambda}H^2_\lambda$ (i.e. $T_\lambda\in\mathcal{B}(H_\lambda^1,H^2_\lambda), \,\lambda\in\Lambda,$ and
\begin{equation}
\label{eq21} T_\mu J_{\lambda\mu}^1=J_{\lambda\mu}^2 T_\lambda,
\end{equation}
for each $\lambda\leq\mu,\,\lambda,\mu\in\Lambda$), which also satisfies
\begin{equation}
\label{eq22} T_\mu P_{\lambda\mu}^1=P_{\lambda\mu}^2 T_\mu,\,\,\lambda,\mu\in\Lambda,\,\lambda\leq\mu,
\end{equation}
where $P_{\lambda\mu}^k=J_{\lambda\mu}^k J_{\lambda\mu}^{k*}$ are self-adjoint projections in $H^k_\mu$, having the range $H^k_\lambda$ ($k=1,2$).
Now (\ref{eq21}) allows us to define correctly the operator $T$ through
\begin{equation}
\label{eq23} Th=T_\lambda h,\quad h\in H_\lambda^1,\, \lambda\in\Lambda,
\end{equation}
as a linear operator from $\mathbb{H}^1$ into $\mathbb{H}^2$, which is continuous in the inductive limit topology. The class of these operators will be denoted by $\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)$. This is a complete locally convex space with the calibration consisting of the semi-norms defined by
\begin{equation}
\label{eq24} \|T\|_\lambda:=\|T_\lambda\|_{\mathcal{B}(H^1_\lambda, H^2_\lambda)},\lambda\in\Lambda, T=\varinjlim\limits_{\lambda\in\Lambda} T_\lambda\in\mathcal{L}(\mathbb{H}^1, \mathbb{H}^2).
\end{equation}
It is clear that $\mathcal{L}(\mathbb{H}, \mathbb{H})=\mathcal{L}(\mathbb{H})$.
Now, returning to an operator $T$ from $\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)$, the relation (\ref{eq22}) for the inductive system $\{T_\lambda\}_{\lambda\in\Lambda}$ is equivalent (by passing to the hilbertian adjoint) to
\begin{equation}
\label{eq25} T^*_\mu P_{\lambda\mu}^2=P_{\lambda\mu}^1T^*_\mu,\quad\lambda,\mu\in\Lambda,\,\lambda\leq\mu,
\end{equation}
which as in the first section and by (\ref{eq21}) implies
\begin{equation}
\label{eq26} T^*_\mu J_{\lambda\mu}^2=J_{\lambda\mu}^1T^*_\lambda,\quad\lambda,\mu\in\Lambda,\,\lambda\leq\mu.
\end{equation}
Indeed, since $T_\mu^*\in\mathcal{B}(H^2_\mu,H^1_\mu)$, for an arbitrary $h_\lambda^2\in H_\lambda^2$ it holds
$$T_\mu^* h_\lambda^2=T_\mu^* P^2_{\lambda\mu} h_\lambda^2=P^1_{\lambda\mu} T_\mu^* h_\lambda^2,$$
hence $T_\mu^* h_\lambda^2\in H_\lambda^1$. Now $T_\mu^* h_\lambda^2-T_\lambda^* h_\lambda^2$ satisfies, for arbitrary $h_\lambda^1 \in H^1_\lambda$:
$$(T_\mu^* h_\lambda^2-T_\lambda^* h_\lambda^2,h_\lambda^1)=(h_\lambda^2,T_\mu h_\lambda^1-T_\lambda h^1_\lambda),$$
which, by (\ref{eq21}), vanishes. Since $h_\lambda^1\in H_\lambda^1$ is arbitrary, it follows that $T_\mu^* h_\lambda^2-T_\lambda^* h_\lambda^2=0.$ Because $h_\lambda^2$ is arbitrary in $H_\lambda^2$, relation (\ref{eq26}) holds.\\
Defining
\begin{equation}
\label{eq27}
T^*:=\varinjlim\limits_{\lambda\in\Lambda} T_\lambda^*,
\end{equation}
we obtain $T^*\in\mathcal{L}(\mathbb{H}^2,\mathbb{H}^1)$ and, finally, the mapping
\begin{equation}
\label{eq28}\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)\ni T\longmapsto T^*\in \mathcal{L}(\mathbb{H}^2,\mathbb{H}^1)
\end{equation}
satisfies the properties of the adjunction (as in the case of Hilbert space operators from one space to another).
For the adjunction of a product let us observe that if we have three locally Hilbert spaces
$\mathbb{H}^k=\varinjlim\limits_{\lambda}H_\lambda^k=\bigcup\limits_{\lambda\in\Lambda}H_\lambda^k$
($k=1,2,3$), and $T=\varinjlim\limits_\lambda T_\lambda\in\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)$,
$S=\varinjlim\limits_\lambda S_\lambda\in\mathcal{L}(\mathbb{H}^2,\mathbb{H}^3)$, by (\ref{eq21}), for
$T$ and $S$ we successively have $$S_\mu T_\mu J^1_{\lambda\mu}=S_\mu J^2_{\lambda\mu} T_\lambda=J^3_{\lambda\mu}S_\lambda T_\lambda,\,\lambda,\mu\in\Lambda,\,\lambda\leq\mu,$$
whence $\{S_\lambda T_\lambda\}_{\lambda\in\Lambda}$ satisfies (\ref{eq21}) as an inductive system of operators in
$\mathcal{B}(H_\lambda^1,H_\lambda^3)$. Analogously, from (\ref{eq22}) for $T$ and $S$ we infer (\ref{eq22}) for $ST$.
Consequently $ST$ defined by $\varinjlim\limits_\lambda S_\lambda T_\lambda$ belongs to
$\mathcal{L}(\mathbb{H}^1,\mathbb{H}^3)$, and is in fact the composition operator of $T$ and $S$.
Let us also observe that the corresponding semi-norms from $\mathcal{L}(\mathbb{H}^1,\mathbb{H}^3)$ satisfy
\begin{equation}
\label{eq29} \|ST\|_\lambda\leq\|S\|_\lambda\|T\|_\lambda,T\in\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2),S\in\mathcal{L}(\mathbb{H}^2,\mathbb{H}^3),\lambda\in\Lambda.
\end{equation}
In a similar way it is possible to define the composition $T^*S^*$ as a member of $\mathcal{L}(\mathbb{H}^3,\mathbb{H}^1)$. Comparing both constructions and applying the rule for the adjoint of a product of Hilbert space operators, we get
\begin{equation}
\label{eq210} T^*S^*=\varinjlim\limits_{\lambda\in\Lambda} T_\lambda^* S^*_\lambda = \varinjlim\limits_{\lambda\in\Lambda} (S_\lambda T_\lambda )^*= (ST)^*.
\end{equation}
Notice that for $T\in\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)$ it is possible to form $T^*T\in\mathcal{L}(\mathbb{H}^1)$ and $TT^*\in\mathcal{L}(\mathbb{H}^2)$. These are clearly self-adjoint elements in the corresponding $LC^*$-algebras, and the semi-norms $\|\cdot\|_\lambda$ satisfy $$\|T^*T\|_\lambda=\|T\|^2_\lambda=\|T^*\|^2_\lambda,\,\lambda\in\Lambda$$ (where the semi-norms are taken in $\mathcal{L}(\mathbb{H}^1)$, $\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)$ and $\mathcal{L}(\mathbb{H}^2,\mathbb{H}^1)$, respectively).
Having in view the above construction, the following characterizations of special elements in $\mathcal{L}(\mathbb{H})$ are immediate and the proofs will be omitted.
\begin{prop}\label{prop21}
Let $\mathbb{H}=\varinjlim\limits_{\lambda\in\Lambda}H_\lambda$ be a locally Hilbert space and $T=\varinjlim\limits_{\lambda}T_\lambda$ be an element in $\mathcal{L}(\mathbb{H})$.
Then
\begin{itemize}
\item[(i)] $T$ is locally self-adjoint on $\mathbb{H}$, iff each $T_\lambda$ is self-adjoint on $H_\lambda$ $(\lambda\in \Lambda)$;
\item[(ii)] $T$ is locally positive on $\mathbb{H}$, iff each $T_\lambda$ is positive on $H_\lambda$ $(\lambda\in \Lambda)$;
\item[(iii)] $T$ is a locally projection on $\mathbb{H}$, iff each $T_\lambda$ is a projection on $H_\lambda$ $(\lambda\in \Lambda)$;
\item[(iv)] $T$ is locally normal on $\mathbb{H}$, iff each $T_\lambda$ is normal on $H_\lambda$ $(\lambda\in \Lambda)$;
\item[(v)] $T$ is a local isometry on $\mathbb{H}$ (i.e. $T^*T=I_\mathbb{H}$), iff each $T_\lambda$ is an isometry on $H_\lambda$ $(\lambda\in \Lambda)$.
\end{itemize}
\end{prop}
Now, it is possible to define a locally partial isometry between two locally Hilbert spaces. Namely, an operator $V\in\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)$ between two locally Hilbert spaces $\mathbb{H}^1$ and $\mathbb{H}^2$ is a \emph{locally partial isometry} if $V^*V$ is a locally projection on $\mathbb{H}^1$ (i.e. $V^*V\in\mathcal{P}(\mathcal{L}(\mathbb{H}^1))$).
Let us note that, as in the Hilbert space case, if $V\in\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)$ is a locally partial isometry, then $VV^*$ is a locally projection (on $\mathbb{H}^2$) as well. If $V^*V=1_{\mathbb{H}^1}$, then $V$ will be called a \emph{locally isometry} (from $\mathbb{H}^1$ to $\mathbb{H}^2$). In the case in which $VV^*=1_{\mathbb{H}^2}$, $V$ will be called a \emph{locally co-isometry}. A locally isometry, which is also a locally co-isometry is a \emph{locally unitary operator} from $\mathbb{H}^1$ into $\mathbb{H}^2$.
The following characterizations are also easy to prove:
\begin{thm}\label{thm22}
Let $V=\varinjlim\limits_\lambda V_\lambda$ be an element from $\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)$. Then
\begin{itemize}
\item[(i)] $V$ is a locally partial isometry, iff each $V_\lambda$ is a partial isometry from $H^1_\lambda$ into $H^2_\lambda$ ($\lambda\in\Lambda$);
\item[(ii)] $V$ is a locally co-isometry, iff each $V_\lambda$ is a co-isometry from $H^1_\lambda$ into $H^2_\lambda$ ($\lambda\in\Lambda$);
\item[(iii)] $V$ is an invertible operator, iff each $V_\lambda$ is invertible ($\lambda\in\Lambda$). In this case $V^{-1}=\varinjlim\limits_{\lambda\in\Lambda}V_\lambda^{-1}\in\mathcal{L}(\mathbb{H}^2,\mathbb{H}^1)$;
\item[(iv)] $V$ is a locally unitary operator from $\mathbb{H}^1$ onto $\mathbb{H}^2$, iff each $V_\lambda$ is a unitary operator from $H_\lambda^1$ onto $H^2_\lambda$ ($\lambda\in\Lambda$);
\item[(v)] \emph{(Fuglede-Putnam Theorem)} Let $N_1\in\mathcal{L}_n(\mathbb{H}^1)$ (locally normal operator) and $N_2\in\mathcal{L}_n(\mathbb{H}^2)$. If there exists $S\in\mathcal{L}(\mathbb{H}^1,\mathbb{H}^2)$ such that $SN_1=N_2S$, then $SN_1^*=N^*_2S$.
\end{itemize}
\end{thm}
Now it is interesting to observe that the notion of orthogonally closed subspace has a correspondent in the frame of locally Hilbert spaces. Indeed, we can give the following definition:
\begin{defn}\label{def21}
A subspace $\mathbb{H}^1$ of a locally Hilbert space $\mathbb{H}$ is \emph{orthogonally complementable}, if there is a locally self-adjoint projection $P \in \mathcal{L}(\mathbb{H})$, such that $P\mathbb{H} = \mathbb{H}^1$.
\end{defn}
It is clear that any such ``orthogonally'' complementable subspace is closed. For now we are not interested in the problem whether each closed subspace $\mathbb{H}^1$ is orthogonally complementable. However it is interesting to see that each $H_{\lambda_0} \ (\lambda_0 \in \Lambda)$ is orthogonally complementable in $\mathbb{H} = \varinjlim\limits_{\lambda \in \Lambda} H_\lambda$. This is easily seen if we regard
$H_{\lambda_0}$ as the strict inductive limit of $H_{\lambda_0} \cap H_{\lambda}, \ \lambda \in \Lambda$, i.e. $H_{\lambda_0} = \varinjlim\limits_{\lambda \in \Lambda} H_{\lambda_0} \cap H_\lambda$.
It is easy to check that the family $\{ H_{\lambda_0} \cap H_\lambda , \ \lambda\in\Lambda \}$ satisfies the condition \eqref{eq12} and $H_{\lambda_0} = \bigcup\limits_{\lambda\in\Lambda} (H_{\lambda_0} \cap H_\lambda)$. Moreover, the natural embedding $J_{\lambda_0}$ of $H_{\lambda_0}$ into $\mathbb{H}$ satisfies the conditions \eqref{eq21} and \eqref{eq22}, if we consider $J_{\lambda_0} =
\varinjlim\limits_{\lambda \in \Lambda} J_{\lambda_0}^\lambda$, where $J_{\lambda_0}^\lambda$ is the natural embedding of $H_{\lambda_0} \cap H_\lambda$ into $H_\lambda$, $\lambda \in \Lambda$.
So $J_{\lambda_0}$ is a locally isometric operator in $\mathcal{L}(H_{\lambda_0}, \mathbb{H})$. In this way $J_{\lambda_0}J^*_{\lambda_0} \in \mathcal{L} (\mathbb{H})$ and $J_{\lambda_0}J^*_{\lambda_0} \mathbb{H} = H_{\lambda_0}$,
$J_{\lambda_0}J^*_{\lambda_0}$ being the desired locally self-adjoint projection. So we have proved:
\begin{prop}\label{prop23}
If $\ \mathbb{H}= \varinjlim\limits_{\lambda \in \Lambda} H_\lambda$ is a locally Hilbert space, then $J_\lambda$ is a locally isometry from $H_\lambda$ into $\mathbb{H}$ and each $H_\lambda$, $\lambda\in\Lambda$ is an orthogonally complementable subspace in $\mathbb{H}$. Moreover, if $\,T\in\mathcal{L}(\mathbb{H})$, then $T= \varinjlim\limits_{\lambda \in \Lambda} T_\lambda$, where $T_\lambda=J_\lambda^*TJ_\lambda$ and $TJ_\lambda=J_\lambda J_\lambda^*TJ_\lambda=J_\lambda T_\lambda,\ \lambda\in\Lambda$.
\end{prop}
\section{Locally positive definite operator valued kernels}
Let us mention that the first two named authors have introduced in \cite{Ga.Ga.2007} the positive definiteness for $LC^*$~-~algebra valued kernels. Recalling this definition for the $LC^*$~-~algebra $\mathcal{L}(\mathbb{H})$, we shall give a characterization of this positive definiteness in terms of elements of $\mathbb{H}$.
We start with a remark regarding the existence of a natural scalar product on a locally Hilbert space $\mathbb{H}=\varinjlim\limits_{\lambda\in\Lambda}H_\lambda$.
For a pair $h,k\in \mathbb{H}$, we put
\begin{equation}
\label{eq31}\langle h,k\rangle:=\langle h,k\rangle_\lambda,
\end{equation}
where $\lambda\in\Lambda$ is chosen such that $h,k\in H_\lambda$. From the condition \eqref{eq12} it is easy to see that the definition \eqref{eq31} is correct (does not depend on the choice of $\lambda$) and satisfies the properties of a scalar product.
\begin{defn}[\cite{Ga.Ga.2007}]\label{def31}
Let $\mathbb{H} = \varinjlim\limits_{\lambda\in\Lambda} H_\lambda$ be a locally Hilbert space and $\mathcal{L}(\mathbb{H})$ be the previously defined $LC^*$~-~algebra. An $\mathcal{L}(\mathbb{H})$~-~valued kernel on a set $S$ (i.e. a function $\Gamma:S\times S \to \mathcal{L}(\mathbb{H})$) is a \emph{locally positive definite kernel} (LPDK), if for each finitely supported function
\begin{equation}\label{eq32}
S\ni s \mapsto T_s = \varinjlim\limits_{\lambda\in\Lambda} T_s^\lambda \in \mathcal{L} (\mathbb{H})
\end{equation}
it holds
\begin{equation} \label{eq33}
\sum\limits_{s, t} T^*_t \Gamma(s, t) T_s \geq 0.
\end{equation}
\end{defn}
Looking at the condition \eqref{eq33} and using the scalar product \eqref{eq31}, we deduce that it is equivalent to
\begin{equation}\label{eq34}
\sum\limits_{s,t}\langle\Gamma(s,t)h_s,h_t\rangle \geq 0,
\end{equation}
for any finitely supported function $S \ni s \mapsto h_s \in \mathbb{H}$.
Indeed, by Proposition \ref{prop21} (ii) and Proposition \ref{prop23}, \eqref{eq33} is equivalent to its ``localized'' version
\begin{equation} \label{eq35}
\sum\limits_{s,t} T_t^{\lambda *} \Gamma^\lambda (s,t) T_s^\lambda \ge 0,\ \lambda \in \Lambda,
\end{equation}
which in turn, by the characterization of operatorial positive definiteness in the Hilbert space $H_\lambda$ is equivalent to
\begin{equation}\label{eq36}
\sum\limits_{s,t} \langle \Gamma^\lambda (s,t)h_s^\lambda, h_t^\lambda \rangle_\lambda \ge 0,\ \lambda \in \Lambda,
\end{equation}
which is obviously equivalent to \eqref{eq34} for each finitely supported ${\mathbb H}$~-~valued function $s\mapsto h_s$. \\
We have thus proven
\begin{thm} \label{thm31}
An $\mathcal{L}(\mathbb{H})$~-~valued kernel $\,\Gamma$ is an LPDK on $S$ iff for each finitely supported $\mathbb{H}$~-~valued function $s\mapsto h_s$ on $S$, relation \eqref{eq34} is fulfilled.
\end{thm}
\begin{defn} \label{def32}
Let $S$ be a (commutative) $*$~-~semigroup with a neutral element $e$. An $\mathcal{L}(\mathbb{H})$~-~valued mapping $\varphi$ on $S$ is a \emph{locally positive definite function} (LPDF) on $S$ if the associated kernel
$\Gamma_\varphi$ defined by $\Gamma_\varphi (s, t):\ = \varphi(t^* s),$ $s, t \in S$ is an LPDK.
\end{defn}
From Theorem \ref{thm31} we immediately infer
\begin{cor} \label{cor32}
An $\mathcal{L} (\mathbb{H})$~-~valued function $\varphi$ on a $*$~-~semigroup $S$, where $\mathbb{H} = \varinjlim\limits_{\lambda} H_\lambda$ is a locally Hilbert space, is an LPDF on $S$, iff for each finitely supported function $ S\ni s \mapsto h_s \in \mathbb{H}$ it holds
\begin{equation} \label{eq37}
\sum_{s, t}\langle{\varphi(t^*s)h_s},{h_t} \rangle\geq 0.
\end{equation}
\end{cor}
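In the classical Hilbert space case ($\Lambda$ a singleton, $H_\lambda$ finite dimensional), condition \eqref{eq37} simply says that the matrix $[\varphi(t^*s)]_{s,t}$ over any finite subset of $S$ is positive semidefinite. A small numerical illustration, with the group $S=\mathbb{Z}$, involution $n^*=-n$, and the standard positive definite function $\varphi(n)=\rho^{|n|}$, $0<\rho<1$ (the value of $\rho$ and the chosen finite subset are illustrative):

```python
import numpy as np

# The *-semigroup is the group (Z, +) with involution n* = -n, so
# phi(t* s) = phi(s - t).  For 0 < rho < 1 the function phi(n) = rho^|n|
# is positive definite (it is the covariance of an AR(1) process).
rho = 0.6
points = np.arange(-5, 6)                             # a finite subset of S = Z
M = rho ** np.abs(points[:, None] - points[None, :])  # M[t, s] = phi(s - t)

eigenvalues = np.linalg.eigvalsh(M)
print(eigenvalues.min() >= -1e-12)                    # True: M is PSD
```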
If we also look at the ``localization'' of all operators which occur in the above considerations, we easily deduce
\begin{cor} \label{cor33}
Let $\mathbb{H} = \varinjlim\limits_\lambda H_\lambda$ be a locally Hilbert space and $\Gamma: S \times S \to \mathcal{L} (\mathbb{H}),$ $\Gamma(s, t) = \varinjlim\limits_{\lambda \in \Lambda} \Gamma^\lambda (s, t), \ s, t \in S$ be a kernel on the set $S$. The following conditions are equivalent:
\begin{itemize}
\item[(i)]{ $\Gamma$ is an $\mathcal{L}(\mathbb{H})$~-~valued LPDK on $S$;}
\item[(ii)]{ for each $\lambda \in\Lambda$, $\Gamma^\lambda$ is a $\mathcal{B}(H_\lambda)$~-~valued PDK on $S$;}
\item[(iii)]{for each finitely supported $\mathbb{H}$~-~valued function $s\mapsto h_s$ on $S$, relation \eqref{eq34} holds.}
\end{itemize}
\end{cor}
\begin{cor}
For an $\mathcal{L}(\mathbb{H})$~-~valued function $\varphi$ on the $*$~-~semigroup $S$, the following conditions are equivalent:
\begin{itemize}
\item[(i)]{$\varphi$ is an $\mathcal{L}(\mathbb{H})$~-~valued LPDF on $S$;}
\item[(ii)]{for each $\lambda\in\Lambda$, $\varphi^\lambda$ is a $\mathcal{B}(H_\lambda)$~-~valued PDF on $S$;}
\item[(iii)]{$\varphi$ satisfies the condition \eqref{eq37}.}
\end{itemize}
\end{cor}
\section{Reproducing kernel locally Hilbert spaces}
In \cite{Ga.Ga.2007} we have defined the reproducing kernel Hilbert module over an $LC^*$~-~algebra $C$. This works for the $LC^*$~-~algebra $\mathcal{L}(\mathbb{H})$ as well. But for $\mathcal{L}(\mathbb{H})$~-~valued kernels, analogously to the case of Hilbert spaces, it is also possible to introduce the reproducing kernel locally Hilbert space, whose reproducing kernel is $\mathcal{L}(\mathbb{H})$~-~valued.
\begin{defn} \label{def41}
Let $\mathbb{H} = \varinjlim\limits_\lambda H_\lambda$ be a fixed locally Hilbert space, $S$ be an arbitrary set and $\mathbb{K} = \varinjlim\limits_{\lambda \in \Lambda} K_\lambda $ be a locally Hilbert space consisting of $\mathbb{H}$~-~valued functions on $S$. $\mathbb{K}$ is called a \emph{reproducing kernel locally Hilbert space} (RKLHS), if there exists an $\mathcal{L}(\mathbb{H})$~-~valued kernel $\Gamma$ on $S$ such that the operators $\Gamma_s$, $s \in S$, between ${\mathbb H}$ and ${\mathbb K}$, defined by $$(\Gamma_s h)(t) :\ = \Gamma(s, t) h ;\ t \in S ,\ h \in \mathbb{H}$$ satisfy the following conditions:
\begin{itemize}
\item[(IP)]{$\Gamma_s \in \mathcal{L} (\mathbb{H}, \mathbb{K}) ,\ s \in S$ ;}
\item[(RP)]{$k(s) = \Gamma_s^* k ,\ k \in \mathbb{K} , \ s \in S$.}
\end{itemize}
\end{defn}
\begin{rem}
If an $\mathcal{L}(\mathbb{H})$~-~valued kernel on $S$ with the above property exists, then it is uniquely determined by $\mathbb{K}$. Indeed, if another $\mathcal{L}(\mathbb{H})$~-~valued kernel $\Gamma'$ satisfying the properties (IP) and (RP) exists, then (RP) implies $\Gamma_s^* k = {\Gamma_s'}^*k ,\ k \in \mathbb{K}$, whence $\Gamma^*_s = {\Gamma'}^*_s$ as operators in $\mathcal{L}(\mathbb{H} , \mathbb{K})$. This implies that $\Gamma(s, t) = \Gamma'(s,t);\ s,t \in S$. This is why $\Gamma$ will also be called \emph{the locally reproducing kernel (LRK) of the RKLHS $\mathbb{K}$}. It will also be denoted by $\Gamma_\mathbb{K}$.
\end{rem}
\begin{rem}
If $\mathbb{H} = \varinjlim\limits_{\lambda\in\Lambda} H_\lambda$ is a locally Hilbert space and $\Gamma$ is an $\mathcal{L} (\mathbb{H})$~-~valued LRK for the locally Hilbert space $\mathbb{K} = \varinjlim\limits_{\lambda\in\Lambda} K_\lambda$, then, having in view that $\Gamma(s, t) = \varinjlim\limits_{\lambda\in\Lambda} \Gamma^\lambda (s,t) ,\ s, t \in S$ (as elements of $\mathcal{L}(\mathbb{H})$ !), the following properties are fulfilled:
\begin{itemize}
\item[(LIP)]{$\Gamma^\lambda_s h \in K_\lambda ,\,h\in H_\lambda,\ s\in S , \ \lambda \in \Lambda$;}
\item[(LRP)]{$\langle{k(s)},{h}\rangle_{H_\lambda} = \langle{k},{\Gamma_s^\lambda h}\rangle_{K_\lambda},\ h \in H_\lambda,\ k \in K_\lambda,\ s\in S , \ \lambda \in \Lambda$.}
\end{itemize}
This results by applying the definition of $\mathcal{L}(\mathbb{H}, \mathbb{K})$.
\end{rem}
It is now easily seen that the local conditions (LIP) and (LRP) are sufficient to define $\mathbb{K} = \varinjlim\limits_{\lambda\in\Lambda} K_\lambda$ as an RKLHS with $\Gamma=\varinjlim\limits_{\lambda\in\Lambda} \Gamma^\lambda$ as LRK on $S$.
Moreover, we obtain
\begin{cor}\label{cor43}
The locally Hilbert space $\mathbb{K} = \varinjlim\limits_{\lambda\in \Lambda} K_\lambda$ of $\mathbb{H}$~-~valued functions on $S$ is a RKLHS, iff for each $\lambda \in \Lambda$, the Hilbert space $K_\lambda$ of $H_\lambda$~-~valued functions on $S$ is a reproducing kernel Hilbert space (RKHS). Moreover, if $\Gamma_\mathbb{K}$ is the RK of $\mathbb{K}$ and $\Gamma_{K_\lambda}$ is the RK of $K_\lambda$, then, for each
$s, t \in S$, we have
$
\Gamma_\mathbb{K} (s, t) = \varinjlim\limits_{\lambda\in\Lambda} \Gamma_{K_\lambda} (s, t).
$
In other words, $\Gamma_{K_\lambda} = \Gamma_{\mathbb{K}}^\lambda \ (\lambda\in\Lambda)$.
\end{cor}
\begin{prop}
If $\,\Gamma = \Gamma_\mathbb{K}$ is an LRK for a locally Hilbert space $\mathbb{K}$ of $\,\mathbb{H}$~-~valued functions on $S$, then $\Gamma$ is an $\mathcal{L}(\mathbb{H})$~-~valued LPDK on $S$.
\end{prop}
The proof runs on the components $\Gamma^\lambda \ (\lambda\in\Lambda)$ of $\Gamma$, as in the corresponding Hilbert space case. \\
A more important result is:
\begin{thm}
Let $\mathbb{H} = \varinjlim\limits_{\lambda\in\Lambda} H_\lambda$ be a locally Hilbert space and $S$ be an arbitrary set. Then $\Gamma$ is an LRK for some locally Hilbert space $\mathbb{K} = \varinjlim\limits_{\lambda\in \Lambda} K_\lambda$ of $\mathbb{H}$~-~valued functions on $S$, iff it is an $\mathcal{L}(\mathbb{H})$~-~valued LPDK on $S$.
\end{thm}
\begin{proof}
It remains to prove that, for a given $\Gamma$ as above, there exists a locally Hilbert space $\mathbb{K} = \varinjlim\limits_{\lambda\in\Lambda} K_\lambda$, consisting of $\mathbb{H}$~-~valued functions on $S$, which is an RKLHS, for which $\Gamma =\Gamma_\mathbb{K}$. First, since $\Gamma (s, t) = \varinjlim\limits_{\lambda\in\Lambda} \Gamma^\lambda (s,t),\ s, t \in S$, from the condition of LPD, it results that, for each $\lambda \in \Lambda$, $\Gamma^\lambda$ is a $\mathcal{B}(H_\lambda)$~-~valued PDK on $S$ (see \cite{Ga.1970}). Denote $K_\lambda$ the RKHS, with $\Gamma^\lambda$ as RK. From the properties of $\mathcal{L}(\mathbb{H})$ it results that the family $\{ K_\lambda ,\ \lambda \in \Lambda \}$ satisfies the condition for the construction of the inductive limit $\varinjlim\limits_{\lambda \in \Lambda} K_\lambda = \bigcup\limits_{\lambda \in \Lambda} K_\lambda$. Then $\mathbb{K} = \bigcup\limits_{\lambda \in \Lambda} K_\lambda$ is the desired locally Hilbert space.
\end{proof}
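In the scalar Hilbert space case the construction used in this proof is the classical Moore--Aronszajn one: a positive definite kernel factors through feature vectors whose inner products reproduce it. For a finite set $S$ this can be checked numerically (the Gaussian kernel below is merely an illustrative positive definite choice):

```python
import numpy as np

# Finite Moore-Aronszajn construction: a PD kernel Gamma on a finite set S
# factors as Gamma = F F^T, and the rows of F are feature vectors whose
# inner products reproduce the kernel -- the reproducing property (RP).
S = np.linspace(0.0, 1.0, 6)                       # illustrative finite set
Gamma = np.exp(-(S[:, None] - S[None, :]) ** 2)    # Gaussian PD kernel

# factor through an eigendecomposition (a Cholesky factor would also do)
w, V = np.linalg.eigh(Gamma)
F = V * np.sqrt(np.clip(w, 0.0, None))             # rows = feature vectors

# <feature(s), feature(t)> recovers Gamma(s, t)
print(np.allclose(F @ F.T, Gamma))                 # True
```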
\section{Dilations of LPD operator valued functions on $*$~-~semigroups}
We are now in a position to prove, in the frame of operators on locally Hilbert spaces, the analogue of the famous dilation theorem of B. Sz.-Nagy (\cite{Sz.1955}). \\
Let $S$ be an abelian $*$~-~semigroup with the neutral element $e$. A \emph{representation} of $S$ on a locally Hilbert space $\mathbb{K}$ is an algebra morphism $\pi$ from $S$ into $\mathcal{L}(\mathbb{K})$, i.e.
\begin{align*}
\pi(e) &= I_\mathbb{K} \\
\pi (st) & = \pi(s) \pi(t)\\
\pi(s^*) & = \pi(s)^* ,\,s,t\in S.
\end{align*}
It is clear that such a representation generates through
$$
\Gamma_\pi (s, t): \ = \pi (t^*s) ,\ \ s, t \in S
$$
an $\mathcal{L}(\mathbb{K})$~-~valued LPDK on $S$. The converse does not hold in general. However, an $\mathcal{L}(\mathbb{H})$~-~valued LPDF on $S$ is extensible, in a suitable sense, to a representation on a larger locally Hilbert space. More precisely, the following holds:
\begin{thm} \label{thm51}
Let $S$ be a $*$~-~semigroup, $\mathbb{H}$ be a locally Hilbert space and $s\mapsto \varphi(s)$ be an $\mathcal{L}(\mathbb{H})$~-~valued function on $S$, which is an LPDF and satisfies the following boundedness condition:
\begin{itemize}
\item[(LBC)]{for each $u \in S$ and $\lambda \in \Lambda$ there exists a positive constant $C_u^\lambda > 0$, such that
\begin{equation*}
\sum_{s, t} \langle{\varphi(t^*u^*us)h_s},{h_t}\rangle_{\lambda} \leq (C_u^\lambda)^2 \sum_{s, t} \langle{\varphi (t^*s) h_s},{h_t}\rangle_{\lambda},
\end{equation*}
where $\{ h_s \}_{s \in S}$ is an arbitrary finitely supported $H_\lambda$~-~valued function.}
\end{itemize}
Then there exists a locally Hilbert space $\mathbb{K}$, in which $\mathbb{H}$ is naturally embedded by a locally isometry $J\in\mathcal{L}(\mathbb{H},\mathbb{K})$ and a representation $\pi$ of $S$ on $\mathbb{K}$, such that
$$ \varphi(s) = J^* \pi(s) J ,\ s \in S.$$
Moreover, it is possible to choose $\mathbb{K}$ satisfying the minimality condition
$$\bigvee\limits_{s \in S} \pi(s) \mathbb{H} = \mathbb{K} $$
and in this case $\mathbb{K}$ is uniquely determined up to a locally unitary operator. The following conditions will hold as well:
\begin{itemize}
\item[(i)]{$||\pi(u)||_\lambda \le C_u^\lambda , \ \lambda \in\Lambda, u\in S$;}
\item[(ii)]{$\varphi(sut)+\varphi(svt) = \varphi(swt)$, for each $s, t\in S$ implies $\pi(w) = \pi(u) + \pi (v)$.}
\end{itemize}
\end{thm}
\begin{proof}
By defining $\Gamma_\varphi (s,t) :\ = \varphi (t^* s)$, we infer that $\Gamma_\varphi$ is an LPDK on $S$. Then the desired locally Hilbert space will be $\mathbb{K} = \mathbb{K}_{\Gamma_\varphi}$, the RKLHS with $\Gamma_\varphi$ as LRK. As it is known, a dense subspace in $\mathbb{K}$ is given by
\begin{equation*}
\biggl\{ \sum\limits_s \Gamma^\varphi_s h_s , \text{ where } s \mapsto h_s \text{ is a finitely supported } \mathbb{H}\text{-valued function on } S\biggr\}.
\end{equation*}
The operator $J$ is defined by
\begin{equation*}
J h = \sum_s \Gamma^\varphi_s h_s ,\ \text{ where } h_e = h, \ \text{and} \ h_s = 0, \ \text{for} \ s \ne e,
\end{equation*}
whereas the representation $\pi$, will be
\begin{equation*}
\pi (u) \sum_s \Gamma^\varphi_s h_s = \sum_s \Gamma^\varphi_{us} h_s .
\end{equation*}
With the prerequisites of the preceding sections it is now easy to verify the statements (i) and (ii).
\end{proof}
The representation $\pi$ is called a \emph{minimal dilation} of the function $\varphi$. In the frame of Hilbert space operators, the notion of a minimal $\rho$~-~dilation is also known (\cite{Ga.1970}).
A representation $\pi$ of $S$ on a locally Hilbert space $\mathbb{K}$, containing another $\mathbb{H}$ as a subspace, is called a $\rho$~-~dilation ($\rho > 0$) for an $\mathcal{L}(\mathbb{H})$~-~valued function $\varphi$ on $S$ if
$$ \varphi(s) = \rho J^* \pi(s) J ,\ s\in S\setminus\{e\}, $$
where $J$ is the natural (locally isometric) embedding of $\mathbb{H}$ into $\mathbb{K}$.
It is not hard to characterize the $\mathcal{L}(\mathbb{H})$~-~valued functions $\psi$ on a $*$~-~semigroup $S$, which admit $\rho$~-~dilations. Indeed, it holds:
\begin{thm} \label{thm52}
An $\mathcal{L}(\mathbb{H})$~-~valued function $\psi$ on $S$ has a $\rho$~-~dilation, iff the following conditions are fulfilled
\begin{itemize}
\item[($\rho$LPD)]{ $\rho \sum\limits_{s,t: t^*s=e} \langle{h_s},{h_t}\rangle + \sum\limits_{s, t : t^*s \ne e} \langle{\psi(t^*s)h_s},{h_t}\rangle \ge 0$, for each finitely supported $\mathbb{H}$~-~valued function $s \mapsto h_s$ on $S$;}
\item[($\rho$LBC)]{for each $\lambda\in\Lambda$ and each $u\in S$, there is a constant $C_u^\lambda > 0$, such that
\begin{multline*}
\rho \sum\limits_{s, t : t^*u^*us = e} \langle{h_s},{h_t}\rangle_{\lambda} +
\sum\limits_{s, t : t^*u^*us \ne e} \langle{\psi (t^*u^*us)h_s},{h_t}\rangle_{\lambda} \leq \\
(C_u^\lambda)^2 \left[ \rho \sum_{s,t : t^*s = e} \langle{h_s},{h_t}\rangle_{\lambda} + \sum_{s, t : t^*s \ne e}\langle{\psi(t^*s)h_s},{h_t}\rangle_{\lambda} \right]
\end{multline*}
for each finitely supported $H_\lambda$~-~valued function $s \mapsto h_s$ on $S$.}
\end{itemize}
It is also possible to obtain the minimality condition and analogous properties for the $\rho$~-~dilation, as in the previous theorem.
\end{thm}
\begin{proof}
By putting
\begin{equation*}
\varphi (s) =
\begin{cases}
\frac{1}{\rho} \psi (s) , & s \ne e \\
I_\mathbb{H} , & s = e
\end{cases},
\end{equation*}
we have that $\varphi$ satisfies the conditions of Theorem \ref{thm51} and the dilation of the function $\varphi$ will be a $\rho$~-~dilation for the given function $\psi$.
\end{proof}
\section{Applications}
\paragraph1 It is now possible to dilate a \emph{locally positive and ``bounded'' $\mathcal{L} (\mathbb{H})$~-~valued measure} on a $\sigma$~-~algebra $\Sigma$ to a multiplicative, locally projection valued $\mathcal{L}(\mathbb{K})$~-~valued measure, where $\mathbb{H}\subset \mathbb{K}$.
Namely, it holds:
\begin{thm}[Neumark] \label{thm61}
If $\omega \mapsto E(\omega)$ is an $\mathcal{L} (\mathbb{H})$~-~valued measure on the $\sigma$~-~algebra $\Sigma$, such that $ 0 \leq E(\omega) \leq I_\mathbb{H}$, then there exist a locally Hilbert space $\mathbb{K}$, which includes $\mathbb{H}$ as a locally Hilbert subspace and an $\mathcal{L} (\mathbb{K})$~-~valued measure $\omega \mapsto F(\omega)$, such that $F(\omega)$ are self-adjoint projections on $\mathbb{K}$ and
\begin{equation} \label{eq61}
E(\omega) = J^* F(\omega) J ,\quad \omega \in \Sigma ,
\end{equation}
$J$ being the (locally isometric) embedding of $\,\mathbb{H}$ into $\mathbb{K}$.
\end{thm}
\begin{proof}
By taking $S = \Sigma$, with intersection as the semigroup operation and the involution $\omega^* = \omega,$ $\,\omega \in \Sigma$, and applying Theorem \ref{thm51}, the statement is easily inferred.
\end{proof}
\paragraph2 For a \emph{locally contraction} $T$ on $\mathbb{H}$, i.e. $I - T^* T$ is positive in $\mathcal{L} (\mathbb{H})$, by putting
\begin{equation} \label{eq62}
T^{(n)}: = \begin{cases} T^n , & n \in \mathbb{Z}_+ \\
(T^{*})^{-n} , & n < 0
\end{cases} ,
\end{equation}
$S =\mathbb{Z}$ and $n^* :\ = -n ,\ n \in \mathbb{Z}$, and applying Theorem \ref{thm51} we obtain a locally unitary minimal dilation $U = \pi (1)$ on a minimal larger locally Hilbert space $\mathbb{K}$, i.e. $T^n=J^*U^nJ$, $n\in\mathbb{Z}_{+}$, $\mathbb{K}=\bigvee\limits_{n \in \mathbb{Z}}U^nJ\mathbb{H}$.
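In the classical Hilbert space setting, the one-step compression $T = J^* U J$ can be realized concretely by the Halmos block construction $U = \left(\begin{smallmatrix} T & D_{T^*} \\ D_T & -T^* \end{smallmatrix}\right)$ with $D_T = (I - T^*T)^{1/2}$; a numerical sketch (the contraction $T$ below is an arbitrary illustrative choice, and this realizes only the $n=1$ step, not the full power dilation):

```python
import numpy as np

def psd_sqrt(A):
    """Square root of a positive semidefinite symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

# an illustrative real contraction: scale a random matrix below unit norm
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))
T = 0.9 * T / np.linalg.norm(T, 2)

I = np.eye(3)
DT, DTs = psd_sqrt(I - T.T @ T), psd_sqrt(I - T @ T.T)

# Halmos dilation: a unitary on H (+) H whose top-left corner is T
U = np.block([[T, DTs], [DT, -T.T]])
J = np.vstack([I, np.zeros((3, 3))])        # embedding of H into H (+) H

print(np.allclose(U.T @ U, np.eye(6)))      # True: U is unitary
print(np.allclose(J.T @ U @ J, T))          # True: T = J* U J
```

The full power dilation $T^n = J^* U^n J$ for all $n$ requires the Sch\"affer construction on an infinite direct sum; the block above illustrates only the compression structure.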
\paragraph3 If $\{T_t\}_{t\in\mathbb{R}_{+}}$ is a \emph{locally contraction semigroup} on $\mathbb{H}$, then by defining
\begin{equation*}
T_{(t)}: = \begin{cases} T_t , & t \in \mathbb{R}_+ \\
T^{*}_{ (- t)} , & t<0
\end{cases}
,
\end{equation*}
the function $$\mathbb{R}\ni t\mapsto T_{(t)}\in\mathcal{L}(\mathbb{H})$$
will be LPD on the group $\mathbb{R}$ and satisfies the locally boundedness condition. By applying Theorem \ref{thm51}, we get the existence of a locally unitary group $U_t:=\pi(t)$, $t\in\mathbb{R}$, on a larger locally Hilbert space $\mathbb{K}\supset\mathbb{H}$, which dilates $\{T_t\}_{t\in\mathbb{R}_{+}}$. It is also possible to obtain dilation results for a semigroup $\{T_s\}_{s\in S}$ of locally contractions from $\mathcal{L}(\mathbb{H})$, where $S$ is an abelian subsemigroup of a group $G$, with $S\cap S^{-1}=\{e\}$ and $s^*=s^{-1}$, $s\in S$ (see \cite{Su.1975} for the Hilbert space frame).
\paragraph4 If $T = \varinjlim\limits_{\lambda\in\Lambda} T_\lambda \in \mathcal{L} (\mathbb{H})$ satisfies the locally condition
\begin{equation} \label{eq63}
\Vert p(T_\lambda)\Vert_\lambda \leq \sup\limits_{|z| \le 1} |\rho p (z) + (1 - \rho) p (0)|
\end{equation}
for each polynomial $p$, then $T$ has a unitary $\rho$~-~dilation $U$ on a (minimal) larger locally Hilbert space $\mathbb{K}$, i.e. $T^n=\rho J^*U^nJ$, $n=1,2,\dots$ and $\mathbb{K}=\bigvee\limits_{n \in \mathbb{Z}}U^nJ\mathbb{H}$. Indeed, it results, applying Theorem \ref{thm52}, that the above condition is in fact equivalent to the conditions ($\rho$LPD) and ($\rho$LBC) for $S = \mathbb{Z}$, as above, with
\begin{equation*}
T(n) :\ = \begin{cases} \frac{1}{\rho} T^{(n)} ,& n \ne 0 \\
I , & n = 0 \end{cases} , n \in \mathbb{Z} ,
\end{equation*} (compare also with \cite{Ga.1970}).
It is clear that such a $\rho$-contraction $T$ is a bounded element in the $LC^*$-algebra $\mathcal{L}(\mathbb{H})$, i.e. $T \in \bigl(\mathcal{L}(\mathbb{H})\bigr)_b$.
It is not hard to see that for $T = \varinjlim\limits_{\lambda\in\Lambda} T_\lambda$ the following conditions are equivalent:
\begin{itemize}
\item[(i)] $T$ has a locally unitary $\rho$~-~dilation;
\item[(ii)] $T$ satisfies the condition (\ref{eq63});
\item[(iii)] each $T_\lambda$ has a unitary $\rho$~-~dilation, $\lambda\in\Lambda$.
\end{itemize}
\paragraph5 It is also possible to obtain from Theorem \ref{thm51} an analogue of the Bram criterion for the existence of a normal extension of an operator from $\mathcal{L}(\mathbb{H})$. It means that the notion of a subnormal operator in $\mathcal{L}(\mathbb{H})$ makes sense as in the Hilbert space case. Moreover, the particular case of the existence of a locally unitary extension can be obtained by applying {\bf 2} to a locally isometry. \\
Similar results, as in the Hilbert space case, can be obtained for commuting systems of operators on a locally Hilbert space.
\subsection*{Acknowledgment}
This work was entirely supported by research grant 2-CEx06-11-34/25.07.06.
\section{Introduction}
\label{intro}
Nanocomposite films made of nano-sized metallic particles embedded in an insulating matrix display peculiar
electron transport regimes depending on the relative amount of the metallic phase compared to the insulating one.
Above a critical concentration $x_c$ of the metallic phase, macroscopic clusters of connected metallic particles span the entire
sample, giving rise to an electrical conductivity $\sigma$ characterized in the vicinity of $x_c$ by a power-law behavior of the form:
\begin{equation}
\label{power}
\sigma\propto (x-x_c)^t,
\end{equation}where $t$ is the transport exponent which the percolation theory predicts to take the values $t\simeq 1.3$ and $t\simeq 2$ for strictly
two-dimensional (2d) and three-dimensional (3d) systems, respectively.\cite{Stauffer1994,Sahimi2003}
Equation \eqref{power} describes well the conductivity behavior in the $x>x_c$ region of several nanogranular metal films grown
by different methods,\cite{McAlister1985, Toker2003,Salvadori2008,Wei2013} with observed values of the exponent that are consistent
with the effective dimensionality of the films. Below $x_c$ the metallic phase is broken up into disconnected metallic regions, so that
for concentrations sufficiently smaller than $x_c$ electrons have to tunnel across the insulating barrier separating the conducting fillers.
In this regime, the conductivity of nanocomposites with embedded, isolated metallic particles follows
approximately:\cite{Ambegaokar1971,Hunt2005,Ambrosetti2010a,Ambrosetti2010b}
\begin{equation}
\label{tun1}
\sigma\propto e^{-\frac{2\delta(x)}{\xi}},
\end{equation}
where $\xi$ is the tunneling decay length which, depending on the nature of the composite constituents, ranges from a fraction to a few
nanometers, and $\delta(x)$ is a typical distance between the surfaces of two conducting particles. In the dilute limit $x\rightarrow 0$,
dimensional considerations\cite{Note1} imply that $\delta(x)\propto D/x^{1/d}$ for particles of mean size $D$ randomly dispersed
in a $d$-dimensional volume, as also inferred from transport measurements on 2d and 3d nanogranular metal composites.\cite{Wei2013,Fostner2014}
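Equation \eqref{tun1} combined with the dilute-limit estimate $\delta(x)\approx D/x^{1/d}$ predicts that $\ln\sigma$ is linear in $x^{-1/d}$, with slope $-2D/\xi$. A minimal numerical sketch of this scaling (the parameter values are illustrative only):

```python
import numpy as np

D, xi = 3.0, 0.5   # illustrative particle size and tunneling decay length (nm)

def log_sigma(x, d):
    """ln(sigma), up to an additive constant, from sigma ~ exp(-2 delta/xi)
    with the dilute-limit estimate delta(x) = D / x**(1/d)."""
    return -2.0 * (D / x ** (1.0 / d)) / xi

x = np.linspace(0.05, 0.3, 50)
# ln(sigma) is exactly linear in x**(-1/d), with slope -2*D/xi
slopes = {d: np.polyfit(x ** (-1.0 / d), log_sigma(x, d), 1)[0] for d in (2, 3)}
print(slopes)      # both slopes equal -2*D/xi = -12.0
```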
Recently, a survey of the conductivity data of several thick nanogranular composite films has evidenced that the percolation behavior
of Eq.~\eqref{power} evolves into the tunneling one of Eq.~\eqref{tun1} as the result of the competition between percolation and
tunneling transport mechanisms as $x$ decreases.\cite{Grimaldi2014}
In this article we study the conductivity behavior as a function of gold content in Au-implanted nanocomposite films above and below $x_c$.
We consider two distinct series of systems: Au-implanted alumina and Au-implanted polymethylmethacrylate (PMMA) films. Au-alumina
films have thicknesses $h$ a few times larger than the mean Au particle size $D$, while the conducting layer of Au-PMMA is basically
formed by a single layer of gold particles ($h\approx D$), which makes this system strictly two dimensional.
We show that for $x>x_c$ the conductivity of both systems follow Eq.~\eqref{power} with transport exponents very close to the 2d value $t\simeq 1.3$,
in accord with the prediction of the percolation theory of nanocomposites with nanometric values of $h$. To study the tunneling behavior
at Au concentrations below $x_c$, we analyze the conductivities of Au-PMMA and Au-alumina using an effective medium theory formulated for
film thicknesses ranging from $h=D$ to $h\gg D$, as to describe the evolution from 2d to 3d of the conductivity. We find that, in contrast to the
2d nature of the Au-PMMA films, the Au-alumina films are large enough to sustain tunneling conductivity with 3d character. The effective
dimensionality of our ion-implanted Au-alumina nanocomposites thus increases from 2d to 3d as the system crosses over from the
percolation to the tunneling regimes. This conclusion is supported by the value of $\xi$ extracted from the conductivity data, which coincides
with that observed\cite{Grimaldi2014} in thick (i.e., 3d) co-sputtered Au-alumina nanocomposite films.\cite{Abeles1975}
\section{Material and methods}
\label{material}
The experimental data used in the present work are essentially the same as those presented in Refs.~\onlinecite{Salvadori2008,Salvadori2013,Teixeira2009}, but they are analyzed here with a completely different approach and over a different range. The works mentioned addressed specifically implantation doses above the percolation thresholds, while the present work focuses on doses below the percolation thresholds, where the particles are isolated from each other and tunneling is the dominant transport mechanism. A summary of the experimental procedure is given here; details can be found in the references mentioned above.
PMMA was deposited on glass microscope slides using a spin coater, yielding a film thickness of about $50$ nm. Electrical contacts were formed at both ends of the substrates by plasma deposition of thick ($200$ nm) platinum films. Ion implantation was carried out at very low energy ($49$ eV) using a streaming (unidirectionally drifting) charge-neutral plasma formed by a vacuum arc plasma gun. Gold ion implantation in alumina was performed in an implanter at $40$ keV.\cite{Salvadori2012} Electrical contacts were formed at both ends of the alumina sample using silver paint.
For both Au-PMMA and Au-alumina systems, in situ resistance measurements were performed as the ion implantation
proceeded:\cite{Salvadori2008,Salvadori2013} after a known dose of Au ions is implanted, the implantation process is temporarily halted
and the resistance is measured. This process is repeated to determine the sample conductivity as a function of ion implantation dose.
In the present work, computer simulation using the TRIDYN computer code\cite{Moeller1984,Moeller1988} was used to estimate
the depth profiles of the ion implanted gold in the alumina substrate. TRIDYN is a Monte Carlo simulation program based on
the TRIM (Transport and Range of Ions in Matter) code.\cite{Ziegler1985} The program takes into account compositional changes
in the substrate due to previously implanted dopant atoms and to sputtering of the substrate surface.
Note that the parameters $t$ (transport exponent) and $x_c$ (critical concentration of the metallic phase) were recalculated in the present work using a different approach, and present some deviation from those obtained in the previous works.\cite{Salvadori2008,Salvadori2013}
\section{Results}
\label{results}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.42,clip=true]{fig1}
\caption{(Color online) Normalized conductivity (open squares) of (a) Au-PMMA and (b) Au-alumina nanocomposite films
as a function of the normalized ion dose $x$. Solid lines are fits with Eq.~\eqref{power}. }\label{fig1}
\end{center}
\end{figure}
Results of our conductivity measurements on Au-PMMA and Au-alumina nanocomposites are shown in Fig.~\ref{fig1} (open squares), where
the conductivity ratio $\sigma/\sigma_0$ is plotted as a function of the normalized Au concentration $x=\varphi/\varphi_0$.
The saturation dose $\varphi_0$ is defined as the implantation dose $\varphi$ above which a continuous metal film starts to
be deposited on the insulator surface, while the saturation conductivity $\sigma_0$ is the conductivity measured at
$\varphi=\varphi_0$ (i.e., at $x=1$). We have determined $\varphi_0=2\times 10^{16}$ atoms cm$^{-2}$ and $\sigma_0=2\times 10 ^6$ S/m
for Au-PMMA,\cite{Salvadori2008} and $\varphi_0=9.5\times 10^{16}$ atoms cm$^{-2}$ and $\sigma_0=14$ S/m for Au-alumina.\cite{Salvadori2013}
As is apparent from the semi-log plot of Fig.~\ref{fig1}, the $x$-dependence of $\sigma/\sigma_0$ for both Au-PMMA and Au-alumina films
presents a double hump shape, which is a feature commonly observed also in several other nanogranular metals composites.\cite{Grimaldi2014}
The hump at values of $x$ larger than $0.4-0.5$ stems from the presence of clusters of coalesced Au particles that extend
across the entire composite layer. In this region, $\sigma/\sigma_0$ is reasonably well fitted by Eq.~\eqref{power} (shown
by solid lines in Fig.~\ref{fig1}) with $t=1.26\pm 0.03$ and $x_c=0.443\pm 0.002$ for Au-PMMA, and $t=1.4\pm 0.2$ and $x_c=0.44\pm 0.02$
for Au-alumina composite films.
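Fits of this kind can be reproduced with a standard nonlinear least-squares routine; the sketch below uses synthetic data (the noise level, initial guesses and parameter bounds are illustrative choices, not the experimental values):

```python
import numpy as np
from scipy.optimize import curve_fit

def percolation(x, A, xc, t):
    """sigma/sigma0 = A * (x - xc)**t for x above the percolation threshold."""
    return A * (x - xc) ** t

# synthetic "data" with known parameters plus a little noise
rng = np.random.default_rng(1)
A0, xc0, t0 = 1.0, 0.44, 1.3
x = np.linspace(0.48, 1.0, 40)
y = percolation(x, A0, xc0, t0) * (1.0 + 0.005 * rng.standard_normal(x.size))

# bound xc below min(x) so (x - xc)**t stays real during the fit
popt, _ = curve_fit(percolation, x, y, p0=[1.0, 0.40, 1.5],
                    bounds=([0.0, 0.0, 0.5], [10.0, 0.47, 3.0]))
A, xc, t = popt
print(xc, t)       # close to the input values 0.44 and 1.3
```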
The fitted values of the transport exponents
are both consistent with the 2d value $t\simeq 1.3$, indicating that ion-implanted Au-PMMA and Au-alumina composites behave
as 2d percolating composites for $x>x_c$. This result is consistent with the values of the thickness of the conducting layers
of Au-PMMA and Au-alumina extracted from the TRIDYN analysis and from cross-sectional TEM images of Au-PMMA.
For the case of Au-PMMA films we observed a conducting layer of
thickness $h\approx 5.5-8$ nm (see Fig.~\ref{fig2}(a)) formed by Au particles of size $D\approx5-6$ nm,\cite{Teixeira2009} which indicates that
Au-PMMA films are composed basically of one monolayer of Au particles embedded into the PMMA substrate.
Au-implanted PMMA films are thus strictly 2d systems. On the contrary, the conducting layer of Au-alumina films is
not strictly two-dimensional: the mean Au particle size extracted from TEM is $D\approx3.2$ nm,\cite{Salvadori2013} while
TRIDYN analyses indicate that $h\approx 20$ nm (see Fig.~\ref{fig2}(b)), that is, about $6$ times larger than $D$. The 2d character
of the percolation conductivity of Au-alumina films is explained by observing that the relevant length scale that governs
the power-law behavior of Eq. (1) is the correlation length $\zeta$ (not to be confused with the tunneling decay length $\xi$),
which measures the typical size of finite clusters of connected particles.\cite{Stauffer1994}
When $x$ approaches the percolation threshold $x_c$ from either above or below, the correlation length increases
as $\zeta\approx a\vert x-x_c\vert^{-\nu}$, where $a$ is of the order of the particle size $D$, and $\nu>0$ is the correlation length exponent.
For concentrations such that $\zeta$ becomes larger than the film thickness $h$,
the composite behaves effectively as a 2d system,\cite{Sotta2003,Zekri2011} so that in the vicinity of $x_c$ the conductivity
is expected to follow Eq.~\eqref{power} with $t\simeq 1.3$, as observed in our Au-alumina samples.\cite{Note2}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.42,clip=true]{fig2}
\caption{(Color online) TRIDYN simulations of the depth profiles for gold implanted into (a) PMMA with ion energy $49$ eV and
dose $\varphi=0.4\times 10^{16}$ atoms cm$^{-2}$, and into (b) alumina with ion energy $40$ keV and
dose $\varphi=2.5\times 10^{16}$ atoms cm$^{-2}$. With these doses, the systems are below their respective percolation
thresholds, presenting isolated gold nanoparticles. The arrows in (b) delimit $20$ nm that is the conducting layer thickness considered.}\label{fig2}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.38,clip=true]{fig3}
\caption{(Color online) Model of a nanocomposite film with finite thickness of the conducting layer. (a) The metallic particles
are taken to be spheres with identical diameter $D$ dispersed within the space delimited by two parallel hard planes of area $A$
and separated by a distance $h$. The spheres cannot penetrate the confining walls. (b) Cross-sectional view illustrating the
volume $V_h=(h-D)A$ available for the sphere centers (grey region).}\label{fig3}
\end{center}
\end{figure}
Concerning the percolation thresholds extracted from the fits, we note that the values of $x_c$ of Au-PMMA and Au-alumina are
basically the same, although the percolation threshold is generally expected to depend on the particular combination of metal
and insulator components constituting the films and on the details of the fabrication process.\cite{Abeles1975,Grimaldi2014} We have
no arguments ruling out that the equivalence of the two percolation thresholds is just a mere coincidence, as very similar
thresholds have already been observed in granular films with different components.\cite{Grimaldi2014} Interestingly, however,
the value $x_c\simeq 0.44$ of our Au-alumina films is larger than the percolation threshold of three-dimensional Au-alumina composites
($x_c\simeq 0.38$), which is consistent with the observation that the percolation threshold generally diminishes as the dimensionality
increases.\cite{Sahimi2003}
For values of $x$ lower than $x_c$ clusters of coalesced particles no longer span the entire composite layer and the conductivity
is dominated by tunneling processes, which give rise to the hump of $\sigma$ at $x<x_c$ in Fig.~\ref{fig1}. In this region the
conductivities of Au-PMMA and of Au-alumina films are expected to follow Eq.~\eqref{tun1} with typical inter-particle distances
$\delta(x)$ whose $x$-dependence is influenced by the effective dimensionality of the system. Although for the case of Au-PMMA films,
which are strictly two-dimensional, $\delta(x)$ can be determined by considering dispersions of conducting particles in a plane,
for Au-alumina films one must consider the effect of the finite width $h$ of the conducting layer, as its observed value ($h\approx 6D$)
is such that tunneling processes between Au particles at different depths within the conducting layer are also possible.
To tackle this problem, we consider an effective medium approximation (EMA) for the tunneling conductivity
applied to a dispersion of $N$ spherical particles of diameter $D$ that are confined by two parallel hard walls separated by a
distance $h\geq D$ and of macroscopic area $A$, as shown schematically in Fig.~\ref{fig3}(a). The EMA conductivity $\bar{g}$ is the solution
of the following equation:\cite{Ambrosetti2010b,Grimaldi2014}
\begin{equation}
\label{ema1}
\frac{1}{N}\left\langle \sum_{i\neq j}\frac{g(r_{ij})}{\bar{g}+g(r_{ij})}\right\rangle=2,
\end{equation}
where $g(r_{ij})=g_0\exp[-2(r_{ij}-D)/\xi]$, with $r_{ij}\geq D$, is the tunneling conductance between two spherical particles $i$ and $j$,
$r_{ij}=\vert\vec{r}_i-\vec{r}_j\vert$ is the distance between their centers located at $\vec{r}_i$ and $\vec{r}_j$,
and $g_0$ is a tunneling prefactor. The angular brackets in Eq.~\eqref{ema1} denote a statistical average over the
positions of the spheres occupying the volume delimited by the two parallel planes located at
$z=\pm h/2$, as shown in Fig.~\ref{fig3}(b). Since the spherical particles cannot penetrate the hard walls, the available volume for
the sphere centers is $V_h=(h-D)A$.
Assuming that the particles are uncorrelated, the average reduces to:
\begin{equation}
\label{ave}
\left\langle\left(\cdots\right)\right\rangle=\frac{1}{V_h^N}\int_{V_h}d\vec{r}_1\cdots\int_{V_h} d\vec{r}_N\left(\cdots\right),
\end{equation}
so that Eq.~\eqref{ema1} can be rewritten as:
\begin{equation}
\label{ema2}
\frac{N}{V_h^2}\int_{V_h}d\vec{r}_1\int_{V_h}d\vec{r}_2\frac{\theta(r-D)}{g^*\exp[2(r-D)/\xi]+1}=2,
\end{equation}
where $g^*=\bar{g}/g_0$, $\theta(x)=1$ for $x\geq 0$ and $\theta(x)=0$ for $x<0$, and $r=\vert\vec{r}_1-\vec{r}_2\vert$.
Using
\begin{equation}
\label{sumz}
\iint_{-(h-D)/2}^{(h-D)/2}\!dz_1\,dz_2=\int_{-\infty}^{+\infty}\!dz\,(h-D-\vert z\vert)\theta(h-D-\vert z\vert),
\end{equation}
where $z=z_1-z_2$, we express Eq.~\eqref{ema2} in terms of an integral over $\vec{r}=\vec{r}_1-\vec{r}_2$ which, due
to the exponential decay of the tunneling conductance, can be extended over the whole space:
\begin{equation}
\label{ema3}
\frac{h\rho}{h-D}\int\! d\vec{r}\, \theta(r-D)\frac{(h-D-\vert z\vert)\theta(h-D-\vert z\vert)}{g^*\exp[2(r-D)/\xi]+1}=2,
\end{equation}
where $\rho=N/(hA)$ is the particle number density. Finally, we pass to spherical coordinates and integrate over the angles
to find that the EMA conductivity $g^*$ satisfies:
\begin{equation}
\label{ema4}
\frac{12 h\eta}{D^3}\int_D^{\infty}\!dr\,\frac{r}{g^*\exp[2(r-D)/\xi]+1}=2
\end{equation}
for $D\leq h\leq 2D$, and
\begin{align}
\label{ema5}
&\frac{12 h\eta}{D^3}\int_{h-D}^{\infty}\!dr\,\frac{r}{g^*\exp[2(r-D)/\xi]+1}\nonumber\\
&+\frac{24 h\eta}{(h-D)D^3}\int_D^{h-D}\!dr\,\frac{r^2}{g^*\exp[2(r-D)/\xi]+1}\nonumber\\
&-\frac{12 h\eta}{(h-D)^2D^3}\int_D^{h-D}\!dr\,\frac{r^3}{g^*\exp[2(r-D)/\xi]+1}=2
\end{align}
for $h\geq 2D$. In Eqs.~\eqref{ema4} and \eqref{ema5} we have introduced the dimensionless density $\eta=\pi D^3\rho/6$ which,
for the case of uncorrelated (i.e., penetrable) spheres, is related to the volume fraction $x$ of the metallic phase through
$x=1-\exp(-\eta)$.\cite{Torquato1990} The 2d and 3d limits are obtained by setting, respectively, $h=D$ in Eq.~\eqref{ema4} and
$h\gg D$ in Eq.~\eqref{ema5}. It is instructive to express $g^*$ as a tunneling conductance between the surfaces of two spheres
separated by a characteristic distance $\delta^*$:
\begin{equation}
\label{tun2}
g^*=e^{-\frac{2\delta^*}{\xi}}.
\end{equation}
In this way, the term $1/\{g^*\exp[2(r-D)/\xi]+1\}$ that appears in the integrands of Eqs.~\eqref{ema4} and \eqref{ema5} reduces
for $\xi/D\ll 1$ to:
\begin{align}
\label{tun3}
\frac{1}{g^*\exp[2(r-D)/\xi]+1}&=\frac{1}{\exp[2(r-D-\delta^*)/\xi]+1}\nonumber\\
&\approx\theta(\delta^*+D-r).
\end{align}
Using Eq.~\eqref{tun3} in Eq.~\eqref{ema4}, we thus find that the characteristic distance $\delta^*$ for a strictly
2d system becomes:
\begin{equation}
\label{delta1}
\frac{\delta^*}{D}=\left[\frac{1}{3\ln(1-x)^{-1}}+1\right]^{1/2}-1,
\end{equation}
while, since the first and third integrals in Eq.~\eqref{ema5} vanish for $h\gg D$, for a 3d system we obtain:
\begin{equation}
\label{delta2}
\frac{\delta^*}{D}=\left[\frac{1}{4\ln(1-x)^{-1}}+1\right]^{1/3}-1.
\end{equation}
For $x\ll 1$ the above expressions correctly reproduce the dimensional scaling $\delta^*\propto D/x^{1/d}$ expected to hold
true in the dilute limit. Furthermore, Eqs.~\eqref{delta1} and \eqref{delta2} evidence that, for moderately small values of $x$,
the relevant tunneling distance for 2d systems is about twice that for 3d systems, as illustrated in Fig.~\ref{fig4}(a) where
Eqs.~\eqref{delta1} and \eqref{delta2} are shown by solid lines.
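As a quick numerical illustration (our own sketch, not part of the original analysis), the closed forms of Eqs.~\eqref{delta1} and \eqref{delta2} can be evaluated directly:

```python
import math

def delta_star_2d(x):
    """Characteristic tunneling distance delta*/D in the strictly 2d limit, Eq. (delta1)."""
    return math.sqrt(1.0 / (3.0 * math.log(1.0 / (1.0 - x))) + 1.0) - 1.0

def delta_star_3d(x):
    """Characteristic tunneling distance delta*/D in the 3d limit, Eq. (delta2)."""
    return (1.0 / (4.0 * math.log(1.0 / (1.0 - x))) + 1.0) ** (1.0 / 3.0) - 1.0

# At moderately small metal fractions the 2d distance is roughly twice the 3d one.
for x in (0.05, 0.10, 0.20):
    print(x, delta_star_2d(x), delta_star_3d(x))
```

For $x=0.1$, for instance, this gives $\delta^*/D\simeq 1.04$ in 2d versus $\simeq 0.50$ in 3d.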
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.33,clip=true]{fig4}
\caption{(Color online) (a) Relevant tunneling distance $\delta^*$ between the surfaces of two spheres as a function
of the metallic volume fraction $x$ and for different thicknesses $h$ of the film.
$\delta^*$ is calculated from $\delta^*=(\xi/2)\ln(1/g^*)$, where $g^*$ is the numerical solution of the EMA
equations \eqref{ema4} and \eqref{ema5} with tunneling decay length fixed at $\xi=0.05 D$. Solid lines are results
for the limits 2d and 3d given in Eqs.~\eqref{delta1} and \eqref{delta2}. (b) $\delta^*$ as a function of $h$ for selected
values of the volume fraction of the metallic spheres. For $\delta^*\gtrsim 3D$ the films behave practically as 3d systems.}\label{fig4}
\end{center}
\end{figure}
To assess how the EMA conductivity evolves from the 2d to the 3d limits, we solve numerically Eqs.~\eqref{ema4} and \eqref{ema5}
for film thicknesses ranging from $h=D$ to $h\gg D$. Expressing the resulting $g^*$ in terms of the distance $\delta^*$ as
specified in Eq.~\eqref{tun2}, we find that $\delta^*$ rapidly decreases as the film thickness increases from $h=D$,
and essentially matches the 3d limit already for $h\gtrsim 3D$, as shown in Fig.~\ref{fig4}(b) where $\delta^*$ is presented as a
function of $h/D$ for selected $x$ values. The rapid evolution from 2d to 3d is also illustrated in Fig.~\ref{fig4}(a), where the
numerical values of $\delta^*$ as function of $x$ and for several film thicknesses are compared with Eqs.~\eqref{delta1} and
\eqref{delta2}.
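A minimal numerical sketch of this procedure in the 2d limit ($h=D$, with $D=1$ and an assumed decay length $\xi=0.05D$; our illustration, not the authors' code) solves Eq.~\eqref{ema4} for $g^*$ and converts it to $\delta^*$ via Eq.~\eqref{tun2}:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import expit

def ema_delta_star_2d(x, xi=0.05):
    """Solve the 2d EMA equation (h = D, units of D = 1) for g* and
    return delta* = (xi/2) ln(1/g*); a sketch of Eq. (ema4)."""
    eta = -np.log(1.0 - x)  # dimensionless density, from x = 1 - exp(-eta)

    def lhs(log_g):
        # 1/(g* exp(2(r-1)/xi) + 1) written with expit to avoid overflow
        f = lambda r: r * expit(-(log_g + 2.0 * (r - 1.0) / xi))
        # the Fermi-like factor drops near r = 1 + delta*; help quad locate it
        bp = min(19.0, max(1.0 + xi, 1.0 - 0.5 * xi * log_g))
        return 12.0 * eta * quad(f, 1.0, 20.0, points=[bp])[0]

    log_g = brentq(lambda lg: lhs(lg) - 2.0, -300.0, 0.0)
    return -0.5 * xi * log_g  # delta* in units of D

# For xi << D this approaches the sharp-cutoff closed form of Eq. (delta1).
print(ema_delta_star_2d(0.1))
```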
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.33,clip=true]{fig5}
\caption{(Color online) Natural logarithm of measured $\sigma/\sigma_0$ (open squares) as a function of $\delta^*/D$
for (a) Au-PMMA and (b) Au-alumina nanocomposite films. The characteristic EMA distance $\delta^*$ is assumed to be given by
the 2d limit of Eq.~\eqref{delta1} for Au-PMMA and by the 3d limit of Eq.~\eqref{delta2} for Au-alumina.
Solid lines are the best fits of the data with Eq.~\eqref{fit}. }\label{fig5}
\end{center}
\end{figure}
The above analysis suggests that it suffices to have film thicknesses only a few times larger than the particle size to
induce a 3d character to the tunneling conductivity. This result is consistent with the observation that the relevant
length scale for tunneling is given by the mean inter-particle distance, and has interesting consequences with respect
to the nanocomposite films considered here. Indeed, if on the one hand Au-PMMA films are expected to display 2d conductivity
in the whole range of $x$ because $h\approx D$, on the other hand the conductivity of Au-alumina nanocomposites should be
understood as having 2d character for $x>x_c$ but 3d character for $x<x_c$. This is so because the measured thickness of
the conducting layer of Au-alumina ($h\approx 6D$) is sufficiently larger than the EMA 2d-3d crossover value $h\approx 3D$
(see Fig.~\ref{fig4}) to induce 3d tunneling.
Assuming that the above considerations capture the essential physics of the problem, we interpret the measured conductivity
data of Au-PMMA and Au-alumina films in terms of Eq.~\eqref{tun1}, where $\delta(x)$ is identified with $\delta^*$ as given
by Eqs.~\eqref{delta1} and \eqref{delta2}, respectively. Consistently with the exponential form of Eq.~\eqref{tun1},
the $\ln(\sigma/\sigma_0)$ data of both systems follow approximately a linear dependence as a function of $\delta^*$,
as shown in Fig.~\ref{fig5}. From linear fits of $\ln(\sigma/\sigma_0)$ with
\begin{equation}
\label{fit}
\ln(\sigma/\sigma_0)=-\frac{2D}{\xi}\frac{\delta}{D}+\textrm{const.},
\end{equation}
we extract the values of $2D/\xi$ that best fit the data to find $\xi/D=0.113\pm 0.002$ for Au-PMMA and $\xi/D=0.044\pm 0.001$
for Au-alumina. Remarkably, the value of $\xi/D$ found for Au-alumina coincides with the value extracted from analyses of
co-sputtered Au-alumina granular thick films,\cite{Abeles1975} which are three-dimensional systems.
Using the estimated mean size of Au particles in our films ($D\approx 3.2$ nm) we find $\xi\approx 0.14$ nm, which compares fairly
well with $\xi\approx 0.1$ nm, obtained from $\xi=\hbar/\sqrt{2m\Delta E}$, where $m$ is the electron mass and $\Delta E\approx 4$ eV
is the estimated barrier height for tunneling between Au and alumina.\cite{Grimaldi2014}
It is worth noting that fitting the Au-alumina conductivity data using Eq.~\eqref{fit} with $\delta^*$ as given by the 2d limit of
Eq.~\eqref{delta1} gives $\xi\approx 0.3$ nm, which is about twice the value found assuming 3d tunneling.
From $\xi/D\simeq 0.113$ found for Au-PMMA films and from the corresponding mean Au particle size ($D\approx 5-6$ nm) we find
$\xi\approx 0.6-0.7$ nm, which indicates that the tunneling decay length of Au-PMMA is substantially larger than that
of Au-alumina nanocomposites.
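The barrier-height estimate quoted above is easy to verify numerically (our check, using standard CODATA constants):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # 1 eV in J

def decay_length_nm(barrier_eV):
    """Tunneling decay length xi = hbar / sqrt(2 m DeltaE), in nm."""
    return 1e9 * hbar / math.sqrt(2.0 * m_e * barrier_eV * eV)

# Delta E ~ 4 eV for the Au/alumina barrier gives xi ~ 0.1 nm,
# compared with xi = (xi/D) * D ~ 0.044 * 3.2 nm ~ 0.14 nm from the fit.
print(decay_length_nm(4.0))
```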
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.38,clip=true]{fig6}
\caption{(Color online) Monte Carlo results (open symbols) for the critical distance $\delta_c$ calculated for dispersions
of impenetrable disks in 2d,\cite{Bug1985,Lee1990} and impenetrable spheres in 3d.\cite{Ambrosetti2010a,Miller2009}
Solid lines are $\delta_c=2.5(r_{NN}^{2d}-D)$ for the 2d case and $\delta_c=1.65(r_{NN}^{3d}-D)$ for the 3d case, where
$r_{NN}^{2d}$ and $r_{NN}^{3d}$ are the expressions of the corresponding mean distances between the centers of nearest
neighboring particles given in Eqs.~\eqref{r2d} and \eqref{r3d}.}
\label{fig6}
\end{center}
\end{figure}
To assess the robustness of the results based on the EMA approach, we repeat the above analysis by identifying $\delta$ of
Eq.~\eqref{tun1} with the critical distance $\delta_c$, as prescribed by the critical path
approximation (CPA).\cite{Ambegaokar1971,Hunt2005,Ambrosetti2010a,Chatterjee2013}
According to CPA, $\delta_c$ is defined as the shortest among the inter-particle distances $\delta_{ij}=r_{ij}-D$ such that
the set of bonds satisfying $\delta_{ij}\leq \delta_c$ forms a percolating cluster.
For dispersions of metallic particles in which the distances $\delta_{ij}$ span several
multiples of $\xi$, CPA ensures that Eq.~\eqref{tun1} with
$\delta=\delta_c$ gives a good estimate of the composite conductivity.\cite{Ambrosetti2010a}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.33,clip=true]{fig7}
\caption{(Color online) Natural logarithm of measured $\sigma/\sigma_0$ (open squares) as a function of $\delta_c$ for (a)
Au-PMMA and (b) Au-alumina nanocomposite films. The critical distance $\delta_c$ is assumed to be given by the 2d limit
for Au-PMMA and by the 3d limit for Au-alumina. Solid lines are the best fits of the data with Eq.~\eqref{fit}. }
\label{fig7}
\end{center}
\end{figure}
Monte Carlo results of $\delta_c$ for impenetrable
spheres dispersed in a 3d volume\cite{Ambrosetti2010a,Miller2009} and impenetrable disks dispersed in a 2d
area\cite{Bug1985,Lee1990} are shown in Fig.~\ref{fig6} by open symbols. For the 2d case, we set $x=2\phi_{2d}/3$ to convert the area
fraction $\phi_{2d}$ covered by the disks to the volume fraction $x$ of spheres of equal diameter with centers lying on a plane.
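The conversion factor follows from elementary geometry: spheres of diameter $D$ whose centers lie on a plane occupy an effective volume $AD$, so that
\begin{equation*}
x=\frac{N\pi D^3/6}{AD}=\frac{2}{3}\,\frac{N\pi D^2/4}{A}=\frac{2}{3}\,\phi_{2d}.
\end{equation*}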
In analogy with the EMA results of Fig.~\ref{fig4}, the critical distance for the 2d case is systematically larger than the critical
distance in 3d. For $x$ larger than about $0.1$ the Monte Carlo data are well reproduced by setting
$\delta_c=2.5(r_{NN}^{2d}-D)$ for the 2d case and $\delta_c=1.65(r_{NN}^{3d}-D)$ for the 3d case (solid lines in Fig.~\ref{fig6}), where
\begin{equation}
\label{r2d}
r_{NN}^{2d}=D+D\frac{(1-3x/2)^2}{6x(2-3x/2)},
\end{equation}
and
\begin{equation}
\label{r3d}
r_{NN}^{3d}=D+D\frac{(1-x)^3}{12x(2-x)},
\end{equation}
are the mean distances between the centers of nearest neighboring spheres in 2d and 3d, respectively.\cite{Torquato1990,MacDonald1992}
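As an illustration (our own sketch, with $D=1$), the CPA critical distances implied by Eqs.~\eqref{r2d} and \eqref{r3d} together with the prefactors quoted above can be evaluated as:

```python
def r_nn_2d(x, D=1.0):
    """Mean nearest-neighbor center distance for disks in a plane, Eq. (r2d); valid for x < 2/3."""
    return D + D * (1.0 - 1.5 * x) ** 2 / (6.0 * x * (2.0 - 1.5 * x))

def r_nn_3d(x, D=1.0):
    """Mean nearest-neighbor center distance for spheres in 3d, Eq. (r3d)."""
    return D + D * (1.0 - x) ** 3 / (12.0 * x * (2.0 - x))

def delta_c_2d(x, D=1.0):
    return 2.5 * (r_nn_2d(x, D) - D)

def delta_c_3d(x, D=1.0):
    return 1.65 * (r_nn_3d(x, D) - D)

# The 2d critical distance is systematically larger than the 3d one.
for x in (0.1, 0.2, 0.3):
    print(x, delta_c_2d(x), delta_c_3d(x))
```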
We fit the $\ln(\sigma/\sigma_0)$ data of Au-PMMA and Au-alumina with Eq.~\eqref{fit} using respectively the 2d and 3d functional
dependences of $\delta_c$, as shown in Fig.~\ref{fig7}. From the slopes of the straight lines we extract $\xi/D=0.208\pm 0.004$ for Au-PMMA
and $\xi/D=0.045\pm 0.001$ for Au-alumina. We immediately see that for the Au-alumina films the estimates of $\xi/D$ from CPA and
EMA coincide within errors, while those for Au-PMMA films differ by almost a factor $2$. This discrepancy could be attributed to
possible effects of local particle correlations, neglected within our EMA approach but fully accounted for in CPA, which are
expected to be more prominent in 2d than in 3d. Combining the estimates from EMA and CPA, and using the measured mean size
of Au particles, we infer that in 2d Au-PMMA films the tunneling decay length is $\xi\approx 0.6-1.2$ nm.
\section{Conclusions}
\label{concl}
We have presented a study of the dependence of the electrical conductivity on the gold concentration in Au-implanted composite
thin films with different insulating matrices. We have evidenced that the film thickness may influence substantially the
behavior of the conductivity below the percolation threshold $x_c$, where tunneling between isolated gold particles dominates.
Specifically, we have shown that an effective medium theory predicts a crossover from two-dimensional to three-dimensional
tunneling behavior when the film thickness $h$ is larger than only about 3 times the mean Au particle size $D$. Au-implanted
PMMA films, which have thickness $h\approx D$, are thus strictly 2d systems in the tunneling regime, while Au-implanted alumina
films are expected to show 3d tunneling behavior, as for this system $h\approx 6D$. Interestingly, the dimensionless tunneling
decay length $\xi/D$ extracted from the tunneling conductivity data of Au-implanted alumina films coincides with previous
estimates of $\xi/D$ in co-sputtered Au-alumina thick film composites. We have also shown that above the percolation threshold
the measured conductivity of both Au-PMMA and Au-alumina follows a percolation power-law behavior with 2d transport exponent,
in accord with the theory of percolation in thin disordered films. Au-PMMA films have thus 2d character in the whole range of
Au concentrations, while the effective dimensionality of Au-implanted alumina films increases from 2d to 3d as the system crosses
over from the percolation regime to the tunneling regime.
These results and interpretations could find a firmer confirmation by measuring the conductivity behavior in films
with thicknesses ranging continuously from $h\approx D$ to $h\gg D$. From our model we expect that the tunneling conductivity
crosses over from two-dimensional to three-dimensional behaviors when the film thickness is about $2D$-$3D$, as inferred from the
behavior shown in Fig.~\ref{fig4}.
\acknowledgements
This work was supported by the Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de
S\~ao Paulo (FAPESP) and the Conselho Nacional de Desenvolvimento Cient\'ifico e
Tecnol\'ogico (CNPq), Brazil. We are grateful to the Institute of Ion Beam
Physics and Materials Research at the Forschungszentrum Dresden-Rossendorf,
Germany, for the TRIDYN-FZR computer simulation code.
\section{Introduction}
Sleeping well is an essential part of living healthy. The quality of sleep affects the health state of all individuals \cite{Nag2018Cross-modalEstimation} \cite{Nag2020HealthEstimation}. Sleep quality is most often understood as the feeling one has when they wake up in the morning. While it is important to feel refreshed after a night's sleep, the measure of feeling is inherently biased and does not capture much of what sleep research has attributed to sleep quality. Instead of personal perception, other measures of sleep quality require one to take tests that may entail an overnight stay in a sleep lab, known as polysomnography. This could easily bias the results of the sleep quality observed as a person might be uncomfortable in the unfamiliar environment of a sleep laboratory. Although polysomnography is regarded as the gold standard for measuring sleep quality, it is not viable to visit a sleep lab on a nightly basis. If a person would want to know how well they were sleeping, and more importantly, how to improve their sleep quality, they would need to educate themselves on advice from sleep experts. While this is a viable approach, it has a few flaws. For one, the advice given by sleep experts is based on studies done on the general population. These studies have the assumption that the general population shares similar sleeping habits, but it does not account for the cases in which certain people may have different responses to sleep expert advice. This lack of personalization leaves a person using a generalized tool, when they should be using a more sophisticated personalized approach towards lifestyle modification \cite{Nag2017HealthObservations}. 
For example, a person may know that exercising should help their sleep quality \cite{Kline2014TheImprovement.}, but that would not be able to tell them exactly how much they should exercise to improve their sleep quality significantly, or how varying exercise amount consequently changes its effect on sleep quality. These insights are required if we wish to start controlling our health with more precision using cybernetic principles and health navigation \cite{Nag2017CyberneticHealth} \cite{Nag2019ALife}.
Sleep is still not fully understood as a science but has many ties to human health. Studies have found sleep's bidirectional connection to immunity \cite{Besedovsky2019TheDisease}. Additional studies have shown the relationship between sleep quality and cardiovascular disease \cite{Cappuccio2017SleepDisease} \cite{Drager2017SleepScience} \cite{Javaheri2017InsomniaDisease}, obesity \cite{Ogilvie2017TheObesity} \cite{St-Onge2017SleepobesityTreatment}, and depression \cite{Steiger2019DepressionSleep}.
It is undeniable that sleep heavily factors into quality of life and has relations to many serious diseases that humans can face. From a computing perspective, these events that impact health and disease risk are known as interface events \cite{Pandey2020ContinuousRetrieval}. It is vital to deepen our understanding of sleep and the factors that affect it by continuously extracting relevant events in our life that modulate sleep. These investigations could lead us to profound discoveries in sleep research and human health. That being said, sleep is an incredibly complex activity that is not easy to understand. The quality and perceived quality can depend on a multitude of factors including mood, stress levels, quality of the mattress, sleeping partners, time of day, etc. These factors each have a more or less significant impact on the quality of sleep depending on the daily activities a person performs. For example, one might expect that after a person exercises for a majority of the day, their physical exhaustion would lead to a good night’s sleep. However, other factors such as heightened stress about the next day might negate the benefits of exercise on sleep quality. Understanding the factors relating to high sleep quality and being able to inform a person about how their lifestyle impacts their sleep quality could be integral to improving quality of life and reducing rates of the aforementioned diseases that are related to poor sleep quality.
We attempt to bridge the gap between the advice of sleep experts and personalized approaches by building a sleep model based on longitudinal data produced by the user. A key factor of our model is to not create general statements as most literature does. Instead, our goal is to show that we can harness the power of event mining to build a personalized model of sleep for a single individual. We record lifestyle choices, such as eating habits, exercise time, previous nights of sleep, and environmental factors via smartphones, wearables, and IoT devices. The model is engineered to provide feedback based on statistically significant causal relationships between daily activities and the coming night of sleep. By introducing this feedback, we can give individuals more understanding about their sleep by enabling them to see the direct effect their choices have on their sleep quality.
\section{Related Works}
Sleep monitoring and data collection for quality assessments is a growing area of research in the last several decades. This is true in both clinical-grade research applications and consumer-grade applications and products.
\subsection{Sleep Monitoring and Prediction Applications}
The current gold standard for understanding sleep quality is a study known as Polysomnography. This study requires a person to come into a sleep lab or have a sleep expert come to their sleeping location. It is primarily used to diagnose sleep disorders. The test records various metrics such as brain waves, oxygen levels in the blood, heart rate, breathing, and eye and leg movements \cite{PolysomnographyClinic}. This type of study requires a sleep expert and multiple medical sensors. The study's accuracy comes at the cost of requiring too many resources and too much equipment to be performed reliably every night.
Another popular measurement technique that is used to measure sleep quality is known as Actigraphy. Actigraphy measures sleep quality via a wearable (e.g., a watch). Throughout a sleep event the wearable measures movement with its onboard accelerometer. Actigraphy is a much more accessible form of measuring sleep quality as it only requires the user to remember to wear the device. Its simplicity, however, comes at the cost of accuracy, as sleep quality can only be inferred from movement measurements. For the purposes of our study, we used a combination of actigraphy and sound to record the night's sleep events.
Previous systems that have attempted to perform a similar task have used smartphone mic data in conjunction with machine learning to classify and predict sleep quality \cite{MinTossDetector}. Other studies have attempted to use actigraphy graphs and utilize Deep Learning to predict sleep quality from them as well \cite{Sathyanarayana2016SleepLearning}. There are even studies that forgo the idea of tracking sleep altogether and use factor graph models based on daily activity to predict sleep quality with 78\% accuracy \cite{Bai2012WillPhone}. While the predictions of these models are very powerful, what they lack, and what we want to improve on, is personal feedback that helps the user improve their sleep quality. There are also several sleep applications that use audio from the phone mic to record and report sleep quality \cite{SleepClock}\cite{SleepScoreExperts}\cite{SleepTechnologies}. These applications may try to incorporate recommendations for improving sleep quality, but often they neither reveal the intuition behind the quality calculation nor provide an in-depth review of how the user can go about improving their sleep. These applications are also missing a holistic approach to creating their sleep model, as they often only have access to steps taken throughout the day and the last night's sleep to make their recommendations. Our study incorporates multiple lifestyle factors that go beyond the scope of steps taken throughout the day in order to provide more insightful feedback about sleep.
\subsection{Sleep Quality Measure}
There is currently no agreed-upon true measure of what sleep quality is \cite{Ohayon2016National}. Sleep researchers have nevertheless attempted to create sleep quality measures. One such measure is the Pittsburgh Sleep Quality Index (PSQI) \cite{SmythMSN2008TheResearch}. Although the PSQI is a highly regarded measure in sleep quality research, it requires the user to fill out a survey about their sleep events, and responses to such questions can be biased. The measure also relies on the user being willing to, and able to reliably, fill out this survey. For our study, we wanted to use an unbiased measure of sleep; to do so we turned to one of the latest reviews of sleep quality done by sleep experts.
This review identified that the most important factors relating to sleep quality include Sleep Latency (the time it takes to fall asleep), the Number of Awakenings Greater than 5 Minutes, Sleep Efficiency (the ratio of minutes asleep to minutes in bed), and the Number of Minutes Awake throughout the night \cite{Ohayon2016National}. These four measures capture many important aspects of a night's sleep that experts believe are essential to attaining high sleep quality. Our model uses these four factors to reason about sleep quality. The review also identified thresholds for the sleep quality measures to further define what it means to get a good night's sleep (Table \ref{tab:SleepQualityThresh}).
\begin{table}[]
\begin{center}
\caption{Sleep Quality Measure and Event Thresholds}
\begin{tabular}{|l|l|l|}
\hline
\textbf{Variable} & \textbf{Classification Ranges} & \textbf{Event Name} \\ \hline
Sleep Latency & \begin{tabular}[c]{@{}l@{}}{[}0, 15{]}\\ (15, 30{]}\\ (30, $\infty$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Good\\ Average\\ Poor\end{tabular} \\ \hline
Awake Minutes & \begin{tabular}[c]{@{}l@{}}{[}0, 20{]}\\ (20, $\infty$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Good\\ Poor\end{tabular} \\ \hline
Awakenings \textgreater 5 mins & \begin{tabular}[c]{@{}l@{}}{[}0, 1{]}\\ (1, $\infty$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Good\\ Poor\end{tabular} \\ \hline
Sleep Efficiency & \begin{tabular}[c]{@{}l@{}}{[}0.85, 1.00{]}\\ {[}0, 0.85)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Good\\ Poor\end{tabular} \\ \hline
\end{tabular}
\label{tab:SleepQualityThresh}
\end{center}
\end{table}
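The thresholds of Table \ref{tab:SleepQualityThresh} translate directly into a classification routine. The sketch below is our own illustration (function and label names are ours, not the study's code):

```python
def classify_sleep(latency_min, awake_min, long_awakenings, efficiency):
    """Map the four sleep-quality measures to the event labels of
    Table I (thresholds from the consensus review)."""
    if latency_min <= 15:
        latency = "Good"
    elif latency_min <= 30:
        latency = "Average"
    else:
        latency = "Poor"
    return {
        "Sleep Latency": latency,
        "Awake Minutes": "Good" if awake_min <= 20 else "Poor",
        "Awakenings > 5 mins": "Good" if long_awakenings <= 1 else "Poor",
        "Sleep Efficiency": "Good" if efficiency >= 0.85 else "Poor",
    }

print(classify_sleep(12, 25, 0, 0.91))
```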
\subsection{Understanding the Effect of Lifestyle Activities on Sleep Quality}
Current literature has made many attempts to identify daily activities that affect a person's sleep quality. Studies have shown that sleep and exercise are related, and greater physical activity levels can lead to better sleep latency \cite{Yang2012ExerciseReview} \cite{Kline2014TheImprovement.}\cite{Kelley2017ExerciseMeta-analyses}. A systematic review has also shown that dietary patterns and the types of food eaten throughout the day lead to better sleep quality and duration \cite{St-Onge2016EffectsQuality}. The environment (temperature and humidity) is also important to our sleep duration and quality \cite{Troynikov2018SleepReview}. From these studies and many more, it is clear that choices made throughout the day have an effect on the quality of the next night's sleep.
The literature can tell us, and has told us, that certain activities have effects on sleep quality, but it reaches these conclusions in controlled environments. The controlled environments are used to eliminate possible confounding variables, which is helpful in showing relationships between daily activities and sleep quality for the general population. While this control is useful, it fails to take into account the complexity of daily life, in which confounding variables cannot be controlled. It becomes much harder to explore the effects that all daily activities would have on sleep quality. Our model uses an event mining approach to help us perform N-of-1 experiments so we can identify causal relationships between daytime activities and sleep quality.
\section{Event Mining}
Our main goal is to computationally model the relationships between sleep and daily activities. To do this modeling, we utilize an analytic technique known as event mining \cite{Pandey2018UbiquitousHealth}. Event mining is used to find relationships between events present in data. In this work, we attempt to use event mining to find patterns in a user's longitudinal sleep and lifestyle data. Event mining, combined with causal inference principles, allows us to run N-of-1 experiments using a person's data streams. This approach can help us describe relationships between daily activities and create an explainable model of a person's sleep. Event mining produces rules of the form $Event_i \rightarrow^C Event_o$, where $Event_i$ denotes the input event, $Event_o$ the output event, and $C$ represents the confounding variables and/or the temporal conditions that may affect the relationship between $Event_i$ and $Event_o$. For our experiments, input events will be various daily activities, such as exercise, feeding times, temperature, etc. The output events will be the various sleep quality measures (latency, efficiency, awakenings, and awake minutes). An example of a specific relationship would be $\textit{Minutes Awake Between Sleep Events} \rightarrow^{\textit{Previous Night Awake Minutes}} \textit{Awake Minutes}$. This relationship is explored further in the Experiments section.
Hypothesis verification is the process of verifying trends under different scenarios; the candidate patterns are traditionally drawn from the literature. Once we have defined our relations between input and output variables, we attempt to identify the confounding variables that significantly affect the relationship. After finding confounding variables, we can perform a technique called contextual matching. This step finds data points that account for similar cases of the confounding variable. From there, we can verify the validity of the relationship and also measure the impact of an input event on the corresponding output event.
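Contextual matching can be sketched as follows (a minimal illustration; the record schema and field names are hypothetical, not the study's actual data format):

```python
from collections import defaultdict
from statistics import mean

def contextual_effect(records, input_event, output_var, confounder):
    """Estimate the effect of an input event on an output variable while
    holding a confounder fixed, by stratifying on the confounder.
    `records` is a list of dicts, e.g.
    {"exercise": "Good", "latency": 14, "prev_night": "Poor"}."""
    strata = defaultdict(lambda: defaultdict(list))
    for r in records:
        strata[r[confounder]][r[input_event]].append(r[output_var])
    # within each confounder level, compare mean outcomes across input-event values
    return {level: {ev: mean(vals) for ev, vals in groups.items()}
            for level, groups in strata.items()}
```

Within each stratum of the confounder, the difference between the mean outcomes for different input-event values estimates the impact of the input event with the confounder held fixed.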
\subsection{Other Techniques}
Another technique we could have used to create a sleep model would be to fit various machine learning models to our data. Machine learning could indeed create a very powerful predictive model. An area where this approach can be lacking, however, is relationship verification. In a complicated model, it becomes difficult to keep track of how input variables relate to each other and how they eventually affect the outcome of the model. Some advantages of event mining are that it allows us to efficiently test and verify relationships between various confounding variables and to directly measure the impact of the input event on the output event.
\section{Methods}
\begin{figure*}[htbp]
\centerline{\includegraphics[scale=0.45]{CurrentModel2.png}}
\caption{Modeling Pipeline: This diagram describes the pipeline with which we take in the data from all of our sources, combine it into a unified one via temporal relations and then pass it through an event mining process to verify insights and generate the effects that input events have on output events.}
\label{fig:model}
\end{figure*}
This section will explore our data and apply it to create a model that can give a user personalized feedback about their sleep quality. Figure \ref{fig:model} shows the basic outline of the process we will go through to perform our experiments.
\subsection{Data Set}
The data set we used was made by temporally merging multiple data sources in order to help us better understand the connection between input and output events with regards to sleep quality.
In order to perform a holistic review of a person's daily activities and their effects on their sleep, we recorded the following pieces of data over the course of multiple years. Daily data were gathered from four main sources: Sleep Cycle, Apple Health Kit, Strava, and environmental sensors. The devices used to create these data sets include the user's Garmin Fenix 5 smartwatch, their smartphone, and an IoT sensor. Sleep Cycle was primarily used to keep track of sleep events. Apple Health Kit was used to help compile sleep quality measures recorded by the Garmin smartwatch, daily step counts, and floors climbed. The actigraphy measures of the smartwatch were combined with the sound recordings of Sleep Cycle in order to create sleep quality measures. Strava was used to keep track of exercise events. Environmental sensors were used to keep track of feeding times (phone camera metadata), temperature and humidity, and the start of sleep events (IoT sensor). All of these data sources were then temporally matched in order to accurately record lifestyle events that took place throughout the day.
Additionally, some preprocessing was done on the data to ensure that the data set consisted of consecutive days in terms of sleep recordings. This ensures that we would be able to accurately gather information about the daily activities that would then directly affect the next night's sleep.
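The consecutive-days requirement can be enforced by keeping only an unbroken run of recorded dates. The sketch below is our illustration of such a filter, not the paper's actual preprocessing code; the function name and the choice of retaining the single longest run are assumptions.

```python
from datetime import date, timedelta

def longest_consecutive_run(days):
    """Keep the longest stretch of consecutive calendar days.

    `days` is an iterable of datetime.date objects, one per night with a
    sleep recording.  Restricting to an unbroken run ensures every day's
    activities can be paired with the directly following night's sleep.
    """
    days = sorted(set(days))
    best, start = [], 0
    for i in range(1, len(days) + 1):
        # a run ends at the last element or at a gap larger than one day
        if i == len(days) or days[i] - days[i - 1] != timedelta(days=1):
            if i - start > len(best):
                best = days[start:i]
            start = i
    return best
```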
In order to make the data easier to work with in terms of an event mining framework, we set up classification ranges for our important data points. The classifications in Table \ref{tab:SleepQualityThresh} and Table \ref{tab:my-table2} are both used to transform our data from a continuous to discrete format.
\begin{table}[htbp]
\begin{center}
\caption{Lifestyle Factors and Event Thresholds}
\begin{tabular}{|c|l|l|}
\hline
\textbf{Variables} & \multicolumn{1}{c|}{\textbf{Classification Ranges}} & \textbf{Event Name} \\ \hline
\begin{tabular}[c]{@{}c@{}}Exercise Minutes\\ Per Day\end{tabular} & \begin{tabular}[c]{@{}l@{}}{[}0{]}\\ (0, 50{]}\\ (50, 150{]}\\ (150, $\infty$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}None\\ Poor\\ Average\\ Good\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Exercise Minutes \\ Per Week\end{tabular} & \begin{tabular}[c]{@{}l@{}}{[}0, 150{]}\\ (150, 300{]}\\ (300, $\infty$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Poor\\ Average\\ Good\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Interval Between\\ Eating and Sleeping\end{tabular} & \begin{tabular}[c]{@{}l@{}}{[}0{]}\\ (0, 180{]}\\ (180, $\infty$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Missing\\ Poor\\ Good\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Minutes Awake\\ Between Sleep Events\end{tabular} & \begin{tabular}[c]{@{}l@{}}{[}0, 900{]}\\ (900, 1020{]}\\ (1020, $\infty$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Poor\\ Average\\ Good\end{tabular} \\ \hline
Starting Temperature & \begin{tabular}[c]{@{}l@{}}{[}0, 60{]}\\ (60, 67{]}\\ (67, $\infty$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Cold\\ Comfortable\\ Warm\end{tabular} \\ \hline
Starting Humidity & \begin{tabular}[c]{@{}l@{}}{[}0, 30{]}\\ (30, 50{]}\\ (50, 100{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}Low\\ Ideal\\ High\end{tabular} \\ \hline
\end{tabular}
\label{tab:my-table2}
\end{center}
\end{table}
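In practice, the discretization in Table \ref{tab:my-table2} amounts to binning each continuous measurement into half-open intervals. A minimal sketch (not the paper's code; the bin edges are transcribed from the ``Exercise Minutes Per Day'' row) could look as follows.

```python
import bisect

# Bin edges and labels transcribed from the "Exercise Minutes Per Day" row:
# [0] -> None, (0, 50] -> Poor, (50, 150] -> Average, (150, inf) -> Good.
EXERCISE_DAY_EDGES = [0, 50, 150]
EXERCISE_DAY_LABELS = ["None", "Poor", "Average", "Good"]

def classify(value, edges=EXERCISE_DAY_EDGES, labels=EXERCISE_DAY_LABELS):
    """Map a continuous measurement to its discrete event label.

    Bins are half-open on the left, (edges[i-1], edges[i]], matching the
    classification ranges in the table; anything above the last edge
    falls into the final open-ended bin.
    """
    return labels[bisect.bisect_left(edges, value)]
```

The same helper applies to every row of the table once its edges and labels are supplied.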
\subsection{Experiments}
For the experiments, there are ten input/confounding events that can be used: Previous Night's Sleep Quality Measures (4 categories), Exercise Minutes in the Day, Exercise Minutes Per Week, Interval Between Eating and Sleeping, Minutes Awake Between Sleep Events, Starting Temperature, and Starting Humidity. The possible output events are sleep quality measures (Table \ref{tab:SleepQualityThresh}). Since there are many experiments that we can perform with our possible input, output, and confounding variable tuples, we will go over one such experiment in close detail to explain the process. Afterward, we display figures containing all of our results and go over the significant observations we were able to find.
Our experiments have two stages. The first stage is designed to find any confounding variables that may affect the relation between an input event and an output event. The second stage is designed to find the average numerical effect that a certain $Event_i$ will have on $Event_o$, regardless of the confounding variables. Both stages use Welch's t-tests to determine statistical significance. For all experiments, a p-value of 0.05 will be used as the threshold for statistical significance. In this section, we will specifically explore analyzing $Event_i$ = Minutes Awake Between Sleep Events, $Event_o$ = Awake Minutes, and $C$ = Prev Night Awakening Minutes. Since Prev Night Awakening Minutes has two categories, we will condition on each category and then analyze how doing so changes the relation between $Event_i$ and $Event_o$. Figure \ref{fig:baselineVScond} shows the baseline distribution of $Event_i$ vs. $Event_o$ in the form of a heat map (left); the conditioned heat maps are shown as well (right).
\begin{figure*}[htbp]
\centerline{\includegraphics[width=0.95\linewidth]{DistributionComparisonv2Labeled.png}}
\caption{This figure shows the Baseline Distribution of Minutes Between Sleep Events and Awake Minutes (left) and then shows how the distribution changes when it is conditioned on each of the categories of Prev Night Awake Mins (right).}
\label{fig:baselineVScond}
\end{figure*}
Based on Figure \ref{fig:baselineVScond}, we can see that when the previous night had a poor sleep quality measure for awake minutes, the distribution of the data changes quite drastically when the person was awake for 17 hours or less ([0, 1020] minutes). To verify this, we can perform a t-test between the baseline distribution and the distribution conditioned on the confounding variable. A summary of the t-tests performed can be seen in Table \ref{tab:PrevAwakeCombo}. As suspected, a poor quality measure for last night's Awake Minutes significantly changes the effect that $Event_i$ has on $Event_o$, especially when the individual has been awake for less than 17 hours. Interestingly enough, the distributions, and the lack of significance from the t-test, seem to suggest that whatever relationship exists between staying awake for greater than 17 hours and $Event_o$ is strong enough to withstand the influence of having poor sleep last night. An additional insight we can gain from this t-test is that we can confirm that last night's sleep event has a strong influence on the relationship between $Event_i$ and $Event_o$ in two significant cases. This form of analysis gives us the ability to quickly identify any confounding variables that might affect a causal relationship between a pair of input and output events.
The next step of this experiment was to perform a similar t-test against the distributions when conditioning on having a good Previous Night Awake Minutes. A summary of the t-test results can be seen in Table \ref{tab:PrevAwakeCombo}. These t-tests further confirm that last night's sleep quality is an important confounding variable with regard to the relationship between $Event_i$ and $Event_o$. We can see via the t-tests that a good previous night's sleep seems to boost the chance of getting a good night's sleep the next evening. This insight actually holds for most other input-output event pairs as well. A high-level overview of all confounding variables that we found to significantly affect the relationship between various input-output event pairs can be found in the appendix (Figures \ref{fig:LatPart1} - \ref{fig:EffPart2}). These figures reduce the complexity of the results by only displaying the p-value of the most significant t-test that changes the distribution between input and output events. For example, in the experiment we ran above, two t-tests were run with respect to the baseline distribution of being awake between 15 to 17 hours and the conditioned ones. In any of the figures mentioned, the p-value displayed would be the more significant of those two tests. In general, the p-value displayed represents the most significant effect that the confounding variable could have on the relationship between $Event_i$ and $Event_o$. Additionally, it is important to note that, in these figures, any colored square represents a significant relationship; the larger the square, the more significant the relationship. Before discussing other interesting insights that can be gathered from these figures, we will go through the second stage of our experiments using the same input and output events.
\begin{table}[htbp]
\caption{Summary of t-test results between Baseline Distribution of Minutes Between Sleep Events vs. Minutes Awake and the same distribution conditioned on quality of Awake Minutes the previous night.}
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Condition} & \textbf{Distribution} & \textbf{\begin{tabular}[c]{@{}l@{}}Mean Difference \\From Base \\ When Conditioned \\on Previous\\ Night Awake \\Minutes\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Significant\\ According to\\ p = 0.05?\end{tabular}} \\ \hline
\begin{tabular}[c]{@{}l@{}}Good \\Previous Night\\ Awake Minutes\end{tabular} & & & \\ \hline
& (1020, $\infty$) & +1.45 Minutes & No \\ \hline
& (900, 1020{]} & -5.23 Minutes & Yes \\ \hline
& {[}0, 900{]} & -4.30 Minutes & Yes \\ \hline
\begin{tabular}[c]{@{}l@{}}Poor \\Previous Night \\ Awake Minutes\end{tabular} & & & \\ \hline
& (1020, $\infty$) & -3.78 Minutes & No \\ \hline
& (900, 1020{]} & +8.84 Minutes & Yes \\ \hline
& {[}0, 900{]} & +8.71 Minutes & Yes \\ \hline
\end{tabular}
\label{tab:PrevAwakeCombo}
\end{table}
\begin{figure*}[htbp]
\centerline{\includegraphics[width=0.95\linewidth]{Average_Effect_of_Input_Events_on_Output_Events.png}}
\caption{Average Effects that each input event has on the output event when compared to each input event's base category. If a metric is 0 then no significant relations were found.}
\label{fig:AvgEffects}
\end{figure*}
The second stage is focused on presenting the user with feedback about how variations in their daily activities specifically affect their sleep quality on average. To begin, we will once again refer to Figure \ref{fig:baselineVScond}, specifically the bottom right heat map. As can be seen from the figure, the distribution for awakening when sleeping after being awake for greater than 17 hours is drastically different from sleeping after staying awake for less than 17 hours. For this experiment, we treat the first category of $Event_i$ as the baseline and then compare it to the other categories present in $Event_i$. If the t-test shows statistical significance, then we know that under the condition of $C$, there is a significant difference between the distribution of the base category of $Event_i$ and that of the tested category. For a given $Event_i \rightarrow^C Event_o$ combination, we perform these t-tests between the base category and the rest of the categories in $Event_i$ in order to find out how, on average, changing the input of $Event_i$ affects the outcome of $Event_o$. If we similarly carried out this experiment for all nine other confounding events and then averaged over all of the significant mean differences, we would find that, on average, staying awake for 15 to 17 hours, compared to the baseline of staying awake greater than 17 hours, would increase the minutes awake during a sleep event by 18 minutes. Similarly, when we compare the baseline of staying awake greater than 17 hours to staying awake less than 15 hours, we see that, on average, the minutes awake during the sleep event increase by 15 minutes. Since both test conditions show that, compared to the baseline, staying awake less than 17 hours detracts from sleep quality, we can say that staying awake greater than 17 hours seems to improve sleep quality.
The results of these experiments can be inspected in Figure \ref{fig:AvgEffects}. One interesting observation is that if we hold $Event_i$ fixed and analyze its average effect on output events, we can see that a comfortable temperature (60--67~$^{\circ}$F) seems to improve every sleep quality measure except for sleep latency. This is a notable observation, as it shows that not all quality measures are correlated with each other and that an improvement in one given a certain input event does not necessarily equate to an improvement in all other sleep quality measures. Another example of this phenomenon can be seen with the input event counting the number of awakenings greater than 5 minutes during the previous night. Compared to the baseline of waking up more than once for greater than 5 minutes, getting a good sleep quality measure seems to improve only 3 out of the 4 sleep quality measures for the next night. Once again, sleep latency deteriorates while all other sleep quality measures improve.
Another interesting metric that should be noted is that exercise improves sleep latency the most. On average, exercising a lot will reduce sleep latency by 10.5 minutes, while even a small workout will reduce sleep latency by an average of 8 minutes. This model can now give evidence-backed feedback to its user about how daily activities affect their sleep quality and even predict how much certain lifestyle changes will transform their sleep quality.
If we further analyze the figures in the appendix, we can also see that the previous night's sleep is generally a significant confounding variable to consider in the sleep model. Another strong confounding variable is the amount of time elapsed between sleep events; longer times spent awake seem to yield good results with regard to sleep quality. With specific regard to sleep efficiency, it is interesting to note that starting humidity and temperature seem to be very important confounding variables. Additionally, we can see that exercise is not as significant a confounding variable as the other daily activities, which could indicate that physical exhaustion might not be an important variable to consider when analyzing sleep quality.
\section{Conclusion}
Throughout this paper, we have shown the need for and built up a sleep model that utilizes event mining to provide useful feedback about the relationships between lifestyle factors and sleep quality. With enough data, this model can be very powerful and give people control over their sleeping habits in a way that has not been previously possible. This model has the power to identify various relationships between lifestyle choices and sleep quality, as well as the variables that could strengthen or weaken those relationships.
Using our data and our event mining approach, we were able to identify various factors and how they affect the quality of sleep. For example, one such finding showed that exercising more than 150 minutes on a day that a person has had a good previous night's sleep can, on average, decrease the next night's sleep latency by about 10.5 minutes. Further analysis can show that the starting temperature for the sleep event can significantly strengthen the relationship between exercise and sleep, whereas having poor sleep the night before can detract from the beneficial effects of exercising. While these findings might not generalize well to the population, they do not have to. The power of the model comes from the fact that it is unique to the person. That being said, the model is made such that it can fit another person's data just as easily. Given another person's data set, the model can perform similar analyses and find different insights into how their daily activities affect their sleep quality.
\section{Future Directions}
Although this model incorporates many useful data sources and provides insights about them, it is also a proof of concept. The input events were all based on variables that the literature had already identified as important to sleep quality. While we were able to introduce further levels of complexity to the model to provide more insights than the literature does, there are also many more input events that we would need to include to create a comprehensive sleep model. The biggest missing piece from our current model is an input event that deals with stress and anxiety. The literature has found that anxiety levels do indeed affect sleep quality \cite{Gould2017AssociationSleepiness}. Intuitively, it makes sense that heightened anxiety would lead to difficulty falling asleep and could even cause more awakenings during a sleep event. Additionally, while it is possible to analyze this data and present results, a lot more work would need to be done before a user could plug in various data sources and subsequently get a succinct presentation of how their lifestyle affects their sleep quality. Moving forward, we hope to incorporate even more input events, extending to personal events, emotions, caloric intake, and heart rate. With these inputs, we would hopefully be able to find more insights and even identify lifestyle factors affecting sleep quality that have not yet been considered in the literature. That being said, the model is built around a plug-and-play framework, so incorporating these and any other future features should be straightforward.
\bibliographystyle{plain}
\section{Introduction}
Without dynamical gravity, correlation functions measured in black hole backgrounds decay all the way to zero at late times. However, as pointed out initially by Maldacena \cite{Maldacena:2001kr}, in any full theory of quantum gravity this should not be possible. This is because black holes themselves are finite entropy quantum systems.
This predicted behavior for late time correlators was later sharpened by \cite{Cotler:2016fpe}. They considered a toy model of the thermal two point function, known as the spectral form factor ($H$ is the Hamiltonian of the holographically dual quantum system)
\begin{equation}
Z(\beta+\i T,\beta-\i T)=\Tr(e^{-(\beta+\i T) H})\Tr(e^{-(\beta-\i T) H})=\sum_{i=1}^{\text{dim}(H)}\sum_{j=1}^{\text{dim}(H)}e^{-\beta(E_i+E_j)}e^{-\i T(E_i-E_j)}\,,\label{11}
\end{equation}
and argued that its late-time behavior is universally given by a ramp-and-plateau structure (on time scales $\sim e^{S}$ with $S$ the black hole entropy)\footnote{Actually, this simple ramp-and-plateau structure is only visible after some time-averaging, or other types of averaging \cite{Cotler:2016fpe}. We will exclusively be concerned with the gravitational interpretation of such smeared quantities in this work.}
\begin{equation}
Z(\beta+\i T,\beta-\i T)=\int_{0}^{\infty}\d E\,e^{-2\beta E}\,\text{min}(\rho(E),T/2\pi)\,,\quad \rho(E)=e^{S(E)}\,.\label{1.2}
\end{equation}
In particular for $T\to \infty$ this goes to a \emph{non-zero} constant $Z(2\beta)$. That this does not decay all the way to zero indeed follows from the fact that black holes are discrete (or finite entropy) quantum systems: the value $Z(2\beta)$ arises from the terms $i=j$ in the sum \eqref{11}. If we instead had a continuous spectrum, those terms would have measure zero, so the correlator would indeed decay to zero.
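This diagonal mechanism is easy to see numerically. The sketch below (our illustration, with an arbitrary toy spectrum) averages the spectral form factor \eqref{11} over a time window $[0,T_\text{max}]$: the window average of $e^{-\i T(E_i-E_j)}$ is $\sin(T_\text{max}(E_i-E_j))/(T_\text{max}(E_i-E_j))$, so off-diagonal terms are suppressed by $1/T_\text{max}$ while the $i=j$ terms survive and give $Z(2\beta)$.

```python
import math

def time_averaged_sff(energies, beta, t_max):
    """Average of Z(beta+iT) Z(beta-iT) over T in [0, t_max] for a
    discrete spectrum.

    The continuum average of exp(-iT(Ei-Ej)) over the window is
    sin(x)/x with x = t_max (Ei - Ej): diagonal terms (x = 0)
    contribute fully, off-diagonal terms are suppressed by 1/t_max.
    """
    def sinc(x):
        return 1.0 if x == 0.0 else math.sin(x) / x
    return sum(math.exp(-beta * (ei + ej)) * sinc(t_max * (ei - ej))
               for ei in energies for ej in energies)
```

As the window grows, the average approaches the diagonal sum $Z(2\beta)=\sum_i e^{-2\beta E_i}$.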
It is interesting to ask how the bulk gravitational path integral reproduces this ramp-and-plateau structure: one, because these features are universal; and two, because the plateau is a signature of microstructure (and hence unitarity) in gravity. In gravity, one computes the spectral form factor by path integrating over all geometries with two asymptotically AdS boundaries (with appropriate boundary conditions implementing the $\beta\pm \i T$). It was found \cite{Saad:2018bqo} that the linear ramp (the $T/2\pi$ piece in \eqref{1.2}) is explained by wormhole geometries connecting both boundaries\footnote{Similarly, wormholes were found to be important for understanding late-time correlators \cite{Saad:2019pqd,Blommaert:2019hjr,Iliesiu:2021ari, Kruthoff:2022voq}, the Page curve \cite{Penington:2019kki,Almheiri:2019qdq}, the fate of late-time infalling observers \cite{Stanford:2022fdt} and more.}
\begin{equation}
Z(\beta+\i T,\beta-\i T)_\text{conn}\supset \quad \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7]
\pgftext{\includegraphics[scale=1]{euclworm1.pdf}} at (0,0);
\draw (-0.04, -1.5) node {$1$ wormhole};
\draw (-2.1, 2.2) node {$\beta+\i T$};
\draw (2.1, 2.2) node {$\beta-\i T$};
\end{tikzpicture}\quad.
\end{equation}
The origin of the plateau seems more mysterious, and discussions have been limited largely to scattered comments about D-brane effects in \cite{Saad:2019lba,Blommaert:2019wfy,Altland:2020ccq,Altland:2022xqx}, which lack a truly geometric interpretation.\footnote{In particular, the plateau can be understood as due to a second saddle in a universe field theory description of gravity \cite{Haake:1315494,Altland:2020ccq,Post:2022dfi,Altland:2022xqx,Anous:2020lka}, but this second saddle and the perturbations around it are less geometric; they cannot be understood using the gravitational path integral (as far as we know), which describes perturbations around the first saddle.}
In this work we consider generic models of AdS$_2$ dilaton gravity, of the type studied in \cite{Witten:2020wvy,Maxfield:2020ale,Almheiri:2014cka}
\begin{equation}
-I=\S\chi +\frac{1}{2}\int \d^2 x\sqrt{g} (\Phi R + W(\Phi))\,.\label{14}
\end{equation}
Extending results of Okuyama-Sakai \cite{Okuyama:2020ncd} (for Airy gravity), and Saad-Shenker-Stanford-Yang-Yao \cite{workinprogressothergroup} (for JT gravity), we show in \textbf{section \ref{sect:eucl}} that in the double scaling limit where $T\to\infty$ and $e^{\S}\to\infty$ with the combination $Te^{-\S}$ held fixed, the plateau simply follows from the perturbative sum over all genus $g$ wormholes
\begin{equation}
Z(\beta+\i T,\beta-\i T)_\text{conn}=\sum_{g=0}^\infty e^{-2g\S}\,Z_g(\beta+\i T,\beta-\i T)_\text{conn}\,,
\end{equation}
and \emph{no} non-perturbative (non-geometric) D-brane corrections are required. More precisely, the genus $g$ wormhole amplitude is found to grow \emph{universally} as $T^{2g+1}$
\begin{equation}
Z_g(\beta+\i T,\beta-\i T)_\text{conn}=\quad \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7]
\pgftext{\includegraphics[scale=1]{euclwormmany.pdf}} at (0,0);
\draw (-0.04, -2) node {genus $g$ wormholes};
\draw (-2.65, 2.2) node {$\beta+\i T$};
\draw (2.65, 2.2) node {$\beta-\i T$};
\draw (0, 0) node {$\dots$};
\end{tikzpicture}\quad = P_{g-1}(\beta)\,T^{2g+1}\,,\label{universalseries}
\end{equation}
with $P_{g-1}(\beta)$ a theory-specific degree $g-1$ polynomial. The sum over genus then reproduces the plateau
\begin{equation}
\lim_{T\to\infty}\sum_{g=0}^\infty P_{g-1}(\beta)\,T^{2g+1}e^{-2 g \S}=Z(2\beta)\,.
\end{equation}
For the Airy case of Okuyama-Sakai \cite{Okuyama:2020ncd} this is visible in Fig. \ref{fig:airyconvergenceintro}; other cases are presented in \textbf{appendix \ref{app:a}}.
\begin{figure}[t]
\centering
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7]
\pgftext{\includegraphics[scale=1]{airyconvergencelog.pdf}} at (0,0);
\draw (-0.2, 3) node {$2$};
\draw (4.5, 3) node {$0$};
\draw (1.9, 3) node {$12$};
\draw (-2.1, -2.5) node {$1$};
\draw (-0.8, -2.5) node {$3$};
\draw (0.8, -2.5) node {$7$};
\draw (4.5, 1.2) node {exact};
\draw (5, -2.5) node {$T$};
\end{tikzpicture}\quad
\caption{The double scaling limit of the sum over genus $g$ wormholes in the Airy model, up to $g=g_\text{max}$ (numbers shown) with $e^{\S}=10$ and $\beta=1/2$.}
\label{fig:airyconvergenceintro}
\end{figure}
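The convergence shown in the figure is easy to reproduce numerically. For the Airy spectrum $\rho_0(E)\propto\sqrt{E}$ one finds that the genus-$g$ term of the $\tau$-scaled series is $(-2\beta)^g\,\tau^{2g+1}/(g!\,(2g+1))$ up to the overall prefactor $e^{\S}/4\pi\beta$, and the resummed answer is an error function. The sketch below is our illustration (with parameters chosen to match the figure, $e^{\S}=10$ and $\beta=1/2$), not code from the references.

```python
import math

def airy_sff_partial(T, beta, S, g_max):
    """Partial genus sum for the tau-scaled Airy spectral form factor:
    (e^S / 4 pi beta) * sum_{g <= g_max} (-2 beta)^g tau^(2g+1) / (g! (2g+1)),
    with tau = T e^{-S}.  The g-th term scales as T^(2g+1) e^{-2gS}."""
    tau = T * math.exp(-S)
    series = sum((-2.0 * beta) ** g * tau ** (2 * g + 1)
                 / (math.factorial(g) * (2 * g + 1))
                 for g in range(g_max + 1))
    return math.exp(S) / (4 * math.pi * beta) * series

def airy_sff_exact(T, beta, S):
    """Resummed answer: the same prefactor times the integral of
    exp(-2 beta f^2) for f from 0 to tau, an error function that
    plateaus as T -> infinity."""
    tau = T * math.exp(-S)
    integral = math.sqrt(math.pi / (8 * beta)) * math.erf(math.sqrt(2 * beta) * tau)
    return math.exp(S) / (4 * math.pi * beta) * integral
```

The genus-zero truncation reproduces the linear ramp $T/4\pi\beta$, while the full sum saturates at the plateau.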
The focus of our work is explaining the universal growth $T^{2g+1}$ of the genus $g$ wormhole amplitudes. For fixed theories and fixed $g$ this behavior can be checked manually: genus $g$ amplitudes in the theories \eqref{14} can be computed \cite{Maxfield:2020ale,Witten:2020wvy} using topological recursion \cite{Eynard:2007fi,Eynard:2007kz,Saad:2019lba,Stanford:2019vob}. In particular, topological recursion produces an even ``volume'' polynomial $V_{g,2}(b_1,b_2)$ of degree $6g-2$. As discussed in \textbf{section \ref{sect:eucl}}, based on dimensional analysis one would actually expect a faster growth rate
\begin{equation}
Z_g(\beta+\i T,\beta-\i T)_\text{conn}\overset{?}{=}\#(g)\,T^{3g}\,.
\end{equation}
The case-by-case truncation of the maximal power of $T$ to $T^{2g+1}$ seems miraculous. It depends on very nontrivial cancellations in large sums involving the expansion coefficients of these polynomial volumes $V_{g,2}(b_1,b_2)$. To derive the universality of $T^{2g+1}$ \eqref{universalseries} using the gravitational path integral, we thus need to prove that these cancellations in the volumes happen at any genus $g$, and for any dilaton gravity theory \eqref{14}.
In \textbf{section \ref{sect:intersect}} and \textbf{section \ref{sect:universalcanc}} we derive these cancellations using the relation between our gravity models \eqref{14} and an infinite set of differential equations, known as the KdV hierarchy. In particular, one can use these differential equations to prove \cite{Liu:2007ip,eynard2021natural} a set of cancellations in sums of intersection numbers $\average{\tau_{d_1}\dots \tau_{d_n}}$, which are essentially integrals of products of specific two-forms over the moduli space of Riemann surfaces. We then prove that those cancellations in intersection numbers imply the cancellations that we need in the volumes, for all theories \eqref{14}. One key ingredient is a duality between exponentials of operators $\tau_k$ and changes in the dilaton gravity potential, which we find in \textbf{section \ref{sect:interpretation}}
\begin{equation}
\exp\bigg(\sum_{k=2}^\infty t_k \tau_k\bigg)\quad \Leftrightarrow \quad \exp\bigg(\int\d^2 x\sqrt{g}\sum_{k=2}^\infty \frac{(-1)^k }{(2k-1)!!}\,t_k\,\Phi^{2k}\,e^{-2\pi\Phi} \bigg)\,.
\end{equation}
This is an application of the fact that intersection numbers can be viewed as correlators of cusp defects in JT gravity, which we also derive. This also leads to a new understanding of the KdV equations directly in terms of dilaton gravity variables: they express how observables change when one alters certain parameters in the gravitational action.
In \textbf{section \ref{sect:intersect}} we give a gentle introduction to intersection numbers and their relation with gravity, with more intuitive comments gathered in \textbf{appendix \ref{app:c}}, as we did not want to assume that the readers were familiar with these more mathematical constructions.
\subsection*{Relation with other work}
Part of \textbf{section \ref{sect:eucl}} is based upon discussions with Saad, Shenker, Stanford and Yang. In particular, the observation that for Airy and JT gravity equation \eqref{latetime} and \eqref{27} gives the exact spectral form factor in the limit $T\to\infty$ and $e^{\S}\to\infty$ with $Te^{-\S}$ fixed, and that this leads to a series with a non-zero radius of convergence (matched by topological recursion) which gives the plateau \eqref{airyexact} and \eqref{JTexact}, is due to Saad, Stanford, Yang and Yao \cite{workinprogressothergroup}. The fact that equation \eqref{latetime} implies cancellations in Weil-Petersson volumes for JT gravity was independently observed and investigated by \cite{workinprogressregensburg}.
\section{The plateau from the perturbative sum over wormhole geometries}\label{sect:eucl}
In this section we derive a simple integral representation of the spectral form factor in the limit where $T$ and $e^{\S}$ go to infinity with their ratio held fixed (the $\tau$-scaling limit \cite{Okuyama:2018gfr,Okuyama:2020ncd}), as advertised in the introduction. This integral admits a power series in $T^{2g+1}$, which is reproduced by computing the genus $g$ wormhole amplitudes in gravity. In this $\tau$-scaling limit, the sum over genus converges to the plateau, without the need for non-perturbative (in $e^{\S}$) corrections.
\subsection{Powers of time}\label{sect:folding}
The connected spectral form factor is related to the connected spectral correlation function via a Laplace transform
\begin{equation}
Z(\beta+\i T,\beta-\i T)_\text{conn}=\int_{-\infty}^{+\infty}\d E_1\int_{-\infty}^{+\infty}\d E_2\,e^{-\beta(E_1+E_2)}e^{\i T(E_1-E_2)}\,\rho(E_1,E_2)_\text{conn}\,.\label{basic}
\end{equation}
We will consider 2d dilaton gravities with an exact dual description as random matrix theories \cite{Saad:2019lba,Witten:2020wvy,Maxfield:2020ale,Mertens:2020hbs}. In the full matrix integral, $\rho(E_1,E_2)_\text{conn}$ is a smooth function of $E_1$ and $E_2$, except for a contact term. Random matrix theory implies that in the limit $T\to\infty$ and $e^{\S}\to\infty$ with $\tau = Te^{-\S}$ fixed \cite{Haake:1315494,Okuyama:2018gfr,Okuyama:2020ncd,workinprogressothergroup} (the $\tau$-scaling limit), this integral simplifies to
\begin{equation}
Z(\beta+\i T,\beta-\i T)_\text{conn}=\int_{-\infty}^{+\infty}d E_1\int_{-\infty}^{+\infty}d E_2\,e^{-\beta(E_1+E_2)}e^{i T(E_1-E_2)}\,\rho(E_1,E_2)_\text{conn eff}\,,\label{latetime}
\end{equation}
where the effective spectral correlation features the sine kernel \cite{mehta2004random}\footnote{The generic proof of this fact goes through Efetov's non-linear sigma model description of random matrix theory \cite{efetov1983supersymmetry,Haake:1315494}, see also recently \cite{Altland:2020ccq,Belin:2021ibv,Altland:2022xqx,Blommaert:2021fob}. We can alternatively use D-brane calculus as explained in \cite{Saad:2019lba,Blommaert:2019wfy}.}
\begin{equation}
\rho(E_1,E_2)_\text{conn eff}=\delta(E_1-E_2)\rho_0(E)-\frac{\sin^2(\pi\rho_0(E)(E_1-E_2))}{\pi^2(E_1-E_2)^2}\,.\label{sinekernel}
\end{equation}
Changing variables to $E_1-E_2=\omega$ and $E_1+E_2=2E$ this becomes
\begin{equation}
Z(\beta+\i T,\beta-\i T)_\text{conn}=\int_{-\infty}^{+\infty}\d E\,e^{-2\beta E}\,\rho_0(E)-\int_{-\infty}^{+\infty}\d E\,e^{-2\beta E}\int_{-\infty}^{+\infty}\d\omega\, e^{\i T \omega}\,\frac{\sin^2(\pi \rho_0(E)\omega)}{\pi^2\omega^2}\,.\label{26}
\end{equation}
The $\omega$-integral is a standard Fourier transform, which gives the familiar ramp-and-plateau structure \cite{Cotler:2016fpe}
\begin{align}
Z(\beta+\i T,\beta-\i T)_\text{conn}&=\int_{-\infty}^{+\infty}\d E\,e^{-2\beta E}\,\text{min}(\rho_0(E),T/2\pi)\label{27}\\&=\int_{-\infty}^{+\infty}\d E\,e^{-2\beta E}\,\rho_0(E)-\int_{E(T)}^{+\infty}d E\,e^{-2\beta E}\,(\rho_0(E)-T/2\pi)\,,\quad \rho_0(E(T))=T/2\pi\,,\nonumber
\end{align}
with $E(T)$ determined by solving $\rho_0(E(T)) = T/2\pi$.\footnote{Here we assume $\rho_0(E)$ grows monotonically, in order to have a unique solution. The conclusion \eqref{expaexpa} remains true for non-monotonic spectra though, see appendix \ref{sect:non-monotonic}.} Choosing the energy axis such that $\rho_0(0)=0$,\footnote{Some dilaton gravities have a non-zero threshold energy, but such modifications are straightforward to incorporate.} and using integration by parts, we arrive at
\begin{align}
Z(\beta+\i T,\beta-\i T)_\text{conn}&=\int_{0}^{E(T)}\d E\,e^{-2\beta E}\,\rho_0(E)+\frac{1}{2\beta}\rho_0(E(T))e^{-2\beta E(T)}\nonumber\\&=\frac{1}{2\beta}\int_0^{T/2\pi}\d\rho_0\,e^{-2\beta E(\rho_0)}\,.\label{28}
\end{align}
This is the final formula for the spectral form factor in the $\tau$-scaling limit. One might object that we have put non-perturbative information into the calculation at this point; here we simply use the matrix model answers. The objective is to reproduce this final formula from gravity and to explain its convergence properties, solely using perturbation theory.
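Both integral forms above can be cross-checked numerically for a toy monotonic density of states. The sketch below is our illustration (with $\rho_0(E)=\sqrt{E}$ and the $e^{\S}$ prefactor set to one, both simplifications), comparing the $\text{min}(\rho_0,T/2\pi)$ form of \eqref{27} against the inverted form \eqref{28}.

```python
import math

def trapezoid(f, a, b, n=100000):
    """Simple trapezoid rule; accurate enough for this check."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def sff_min_form(rho0, beta, T, e_max):
    """min(rho0, T/2pi) form: integrate exp(-2 beta E) min(rho0(E), T/2pi)
    over E, truncating the exponentially suppressed tail at e_max."""
    return trapezoid(
        lambda E: math.exp(-2 * beta * E) * min(rho0(E), T / (2 * math.pi)),
        0.0, e_max)

def sff_inverted_form(E_of_rho, beta, T):
    """Inverted form: (1/2 beta) times the integral of exp(-2 beta E(rho))
    for rho from 0 to T/2pi, with E(rho) the inverse of rho0(E)."""
    return trapezoid(
        lambda r: math.exp(-2 * beta * E_of_rho(r)),
        0.0, T / (2 * math.pi)) / (2 * beta)
```

The two forms agree to numerical accuracy, as the integration-by-parts derivation guarantees for any monotonic $\rho_0$ with $\rho_0(0)=0$.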
As already alluded to, we consider 2d dilaton gravity \eqref{14} with a matrix integral dual and for these theories the spectrum has an expansion in powers of $E^{k+1/2}$ \cite{brezin1993exactly,douglas1990strings,Gross1989nonperturbative},
\begin{align}\label{spectrum}
\rho_0(E) = \frac{e^{\S}}{2\pi}\sum_{k=0}^\infty f_{k}\,E^{1/2+k} = \frac{e^{\S}}{2\pi}\,\bigg(w+\sum_{k=1}^{\infty}f_k\,w^{2k+1}\bigg)\,,
\end{align}
where we've introduced the notation $E = w^2$, and without loss of generality we have set $f_0 = 1$ (by changing $\S$). The function in round brackets will be referred to as $f(w)$, and its inverse (which exists by assumption) can be obtained using the Lagrange inversion theorem. Notably, the inverse has a power series expansion in odd powers of $f$ too
\begin{equation}
w(f)=f+\sum_{k=1}^{\infty}w_k\,f^{2k+1}\,,
\end{equation}
and as a result the Taylor series
\begin{equation}
e^{-2\beta w(f)^2}=1-2\beta f^2+4\pi\beta \sum_{n=2}^\infty (2n+1) P_{n-1}(\beta)\, f^{2 n}\,,
\end{equation}
is even in $f$. Here $P_n(\beta)$ is a degree $n$ polynomial in $\beta$, which can easily be computed explicitly for any fixed $n$. The $\tau$-scaled spectral form factor \eqref{28} then expands as
\begin{align}
Z(\beta+\i T,\beta-\i T)_\text{conn}&=\frac{e^{\S}}{4\pi\beta}\int_0^{Te^{-\S}}\d f\,e^{-2\beta E(f)}\nonumber\\&=\sum_{g=0}^\infty P_{g-1}(\beta)\,T^{2g+1}e^{-2g\S}=\frac{T}{4\pi\beta}-\frac{1}{6\pi}T^3e^{-2\S}+\dots\label{expaexpa}
\end{align}
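As a cross-check of the first terms of this expansion, one can redo the integral symbolically with a truncated inverse series. Below is a minimal sympy sketch (our own, not from the paper): $f_1$ is the leading spectrum coefficient of \eqref{spectrum}, and $w(f)=f-f_1f^3+O(f^5)$ follows from Lagrange inversion of $f(w)=w+f_1w^3+\dots$.

```python
# Sketch: expand the tau-scaled integral in powers of T and check that the
# T and T^3 coefficients are theory-independent (free of f1), while the T^5
# coefficient is not.  Here w(f) = f - f1*f^3 is the truncated inverse series.
import sympy as sp

f, T, beta, S = sp.symbols('f T beta S', positive=True)
f1 = sp.symbols('f1')

wf = f - f1*f**3                                    # inverse series, truncated
integrand = sp.series(sp.exp(-2*beta*wf**2), f, 0, 6).removeO()
Z = sp.exp(S)/(4*sp.pi*beta) * sp.integrate(integrand, (f, 0, T*sp.exp(-S)))
Z = sp.expand(Z)

c1 = Z.coeff(T, 1)   # universal:  1/(4 pi beta)
c3 = Z.coeff(T, 3)   # universal: -exp(-2 S)/(6 pi)
c5 = Z.coeff(T, 5)   # theory-dependent: contains f1
```

To this order the two universal coefficients match the displayed expansion, while the $T^5$ term already depends on the spectrum.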
This is the main result of this section. Computing the $\tau$-scaled limit of the (connected) spectral form factor using the non-perturbatively exact matrix integral formulation of 2d dilaton gravities results in a \emph{universal} series expansion in $T^{2g+1}e^{-2g\S}$. It is tempting to interpret the term at order $e^{-2g\S}$ as the $\tau$-scaling limit of the perturbative genus $g$ wormhole amplitude in gravity. We will confirm below in section \ref{sect:euclideanwormholes} that this is indeed the case. Let us point out some noteworthy features of this expansion.
\begin{enumerate}
\item In the limit $T\to\infty$ this series approaches the (leading order) plateau
\begin{equation}
\frac{e^{\S}}{4\pi\beta}\int_0^{\infty}\d f\,e^{-2\beta w(f)^2}=\int_0^\infty \d E\,\rho_0(E)\,e^{-2\beta E}=Z_0(2\beta)\,.
\end{equation}
Depending on the case, the series expansion can have a finite or an infinite radius of convergence (as a function of $T$). However, in all cases (with invertible spectrum) the series converges to the exact answer for small enough $T$, and that answer has a unique analytic continuation to all positive $T$. The examples in appendix \ref{app:a} should clarify this.
So, in the $\tau$-scaling limit, the plateau is \emph{perturbatively} accessible in the genus expansion. Notice also that this is different from Borel resummation, since we are taking a limit in which the badly growing terms at genus $g$ drop out.
\item The first two terms in the expansion \eqref{expaexpa} are theory-independent, meaning they do not depend on the $f_k$. The polynomial $P_n(\beta)$ depends only on $f_2\dots f_n$, so the higher the genus the more we probe the UV part of the spectrum. The signs of the individual terms depend on these $f_k$ (and $\beta$) and are not universal.
\item Theories with a nonzero Hagedorn temperature never reach a plateau when $2\beta<\beta_\text{H}$, because for these cases the would-be plateau $Z_0(2\beta)$ is divergent. We can also appreciate this using
\begin{equation} \label{simplederiv}
\partial_T Z(\beta+\i T,\beta-\i T)_\text{conn} = \frac{1}{4\pi\b}\,e^{-2\b E(Te^{-\S})}\,.
\end{equation}
For Hagedorn spectra $f(E)\sim e^{\beta_\text{H}E}$ one finds $E(f)=\log(f)/\beta_\text{H}+\text{constant}$, and hence the $\tau$-scaled spectral form factor indeed grows without bound at late times. This Hagedorn growth at high energies only occurs for non-local theories, such as string theories.
\item The sine kernel \eqref{sinekernel}, or the associated level repulsion, is generally considered to be the hallmark feature of chaotic quantum systems; it is essentially synonymous with random matrix universality. The scaling $T^{2g+1}$ contains the same information as this sine kernel, and we therefore consider it to be the real-time version of random matrix universality. There should thus be an argument for why, in any gravity model (beyond 2d dilaton gravity), there are contributions growing like $T^{2g+1}$ in the gravitational path integral. We think this explanation should be intrinsically Lorentzian.
A proposal based on topology changing processes for how these powers of time can be explained, analogous to the double-cone \cite{Saad:2018bqo}, will be presented elsewhere \cite{workinprogress}.
\end{enumerate}
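For the topological gravity spectrum $\rho_0(E)=e^{\S}\sqrt{E}/2\pi$, i.e. $f(w)=w$, the $\tau$-scaled integral closes in terms of an error function and visibly saturates at the plateau $Z_0(2\beta)$ of the first item above. A sympy sketch (our own check, not from the paper):

```python
# Sketch: in the Airy case, Z_conn = e^S/(4 pi beta) * Integral_0^{T e^{-S}} df e^{-2 beta f^2}
# is an error function of T, and its T -> infinity limit equals Z_0(2 beta).
import sympy as sp

f, T, beta, S = sp.symbols('f T beta S', positive=True)
E = sp.symbols('E', positive=True)

Z = sp.exp(S)/(4*sp.pi*beta) * sp.integrate(sp.exp(-2*beta*f**2),
                                            (f, 0, T*sp.exp(-S)))
plateau = sp.limit(Z, T, sp.oo)
# the would-be plateau from the leading spectral density:
Z0 = sp.integrate(sp.exp(S)*sp.sqrt(E)/(2*sp.pi) * sp.exp(-2*beta*E),
                  (E, 0, sp.oo))
```

One finds that `plateau` and `Z0` agree, as the general argument requires.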
We further discuss this integral and its series expansion \eqref{expaexpa} for several examples, as well as the modifications for non-monotonic spectra, in appendix \ref{app:a}, in order not to disrupt the flow of the paper.
\subsection{Euclidean wormholes and cancellations in volumes}\label{sect:euclideanwormholes}
We now switch gears, and move to the gravitational computation. For this we do an expansion of the connected two-boundary observable in genus
\begin{equation}
Z(\beta+\i T,\beta-\i T)_\text{conn}=\sum_{g=0}^\infty e^{-2g\S}\,Z_g(\beta+\i T,\beta-\i T)_\text{conn}\,\overset{?}{+}\text{non-perturbative}\,,
\end{equation}
where the genus $g$ gravitational wormhole amplitude is computed as
\begin{equation}
Z_g(\beta_1,\beta_2)_\text{conn}=\int_0^\infty \d b_1 b_1\, \frac{1}{2\pi^{1/2}{\beta_1}^{1/2}}e^{-\frac{b_1^2}{4\beta_1}}\int_0^\infty \d b_2 b_2\, \frac{1}{2\pi^{1/2}{\beta_2}^{1/2}}e^{-\frac{b_2^2}{4\beta_2}}\,V_{g,2}(b_1,b_2)\,,\label{genusg}
\end{equation}
with $V_{g,2}(b_1,b_2)$ the (deformed) Weil-Petersson volume\footnote{Here we mean a slight generalisation of the Weil-Petersson volumes, which includes cases where we have summed over defects \cite{Maxfield:2020ale, Witten:2020wvy}. One might call these deformed Weil-Petersson volumes.} for a genus $g$ wormhole with (geodesic) boundaries of lengths $b_1$ and $b_2$. They are symmetric polynomials in $b_1^2$ and $b_2^2$ with a maximum total degree of $3g-1$. The way to compute these polynomials $V_{g,2}(b_1,b_2)$ in practice is using the Eynard-Orantin topological recursion \cite{Eynard:2004mh,Eynard:2007fi,Stanford:2019vob,Saad:2019lba}, with a spectral curve $y(z)$ (originating from \eqref{spectrum}) given by\footnote{Only for the spectral curve $y(z) = \sin(2\pi z)/(4\pi)$ do we obtain the true Weil-Petersson volumes.}
\begin{equation}
y(z)=\frac{1}{2}\sum_{k=0}^\infty (-1)^k f_{k}\,z^{1+2 k}\,.\label{speccurve}
\end{equation}
We stress that these polynomials are the result of doing the gravitational path integral over all metrics which are topologically a genus $g$ connected wormhole in the dilaton gravity with genus zero spectrum \eqref{spectrum}.
In order to compare \eqref{genusg} to \eqref{expaexpa}, we compute \eqref{genusg} explicitly by writing the WP volume as
\begin{equation}
V_{g,2}(b_1,b_2)=\sum_{d_1,d_2=0}^{d_1+d_2=3g-1}V_{g,2}^{d_1,d_2}\frac{b_1^{2d_1}}{4^{d_1}d_1!}\frac{b_2^{2d_2}}{4^{d_2}d_2!}\,,\label{vexp}
\end{equation}
with some symmetric constants $V_{g,2}^{d_1,d_2}$. We obtain
\begin{equation}
Z_g(\beta_1,\beta_2)_\text{conn}=\frac{1}{\pi}\sum_{d_1,d_2=0}^{d_1+d_2=3g-1}V_{g,2}^{d_1,d_2}\,\beta_1^{1/2+d_1}\beta_2^{1/2+d_2}\,.\label{222}
\end{equation}
Continuing to Lorentzian signature and putting $\beta=0$ (see below for finite $\beta$) this becomes
\begin{equation}
Z_g(\i T,-\i T)_\text{conn}=\frac{1}{\pi}\sum_{q=0}^{(3g-1)/2}(-1)^q\,T^{2 q+1}\sum_{d=0}^{2q}(-1)^d\,V_{g,2}^{d,2q-d}\,.\label{223}
\end{equation}
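The continuation used here only involves a phase bookkeeping: with $\beta_{1,2}=\pm\i T$ one has $\beta_1^{1/2+d_1}\beta_2^{1/2+d_2}=(-1)^{q+d_1}\,T^{2q+1}$ on the principal branch whenever $d_1+d_2=2q$. A quick numerical sanity check of this identity (our own, not part of the derivation):

```python
# Check the phase (iT)^(1/2+d1) * (-iT)^(1/2+d2) = (-1)^(q+d1) * T^(2q+1)
# for d1 + d2 = 2q; the T-independent phase factor is tested numerically
# on the principal branch.
def phase(d1, d2):
    return (1j) ** (0.5 + d1) * (-1j) ** (0.5 + d2)

for q in range(5):
    for d1 in range(2 * q + 1):
        d2 = 2 * q - d1
        assert abs(phase(d1, d2) - (-1) ** (q + d1)) < 1e-9
```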
Comparing to \eqref{expaexpa}, we claim that in the $\tau$-scaling limit for generic dilaton gravities we should have
\begin{equation}
Z_g(\i T,-\i T)_\text{conn}=\,P_{g-1}(0)\,T^{2g+1}\,.
\end{equation}
This is rather surprising, because \eqref{223} contains powers of $T$ larger than $2g+1$, which would naively dominate the $\tau$-scaling limit. This indicates a novel cancellation between the various coefficients of the volumes $V_{g,2}(b_1,b_2)$! More precisely, we are claiming that for \emph{any} theory
\begin{equation}
\sum_{d=0}^{2q}(-1)^d\,V_{g,2}^{d,2q-d}=0\,,\quad q>g\,,\quad \sum_{d=0}^{2g}(-1)^d\,V_{g,2}^{d,2g-d}=\pi (-1)^g P_{g-1}(0)\,.\label{simple}
\end{equation}
These cancellations for $q>g$ are quite surprising from the point of view of these polynomials, but they are nevertheless true, as one can check case by case. The simplest example is the genus $g=3$ wormhole with $q=4$. Based on dimensional analysis, one expects a term proportional to $T^9$. However, this term is absent because of the cancellation\footnote{For a list of volumes see for instance \cite{van2008intersection}.}
\begin{equation}
\frac{1}{2}\sum_{d=0}^{8}(-1)^d\,V_{3,2}^{d,8-d}=\frac{1}{324}-\frac{5}{324}+\frac{77}{1620}-\frac{503}{5670}+\frac{607}{11340}=0\,.\label{cancelbasic}
\end{equation}
This example is theory independent since it only depends on $f_1=1$, but we stress that more generally the $V_{g,2}^{d_1,d_2}$ depend on all the $f_k$, such that \eqref{expaexpa} predicts theory-dependent cancellations. One can easily check case by case that these cancellations indeed occur; more examples are provided in appendix \ref{app:a}.
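Since the cancellation \eqref{cancelbasic} involves only rational numbers, it can be confirmed exactly; a one-line check (ours) using Python's rational arithmetic:

```python
# Exact rational check of the genus-3, q = 4 cancellation quoted in the text.
from fractions import Fraction as F

terms = [F(1, 324), -F(5, 324), F(77, 1620), -F(503, 5670), F(607, 11340)]
assert sum(terms) == 0
```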
To compute the $\tau$-scaled spectral form factor for finite $\beta$, we can introduce symmetric polynomials $e_2=\beta_1\beta_2$ and $e_1=\beta_1+\beta_2$, and reorganize \eqref{222} into
\begin{align}
Z_g(\beta+\i T,\beta-\i T)_\text{conn}=&\frac{1}{\pi}\sum_{q=0}^{(3g-1)/2}(-1)^q\,e_2^{ q+1/2}\sum_{m=2q}^{3g-1}e_1^{m-2q}\nonumber\\&\qquad \qquad 2\sum_{d=0}^q(-1)^d\,V_{g,2}^{d,m-d}\,\frac{m/2-d}{m-q-d}\frac{(m-q-d)!}{(q-d)!(m-2q)!}\,,\label{z2bdyexp}
\end{align}
where now $e_2=\beta^2+T^2$ and $e_1=2\beta$.\footnote{For the term $m=2q$ the term with $d=q$ receives an extra $1/2$, which we left implicit.} The constraint that this amplitude grows no faster than $T^{2g+1}$ imposes that the coefficient of $e_2^{q+1/2}$ vanishes for $q>g$
\begin{equation}
\sum_{d=0}^q(-1)^d\,V_{g,2}^{d,m-d}\,\frac{m/2-d}{m-q-d}\frac{(m-q-d)!}{(q-d)!(m-2q)!}=0\,,\quad q>g\,,m\geq 2q\,,\label{genericcancel}
\end{equation}
and furthermore there is a precise theory-\emph{dependent} prediction for $q=g$
\begin{equation}
2\sum_{m=2g}^{3g-1}(2\beta)^{m-2g}\sum_{d=0}^g(-1)^d\,V_{g,2}^{d,m-d}\,\frac{m/2-d}{m-g-d}\frac{(m-g-d)!}{(g-d)!(m-2g)!}=\pi (-1)^g P_{g-1}(\beta)\,.
\end{equation}
The case $m=2q$ reduces to the $\beta=0$ constraints \eqref{simple} (taking into account the factor $1/2$ mentioned above). The constraints with $m>2q$ represent additional cancellations which we claim are satisfied by the genus $g$ wormhole amplitudes of \emph{all} dilaton gravities (with matrix integral duals).
A happy consequence of the $\tau$-scaling limit is that all terms with $q < g$ are subleading. These terms contain lower powers of $T$, but their coefficients grow more and more rapidly the lower the power of $T$, ultimately growing like $(2g)!$ as Taylor coefficients in $b^2$; this was conjectured in \cite{zograf2008large} and proven in \cite{mirzakhani2015towards}, see also equation (228) in \cite{Saad:2019lba}. We make some further comments on this in section \ref{sect:5.4}. Including such terms makes the sum over genus very complicated, and one needs to resort to a Borel resummation in order to study it. The $\tau$-scaling limit thus gets rid of these intricacies by selecting only the $q=g$ term at each genus. This is, in part, why the series is convergent and non-perturbative corrections are \emph{not} present.
We can also understand this from the formulation of these quantities in Efetov's non-linear sigma model \cite{Haake:1315494,Altland:2020ccq,Altland:2022xqx,efetov1983supersymmetry} (which describes the $\tau$-scaling limit). The resolvent is represented by a double integral over variables $s_{11}$ and $s_{22}$. The energy dependence in the action is $e^{\S}E(s_{11}-s_{22})$, and the double integral is dominated by two saddle points. The expansion around the first saddle grows like $(2g)!$ in the energy domain, whereas the expansion around the second saddle gives a non-perturbative (in $e^{\S}$) correction. We can now inverse Laplace transform this to get a non-linear sigma model computation of $Z(\beta)$. The energy integral gives a delta function $\delta(\beta-e^{\S}(s_{11}-s_{22}))$, which collapses the double integral to a single integral. This single integral, notably, has just one saddle point, expanding around which produces the sum over wormholes. Because there is only one saddle, there are no non-perturbative corrections in the thermal ensemble. This calculation extends to the connected spectral form factor; it becomes a bit messier, but the conclusion remains the same.
In summary, there are two reasons why this genus expansion is convergent: the $\tau$-scaling limit gets rid of terms that grow too fast, and the genus expansion in the thermal ensemble is more convergent than that in the microcanonical ensemble.\footnote{Another way to see that perturbation theory in the microcanonical ensemble is more complicated is that the integrals over $b$ with the density of states of the trumpet are not convergent and need to be defined using analytic continuation. This is true in the Airy case; in JT an additional complication is that the volumes themselves contain pieces that grow like $(2g)!$. The $\tau$-scaling limit cures the second, but not the first, complication, which is cured by going to the canonical ensemble.}
\section{Cancellations in topological gravity}\label{sect:intersect}
In the previous section we found that there need to be non-trivial cancellations between the coefficients of (deformed) Weil-Petersson volumes. In this section and the next we explain how these relations come about. First we will consider topological gravity, i.e. the Airy model, where many of these cancellations are known \cite{Liu13896,eynard2021natural} in the context of intersection theory. In section \ref{sect:universalcanc} we will use open-closed duality to show that these very cancellations of \cite{Liu13896,eynard2021natural} actually imply the growth $T^{2g+1}e^{-2g\S}$ for all double-scaled matrix integrals, including all models of dilaton gravity \cite{Witten:2020wvy,Maxfield:2020ale,Mertens:2020hbs,Blommaert:2021fob,Blommaert:2021gha,Blommaert:2022ucs}.
We remind the reader that this $T^{2g+1}e^{-2g\S}$ is the type of series that we found earlier to converge to the plateau. So, in part, these cancellations explain why the sum over genus $g$ wormholes converges (in the $\t$-scaling limit) to the plateau.
\subsection{Ribbon graphs and intersection numbers}\label{sect:3.1}
In 1991, Witten \cite{Witten:1990hr} showed that topological gravity is related to a matrix model with spectral density
\begin{equation}
\rho_0(E)=e^{\S}\frac{E^{1/2}}{2\pi}. \label{topgravspec}
\end{equation}
This spectral density occurs for any dilaton gravity at small enough energy, and is thus an important first case to consider.
Small energies dominate at low temperature, which corresponds to very large asymptotic boundaries. Since the partition functions $Z(\b_1,\dots,\b_n)$ are given by integrals over the (deformed) Weil-Petersson volumes \eqref{genusg}, one might wonder what large $\b_i$ implies for those volumes. It is straightforward to see from the trumpet partition function \cite{Saad:2019lba}
\begin{equation}
Z_{\text{trumpet}}(\b,b) = \frac{1}{2\pi^{1/2}\beta^{1/2}}e^{-\frac{b^2}{4\beta}}\,,
\end{equation}
that we now get contributions mostly from the region where $b^2 \sim \b$, hence $b^2$ is large. In other words, we can consider the leading large $b_i$ terms in the (deformed) WP volumes. These are theory-independent terms which have a clear geometric interpretation, as we now point out, see also Appendix \ref{app:c4}.
Since they are theory independent, we can start by considering the undeformed WP volumes, which compute the volumes of moduli spaces of hyperbolic Riemann surfaces with $R=-2$. As the boundaries of these Riemann surfaces are geodesics with $K=0$, we can use Gauss-Bonnet to show that the area of such a Riemann surface of genus $g$ with $n$ geodesic boundaries is given by
\begin{equation}
A = 2\pi (2g+n - 2)\,,\label{3.3}
\end{equation}
and hence is a \emph{constant}. This means that when we take the $b_i$ very large, the Riemann surface needs to become a collection of very thin strips glued along trivalent vertices (trivalent because we can consider the pair of pants decomposition of the Riemann surface). In other words, the (deformed or undeformed) WP volumes reduce to the volume of moduli space of trivalent ribbon graphs, see also Fig. \ref{fig:ribbon}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{v11ribbon.pdf}
\caption{The torus with one puncture $V_{1,1}(b)$ becomes a ribbon graph, when the hole $b$ becomes large. In this limit the moduli space is spanned by the lengths of the ribbons with the constraint $b=2\ell_1+2\ell_2+2\ell_3$, and we integrate with the flat measure to recover $V_{1,1}(b)=b^2/48$.}
\label{fig:ribbon}
\end{figure}
These volumes are simple to compute: one can parameterize the moduli space of ribbon graphs for a given graph $\G$ by the lengths $\ell_j$ of the edges of the graph. These lengths are constrained only by the fact that the lengths $\ell_{j_i}$ of the edges forming a given boundary $i$ add up to $b_i$. Accounting for the standard symmetry factor in Feynman diagrams one thus obtains
\begin{equation}
V_{\G_{g,n}} (b_1\dots b_n)=\frac{1}{\abs{\text{Aut}(\G_{g,n})}}\prod_{j=1}^{6g-6+3n}\int_0^\infty \d \ell_j\,\prod_{i=1}^n \delta(b_i-\sum_{j_i}\ell_{j_i})\,.\label{3.4}
\end{equation}
One can then indeed check that summing over all graphs ${\G_{g,n}}$ of a given genus $g$ reproduces the Airy volumes. But how is this related to intersection numbers? The crux of Kontsevich's seminal work \cite{KontsevichModel} was the realization that these integrals can be represented equivalently as (see also \cite{Dijkgraaf:2018vnm,Okuyama:2019xbv})
\begin{equation}
V_{g,n}(b_1\dots b_n)=\sum_{\G_{g,n}}V_{\G_{g,n}} (b_1\dots b_n)=\int_{\overline{\mathcal{M}}_{g,n}}\exp\bigg(\frac{1}{2}\sum_{i=1}^n b_i^2\, \psi_i\bigg)\,.\label{43}
\end{equation}
The right hand side is an integral over the moduli space of Riemann surfaces. Just like for the ordinary (deformed) WP volumes we include degenerate Riemann surfaces, which results in the so-called Deligne-Mumford compactification of the moduli space of Riemann surfaces $\overline{\mathcal{M}}_{g,n}$. Here the $\psi_i$ denote the first Chern classes $c_1(\mathcal{L}_i)$ of line bundles of one-forms $\mathcal{L}_i$ that are constructed as follows, see also appendix \ref{app:c}. Take a punctured Riemann surface and consider the cotangent space at each puncture $x = x_i$. For each $x_i$ these cotangent spaces depend on the moduli of the Riemann surface under consideration. The collection of all those spaces is a bundle $\mathcal{L}_i$ over $\overline{\mathcal{M}}_{g,n}$.
The key to proving the relation \eqref{43} is to realize that one can choose a specific one form $\alpha$ in $\mathcal{L}_i$ for which the curvature two-form $\d\alpha=c_1(\mathcal{L}_i)$ takes on a simple form \cite{KontsevichModel}. The first step is to use the fact that there is an equivalence between the moduli space of Riemann surfaces and the moduli spaces of ribbon graphs \cite{penner1987decorated,harer1988cohomology,strebel1984quadratic}. Concretely, one can associate to every Riemann surface with $n$ boundaries of lengths $b_i$ a unique ribbon graph with lengths $\ell_j$, with again the sum of the $\ell_{j_i}$ constrained to $b_i$. This map is provided by the Jenkins-Strebel quadratic differential \cite{strebel1984quadratic}. So we can exchange the fundamental domain in terms of the Teichm\"uller coordinates $b_i$ and $\tau_i$ for a sum over ribbon graphs with the simple constraint that the lengths $\ell_{j_i}$ add up to $b_i$.\footnote{It is quite remarkable that the moduli space of Riemann surfaces is so simple in these coordinates $\ell_j$. One can think of $\ell_j$ as the propagation times of open strings, and this simplicity of moduli space is roughly why open string field theory is simpler than closed string field theory.} In these coordinates, Kontsevich found a local expression for a one form $\alpha$ whose curvature is constant\footnote{We are suppressing some factors of two and minus signs in the sum, associated for instance with cases where one edge contributes twice to $b_i$, see theorem 3.20 in \cite{do2008intersection}. These are not important for this intuitive argument.}
\begin{equation}
\psi_i=\d\alpha=c_1(\mathcal{L}_i)=\frac{2}{b_i^2}\sum_{j<k}\d \ell_{j_i}\wedge \d \ell_{k_i}\,.\label{3.6}
\end{equation}
With this equation it is not hard to imagine that writing out the exponential in \eqref{43} generates simply the flat measure in \eqref{3.4}. The actual proof still involves some combinatorics \cite{KontsevichModel}, but the point should be obvious. We want to emphasize that all this aside, the $\psi_i$ are simply two forms on $\overline{\mathcal{M}}_{g,n}$. In terms of Teichm\"uller coordinates $b_i$ and $\tau_i$ they have rather complicated expressions, but in the $\ell_j$ coordinates they are constants.\footnote{To avoid confusion, the Weil-Petersson measure $\d b_i\wedge \d\tau_i$ only becomes flat in the $\ell_j$ coordinates for large $b_i$ \cite{do2008intersection}. So in general one can view Weil-Petersson volumes as integrating over the moduli space of ribbon graphs, but with a non-flat measure. The integration domain is the same \cite{penner1987decorated,harer1988cohomology,strebel1984quadratic}, but the integrand is different.}
Introducing common notation for the $2k$-forms (the $k$'th power of the Chern classes)
\begin{equation}
\tau_k=\psi_i^k\,,
\end{equation}
and writing out the exponentials in \eqref{43} we arrive at the relation between (deformed) WP volumes for large $b_i$ and so-called intersection numbers $\average{\tau_{d_1}\dots \tau_{d_n}}$ (for more about this name see appendix \ref{app:c})
\begin{equation}
V_{g,n}(b_1\dots b_n)=\sum_{d_1=0}^\infty\frac{b_1^{2d_1}}{2^{d_1}d_1!}\dots\sum_{d_n=0}^\infty\frac{b_n^{2d_n}}{2^{d_n}d_n!}\average{\tau_{d_1}\dots \tau_{d_n}}_g\,,\quad \average{\tau_{d_1}\dots \tau_{d_n}}_g=\int_{\overline{\mathcal{M}}_{g,n}}\psi_1^{d_1}\dots\psi_n^{d_n}\,.\label{expvolumes}
\end{equation}
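The two simplest instances of this dictionary can be written out explicitly. A short sympy sketch (our own illustration), using the values $\average{\tau_0^3}_0=1$ and $\average{\tau_1}_1=1/24$ quoted later in the text:

```python
# Sketch: evaluate the expansion V_{g,n} = sum_d b^{2d}/(2^d d!) <tau_d ...> in the
# two simplest cases, using <tau_0^3>_0 = 1 and <tau_1>_1 = 1/24 as input.
import sympy as sp

b = sp.symbols('b')

# genus 0, n = 3: the selection rule forces d_1 = d_2 = d_3 = 0
V03 = sp.Integer(1)                                  # <tau_0^3>_0 = 1  =>  V_{0,3} = 1
# genus 1, n = 1: the selection rule forces d_1 = 1
V11 = b**2 / (2**1 * sp.factorial(1)) * sp.Rational(1, 24)
```

The result `V11 = b**2/48` matches the ribbon-graph computation in Fig. \ref{fig:ribbon}.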
Because the first Chern classes $\psi_i$ are two-forms, and the real dimension of the space $\overline{\mathcal{M}}_{g,n}$ is $6g-6+2n$, there is an obvious selection rule
\begin{equation}
\sum_{i=1}^n d_i=3g-3+n\,,\label{selection}
\end{equation}
and from the definition, the correlators $\average{\tau_{d_1}\dots \tau_{d_n}}$ are symmetric under exchanging any two labels. In terms of the $n$-boundary finite temperature partition function we then obtain
\begin{align}
Z(\beta_1\dots \beta_n)_\text{conn}&=\frac{\beta_1^{1/2}}{\pi^{1/2}}\dots\frac{\beta_n^{1/2}}{\pi^{1/2}}\sum_{g=0}^\infty e^{-2g\S} \sum_{d_1=0}^\infty (2\beta_1)^{d_1}\dots \sum_{d_n=0}^\infty (2\beta_n)^{d_n}\average{\tau_{d_1}\dots \tau_{d_n}}_g\nonumber\\&=\frac{x_1^{1/2}}{(2\pi)^{1/2}}\dots\frac{x_n^{1/2}}{(2\pi)^{1/2}}\,\mathcal{F}(x_1\dots x_n)\,,\quad x_i=2\beta_i e^{-2\S/3}\,,\label{zfromtau}
\end{align}
with the generating functional
\begin{equation}
\mathcal{F}(x_1\dots x_n)=\sum_{d_1=0}^\infty x_1^{d_1}\dots \sum_{d_n=0}^\infty x_n^{d_n}\average{\tau_{d_1}\dots \tau_{d_n}}\,.\label{58}
\end{equation}
By taking the large $b_i$ limit in Mirzakhani's recursion relations \cite{mirzakhani2007simple,Stanford:2019vob} (either in the answer or in the derivation) one obtains a simple version of topological recursion for ribbon graphs, see appendix \ref{app:c5}. This can be translated to simple recursion relations for intersection numbers $\average{\tau_{d_1}\dots \tau_{d_n}}$, which are the expansion coefficients of ribbon graph volumes \eqref{expvolumes}. One eventually finds equation (25) in \cite{faber2000logarithmic}\footnote{This is quite tedious. We can rewrite the recursion relations of volumes in terms of recursion relations for intersection numbers as in \cite{mulase2006mirzakhani}, but now using the volumes of ribbon graphs and without the $\kappa$ two-forms. What one obtains is the Dijkgraaf-Verlinde$^2$ \cite{dijkgraaf1991topological} version of the recursion relations, equation (7.27) in \cite{Dijkgraaf:1991qh} but with appropriate normalization. These recursion relations are not manifestly the same as the ones we wrote above; they correspond to the Virasoro constraint equations in the KdV formalism, whereas our equations above literally follow from the KdV equations themselves. That these two infinite sets of differential equations (and correspondingly, two infinite sets of recursion equations) are equivalent was proven again in \cite{Dijkgraaf:1991qh}, see also \cite{fukuma1993continuum,Witten:1991mn}. We explain these steps more carefully in section \ref{sect:KdV}.}$^,$\footnote{The labels $d_j$ fix the genus $g$ via the selection rule \eqref{selection} and should be chosen such that $g$ is a non-negative integer (otherwise the correlator trivially vanishes). Applied to this case the selection rule reads
\begin{equation}
\sum_{j=0}^\infty (j-1)d_j+(k-1)=3g-1\,.
\end{equation}
Similarly the correlators on the right side only get contributions from values $a_j$ and $b_j$ such that $g_1$ and $g_2$ are non-negative integers. We automatically have $g_1+g_2=g$, as one checks using the selection rules. There are no factors of $e^{\S}$ in these equations; these are just statements about integrals of $2j$-forms over some symplectic manifold.
}
\begin{align}
&(2k+1)\bigg\langle \tau_0^2\tau_k\prod_{j=0}^\infty \tau_j^{d_j}\bigg\rangle_g=\prod_{j=0}^\infty\sum_{a_j=0}^{d_j} \frac{d_j!}{a_j!(d_j-a_j)!}\bigg\langle \tau_0\tau_{k-1}\prod_{j=0}^\infty \tau_j^{d_j-a_j}\bigg\rangle_{g_1}\bigg\langle \tau_0^3\prod_{j=0}^\infty \tau_j^{a_j}\bigg\rangle_{g_2}\nonumber\\&\hspace{1cm}+2\prod_{j=0}^\infty \sum_{b_j=0}^{d_j}\frac{d_j!}{b_j!(d_j-b_j)!}\,\bigg\langle \tau_0^2\tau_{k-1}\prod_{j=0}^\infty \tau_j^{d_j-b_j}\bigg\rangle_{g_1} \bigg\langle \tau_0^2\prod_{j=0}^\infty \tau_j^{b_j}\bigg\rangle_{g_2} +\frac{1}{4}\,\bigg\langle \tau_0^4\tau_{k-1}\prod_{j=0}^\infty \tau_j^{d_j}\bigg\rangle_{g-1}\,.\label{KdVrec}
\end{align}
These recursion relations encode the famous Korteweg-de Vries (KdV) hierarchy \cite{Dijkgraaf1991loop}, which is an infinite set of differential equations. The connection to the KdV hierarchy will prove useful in our work, so we summarize it now.
\subsection{KdV equations and intersection numbers}\label{sect:KdV}
The KdV hierarchy consists of an infinite set of differential equations \cite{Witten:1990hr}
\begin{equation}
(2n+1)\partial_{n}\partial_0^2 F=\partial_{n-1}\partial_0 F\partial_0^3 F+2\partial_{n-1}\partial_0^2 F\partial_0^2 F+\frac{1}{4}\partial_{n-1}\partial_0^4 F\,,\quad n\geq 0\,.\label{kdv}
\end{equation}
Here $F$ is a function of an infinite set of parameters $t_0,t_1,t_2\dots$ called KdV times and $\partial_i=\partial/\partial t_i$. Often, these equations are presented in a slightly different way, by introducing functions $R_n$ and $u$ of the KdV times
\begin{equation}
R_n=\partial_{n-1}\partial_0 F\,,\quad R_1=\partial_0^2 F=u\,.
\end{equation}
By definition one then has
\begin{equation}
\partial_n u=\partial_0 R_{n+1}\,.
\end{equation}
The functions $R_n$ for $n\geq 0$ are then determined recursively by the KdV hierarchy \eqref{kdv} as functions of $u$
\begin{equation}
(2n+1)\partial_0 R_{n+1}= (\partial_0 u) R_n + 2 u \partial_0 R_n +\frac{1}{4}\partial_0^3 R_n\,,\label{413}
\end{equation}
the $n=0$ version of this equation
\begin{equation}
\partial_0 u= R_0 \partial_0 u+2 u \partial_0 R_0+\frac{1}{4}\partial_0^3 R_0
\end{equation}
has the solution $R_0=1$. Using that $R_1=u$, one can then solve the $n=1$ version of \eqref{413} and obtain the solution\footnote{This is the standard Korteweg-de Vries equation $\partial_1u=u\partial_0u+\partial_0^3 u/12$.}
\begin{equation}
R_2=\frac{1}{2}u^2+\frac{1}{12}\partial_0^2 u\,.
\end{equation}
This can be continued recursively in $n$, and so in principle one can find solutions to the KdV equations in terms of one unknown function $u$. The functions $R_n$ that one obtains this way are known (up to normalization factors that differ between various papers) as Gelfand-Dikii polynomials.
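The first few Gelfand-Dikii polynomials can be checked directly against the recursion \eqref{413} by pure differentiation. A sympy sketch (our own); the expression for $R_3$ is the standard next polynomial in this normalization, an input of ours rather than a result quoted in the text:

```python
# Verify (2n+1) d0 R_{n+1} = (d0 u) R_n + 2 u (d0 R_n) + (1/4) d0^3 R_n
# for n = 0, 1, 2, using the standard Gelfand-Dikii polynomials.
import sympy as sp

t0 = sp.symbols('t0')
u = sp.Function('u')(t0)
d = lambda expr, k=1: sp.diff(expr, t0, k)

R0 = sp.Integer(1)
R1 = u                                        # R_1 = u by definition
R2 = u**2/2 + d(u, 2)/12                      # as in the text
R3 = u**3/6 + u*d(u, 2)/12 + d(u)**2/24 + d(u, 4)/240   # our assumed next entry

def check(Rn, Rnext, n):
    lhs = (2*n + 1) * d(Rnext)
    rhs = d(u)*Rn + 2*u*d(Rn) + d(Rn, 3)/4
    return sp.simplify(lhs - rhs) == 0

assert check(R0, R1, 0) and check(R1, R2, 1) and check(R2, R3, 2)
```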
In fact the KdV hierarchy has one more equation, known as the string equation \cite{Witten:1990hr}
\begin{equation}
\partial_0^2 F=t_0+\sum_{i=0}^\infty
t_{i+1}\partial_0\partial_i F\,,\label{stringeq}
\end{equation}
which, via the definition of the Gelfand-Dikii polynomials, becomes
\begin{equation}
0=\sum_{k=0}^\infty (t_k-\delta_{k,1})R_k\,.
\end{equation}
Inserting the expressions for the Gelfand-Dikii polynomials in terms of powers of $u$ (and $t_0$ derivatives thereof) this becomes a highly complicated equation for $u$, which is all that remains to be solved of the KdV hierarchy of equations. But what does this have to do with the intersection numbers we discussed previously?
The relation with intersection numbers is that this unknown function $F$ is a generating functional of intersection numbers \cite{KontsevichModel,Witten:1990hr} (see appendix \ref{app:c2} for more details about this)
\begin{equation}
F=\bigg\langle\exp\bigg(\sum_{k=0}^\infty t_k \tau_k\bigg)\bigg\rangle=\sum_{\{d_i\}} \prod_{i=0}^\infty\frac{t_i^{d_i}}{d_i!}\bigg\langle \prod_{j=0}^\infty \tau_j^{d_j}\bigg\rangle\,.\label{Finter}
\end{equation}
Bearing in mind that in the gravitational description one interprets $\average{\tau_{d_1}\dots \tau_{d_n}}$ as connected correlators \eqref{zfromtau}, it is also natural to consider the generating function of full (not per se connected) correlators
\begin{equation}
Z=\exp( F)\,.
\end{equation}
There are approximately a gazillion proofs for \eqref{Finter}, but in the spirit of the discussion we have had thus far we only mention one approach, due to Dijkgraaf-Verlinde$^2$ \cite{Dijkgraaf:1991qh}, and explained nicely by Witten \cite{Witten:1991mn}. The gist is that using standard manipulations one can show that the solution $F$ of the KdV hierarchy also satisfies linear differential equations \cite{Witten:1991mn}
\begin{equation}
L_n Z=0\,,\quad n\geq -1\,,\label{vir}
\end{equation}
with $L_n$ satisfying the Virasoro algebra\footnote{As a result, you actually only need to consider the equations for $-1\leq n \leq 2$ \cite{Witten:1991mn}.\label{few_n_Needed}}
\begin{equation}
[L_m,L_n] = (m-n)L_{m+n} + \delta_{n+m,0}\frac{m(m^2-1)}{12}\,.
\end{equation}
They are explicitly given by
\begin{equation}
L_n=\frac{1}{2}\sum_{m=-\infty}^{+\infty}\alpha_m \alpha_{n-m+1}+\frac{1}{16}\delta_{n,0}\,, \label{421}
\end{equation}
with creation and annihilation operators
\begin{equation}
\alpha_m=
\begin{cases}
\frac{1}{2^{1/2}}(2m-1)!!\,\partial_{m-1}\, &m\geq 1
\\\frac{1}{2^{1/2}}\frac{1}{(-2m-1)!!}\,(t_{-m}-\delta_{m,-1})\, &m\leq 0\,.
\end{cases}
\end{equation}
Then one can show that these differential equations are equivalent to the topological recursion relations that we found in appendix \ref{app:c5} for the ribbon graph volumes $V_{g,n}(b_1\dots b_n)$. In particular, one can write the recursion relations of those volumes in terms of recursion relations for the intersection numbers as in \cite{mulase2006mirzakhani}; the result is the Dijkgraaf-Verlinde$^2$ recursion relation (equation (7.27) in \cite{Dijkgraaf:1991qh} or (4.1) in \cite{Dijkgraaf1991loop})
\begin{align}
&(2n+3)!!\,\bigg\langle \tau_{n+1}\prod_{i\in X} \tau_{k_i}\bigg\rangle_g =\sum_{j\in X}\frac{(2n+2k_j+1)!!}{(2k_j-1)!!}\,\bigg\langle \tau_{k_j+n}\prod_{i\in X/j} \tau_{k_i}\bigg\rangle_g\nonumber\\&\qquad\qquad\qquad\qquad\qquad\quad\,\,\,+\frac{1}{2}\sum_{m=1}^n(2m-1)!!(2n-2m-1)!!\,\bigg\langle \tau_{m-1}\tau_{n-m}\prod_{i\in X} \tau_{k_i}\bigg\rangle_{g-1} \nonumber\\&\qquad\qquad+\frac{1}{2}\sum_{g'=0}^g\sum_{X_1\cup X_2=X}\sum_{m=1}^n(2m-1)!!(2n-2m+1)!!\,\bigg\langle \tau_{m-1}\prod_{i_i\in X_1} \tau_{k_{i_1}}\bigg\rangle_{g'}\bigg\langle \tau_{n-m}\prod_{i_2\in X_2} \tau_{k_{i_2}}\bigg\rangle_{g-g'} \label{523}
\end{align}
We remind the reader that these correlators have an implicit genus, because of the selection rule \eqref{selection}. If we assign genus $g$ to the correlators on the first line, one sees that the correlators on the second line have genus $g-1$, and the ones on the final line have genera $g'$ and $g-g'$. It is obvious then where each term comes from in the Mirzakhani recursion relations for $V_{g,n}(b_1\dots b_n)$.
This is identical to the equations that one finds from writing out \eqref{vir}. To see this, one writes $Z$ in terms of $F$, and expands $F$ as in \eqref{Finter} in powers of $t_k$. In such an expansion, a derivative $\partial_k$ is the creator of an extra $\tau_k$
\begin{equation}
\partial_k F=\sum_{\{d_i\}}\prod_{i=0}^\infty \frac{t_i^{d_i}}{d_i!}\bigg\langle \tau_k\prod_{j=0}^\infty \tau_j^{d_j}\bigg\rangle\,,\label{string}
\end{equation}
and multiplication with $t_k$ is like removing or annihilating a $\tau_k$
\begin{equation}
t_k F=\sum_{\{d_i\}}\prod_{i=0}^\infty \frac{t_i^{d_i}}{d_i!}\bigg\langle d_k \tau_k^{d_k-1}\prod_{j\neq k}^\infty \tau_j^{d_j}\bigg\rangle\,.
\end{equation}
Applying this several times one indeed recovers \eqref{523} (by comparing terms with identical powers $t_i^{d_i}$). Before proceeding we mention two special cases of \eqref{523}. The case $n=-1$ is called the string equation, and is equivalent to \eqref{stringeq}\footnote{Here and below $g$ is determined using the selection rule \eqref{selection}; the correlator vanishes whenever $g$ is not a non-negative integer
\begin{equation}
\sum_{i=0}^m k_i=3g-2+m\,.
\end{equation}
}
\begin{equation}
\bigg\langle \tau_{0}\prod_{i=1}^m \tau_{k_i}\bigg\rangle_g =\sum_{j=1}^m\bigg\langle \tau_{k_j-1}\prod_{i\neq j}^m \tau_{k_i}\bigg\rangle_g\,,\label{426}
\end{equation}
and the case $n=0$ is called the dilaton equation (use the selection rule \eqref{selection} to simplify the prefactor)
\begin{equation}
\bigg\langle \tau_{1}\prod_{i=1}^m \tau_{k_i}\bigg\rangle_g =-\chi\,\bigg\langle\prod_{i=1}^m \tau_{k_i}\bigg\rangle_g\,,\quad \chi=2-2g-m\,.\label{427}
\end{equation}
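Both equations can be verified explicitly at genus zero against the well-known closed formula $\average{\tau_{d_1}\dots\tau_{d_n}}_0=(n-3)!/\prod_i d_i!$, valid when $\sum_i d_i=n-3$. A minimal sketch (the function names are ours) using exact rational arithmetic:

```python
# Sketch: check the string equation (string) and dilaton equation (dilaton)
# at genus zero, using the well-known closed formula
#   <tau_{d_1} ... tau_{d_n}>_0 = (n-3)! / (d_1! ... d_n!)   when sum d_i = n-3.
from fractions import Fraction
from math import factorial

def tau0(ds):
    """Genus-zero intersection number <tau_{d_1}...tau_{d_n}>_0."""
    n = len(ds)
    if n < 3 or sum(ds) != n - 3:
        return Fraction(0)
    denom = 1
    for d in ds:
        denom *= factorial(d)
    return Fraction(factorial(n - 3), denom)

def check_string(ks):
    # <tau_0 prod tau_{k_i}> = sum_j <tau_{k_j - 1} prod_{i != j} tau_{k_i}>
    lhs = tau0([0] + ks)
    rhs = sum((tau0([ks[j] - 1] + ks[:j] + ks[j + 1:])
               for j in range(len(ks)) if ks[j] > 0), Fraction(0))
    return lhs == rhs

def check_dilaton(ks):
    # <tau_1 prod tau_{k_i}> = (2g - 2 + m) <prod tau_{k_i}>, here g = 0, m = len(ks)
    return tau0([1] + ks) == (len(ks) - 2) * tau0(ks)

assert tau0([0, 0, 0]) == 1                          # <tau_0^3> = 1
assert check_string([0, 0, 1]) and check_string([1, 2, 0, 0, 0])
assert check_dilaton([0, 0, 0]) and check_dilaton([1, 0, 0, 0])
```

The analogous checks at higher genus require the full recursion \eqref{523}; the genus-zero formula already illustrates the structure.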
The first term in
\begin{equation}
L_{-1}=\frac{1}{4} t_0^2 -\frac{1}{2}\partial_0+\sum_{m=1}^\infty \frac{1}{2} t_m \partial_{m-1}\,,
\end{equation}
fixes the fact that $\langle \tau_0^n\rangle=\delta_{n,3}$. Similarly, we can understand the term $1/16$ in \eqref{421} as fixing $\langle \tau_1\rangle=1/24$. Note that using equation \eqref{3.4} one finds $V_{1,1}(b)=b^2/48$ (see also \cite{Stanford:2022fdt} and appendix \ref{app:c}), from which we deduce $\average{\tau_1}=1/24$. Furthermore from $V_{0,3}(b_1,b_2,b_3)=1$ (the moduli space $\mathcal{M}_{0,3}$ is a point) one finds $\average{\tau_0^3}=1$. These are the initial conditions for the recursion relations.
As mentioned in footnote \ref{few_n_Needed}, beyond the $n=-1$ and $n=0$ cases we only need the $n=1$ and $n=2$ Virasoro conditions. Imposing these conditions results in non-linearities: whereas acting with $L_{-1}$ and $L_0$ is first order in the derivatives w.r.t. $t_m$, $L_1$ and $L_2$ contain second order derivatives acting on $Z = e^F$, resulting, for instance, in terms of the form $(\partial_0 F)^2 + \partial_0^2 F$ when converting to the free energy and giving rise to the final term in \eqref{523}.
Instead of expanding the Virasoro constraints \eqref{vir} in intersection numbers, we can also just expand the KdV hierarchy \eqref{kdv} (and string equation). Since both sets of equations carry identical information, so do the resulting recursion relations. One matches in a straightforward manner each consecutive term in \eqref{kdv} with those in the recursion relations \eqref{KdVrec}. We proceed with \eqref{KdVrec} now.
\subsection{Simplest cancellations}\label{sect:simplecancel}
One can use this recursive version of the KdV equations \eqref{KdVrec} to get differential equations for $\mathcal{F}(x_1\dots x_n)$ \eqref{58} \cite{faber2000logarithmic}, which can then be solved exactly \cite{okounkov2002generating,faber2000logarithmic}. For $\mathcal{F}(x)$ we start from \eqref{KdVrec} with $d_j=0$
\begin{equation}\label{recursionUnReduced}
(2n+1)\average{\tau_0^2\tau_n}=\average{\tau_0\tau_{n-1}}\average{\tau_0^3}+2\average{\tau_0^2\tau_{n-1}}\average{\tau_0^2}+\frac{1}{4}\,\average{\tau_0^4\tau_{n-1}}\,,
\end{equation}
which, using the string equation \eqref{426} and $\average{\tau_0^n}=\delta_{n,3}$ simplifies to
\begin{equation}\label{recursionFx}
2n\average{\tau_{n-2}}-\frac{1}{4}\,\average{\tau_{n-5}}=0\,.
\end{equation}
We can rewrite this recursion relation into a differential equation for $\mathcal{F}(x)$ by basically doing the inverse of what took us from the KdV equations to \eqref{KdVrec}.\footnote{The difference is that here we consider all $n$, but only one insertion from the exponential, whereas in the KdV hierarchy we consider the whole exponential but fixed $n$.} Multiplying with $x^{n-1}$ and summing over $n$ we find
\begin{equation}
2\partial_x \sum_n x^{n}\average{\tau_{n-2}}-\frac{1}{4}\sum_n x^{n-1} \average{\tau_{n-5}}=0\quad\Rightarrow \quad\bigg(2\partial_x x^2-\frac{1}{4}x^4\bigg)\mathcal{F}(x)=0\,.
\end{equation}
This equation has a unique solution once one imposes that the linear term is $\mathcal{F}(x)\supset x\average{\tau_1}=x/24$
\begin{equation}\label{Fx}
\mathcal{F}(x)=\frac{1}{x^2}\exp\bigg( \frac{x^3}{24}\bigg)\,.
\end{equation}
With the relation \eqref{zfromtau} this computes the exact one-boundary partition function for topological gravity with spectrum \eqref{topgravspec}, which reproduces the answer one obtains from double scaling the exactly solvable Gaussian matrix integral \cite{Saad:2019lba}
\begin{equation}
Z(\beta)=\frac{e^{\S}}{4\pi^{1/2}\beta^{3/2}}\exp\bigg(\frac{1}{3}\,\beta^3 e^{-2\S}\bigg)\,.
\end{equation}
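Reading off the Taylor coefficients of \eqref{Fx} gives $\average{\tau_{3g-2}}=1/(24^g g!)$, and one can confirm that these indeed satisfy \eqref{recursionFx} (take $n=3g$ there). A quick sketch in exact arithmetic (variable names are ours):

```python
# Sketch: iterate the one-point recursion 2n <tau_{n-2}> = (1/4) <tau_{n-5}>
# from the seed <tau_1> = 1/24, and check it reproduces the coefficients
# <tau_{3g-2}> = 1/(24^g g!) of F(x) = x^{-2} exp(x^3/24).
from fractions import Fraction
from math import factorial

tau = {1: Fraction(1, 24)}            # genus-1 seed <tau_1> = 1/24
for g in range(2, 11):                # <tau_{3g-2}> from <tau_{3g-5}>, via n = 3g
    tau[3 * g - 2] = Fraction(1, 4) * tau[3 * g - 5] / (2 * (3 * g))

for g in range(1, 11):
    assert tau[3 * g - 2] == Fraction(1, 24**g * factorial(g))

assert tau[4] == Fraction(1, 1152)    # genus 2
assert tau[7] == Fraction(1, 82944)   # genus 3
```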
We can obtain the two-boundary partition function in a similar manner. For $\mathcal{F}(x_1,x_2)$ one starts from \eqref{KdVrec} with $d_j=\delta_{j,q}$
\begin{equation}
(2n+1)\,\average{\tau_0^2\tau_n\tau_q}=\average{\tau_0\tau_{n-1}}\average{\tau_0^3\tau_q}+\average{\tau_0\tau_{n-1}\tau_q}+2\average{\tau_0^2\tau_{n-1}}\average{\tau_0^2\tau_q}+\frac{1}{4}\average{\tau_0^4\tau_{n-1}\tau_q}\,.\label{533}
\end{equation}
Using the string equation we find the relation
\begin{equation}
\mathcal{F}(x_1,x_2,0)=\sum_{n=0}^\infty\sum_{q=0}^\infty x_1^n x_2^q \,\average{\tau_0\tau_n\tau_q}=(x_1+x_2)\sum_{n=0}^\infty\sum_{q=0}^\infty x_1^n x_2^q\, \average{\tau_n\tau_q}=(x_1+x_2)\,\mathcal{F}(x_1,x_2)\,.
\end{equation}
Using tricks like this we find that multiplying \eqref{533} with $x_1^n x_2^q$ and summing over $n$ and $q$ results term by term in
\begin{align}
&2\partial_1\, x_1(x_1+x_2)\, \mathcal{F}(x_1,x_2,0)-(x_1+x_2)\,\mathcal{F}(x_1,x_2,0)\\&=x_2\,\mathcal{F}(0,x_2,0)\,\mathcal{F}(x_1,0,0)+x_1\,\mathcal{F}(x_1,x_2,0)\nonumber+2x_1\,\mathcal{F}(0,x_2,0)\,\mathcal{F}(x_1,0,0)+\frac{1}{4}x_2(x_1+x_2)^3\,\mathcal{F}(x_1,x_2,0)\,.
\end{align}
Using again the string equation $\mathcal{F}(x_1,0,0)=x_1^2\,\mathcal{F}(x_1)=\exp(x_1^3/24)$ one can rearrange this into
\begin{equation}
\bigg(1-\frac{1}{4}x_1x_2(x_1+x_2)+2\frac{x_1(x_1+x_2)}{x_2+2x_1}\partial_1 \bigg) \mathcal{F}(x_1,x_2,0)\,\exp(-\frac{x_1^3+x_2^3}{24})=1\,.
\end{equation}
This simplifies tremendously by introducing the coordinate $a=x_1x_2(x_1+x_2)$ and denoting the function the bracket acts on, $\mathcal{F}(x_1,x_2,0)\exp(-(x_1^3+x_2^3)/24)$, by $f(a)$; Mathematica then spits out the unique solution (the boundary condition comes from taking $x_1=0$ and using $\mathcal{F}(0,x_2,0)=\exp(x_2^3/24)$ as above)
\begin{equation}
\bigg(1-\frac{1}{4}a+2a\partial_a \bigg)f(a)=1\,,\quad f(0)=1\quad \Rightarrow\quad f(a)=\frac{(2\pi)^{1/2}}{a^{1/2}}\exp\bigg(\frac{a}{8}\bigg)\,\text{Erf}\bigg( \frac{a^{1/2}}{2^{3/2}}\bigg)\,.
\end{equation}
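As a numerical sanity check (the finite-difference step and sample points are arbitrary choices of ours), one can confirm that the stated $f(a)$ solves the differential equation:

```python
# Sketch: numerically verify that f(a) = sqrt(2*pi/a) * exp(a/8) * erf(sqrt(a/8))
# solves (1 - a/4 + 2a d/da) f(a) = 1, using a central finite difference.
import math

def f(a):
    return math.sqrt(2 * math.pi / a) * math.exp(a / 8) * math.erf(math.sqrt(a / 8))

for a in (0.3, 0.7, 1.9):
    h = 1e-6
    fprime = (f(a + h) - f(a - h)) / (2 * h)        # central difference
    residual = (1 - a / 4) * f(a) + 2 * a * fprime - 1
    assert abs(residual) < 1e-7
```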
From this one then obtains finally the exact answer for $\mathcal{F}(x_1,x_2)$
\begin{equation}\label{Fx1x2}
\mathcal{F}(x_1,x_2)=\frac{(2\pi)^{1/2}}{(x_1x_2)^{1/2}(x_1+x_2)^{3/2}}\exp\bigg(\frac{1}{24}(x_1+x_2)^3\bigg)\,\text{Erf}\bigg(\frac{1}{2^{3/2}}(x_1x_2(x_1+x_2))^{1/2}\bigg)\,,
\end{equation}
and using \eqref{zfromtau} one finds the exact connected two-boundary partition function \cite{Okuyama:2020ncd}
\begin{equation}
Z(\beta_1,\beta_2)_\text{conn}=\frac{e^{\S}}{4\pi^{1/2}(\beta_1+\beta_2)^{3/2}}\exp\bigg(\frac{1}{3}(\beta_1+\beta_2)^3e^{-2\S}\bigg)\,\text{Erf}\Big((\beta_1\beta_2(\beta_1+\beta_2))^{1/2}e^{-\S}\Big)\,.
\end{equation}
Note that this expression indeed reduces to \eqref{airyexact} in the $\tau$-scaling limit.
\subsection*{A comment on unstable surfaces}
We pause to make one comment about \eqref{Fx} and \eqref{Fx1x2}. Notice that they contain the terms $1/x^2$ and $1/(x_1 + x_2)$, which, judging from our conversion of the recursion relation into a differential equation, should not be there. Nevertheless, these terms are correct and should be included: they represent the contributions at $(g,n) = (0,1)$ and $(0,2)$, which are degenerate surfaces, i.e.\ they have zero area. In the mathematics literature these go under the name unstable surfaces. The matrix integral calculations naturally include these cases, and they should also be present in the gravity calculations. From the recursion relation we can see that they are there if we continue to negative $n$ and $q$. For instance, from \eqref{recursionFx} we see that the boundary condition $\average{\tau_1} = 1/24$ determines $\average{\tau_{-2}} = 1$, but $\average{\tau_{n}}$ with $n<-2$ (alongside $n=-1,0$) still vanishes. This indeed gives the $1/x^2$. Furthermore, this also implies that $\average{\tau_0\tau_{-1}} = 1$.
\subsection*{Cancellations for two boundaries}
As explained in the recent paper by Eynard, Lewanski and Ooms \cite{eynard2021natural}, something interesting happens when we write $\mathcal{F}(x_1,x_2)$ as a series expansion (using the known expansions of Erf and exponentials, see for instance equation (2.13) in \cite{okounkov2002generating}) in elementary symmetric polynomials $e_1=x_1+x_2$ and $e_2=x_1x_2$:
\begin{equation}
\mathcal{F}(x_1,x_2)=\sum_{g=0}^\infty\sum_{m=0}^{g}e_2^m e_1^{3g-1-2m}\frac{3^m}{24^g m!(g-m)!}\frac{1}{2m+1}(-1)^{m}\,.\label{ffexact}
\end{equation}
The label $g$ refers to genus, because of the weight $e^{-2g\S}$ that each term acquires in terms of $\beta_i$ variables \eqref{zfromtau}. Let us compare this with the generic expansion of $\mathcal{F}(x_1,x_2)$ in elementary symmetric polynomials and intersection numbers, following from \eqref{58} (this is elementary, but slightly tedious to find)
\begin{equation}
\mathcal{F}(x_1,x_2)=\sum_{g}\sum_{m=0}^{(3g-1)/2}e_2^m e_1^{3g-1-2m}\sum_{p=0}^m
\average{\tau_p\tau_{3g-1-p}}\frac{3g-1-2p}{3g-1-p-m}\frac{(3g-1-p-m)!}{(m-p)!(3g-1-2m)!}(-1)^{p+m}\,.
\end{equation}
Notice that this is identical to the term of top degree $m=3g-1$ in \eqref{z2bdyexp}, which is no accident: topological gravity is the case $t_2, t_3\dots =0$ of section \ref{sect:eucl}, for which only the top degree survives. This should become more clear in section \ref{sect:universalcanc}. We left implicit the factor $1/2$ for the case $p=m=(3g-1)/2$.
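Two sketch checks on \eqref{ffexact} (all function names are ours): first a numerical comparison with the closed form \eqref{Fx1x2} at a sample point, then an exact extraction of two-point intersection numbers from \eqref{ffexact}, compared with the string and dilaton equations and with the known value $\average{\tau_2\tau_3}_2=29/5760$:

```python
# Sketch: two checks on the exact expansion (ffexact).
# (i)  Numerically compare the truncated double sum with the closed form (Fx1x2).
# (ii) Extract two-point intersection numbers <tau_p tau_q>_g from it.
import math
from fractions import Fraction
from math import comb, factorial

def A(g, m):
    # coefficient of e2^m e1^(3g-1-2m) in (ffexact)
    return Fraction((-3)**m, 24**g * factorial(m) * factorial(g - m) * (2 * m + 1))

# (i) numeric comparison at a sample point
x1, x2 = 0.3, 0.5
e1, e2 = x1 + x2, x1 * x2
series = sum(float(A(g, m)) * e2**m * e1**(3 * g - 1 - 2 * m)
             for g in range(0, 16) for m in range(0, g + 1))
a = x1 * x2 * (x1 + x2)
closed = (math.sqrt(2 * math.pi) / (math.sqrt(x1 * x2) * e1**1.5)
          * math.exp(e1**3 / 24) * math.erf(math.sqrt(a / 8)))
assert abs(series - closed) < 1e-9

# (ii) coefficient of x1^p x2^q with p + q = 3g - 1
def two_point(p, q):
    g, r = divmod(p + q + 1, 3)
    assert r == 0
    return sum(A(g, m) * comb(3 * g - 1 - 2 * m, p - m)
               for m in range(0, min(p, g) + 1))

for g in (1, 2, 3, 4):
    tau_one = Fraction(1, 24**g * factorial(g))           # <tau_{3g-2}>_g
    assert two_point(0, 3 * g - 1) == tau_one             # string equation
    assert two_point(1, 3 * g - 2) == (2 * g - 1) * tau_one  # dilaton equation
assert two_point(2, 3) == Fraction(29, 5760)              # known <tau_2 tau_3>_2
```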
The noteworthy observation is that the exact expression \eqref{ffexact} at fixed genus $g$ has a maximal power $e_2^g$, whereas the selection rule \eqref{selection} of intersection theory also allows for powers $e_2^m$ with $g<m\leq(3g-1)/2$. This implies highly non-trivial cancellations in intersection numbers
\begin{equation}
\sum_{p=0}^m
\average{\tau_p\tau_{3g-1-p}}\frac{3g-1-2p}{3g-1-p-m}\frac{(3g-1-p-m)!}{(m-p)!(3g-1-2m)!}(-1)^{p}=0\,,\quad g<m<(3g-1)/2\,.\label{taucanc1}
\end{equation}
The simplest case $m=(3g-1)/2$ has appeared in the literature before, for instance \cite{Liu13896} (theorem 8)
\begin{equation}
\sum_{d_1 + d_2 = 3g-1} \braket{\tau_{d_1}\tau_{d_2}}(-1)^{d_1} = 0\,.
\end{equation}
Now, these cancellations \eqref{taucanc1} are exactly the same as the highest degree cancellations $m=3g-1$ that we predicted for volumes \eqref{genericcancel} associated with generic spectra. This is obvious because the expansion coefficients in \eqref{vexp} for the volumes are exactly identical to the intersection numbers in \eqref{expvolumes}
\begin{equation}
V_{g,d_1,d_2}=2^{d_1+d_2}\average{\tau_{d_1}\tau_{d_2}}_g\,,
\end{equation}
which for topological gravity are nonzero only if $d_1+d_2=3g-1$ \eqref{selection}. So, for topological gravity the only non-trivial cancellations in \eqref{genericcancel} are $m=3g-1$, the other cases are trivially satisfied because of the selection rule \eqref{selection}.
The fact that \eqref{ffexact} has a maximal power $e_2^g$ directly implies that the genus $g$ wormhole in topological gravity grows for late times no faster than
\begin{equation}
Z_g(\beta+\i T,\beta-\i T)_\text{conn}=\quad \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7]
\pgftext{\includegraphics[scale=1]{euclwormmany.pdf}} at (0,0);
\draw (-0.04, -2) node {genus $g$ wormholes};
\draw (-2.65, 2.2) node {$\beta+\i T$};
\draw (2.65, 2.2) node {$\beta-\i T$};
\draw (0, 0) node {$\dots$};
\end{tikzpicture}\quad \sim T^{2g+1}e^{-2g\S}\,,\label{545}
\end{equation}
because for late enough times $e_1\sim \beta$ and $e_2\sim T^2$. We have shown that the cancellations in the volumes required to make this happen can be derived directly from the KdV equations. The question then arises whether this is also true for \emph{generic} dilaton gravity theories, or generic spectral curves $\rho_0(E)$. We will demonstrate in section \ref{sect:universalcanc} that this \emph{is} the case, the universal scaling \eqref{545} can be derived using the KdV hierarchy.
To set up that argument, we first introduce a generalization of the cancellations \eqref{ffexact} in $\mathcal{F}(x_1,x_2)$ to the $n$-boundary correlators in topological gravity $\mathcal{F}(x_1\dots x_n)$.
\subsection{Multi-boundary cancellations}
Just like for the cases $\mathcal{F}(x_1)$ and $\mathcal{F}(x_1,x_2)$ that we presented in section \ref{sect:simplecancel}, one can get an exact formula for the $n$-boundary correlators in topological gravity $\mathcal{F}(x_1\dots x_n)$ via the KdV recursion relations. This was shown by Liu and Xu \cite{liu2011n}, who found a recursive formula to compute $\mathcal{F}(x_1\dots x_n)$ from $\mathcal{F}(x_1\dots x_m)$ with $m<n$. Neither the proof nor the particular equation is especially insightful; the point is that precise formulas exist and follow purely from KdV.
These formulas can then be expanded in elementary symmetric polynomials \cite{eynard2021natural}, like we did for the two point function in \eqref{ffexact}. From the intersection number formula \eqref{58} and simple dimensional analysis we get an expansion of the type
\begin{equation}
\mathcal{F}(x_1\dots x_n)=\sum_{g=0}^\infty \sum_{m_1=0}^\infty\dots \sum_{m_n=0}^\infty e_1^{m_1}e_2^{m_2}\dots e_n^{m_n}\,C_g(m_2\dots m_n)\,,\quad \sum_{j=1}^n j m_j=3g-3+n\,,
\end{equation}
where the constraint on the powers comes from the selection rule \eqref{selection}. For clarity
\begin{equation}
e_1=\sum_{j=1}^n x_j\,,\quad e_2=\sum_{j_1<j_2=1}^n x_{j_1}x_{j_2}\,,\quad e_3=\sum_{j_1<j_2<j_3=1}^n x_{j_1}x_{j_2}x_{j_3}\,,
\end{equation}
etcetera. Crucially, Eynard, Lewanski and Ooms \cite{eynard2021natural} found that many of the naively allowed expansion coefficients identically vanish
\begin{equation}
C_g(m_2\dots m_n)=0\,,\quad \sum_{j=2}^n m_j>g\,.\label{548}
\end{equation}
This generalizes the fact that the highest power in \eqref{ffexact} is $e_2^g$. They impressively checked this vanishing for all $n$ and $g\leq 7$, and for all $g$ and $n\leq 3$, using the explicit formulas of Liu and Xu \cite{liu2011n} (see also \cite{okounkov2002generating}) and the KdV equations (in various forms).
Taken at face value, this new constraint \eqref{548} does not seem too constraining on the individual powers of $e_3, e_4\dots$, because the original selection rule does not allow $m_3$ to grow faster than $g$, or $m_4$ faster than $3g/4$ etcetera. So one may wonder whether, beyond $m=2$, these cancellations \eqref{548} have any intuitive physical interpretation; we will now demonstrate that they do.
\section{Universal cancellations in dilaton gravity via open-closed duality}\label{sect:universalcanc}
In this section we show that the constraint \eqref{548} on the total power of $e_2,e_3\dots$ implies (and in fact is identical to) the universal maximal growth \eqref{545} for all double scaled matrix models (with square root edges \eqref{spectrum}), not just topological gravity. This late-time wormhole universality is key for the emergence of the plateau, which, in turn, is gravity's way of saying it is a discrete quantum system \cite{Cotler:2016fpe}.
Readers familiar with how those models are described using the KdV hierarchy may skip to section \ref{subsect:universalcanc}; for didactic purposes we will go slower.
\subsection{KdV equations around different backgrounds}\label{sect:4.1}
We have learned that because of the relation of the volumes with $\psi_i$ classes for topological gravity \eqref{43}
\begin{equation}
V_{g,n}(b_1\dots b_n)=\int_{\overline{\mathcal{M}}_{g,n}}\exp\bigg(\frac{1}{2}\sum_{i=1}^n b_i^2\, \psi_i\bigg)=\sum_{d_1=0}^\infty\frac{b_1^{2d_1}}{2^{d_1}d_1!}\dots\sum_{d_n=0}^\infty\frac{b_n^{2d_n}}{2^{d_n}d_n!}\average{\tau_{d_1}\dots \tau_{d_n}}_g\,,
\end{equation}
$n$-boundary partition functions can be expressed in terms of intersection numbers, as in \eqref{zfromtau}.
More generally, there is such a relation for all double scaled matrix integrals with a leading order spectrum of the type \eqref{spectrum}
\begin{equation}
\rho_0(E)=\frac{e^{\S}}{2\pi}\sum_{k=0}^\infty f_k\,E^{k+1/2}\,,\quad f_0=1\,.\label{52}
\end{equation}
In this more general case the volumes are computed in terms of intersection numbers as
\begin{equation}
V_{g,n}(b_1\dots b_n)=\sum_{d_1=0}^\infty\frac{b_1^{2d_1}}{2^{d_1}d_1!}\dots\sum_{d_n=0}^\infty\frac{b_n^{2d_n}}{2^{d_n}d_n!}\bigg\langle\tau_{d_1}\dots \tau_{d_n}\exp\bigg(\sum_{k=2}^\infty \g_k \tau_k\bigg)\bigg\rangle_g\,,\label{53}
\end{equation}
with a relation $f_k(\g_i)$ that we have yet to determine. Notice that we can get the correlators on the right from the same generating functional $F$ that satisfies the KdV hierarchy \eqref{Finter}, by expanding around a different set of KdV times $t_{k\,\text{total}}=\g_k+t_k$
\begin{align}
F=\bigg\langle\exp\bigg(\sum_{k=0}^\infty t_{k\,\text{total}} \tau_k\bigg)\bigg\rangle&=\prod_{i=0}^\infty \sum_{d_i}\frac{t_i^{d_i}}{d_i!}\bigg\langle \prod_{j=0}^\infty \tau_j^{d_j}\exp\bigg(\sum_{k=2}^\infty \g_k \tau_k\bigg)\bigg\rangle=\prod_{i=0}^\infty \sum_{d_i}\frac{t_i^{d_i}}{d_i!}\bigg\langle \prod_{j=0}^\infty \tau_j^{d_j}\bigg\rangle_\text{$\g_k$}\,,\label{fexpansions}
\end{align}
where in the last equality we have introduced shorthand notation $\average{\dots}_{\g_k}$ for correlators in the presence of an extra exponential of $\tau_k$ operators.
There are several ways to appreciate that \eqref{53} calculates the correlators of a double scaled matrix integral \cite{Saad:2019lba} with a spectral curve of the type \eqref{52} (or some model of two dimensional (dilaton) gravity). One way is to write out the Virasoro constraints \eqref{vir}, but now with the more general form of the creation and annihilation operators (we can always set $\g_0=\g_1=0$)
\begin{equation}
\alpha_m=
\begin{cases}
\frac{1}{2^{1/2}}(2m-1)!!\,\partial_{m-1}\, &m\geq 1
\\\frac{1}{2^{1/2}}\frac{1}{(-2m-1)!!}\,(t_{-m}-\delta_{m,-1}+\g_{-m})\, &m\leq 0\,.
\end{cases} \label{55}
\end{equation}
We can use the definition of $F$ \eqref{fexpansions} to see that we still have the property that a derivative with respect to $t_k$ is the creator of a $\tau_k$
\begin{equation}
\partial_k F=\sum_{\{d_i\}}\prod_{i=0}^\infty \frac{t_i^{d_i}}{d_i!}\bigg\langle \tau_k\prod_{j=0}^\infty \tau_j^{d_j}\bigg\rangle_\text{$\g_k$}\,,
\end{equation}
and furthermore multiplication with $t_k$ is still like removing or annihilating a $\tau_k$
\begin{equation}
t_k F=\sum_{\{d_i\}}\prod_{i=0}^\infty \frac{t_i^{d_i}}{d_i!}\bigg\langle d_k \tau_k^{d_k-1}\prod_{j\neq k}^\infty \tau_j^{d_j}\bigg\rangle_\text{$\g_k$}\,.
\end{equation}
Then we can write out the generators $L_n$ \eqref{421} and use $Z=\exp(F)$ to obtain recursion relations that are analogous to the Dijkgraaf-Verlinde$^2$ ones \eqref{523} but which depend explicitly on the $\g_k$, because $\g_k$ appears in \eqref{55}. These recursive equations to compute $\average{\tau_{d_1}\dots \tau_{d_n}}_{\g_k}$ can then be recovered alternatively from the topological recursion \cite{Eynard_2009} relations between volumes with spectral curve \eqref{52}, demonstrating the equivalence.
We emphasize again that we are always dealing with the same function $F$, the same KdV hierarchy \eqref{kdv} and the same string equation (which features the invariant $t_{k\,\text{total}}=t_k+\g_k$)
\begin{equation}
0=\sum_{k=0}^\infty (t_k+\g_k-\delta_{k,1})R_k\,,
\end{equation}
which in a correlation function reduces to the generalization of \eqref{426}
\begin{equation}
\bigg\langle\tau_0\prod_{i=1}^n\tau_{k_i}\bigg\rangle_\text{$\g_k$}-\sum_{j=1}^\infty \g_{j+1}\bigg\langle\tau_j\prod_{i=1}^n\tau_{k_i}\bigg\rangle_\text{$\g_k$}=\sum_{m=1}^n\bigg\langle \tau_{k_m-1}\prod_{i\neq m}^n \tau_{k_i}\bigg\rangle_\text{$\g_k$}\,.\label{stringeqtaugen}
\end{equation}
For multi-critical points with $E^{k+1/2}$ edge only $\g_{k+1}$ is nonzero and we recover the $n=0$ case of formula (4.1) in \cite{Dijkgraaf1991loop} (the formulas are more symmetric if we view the $-\delta_{k,1}$ as $\g_1=-1$ instead, which we will adopt below).
Let us now compute the spectrum $\rho_0(E)$ that follows from \eqref{fexpansions}. For this we need to be a bit more clever with the genus counting parameter and define \cite{Okuyama:2019xbv}
\begin{equation}
F(e^{\S},\g_k)=\sum_{g=0}^\infty e^{-2g\S}\average{\exp\bigg(\sum_{k=0}^\infty \g_k\tau_k\bigg)}_g\,,\quad R_{k+1}(e^{\S},\g_k)=\partial_0\,\partial_k\, F(e^{\S},\g_k)\,,
\end{equation}
with again $u(e^{\S},\g_k)=R_1(e^{\S},\g_k)$. Using the selection rule \eqref{selection} we observe that this is related to $F$ via $F(e^{\S},\g_k)=e^{-2\S} F(\g_k e^{\S 2(1-k)/3})$. Using these relations one rewrites the KdV recursion relations for $R_n$ \eqref{413} as (all equations below implicitly have the functions of $u(e^{\S},\g_k)$ that we just defined)
\begin{equation}
(2n+1)\partial_0 R_{n+1}=\partial_0 u R_n+2 u \partial_0 R_n +e^{-2\S}\frac{1}{4}\partial_0^3 R_n\,,\quad 0=\sum_{k=0}^\infty \g_k R_k\,.
\end{equation}
This redefinition may seem a bit like wasted energy, but the benefit is that we can now very easily solve the recursion relation to leading order in $e^{\S}$, because we can neglect the final term. With seed $R_0=1$, one finds $R_k=u^k/k!$ and the leading order string equation becomes quite simple
\begin{equation}
-\g_0=\sum_{k=1}^\infty \g_k\frac{u^k}{k!}=\mathcal{G}(u)\,.
\end{equation}
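One can check symbolically that $R_k=u^k/k!$ solves the recursion with the $e^{-2\S}$ term dropped; dividing out the common factor $\partial_0 u$, the claim is the polynomial identity $(2n+1)R_{n+1}'(u)=R_n(u)+2u\,R_n'(u)$. A sketch with exact coefficients (helper names are ours):

```python
# Sketch: verify (2n+1) R'_{n+1}(u) = R_n(u) + 2u R'_n(u) for R_k = u^k/k!,
# working with polynomials represented as coefficient lists in u.
from fractions import Fraction
from math import factorial

def R(k):                      # R_k = u^k / k!, as a coefficient list in u
    return [Fraction(0)] * k + [Fraction(1, factorial(k))]

def ddu(p):                    # d/du of a coefficient list
    return [i * c for i, c in enumerate(p)][1:]

def trim(p):                   # drop trailing zeros so lists compare cleanly
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

for n in range(0, 12):
    lhs = [(2 * n + 1) * c for c in ddu(R(n + 1))]
    rhs = add(R(n), [Fraction(0)] + [2 * c for c in ddu(R(n))])  # R_n + 2u R_n'
    assert trim(lhs) == trim(rhs)
```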
One can then use this simple structure to find $Z(\beta)$ to leading order as follows \cite{Okuyama:2019xbv,Johnson:2019eik,Johnson:2020exp,Johnson:2020heh}. In all the expressions we have written down so far $t_0$ was set to zero, but this makes using the KdV hierarchy impossible since there are $\partial_0$s everywhere. Thus we need to consider a partition function $Z(\b,t_0)$ with non-zero $t_0$. For that we need to use the contributions from unstable surfaces discussed below \eqref{Fx1x2}. In particular, consider
\begin{align}
\partial_0 Z(\beta,t_0) = e^{\S}\,\frac{\beta^{1/2}}{\pi^{1/2}}\sum_d (2\beta)^d \average{\tau_0\tau_d}_{\g_k} =e^{\S}\,\frac{1}{2\pi^{1/2}\beta^{1/2}}\sum_{d=0}^\infty (2\beta)^d R_d
=e^{\S}\,\frac{e^{2\beta u}}{2\pi^{1/2}\beta^{1/2}}\label{partialZ}\,,
\end{align}
where we used that a derivative with respect to $t_0$ inserts a factor of $\tau_0$,\footnote{To be very precise here: we work to first order in perturbation theory in $t_0$, which will be enough as we will later set $t_0$ to zero again.} the definition $R_{k+1}=\partial_0\,\partial_k\,F$, and that $R_d = u^d/d!$. Integrating \eqref{partialZ}, using that $\g_0=-\mathcal{G}(u)$ such that $\d \g_0=-\mathcal{G}'(u)\d u$, and putting $t_0=0$ again one obtains the leading order solution \cite{Okuyama:2019xbv}
\begin{equation}
Z(\beta)= \frac{e^{\S}}{2\pi^{1/2} \b^{1/2}} \int_{-2 u_0}^{\infty} \d u\,\mathcal{G}'(-u/2)\,e^{- \b u } \,,\label{515}
\end{equation}
with $u_0$ the smallest positive solution to $\mathcal{G}(u_0) = 0$. The spectral density is then found to be
\begin{equation}
\rho_0(E) = \frac{e^{\S}}{\pi}\int_{E_0}^{E}\d u \frac{\mathcal{G}'(-u/2)}{\sqrt{E-u}} \,,\label{4.16}
\end{equation}
with $E_0=2u_0$. In the case of JT we have
\begin{equation}
\mathcal{G}(-u/2) = \frac{u^{1/2}}{2\pi}I_1(2\pi u^{1/2}),
\end{equation}
which gives $E_0 = 0$ and the $\sinh$ spectral density. More generally, one finds an expansion of $\rho_0(E)$ in terms of $E^{k+1/2}$ as in \eqref{spectrum}, but with expansion coefficients that depend on $\g_k$ non-linearly, because $E_0$ depends non-linearly on these couplings. To sum things up: \eqref{53} computes volumes of two dimensional (dilaton) gravity theories, and the leading order spectrum can be found more or less directly from the KdV equations.\footnote{Some extra overall $e^{E_0 \beta}$ in the integral \eqref{expaexpa} does not change any of the statements about the power series.}
Before we proceed let us make three comments that will be important later.
\begin{enumerate}
\item JT gravity corresponds to the case
\begin{equation}
\g_{n+1}=(-1)^{n+1}\frac{(2\pi^2)^n}{n!}\,,
\end{equation}
which indeed reproduces the series expansion of $\rho_0(E)=e^{\S}\sinh(2\pi E^{1/2})/4\pi^2$. Of course we also have the known expression for the Weil-Petersson symplectic form \cite{Dijkgraaf:2018vnm}
\begin{equation}
V_{g,n}(b_1\dots b_n)=\int_{\overline{\mathcal{M}}_{g,n}}\exp\bigg(2\pi^2\kappa+\frac{1}{2}\sum_{i=1}^n b_i^2\, \psi_i\bigg)\,,
\end{equation}
where the $\kappa$ class represents the Weil-Petersson two form on punctured Riemann surfaces. If we compare this with our current construction we recover the known duality \cite{Dijkgraaf:2018vnm,eynard2011recursion}
\begin{equation}
e^{2\pi^2 \kappa} \quad \Leftrightarrow\quad \exp\left(\sum_{k=2}^\infty (-1)^k\frac{(2\pi^2)^{k-1}}{(k-1)!}\tau_k\right)\,,\label{520}
\end{equation}
which is to be understood as holding within expectation values.
\item This relation between an exponential of $\tau_k$ operators and matrix integrals with different spectral densities should be considered an example of open-closed (string) duality. For instance, in \eqref{53} the left-hand side is the ``closed'' picture: we compute $n$-boundary amplitudes in a specific gravity theory. The right-hand side, however, is a sum over $m\geq n$-boundary amplitudes in topological gravity
\begin{equation}\label{openInter}
\average{\tau_{d_1}\dots \tau_{d_n}}_{\g_k}=\average{\tau_{d_1}\dots \tau_{d_n}}+\sum_{k_1=2}^\infty \g_{k_1} \average{\tau_{d_1}\dots \tau_{d_n}\tau_{k_1}}+\frac{1}{2}\sum_{k_1=2}^\infty\sum_{k_2=2}^\infty\g_{k_1}\g_{k_2} \average{\tau_{d_1}\dots \tau_{d_n}\tau_{k_1}\tau_{k_2}}+\dots
\end{equation}
This should be viewed as the ``open'' description \cite{Gaiotto:2003yb}. The identity \eqref{520} should be interpreted in the same way. The $\kappa$ two forms are integrated over $\overline{\mathcal{M}}_{g,n}$, whereas the $\tau_k$ correspond to Chern classes $c_1(L_a)^{\wedge k}$ associated with $m-n$ extra marked points $x_a$ integrated over $\overline{\mathcal{M}}_{g,m}$ with $m\geq n$. In string language, the exponential of $\tau_k$'s is like inserting D-branes (exponentials of boundaries), see also Fig. \ref{fig:openclosed}.
\item Another side of the coin is the relation with ribbon graphs. We demonstrated in section \ref{sect:3.1} that topological gravity can be thought of in the same way as JT gravity, but where the higher genus Riemann surfaces one glues to are ribbon graphs with large lengths $b_i$. In turn, the full JT theory also has a ribbon graph interpretation; actually two, related by open-closed duality.
The open side of the interpretation is essentially rewriting \eqref{openInter} in terms of ribbon graphs using Kontsevich's insights, which we explained in section \ref{sect:3.1}. The crucial thing there is then that the JT ribbon graphs in the open description are sums of the usual (Airy) ribbon graphs, but with additional faces that are weighted by the $\g_i$ instead of the $b_i^2/2$.
The closed perspective is that one takes the usual (Airy) ribbon graphs, but when computing the volume of moduli space one uses not the flat measure, rather the $\kappa$ class implies a non-trivial measure
\begin{equation} \label{muJT}
\mu(\{\hat{\ell}_i\},\{b_i\})_{\rm JT} = \exp\bigg(2\pi^2\kappa+\frac{1}{2}\sum_{i=1}^n b_i^2\, \psi_i\bigg) = \exp \bigg( - \sum_{i<j}^{6g-6+2n} W^{-1}_{ij} \d \hat{\ell}_i \wedge \d \hat{\ell}_j \bigg)\,,
\end{equation}
with $W_{ij}$ the matrix
\begin{equation}
W_{ij} = \sum_{p \in \g_i \cap \g_j} \cos \theta_p\,.
\end{equation}
Here $\g_i$ are $6g-6+2n$ distinct simple closed geodesics with length $\hat{\ell}_i$, and $\theta_p$ is the intersection angle between two of them. These $\hat{\ell}_i$ are crucially not the same as the lengths $\ell_j$ we encountered around equation \eqref{3.4}. For $\hat{\ell}_i$ we picked a set of simple closed geodesics $\g_i$ on the surface, whereas in \eqref{3.4} we looked at the edges of a ribbon graph associated to the surface. How to construct the $\g_i$ from the ribbon graph is non-trivial. As explained in \cite{do2008intersection}, there is an algorithmic way of constructing the simple closed geodesics from the ribbon graph (these can also be disjoint unions of curves), but since the ribbon graph is trivalent\footnote{The moduli spaces of ribbon graphs of other valencies have dimension strictly less than $6g-6+2n$.} one will get $6g-6+3n$ geodesics, which are $n$ too many. One can eliminate them by noticing that the analogue of the matrix $W_{ij}$ will have rank $6g-6+2n$, so there is a set of rows and columns one can select to get the right geodesics. This is thus rather cumbersome, but luckily at large $b_i$ the $\ell_i$ and $\hat{\ell}_i$ have a simple relation \cite{do2008intersection}, resulting in \eqref{3.6}.
At any rate, the closed picture is thus more complicated only in the sense that we would have to write \eqref{muJT} in terms of the edge lengths $\ell_i$ of the usual ribbon graphs.
Another point to note is that the relation with the perhaps more familiar measure in terms of the Fenchel-Nielsen length and twist coordinates $(b_j,\tau_j)$ is also not easy. One would have to write the twist and length variables in terms of the edge variables $\ell_j$ of ribbon graphs. The relation with the $\hat{\ell}_i$ is a bit easier, but then one still needs to convert $\hat{\ell}_i$ to the $\ell_i$ variables, which, as explained above, is hard.
\end{enumerate}
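As a consistency check of the first comment above, the quoted JT couplings can be compared numerically with the Bessel-function form of $\mathcal{G}(-u/2)$, evaluating $I_1$ by its Taylor series (truncation orders and sample points are arbitrary choices of ours):

```python
# Sketch: numerically check that the JT couplings gamma_1 = -1,
# gamma_{n+1} = (-1)^(n+1) (2 pi^2)^n / n!  give
#   G(-u/2) = sqrt(u) I_1(2 pi sqrt(u)) / (2 pi),
# with I_1 evaluated by its own Taylor series.
import math

def G(v, terms=60):
    # G(v) = sum_{k>=1} gamma_k v^k / k!  with the JT couplings above
    total = -v                                       # gamma_1 = -1 term
    for n in range(1, terms):
        gamma = (-1)**(n + 1) * (2 * math.pi**2)**n / math.factorial(n)
        total += gamma * v**(n + 1) / math.factorial(n + 1)
    return total

def I1(z, terms=60):
    # modified Bessel function I_1(z) = sum_m (z/2)^(2m+1) / (m! (m+1)!)
    return sum((z / 2)**(2 * m + 1) / (math.factorial(m) * math.factorial(m + 1))
               for m in range(terms))

for u in (0.1, 0.4, 1.0):
    lhs = G(-u / 2)
    rhs = math.sqrt(u) * I1(2 * math.pi * math.sqrt(u)) / (2 * math.pi)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

Term by term, the coupling $\g_{n+1}$ contributes $\pi^{2n}u^{n+1}/(2\,n!(n+1)!)$ to $\mathcal{G}(-u/2)$, which is exactly the $m=n$ term of the $I_1$ series.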
The point is that you can compute complicated things in a simple theory (many-boundary observables in topological gravity) and learn about simple things in a complicated theory (few-boundary observables in generic two-dimensional gravity models). Eynard, Lewanski and Ooms have such a complicated claim \eqref{548} in a simple theory; we will recast it into a simple claim \eqref{expaexpa} in complicated theories.
\begin{figure}[t]
\centering
\begin{equation}
\quad \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7]
\pgftext{\includegraphics[scale=1]{openclosed1.pdf}} at (0,0);
\draw (-0.04, -2) node {complicated action};
\draw (-0.04, 2) node {simple observable};
\end{tikzpicture}\quad=\quad\sum \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7]
\pgftext{\includegraphics[scale=1]{openclosed2.pdf}} at (0,0);
\draw (-0.04, -2.5) node {simple action};
\draw (-0.04, 2.5) node {complicated observable};
\draw (0, 0) node {$\dots$};
\end{tikzpicture}\quad\nonumber
\end{equation}
\caption{Open-closed duality in a nutshell. Here we have in mind computing $V_{1,1}(b)$ in the theory with an exponential of $\tau_k$ inserted in the action, see \eqref{69} below. Either we use the complicated action \eqref{69} (left); or we expand out the deformations (right), in which case we have a sum over cusp-defects but we just use the simple JT gravity action at the cost of computing a more complicated observable with many boundaries (or operators). Alternatively, in the context of this section, we could think of the left picture as JT gravity and the right picture as Airy with an exponential of \eqref{520} inserted. The idea is always the same.}
\label{fig:openclosed}
\end{figure}
\subsection{Universal cancellations in all theories}\label{subsect:universalcanc}
Consider now the two-boundary correlator in a generic background $\g_k$, or for any double scaled matrix integral with square root spectral edge. According to \eqref{zfromtau} and \eqref{53} we have
\begin{align}
Z(\beta_1,\beta_2)_\text{conn}&=\frac{\beta_1^{1/2}}{\pi^{1/2}}\,\frac{\beta_2^{1/2}}{\pi^{1/2}}\,\mathcal{F}(2\beta_1,2\beta_2)_{\g_k}=\frac{\beta_1^{1/2}}{\pi^{1/2}}\,\frac{\beta_2^{1/2}}{\pi^{1/2}}\sum_{g=0}^\infty e^{-2g\S}\,\mathcal{F}_g(2\b_1,2\b_2)_{\g_k}\,,\label{zfromtaugen}
\end{align}
with the generating functional
\begin{equation}
\mathcal{F}_g(x_1,x_2)_{\g_k}=\sum_{d_1=0}^\infty\sum_{d_2=0}^\infty x_1^{d_1} x_2^{d_2}\bigg\langle\tau_{d_1} \tau_{d_2}\exp\bigg(\sum_{k=2}^\infty \g_k \tau_k\bigg)\bigg\rangle_g\,.\label{4.22}
\end{equation}
Now let us introduce a new infinite set of variables $y_i$ such that
\begin{equation}
\g_k=\sum_{i=0}^\infty y_i^k\,.
\end{equation}
Then expanding out the generating functional gives
\begin{equation}
\mathcal{F}_g(x_1,x_2)_{\g_k}=\sum_{m=0}^\infty \frac{1}{m!}\sum_{i_1=0}^\infty\dots \sum_{i_m=0}^\infty \mathcal{F}_g(x_1,x_2,y_{i_1}\dots y_{i_m})\,,
\end{equation}
which features the $n\geq 2$-boundary correlators of topological gravity
\begin{equation}
\mathcal{F}_g(x_1\dots x_n)=\sum_{d_1=0}^\infty\dots \sum_{d_n=0}^\infty x_1^{d_1}\dots x_n^{d_n}\average{\tau_{d_1}\dots \tau_{d_n}}_g=\sum_{m_1=0}^\infty\dots \sum_{m_n=0}^\infty e_1^{m_1}e_2^{m_2}\dots e_n^{m_n}\,C_g(m_2\dots m_n)\,,
\end{equation}
which as we discussed around \eqref{548} has a constrained expansion in elementary symmetric polynomials
\begin{equation}
C_g(m_2\dots m_n)=0\,,\quad \sum_{j=2}^n m_j>g\,.\label{548bis}
\end{equation}
At late times in the spectral form factor computation $x_1\sim \i T$ and $x_2\sim -\i T$, but none of the $y_i$ scale in any way with $T$; these are just parameters inherent to the theory. This means that $e_2\sim e_3\sim \dots \sim T^2$ and therefore
\begin{equation}
e_1^{m_1}e_2^{m_2}\dots e_n^{m_n}\sim T^{2\sum_{j=2}^n m_j}\,.\label{4.27}
\end{equation}
The power of $T$ that appears here is precisely the combination that the KdV equations constrain to be at most $2 g$. Hence we have derived that $\mathcal{F}_g(x_1,x_2)_{\g_k}$ grows no faster than $T^{2g}$ for generic $\g_k$
\begin{equation}
C_g(m_2\dots m_n)=0\,,\quad \sum_{j=2}^n m_j>g\quad \Rightarrow \quad Z_g(\beta+\i T,\beta-\i T)_\text{conn}\sim T^{2g+1}e^{-2g\S}\,.
\end{equation}
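The scaling \eqref{4.27} can be made very concrete in a small example. The following sympy sketch (with three spectator parameters $y_1,y_2,y_3$, a truncation we choose purely for illustration) confirms that every elementary symmetric polynomial $e_j$ with $j\geq 2$ in $x_1=\i T$, $x_2=-\i T$ and the $y_i$ grows exactly as $T^2$, because all terms containing a single $x$ cancel against each other ($x_1+x_2=0$):

```python
import itertools
import sympy as sp

T, y1, y2, y3 = sp.symbols('T y1 y2 y3', positive=True)
xs = [sp.I*T, -sp.I*T, y1, y2, y3]  # x1 ~ iT, x2 ~ -iT, plus T-independent y_i

def e(j):
    # elementary symmetric polynomial e_j in the variables xs
    return sp.expand(sum(sp.prod(c) for c in itertools.combinations(xs, j)))

degrees = {j: sp.degree(e(j), T) for j in range(1, len(xs) + 1)}
# e_1 = y1 + y2 + y3 is T-independent; every e_j with j >= 2 scales as T^2
```

Hence any monomial $e_1^{m_1}e_2^{m_2}\dots e_n^{m_n}$ scales as $T^{2\sum_{j\geq 2}m_j}$, as used above.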
In summary, we have used the KdV equations to prove a universal late-time scaling behavior for genus $g$ wormhole amplitudes in generic double scaled matrix integrals (or two dimensional gravity models). We explained in section \ref{sect:eucl} why this behavior was key for the emergence of the plateau.
In fact we believe that the arrow works both ways
\begin{equation}
C_g(m_2\dots m_n)=0\,,\quad \sum_{j=2}^n m_j>g\quad \Leftrightarrow \quad Z_g(\beta+\i T,\beta-\i T)_\text{conn}\sim T^{2g+1}e^{-2g\S}\,,\label{iff}
\end{equation}
in other words, this new type of wormhole universality at late times also implies all of the cancellations that Eynard, Lewanski and Ooms \cite{eynard2021natural} found. We demonstrate this in appendix \ref{app:reverse}.
\section{KdV equations in dilaton gravity and the matrix integral}\label{sect:interpretation}
In the previous sections we primarily focused on the intersection theory and its integrable structure by itself, but all of this also has a natural interpretation in dilaton gravity and the matrix integral. The general message that we want to convey here is that the KdV equations have many applications in gravity; it seems we have only scratched the surface.
In \textbf{section \ref{sect:KdVdilatongravity}} we prove that the $\tau_k$ can be interpreted as local operators in dilaton gravity which create cusps (sharp defects \cite{Witten:2020wvy,Maxfield:2020ale,Mertens:2019tcm}). KdV backgrounds with different $\g_k$ correspond with inserting a gas of such cusps in JT gravity, this changes the dilaton potential, and in that sense the different KdV backgrounds are different dilaton gravity models.
The KdV equations mostly describe how observables in dilaton gravity change under changes in the action; the answer is basically that we compute things with a new spectral curve. Likewise, the string and dilaton equations have an interpretation in dilaton gravity as transformation rules for observables under changes in the action. They reduce to analogues of the Dijkgraaf-Verlinde$^2$ loop equations \cite{Dijkgraaf:1991qh}.
In \textbf{section \ref{sect:matrix}} we present a clean matrix integral dual for the $\tau_k$ operators, improving on equations that appeared in \cite{Maldacena:2004sn, Gross1989nonperturbative,Blommaert:2021gha,DOUGLAS1990176,douglas1990strings}.
\subsection{Dilaton gravity}\label{sect:KdVdilatongravity}
We start by recalling the relation \eqref{53} between Weil-Petersson volumes and $\tau_k$ correlators in the JT gravity background \eqref{520}
\begin{equation}
V_{g,n}(b_1\dots b_n)=\sum_{d_1=0}^\infty\frac{b_1^{2d_1}}{2^{d_1}d_1!}\dots\sum_{d_n=0}^\infty\frac{b_n^{2d_n}}{2^{d_n}d_n!}\average{\tau_{d_1}\dots\tau_{d_n}}_\kappa\,.\label{61}
\end{equation}
Furthermore \cite{tan2006generalizations,do2009weil,do2011moduli,Cotler:2019nbi,Maxfield:2020ale} we also know that correlators of defects with opening angle $\alpha_i<\pi$ in JT gravity are analytic continuations of Weil-Petersson volumes to $b_i=\i\alpha_i$
\begin{equation}
\average{\mathcal{O}_\text{D}(\alpha_1)\dots \mathcal{O}_\text{D}(\alpha_n)}_{g\,\text{conn}}= V_{g,n}(\i \alpha_1\dots \i \alpha_n)=\sum_{d_1=0}^\infty \frac{(-1)^{d_1}}{2^{d_1}d_1!}\,\alpha_1^{2d_1}\dots\sum_{d_n=0}^\infty \frac{(-1)^{d_n}}{2^{d_n}d_n!}\,\alpha_n^{2d_n}\average{\tau_{d_1}\dots\tau_{d_n}}_\kappa \,,\label{62}
\end{equation}
with $\mathcal{O}_\text{D}(\alpha)$ the dilaton gravity operator that creates a defect with opening angle $\alpha$\footnote{One can use several quantization schemes for defects in JT gravity \cite{Witten:2020wvy,Turiaci:2020fjj}, leading to slightly different expressions for the defect operator. We are using the most intuitive conventions of \cite{Witten:2020wvy}. The first equality in \eqref{64} is scheme-independent, the second equality can be directly modified to a different scheme. The scheme of \cite{Turiaci:2020fjj} may be more appropriate for doing semiclassics with the action in \eqref{69}.} \cite{Mertens:2019tcm} (using Gauss-Bonnet one can check that this indeed creates the expected angular defect \cite{Mertens:2019tcm,Louko:1995jw})
\begin{equation}
\mathcal{O}_\text{D}(\alpha)\quad \Leftrightarrow\quad\int \d^2 x \sqrt{g}\, e^{-(2\pi-\alpha) \Phi}\quad \Leftrightarrow \quad \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7]
\pgftext{\includegraphics[scale=1]{boundarya.pdf}} at (0,0);
\draw (-0.04, -1.5) node {conical defect};
\draw (-0.5, 0) node {$\a$};
\draw (-2,0) node {$\dots$};
\end{tikzpicture}\quad\label{63}
\end{equation}
Equation \eqref{62} basically means that this defect operator is a generating functional of $\tau_k$ operators
\begin{equation}
\tau_k\quad \Leftrightarrow\quad (-1)^k \frac{2^k k!}{(2k)!}\,\partial_\alpha^{2k}\,\mathcal{O}_\text{D}(\alpha)\rvert_{\alpha=0}\quad \Leftrightarrow\quad \frac{(-1)^k}{(2k-1)!!}\,\int\d^2 x\sqrt{g}\,\Phi^{2k}\,e^{-2\pi\Phi}\,,\label{64}
\end{equation}
where in the second equality we inserted the dilaton gravity meaning of $\mathcal{O}_\text{D}(\alpha)$. Another way of stating this is that we have the obvious identity
\begin{equation}
\int \d^2 x \sqrt{g}\, e^{-(2\pi-\alpha) \Phi}\quad \Leftrightarrow \quad \exp\bigg(\frac{1}{2}(\i\alpha )^2\psi_\text{extra}\bigg)\,,\quad \alpha<\pi\,.\label{65}
\end{equation}
Here $\psi_{\text{extra}}$ is the first Chern class attached to the extra marked point, constructed in a similar way as \eqref{43}. The same equation \eqref{64} is found by working directly with geodesic boundaries and expanding around $b=0$ whilst using the dilaton gravity description of a geodesic boundary \cite{Blommaert:2021fob}\footnote{The reader should not dwell on the $e^{-\S}$ prefactor here, which just signifies that we choose to count this as a hole rather than a local operator topologically. For defects and $\tau_k$ operator insertions there is no such prefactor.}
\begin{equation}
\mathcal{O}_\text{G}(b)\quad \Leftrightarrow\quad e^{-\S}\int \d^2 x \sqrt{g}\, e^{-2\pi \Phi} \cos\left(b\Phi\right)\quad \Leftrightarrow\quad \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7]
\pgftext{\includegraphics[scale=1]{boundaryb.pdf}} at (0,0);
\draw (-0.04, -1.5) node {geodesic boundary};
\draw (1.75, 0) node {$b$};
\draw (-2,0) node {$\dots$};
\end{tikzpicture}\quad\label{expansion}
\end{equation}
We learn from \eqref{64} that the $\tau_k$ operators should be interpreted in JT gravity as cusps (infinitely sharp defects with $\alpha=0$) with an additional insertion of $\Phi^{2k}$ at the cusp. This fits nicely with the stringy intuition that local vertex operators (here $\tau_k$) can be viewed as closed string states coming in from $\infty$: the infinity is the fact that the cusp is infinitely sharp, and the state is specified by the $\Phi^{2k}$
\begin{equation}
\tau_k\quad \Leftrightarrow\quad \frac{(-1)^k}{(2k-1)!!}\,\int\d^2 x\sqrt{g}\,\Phi^{2k}\,e^{-2\pi\Phi}\quad \Leftrightarrow \quad \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7]
\pgftext{\includegraphics[scale=1]{boundarykink.pdf}} at (0,0);
\draw (-0.04, -1.5) node {(nearly) cusp defect};
\draw (2.5, 0) node {$\a=0$};
\draw (-1.5,0) node {$\dots$};
\end{tikzpicture}\quad\label{5.7}
\end{equation}
Suppose now that we take the KdV hierarchy and we consider the following total background $\g_k$
\begin{equation}
\exp\bigg(2\pi^2\kappa+\sum_{k=2}^\infty t_k \tau_k\bigg)=\exp\bigg(\sum_{k=2}^\infty \g_k \tau_k\bigg)\,,
\end{equation}
where the relation between $\g_k$ and $t_k$ follows from the expansion of the $\kappa$ class in $\tau_k$ operators \eqref{520}. Because the $2\pi^2\kappa$ specifies JT gravity, we can view the remainder of the exponential as an operator insertion in JT gravity. Using the dictionary \eqref{64} we find that, because the $t_k \tau_k$ are in the exponential, we obtain a deformation of the JT gravity dilaton potential
\begin{equation}
\exp\bigg(2\pi^2\kappa+\sum_{k=2}^\infty t_k \tau_k\bigg)\quad \Leftrightarrow \quad \exp\bigg(\S\chi+\frac{1}{2}\int\d^2 x\sqrt{g} \,\Phi (R+2)+ \int\d^2 x\sqrt{g}\sum_{k=2}^\infty \frac{(-1)^k }{(2k-1)!!}\,t_k\,\Phi^{2k}\,e^{-2\pi\Phi} \bigg)\,.\label{69}
\end{equation}
These deformations span the class of dilaton gravities that were discussed in \cite{Maxfield:2020ale,Witten:2020ert,Witten:2020wvy}. This formula is exactly true as long as the deformation decays for large $\Phi$ no slower than $e^{-\pi \Phi}$.
Consider now the string equation \eqref{stringeqtaugen} which for JT gravity $\g_{j+1}=-(-2\pi^2)^j/j!$ becomes
\begin{equation}
\sum_{a=0}^\infty \frac{(-2\pi^2)^a}{a!}\bigg\langle\tau_a\prod_{i=1}^n\tau_{k_i}\bigg\rangle_\k=\sum_{m=1}^n\bigg\langle \tau_{k_m-1}\prod_{i\neq m}^n \tau_{k_i}\bigg\rangle_\k\,.
\end{equation}
Note that the operator that is being ``inserted'' on the left (replacing the role of the so-called puncture operator $\tau_0$ in topological gravity) is related through \eqref{62} to a Weil-Petersson volume evaluated at $\alpha=2\pi$. We can then rewrite the string equation by summing over the other $\tau_k$ in the correlator with appropriate prefactors, such that they correspond to inserting geodesic boundaries too as in \eqref{61}. One finds that the string equation becomes
\begin{equation}
V_{g,n+1}(2\pi \i,\,b_1\dots b_n)=\int_0^{b_1} \d a_1 a_1\,V_{g,n}(a_1,b_2\dots b_n)+\dots+\int_0^{b_n} \d a_n a_n\,V_{g,n}(b_1\dots b_{n-1},a_n)\,.\label{rel1}
\end{equation}
One can indeed check this explicitly, for instance
\begin{equation}
V_{1,2}(2\pi \i,b)=\frac{b^4}{192}+\pi^2\frac{b^2}{24}=\int_0^b \d a a\,V_{1,1}(a)\,,\quad V_{1,1}(a)=\frac{a^2}{48}+\frac{\pi^2}{12}\,.
\end{equation}
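This check is immediate with sympy, using Mirzakhani's closed form $V_{1,2}(b_1,b_2)=\frac{1}{192}(b_1^2+b_2^2)^2+\frac{\pi^2}{12}(b_1^2+b_2^2)+\frac{\pi^4}{4}$ (an input we supply by hand here, not something derived in the text):

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)

V11 = lambda x: x**2/48 + sp.pi**2/12
V12 = lambda b1, b2: (b1**2 + b2**2)**2/192 + sp.pi**2*(b1**2 + b2**2)/12 + sp.pi**4/4

lhs = sp.expand(V12(2*sp.pi*sp.I, b))    # V_{1,2}(2 pi i, b)
rhs = sp.integrate(a*V11(a), (a, 0, b))  # int_0^b da a V_{1,1}(a)
residual = sp.simplify(lhs - rhs)        # should vanish
```

The continuation indeed lands on $b^4/192+\pi^2 b^2/24$ on the nose.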
This relation \eqref{rel1} should be viewed as the first term in the Taylor expansion of Mirzakhani's recursion around $b=2\pi \i$. Indeed, Mirzakhani's recursion relations are \cite{mirzakhani2007simple,Stanford:2019vob}
\begin{align}
b\, V_{g, n+1}(b, B)=\,&\frac{1}{2} \int_{0}^{\infty} \d b' b'\,\d b'' b''\,D(b,b',b'')\bigg( V_{g-1,n+2}(b',b'',B)+\sum_{\text{stable}} V_{h_1,n_1}(b',B_1) V_{h_2,n_2}(b'',B_2)\bigg) \nonumber\\
&+\sum_{i=1}^{n} \int_{0}^{\infty} \d b' b'\,(b-T(b,b', b_i))\, V_{g,n-1}(b', B/b_i)\,.\label{recmirza}
\end{align}
with
\begin{equation}
D(b,b',b'') = 2 \log\left(\frac{e^{b/2} + e^{(b'+b'')/2}}{e^{-b/2} + e^{(b'+b'')/2}}\right),\quad T(b,b',b'') = \log \left( \frac{\cosh\frac{b''}{2} + \cosh\frac{b-b'}{2}}{\cosh\frac{b''}{2} + \cosh\frac{b+b'}{2}} \right).
\end{equation}
Inserting $b=2\pi\i$ (and being careful with branch cuts upon doing the analytic continuation from real $b$) one obtains
\begin{equation}
D(2\pi \i,b^{\prime},b^{\prime \prime})=0\,,\quad T(2\pi\i,b',b_k)=2\pi\i\,\theta(b'-b_k)\,,\label{616}
\end{equation}
which reduces the recursion relation \eqref{recmirza} directly to \eqref{rel1}. The other Virasoro constraints \eqref{vir} in the KdV hierarchy compute subleading terms in the Taylor expansion of the volumes around $b=2\pi\i$.
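The vanishing of $D$ at $b=2\pi\i$ can be confirmed symbolically; the continuation of $T$ is more delicate because of the branch cuts, so we only check $D$ in this sketch:

```python
import sympy as sp

b = sp.symbols('b')
bp, bpp = sp.symbols('b1 b2', positive=True)

D = 2*sp.log((sp.exp(b/2) + sp.exp((bp + bpp)/2)) /
             (sp.exp(-b/2) + sp.exp((bp + bpp)/2)))

# at b = 2*pi*i both exponentials e^{+-b/2} equal -1, the ratio is 1 and D vanishes
D_at_2pii = sp.simplify(D.subs(b, 2*sp.pi*sp.I))
```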
For instance, working out the $L_0$ constraint (or dilaton equation) for JT gravity gives
\begin{equation}
\sum_{a=0}^\infty(2a+3) \frac{(-2\pi^2)^a}{a!}\bigg\langle\tau_{a+1}\prod_{i=1}^n\tau_{k_i}\bigg\rangle_\k=\sum_{m=1}^n(2k_m+1)\bigg\langle\prod_{i=1}^n \tau_{k_i}\bigg\rangle_\k\,,
\end{equation}
which in terms of volumes becomes
\begin{equation}
\bigg(\frac{2}{b}\partial_b+\partial_b^2\bigg)\, V_{g,n+1}(b=2\pi\i ,b_1\dots b_n)=\bigg(n+\sum_{i=1}^n b_i\partial_{b_i}\bigg)\,V_{g,n}(b_1\dots b_n)\,,\label{dildil}
\end{equation}
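Concretely, with Mirzakhani's closed forms $V_{1,1}(b_1)=\frac{b_1^2}{48}+\frac{\pi^2}{12}$ and $V_{1,2}(b,b_1)=\frac{1}{192}(b^2+b_1^2)^2+\frac{\pi^2}{12}(b^2+b_1^2)+\frac{\pi^4}{4}$ (supplied by hand), a sympy check of \eqref{dildil} for $n=1$:

```python
import sympy as sp

b, b1 = sp.symbols('b b1')

V12 = (b**2 + b1**2)**2/192 + sp.pi**2*(b**2 + b1**2)/12 + sp.pi**4/4
V11 = b1**2/48 + sp.pi**2/12

# (2/b d_b + d_b^2) V_{1,2} evaluated at b = 2*pi*i
lhs = (2/b*sp.diff(V12, b) + sp.diff(V12, b, 2)).subs(b, 2*sp.pi*sp.I)
# (n + b1 d_{b1}) V_{1,1} with n = 1
rhs = V11 + b1*sp.diff(V11, b1)

residual = sp.simplify(lhs - rhs)  # should vanish
```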
One can check this relation explicitly, for instance for $V_{1,2}(b,b_1)$ and $V_{1,1}(b_1)$. In terms of partition functions \eqref{zfromtau} the string equation \eqref{rel1} becomes
\begin{equation}
Z(b=2\pi i,\,\beta_1\dots \beta_n)_\text{conn}=2\sum_{i=1}^n \beta_i\,Z(\beta_1\dots\beta_n)_\text{conn}\,,
\end{equation}
and the dilaton equation \eqref{dildil} becomes
\begin{equation}
\bigg(\frac{2}{b}\partial_b+\partial_b^2\bigg)\,Z(b=2\pi i,\,\beta_1\dots \beta_n)_\text{conn}=2\sum_{i=1}^n \beta_i\partial_{\beta_i}\,Z(\beta_1\dots\beta_n)_\text{conn}\,.
\end{equation}
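As a leading-order sanity check of the first of these identities for $n=1$: the left-hand side continues the trumpet to $b=2\pi\i$, while the right-hand side involves the JT disk partition function, for which we assume the standard expression $Z_\text{disk}(\beta)=e^{\S}\,e^{\pi^2/\beta}/(4\pi^{1/2}\beta^{3/2})$. We strip the overall $e^{\S}$ factors, which match on the two sides given the $e^{-\S}$ convention in \eqref{expansion}; this is only a plausibility check, not a derivation:

```python
import sympy as sp

beta = sp.symbols('beta', positive=True)
b = sp.symbols('b')

Z_trumpet = sp.exp(-b**2/(4*beta))/(2*sp.sqrt(sp.pi*beta))
Z_disk = sp.exp(sp.pi**2/beta)/(4*sp.sqrt(sp.pi)*beta**sp.Rational(3, 2))  # e^{S_0} stripped

# trumpet continued to b = 2*pi*i versus 2*beta times the disk
residual = sp.simplify(Z_trumpet.subs(b, 2*sp.pi*sp.I) - 2*beta*Z_disk)
```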
These identities are the equivalent of what Dijkgraaf-Verlinde$^2$ \cite{Dijkgraaf:1991qh} called loop equations for topological gravity. In the latter case the right hand side of the dilaton equation is also equal to $-3\chi Z(\beta_1\dots\beta_n)_\text{conn}$, as in \eqref{427}. This is not true for JT gravity, because the selection rule \eqref{selection} is violated by the $\kappa$ classes in the exponential. However, it turns out \cite{do2008intersection} we also have an equation
\begin{equation}
\frac{1}{b}\partial_b\,Z(b=2\pi i,\,\beta_1\dots \beta_n)_\text{conn}=-\chi\, Z(\beta_1\dots\beta_n)_\text{conn}\,,\label{619}
\end{equation}
such that the dilaton equation simplifies to
\begin{equation}
\partial_b^2\,Z(b=2\pi i,\,\beta_1\dots \beta_n)_\text{conn}=\bigg(2\sum_{i=1}^n \beta_i\partial_{\beta_i}-2\chi\bigg)\, Z(\beta_1\dots\beta_n)_\text{conn}\,.\label{620}
\end{equation}
We are interested in understanding these equations (and if possible the other $L_n$ constraints) directly from the dilaton gravity path integral, in the spirit of having a better gravity understanding of the KdV hierarchy. Based on equation \eqref{65} and
\begin{equation}
\sum_{a=0}^\infty\frac{(-2\pi^2)^a}{a!}\tau_a = \exp\bigg(\frac{1}{2}(2\pi\i)^2\psi_\text{extra}\bigg)\,,
\end{equation}
one might think that this corresponds with a blunt defect $\alpha=2\pi$, which is essentially an area operator (or a marked point which is integrated over spacetime, without creating any source of curvature in that spacetime as backreaction)
\begin{equation}
\sum_{a=0}^\infty\frac{(-2\pi^2)^a}{a!}\tau_a\quad \overset{?}{\Leftrightarrow} \quad \int\d^2 x\sqrt{g}=A\quad \Leftrightarrow \quad \text{marked point without curvature source}\,.\label{grainofsalt}
\end{equation}
This turns out to be \emph{almost} correct, but not quite. Correlators of blunt defects $\alpha=2\pi$ were discussed around equation (4.23) and (4.18) in \cite{Turiaci:2020fjj}. The JT gravity connected $n$-boundary path integral with $m$ such defects inserted results in\footnote{If we are more careful with prefactors in \cite{Turiaci:2020fjj} we see that actually the relevant operator is $\e A$, then the Weil-Petersson volumes contribute $\epsilon\, 2\pi \chi=0$ and the trumpets contribute $\epsilon \int \d u \sqrt{h}=\beta_i$ indeed. We use the quantization scheme of \cite{Turiaci:2020fjj} here because it is more suitable for making contact with semiclassics, in that scheme for $\alpha\sim 2\pi$ there is roughly an extra prefactor $2\pi-\alpha$ in \eqref{63}, this is the $2\pi/\g(1-\alpha/2\pi)$ in \eqref{5.29} when $\alpha$ is close to $2\pi$, and $\epsilon=2\pi-\alpha$.}
\begin{equation}
\int_{\b_1\dots\b_n\,\rm conn} \mathcal{D}g \mathcal{D}\Phi \,\bigg(\int\d^2 x\sqrt{g}\bigg)^m e^{-I_{\rm JT}[g,\Phi]} =\bigg(2\sum_{i=1}^n \beta_i\bigg)^mZ(\beta_1\dots \beta_n)_\text{conn}\,,
\end{equation}
which exponentiates indeed to equation (4.23) in \cite{Turiaci:2020fjj}
\begin{equation}
\int_{\b_1\dots\b_n\,\rm conn} \mathcal{D}g \mathcal{D}\Phi \,\exp\bigg(\lambda\int\d^2 x\sqrt{g}\bigg) e^{-I_{\rm JT}[g,\Phi]}=\exp\bigg(2\lambda\sum_{i=1}^n \beta_i\bigg)Z(\beta_1\dots \beta_n)_\text{conn}\,,
\end{equation}
meaning we have shifted the overall energy scale with $E_0=2\lambda$. Our operator reproduces this behavior, up to contact terms. Indeed we have for instance (using again the string equation)
\begin{equation}
\sum_{a_1=0}^\infty\frac{(-2\pi^2)^{a_1}}{a_1!}\bigg\langle\tau_{a_1} \sum_{a_2=0}^\infty\frac{(-2\pi^2)^{a_2}}{a_2!}\tau_{a_2}\bigg\rangle_{\k}=\bigg\langle\ \sum_{a_2=0}^\infty\frac{(-2\pi^2)^{a_2+1}}{(a_2+1)!}\tau_{a_2}\bigg\rangle_{\k}\,,
\end{equation}
which translated to Weil-Petersson volumes means
\begin{equation}
V_{g,2}(2\pi\i,2\pi\i)=\int_0^{2\pi i}\d a a\,V_{g,1}(a)\neq 0\,.
\end{equation}
These contributions from two operators coming into contact give corrections such as the second term in
\begin{equation}
Z(b=2\pi \i,b=2\pi \i,\beta_1\dots \beta_n)_\text{conn}=\bigg(2\sum_{i=1}^n \beta_i\bigg)^2 Z(\beta_1\dots \beta_n)_\text{conn}+\int_0^{2\pi i}\d a a\,Z(b=a,\beta_1\dots\beta_n)_\text{conn}\,,
\end{equation}
which are absent for the blunt defects $\alpha=2\pi$. In contrast, for topological gravity \cite{dijkgraaf1991topological} the operator on the left-hand side of the string equation is $\tau_0$, and there are then no contact terms because there is no $\tau_{-1}$. This is why in topological gravity $\tau_0$ is exactly the area operator, also called puncture operator $P$.
The blunt defects $\alpha=2\pi$ in JT gravity likewise have no contact terms \cite{Turiaci:2020fjj} (in hyperbolic geometry this is because there is no geodesic that only surrounds a number of these blunt defects, but no handles). Our operators do, because they are built up out of an infinite number of sharp cusps. Their amplitudes are by construction analytic continuations of Weil-Petersson volumes; the blunt defects differ from this by the subtraction of contact terms. Therefore for $\alpha>\pi$ we rather have a relation like
\begin{equation}
\exp\bigg(\frac{1}{2}(\i\alpha )^2\psi_\text{extra}\bigg)\quad \text{minus contact terms}\quad \Leftrightarrow \quad \int\d^2 x\sqrt{g}\,\frac{2\pi}{\g(1-\alpha/2\pi)}e^{-(2\pi-\alpha)\Phi}\,,\quad \alpha>\pi\,.\label{5.29}
\end{equation}
So a description of blunt defects in terms of cohomology is a bit more subtle; we understand that such a description is currently under construction \cite{joaquinlorenztoap}.
Note also that, at the level of the path integral, the simple shift in the energy can be understood because adding the area integral $\e A$ to the JT action can be removed by shifting the dilaton, which in turn causes a boundary term to appear that induces the shift in energy.
In summary, we can trust the dilaton gravity interpretation of the $\tau_k$ in \eqref{64} completely (also when inserting these $\tau_k$ in the action), as long as we do not insert infinite sums that conspire to defects with $\alpha<\pi$, at which point contact terms (dis)appear.
We end this section with some further comments and open questions.
\begin{enumerate}
\item It would be nice to have an exact description of our infinite sum \eqref{grainofsalt} in dilaton gravity, then one could write up the dilaton gravity action for all minimal models (by turning off the KdV times $\g_k$ \eqref{520} that took us from topological gravity to JT in the first place).\footnote{See \cite{Goel:2020yxl,Mertens:2020hbs,Mertens:2020pfe} for some (not obviously related) progress in that direction.} There is a description of topological gravity as pure Einstein-Hilbert gravity \cite{Dijkgraaf:1991qh} (section 7.2), where the other minimal models have corrections in the action of the type (notice that equation \eqref{427} is then obvious via Gauss-Bonnet)
\begin{equation}
\tau_n\sim \int \d^2 x \sqrt{g} R^n\,.
\end{equation}
It would be interesting to see how that worldsheet metric $g$ and the JT gravity metric and dilaton are related, particularly because there are no AdS$_2$ asymptotics in the first description.
\item It would be interesting to have some dilaton gravity understanding of the operator on the left-hand side of the dilaton equation(s) \eqref{619} and \eqref{620}, analogous to how the string equation involves an area operator \eqref{grainofsalt}, in the spirit of having a better gravity understanding of the KdV hierarchy. In the quantization scheme of \cite{Maxfield:2020ale} we can immediately understand \eqref{619}\footnote{The $-A/2\pi$ comes from the $\a^{-1} \partial_\alpha$ working on the $2\pi-\alpha$ prefactor in the defect operator $\mathcal{O}_\text{D}(\alpha)$ in the quantization scheme of \cite{Maxfield:2020ale}.}
\begin{equation}
\sum_{a=0}^\infty \frac{(-2\pi^2)^a}{a!}\,\tau_{a+1}\quad \text{minus contact terms}\quad \Leftrightarrow \quad \int\d^2 x\sqrt{g}\,\frac{\epsilon\Phi-1}{2\pi}=\frac{L-A}{2\pi}=-\chi\,,
\end{equation}
where we used the boundary conditions on $\Phi$ to show that the boundary contribution (the length $L$) to the area $A$ cancels in this observable, and the bulk part of the area evaluates indeed to $2\pi\chi$. To be clear, in the $\Leftrightarrow$ we used our dictionary \eqref{5.7}, and then we used the JT gravity description to find that the operator on the left computes the Euler character. This precisely reproduces the equation \eqref{619}, which can be viewed as a subset of the KdV equations.
The operator in the dilaton equation \eqref{620} generates an (infinitesimal) metric rescaling (hence the name dilaton equation), which boils down to rescaling the boundary lengths
\begin{equation}
\sum_{m=0}^\infty\frac{\lambda^m}{m!} \bigg(2\sum_{i=1}^n \beta_i\partial_{\beta_i}-2\chi\bigg)^m\, Z(\beta_1\dots\beta_n)_\text{conn}=e^{-2\l\chi}\,Z(\beta_1 e^{2\l}\dots\beta_n e^{2\l})_\text{conn}\,.
\end{equation}
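This exponentiation is simply the statement that $2\sum_i \beta_i\partial_{\beta_i}-2\chi$ acts diagonally on monomials in the $\beta_i$; a toy sympy check on one such monomial (our stand-in for a term of $Z_\text{conn}$, not an actual JT amplitude):

```python
import sympy as sp

lam, chi = sp.symbols('lambda chi', real=True)
b1, b2 = sp.symbols('beta1 beta2', positive=True)

Z = b1**3 * b2**2  # a sample monomial standing in for one term of Z_conn

# the operator 2*sum_i beta_i d/d(beta_i) - 2*chi acts diagonally on monomials
eigval = sp.simplify((2*(b1*sp.diff(Z, b1) + b2*sp.diff(Z, b2)) - 2*chi*Z)/Z)

lhs = sp.exp(lam*eigval)*Z  # summing the series in m gives e^{lam * eigenvalue} Z
rhs = sp.exp(-2*lam*chi)*Z.subs({b1: b1*sp.exp(2*lam), b2: b2*sp.exp(2*lam)})
residual = sp.simplify(lhs - rhs)  # should vanish
```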
We can appreciate this by translating the operator in the left hand side of \eqref{620} to dilaton gravity variables, which indeed gives the generator of metric rescalings in the JT gravity action (for the $\chi$ part we remind ourselves of the relation between $\S$ and $\Phi_0$, see for instance \cite{Yang:2018gdb})
\begin{equation}
\sum_{a=0}^\infty(2a+1) \frac{(-2\pi^2)^a}{a!}\,\tau_{a+1}\quad \text{minus contact terms}\quad \Leftrightarrow \quad \int\d^2 x\sqrt{g}\,2\,\Phi + \int_\partial \d u\sqrt{h}\,\Phi\,.
\end{equation}
So the simplest $L_{-1}$ and $L_0$ of the KdV equations have JT gravity interpretations as describing the covariant transformation rules of observables under certain changes of parameters in the action. This makes sense, as the KdV equations essentially state how $F$ changes with the KdV times $t_k$. The other KdV equations $L_n Z=0$ with $n>0$ are essentially equivalent to \eqref{69} in combination with the statement that these deformed theories are matrix integrals with different spectral curves $\rho_0(E)$ \cite{Maxfield:2020ale}. These constraints state how observables change under changes in the dilaton gravity action with $t_k$ and $k>1$, namely by changing the spectral curve.
The string and dilaton equations are special cases describing the flow under changes of $t_0$ and $t_1$. This means that, given the dictionary \eqref{5.7}, one can essentially derive the KdV hierarchy from the JT gravity path integral formulation.
\item Let us emphasize that, without summing over the $\tau_k$, the KdV equations should be interpreted as recursion relations between cusps in JT gravity on closed manifolds. Without knowledge of explicit Weil-Petersson volumes these relations seem miraculous. It would be interesting to find an independent dilaton gravity calculation of even the simplest examples, for instance the correlator of one cusp on the torus
\begin{equation}
\int_\text{torus} \mathcal{D} g\,\mathcal{D} \Phi\,e^{-I_\text{JT}}\, \int \d^2 x \sqrt{g}\, e^{-2\pi \Phi}\,\Phi^{2k}=\quad \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7]
\pgftext{\includegraphics[scale=1]{v11.pdf}} at (0,0);
\draw (1.9,0.05) node {$k$};
\end{tikzpicture}\quad=0\,,\quad k\neq 1\,.
\end{equation}
Intersection theory predicts in particular that this vanishes for all $k\neq 1$ and it would be interesting to have a direct dilaton gravity argument for this.
Remember furthermore the relation \eqref{65}
\begin{equation}
\int \d^2 x \sqrt{g}\, e^{-(2\pi-\alpha) \Phi}\quad \Leftrightarrow \quad \exp\bigg(\frac{1}{2}(\i\alpha )^2\psi_\text{extra}\bigg)\,,\quad \alpha<\pi\,,
\end{equation}
and notice that the right-hand side is even in $\alpha$. This implies that correlators of cusps multiplied with odd powers $\Phi^{2k+1}$ vanish identically in JT gravity (on any surface, and regardless of other operators)
\begin{equation}
\int \d^2 x \sqrt{g}\, e^{-2\pi \Phi}\,\Phi^{2k+1}=0\,.\label{636}
\end{equation}
This too is mysterious without explicit knowledge of the volumes; it boils down to understanding why the Weil-Petersson volumes are polynomials in the $b_i^2$. One can view this as a redundancy, or a null deformation of the type discussed in \cite{Blommaert:2022ucs}. Indeed, one can add terms like \eqref{636} to the JT action as in \eqref{69} without affecting any observables. So theories whose actions differ by terms of the type \eqref{636} are completely equivalent, and having them or not having them is a gauge choice. As a simple check on this, notice that a potential $U(\Phi)$ as in \eqref{636} is in the kernel of equation (1.4) in \cite{Witten:2020wvy}: the spectral curve (and thus all observables) remains unaffected. This redundancy is also why \eqref{expansion} is an analytic continuation of \eqref{65}; one can replace the cosine by either one of the exponentials without changing any observables.
It is natural to wonder if there is some contour-integral argument for \eqref{636}, directly in the dilaton gravity formulation (without going through the volumes).
\end{enumerate}
\subsection{Matrix integral}\label{sect:matrix}
As the final element in the web of dualities between intersection numbers, matrix integrals and dilaton gravities we present the matrix integral interpretation of the $\psi$-classes. We claim that every insertion of $\tau_k$ corresponds with the following counterclockwise contour integral around the real axis (a contour running above and below the real axis, denoted by $R$)
\begin{equation}
\tau_k \quad \Leftrightarrow\quad \frac{(-1)^{k+1}}{\i}\frac{1}{(2k+1)!!}\oint_R \d E\, E^{k+1/2}\Tr \delta(H-E)=\frac{(-1)^{k+1}}{\i}\frac{1}{(2k+1)!!}\oint_R \d E\, E^{k+1/2}\,\rho(E)\,.\label{oper}
\end{equation}
We will derive this equation momentarily, but let us first give a few basic checks that it is correct, and show how this equation is useful in practice. (To our knowledge, this equation and its practical use are new, whereas versions of \eqref{656} were known.)
As a first check notice that $\rho_0(E)$ has no poles on the real axis, for instance for JT gravity $\rho_0(E)=e^{\S}\sinh(2\pi E^{1/2})/4\pi^2$ for positive energies and zero otherwise. So we recover the statement that $\average{\t_k}_0=0$, which is true for generic backgrounds $\g_p$. In intersection theory this follows from the selection rule \eqref{selection}, which for $g=0$ we can rewrite as
\begin{equation}
\sum_{i=1}^n (k_i-1)+k=-2\,,\quad k_i\geq 2\,,\label{638}
\end{equation}
where the $k_i$ come from expanding out the background and $k_i\geq 2$ because $\g_0=\g_1=0$ (in particular, $\g_0=0$ is important). The left-hand side is non-negative so this is never satisfied, so $\average{\t_k}_0=0$ indeed.\footnote{Notice that the cases $k\leq-2$ can and do give nonzero answers at genus zero, as we found around equation \eqref{Fx}.}
At non-zero genus we obtain a finite answer, because $E^{k+1/2}\rho_g(E)$ generically has poles at $E=0$. To appreciate how this arises we can consider the inverse Laplace transform of \eqref{genusg}
\begin{equation}
Z_g(\beta)=\int_0^\infty \d E\, e^{-\beta E}\,\rho_g(E)\quad \Rightarrow\quad \rho_g(E)=\int_0^\infty \d b b\, \frac{1}{2\pi E^{1/2}}\cos(b E^{1/2})V_{g,1}(b)\,,
\end{equation}
where the volume is an even polynomial \eqref{vexp}
\begin{equation}
V_{g,1}(b)=\sum_{d=0}^\infty V_{g,d}\,\frac{b^{2d}}{4^d d!}\quad \Rightarrow\quad \rho_g(E)=\frac{1}{2\pi}\sum_{d=0}^\infty V_{g,d}\,(-1)^{d+1} \frac{(2d+1)!!}{2^d}E^{-d-3/2}\,.
\end{equation}
Using these expressions and picking up the pole at the origin we obtain
\begin{align}
\average{\tau_k}_g &=\frac{(-1)^{k+1}}{\i}\frac{1}{(2k+1)!!}\oint_R \d E\, E^{k+1/2} \rho(E)_g\\&=\sum_{d=0}^\infty V_{g,d} (-1)^{k+d} \frac{1}{2^d}\frac{1}{2\pi\i}\oint_0 \d E\,E^{-1+k-d}=\frac{1}{2^k}V_{g,k}\,,
\end{align}
which means that the operator \eqref{oper} computes the expansion coefficients of the volumes. In other words our matrix integral definition \eqref{oper} is recovering the expansion of volumes in intersection numbers \eqref{53} (in a generic background $\g_k$)
\begin{equation}
V_{g,n}(b_1\dots b_n)=\sum_{d_1=0}^\infty\frac{b_1^{2d_1}}{2^{d_1}d_1!}\dots\sum_{d_n=0}^\infty\frac{b_n^{2d_n}}{2^{d_n}d_n!}\average{\tau_{d_1}\dots \tau_{d_n}}_g\,.
\end{equation}
The computation above was for $n=1$, but it obviously extends to generic $n$.
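We can make this extraction explicit at genus one, where $V_{1,1}(b)=\frac{b^2}{48}+\frac{\pi^2}{12}$ gives $V_{1,0}=\frac{\pi^2}{12}$ and $V_{1,1}=\frac{1}{12}$ in the normalization \eqref{vexp}; the residue at $E=0$ then indeed returns $V_{g,k}/2^k$ (a sketch with the series truncated to these two terms):

```python
import sympy as sp

E = sp.symbols('E')

# V_{1,1}(b) = b^2/48 + pi^2/12  =>  V_{1,0} = pi^2/12 and V_{1,1} = 1/12 in the
# normalization V_{g,1}(b) = sum_d V_{g,d} b^(2d)/(4^d d!)
Vgd = {0: sp.pi**2/12, 1: sp.Rational(1, 12)}

# truncated series for rho_1(E) from the formula in the text
rho1 = sum(Vgd[d]*(-1)**(d + 1)*sp.factorial2(2*d + 1)/2**d * E**(-d - sp.Rational(3, 2))
           for d in Vgd)/(2*sp.pi)

tau = {}
for k in Vgd:
    integrand = sp.expand(E**(k + sp.Rational(1, 2))*rho1)  # only integer powers of E survive
    # counterclockwise contour around E = 0: oint = 2*pi*i times the residue
    tau[k] = sp.simplify((-1)**(k + 1)/sp.I/sp.factorial2(2*k + 1)
                         * 2*sp.pi*sp.I*sp.residue(integrand, E, 0))
```

One finds $\average{\tau_0}_1=V_{1,0}$ and $\average{\tau_1}_1=V_{1,1}/2$, matching the general formula.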
As a further check on this we can compute the disk with a puncture. This example involves picking up the pole at $E_1=E_2$ of the genus zero wormhole, equation (139) in \cite{Saad:2019lba},
\begin{align}
\average{\rho(M)\tau_k}_{0\,\text{conn}}&=\frac{(-1)^{k+1}}{\i}\frac{1}{(2k+1)!!}\oint_R \d E\, E^{k+1/2} \average{\rho(M)\rho(E)}_{0\,\text{conn}}\\&=\frac{(-1)^{k}}{\i}\frac{1}{(2k+1)!!}\frac{1}{4\pi^2M^{1/2}}\oint_M \d E\, E^{k}(E+M)\frac{1}{(E-M)^2}\nonumber\\
&=\frac{1}{2\pi}\frac{(-1)^k}{(2k-1)!!}M^{k-1/2}\,,
\end{align}
such that
\begin{equation}
\average{Z(\beta)\tau_k}_{0\,\text{conn}}=\frac{1}{2\pi}\frac{(-1)^k}{(2k-1)!!}\int_0^\infty \d M\,e^{-\beta M}\, M^{k-1/2}=\frac{(-1)^k}{2^{k+1}\pi^{1/2}}\,\beta^{-1/2-k}\,.
\end{equation}
We can compare this with the prediction that one gets by viewing the trumpet as a generating function for these one-point functions, along the lines of \eqref{expansion}
\begin{align}
Z_\text{trumpet}(\beta,b)= \frac{1}{2\pi^{1/2}\beta^{1/2}}e^{-\frac{b^2}{4\beta}}&=\sum_{k=0}^\infty \frac{b^{2k}}{2^k k!}\frac{(-1)^k}{2^{k+1}\pi^{1/2}}\,\beta^{-1/2-k}=\sum_{k=0}^\infty \frac{b^{2k}}{2^k k!}\average{Z(\beta)\tau_k}_{0\,\text{conn}}\,,
\end{align}
which indeed gives the same answer on the nose. Finally we also recover the statement that the cylinder amplitude vanishes in intersection theory (or the genus zero two point function)\footnote{Actually, if we include the contribution from unstable surfaces, we should continue to all integers $n$ and $k$, in which case we get $\average{\tau_{-1-k} \tau_k}_0 = (-1)^{k}$. This is consistent with the $1/(x_1 + x_2)$ in \eqref{Fx1x2}.}
\begin{align}
\average{\tau_n\tau_k}_{0}&=\frac{(-1)^{n+1}}{i}\frac{1}{(2n+1)!!}\oint_R \d E\, E^{n+1/2} \average{\rho(E)\tau_k}_{0\,\text{conn}}\nonumber\\&=\frac{(-1)^{n+1+k}}{2\pi i}\frac{1}{(2n+1)!!(2k-1)!!}\oint_R \d E E^{n+k}=0\,.\label{5.47}
\end{align}
For generic backgrounds $\g_k$ this follows from the same logic as around \eqref{638} (but with $-1$ on the right).
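Both the Laplace transform of the one-point function and the matching with the trumpet expansion are quick to verify with sympy (a minimal sketch of the two checks above):

```python
import sympy as sp

M, b = sp.symbols('M b', positive=True)
beta = sp.symbols('beta', positive=True)

# one-point function: (1/2pi) (-1)^k/(2k-1)!! int_0^infty dM e^{-beta M} M^(k-1/2)
one_pt = [1/(2*sp.pi)*(-1)**j/sp.factorial2(2*j - 1)
          * sp.integrate(sp.exp(-beta*M)*M**(j - sp.Rational(1, 2)), (M, 0, sp.oo))
          for j in range(4)]
target = [(-1)**j/(2**(j + 1)*sp.sqrt(sp.pi))*beta**(-sp.Rational(1, 2) - j)
          for j in range(4)]

# trumpet as generating function: expand exp(-b^2/4 beta)/(2 sqrt(pi beta)) in b
trumpet = sp.exp(-b**2/(4*beta))/(2*sp.sqrt(sp.pi*beta))
series = sp.series(trumpet, b, 0, 8).removeO()
resum = sum(b**(2*j)/(2**j*sp.factorial(j))*target[j] for j in range(4))
```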
Now we explain how to derive this equation \eqref{oper} for $\tau_k$. The first step is proving an identity that appears in \cite{Maldacena:2004sn, Gross1989nonperturbative}, which holds miraculously for any arbitrary (but fixed) number $E_0$
\begin{equation}
\Tr\bigg(\frac{1}{y-H}\bigg)-\frac{\Tr(1)}{y}=\sum_{k=1}^\infty (-1)^{k+1}(-y-E_0)^{-k-1/2}(-y+E_0)^{-1/2}\Tr((H+E_0)^{k-1/2}(H-E_0)^{1/2})_+\,.\label{649}
\end{equation}
The branch cuts of the roots are chosen in the usual way: we have $(-y-E_0)^{-k-1/2}>0$ when $y<-E_0$ and $(-y+E_0)^{-1/2}>0$ when $y<E_0$. Let us prove this equation for $y>E_0$ and real; after collecting all the signs it simplifies slightly
\begin{align}
\Tr\bigg(\frac{1}{y-H}\bigg)-\frac{\Tr(1)}{y}&=\sum_{k=1}^\infty (y+E_0)^{-k-1/2}(y-E_0)^{-1/2}\Tr((H+E_0)^{k-1/2}(H-E_0)^{1/2})_+\nonumber\\&=\sum_{q=1}^\infty\frac{1}{y^{q+1}}\,\Tr(H^q)\,.
\end{align}
The $+$ means we should expand in powers of $1/H$ and keep only the terms with positive powers of $H$ in the resulting series. To check this, one explicitly expands the binomials in $1/H$, and then rearranges the resulting triple sum to collect the terms that multiply $\Tr(H^q)$, for fixed $q$. The double sum at fixed $q$ surprisingly spits out $1/y^{q+1}$, and we recover the large $y$ expansion of the left-hand side \cite{Blommaert:2021gha}.
Now we can consider the following double scaling limit of \eqref{649} where we take $y=-E_0-z^2$, shift also $H$ by $E_0$ and send $E_0\to\infty$
\begin{equation}
\Tr\bigg(\frac{1}{z^2+H}\bigg)-\frac{\Tr(1)}{E_0+z^2}=\sum_{k=1}^\infty (-1)^{k}z^{-2k-1}\,W_{k-1}(H)\,,\label{651}
\end{equation}
where we have introduced the so called scaling polynomials of \cite{Gross1989nonperturbative}
\begin{equation}
W_{k-1}(H)=\lim_{E_0\to\infty} (2E_0)^{-1/2}\Tr(H^{k-1/2}(H-2E_0)^{1/2})_+\,.\label{652}
\end{equation}
The $+$ is confusing in this context, because it is unclear what the expansion in powers of $H$ means upon double scaling. Below we will give a more correct formula that does not involve the subscript $+$ anymore, but is purely based on a contour integral, and therefore does make sense in the double scaling continuum limit.
Using the definition of an FZZT brane \cite{Maldacena:2004sn,Saad:2019lba,Fateev:2000ik,Ponsot:2001ng,Goel:2020yxl,Mertens:2020hbs,Mertens:2020pfe,Hosomichi:2008th,Kostov:2002uq,Okuyama:2021eju,Teschner:2000md,Blommaert:2019wfy,Blommaert:2021etf,Blommaert:2021fob}
\begin{equation}
\mathcal{O}_\text{FZZT}(z)=\Tr \log(z^2+H)-\Tr(1)\log(z^2)=\int_\infty^{z}\d w\,2w \bigg(\Tr\bigg(\frac{1}{w^2+H}\bigg)-\frac{\Tr(1)}{w^2}\bigg)\label{653}
\end{equation}
and its relation with a geodesic boundary \cite{Blommaert:2021fob} (we are not giving any topological weight to FZZT branes here; this choice is irrelevant)
\begin{equation}
\mathcal{O}_\text{G}(b)=-\frac{e^{-\S}}{2\pi \i}\int_{-\i\infty}^{+\i\infty}\d z \,e^{b z}\, \mathcal{O}_\text{FZZT}(z)\,,
\end{equation}
we obtain an expansion of the geodesic boundary in scaling polynomials\footnote{It is important that the FZZT branes actually have their poles at $z=-\epsilon\pm \i \sqrt{H}$ \cite{Blommaert:2021gha}, such that in reality the integrand contains $(z+\epsilon)^{-2k}$. Since $b>0$ we can close the contour to $z=-\infty$, and there is no pole at infinity because of the second term on the first line.}
\begin{align}
\mathcal{O}_\text{G}(b)&=\frac{e^{-\S}}{b}\frac{1}{2\pi \i}\int_{-\i\infty}^{+\i\infty}\d z\, e^{b z}\, 2z\bigg(\Tr\bigg(\frac{1}{z^2+H}\bigg)-\frac{\Tr(1)}{E_0+z^2}\bigg)\nonumber\\&=2\,e^{-\S}\sum_{k=1}^\infty (-1)^kW_{k-1}(H)\frac{1}{b}\frac{1}{2\pi \i}\int_{-\i\infty}^{+\i\infty}\d z e^{b z}z^{-2k}\nonumber\\
&=e^{-\S}\sum_{k=0}^\infty (-1)^{k+1}\frac{b^{2k}}{(2k)!}\frac{2}{2k+1}W_k(H)\,.
\end{align}
Comparing with the expansion of a geodesic boundary in cusp defects \eqref{expansion}, we find the identification of the $\tau_k$ insertions with the scaling polynomials \cite{Gross1989nonperturbative}
\begin{equation}
\mathcal{O}_\text{G}(b)=e^{-\S}\sum_{k=0}^\infty \frac{b^{2k}}{2^k k!}\,\tau_k\quad \Rightarrow \quad \tau_k\quad \Leftrightarrow\quad (-1)^{k+1}\frac{2}{(2k+1)!!}W_k(H)\,.\label{656}
\end{equation}
Now the question is what the correct implementation of $W_k(H)$ in the double scaling limit is. In \eqref{652} the last square root $(H-2E_0)^{1/2}$ is on the branch cut, so depending on whether $H$ takes values slightly above or below the real axis we get a factor $+\i$ or $-\i$, respectively. We thus have two possible contours for the eigenvalues of $H$ in this observable. We claim that the correct combination is the average of both contours
\begin{equation}
W_k(H)= \frac{\i}{2} \int_{R+\i\epsilon}\d E\, E^{k+1/2}\Tr \delta(H-E)-\frac{\i}{2}\int_{R-\i\epsilon}\d E\, E^{k+1/2}\Tr \delta(H-E)\,.
\end{equation}
We can flip the orientation of the first contour, so that this becomes a counterclockwise contour integral around the real axis, which picks up any poles that might occur on the real axis in the integrand
\begin{equation}
W_k(H)=\frac{1}{2\i}\oint_R \d E\, E^{k+1/2}\Tr \delta(H-E)\,.
\end{equation}
This reproduces our original claim \eqref{oper}, which we have independently proven to be correct above, by reproducing all intersection numbers from it.
To close off this section we remark how this relates to the Kontsevich matrix integral \cite{KontsevichModel} and the equation \eqref{69}. We have
\begin{equation}
\prod_{i=1}^\infty\det(1+H/z_i^2)=\exp\bigg(\sum_{i=1}^\infty \mathcal{O}_\text{FZZT}(z_i) \bigg)\quad \Leftrightarrow\quad \exp\bigg(\sum_{k=2}^\infty t_k(z_j)\,\tau_k\bigg)\,,\quad t_k(z_j)=-(2k-1)!!\sum_{i=1}^\infty z_i^{-2k-1}\,,
\end{equation}
where in the first equality we used $\det(A)=\exp(\Tr(\log(A)))$, and in the second equality we computed the $w$ integral in \eqref{653} using the expansion in scaling polynomials \eqref{651}, and identified the $\tau_k$'s using \eqref{656}. Kontsevich proved that this identity is correct, namely that inserting the left-hand side in a matrix integral with a spectral curve corresponding to KdV times $\g_k$ generates a matrix integral whose spectral curve corresponds to KdV times $\g_k+t_k(z_j)$. He did this by proving that the right-hand side equals $F$ for those KdV times (with $F$ the function that satisfies the KdV hierarchy \eqref{kdv}), see also appendix \ref{app:c2}.
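Explicitly, the $w$ integral behind this identification can be done term by term. The following intermediate step is our reconstruction, using \eqref{651}, the identification $W_{k-1}(H)\leftrightarrow (-1)^{k}\frac{(2k-1)!!}{2}\,\tau_{k-1}$ from \eqref{656}, and $(2k-1)!!/(2k-1)=(2k-3)!!$:
\begin{align}
\mathcal{O}_\text{FZZT}(z)&=\int_\infty^{z}\d w\,2w\sum_{k=1}^\infty (-1)^{k}w^{-2k-1}\,W_{k-1}(H)=\sum_{k=1}^\infty (-1)^{k}W_{k-1}(H)\,\frac{2\,z^{1-2k}}{1-2k}\nonumber\\
&=-\sum_{k=0}^\infty (2k-1)!!\,z^{-2k-1}\,\tau_k\,,
\end{align}
such that summing over the branes $z_i$ reproduces the $t_k(z_j)$ quoted above.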
In essence, this relation is the raison d'\^etre of the scaling polynomials: we can start with a Gaussian matrix integral (which double scales to topological gravity \cite{Saad:2019lba}) and turn on some scaling polynomials in the action to get a double scaled theory whose spectral curve has $t_k(z_j)$ turned on.
\section{Concluding remarks}
We end this paper with three comments.
In \textbf{section \ref{sect:5.4}} we use the KdV equations \eqref{KdVrec} to demonstrate that the $(2g)!$ growth in volumes $V_{g,2}(b_1,b_2)$ for generic models comes from terms in the polynomials with order one powers of $b_1^2$ and $b_2^2$. This is part of the reason why the genus expansion converges in the $\tau$-scaling limit.
In \textbf{section \ref{sect:6.2}} we contemplate the multi-boundary generalization of our discussion, in particular we consider the multi-boundary generalization of the $\tau$-scaled spectral form factor and discuss cancellations.
In \textbf{section \ref{sect:lorentzian}} we explain the logical need for a Lorentzian interpretation of the cancellations and the universal scaling behavior \eqref{545} of late-time genus $g$ wormholes, and preview how this comes about by thinking about Lorentzian topology change in JT gravity \cite{workinprogress}.
\subsection{Open-closed duality and factorial growth}\label{sect:5.4}
As a final application of the KdV equations in gravity we want to give an intuitive argument explaining that the famous $(2g)!$ growth in volumes $V_{g,2}(b_1,b_2)$ for generic models of dilaton gravity comes from terms in the polynomials with order one powers of $b_1^2$ and $b_2^2$ (so not scaling with $g$). Therefore they will not survive in our $T\to\infty$ limit, which acts at each genus $g$ individually because we also send $e^{\S}\to\infty$. This is part of the explanation why the genus expansion is convergent in our setup, see section \ref{sect:euclideanwormholes}.
For concreteness we consider the constant term in the polynomials \eqref{53}
\begin{equation}
V_{g,2}(0,0)=\bigg\langle \tau_0^2 \exp\bigg(\sum_{k=2}^\infty \g_k\tau_k\bigg)\bigg\rangle_g\,.
\end{equation}
We would like to argue that this grows as $(2g)!$ for large genus $g$, and that the terms with large powers of $b_1^2$ and $b_2^2$ do not. The basic intuition is that for small powers of $b_i^2$, following the selection rule \eqref{selection}, we can get many, many low-dimensional forms such as $\tau_2$ coming out of the exponential, and it makes sense that such correlators with many, many operators would grow factorially. On the other hand, for large powers of $b_i^2$ the selection rule \eqref{selection} does not allow for that many operators to come down from the exponential (for the maximal power no operators come from the exponential and we are effectively computing Airy volumes again). With relatively fewer operators, it is hard to imagine factorial growth.
We believe this intuition should universally hold, but to make our point concrete we will here focus on the $(2,3)$ minimal string with spectrum \cite{Maldacena:2004sn,Saad:2019lba}
\begin{equation}
2\pi e^{-\S}\rho_0(E)=E^{1/2}+\frac{t_2}{3}E^{3/2}\,.\label{minmal}
\end{equation}
This has an eigenvalue instanton saddle near $E=-3/t_2$ in the matrix integral and therefore, following the techniques of section 5.6 in \cite{Saad:2019lba}, one expects factorial growth of the type (in this section we ignore, for simplicity of presentation, most sub-exponential $g$-dependence)
\begin{equation}
V_{g,2}(0,0)=\langle \tau_0^2 \exp(t_2\tau_2)\rangle_g\sim (2g)!\, t_2^{3g}\,,
\end{equation}
where we used \eqref{4.16} and the fact that $u_0=0$ for this deformation. This arises from intersection theory as follows. Using the selection rule \eqref{selection} we immediately obtain the correct scaling with $t_2$
\begin{equation}
V_{g,2}(0,0)=\frac{t_2^{3g-1}}{(3g-1)!}\langle\tau_0^2\tau_2^{3g-1}\rangle_g\,.
\end{equation}
Now we want to prove that the correlator $\langle\tau_0^2\tau_2^{3g-1}\rangle_g$ at large genus grows as $(5g)!$, such that, combined with the $1/(3g)!$, we recover the predicted $(2g)!$ growth. For this we can use the happy fact that we can rewrite the KdV recursion \eqref{KdVrec} entirely as a recursion relation for
\begin{equation}
f(g)=\langle\tau_0\tau_1\tau_2^{3g-2}\rangle_g=\frac{1}{3g-1}\langle\tau_0^2\tau_2^{3g-1}\rangle_g\,,
\end{equation}
by considering $k=2$ and $d_2=3g-2$ with all other $d_j=0$, and repeatedly using the string and dilaton equations \eqref{426} and \eqref{427} to eliminate excess $\tau_0$'s and $\tau_1$'s in all terms. For large genus, the dominant contributions to \eqref{KdVrec} in this setup come from the first and the last term. One can check this by first assuming that it is true; one then finds the recursion relation
\begin{equation}
f(g)\sim g^5 f(g-1)\quad \Rightarrow\quad f(g)\sim (5g)!\,.
\end{equation}
One then checks that with this $(g!)^5$ growth (which agrees with $(5g)!$ up to factors exponential in $g$) the other terms in the recursion relation indeed become subleading. Being more careful with all the prefactors and sub-factorial $g$-dependence, one can recover the full prediction of the eigenvalue instanton from the recursion relation \eqref{KdVrec}.
\subsection{Multi-boundary generalization}\label{sect:6.2}
Let us briefly comment on the multi-boundary generalization of the story that we have presented here.\footnote{We thank Douglas Stanford for discussions on this.} For instance, we could consider the three-boundary generalization of the spectral form factor
\begin{equation}
Z(\beta+\i T_1,\beta+\i T_2,\beta+\i T_3)_\text{conn}\,,\quad T_1+T_2+T_3=0\,,
\end{equation}
in the generalized $\tau$-scaling limit where $\tau_1=T_1e^{-\S}$ and $\tau_2=T_2e^{-\S}$ remain finite (and their difference remains finite as well). The triple energy integral in \eqref{basic} is then dominated by $E_1$, $E_2$ and $E_3$ all close together, and we can use the three-boundary generalization of the sine-kernel \eqref{sinekernel}, which can be found for instance in \cite{Blommaert:2019wfy}. Doing the Fourier transforms over the small energy differences explicitly, one finds a generalization of the ramp and plateau \eqref{27}
\begin{equation}
Z(\beta+\i T_1,\beta+\i T_2,\beta-\i T_1-\i T_2)_\text{conn}=\int_0^\infty \d E\,e^{-3\beta E}\,\begin{cases}
0\, &\frac{T_1}{2\pi}+\frac{T_2}{2\pi}<\rho_0(E)
\\\frac{T_1}{2\pi}+\frac{T_2}{2\pi}-\rho_0(E)\, &\frac{T_1}{2\pi}<\rho_0(E)<\frac{T_1}{2\pi}+\frac{T_2}{2\pi}
\\\frac{T_2}{2\pi}\, &\frac{T_2}{2\pi}<\rho_0(E)<\frac{T_1}{2\pi}\\\rho_0(E)\, &\rho_0(E)<\frac{T_2}{2\pi}\,,
\end{cases}
\end{equation}
where we have specialized to $T_2<T_1$. Following the steps that led to \eqref{expaexpa}, this can be massaged into
\begin{equation}
Z(\beta+\i T_1,\beta+\i T_2,\beta-\i T_1-\i T_2)_\text{conn}=\frac{e^{\S}}{6\pi\beta}\int_0^{\tau_2}\d f\,e^{-3\beta E(f)}-\frac{e^{\S}}{6\pi\beta}\int_{\tau_1}^{\tau_1+\tau_2}\d f\,e^{-3\beta E(f)}\,,\label{5.68}
\end{equation}
for instance, for topological gravity $E(f)=f^2$ this is simply a sum of three error functions. For infinite times the first term produces the generalized plateau value $Z(3\beta)$, which comes from terms in the triple sum where all energies coincide. However, notice that if one Taylor expands this in $e^{-\S}$, we again get a series in powers of $e^{-2\S}$ starting at $e^{-2\S}$, whereas the three-boundary connected amplitudes scale as $e^{-(2g+1)\S}$ with $g$ the genus. So, unlike for the two-boundary spectral form factor, we do not see any obvious way to reproduce this $\tau$-scaled answer from the sum over wormhole geometries with three boundaries, at least for now.
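Concretely (our illustration, with arbitrary sample values of $\beta$, $\tau_1$, $\tau_2$), the error-function form of \eqref{5.68} for $E(f)=f^2$ can be checked against direct quadrature, using $\int_0^x e^{-3\beta f^2}\,\d f=\frac{1}{2}\sqrt{\pi/(3\beta)}\,\mathrm{erf}(\sqrt{3\beta}\,x)$:

```python
import math

def erf_integral(x, beta):
    # Closed form of int_0^x exp(-3 beta f^2) df
    return 0.5 * math.sqrt(math.pi / (3 * beta)) * math.erf(math.sqrt(3 * beta) * x)

def midpoint(x, beta, n=50000):
    # Midpoint-rule quadrature for the same integral
    h = x / n
    return h * sum(math.exp(-3 * beta * ((i + 0.5) * h)**2) for i in range(n))

beta, tau1, tau2 = 0.4, 1.1, 0.6
# The two f integrals of the tau-scaled three-boundary amplitude,
# up to the overall e^S/(6 pi beta) prefactor
closed = erf_integral(tau2, beta) - (erf_integral(tau1 + tau2, beta) - erf_integral(tau1, beta))
numeric = midpoint(tau2, beta) - (midpoint(tau1 + tau2, beta) - midpoint(tau1, beta))
assert abs(closed - numeric) < 1e-8
```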
In some sense this emphasizes how nice it is that this worked for two boundaries, but on the other hand it also shows that we might still need D-brane effects to understand more complicated phenomena.
One thing that does happen for any number of boundaries is that the cancellations in intersection theory \eqref{548} constrain the volumes in a non-trivial manner for all theories. Namely, with the constraint $T_1+\dots+T_n=0$ we find the generalization of \eqref{4.27} for any of the times $T_i\to\infty$
\begin{equation}
e_1^{m_1}e_2^{m_2}\dots e_n^{m_n}\sim T_i^{2\sum_{j=2}^n m_j}\,.\label{4.27}
\end{equation}
The power of $T$ that appears here is precisely the expression that was constrained by the KdV equations to be bounded above by $2g$; therefore the maximal power of any of the times $T_i$ is constrained
\begin{equation}
Z_g(\beta+\i T_1\dots\beta+\i T_n)_\text{conn}\sim T_i^{2g+1}e^{-(2g+n-2)\S}\,,\quad T_1+\dots+T_n=0\,.
\end{equation}
On the other hand, dimensional analysis of the volumes for generic dilaton gravities would again suggest that naively we could get powers up to $T_i^{3g-2+n}$. So there are also major cancellations in $V_{g,n}(b_1\dots b_n)$ for generic dilaton gravity models. It remains to be seen whether or not those are related to explaining a generalization of the ramp and plateau such as \eqref{5.68}. That would be interesting.
\subsection{Universal powers of time via Lorentzian topology change}\label{sect:lorentzian}
We want to stress that recovering the universal growth $T^{2g+1}$ is highly non-trivial from the Euclidean gravitational path integral at genus $g$. In 2d dilaton gravity, we understand why this happens, namely because of known cancellations in intersection numbers. But the scaling of wormhole amplitudes with $T^{2g+1}$ for late times should hold for essentially any gravity model in a regime dominated by black holes, because of random matrix universality \cite{Haake:1315494}.
It is likely that one could argue that late-time physics is effectively dominated by the near-horizon region of black holes, where one often recovers JT gravity. Nevertheless we do not think an explanation in terms of intersection numbers for real-life black holes is fully satisfactory.
Instead, we view the fact that the scaling $T^{2g+1}$ is highly non-trivial in Euclidean signature (requiring quite miraculous, exact cancellations) as a sign that we should really be looking for a Lorentzian picture. After all, without Lorentzian large times there is nothing in the genus $g$ amplitudes alerting us of any cancellations. Because of the universality, we think the Lorentzian picture should be sufficiently simple (unlike the Euclidean story), in the sense that one could imagine a generalization to higher dimensional black holes.
We believe that we have found such an interpretation by thinking about Lorentzian topology change. The idea is that we can build Lorentzian wormhole geometries using the crotch singularities of Louko-Sorkin \cite{Louko:1995jw,workinprogressmisha}.\footnote{For other recent appearances of singular spacetimes in Lorentzian signature see for instance \cite{Marolf:2022ybi}.} For late enough times, the locations of the crotches (these are the places where baby universes or wormholes are born, and die) should behave approximately as zero modes. These $2g$ zero modes give a volume factor $T^{2g}$, which along with the usual factor $T$ from the rotational zero mode of the double cone \cite{Saad:2018bqo} reproduces the universal $T^{2g+1}$.
We will present this picture elsewhere in more detail, with concrete calculations in JT gravity \cite{workinprogress}.
\section*{Acknowledgments}
We thank Alex Altland, Jan Boruch, Fabian Haneder, Thomas Mertens, Klaus Richter, Phil Saad, Steve Shenker, Julian Sonner, Douglas Stanford, Juan-Diego Urbina, Misha Usatyuk, Torsten Weber and Zhenbin Yang for useful discussions. We also thank Fabian Haneder, Klaus Richter, Juan-Diego Urbina and Torsten Weber for sharing a version of their draft. AB was supported in part by a BEAF fellowship, by the SITP at Stanford, and by the ERC-COG Grant NP-QFT No. 864583. JK is supported by the Simons Foundation. SY is supported in part by NSF grant PHY-1720397 and by a Clark fellowship at SITP.
1909.07226
\section{Introduction}
In aperture synthesis radio astronomy an image of the sky brightness distribution is reconstructed from measured visibilities.
A visibility is the correlation coefficient between the electric fields at two different locations.
The relationship between the sky brightness distribution and the expected visibilities is a linear equation
commonly referred to as the `measurement equation' (ME) \citep{Smirnov2011}.
An image could be reconstructed using generic solving techniques, but the computational cost of any reasonably sized problem is prohibitively large.
The cost can be greatly reduced by using the fact that under certain conditions the ME can be approximated by a two-dimensional (2D) Fourier transform.
The discretized version of the ME can then be evaluated using the very efficient fast Fourier transform (FFT).
To use the FFT, the data needs to be on a regular grid. Since the measurements have continuous coordinates, they first need to be resampled onto a regular grid.
In \cite{Brouw1975} a convolutional resampling method known as ``gridding'' is introduced.
The reverse step, needed to compute model visibilities on continuous coordinates from a discrete model, is known as ``degridding''.
For larger fields of view the approximation of the ME by a Fourier transform is inaccurate. The reduction of the full three-dimensional (3D) description to two dimensions
only holds when all antennas are in a plane that is parallel to the image plane. Also, the variations of the instrumental and atmospheric effects over the field of view are not included.
There are two approaches to the problem of wide field imaging: 1) Partition the image into smaller sub-images or facets such that the approximations hold for each of the facets.
The facets are then combined together whereby special care needs to be taken to avoid edge effects \citep{Cornwell1992, Tasse2018}; and 2) include deviations from the Fourier transform in the convolution function.
The W-projection algorithm \citep{Cornwell2005} includes the non-coplanar baseline effect.
The A-projection algorithm \citep{Bhatnagar2008} extended upon this by also including instrumental effects.
For the Low-Frequency Array (LOFAR) it is necessary to include ionospheric effects as well \citep{Tasse2013}.
Each successive refinement requires the computation of more convolution kernels.
The computation of the kernels can dominate the total cost of gridding, especially when atmospheric effects are included in the convolution kernel,
because these effects can vary over short time scales.
The high cost of computing the convolution kernels is the main motivation for the development of a new algorithm for gridding and degridding.
The new algorithm presented in this paper effectively performs the same operation as classical gridding and degridding with AW-projection, except that it does this more efficiently
by avoiding the computation of convolution kernels altogether.
Unlike, for example, the approach by \cite{Young2015}, the corrections do not need to be decomposable into a small number of basis functions.
The performance in terms of speed of various implementations of the algorithm on different types of hardware is the subject of \cite{Veenboer2017}.
The focus of this paper is on the derivation of the algorithm and analysis of its accuracy.
The paper is structured as follows:
In section \ref{sec:gridding} we review the gridding method and AW-projection.
In section \ref{sec:imagedomaingridding} we introduce the new algorithm which takes the gridding operation to the image domain.
In section \ref{sec:analysis} the optimal taper for the image domain gridding is derived.
Image domain gridding with this taper results in a lower error than classical gridding with the classical optimal window.
In section \ref{sec:simulations} both the throughput and the accuracy are measured.
The following notation is used throughout the paper.
Complex conjugation of $x$ is denoted $x^{*}$.
Vectors are indicated by bold lower case symbols, for example, $\mathbf{v}$, matrices by bold upper case symbols, $\mathbf{M}$.
The Hermitian transpose of a vector or matrix is denoted $\mathbf{v}^\mathrm{H}$, $\mathbf{M}^\mathrm{H}$ , respectively.
For continuous and discrete (sampled) representations of the same object, a single symbol is used. Where necessary, the discrete version is distinguished
from the continuous one by a superscript indicating the size of the grid, that is, $V^{L\times L}$ is a grid of $L\times L$ pixels sampling continuous function $V$.
Square brackets are used to address pixels in a discrete grid, for example, $V[i,j]$, while parentheses are used for the value at continuous coordinates, $V(u,v)$.
A convolution is denoted by $\ast$; the (discrete) circular convolution by $\circledast$. The Fourier transform, both continuous and discrete, is denoted by $\mathcal{F}$.
In algorithms we use $\gets$ for assignment.
A national patent (The Netherlands only) for the method presented in this paper has been registered at the European Patent Office in The Hague, The Netherlands \citep{vandertol2017}.
No international patent application will be filed.
Parts of the description of the method and corresponding figures are taken from the patent application.
The software has been released \citep{Veenboer2017-2} under the GNU General Public License (GNU GPL \url{https://www.gnu.org/licenses/gpl-3.0.html}).
The GNU GPL grants a license to the patent for usage of this software and derivatives published under the GNU GPL.
To obtain a license for uses other than under GPL, please contact Astron at secretaryrd@astron.nl.
\begin{figure}[tbp!]
\includegraphics[width=.45\textwidth]{GriddingENFinal-img001.png}
\caption{Plot of the $uv$ coverage of a small subset of an observation. Parallel tracks are for the same baseline, but for different frequencies.}
\label{fig:uvtrack}
\end{figure}
\section{Gridding}
\label{sec:gridding}
In this section we summarize the classical gridding method. The equations presented here are the starting point for the derivation of image domain gridding in the following section.
The output of the correlator of an aperture synthesis radio telescope is described by the ME \citep{Smirnov2011}.
The full polarization equation can be written as a series of $4\times4$ matrix products \citep{Hamaker1996} or a series of $2\times2$ matrix products from two sides \citep{Hamaker2000}.
For convenience, but without loss of generality, the derivations in this paper are done for the scalar (non-polarized) version of the ME.
The extension of the results in this paper to the polarized case is straightforward, by writing out the matrix multiplications in the polarized ME as sums of scalar multiplications.
The scalar equation for visibility $y_{ijqr}$ for baseline $i,j$, channel $q$ at timestep $r$ is given by
\begin{align}
y_{ijqr} = \iint_{lm} & e^{-j2\pi\left(u_{ijr} l + v_{ijr} m + w_{ijr} n \right)/\lambda_q} \nonumber \\
& g_{iqr}(l,m) g_{jqr}^{*} (l,m) I(l,m) dl dm, \label{eq:MeasurmentEquation1}
\end{align}
where $I(l,m)$ is the brightness distribution or sky image, $(u_{ijr}, v_{ijr}, w_{ijr})$ is the baseline coordinate and $(l,m,n)$ is the direction coordinate, with
$n'=n-1=\sqrt{1-l^2-m^2}-1$, $\lambda_q$ is the wavelength for the $q$th channel,
and $g_{iqr}(l,m)$ is the complex gain pattern of the $i$th antenna.
To simplify the notation we lump indices $i,j,q,r$ together into a single index $k$, freeing indices $i,j,q,r$ for other purposes later on.
Defining
\begin{multline}
A_k(l,m) \triangleq g_{iqr}(l,m) g_{jqr}^{*} (l,m), \\
u_k \triangleq u_{ijr}/\lambda_q, \quad v_k \triangleq v_{ijr}/\lambda_q, \quad w_k \triangleq w_{ijr}/\lambda_q,
\end{multline}
allows us to write \eqref{eq:MeasurmentEquation1} as
\begin{equation}
y_k = \iint_{lm} e^{-j2\pi\left(u_k l + v_k m + w_k n \right)} A_k(l,m) I(l,m) dl dm. \label{eq:MeasurementEquation}
\end{equation}
The observed visibilities $\hat{y}_k$ are modeled as the sum of a model visibility $y_k$ and noise $\eta_k$:
\begin{equation}
\hat{y}_{k} = y_{k} + \eta_{k}
.\end{equation}
The noise $\eta_{k}$ is assumed to be Gaussian, have a mean of zero, and be independent for different $k$, with variance $\sigma^2_{k}$.
Image reconstruction is finding an estimate of image $I(l,m)$ from a set of measurements $\{\hat{y}_{k}\}$.
We loosely follow a previously published treatment of imaging \cite[][Appendix A]{Cornwell2008}.
To reconstruct a digital image of the sky it is modeled as a collection of point sources.
The brightness of the point source at $(l_i, m_j)$ is given by the value of the corresponding pixel $I[i,j]$.
The source positions $l_i, m_j$ are given by
\begin{equation}
l_i = -S/2 + iS/L, \quad m_j = -S/2 + jS/L,
\end{equation}
where $L$ is the size of one side of the image in pixels, and $S$ the size of the image projected onto the tangent plane.
Discretization of the image leads to a discrete version of the ME, or DME:
\begin{equation}
y_k = \sum_{i=1}^L\sum_{j=1}^L e^{-j2\pi\left(u_k l_i + v_k m_j + w_k n'_{ij} \right)} A_k(l_i,m_j) I[i,j] \label{eq:DME}
.\end{equation}
This equation can be written more compactly in matrix form, by stacking the pixels $I[i,j]$ in a vector $\mathbf{x}$, the visibilities $y_k$ in a vector $\mathbf{y}$,
and collecting the coefficients $e^{-j2\pi\left(u_k l_i + v_k m_j + w_k n'_{ij} \right)} A_k(l_i,m_j)$ in a matrix $\mathbf{A}$:
\begin{equation}
\mathbf{y} = \mathbf{A}\mathbf{x}.
\end{equation}
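As a toy illustration (our addition, not the paper's implementation), the DME can be evaluated directly for a small image; here the image-plane gain factor is set to one, and the image size, field of view, and baselines are arbitrary choices:

```python
import numpy as np

def dme_predict(image, uvw, S):
    """Directly evaluate the discrete measurement equation, with the
    image-plane gain factor set to 1 (toy setup)."""
    L = image.shape[0]
    idx = np.arange(L)
    l = -S / 2 + idx * S / L              # pixel directions, as in the text
    m = -S / 2 + idx * S / L
    ll, mm = np.meshgrid(l, m, indexing="ij")
    nn = np.sqrt(1.0 - ll**2 - mm**2) - 1.0   # n' = n - 1
    vis = []
    for u, v, w in uvw:                   # (u, v, w) already in wavelengths
        phase = np.exp(-2j * np.pi * (u * ll + v * mm + w * nn))
        vis.append(np.sum(phase * image))
    return np.array(vis)

# A single unit point source at the phase centre: every visibility equals 1
L, S = 16, 0.02
image = np.zeros((L, L))
image[L // 2, L // 2] = 1.0               # pixel where l = m = 0 exactly
uvw = np.array([[100.0, -50.0, 3.0], [0.0, 0.0, 0.0], [250.0, 40.0, -7.0]])
vis = dme_predict(image, uvw, S)
assert np.allclose(vis, 1.0)
```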
The vector of observed visibilities $\hat{\mathbf{y}}$ is the sum of the vector of model visibilities $\mathbf{y}$ and the noise vector $\boldsymbol{\eta}$.
Because the noise is Gaussian, the optimally reconstructed image $\hat{\mathbf{x}}$ is a least squares fit to the observed data $\hat{\mathbf{y}}$:
\begin{equation}
\hat{\mathbf{x}} = \argmin_\mathbf{x} \|\mathbf{\Sigma}^{-1/2}(\mathbf{A}\mathbf{x} - \hat{\mathbf{y}}) \|^2 \label{eg:costfunction}
,\end{equation}
where $\mathbf{\Sigma}$ is the noise covariance matrix, assumed to be diagonal, with $\sigma^2_{k}$ on the diagonal.
The solution is well known and given by
\begin{equation}
\hat{\mathbf{x}} = \left(\mathbf{A}^\mathsf{H}\mathbf{\Sigma}^{-1}\mathbf{A}\right)^{-1}\mathbf{A}^\mathsf{H}\mathbf{\Sigma}^{-1}\hat{\mathbf{y}}
.\end{equation}
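As a small self-contained check (our addition, with arbitrary toy dimensions and random data), the weighted least-squares estimate $(\mathbf{A}^\mathsf{H}\mathbf{\Sigma}^{-1}\mathbf{A})^{-1}\mathbf{A}^\mathsf{H}\mathbf{\Sigma}^{-1}\hat{\mathbf{y}}$ minimizing \eqref{eg:costfunction} can be compared against a generic solver applied to the whitened system:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 40, 6                            # visibilities, image pixels (toy sizes)
A = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))
x_true = rng.normal(size=N)
sigma2 = rng.uniform(0.5, 2.0, size=K)  # per-visibility noise variances
Sigma_inv = np.diag(1.0 / sigma2)
y_hat = A @ x_true + np.sqrt(sigma2) * (rng.normal(size=K) + 1j * rng.normal(size=K))

# Weighted least-squares estimate: (A^H Sigma^-1 A)^-1 A^H Sigma^-1 y_hat
x_wls = np.linalg.solve(A.conj().T @ Sigma_inv @ A, A.conj().T @ Sigma_inv @ y_hat)

# Same answer from whitening the system and using a generic solver
W = np.diag(1.0 / np.sqrt(sigma2))
x_ref, *_ = np.linalg.lstsq(W @ A, W @ y_hat, rcond=None)
assert np.allclose(x_wls, x_ref)
```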
In practice the matrices are too large to directly evaluate this equation.
Even if it could be computed, the result would be of poor quality, because the matrix $\mathbf{A}^\mathsf{H}\mathbf{\Sigma}^{-1}\mathbf{A}$ is usually ill-conditioned.
Direct inversion is avoided by reconstructing the image in an iterative manner.
Additional constraints and/or a regularization are applied, either explicitly or implicitly.
Most, if not all, of these iterative procedures need the derivative of cost function \eqref{eg:costfunction} to compute the update.
This derivative is given by
\begin{equation}
\mathbf{A}^{\mathsf{H}}\mathbf{\Sigma}^{-1}\left(\hat{\mathbf{y}} - \mathbf{A}\mathbf{x}\right) \label{eq:derivative}
.\end{equation}
In this equation, the product $\mathbf{A}\mathbf{x}$ can be interpreted as the model visibilities $\mathbf{y}$ for model image $\mathbf{x}$.
The difference then becomes $\hat{\mathbf{y}}-\mathbf{y}$, which can be interpreted as the residual visibilities.
Finally the multiplication of $\hat{\mathbf{y}}$ or $(\hat{\mathbf{y}}-\mathbf{y})$ by $\mathbf{A}^\mathsf{H}\mathbf{\Sigma}^{-1}$ computes
the dirty, or residual image, respectively.
This is equivalent to the Direct Imaging Equation (DIE), in the literature often denoted by the misnomer\footnote{See footnote on p. 128 of \citet{NRAO1999} on why DFT is a misnomer for this equation.} Direct Fourier Transform (DFT):
\begin{equation}
\hat{I}[i,j] = \sum_{k=0}^{K-1} e^{\mathrm{j}2\pi\left(u_k l_i + v_k m_j + w_k n'_{ij} \right)} A_k^*(l_i,m_j) \gamma_k \hat{y}_k \label{eq:imaging}
,\end{equation}
where $\gamma_k$ is the weight. The weight can be set to $1/\sigma_k^2$ (the entries of the main diagonal of $\mathbf{\Sigma}^{-1}$) for natural weighting, minimizing the noise,
but often other weighting schemes are used, making a trade-off between noise and resolution.
Evaluation of the equations above is still expensive. Because the ME is close to a Fourier transform,
the equations can be evaluated far more efficiently by employing the FFT.
To use the FFT, the measurements need to be put on a regular grid by gridding.
Gridding is a (re)sampling operation in the $uv$ domain that causes aliasing in the image domain, and must therefore be preceded by a filtering operation.
The filter is a multiplication by a taper $c(l,m)$ in the image domain, suppressing everything outside the area to be imaged.
This operation is equivalent to a convolution in the $uv$ domain by $C(u,v)$, the Fourier transform of the taper.
Let the continuous representation of the observed visibilities after filtering be given by
\begin{equation}
\widetilde{V}(u,v) = \sum_{k=0}^{K-1} \hat{y}_{k} \delta\left(u - u_{k},v-v_{k}\right) \ast C\left(u,v\right) \label{eq:gridding}
.\end{equation}
Now the gridded visibilities are given by:
\begin{equation}
\widehat{V}^{L \times L}[i,j] = \widetilde{V}(u_i, v_j) \quad \text{for } 0 \le i,j < L.
\end{equation}
The corresponding image is given by:
\begin{equation}
\widehat{I}^{L \times L} = \mathcal{F}(\widehat{V}^{L \times L}) / c^{L \times L}. \label{eq:avg_beam_correction}
\end{equation}
The division by $c^{L \times L}$ in \eqref{eq:avg_beam_correction} is to undo the tapering of the image by the gridding kernel.
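To make the gridding step concrete, here is a deliberately stripped-down sketch (our addition, not the implementation of \citet{Veenboer2017}): visibilities are assumed to fall exactly on grid points, the kernel support is 3 pixels, and kernel oversampling and the $w$ term are ignored:

```python
import numpy as np

def grid_visibilities(vis, uv, L, kernel, du=1.0):
    """Convolutional gridding: each visibility is smeared onto the L x L grid
    with a small kernel. Assumes each visibility lands at least half a kernel
    support away from the grid edge."""
    M = kernel.shape[0]              # kernel support in pixels (odd)
    half = M // 2
    grid = np.zeros((L, L), dtype=complex)
    for y, (u, v) in zip(vis, uv):
        iu = int(round(u / du)) + L // 2
        iv = int(round(v / du)) + L // 2
        grid[iu - half:iu + half + 1, iv - half:iv + half + 1] += y * kernel
    return grid

# Toy example: a 3-pixel triangular kernel (stand-in for a prolate spheroidal)
k1 = np.array([0.25, 0.5, 0.25])
kernel = np.outer(k1, k1)            # separable 2D kernel, sums to 1
L = 32
vis = np.array([1.0 + 0.5j, -0.3j])
uv = np.array([[4.0, -2.0], [0.0, 7.0]])
grid = grid_visibilities(vis, uv, L, kernel)
# With a unit-sum kernel, the gridded total equals the sum of the visibilities
assert np.isclose(grid.sum(), vis.sum())
```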
The degridding operation can be described by
\begin{equation}
y_k \gets \left( V(u,v) \ast C_{k}(u,v) \right)(u_k,v_k),
\label{eq:degridding}
\end{equation}
where $C_k(u,v)$ is the gridding kernel and $V(u,v)$ is the continuous representation of grid $V^{L \times L}$:
\begin{equation}
V(u,v) = \sum_{q=0}^{L-1}\sum_{r=0}^{L-1} \delta(u-u_q, v-v_r) V[q,r].
\end{equation}
Grid $V^{L \times L}$ is the discrete Fourier transform of model image $I^{L \times L}$ scaled by $\tilde{c}^{L \times L}$:
\begin{equation}
V \gets \mathcal{F}(I^{L \times L}/\tilde{c}^{L \times L})
\end{equation}
In this form, the reduction in computation cost by the transformation to the $uv$ domain is not immediately apparent.
However, the support of the gridding kernel is rather small, making the equations sparse, and hence cheap to evaluate.
The kernel is the Fourier transform of the window function $c_k(l,m)$.
In the simplest case the window is a taper independent of time index $k$, $c_k(l,m) = b(l,m)$.
The convolution by the kernel in the $uv$ domain applies a multiplication by the window in the image domain.
This suppresses the side-lobes but also affects the main lobe.
A well-behaved window goes towards zero near the edges. At the edges, the correction is unstable and that
part of the image must be discarded. The image needs to be somewhat larger than the region of interest.
The cost of evaluating Eqs. \eqref{eq:gridding} and \eqref{eq:degridding} is determined by the support and the cost of evaluating $C_k$.
The support is the size of the region for which $C_k$ is non-negligible.
Often $C_k$ is precomputed on an over-sampled grid, because then only lookups are needed while gridding.
In some cases $C_k$ can be evaluated directly, but often only an expression in the image domain for $c_k$ is given.
The convolution functions are then computed by evaluating the window functions on a grid that samples the combined image domain effect at least at the Nyquist rate, that is,
the number of pixels, $M$, must be at least as large as the support of the convolution function:
\begin{equation}
c^{M\times M}[i,j] = c(l_i, m_j) \quad \text{for } 0 \leq i,j < M
.\end{equation}
This grid is then zero padded by the oversampling factor $N$ to the number of pixels of the oversampled convolution function $MN \times MN$:
\begin{equation}
C^{MN \times MN} = \mathcal{F}(\mathcal{Z}^{MN \times MN}({c^{M \times M}}))
,\end{equation}
where $l_i = -S/2 + iS/M$, $m_j = -S/2 + jS/M$, and $\mathcal{Z}^{MN \times MN}$ is the zero-padding operator extending a grid to size $MN \times MN$.
Since a convolution in the $uv$ domain is a multiplication in the image domain, other effects that have the form of a multiplication in the image domain can
be included in the gridding kernel as well.
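The zero-padding construction above can be checked numerically. The following sketch is ours, not part of the original text: a Kaiser window stands in for the actual taper, and the sizes are illustrative. It verifies that zero padding only interpolates between the original kernel samples, since every $N$th sample of the oversampled kernel equals the corresponding non-oversampled one.

```python
import numpy as np

# Oversampling a gridding kernel by zero padding the image-domain
# window (1D for brevity). Assumption: a Kaiser window stands in for
# the taper c; M and N are illustrative sizes.
M, N = 8, 4                     # window support and oversampling factor
c = np.kaiser(M, 8.0)           # image-domain window

C0 = np.fft.fft(c)              # kernel sampled at integer uv offsets
C = np.fft.fft(c, n=M * N)      # zero padding -> oversampled kernel

# every N-th sample of the oversampled kernel is an original sample
print(np.max(np.abs(C[::N] - C0)))
```

The strided samples agree to machine precision, which is exactly why the oversampled lookup table can replace on-the-fly kernel evaluation.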
\subsection{W-projection}
In \cite{Cornwell2005} W-projection is introduced.
This method includes the effect of the non coplanar baselines in the convolution function. The corresponding window function is given by
\begin{equation}
c(l,m) = b(l,m) e^{2\pi \mathrm{j} w n'}.
\end{equation}
This correction depends on a single parameter only, the $w$ coordinate.
The convolution functions for a set of $w$ values can be precomputed, and while gridding the nearest $w$ coordinate is selected.
The support of the W-term kernel can become very large, which makes W-projection expensive.
It can be reduced by either W-stacking \citep{Humphreys2011} or W-snapshots \citep{Cornwell2012}.
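The growth of the W-term support can be illustrated numerically. In the sketch below (our illustration; the field of view, stand-in taper, $w$ values, and threshold are arbitrary choices, not taken from the text) the kernel width is measured as the number of $uv$ pixels above a threshold in the Fourier transform of the tapered $w$ chirp:

```python
import numpy as np

# The w term exp(2j*pi*w*n') is a chirp across the field of view; its
# Fourier transform, the w kernel, widens as w grows. Illustrative
# sizes: field of view S (direction cosines), Hann stand-in taper.
M = 512
S = 0.2
l = (np.arange(M) - M / 2) * S / M
taper = np.hanning(M)                     # stand-in window b(l)

def kernel_width(w, thresh=1e-3):
    n1 = np.sqrt(1 - l**2) - 1            # n' = sqrt(1 - l^2) - 1
    cw = taper * np.exp(2j * np.pi * w * n1)
    C = np.fft.fftshift(np.fft.fft(cw))
    mag = np.abs(C) / np.abs(C).max()
    return np.count_nonzero(mag > thresh) # uv pixels above threshold

print(kernel_width(0), kernel_width(10000))   # support grows with w
```

The width for large $w$ is dominated by the spread of the chirp's instantaneous frequency, which grows linearly with $w$; this is the scaling that motivates W-stacking and W-snapshots.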
\subsection{A-projection}
A further refinement was introduced in \cite{Bhatnagar2008}, to include the antenna beam as well:
\begin{equation}
c(l,m) = b(l,m) e^{2\pi \mathrm{j} w n'} g_p(l,m) g_q^{*}(l,m),
\end{equation}
where $g_p(l,m)$ is the voltage reception pattern of the $p$th antenna.
As long as a convolution function is used to sample many visibilities, the relative cost of computing the convolution function is small.
However, for low-frequency instruments with a wide field of view, both the A term and the W term vary over short time scales.
The computation of the convolution kernels then dominates the cost of the actual gridding.
The algorithm presented in the following section is designed to overcome this problem by circumventing the need to compute the kernels altogether.
\begin{figure*}[!tbp]
\includegraphics[width=.45\textwidth]{GriddingENFinal-img003.png}
\includegraphics[width=.45\textwidth]{GriddingENFinal-img004.png}
\caption{\textit{Left}: Track in $uv$ domain for a single baseline and multiple channels. The boxes indicate the position of the subgrids. The bold box corresponds to the bold samples.
\textit{Right}: Single subgrid (box) encompassing all affected pixels in the $uv$ grid. The support of the convolution function is indicated by the circles around the samples.
}
\label{fig:subgrid}
\end{figure*}
\section{Image domain gridding}
\label{sec:imagedomaingridding}
In this section we present a new method for gridding and degridding. The method is derived from the continuous equations because the results
follow more intuitively than in the discrete form. Discretization introduces some errors, but in the following section the accuracy
of the algorithm is shown to be at least as good as classical gridding.
\subsection{Gridding in the image domain}
Computing the convolution kernels is expensive because they are oversampled.
The kernels need to be oversampled because they need to be shifted to a continuous position in the $uv$ domain.
The key idea behind the new algorithm is to pull part of the gridding operation to the image domain, instead of transforming a zero-padded window function to the $uv$ domain.
In the image domain the continuous $uv$ coordinate of a visibility can be represented by a phase gradient, even if the phase gradient is sampled.
The convolution is replaced by a multiplication of a phase gradient by a window function.
Going back to the image domain seems to defy the reasoning behind processing the data in the $uv$ domain in the first place.
Transforming the entire problem back to the image domain will only bring us back to the original direct imaging problem.
The key to an efficient algorithm is to realize that direct imaging is inefficient for larger images, because of the scaling by the number of pixels.
But for smaller images (in number of pixels) the difference in computational cost between gridding and direct imaging is much smaller.
For very short baselines (small $uv$ coordinates) the full field can be imaged with only a few pixels because the resolution is low.
This can be done fairly efficiently by direct imaging.
Below we introduce a method that makes low-resolution images for the longer baselines too, by partitioning the visibilities first in groups of nearby samples,
and then shifting these groups to the origin of the $uv$ domain. Below we show that these low-resolution images can then be combined in the $uv$
domain to form the final high-resolution image.
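The correspondence between a $uv$ position and an image-domain phase gradient can be made concrete with a small numerical check (sizes and the integer test position below are our illustrative choices): the centered FFT of a sampled fringe $e^{2\pi\mathrm{j}(u_0 l + v_0 m)}$ concentrates all energy at pixel $(u_0, v_0)$ of the $uv$ grid.

```python
import numpy as np

# A sampled phase gradient in the image domain represents a position
# in the uv domain: transforming the fringe exp(2j*pi*(u0*l + v0*m))
# puts all energy at (u0, v0); for integer (u0, v0) it lands exactly
# on one pixel. L, u0, v0 are illustrative.
L = 16
l = (np.arange(L) - L // 2) / L          # image coordinates, S = 1
u0, v0 = 3, -5                           # uv position in grid cells
fringe = np.exp(2j * np.pi * (u0 * l[:, None] + v0 * l[None, :]))

# centered forward transform via FFT
V = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(fringe))) / L**2

peak = np.unravel_index(np.argmax(np.abs(V)), V.shape)
print(peak, np.abs(V[peak]))             # pixel (u0 + L/2, v0 + L/2), amplitude 1
```

For fractional $(u_0, v_0)$ the energy spreads over neighboring pixels, which is precisely the aliasing that the tapering window must suppress.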
\subsection{Partitioning}
The partitioning of the data is done as follows. See Figure \ref{fig:uvtrack} for a typical distribution of data points in the $uv$ domain.
Due to the rotation of the Earth, the orientation of the antennas changes over time, causing the data points to lie along tracks.
Parallel tracks are for observations with the same antenna pair, but at different frequencies.
Figure \ref{fig:subgrid}a shows a close up where the individual data points are visible.
A selection of data points, limited in time and frequency, is highlighted. A tight box per subset around the affected grid points in the $uv$ grid is shown.
The size of the box is determined as shown in Figure \ref{fig:subgrid}b, where the circles indicate the support of the convolution function.
The data is partitioned into $P$ blocks.
Each block contains data for a single baseline, but multiple timesteps and channels.
The visibilities in the $p$th group are denoted by $y_{pk} \text{ for } k \in \{0,\dots,K_p-1\}$, where $K_p$ is the number of visibilities in the block.
The support of the visibilities within a block falls within a box of $L_p \times L_p$ pixels.
We refer to the set of pixels in this box as a subgrid. The position of the central pixel of the $p$th subgrid is given by $(u_{0p},v_{0p})$.
The position of the top-left corner of the subgrid in the master grid is denoted by $(i_{0p}, j_{0p})$.
We note that the visibilities are being partitioned here, not the master $uv$ grid. Subgrids may overlap and the subgrids do not necessarily cover the entire master grid.
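A minimal sketch of this partitioning step is given below. The $uv$ track model, block length, and sizes are our illustrative assumptions, not values from the text; the check confirms that within each block the spread of the samples plus the kernel support fits in an $L_p \times L_p$ subgrid.

```python
import numpy as np

# Partitioning sketch: visibilities of one baseline are cut into
# blocks of consecutive timesteps (all channels), and each block gets
# a subgrid center such that all samples plus kernel support fit.
# Track model and sizes are illustrative.
T, F = 64, 16                 # timesteps and channels
Lp, beta = 32, 7              # subgrid size and kernel support

t = np.linspace(0, 0.4, T)[:, None]
scale = np.linspace(1.0, 1.05, F)[None, :]      # frequency scaling of uv
u = 300 * np.cos(t) * scale                     # toy elliptical uv track
v = 200 * np.sin(t) * scale

blocks = []
step = 8                                        # timesteps per block
for i in range(0, T, step):
    ub, vb = u[i:i+step], v[i:i+step]
    u0 = int(round((ub.min() + ub.max()) / 2))  # subgrid center
    v0 = int(round((vb.min() + vb.max()) / 2))
    fits = (ub.max() - ub.min() + beta <= Lp and
            vb.max() - vb.min() + beta <= Lp)
    blocks.append((u0, v0, fits))

print(all(f for (_, _, f) in blocks))           # every block fits a subgrid
```

In practice the block length in time and frequency would be chosen adaptively from the local $uv$ velocity, so that the boxes stay within the subgrid size.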
\subsection{Gridding equation in the image domain}
A shift from $(u_{0p},v_{0p})$ to the origin can be written as a convolution by the Dirac delta function $\delta\left(u+u_{0p},v+v_{0p}\right)$.
Partitioning the gridding equation \eqref{eq:gridding} into groups and factoring out the shift for the central pixel leads to
\begin{align}
\begin{aligned}
\widehat{V}(u,v) = \sum_{p=1}^{P} \Big( & \delta\left(u-u_{0p},v-v_{0p}\right) \ast \\
& \sum_{k=1}^{K_p}
\begin{aligned}[t]
& y_{pk} \delta\left(u+u_{0p}, v+v_{0p}\right) \ast \\
& \delta\left(u - u_{pk},v-v_{pk}\right) \ast C_{pk}\left(u,v\right) \Big).
\end{aligned}
\end{aligned}
\end{align}
The shifts in the inner and outer summation cancel each other, leaving only the shift in the original equation \eqref{eq:gridding}.
Now define subgrid $\widehat{V}_{p}(u,v)$ as the result of the inner summation in the equation above
\begin{align}
& \widehat{V}_p(u, v) = \nonumber \\
& \sum_{k=1}^{K_p} y_{pk}\delta\left(u + u_{0p} - u_{pk} ,v+v_{0p}-v_{pk} \right) \ast C_{pk}\left(u,v\right).
\end{align}
The $uv$ grid $\widehat{V}$ is then a summation of shifted subgrids $\widehat{V}_p$
\begin{align}
\widehat{V}(u,v) = \sum_{p=1}^{P} \delta\left(u-u_{0p},v-v_{0p}\right) \ast \widehat{V}_p(u,v).
\end{align}
Now we define the subgrid image $\widehat{I}_p(l,m)$ as the inverse Fourier transform of $\widehat{V}_p$.
The subgrids $\widehat{V}_p$ can then be computed by first computing $\widehat{I}_p(l,m)$ and then transforming it to the $uv$ domain.
The equation for the subgrid image, $\widehat{I}_p(l,m)$, can be found from its definition:
\begin{align}
\begin{aligned}
\widehat{I}_p(l,m) & = \mathcal{F}^{-1}\left(\widehat{V}_p(u,v)\right) \\
&
\begin{aligned}
= \sum_{k=1}^{K_p} & \Big( y_{pk} e^{2\pi\mathrm{j}\left(\left(u_{pk}-u_{0p}\right)l + \left(v_{pk} - v_{0p} \right) m + w_{pk}n\right)} \\
& c_{pk}\left(l,m\right)\Big).
\end{aligned}
\label{eq:ShiftedDirectImaging}
\end{aligned}
\end{align}
This equation is very similar to the direct imaging equation \eqref{eq:imaging}. An important difference is the shift towards the origin making the remaining terms $\left(u_{pk}-u_{0p}\right)$ and
$\left(v_{pk}-v_{0p}\right)$ much smaller than the $u_k$ and $v_k$ in the original equation.
That means that the discrete version of this equation can be sampled with far fewer pixels. In fact, the image $\widehat{I}_p\left(l,m\right)$ is critically sampled when the number of pixels equals the size of the enclosing box in the $uv$ domain. A denser sampling is not needed, since the Fourier transform of a more densely sampled image would only contain near-zero values in the region outside the enclosing box.
The sampled versions of the subgrid and subgrid image are denoted by $\widehat{V}_p[i,j]$ and $\widehat{I}_p[i,j]$ respectively.
Because $\widehat{I}_p\left(l,m\right)$ can be sampled on a grid with far fewer samples than the original image, it is not particularly expensive to compute
$\widehat {V}_p(u,v)$ by first computing a direct image using \eqref{eq:ShiftedDirectImaging} and then applying the FFT.
The subgrid $\widehat{V}_p(u,v)$ can then be added to the master grid. The final image $\widehat{I}$ is then the inverse Fourier transform of $\widehat{V}$ divided by the root mean square (rms) window $\overline{c}(l,m)$.
Discretization of the equations above leads to Algorithm \ref{alg:gridding} and \ref{alg:degridding} for gridding and degridding, respectively.
\begin{figure}[tbh]
\begin{algorithm}[H]
\caption{Image domain gridding}\label{alg:gridding}
\begin{algorithmic}[0]
\LineComment{In: \parbox[t]{6cm}{
$\{y_{pk}\}$ visibilities \\
$\{u_{pk}\}$, $\{v_{pk}\}$, $\{w_{pk}\}$ : uvw-coordinates \\
$P, L, \{K_p\}, \{L_p\}$: dimensions \\
$\{c^{L_p \times L_p}_{pk}\}$ : image domain kernels \\
$\bar{c}^{L \times L}$: rms image domain kernel}}
\LineComment{Out: $I^{L \times L}$ image}
\LineComment{Initialize grid to zero:}
\State{$V^{L\times L} \gets 0$}
\LineComment{Iterate over data blocks:}
\For{p in $0 \ldots P-1$}
\LineComment{Initialize subgrid to zero:}
\State{$I_p^{L_p \times L_p} \gets 0$}
\LineComment{iterate over data within block:}
\For{k in $0 \ldots K_p-1$}
\State $u \gets u_{pk} - u_{0p}$
\State $v \gets v_{pk} - v_{0p}$
\State $w \gets w_{pk} - w_{0p}$
\LineComment{iterate over pixels in subgrid:}
\For{i in $0 \ldots L_p - 1$}
\For{j in $0 \ldots L_p - 1$}
\State $l \gets -\frac{S}{2} + \frac{i\,S}{L_p}$
\State $m \gets -\frac{S}{2} + \frac{j\,S}{L_p}$
\State $n' \gets \sqrt{1 - l^2 - m^2} - 1$
\State
\begin{varwidth}[t]{\linewidth}
$I_p \left[i,j\right] \gets I_p\left[i,j\right] + $ \par
\hskip\algorithmicindent $e^{2\pi\mathrm{j}\left(ul + vm + wn'\right)} c_{pk}^{*}[i,j] y_{pk}$
\end{varwidth}
\EndFor
\EndFor
\EndFor
\LineComment{Transform subgrid to $uv$ domain:}
\State $V_{p} \leftarrow \operatorname{FFT}(I_p)$
\LineComment{Add subgrid to master grid:}
\For{ $i$ in $0 \ldots L_p - 1$}
\For{$j$ in $0 \ldots L_p - 1$}
\State
\begin{varwidth}[t]{\linewidth}
$V\left[i+i_{0p},j + j_{0p}\right] \gets$ \par
\hskip\algorithmicindent $V\left[i+i_{0p}, j + j_{0p}\right] + V_{p}[i,j]$
\end{varwidth}
\EndFor
\EndFor
\EndFor
\LineComment{Transform grid and apply inverse rms taper:}
\State{$I^{L \times L} \gets \operatorname{FFT}(V^{L \times L}) / \bar{c}^{L \times L}$}
\end{algorithmic}
\end{algorithm}
\end{figure}
\begin{figure}[tbh]
\begin{algorithm}[H]
\caption{Image domain degridding}\label{alg:degridding}
\begin{algorithmic}[0]
\LineComment{In: \parbox[t]{6cm}{
$I^{L \times L}$ image \\
$\{u_{pk}\}$, $\{v_{pk}\}$, $\{w_{pk}\}$ : uvw-coordinates \\
$L, \{K_p\}, \{L_p\}$: dimensions \\
$\{c^{L_p \times L_p}_{pk}\}$ : image domain kernels \\
$\bar{c}^{L \times L}$: rms image domain kernel}}
\LineComment{Out: $\{y_{pk}\}$ visibilities}
\LineComment{Apply inverse rms taper to entire image:}
\State $I \gets I/\bar{c}$
\LineComment{Fourier transform entire image:}
\State $V \gets \mathrm{FFT}(I)$
\LineComment{Iterate over data blocks:}
\For{p in $0 \ldots P-1$}
\LineComment{initialize subgrid from master grid:}
\For{i in $0\ldots L_p - 1$}
\For{j in $0\ldots L_p - 1$}
\State $V_p[i,j] \gets V[i+i_{0p},j+j_{0p}]$
\EndFor
\EndFor
\LineComment{Transform subgrid to image domain:}
\State $I_p \gets \operatorname{IFFT}(V_p)$
\For{k in $0 \ldots K_p-1$}
\State $\Delta u \gets u_{pk} - u_{0p}$
\State $\Delta v \gets v_{pk} - v_{0p}$
\State $\Delta w \gets w_{pk} - w_{0p}$
\State $y_{pk} \gets 0$
\For{i in $0 \ldots L_p - 1$}
\For{j in $0 \ldots L_p - 1$}
\State $l \leftarrow -\frac{S}{2} + \frac{i\,S}{L_p}$
\State $m \leftarrow -\frac{S}{2} + \frac{j\,S}{L_p}$
\State $n' \leftarrow \sqrt{1-l^2-m^2} - 1$
\State
\begin{varwidth}[t]{\linewidth}
$y_{pk} \gets y_{pk} + $ \par
\hskip\algorithmicindent $e^{-2\pi\mathrm{j} \left(\Delta u l + \Delta v m + \Delta w n'\right)} c_{pk} \left[i,j\right] I_p \left[ i,j \right]$
\end{varwidth}
\EndFor
\EndFor
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
\end{figure}
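The gridding path of Algorithm \ref{alg:gridding} can be exercised end to end on a toy problem. The Python sketch below is our simplified rendering, not the authors' implementation: it uses a trivial taper ($c \equiv 1$), $w \equiv 0$, integer $uv$ coordinates (so that the partitioning identity is exact and no anti-aliasing window is needed), and explicit centered DFT matrices in place of the FFT for clarity. The result is compared against direct evaluation of the imaging sum.

```python
import numpy as np

# Toy image-domain gridding: trivial taper (c == 1), w == 0, integer
# uv coordinates, explicit centered DFTs instead of FFTs. All sizes
# and coordinates are illustrative.
rng = np.random.default_rng(0)
L, Lp = 32, 8                   # master grid and subgrid size (S = 1)

def dft_matrix(n, sign):
    # entries exp(sign*2j*pi*u_q*l_i), u_q = q - n/2, l_i = (i - n/2)/n
    u = np.arange(n) - n // 2
    l = (np.arange(n) - n // 2) / n
    return np.exp(sign * 2j * np.pi * np.outer(u, l))

# two blocks of visibilities with integer offsets around their centers
blocks = []
for (u0, v0) in [(-6, 4), (7, -5)]:
    du = rng.integers(-Lp // 2 + 1, Lp // 2, size=5)
    dv = rng.integers(-Lp // 2 + 1, Lp // 2, size=5)
    y = rng.normal(size=5) + 1j * rng.normal(size=5)
    blocks.append((u0, v0, u0 + du, v0 + dv, y))

# gridding: per block, accumulate phase gradients in a small image,
# transform the subgrid to the uv domain, add it to the master grid
V = np.zeros((L, L), complex)
Fp = dft_matrix(Lp, -1)
lp = (np.arange(Lp) - Lp // 2) / Lp
for (u0, v0, u, v, y) in blocks:
    Ip = np.zeros((Lp, Lp), complex)
    for uk, vk, yk in zip(u, v, y):
        Ip += yk * np.exp(2j * np.pi * ((uk - u0) * lp[:, None]
                                        + (vk - v0) * lp[None, :]))
    Vp = Fp @ Ip @ Fp.T / Lp**2          # subgrid image -> subgrid
    i0 = u0 - Lp // 2 + L // 2           # top-left corner in master grid
    j0 = v0 - Lp // 2 + L // 2
    V[i0:i0 + Lp, j0:j0 + Lp] += Vp

# master grid -> image (no taper correction needed for c == 1)
Fi = dft_matrix(L, +1)
I_idg = Fi.T @ V @ Fi

# reference: direct evaluation of the imaging sum on the full grid
li = (np.arange(L) - L // 2) / L
I_dir = np.zeros((L, L), complex)
for (_, _, u, v, y) in blocks:
    for uk, vk, yk in zip(u, v, y):
        I_dir += yk * np.exp(2j * np.pi * (uk * li[:, None]
                                           + vk * li[None, :]))

print(np.max(np.abs(I_idg - I_dir)))     # agreement at machine precision
```

The integer-coordinate restriction is what makes the comparison exact; with fractional coordinates the sinc-interpolation side lobes appear and the tapering window analyzed in the next section becomes necessary.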
\subsection{Variations}
The term for the `convolution function' $c_{pk} \left[i,j\right]$ is kept very generic here by giving it an index $k$, allowing a different value for each sample.
Often the gain term, included in $c_{pk}$, can be assumed constant over many data points.
The partitioning of the data can be done such that only a single $c_p$ for each block is needed.
The multiplication by $c_p$ can then be pulled outside the loop over the visibilities, reducing the number of operations in the inner loop.
In the polarized case, each antenna consists of two components, each measuring a different polarization; the visibilities are 2$\times$2 matrices and the gain $g$ is described by a 2$\times$2 Jones matrix.
For this case the algorithm is not fundamentally different. Scalar multiplications are substituted by matrix multiplications, effectively adding an extra loop over the different polarizations of the data, and an extra loop over the differently polarized images.
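As a small illustration of this scalar-to-matrix substitution (with arbitrary example numbers, not data from the paper), a polarized visibility is a $2\times2$ coherency matrix multiplied from both sides by the antenna Jones matrices:

```python
import numpy as np

# Scalar gains become 2x2 Jones matrices in the polarized case:
# the observed visibility is V_pq = G_p B G_q^H for true coherency B.
# All numbers are arbitrary illustrations.
rng = np.random.default_rng(1)
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Gp = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Gq = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

V = Gp @ B @ Gq.conj().T        # corrupted (observed) visibility

# undoing the gains recovers the true coherency
B_rec = np.linalg.inv(Gp) @ V @ np.linalg.inv(Gq.conj().T)
print(np.max(np.abs(B_rec - B)))   # agreement at machine precision
```

In the gridder this simply replaces the scalar multiply in the inner loop by a $2\times2$ matrix product per visibility.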
\begin{table*}[tp]
\caption{Level of aliasing for different gridding methods and kernel sizes}
\label{tab:error_idg}
\centering
\begin{tabular}{cccccccc}
\hline\hline
\multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{Classical} & \multicolumn{6}{c} {Image domain gridding} \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{PSWF} & \multicolumn{1}{c}{$L$=8} & \multicolumn{1}{c}{$L$=16} & \multicolumn{1}{c}{$L$=24} & \multicolumn{1}{c}{$L$=32} & \multicolumn{1}{c}{$L$=48} & \multicolumn{1}{c}{$L$=64} \\
\hline
3.0 & 3.33e-02 & 4.63e-02 & \textbf{2.87e-02} & \textbf{2.25e-02} & \textbf{1.92e-02} & \textbf{1.54e-02} & \textbf{1.32e-02} \\
5.0 & 1.68e-03 & 4.60e-03 & 2.59e-03 & 1.94e-03 & \textbf{1.60e-03} & \textbf{1.25e-03} & \textbf{1.06e-03} \\
7.0 & 7.96e-05 & 2.99e-04 & 1.67e-04 & 1.25e-04 & 1.03e-04 & 7.89e-05 & \textbf{6.62e-05} \\
9.0 & 3.68e-06 & & 9.08e-06 & 6.96e-06 & 5.78e-06 & 4.45e-06 & 3.71e-06 \\
11.0 & 1.68e-07 & & 4.82e-07 & 3.55e-07 & 2.98e-07 & 2.33e-07 & 1.95e-07 \\
13.0 & 7.58e-09 & & 2.70e-08 & 1.79e-08 & 1.47e-08 & 1.16e-08 & 9.83e-09 \\
15.0 & 3.40e-10 & & 1.45e-09 & 9.15e-10 & 7.25e-10 & 5.61e-10 & 4.78e-10 \\
\hline
\end{tabular}
\tablefoot{Level of aliasing for classical gridding with the PSWF and for image domain gridding with the optimal window for different subgrid sizes $L$, and different kernel sizes $\beta$.
Numbers in bold indicate where image domain gridding has lower aliasing than classical gridding.
The numbers in this table were generated with code using mpmath (1), a Python library for arbitrary-precision floating-point arithmetic.}
\tablebib{(1) \citet{mpmath}}
\end{table*}
\section{Analysis}
\label{sec:analysis}
In the previous section, the image domain gridding algorithm is derived rather intuitively without considering the effects of sampling and truncation,
except for the presence of a still-unspecified anti-aliasing window $c[i,j]$.
In this section, the output of the algorithm is analyzed in more detail. The relevant metric here is the difference between the result of direct imaging/evaluation
and gridding/degridding, respectively. This difference, or gridding error, is due solely to the side lobes of the anti-aliasing window.
These side lobes are caused by the limited support of the gridding kernel. An explicit expression for the error will be derived in terms of the anti-aliasing window.
Minimization of this expression leads directly to the optimal window, and corresponding error. This completes the analysis of accuracy of image domain gridding,
except for the effect of limited numerical precision, which was found in practice not to be a limiting factor.
For comparison we summarize the results on the optimal anti-aliasing window for classical gridding known in the literature.
For both classical and image domain gridding the error can be made arbitrarily small by selecting a sufficiently large kernel.
It is shown below that both methods reach comparable performance for equal kernel sizes.
Conversely, to reach a given level of performance both methods need kernels of about the same size.
\subsection{Optimal windows for classical gridding}
We restrict the derivation of the optimal window in this section to a one-dimensional (1D) window $f(x)$.
The spatial coordinate is now $x$, replacing the $l,m$ pair in the 2D case, and normalized such that the region to be imaged, or the main lobe, is given by $-1/2 \leq x < 1/2$.
The 2D windows used later on are a simple product of two 1D windows, $c(l,m) = f(l/S)f(m/S)$, and it is assumed that optimality is mostly preserved.
\citet{Brouw1975} uses as criterion for the optimal window that it maximizes the energy in the main lobe relative to the total energy:
\begin{equation}
f_{\mathrm{opt}} = \argmax_{f} \frac{\int_{-1/2}^{1/2}\|f(x)\|^2\,\mathrm{d}x}{\int_{-\infty}^{\infty}\|f(x)\|^2\,\mathrm{d}x},
\end{equation}
under the constraint that its support in the $uv$ domain is not larger than a given kernel size $\beta$.
This optimization problem was already known in other contexts. In \cite{Slepian1961} and \cite{Landau1961} it is shown that it can be written as an eigenvalue problem. The solution is the prolate spheroidal wave function (PSWF).
The normalized energy in the side lobes is defined by
\begin{equation}
\varepsilon^2 = \frac{\int_{-\infty}^{-1/2}\|f(x)\|^2\,\mathrm{d}x + \int_{1/2}^{+\infty}\|f(x)\|^2\,\mathrm{d}x}{\int_{-\infty}^{\infty}\|f(x)\|^2\,\mathrm{d}x}.
\end{equation}
For the PSWF, the energy in the side lobes is related to the eigenvalue:
\begin{equation}
\varepsilon^2_{\textsc{\tiny PSWF}} = 1 - \lambda_0(\alpha),
\end{equation}
where $\lambda_0(\alpha)$ is the first eigenvalue and $\alpha = \beta\pi/2$. The eigenvalue is given by:
\begin{equation}
\lambda_0(\alpha) = \frac{2\alpha}{\pi}\left[R_{00}(\alpha,1)\right]^2,
\end{equation}
where $R_{mn}(c, \eta)$ is the radial prolate spheroidal wave function \citep[ch. 21]{Abramowitz1964}.
The second column of Table \ref{tab:error_idg} shows the aliasing error $\varepsilon$ for different $\beta$.
The required kernel size can be found by looking up the smallest kernel that meets the desired level of performance.
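The concentration problem can be reproduced numerically in its discrete form, the discrete prolate spheroidal (Slepian) sequences. The sketch below is a discrete stand-in for the continuous PSWF computation, not the method used in the paper; the size and time-bandwidth product are illustrative choices that roughly mimic a width-7 kernel.

```python
import numpy as np

# Discrete Slepian (DPSS) eigenproblem: the length-n sequence that
# maximizes its energy concentration in a band of half-width W is the
# dominant eigenvector of the sinc kernel matrix. n and W are
# illustrative; n*W = 3.5 roughly mimics a width-7 kernel.
n = 129
W = 3.5 / n
idx = np.arange(n)
d = idx[:, None] - idx[None, :]
with np.errstate(divide='ignore', invalid='ignore'):
    A = np.sin(2 * np.pi * W * d) / (np.pi * d)
A[d == 0] = 2 * W                       # diagonal: sin(x)/x limit

lam, vecs = np.linalg.eigh(A)           # ascending eigenvalues
lam0 = lam[-1]                          # concentration of the best window
print(lam0, 1.0 - lam0)                 # 1 - lambda_0 = side-lobe energy
```

The dominant eigenvector is the Slepian window and $1 - \lambda_0$ plays the role of $\varepsilon^2$ above; the eigenvalue is extremely close to one, consistent with the small aliasing levels in the table.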
\subsection{Effective convolution function in image domain gridding}
In classical gridding, the convolution by a kernel in the $uv$ domain effectively applies a window in the image domain.
In image domain gridding, the convolution by a kernel is replaced by a multiplication on a small grid in the image domain
by a discrete taper $c[i,j]$. Effectively this applies a (continuous) window on the (entire) image domain, like in classical gridding.
Again the 2D taper is chosen to be a product of two 1D tapers:
\begin{equation}
c[i,j] = a_i a_j.
\end{equation}
The 1D taper is described by the set of coefficients $\{a_k\}$.
It can be shown that the effective window is a sinc interpolation of the discrete window.
The interpolation however is affected by the multiplication by the phase gradient corresponding to the position shift from
the subgrid center, $\Delta u, \Delta v$. For the 1D analysis, we use a single parameter for the position shift, $s$.
\begin{equation}
f(x,s) = \sum_{k=0}^{L-1} a_k z_k(s) \operatorname{sinc}(L(x - x_k)) z^{*}(x,s), \label{eq:effective_window}
\end{equation}
where $z(x,s)$ is the phase rotation corresponding to the shift $s$, and $\operatorname{sinc}(x)$ is the normalized sinc function defined by
\begin{equation}
\operatorname{sinc}(x) = \frac{\sin (\pi x)}{\pi x}.
\end{equation}
Phasor $z(x,s)$ is given by:
\begin{equation}
z(x,s) = e^\frac{\mathrm{j} 2\pi x s}{L}.
\end{equation}
The sample points are given by $x_k = -1/2 + k/L$. The phasor at the sample points is given by $z_k(s) = z(x_k, s)$.
Although the gradients cancel each other exactly at the sample points, the effect of the gradient can still be seen in the side lobes.
The larger the gradient, the larger the ripples in the sidelobes, as can be seen in Figure \ref{fig:optimal_window}.
Larger gradients correspond to samples further away from the subgrid center.
The application of the effective window can also be represented by a convolution in the $uv$ domain, whereby the kernel depends on the position of the sample within the subgrid.
Figure \ref{fig:convfunc}a shows the convolution kernel for different position shifts. For samples away from the center the convolution kernel is asymmetric.
That is because each sample affects all points in the sub-grid, and not just the surrounding points as in classical gridding.
Samples away from the center have more neighboring samples on one side than the other.
In contrast to classical gridding, the convolution kernel in image domain gridding has side lobes.
These side lobes cover the pixels that fall within the sub-grid, but outside the main lobe of the convolution kernel.
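The behavior described above can be verified directly from Eq. \eqref{eq:effective_window}: sample the window, sinc-interpolate with the phasors, and measure the side-lobe level as a function of the shift. In the sketch below the discrete taper is an illustrative Kaiser window and the phasor is taken as $z(x,s)=e^{2\pi\mathrm{j} x s}$ with $s$ in $uv$-grid cells, which is an assumed normalization.

```python
import numpy as np

# Side-lobe level of the effective window f(x, s) versus the shift s
# away from the subgrid center. Assumptions: Kaiser taper stands in
# for the optimal window; z(x, s) = exp(2j*pi*x*s), s in uv cells.
L = 32
xk = -0.5 + np.arange(L) / L            # sample points
a = np.kaiser(L, 10.0)                  # illustrative discrete taper

x = np.linspace(-2, 2, 4001)            # fine evaluation grid
side = np.abs(x) > 0.5                  # side-lobe region

def sidelobe_rms(s):
    zk = np.exp(2j * np.pi * xk * s)    # phasor at the sample points
    f = np.sinc(L * (x[:, None] - xk[None, :])) @ (a * zk)
    f *= np.exp(-2j * np.pi * x * s)    # conjugate phasor, cancels at x_k
    return np.sqrt(np.mean(np.abs(f[side]) ** 2))

print(sidelobe_rms(0), sidelobe_rms(12))   # ripples grow with the shift
```

At the sample points $|f(x_k, s)| = a_k$ for every $s$; only between and beyond the samples does the modulation leak into the side lobes, which is the effect seen in Figure \ref{fig:optimal_window}.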
\subsection{Optimal window for image domain gridding}
The cost function that is minimized by the optimal window is the mean square of the side lobes of the effective window.
Because the effective window depends on the position within the sub-grid, the mean is also taken over all allowed positions.
For a convolution kernel with main lobe width $\beta$, the shift away from the sub-grid center $\|s\|$ cannot be more than $(L-\beta+1)/2$, because beyond that the main lobe wraps around far enough to touch the first pixel on the other side.
This effect can be seen in Figure \ref{fig:convfunc}b. The samples in the center have a low error.
The further the sample is from the center, the larger is the part of the convolution kernel that wraps around, and the larger are the side lobes of the effective window.
The cost function to be minimized is given by:
\begin{align}
\varepsilon^2 = \int_{-(L-\beta-1)/2}^{(L-\beta-1)/2} & \left( \int_{-\infty}^{-0.5} \|f(x,s)\|^2\,\mathrm{d}x \right. + \nonumber \\
& \left. \int_{.5}^{\infty} \|f(x,s)\|^2\,\mathrm{d}x \right )\mathrm{d}s. \label{eq:error}
\end{align}
In the Appendix an $L \times L$ matrix $\mathbf{\overline{R}}$ is derived such that the error can be written as:
\begin{equation}
\varepsilon^2 = \mathbf{a}^{H} \mathbf{\overline{R}} \mathbf{a}
,\end{equation}
where $\mathbf{a} = \left[\begin{array}{ccc} a_0 & \dots & a_{L-1} \end{array}\right]^{\mathrm{T}}$ is a vector containing the window's coefficients.
The minimization problem:
\begin{equation}
\mathbf{a}_{opt} = \argmin_{\mathbf{a}} \varepsilon^2(\mathbf{a}) = \argmin_{\mathbf{a}} \mathbf{a}^{H} \mathbf{\overline{R}} \mathbf{a}
\label{eq:a_opt}
,\end{equation}
can be solved by an eigenvalue decomposition of $\mathbf{\overline{R}}$,
\begin{equation}
\mathbf{\overline{R}} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^{\mathrm{H}}
.\end{equation}
The smallest eigenvalue $\lambda_{L-1}$ gives the side lobe level $\varepsilon$ for the optimal window $\mathbf{a}_{opt} = \mathbf{u}_{L-1}$.
The shape of the convolution kernel is a consequence of computing the cost function as the mean over a range of allowed shifts $-(L-\beta+1)/2 \leq s \leq (L-\beta+1)/2$.
The minimization of the cost function leads to a convolution kernel with a main lobe that is approximately $\beta$ pixels wide, but this width is not enforced by any other means than through the cost function.
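The analytic construction of $\mathbf{\overline{R}}$ is given in the Appendix; a brute-force numerical version can nevertheless illustrate the procedure. The sketch below builds the quadratic form by quadrature over the side-lobe region and the allowed shifts, with an assumed phasor normalization $z(x,s)=e^{2\pi\mathrm{j} x s}$ ($s$ in $uv$-grid cells) and illustrative quadrature grids, and takes the eigenvector of the smallest eigenvalue as the window:

```python
import numpy as np

# Brute-force optimal window: build R = mean of psi psi^H over the
# side-lobe region and allowed shifts, then take the eigenvector of
# the smallest eigenvalue. Phasor normalization and quadrature grids
# are our assumptions; the analytic R is in the paper's appendix.
L, beta = 16, 7
xk = -0.5 + np.arange(L) / L
x = np.concatenate([np.linspace(-3, -0.5, 800),
                    np.linspace(0.5, 3, 800)])     # side-lobe region
smax = (L - beta + 1) / 2
ss = np.linspace(-smax, smax, 41)                  # allowed shifts

R = np.zeros((L, L), complex)
for s in ss:
    # f(x,s) = sum_k a_k psi_k(x,s): accumulate the outer products
    psi = (np.exp(2j * np.pi * xk * s)[None, :]
           * np.sinc(L * (x[:, None] - xk[None, :]))
           * np.exp(-2j * np.pi * x * s)[:, None])
    R += psi.conj().T @ psi
R /= len(ss) * len(x)

lam, U = np.linalg.eigh(R)
a_opt = U[:, 0]                     # eigenvector of smallest eigenvalue
eps_opt = np.sqrt(max(lam[0].real, 0.0))

# any other unit-norm window, e.g. a boxcar, scores no better
box = np.ones(L) / np.sqrt(L)
eps_box = np.sqrt((box @ R @ box).real)
print(eps_opt, eps_box)
```

By the Rayleigh-quotient property the eigenvector attains the minimum of $\mathbf{a}^{H}\mathbf{\overline{R}}\mathbf{a}$ over all unit-norm windows, which is the optimality claimed above.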
Table \ref{tab:error_idg} shows the error level for various combinations of sub-grid size $L$ and width $\beta$.
For smaller sub-grids the aliasing for image domain gridding is somewhat higher than for classical gridding with the PSWF,
but, perhaps surprisingly, for larger sub-grids the aliasing is lower.
This is not in contradiction with the PSWF being the convolution function with the lowest aliasing for a given support size.
The size of the effective convolution function of image domain gridding is $L$, the width of the sub-grid, even though the main lobe has only size $\beta$.
Apparently the side lobes of the convolution function contribute a little to alias suppression.
In the end, the exact error level is of little importance. One can select a kernel size that meets the desired performance.
Table \ref{tab:error_idg} shows that kernel size in image domain gridding will not differ much from the kernel size required in classical gridding.
For a given kernel size there exists a straightforward method to compute the window.
In practice the kernel for classical gridding is often sampled. In that case, the actual error is larger than derived here.
Image domain gridding does not need a sampled kernel and the error level derived here is an accurate measure for the level reached in practice.
\begin{figure}
\includegraphics[width=.48\textwidth]{optimal_window.pdf}
\caption{Effective window depending on the position of the sample within the sub-grid. The lowest side lobes are for a sample in the center of the sub-grid.
The higher side lobes for samples close to the edge are caused by the phase gradient corresponding to a shift away from the center.}
\label{fig:optimal_window}
\end{figure}
\begin{figure*}[!t]
\includegraphics[width=.45\textwidth, trim = 0pt 15pt 0pt 0pt, clip=true]{cf_shifted.pdf}
\includegraphics[width=.45\textwidth]{error_vs_shift.pdf}
\caption{\textit{Left}: Effective convolution function for samples at different positions within the sub-grid;
\textit{top left}: sample at the center of the sub-grid; \textit{left middle}: sample at the leftmost position within the sub-grid before the main lobe wraps around;
\textit{left bottom}: sample at the rightmost position within the sub-grid before the main lobe wraps around.
\textit{Right}: Gridding error as a function of position of the sample within the sub-grid. Close to the center of the sub-grid the error changes little with position.
The error increases quickly with distance from the center immediately before the maximum distance is reached.}
\label{fig:convfunc}
\end{figure*}
\section{Application to simulated and observed data}
\label{sec:simulations}
In the previous section it was shown that by proper choice of the tapering window the accuracy of image domain gridding is at least as good as classical gridding.
The accuracy at the level of individual samples was measured based on the root mean square value of the side lobes of the effective window.
In practice, images are made by integration of very large datasets.
In this section we demonstrate the validity of the image domain gridding approach by applying the algorithm in a realistic scenario to
both simulated and observed data, and comparing the result to the result obtained using classical gridding.
\subsection{Setup}
The dataset used is part of a LOFAR observation of the ``Toothbrush'' galaxy cluster by \cite{vanWeeren2017}.
For the simulations this dataset was used as a template to generate visibilities with the same metadata as the preprocessed visibilities in the dataset.
The pre-imaging processing steps of flagging, calibration and averaging in time and frequency had already been performed.
The dataset covers ten LOFAR sub-bands, whereby each sub-band is averaged down to two channels, resulting in 20 channels covering the frequency range \SIrange{130}{132}{\mega\hertz}.
The observation included 55 stations, where the shortest baseline is \SI{1}{\kilo\metre} and the longest is \SI{84}{\kilo\metre}.
In time, the data was averaged to intervals of \SI{10}{seconds}.
The observation lasted \SI{8.5}{hours}, resulting in 3122 timesteps, and, excluding autocorrelations, 4636170 rows in total.
The imager used for the simulation is a modified version of WSClean \citep{Offringa2014}.
The modifications allow the usage of the implementation of image domain gridding by \cite{Veenboer2017} instead of classical gridding.
\subsection{Performance metrics}
Obtaining high-quality radio astronomical images requires deconvolution.
Deconvolution is an iterative process. Some steps in the deconvolution cycle are approximations.
Not all errors thus introduced necessarily limit the final accuracy that can be obtained.
In each following iteration, the approximations in the previous iterations can be corrected for.
The computation of the residual image however is critical.
If the image exactly models the sky then the residual image should be noise only, or zero in the absence of noise.
Any deviation from zero sets a hard limit on the attainable dynamic range.
The dynamic range can also be limited by the contribution of sources outside the field of view.
Deconvolution will not remove this contribution. The outside sources show up in the image
through side lobes of the point spread function (PSF) around the actual source, and as alias inside the image.
The aliases are suppressed by the anti-aliasing taper.
The PSF is mainly determined by the $uv$ coverage and the weighting scheme, but the gridding method has some effect too.
A well behaved PSF allows deeper cleaning per major cycle, reducing the number of major cycles.
The considerations above led to the following metrics for the evaluation of the image domain gridding algorithm:
\begin{enumerate}
\item level of the side lobes of the PSF;
\item root mean square level of the residual image of a simulated point source, where the model image and the model to generate the visibilities are an exact match;
\item the rms level of a dirty image of a simulated source outside the imaged area.
\end{enumerate}
\subsection{Simulations}
The simulation was set up as follows.
An empty image of 2048$\times$2048 pixels was generated with cell size of \SI{1}{arcsec}.
A single pixel at position (1000,1200) in this image was set to \SI{1.0}{Jy}.
The visibilities for this image were computed using three different methods: 1) Direct evaluation of the ME, 2) classical degridding, and 3) image domain degridding.
For classical gridding we used the default WSClean settings: a Kaiser-Bessel (KB) window of width 7 and oversampling factor 63.
The KB window is easier to compute than the PSWF, but its performance is practically the same.
For image domain gridding a rather large sub-grid size of 48 $\times$ 48 pixels was chosen.
A smaller sub-grid size could have been used if the channels had been partitioned into groups, but this was not yet implemented.
The image domain gridder ran on a NVIDIA GeForce 840M, a GPU card for laptops.
The CPU is a dual core Intel i7 (with hyperthreading) running at \SI{2.60}{\giga \hertz} clockspeed.
The runtime is measured in two ways: 1) At the lowest level, purely the (de)gridding operation and 2) at the highest level, including all overhead.
The low-level gridding routines report their runtime and throughput. WSClean reports the time spent in gridding, degridding, and deconvolution.
The gridding and degridding times reported by WSClean include the time spent in the large-scale FFTs and reading and writing the data and any other overhead.
The speed reported by the gridding routine was \SI{4.3}{\mega visibilities \per \second}.
For the $20 \times 4636170 \approx$ \SI{93}{\mega visibilities} in the dataset, the gridding time is \SI{22}{\second}.
The total gridding time reported by WSClean was \SI{72}{\second}.
The total runtime for classical gridding was \SI{52}{\second} for Stokes I only, and \SI{192}{\second} for all polarizations.
The image domain gridder always computes all four polarizations.
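As a quick consistency check of the quoted throughput numbers (the factor 20 is assumed here to count channels):

```python
rows = 4_636_170        # rows in the dataset, excluding autocorrelations
channels = 20           # the factor 20 in "20 x 4636170" (presumably channels)
vis = channels * rows   # total number of visibilities
speed = 4.3e6           # visibilities per second reported by the gridding routine

print(vis / 1e6)        # ~92.7 Mvis, i.e. the quoted 93 Mvisibilities
print(vis / speed)      # ~21.6 s, consistent with the quoted 22 s
```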
Figure \ref{fig:psf} shows the PSF. In the main lobe the difference between the two methods is small. The side lobes
for the image domain gridder are somewhat (5\%) lower than for the classical gridder. Although in theory this affects the cleaning depth per major cycle, we do not expect
such a small difference to have a noticeable impact on the convergence and total runtime of the deconvolution.
A much larger difference can be seen in the residual visibilities in Figure \ref{fig:residual_plot} and the residual image in Figure \ref{fig:residual_image}.
The factor-18 lower noise in the residual image means an increase of the dynamic range limit by that factor.
This increase will of course only be realized when the gridding errors are the limiting factor.
In Figure \ref{fig:outside-fov} a modest 2\% better suppression of an outlier source is shown. This will have little impact on the dynamic range.
\subsection{Imaging observed data}
This imaging job was run on one of the GPU nodes of LOFAR Central Processing cluster.
This node has two Intel(R) Xeon(R) E5-2630 v3 CPUs running at \SI{2.40}{GHz}.
Each CPU has eight cores. With hyperthreading each core can run 2 threads simultaneously.
All in all, 32 threads can run in parallel on this node.
The node also has four NVIDIA Tesla K40c GPUs. Each GPU has a compute power of 4.29 Tflops (single precision).
The purpose of this experiment is to measure the run time of an imaging job large enough to make a reasonable extrapolation to a full-size job.
This is not a demonstration of the image quality that can be obtained, because that requires a more involved experiment.
For example, direction-dependent corrections are applied, but they were filled with identity matrices.
Their effect is seen in the runtime, but not in the image quality.
The dataset is again the ``toothbrush'' dataset used also for the simulations.
The settings are chosen to image the full field of LOFAR at the resolution for an observation including all remote stations (but not the international stations).
The image computed is 30000$\times$30000 pixels with a cell size of \SI{1.2}{arcsec} per pixel.
After imaging, 10\% was clipped on each side, resulting in a 24000$\times$24000 pixel image, or 8\degr$\times$8\degr.
The weighting scheme used is Briggs' weighting, with the robustness parameter set to 0.
The cleaning threshold is set to \SI{100}{mJy}, resulting in four iterations of the major cycle.
Each iteration takes about 20 minutes.
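The image geometry stated above can be verified with a few lines of arithmetic (a sketch; the values are taken from the text):

```python
npix = 30_000                        # imaged pixels per side
cell = 1.2                           # arcsec per pixel
kept = npix - 2 * int(0.10 * npix)   # 10% clipped on each side
fov_deg = kept * cell / 3600         # arcsec -> degrees

print(kept, fov_deg)                 # 24000 8.0
```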
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{psf.png}
\caption{PSF for the classical gridder (blue) and the image domain gridder (green) on a logarithmic scale.
The main lobes are practically identical. The first side lobes are slightly lower for the image domain gridder.
There are some differences in the further (lower) sidelobes as well, but without a consistent pattern.
The rms value over the entire image, except the main lobe, is about 5\% lower for image domain gridding than for classical gridding.}
\label{fig:psf}
\end{figure}
\begin{figure*}[!tbp]
\includegraphics[width=0.33\textwidth]{predicted_vis.png}
\includegraphics[width=0.33\textwidth]{abserror1.png}
\includegraphics[width=0.33\textwidth]{abserror2.png}
\caption{\textit{Left}: Real value of visibilities for a point source as predicted by direct evaluation of the ME, and degridding by the classical gridder and image domain gridder.
The visibilities are too close together to distinguish in this graph. \textit{Middle, right}: Absolute value of the difference between direct evaluation and degridding
for a short (1km) and a long (84km) baseline. On the short baseline the image domain gridder rms error of \SI{1.03e-05}{Jy} is about \num{242} times lower than the classical gridder rms error of \SI{2.51e-03}{Jy}.
On the long baseline the image domain gridder rms error of \SI{7.10e-04}{Jy} is about seven times lower than the classical gridder error of \SI{4.78e-03}{Jy}.
}
\label{fig:residual_plot}
\end{figure*}
\begin{figure*}[!tbp]
\includegraphics[width=0.45\textwidth]{residual-wsclean.png}
\includegraphics[width=0.45\textwidth]{residual-idg.png}
\caption{Residual image for the classical gridder in wsclean (left) and the image domain gridder (right).
The color-scale for both images is the same, ranging from \SI{-1.0e-05}{Jy \per beam} to \SI{1.0e-05}{Jy \per beam}. The rms value of the
area in the box centered on the source is about \num{19} times lower for image domain gridding (\SI{7.6e-06}{Jy \per beam}) than for classical gridding (\SI{1.3e-04}{Jy \per beam}).
The rms value over the entire image is about \num{17} times lower for image domain gridding (\SI{1.1e-06}{Jy \per beam}) than for classical gridding (\SI{2.1e-05}{Jy \per beam}).
\label{fig:residual_image}
\end{figure*}
\begin{figure*}[!tbp]
\includegraphics[width=0.45\textwidth]{outside-fov-1-dirty.png}
\includegraphics[width=0.45\textwidth]{outside-fov-1-idg-dirty.png}
\caption{Image of simulated data of a source outside the field of view with classical gridding (left) and image domain gridding (right).
The color-scale for both images is the same, ranging from \SI{-1.0e-03}{Jy \per beam} to \SI{1.0e-03}{Jy \per beam}.
Position of the source is just outside the image to the north. The aliased position of the source within the image is indicated by an `X'.
The image is the convolution of the PSF with the actual source and all its aliases.
In the image for the classical gridder (left) the PSF around the alias is just visible. In the image for the image domain gridder (right) the alias is
almost undetectable. The better alias suppression has little effect on the overall rms value since this is dominated by the side lobes of the PSF around the actual source.
The rms value over the imaged area is 2\% lower for image domain gridding (\SI{1.35e-03}{Jy \per beam}) than for classical gridding (\SI{1.38e-03}{Jy \per beam}).
}
\label{fig:outside-fov}
\end{figure*}
\begin{figure*}[!tbp]
\includegraphics[width=\textwidth, trim = 105pt 370pt 90pt 0, clip=true]{tb-20k.pdf}
\caption{Large image (20000 $\times$ 20000 pixel) of the toothbrush field. The field of view is 22\textdegree $\times$ 22\textdegree at a resolution of \SI{4}{arcsec} per pixel.
Cleaned down to \SI{100}{mJy} per beam, taking four major cycles.}
\label{fig:toothbrush}
\end{figure*}
\section{Conclusions \& future work}
The image domain gridding algorithm is designed for the case where the cost of computing the gridding kernels is a significant part of the total cost of gridding.
It eliminates the need to compute a (sampled) convolution kernel by directly working in the image domain.
This not only eliminates the cost of computing a kernel, but is also more accurate compared to using an (over)sampled kernel.
Although the computational cost of the new algorithm is higher in pure operation count than classical gridding, in practice
it performs very well. On some (GPU) architectures it is even faster than classical gridding, even when the cost of computing the convolution functions for the latter is not included.
This is a large step forward, since it is expected that for the Square Kilometre Array (SKA), the cost of computing the convolution kernels will dominate the total cost of gridding.
Both in theory and simulation, it has been shown that image domain gridding is at least as accurate as classical gridding as long as a good taper is used.
The optimal taper has been derived.
The originally intended purpose of image domain gridding, fast application of time and direction dependent corrections,
has not yet been tested, as the corrections for the tests in this paper have been limited to identity matrices.
The next step is to use image domain gridding to apply actual corrections.
Another possible application of image domain gridding is calibration.
In calibration, a model is fitted to observed data. This involves the computation of residuals and derivatives.
These can be computed efficiently by image domain gridding whereby the free parameters are the A-term.
This would allow fitting directly for an A-term in the calibration step, using a full image as a model.
\begin{appendix}
\section{Derivation of the optimal window}
The optimal window is derived by writing out the expression for the mean energy in the side lobes in terms of coefficients $a_k$.
This expression contains a double integral: one integral is over the extent of the side lobes, and one over all allowed positions in the sub-grid.
The double integral can be expressed in terms of special functions. The expression for the mean energy then reduces to a weighted vector norm,
where the entries of the weighting matrix are given in terms of the special functions.
The minimization problem can then readily be solved by singular value decomposition.
The square of the effective window given in \eqref{eq:effective_window} is
\begin{equation}
f^2(x,s) = \sum_{k=0}^{L-1} \sum_{l=0}^{L-1} a_k a_l e^\frac{j 2\pi (k-l) s}{L} \operatorname{sinc}(x - k)\operatorname{sinc}(x - l).
\end{equation}
This can be written as a matrix product:
\begin{equation}
f^2(x,s) = \mathbf{a}^{H} \left( \mathbf{Q}(x) \odot \mathbf{S}(s) \right) \mathbf{a},
\end{equation}
where the elements of matrix $\mathbf{Q}(x)$ are given by $q_{kl} = \operatorname{sinc}(x - k)\operatorname{sinc}(x - l)$ and
the elements of matrix $\mathbf{S}(s)$ are given by $s_{kl} = e^\frac{j 2\pi (k-l) s}{L}$.
The equation for the error \eqref{eq:error} can now be written as
\begin{equation}
\varepsilon = \mathbf{a}^{H} \left( \mathbf{\overline{Q}} \odot \mathbf{\overline{S}} \right) \mathbf{a} = \mathbf{a}^{H} \left( \mathbf{\overline{R}} \right) \mathbf{a}
,\end{equation}
where $\mathbf{\overline{R}} = \mathbf{\overline{Q}} \odot \mathbf{\overline{S}}$ and
\begin{equation}
\mathbf{\overline{Q}} = \int_{-\infty}^{0} \mathbf{Q}(x)\,\mathrm{d}x + \int_{L}^{\infty} \mathbf{Q}(x)\,\mathrm{d}x
\label{eq:meanR}
,\end{equation}
and
\begin{equation}
\mathbf{\overline{S}} = \frac{1}{L-\beta+1}\int_{-(L-\beta+1)/2}^{(L-\beta+1)/2} \mathbf{S}(s)\,\mathrm{d}s
.\end{equation}
To evaluate the entries of $\mathbf{\overline{Q}}$ the following integral is needed:
\begin{equation}
\begin{split}
& \int \frac{\sin^2(\pi x)}{\pi^2(x^2 + kx)}\,\mathrm{d}x = \\
& \frac{1}{2\pi^2k}\left( \operatorname{Ci}(2\pi(k+x)) - \log(k+x) \right. \\
& \left. - \operatorname{Ci}(2\pi x) + \log(x) \right), \quad \forall k \in \mathbb{Z} \setminus \{0\}
\end{split}
,\end{equation}
where $\operatorname{Ci}(x)$ is the \emph{cosine integral}, a special function defined by
\begin{equation}
\operatorname{Ci}(x) = -\int_x^\infty \frac{\cos t}{t}\,\mathrm{d}t
.\end{equation}
The entries of matrix $\mathbf{\bar{S}}$ are given by:
\begin{equation}
\begin{split}
\bar{s}_{kl} & = \frac{1}{L-\beta+1}\int_{-(L-\beta+1)/2}^{(L-\beta+1)/2} e^\frac{j 2\pi (k-l) s}{L} \,\mathrm{d}s \\
& = \frac{1}{L-\beta+1}\left[ -\frac{jL}{2\pi (k-l)} e^\frac{j2\pi (k-l) s}{L} \right]_{-(L-\beta+1)/2}^{(L-\beta+1)/2} \\
& = \frac{L}{\pi (k-l)(L-\beta+1)} \sin(\pi (k-l) (L-\beta+1)/L) \\
& = \operatorname{sinc}((k-l)(L-\beta+1)/L)
\end{split}
.\end{equation}
\end{appendix}
\begin{acknowledgements}
This work was supported by the European Union, H2020 program, Astronomy ESFRI and Research Infrastructure Cluster (Grant Agreement number: 653477).
The first author would like to thank Sanjay Bhatnagar and others at NRAO, Soccoro, NM, US, for their hospitality and discussions on the A-projection algorithm in May 2011
that ultimately were the inspiration for the work presented in this paper. We would also like to thank A.-J. van der Veen for his thorough reading of and comments on a draft version.
\end{acknowledgements}
\bibliographystyle{aa}
2103.11589
\section{Introduction}
\label{sec:intro}
The vulnerability of neural networks to adversarial attack has been plaguing machine learning researchers ever since its discovery by Szegedy et al. \cite{szegedy2014intriguing}. In the years since, many research efforts have been geared towards making neural networks robust to adversarial perturbations, but many defense strategies have failed to stand the test of time. One of the strongest baselines for adversarial robustness that has repeatedly stood up to rigorous scrutiny is adversarial training \cite{goodfellow2015explaining}, in particular adversarial training based on the Projected Gradient Descent (PGD) attack strategy \cite{madry2018towards}. PGD adversarial training can be mathematically represented as a constrained inner min-max optimization, whose inner solution is a norm-bounded perturbation that maximizes the classification loss. This min-max optimization is a characteristic of many approaches to adversarially robust training, including other related formulations of the loss, such as TRADES \cite{zhang2019theoretically}.
Such networks have repeatedly withstood numerous evaluations, as shown in works like \cite{athalye2018obfuscated}. One concern is that the robust accuracy of such networks on test sets leaves much to be desired. For instance, state-of-the-art neural networks typically have less than 50\% accuracy-under-attack on the CIFAR-10 dataset. This low robust accuracy occurs despite the fact that the network is able to memorize the \textit{robust} ($\ell_\infty$-bounded) training distribution with nearly 100\% accuracy against its own attack model, as a result of PGD adversarial training. This points to a familiar problem for machine learning practitioners -- overfitting to the training set. As recent work has pointed out \cite{wong2020overfitting}, robust adversarial training is just as susceptible to ``overfitting'' as standard neural network training, because the robust training distribution is an incomplete subsampling of, or imperfect match to, the robust test distribution due to the finite size of the dataset.
Recent developments have also shown that adversarial vulnerability is a problem of networks severely overfitting to features that are imperceptibly small yet useful for classification \cite{ilyas2019adversarial}. In this view, adversarial training is an advanced form of worst-case data augmentation. It has been shown in \cite{xie2020adversarial} that, with certain domain adaptation tools, this view can usefully improve accuracy under various corruptions, including ``natural adversarial examples'' \cite{hendrycks2019natural}. Hence, improvements in adversarial training are useful not just for security purposes, but for a wide audience that desires robust classification to generalize more consistently and understandably.
Our approach targets the poor, overfitting-prone accuracy-under-attack of robust training. We frame together the effective mixup augmentation \cite{zhang2018mixup} introduced for standard (non-adversarial) training, and robust PGD adversarial optimization. We use adversarial optimization to pinpoint the locations in the interpolation space between datapoints where the classification decisions of the neural network are the least smooth, i.e., the points that adversarially maximize the KL divergence between the network's predictions and the smoothed label interpolation between datapoints.
Robust adversarial learning is susceptible to overfitting, and we show that data augmentation insights from standard training transfer well to adversarial optimization. Our contributions include:
\begin{itemize}
\item We show through intuitive geometry and empirical results that previous works integrating mixup and adversarial optimization were limited in their ability to probe the vicinal distribution to find worst-case points. It is important for the adversarial optimization to be able to fully find the worst-case points to learn from.
\item With ablation experiments we break down the optimization components that led our results to surpass the baselines.
\item Our approach demonstrates significant improvements in robust accuracy-under-attack against strong, state-of-the-art adversaries. We evaluate against an ensemble of state-of-the-art adversaries including a strong gradient-free black-box attack, demonstrating that our approach provides real improvements that do not introduce or rely on any gradient obfuscation.
\end{itemize}
The rest of this paper is organized as follows. In Section \ref{sec:relworks}, we summarize related work exploring mixup for data augmentation or adversarial robustness. We present the background of this work and the details of our approach in Section \ref{sec:approach}. Section \ref{sec:exp} presents the experimental results of our work, which are then comprehensively discussed in Section \ref{sec:discussion}. Finally, we conclude this paper and discuss future avenues of research in Section \ref{sec:conc}.
\section{Related Work}
\label{sec:relworks}
The concept of Mixup \cite{zhang2018mixup} was initially proposed as an introduction to Vicinal Risk Minimization to neural network training, that reduces overfitting by encouraging the network to make smooth, linearly interpolated predictions between datapoints. Manifold Mixup \cite{verma2019manifold} is a recent update that also performs such interpolations between intermediate features in the network, encouraging features throughout the model to smoothly interpolate between datapoints. However, neither approach is robust to multi-step adversarial attacks. Another related approach, Adversarial Vertex Mixup \cite{lee2020adversarial} considers first finding the adversarial perturbation point $X_{av}$ for each datapoint $X$, then using Mixup to mix $X$ and $X_{av}$ for the purpose of learning a label-smoothed distribution around $X$. However, this method does not make use of the valuable space \textit{between} different datapoints that can be learned with the original Mixup \cite{zhang2018mixup}, which is one of the key ideas behind the work presented in this paper.
Wong et al. \cite{wong2020overfitting} evaluated adversarial training with mixup and found reduced performance compared to a well-tuned PGD-only baseline. We refer to their implementation as the Baseline Algorithm \ref{algo:baseline1} (see Fig. \ref{figbaseline1}) and indeed found that it is not particularly effective. Mixup Inference \cite{pang2020mixup} uses Mixup in the inference phase by using minibatches at test time to improve adversarial robustness. However, concerns with this approach have been raised by \cite{tramer2020adaptive}. In our work, at inference time we do nothing but a standard deterministic prediction, which avoids such concerns. Guo et al. \cite{guo2019mixup} learn a prior distribution for the mixup ratio using the reparameterization trick, but their approach is not oriented for adversarial robustness. As a result, similarly to baseline Mixup and Manifold Mixup, this strategy will be vulnerable to worst-case perturbations.
Directional Adversarial Training \cite{archambault2019mixup} has been proposed for adversarially optimizing the mixup ratio between randomly sampled pairs. However, their approach will miss the space \textit{around} points (in small $\ell_\infty$-bound boxes) that adversarial robustness focuses on (in the same way that ordinary mixup is not robust to adversarial attack), and they do not try to evaluate for this case.
VarMixup \cite{mangla2020varmixup} uses a formulation of adversarial mixup to improve their generative VAE model, but their strongest results on adversarial classification are dependent upon Mixup Inference \cite{pang2020mixup}, the value of which has been disputed by Tramer et al. \cite{tramer2020adaptive}, who were able to attack through Mixup Inference, reducing it to no better than baseline PGD adversarial training. To avoid such concerns, we avoid using any inference-time ``tricks'' and evaluate our networks the same as the baseline PGD adversarial training \cite{madry2018towards}.
\vspace{-1mm}
\section{Approach}
\label{sec:approach}
In this section, we establish the required background and discuss the necessary details of our approach. We first frame the threat model we aim to defend against. Then, we walk through two baseline training schemes that use mixup and adversarial training before introducing our approach, in order to show by comparison the novelties of our approach. Additionally, we discuss a few nuances and optimization tools in our geometrical framework.
\vspace{-1mm}
\subsection{Threat Model}
We start this section by discussing the adversarial threat model considered in this work. We work with the threat model of $\ell_\infty$ norm bounded attacks, and use white-box evaluation. Our models are trained using adversarial optimization with an inner loop to find a worst-case data augmentation. Thereafter, at test time we transparently use the trained network weights for deterministic predictions. This follows the threat model of \cite{madry2018towards}. We use PGD adversarial training from \cite{madry2018towards} detailed as Algorithm \ref{algo:PGD} as our inner optimizer. After producing the adversarial images $x' = (x+\delta)$, the network is trained to make predictions on such images with a standard cross-entropy loss.
\begin{algorithm}
\caption{PGD Adversarial Optimization}
\label{algo:PGD}
\begin{algorithmic}
\REQUIRE { $f_{\theta}(\cdot)$: Neural network with parameters $\theta$ }
\REQUIRE { $\mathcal{L}(\cdot, \cdot)$: Loss (KL divergence)}
\REQUIRE { $D$: Training data with images $x$ and labels $y$ }
\REQUIRE { $\epsilon$: $\ \ell_\infty$ norm bound to constrain adversarial attack }
\REQUIRE { $\eta$: step scale for PGD update }
\FOR { $\{(x,y)\} \sim D $ }
\item $\delta \leftarrow U(-\epsilon,\epsilon) \quad\triangleright$ initialize adversarial perturbation
\FOR { PGD step }
\item $ \mathcal{L}_P = \mathcal{L}(f(x+\delta),y) $
\item $ g_\delta \leftarrow \nabla_\delta \mathcal{L}_P \quad\triangleright $ backpropagate gradients
\item $ \delta \leftarrow $ clamp $( \delta + \operatorname{sign}(g_\delta) \cdot \eta, -\epsilon, \epsilon) $
\ENDFOR
\item $ \mathcal{L}_a = \mathcal{L}(f(x+\delta),y) $
\item $ g_\theta \leftarrow \nabla_\theta \mathcal{L}_a \quad\triangleright $ backpropagate gradients
\item $ \theta \leftarrow $ Step$(\theta,g_\theta) \quad\triangleright$ parameter update
\ENDFOR
\end{algorithmic}
\end{algorithm}
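A framework-free sketch of the inner loop of Algorithm \ref{algo:PGD}, with a toy analytic gradient standing in for backpropagation through the network (all names are hypothetical):

```python
import math, random

def pgd_attack(x, grad_loss, eps, eta, steps):
    """Sign-gradient ascent on the loss; the perturbation is clamped to [-eps, eps]."""
    delta = [random.uniform(-eps, eps) for _ in x]      # random start in the box
    for _ in range(steps):
        g = grad_loss([xi + di for xi, di in zip(x, delta)])
        delta = [max(-eps, min(eps, di + eta * math.copysign(1.0, gi)))
                 for di, gi in zip(delta, g)]
    return delta

# Toy "loss" L(x') = sum(x'_k^2) with analytic gradient 2 x': PGD pushes every
# coordinate of the perturbation to the eps boundary away from zero.
delta = pgd_attack([0.5, -0.25], lambda xp: [2 * v for v in xp],
                   eps=0.1, eta=0.05, steps=10)
print(delta)  # saturates at [+0.1, -0.1]
```

In the real training loop, `grad_loss` is the backpropagated gradient of the KL divergence with respect to $\delta$, and the outer loop updates $\theta$ on the final adversarial loss.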
\vspace{-1mm}
\subsection{Baselines}
We first describe two baseline approaches that combine adversarial training and Mixup for producing robust classifiers. The first approach starts by choosing a mixing ratio $\lambda$ between two datapoints $(x_i,y_i)$ and $(x_j,y_j)$ and interpolating between them using $\lambda$ to form a virtual datapoint $(x_m,y_m)$. Following this, an adversarial perturbation $x_m'$ is optimized while being constrained around $x_m$, which is then used to train the network with the loss $L(f(x_m'),y_m)$. This is the formulation evaluated in \cite{wong2020overfitting}. An intuitive explanation of this approach is presented in Fig. \ref{figbaseline1}.
\begin{algorithm}
\caption{Baseline: Mixup, Then Attack; as in \cite{wong2020overfitting}}
\label{algo:baseline1}
\begin{algorithmic}
\REQUIRE { $f_{\theta}(\cdot)$: Neural network with parameters $\theta$ }
\REQUIRE { $\mathcal{L}(\cdot, \cdot)$: Loss (KL divergence)}
\REQUIRE { $D$: Training data with images $x$ and labels $y$ }
\REQUIRE { $\varepsilon$: $\ \ell_\infty$ norm bound to constrain adversarial attack }
\FOR { $\{(x_i,y_i),(x_j,y_j)\} \sim D $ }
\item $\lambda \sim B(\alpha, \alpha)$
\item $x_m = \lambda x_i + (1-\lambda) x_j \quad\triangleright $ mixup
\item $y_m = \lambda y_i + (1-\lambda) y_j \quad\triangleright $ mixup
\item $x_m' \leftarrow attack(x_m,y_m,\varepsilon) \quad\triangleright $ adversarial attack (PGD)
\item $ \mathcal{L}_a = \mathcal{L}(f(x_m'),y_m) $
\item $ g_\theta \leftarrow \nabla_\theta \mathcal{L}_a \quad\triangleright $ backpropagate gradients
\item $ \theta \leftarrow $ Step$(\theta,g_\theta) \quad\triangleright$ parameter update
\ENDFOR
\end{algorithmic}
\end{algorithm}
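The mixup step of Baseline Algorithm \ref{algo:baseline1} can be sketched in a few lines of Python using the standard library's Beta sampler (hypothetical two-pixel inputs and one-hot two-class labels); in the baseline, a PGD attack would then be run in the $\varepsilon$-box around $x_m$:

```python
import random

def mixup(xi, xj, yi, yj, alpha=1.0):
    """Sample lam ~ Beta(alpha, alpha) and interpolate inputs and one-hot labels."""
    lam = random.betavariate(alpha, alpha)
    xm = [lam * a + (1 - lam) * b for a, b in zip(xi, xj)]
    ym = [lam * a + (1 - lam) * b for a, b in zip(yi, yj)]
    return xm, ym, lam

xm, ym, lam = mixup([0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0])
print(lam, xm, ym)  # ym stays a valid (smoothed) label distribution
```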
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{mixup_then_attack.pdf}
\caption{Baseline: Mixup, then Attack. See Algorithm \ref{algo:baseline1}. Evaluated by \cite{wong2020overfitting}. The dotted lines indicate the PGD adversarial optimization procedure: the adversarial perturbation is constrained by $\ell_\infty$ norm to within the $\varepsilon$-box around $x_m$.}
\label{figbaseline1}
\end{figure}
For the second approach detailed in Fig. \ref{figbaseline2}, the datapoints $(x_i,y_i)$ and $(x_j,y_j)$ are adversarially perturbed to $(x_i',y_i)$ and $(x_j',y_j)$, respectively. This adversarial perturbation is done independently via the PGD optimization. After this step, we interpolate between $x_i'$ and $x_j'$ to form $x_m'$. The network is again trained with the loss $L(f(x_m'),y_m)$.
\begin{figure}[!t]
\centering
\includegraphics[width=3.3in]{interp_adv_training.pdf}
\caption{Baseline: Adversarially attack, then Mixup. This is the adversarial optimization approach of ``Interpolated Adversarial Training'' \cite{lamb2019interpolated}. The dotted lines indicate the PGD adversarial optimization procedure: the adversarial perturbations are constrained by $\ell_\infty$ norm to within the $\varepsilon$-boxes around $x_i$ and $x_j$.}
\vspace{-2mm}
\label{figbaseline2}
\end{figure}
\begin{algorithm}
\caption{Baseline: Attack, then Mixup: Interpolated Adversarial Training (ignoring the unperturbed loss $\mathcal{L}_c$) \cite{lamb2019interpolated}}
\label{algo:interpadvtrain}
\begin{algorithmic}
\REQUIRE { $f_{\theta}(\cdot)$: Neural network with parameters $\theta$ }
\REQUIRE { $\mathcal{L}(\cdot, \cdot)$: Loss (KL divergence)}
\REQUIRE { $D$: Training data with images $x$ and labels $y$ }
\REQUIRE { $\varepsilon$: $\ \ell_\infty$ norm bound to constrain adversarial attack }
\FOR { $\{(x_i,y_i),(x_j,y_j)\} \sim D $ }
\item $x_i' \leftarrow attack(x_i,y_i,\varepsilon) \quad\triangleright $ adversarial attack (PGD)
\item $\lambda \sim B(\alpha, \alpha)$
\item $x_m' = \lambda x_i' + (1-\lambda) x_j' \quad\triangleright $ mixup
\item $y_m = \lambda y_i + (1-\lambda) y_j \quad\triangleright $ mixup
\item $ \mathcal{L}_a = \mathcal{L}(f(x_m'),y_m) $
\item $ g_\theta \leftarrow \nabla_\theta \mathcal{L}_a \quad\triangleright $ backpropagate gradients
\item $ \theta \leftarrow $ Step$(\theta,g_\theta) \quad\triangleright$ parameter update
\ENDFOR
\end{algorithmic}
\end{algorithm}
The paper \cite{lamb2019interpolated} advocates training the neural network using the sum of the adversarial loss $\mathcal{L}_a$ from Baseline Algorithm \ref{algo:interpadvtrain} and the (mixup) loss on the pristine data $\mathcal{L}_c(f(x_m),y_m)$, where $(x_m,y_m)$ are computed as in Baseline Algorithm \ref{algo:baseline1}. Here, we focus on the adversarial optimization procedure targeting a worst-case loss comparable to $\mathcal{L}_a$, so as not to preclude the use of pristine (unperturbed) losses.
\vspace{-2mm}
\subsection{Integrated Adversarial Optimization \& Mixup}
Our approach is shown in Fig. \ref{figourmixup} and the initial implementation is detailed in Algorithm \ref{algo:ours}. We integrate mixup into the adversarial optimization. This will allow us to backpropagate and learn the mixing interpolation ratio $\lambda$, which we find to be beneficial. We will point out a geometrical quirk that compels us to develop a geometrical labeling fix.
Fig. \ref{figourmixup} shows geometrically that the volume of data space in which the adversarial perturbation is optimized is greater than in the previous baselines. We argue that this results in stronger adversarial learning, because the adversary is able to probe worst-case regions in the data space with far greater flexibility. A commonly observed trade-off exists between adversarial robustness and accuracy \cite{zhang2019theoretically}. From this perspective, since we are building a stronger adversary, we would expect to increase robustness, potentially at the cost of (hopefully slightly) decreasing the accuracy on pristine images. In many domains, including security and safety applications, the worst-case behavior is particularly concerning, so our goal is to improve the robustness against worst-case attacks.
\begin{figure}
\centering
\includegraphics[width=3.2in]{our_adv_mixup.pdf}
\caption{Our integrated adversarial mixup optimization. The dotted lines indicate our PGD optimization procedure: the adversarial perturbations are constrained by $\ell_\infty$ norm to within the $\varepsilon$-boxes around $x_i$ and $x_j$. Though we constrain the optimization to the boxes around $x_i$ and $x_j$, the PGD optimizer does not care about $f(x_i')$ or $f(x_j')$; its goal is to use $x_i'$ and $x_j'$ to tug on the wire that connects them, to find the point $x_m'$ in the whole space between that causes $f(x_m')$ to make an incongruous prediction (away from $y_m$). In Algorithm \ref{algo:ouroptimizeratio}, $\lambda$ is added to the adversarial optimization to better explore this interpolation space to find the worst $x_m'$. Not shown is how to produce label $y_m$, for which we will later propose distinguishing $\lambda$ for interpolations $\lambda_x$ and $\lambda_y$ for input $x_m'$ and label $y_m$, respectively.}
\vspace{-2mm}
\label{figourmixup}
\end{figure}
\begin{algorithm}
\caption{Adversarially Optimized Mixup}
\label{algo:ours}
\begin{algorithmic}
\REQUIRE { $f_{\theta}(\cdot)$: Neural network with parameters $\theta$ }
\REQUIRE { $\mathcal{L}(\cdot, \cdot)$: Loss (KL divergence)}
\REQUIRE { $D$: Training data with images $x$ and labels $y$ }
\REQUIRE { $\varepsilon$: $\ \ell_\infty$ norm bound to constrain adversarial attack }
\REQUIRE { $\eta$: step scale for PGD update }
\FOR { $\{(x_i,y_i),(x_j,y_j)\} \sim D $ }
\item $\lambda \sim B(\alpha, \alpha)$
\item $\lambda_x = \lambda_y = \lambda $
\item $\delta_i, \delta_j \leftarrow U(-\varepsilon,\varepsilon) \quad\triangleright$ initialize adversarial perturbations
\FOR { PGD step }
\item $x_m' = \lambda_x (x_i+\delta_i) + (1-\lambda_x) (x_j+\delta_j) \quad\triangleright $ mixup
\item $y_m = \lambda_y y_i + (1-\lambda_y) y_j \quad\triangleright $ mixup
\item $ \mathcal{L}_P = \mathcal{L}(f(x_m'),y_m) $
\item $ g_\delta \leftarrow \nabla_\delta \mathcal{L}_P \quad\triangleright $ backpropagate gradients
\item $ \delta \leftarrow $ clamp$( \delta + \operatorname{sign}(g_\delta) \cdot \eta, \ -\varepsilon,\ \varepsilon) $
\ENDFOR
\STATE $ \mathcal{L}_a = d(f(x_m'),y_m) \quad\triangleright$ adversarial loss at the optimized mixed point
\STATE $ g_\theta \leftarrow \nabla_\theta \mathcal{L}_a \quad\triangleright $ backpropagate gradients
\STATE $ \theta \leftarrow \mathrm{Step}(\theta,g_\theta) \quad\triangleright$ parameter update
\ENDFOR
\end{algorithmic}
\end{algorithm}
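To make the control flow concrete, the inner loop of Algorithm \ref{algo:ours} can be sketched in plain Python. This is a hypothetical toy, not our implementation: \texttt{toy\_net} and \texttt{sq\_loss} stand in for the network $f_\theta$ and the KL loss so that the gradient is analytic, whereas the real method backpropagates through the network.

```python
import math
import random

def toy_net(x):
    # hypothetical stand-in for the network f_theta: a fixed linear map
    return sum(x)

def sq_loss(pred, y):
    # hypothetical stand-in for the loss d(., .)
    return (pred - y) ** 2

def clamp(d, eps):
    # project a perturbation coordinate back into the l_inf ball of radius eps
    return max(-eps, min(eps, d))

def pgd_mixup(x_i, x_j, y_m, lam, eps, eta, steps):
    """Sketch of Algorithm 1's inner loop: perturb both endpoints, mix, ascend."""
    d_i = [random.uniform(-eps, eps) for _ in x_i]   # delta_i ~ U(-eps, eps)
    d_j = [random.uniform(-eps, eps) for _ in x_j]   # delta_j ~ U(-eps, eps)
    for _ in range(steps):
        # mixup of the two perturbed endpoints
        x_m = [lam * (a + da) + (1 - lam) * (b + db)
               for a, da, b, db in zip(x_i, d_i, x_j, d_j)]
        resid = toy_net(x_m) - y_m
        # analytic gradient of sq_loss wrt each perturbation coordinate
        g_i = 2.0 * resid * lam
        g_j = 2.0 * resid * (1.0 - lam)
        # signed ascent step, then projection (the clamp of Algorithm 1)
        d_i = [clamp(d + math.copysign(eta, g_i), eps) for d in d_i]
        d_j = [clamp(d + math.copysign(eta, g_j), eps) for d in d_j]
    x_m = [lam * (a + da) + (1 - lam) * (b + db)
           for a, da, b, db in zip(x_i, d_i, x_j, d_j)]
    return x_m, d_i, d_j
```

Under this toy model the signed, clamped ascent is monotone in the loss, which mirrors the role of the PGD steps in Algorithm \ref{algo:ours}.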
\vspace{-1mm}
\subsection{Independence of $\delta_i$, $\delta_j$}
In most implementations of mixup, including the current paper, datapoint pairs $[(x_i,y_i), (x_j,y_j)]$ come from a minibatch of examples, where $(x_j,y_j)$ is matched as a permutation of the same minibatch. (With small probability, then, $(x_i,y_i) = (x_j,y_j)$, which is equivalent to sampling an interpolation ratio $\lambda$ of 0 or 1.) This means that when optimizing the perturbations $\delta = (x'-x)$ that are added to the minibatch $x$, the same perturbation would be duplicated on the left and right sides of the mixing. Yet, in our formulation, the perturbations need to behave as ``puppeteers'' pulling on the interpolation that occurs along the taut string connecting the two adversarial points. For this reason, for the permuted right side, we allocate a copy of the minibatch inputs $\{x\}$ of shape $(M,...)$ for minibatch size $M$, and initialize perturbations $\{\delta\}$ of shape $(2M,...)$. This frees the puppeteering optimization to find the best interpolation point between all pairs without any duplication interference. In our ablation studies, the variant without this decoupling is labeled Shared $\delta$. It is important to note that our best model does not use the shared $\delta$ but allocates independently initialized $\delta_i$, $\delta_j$.
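A minimal sketch of this bookkeeping (with hypothetical variable names; our implementation operates on PyTorch tensors): a single perturbation per minibatch element necessarily places the same object on both sides of some pair, while a $2M$ allocation decouples the two sides.

```python
import random

M, D = 4, 3
random.seed(1)
x = [[random.random() for _ in range(D)] for _ in range(M)]
perm = list(range(M))
random.shuffle(perm)

# Shared delta: one perturbation per example, reused on both sides of the mix
shared = [[0.0] * D for _ in range(M)]
left_shared = [shared[i] for i in range(M)]          # perturbs x[i]
right_shared = [shared[perm[i]] for i in range(M)]   # perturbs x[perm[i]]

# Independent delta: 2M perturbations, left/right halves fully decoupled
indep = [[0.0] * D for _ in range(2 * M)]
left_indep = [indep[i] for i in range(M)]
right_indep = [indep[M + i] for i in range(M)]
```

With the shared allocation, the perturbation of example 0 appears both as the left side of pair 0 and as the right side of the pair to which the permutation assigns it, so one gradient tugs on both ends at once; the $2M$ allocation removes this interference.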
\vspace{-1mm}
\subsection{Geometric Label Mixing}
In Algorithm \ref{algo:ours}, the mixed label $y_m$ is assigned simply as the linear interpolation of the two source labels $y_i$ and $y_j$. However, because the interpolated datapoints $x_i'$ and $x_j'$ are attacked, it is possible for the label interpolation $y_m$ to be ``out of sync'' with the inputs. If, for example, the adversarial perturbation on $x_i$ points away from $x_j$, then part of the $\lambda$ of Algorithm \ref{algo:ours} is working to bring the label $y_i$ \textit{back} in the direction of $y_j$, which can be harmful. Concretely, if by coincidence $ \delta_i = -c (x_j - x_i) $ for some scalar $c > 0$, and $ \lvert \delta_j \rvert \ll \lvert x_i - x_j \rvert $, then
\begin{align*}
x_m' = \lambda (1+c) x_i + (1 - \lambda(1 + c)) x_j
\end{align*}
which effectively makes $\lambda \longrightarrow \lambda (1+c)$, which is counterproductive to the label learning. In a destructive case (perhaps by coincidence) where $c = 1/\lambda - 1$, the perturbation $\delta_i$ brings $x_m' \rightarrow x_i$, implying a ground truth label $y_i$; but the assigned label $y_m$ would still be interpolated (smoothed) in between $y_i$ and $y_j$, since in mixup the label $y_m$ depends only on $\lambda$, not on $c$ or on the perturbations $\delta_i$ and $\delta_j$.
To address this potentially counterproductive case, we propose to derive the label mixing ratio $\lambda_y$ using Algorithm \ref{algo:betterlabelassignment}. The formula gives the normalized position of the point $x_m'$ along the segment from $x_i'$ to $x_j'$, obtained by scalar projection. The result is geometrically consistent and, usefully, differentiable.
\begin{algorithm}
\caption{Geometrical Mixed Label Assignment}
\label{algo:betterlabelassignment}
\begin{algorithmic}
\REQUIRE { $ (x_i',y_i), (x_j',y_j) $: perturbed data points }
\REQUIRE { $ x_m' $: mixed sample point, as in Algorithm \ref{algo:ours} }
\STATE $v = x_j' - x_i' $
\STATE $p = x_i' - x_m' $
\STATE $\lambda_y = \textrm{clamp}( 1 + (p \cdot v)/(v \cdot v) , 0, 1)$
\STATE Now use this $\lambda_y$ in place of $\lambda$ in Algorithm \ref{algo:ours}.
\end{algorithmic}
\end{algorithm}
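A plain-Python sketch of Algorithm \ref{algo:betterlabelassignment} (hypothetical helper names) makes the geometric consistency easy to verify: for an unperturbed mix $x_m' = \lambda x_i' + (1-\lambda) x_j'$ the formula recovers exactly $\lambda$, and at $x_m' = x_i'$ it returns 1.

```python
def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def geometric_lambda_y(x_i_adv, x_j_adv, x_m):
    """Algorithm 2: normalized position of x_m' along the segment x_i' -> x_j'."""
    v = [b - a for a, b in zip(x_i_adv, x_j_adv)]   # v = x_j' - x_i'
    p = [a - m for a, m in zip(x_i_adv, x_m)]       # p = x_i' - x_m'
    lam_y = 1.0 + dot(p, v) / dot(v, v)             # scalar projection onto v
    return max(0.0, min(1.0, lam_y))                # clamp to [0, 1]
```

Substituting $x_m' = \lambda x_i' + (1-\lambda)x_j'$ gives $p = -(1-\lambda)v$, hence $\lambda_y = 1 - (1-\lambda) = \lambda$, confirming the claim of geometric consistency.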
\vspace{-2mm}
\subsection{Learning Mixing Ratio $\lambda_x$}
Now that we have a clear geometrical formulation for simultaneously perturbing the inputs adversarially and interpolating between datapoints with a consistent label interpolation, we can explore adversarially optimizing the interpolation scalar (for the inputs) $\lambda_x$. We explored the approach of Algorithm \ref{algo:ouroptimizeratio}, in which the mix ratio is clipped to a bounded range between 0.5 and 1 (without loss of generality, clipping to $[0.5, 1]$ simply assigns ``left''/``right'' sides to each mixup pair), using the sigmoid function as a soft clamp mechanism.
\begin{algorithm}
\caption{Optimizing Mixup Ratio (constrained using sigmoid function $\sigma$)}
\label{algo:ouroptimizeratio}
\begin{algorithmic}
\REQUIRE { $ \kappa $: clip bound between 0.5 and 1 for mix ratio }
\REQUIRE { $ \eta_\gamma $: step scale for updates }
\STATE $\lambda_{init} \sim B(\alpha, \alpha)$
\WHILE {min$(\lambda_{init}) < \kappa$}
\STATE $ \lambda_{init}[\lambda_{init} < \kappa] \sim B(\alpha, \alpha) $
\ENDWHILE
\STATE $ \gamma \leftarrow -\log((1-\kappa)/(\lambda_{init}-\kappa) - 1) $
\FOR { PGD step }
\STATE $\lambda \leftarrow \kappa + (1-\kappa) \sigma(\gamma)$
\STATE $\ldots \mathcal{L}_P \quad\triangleright$ as in Algorithm \ref{algo:ours}, using $\lambda$
\STATE $ g_\gamma \leftarrow \nabla_\gamma \mathcal{L}_P \quad\triangleright $ backpropagation
\STATE $ \gamma \leftarrow \gamma + \mathrm{sign}(g_\gamma) \cdot \eta_\gamma $
\ENDFOR
\end{algorithmic}
\end{algorithm}
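The reparameterization at the heart of Algorithm \ref{algo:ouroptimizeratio} can be sketched as follows (hypothetical function names): the initialization of $\gamma$ inverts the soft clamp, so the first PGD iterate starts exactly at the sampled $\lambda_{init}$, and every subsequent update keeps $\lambda \in (\kappa, 1)$.

```python
import math

def sigmoid(g):
    return 1.0 / (1.0 + math.exp(-g))

def gamma_init(lam_init, kappa):
    # inverse of lam = kappa + (1 - kappa) * sigmoid(gamma); needs lam_init > kappa
    return -math.log((1.0 - kappa) / (lam_init - kappa) - 1.0)

def lam_from_gamma(gamma, kappa):
    # soft clamp: any real gamma maps into the open interval (kappa, 1)
    return kappa + (1.0 - kappa) * sigmoid(gamma)
```

Because $\sigma$ is a bijection from $\mathbb{R}$ to $(0,1)$, the PGD steps on $\gamma$ are unconstrained, yet the induced $\lambda$ can never leave $(\kappa, 1)$; no explicit projection is needed.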
\section{Experiments}
\label{sec:exp}
We begin this section with the details of our experimental setup, followed by extensive experimental results: ablation studies and comparisons of the performance of our approaches against those in the literature.
\subsection{Setup}
For the evaluation of the ideas developed in this paper, we perform experiments on the CIFAR-10 and CIFAR-100 datasets, using the same network and hyperparameter settings for both. We use the PreAct-ResNet-18 \cite{he2016identity} and Wide ResNet-34 \cite{zagoruyko2016wide} architectures, which have been used by many others for evaluating adversarial robustness \cite{zhang2018mixup,wong2020fast,wong2020overfitting,lamb2019interpolated}. Our implementation uses PyTorch \cite{NEURIPS2019_9015}. Our threat model is an $\ell_{\infty}$-bounded adversary with $\varepsilon = 8/255$.
As noted by \cite{wong2020overfitting}, adversarial training is susceptible to overfitting. In our ResNet-18 experiments, we found that training for more than 100 epochs tended to do more harm than good, so we train for either 80 or 100 epochs. For best results in this regime, we swept learning rates across the different optimizers in \texttt{pytorch-optimizer} \footnote{\url{https://github.com/jettify/pytorch-optimizer}.} so that baseline PGD training \cite{madry2018towards} (Algorithm \ref{algo:PGD}) provided good results at the end of the last epoch. We found that Yogi \cite{NIPS2018_8186} with a learning rate of 0.003 (batch size 128) worked well. We use weight decay 5e-4, and reduce the learning rate in two steps, by a factor of 0.1 at 70\% and 90\% of the training epochs. The mixup-based models tended to be more robust to overfitting, so we train them for 100 epochs; the only model trained for 80 epochs is baseline PGD \cite{madry2018towards}. The hyperparameters were selected for the baseline PGD adversarial training; because of this, our implementation of the PGD baseline (using PreActResNet18) in table \ref{tablecifar10} is stronger than the WideResNet reported by \cite{madry2018towards}.
In a second implementation, for experiments using WideResNet-34 (which is a far larger network computationally) in table \ref{tablecif10wide}, we use the project provided by \cite{wong2020overfitting} with similar hyperparameters (10-step PGD of step size 2/255, 200 epochs with dropped learning rate at 100 epochs, weight decay 5e-4, SGD optimizer with momentum 0.9, learning rate 0.1).
When reading the data, we use standard baseline augmentations (same as \cite{madry2018towards}): randomly flip left/right, translate by $\pm 4$ pixels, and normalize inputs by the 3-channel mean and standard deviation of the training set. \cite{wong2020fast} argues that ``1-step PGD'' (FGSM with uniform initialization $U(-\varepsilon,\varepsilon)$) works reasonably well for robustness, but we found that more steps provide additional benefits in robust accuracy and stability, so we train RN-18 against 5-step PGD with step size 4/255, and WRN-34 against 10-step PGD with step size 2/255.
For all of our evaluations of mixup, we initialize $\lambda$ from the symmetric Beta distribution $B(\alpha,\alpha)$ parameterized by $\alpha = 0.5$. In our experiments with learned $\lambda$, as parametrized in Algorithm \ref{algo:ouroptimizeratio}, we find that the initialization $\lambda_{init}$ is somewhat important. It seems more valuable to initialize from interpolation positions nearer to the original training datapoints using $B(\alpha,\alpha)$ rather than a uniform initialization $\lambda \sim U(0,1)$. For the same reason we can also impose a clipping $\kappa > 0.5$: this means there are regions perfectly between two datapoints that may never be learned from, but it biases the optimization towards larger values of $\lambda$, which are expected to be nearer to the real datapoints. We use $\kappa = 0.65$ in our experiments; the optimization is not very sensitive to the exact value of $\kappa$, and it seems less important than the initialization $\lambda_{init}$. To take steps we use projected gradients (the sign of the gradient, multiplied by a step size chosen so that, cumulatively, a noticeable change in $\lambda$ can be reached).
As an aside, we note that our approach is compatible with the Manifold Mixup \cite{verma2019manifold} approach, mixing hidden representations instead of input images. Unfortunately, we found its benefits in our robust optimization framework to be nearly negligible: it affects the resulting robust accuracy by no more than 0.1\%.
\subsection{Ablations \& Evaluation}
We start from the baseline PGD training \cite{madry2018towards} described in Algorithm \ref{algo:PGD}. Subsequently, we reproduce the two baseline approaches discussed in Algorithm \ref{algo:baseline1} \cite{wong2020overfitting} and Algorithm \ref{algo:interpadvtrain} \cite{lamb2019interpolated}, and then build our optimization framework. To measure the impact of optimizing the mixup ratio (Algorithm \ref{algo:ouroptimizeratio}), in some experiments we freeze the value of $\lambda$ after sampling from $B(\alpha,\alpha)$; otherwise, in our full method, we allow it to be optimized. We also measure the effect of the independently optimized left and right $\delta$ (versus Shared $\delta$).
For robust evaluation we follow the advice and guidelines of \cite{carlini2019on}. Note that we do not resort to any test-time tricks; at inference time our model is the same as any standard network. As a result, our primary concern is obfuscated gradients \cite{athalye2018obfuscated}. We evaluate our models against PGD adversaries with 20 iterations (PGD20) and with 100 iterations (PGD100), both using step size 2/255. The PGD20 attacker is the same as \cite{madry2018towards} (step size 2/255, no restarts). We also make use of the recent AutoAttack toolbox \cite{croce2020reliable}, which includes the black-box score-based (sample-querying, gradient-free) adversary Square \cite{andriushchenko2019square}, which would be able to overcome obfuscated gradients if they were a problem.
\subsection{Ablation of Algorithms \ref{algo:ours} and \ref{algo:ouroptimizeratio}}
We report results on CIFAR-10 in table \ref{tablecifar10} and results on CIFAR-100 in table \ref{tablecifar100}. All numbers are accuracies (percent correct out of 100, higher is better). AA refers to the AutoAttack toolbox from \cite{croce2020reliable}, which for table \ref{tablecifar10} we run in cheap mode, which uses fewer steps and samples. Despite being called cheap, we find its effectiveness to be close to the full attack: for our CIFAR-10 model in the last row of table \ref{tablecifar10}, we list the cheap score as 48.7\%, and in our evaluation we found the full score to be 48.6\%, which is very comparable -- the full evaluation on our model did not reveal anything new. In most cases the black-box Square attack \cite{andriushchenko2019square}, the fourth attack in the AutoAttack ensemble and the only gradient-free attacker, found \textit{no} additional adversarial images. Since the preceding three attacks are gradient-based, this suggests that the gradients were not obfuscated.
\begin{table}[]
\centering
\caption{Ablation Experiments, CIFAR-10 Accuracy, ResNet18}
\label{tablecifar10}
\begin{tabular}{|l|c|c|c|c|}
\hline
\diagbox[innerwidth=1.8cm]{Train}{Test} & pristine & PGD20 & PGD100 & AAch \\ \hline
Attack & 83.3\% & 50.2\% & 49.8\% & 47.3\% \\ \hline
Mix-then-Attack & 78.5\% & 52.1\% & 51.9\% & 47.1\% \\ \hline
Attack-then-Mix & 82.5\% & 49.9\% & 49.7\% & 46.6\% \\ \hline
Ours, Fr. $\lambda$, Sh. $\delta$ & 82.3\% & 51.5\% & 51.1\% & 47.6\% \\ \hline
Ours, Frozen $\lambda$ & 80.5\% & 52.4\% & 52.2\% & 48.2\% \\ \hline
Ours, Shared $\delta$ & 81.6\% & 52.9\% & 52.7\% & 48.3\% \\ \hline
Ours & 79.8\% & 54.1\% & 54.0\% & 48.7\% \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Ablation Experiments, CIFAR-100 Accuracy, ResNet18}
\label{tablecifar100}
\begin{tabular}{|l|c|c|c|}
\hline
\diagbox[innerwidth=3cm]{Train}{Test} & pristine & PGD20 & PGD100 \\ \hline
Attack & 56.7\% & 25.1\% & 24.9\% \\ \hline
Mix-then-Attack & 50.3\% & 28.6\% & 28.5\% \\ \hline
Attack-then-Mix & 55.7\% & 27.3\% & 27.2\% \\ \hline
Ours, Frozen $\lambda$, Shared $\delta$ & 55.1\% & 28.0\% & 27.9\% \\ \hline
Ours, Frozen $\lambda$ & 53.8\% & 29.3\% & 29.2\% \\ \hline
Ours, Shared $\delta$ & 53.7\% & 28.8\% & 28.6\% \\ \hline
Ours & 52.0\% & 29.2\% & 29.1\% \\ \hline
\end{tabular}
\end{table}
\subsection{Geometrical Label Mixing}
We report results on CIFAR-10 in table \ref{tablecif10wide}, this time using the much larger WideResNet-34 network \cite{zagoruyko2016wide}. As a baseline, we start from the code and implementation of \cite{wong2020overfitting}. These models are evaluated against AutoAttack in ``standard'' mode with the source implementation as of September 2020. Again, there are no issues with obfuscated gradients, as the robust accuracy holds up well against the strong AutoAttack suite, which includes the gradient-free black-box Square attack. The table shows that the geometric mixed labeling did not provide a large benefit, which indicates that the counterproductive label interpolations of concern rarely occur during the adversarial optimization: in a high dimensional space, it is unlikely that the initialized perturbations coincide with the vector connecting two datapoints. This makes our approach easier to implement, since the simpler interpolation of Algorithm \ref{algo:ours} suffices.
\begin{table}[]
\centering
\caption{CIFAR-10 Accuracy, Geometric Label Mixing, WRN-34}
\label{tablecif10wide}
\begin{tabular}{|l|c|c|c|}
\hline
\diagbox[innerwidth=3cm]{Train}{Test} & pristine & PGD10 & AA \\ \hline
Baseline \cite{madry2018towards,wong2020overfitting} & 86.20 \% & 56.54 \% & 51.99 \% \\ \hline
Ours, Alg. \ref{algo:ours} + \ref{algo:ouroptimizeratio} & 85.11 \% & 58.31 \% & 52.64 \% \\ \hline
Ours, Alg. \ref{algo:ours}+\ref{algo:betterlabelassignment}+\ref{algo:ouroptimizeratio} & 85.04 \% & 58.34 \% & 52.69 \% \\ \hline
\end{tabular}
\end{table}
\vspace{-1mm}
\section{Discussion}
\label{sec:discussion}
Our method provides significant improvements over the evaluated baselines, and holds up against the strong PGD100 and AutoAttack adversaries. There is a significant drop in robust accuracy from PGD100 to AutoAttack, but the general rank ordering of the approaches remains similar. The optimization criterion aims for smooth robustness, so it is reasonable that the result is no more vulnerable to attack than the PGD baseline. Both CIFAR-10 and CIFAR-100 see about 4\% absolute improvement against PGD100 over PGD adversarial training, which we refer to in our results as ``Attack'' (no mixup). Notably, this is a more significant \textit{relative} improvement for CIFAR-100 because the baseline accuracies are much lower. CIFAR-100 is a significantly more challenging dataset than CIFAR-10, both because there are more classes to confuse the classifier, and because there are fewer images per class (500 as opposed to 5000 in CIFAR-10). Because of the relatively smaller per-class data in CIFAR-100, data augmentation becomes more important, so these mixup strategies demonstrate their value more prominently. However, the ablation effects are a little more muddled on CIFAR-100. In fact, our full (non-ablated) model performed no better than the ablated model with frozen mixing ratio $\lambda$. This may be dataset dependent, depending on the distribution of pairwise distances between classes, which is especially important under adversarial attack. Another probable reason is the specific formulation of the mixing optimization we tested (a shifted sigmoid initialized from a Beta distribution, but with no prior regularization during the optimization).
Future work may focus on the formulation of the mixing distribution, but the results in table \ref{tablecifar100} still demonstrate that our optimization method is stronger (by 0.6\% absolute improvement under the 100-step PGD attack) than any of the other mixup-based baselines.
On CIFAR-10 in table \ref{tablecifar10} we see the strongest improvement: 2.1\% in PGD100 and 1.4\% in AutoAttack over the strongest baseline. On both datasets, ``Mix-then-Attack'' is the strongest baseline. It is closely related to our method in that the adversarial optimization is performed in between the datapoints $x_i$ and $x_j$; however, our results show that it is valuable to fully probe the space between datapoints with an integrated optimizer that can move the starting points ($x_i'$ and $x_j'$). Otherwise, the adversarial optimizer's clipping prevents it from reaching potentially worse locations -- more incorrect locations that are desirable to learn from for maximal robustness.
\vspace{-2mm}
\section{Conclusion \& Future Work}
\label{sec:conc}
We have presented improvements to adversarial training that address overfitting and provide solid gains in robust accuracy-under-attack over the baselines. The results are resilient to strong adversarial attacks, including black-box gradient-free attacks, demonstrating that we avoid gradient obfuscation. Our work demonstrates that when using data augmentation to improve the generalization of adversarial robustness, adversarially backpropagating through the entire data augmentation formulation is important, because adversarially trained networks need to learn from the worst possible sample points.
We focused on maximum robustness by training only against the worst-case adversarial loss in each formulation. In future work we could explore the tradeoff between robust accuracy and natural (pristine) accuracy, using either the adversarial + pristine combination of Interpolated Adversarial Training \cite{lamb2019interpolated} or more sophisticated balanced reformulations along the lines of TRADES \cite{zhang2019theoretically}. Our formulation of the constrained optimization for the mixing ratio $\lambda$ used a sigmoid for clipping between 0.5 and 1, but future work could use or learn a better prior for $\lambda$, perhaps using a formulation like AdaMixUp \cite{guo2019mixup}. Any architecture can be trained with our approach, and as the authors of \cite{madry2018towards} found, larger networks with higher learning capacity tend to achieve higher robust accuracies. Our approach could be combined with larger robustness-oriented architectures like RobNets \cite{guo2019meets} for further gains in robust accuracy.
\section*{Acknowledgment}
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA). The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
The importance of the representation theory of infinite dimensional Lie algebras is well known in both mathematics and physics. Many mathematicians have studied representations of the derivation algebra of $A=\mathbb C[t_1^{\pm 1}, t_2^{\pm 1},...,t_n^{\pm 1}]$, the so-called Witt algebra, denoted by $W_n$ or $DerA_n$; see \cite{16,17,18}. In particular, for $n=1$ the universal central extension of the derivation algebra of $A$ is known as the Virasoro algebra, generally denoted by $Vir$. Representations of the Virasoro algebra have been studied extensively by several mathematicians; see \cite{2,3,4}. In \cite{4}, O.\ Mathieu proved that a simple module for $Vir$ with finite dimensional weight spaces is either a highest weight module, a lowest weight module, or a uniformly bounded module of the intermediate series. Later, in \cite{2}, it was proved that an irreducible weight module with an infinite dimensional weight space must have all weight spaces infinite dimensional. Since then, a large class of irreducible modules for $Vir$ with infinite dimensional weight spaces has been found. Simple modules with infinite dimensional weight spaces for $Vir$ were first constructed in \cite{5}, by taking tensor products of highest weight modules and modules of the intermediate series. Later, the isomorphism classes of the above-mentioned modules were determined in \cite{6}. Another large class of irreducible modules for $Vir$ with infinite dimensional weight spaces was found in \cite{7}, using the non-weight modules introduced in \cite{8}. \\
On the other hand, mathematicians have recently shown interest in studying representations of loop algebras of well known Lie algebras; see, for instance, \cite{9,10,11,12,13,14,15}. Representations of loop Virasoro algebras have been studied in \cite{9,10}. For $B=\mathbb C[t^{\pm 1}]$, a complete classification of irreducible $Vir\otimes B$-modules was given in \cite{9}. This work was later generalized by A.\ Savage in \cite{10} to $Vir \otimes B$, where $B$ is a commutative associative finitely generated unital algebra over $\mathbb C$. In \cite{10}, A.\ Savage proved that a simple module for $Vir\otimes B$ with finite dimensional weight spaces is either a highest weight module, a lowest weight module, or a uniformly bounded module. However, a complete classification of irreducible $Vir\otimes B$-modules with finite dimensional weight spaces is still open. In this paper we construct a class of irreducible modules with infinite dimensional weight spaces for $Vir \otimes B$, where $B$ is a commutative associative unital algebra over $\mathbb C$. \\
This paper is organised as follows. In Section 2, we begin with basic definitions and preliminaries. We define the Verma module $M(\phi)$ for $Vir\otimes B$ and define a $Vir\otimes B$-module structure on the intermediate series modules for $Vir$; these modules are denoted by $V_{\alpha,\beta,\psi}$, where $\alpha, \beta \in \mathbb C$ and $\psi:B\to \mathbb C$ is an algebra homomorphism with $\psi(1)=1$. We then consider the tensor product of the irreducible quotient $V(\phi)$ of $M(\phi)$ with the irreducible modules obtained from the family $V_{\alpha,\beta,\psi}$, the latter denoted by $V'_{\alpha,\beta,\psi}$. We write $V^\phi_{\alpha,\beta,\psi}=V(\phi)\otimes V'_{\alpha,\beta,\psi}$ and study the properties of these modules in Section 3. Finally, in Section 4, we determine the isomorphism classes of these modules.
\section{Notations and Preliminaries}
\subsection*{(1)} Throughout the paper all the vector spaces, algebras, tensor products are taken over the field of complex numbers $\mathbb{C}$. Let $\mathbb{Z}$, $\mathbb{N}$, $\mathbb{Z}^+$ denote the sets of integers, natural numbers and non-negative integers respectively. For any Lie algebra $G$, let $ U (G)$ denote the universal enveloping algebra of $G$.
\subsection*{(2)} The Virasoro algebra is an infinite dimensional Lie algebra with the basis $\{ d_n,C:n \in \mathbb Z \}$ and defining relations
\begin{align}\label{a2.1}
[d_m,d_n]=(n-m)d_{m+n}+\delta_{m,-n}\frac{m^3-m}{12}C,\\
[d_n,C]=0,
\end{align}
for all $n,m \in \mathbb Z$.\\
We denote the Virasoro algebra as $Vir$. Clearly $Vir$ has a triangular decomposition as, $$Vir= Vir^- \oplus Vir^0 \oplus Vir^+, $$
where $Vir^-=\displaystyle{ \bigoplus_{n<0}}\mathbb C d_n$, $Vir^0= span\{d_0,C\}$ and $Vir^+=\displaystyle{ \bigoplus_{n>0}}\mathbb C d_n$.
\subsection*{(3)} It is well known from \cite{1}, that there is a class of intermediate modules $V_{\alpha, \beta}$ for $Vir$ with two parameters $\alpha, \beta \in \mathbb C$. As a vector space $V_{\alpha, \beta}=\displaystyle{ \bigoplus_{n \in \mathbb Z}}\mathbb C v_n$ and $Vir$ action on $V_{\alpha, \beta}$ is given by,
\begin{align}\label{a2.3}
d_n.v_k=(\alpha+k+n\beta)v_{k+n},\\
C.v_k=0,
\end{align}
for all $n,k \in \mathbb Z$. \\
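One can check directly that \eqref{a2.3} indeed defines a representation of $Vir$: writing $a=\alpha+k$ and using $C.v_k=0$,
\begin{align*}
[d_m,d_n].v_k &= d_m d_n.v_k - d_n d_m.v_k\\
&=\big[(a+n\beta)(a+n+m\beta)-(a+m\beta)(a+m+n\beta)\big]v_{k+m+n}\\
&=(n-m)\big(\alpha+k+(m+n)\beta\big)v_{k+m+n}=(n-m)d_{m+n}.v_k,
\end{align*}
in agreement with \eqref{a2.1}.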
For convenience we denote $V'_{\alpha, \beta}= V_{\alpha,\beta}$, if $V_{\alpha, \beta}$ is irreducible.
\begin{lemma}\label{l2.1}$($\cite{1}$)$
\begin{enumerate}
\item The $Vir$-module $V_{\alpha, \beta} \simeq V_{\alpha+m, \beta}$ for all $m \in \mathbb Z$.
\item The $Vir$-module $V_{\alpha, \beta}$ is irreducible if and only if $\alpha \notin \mathbb Z $ and $\beta \notin \{0,1\}$.
\item $V_{0,0}$ has a unique non-trivial proper sub-module $\mathbb Cv_0$; we denote $V_{0,0}'=V_{0,0}/\mathbb Cv_0$.
\item $V_{0,1}$ has a unique non-zero proper sub-module $V'_{0,1}=\displaystyle{\bigoplus_{i\neq 0}} \mathbb C v_i$.
\item $V'_{\alpha,0}\simeq V'_{\alpha,1}$ for all $\alpha \in \mathbb C$.
\end{enumerate}
\end{lemma}
\begin{remark}
To avoid repetition, throughout this paper we always take $V'_{\alpha, \beta}$ with $0\leq \mathrm{Re}\,\alpha <1$ and $\beta \neq 1$, where $\mathrm{Re}\,\alpha$ denotes the real part of $\alpha$. Clearly this class of modules is irreducible for $Vir$.
\end{remark}
\subsection*{(4)} Let $B$ be a commutative associative unital algebra. We consider the Lie algebra $Vir_B= Vir\otimes B$ with the following bracket operation,
$$[X\otimes b, Y \otimes b']=[X,Y]\otimes bb' $$
for all $b,b' \in B$ and $X,Y \in Vir$. We denote $X \otimes 1$ simply as $X$ for all $X \in Vir$. Note that $Vir_B$ has a triangular decomposition as,
$$Vir_B=Vir^-_B \oplus Vir^0_B \oplus Vir^+_B,$$
where $Vir^-_B=Vir^-\otimes B, Vir^0_B=Vir^0\otimes B$ and $Vir^+_B=Vir^+\otimes B$.
\subsection*{(5)} We define a $Vir_B$-module structure on $V_{\alpha, \beta}$. Let $\psi:B \to \mathbb C$ be an algebra homomorphism such that $\psi(1)=1.$ Define a module structure on $V_{\alpha, \beta} $ by,
\begin{align}
d_n\otimes b. v_k=\psi(b)(\alpha +k +n\beta)v_{k+n},\\
C\otimes b.v_k=0
\end{align}
for all $n,k \in \mathbb Z, b \in B.$
Clearly $V'_{\alpha, \beta}$ are irreducible modules for $Vir_B$, since they are so for $Vir.$ We denote this module by $V'_{\alpha, \beta, \psi}.$
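The requirement that $\psi$ be an algebra homomorphism is exactly what makes this an action of $Vir_B$: for $b,b' \in B$, a direct computation gives
\begin{align*}
(d_m\otimes b).(d_n\otimes b').v_k - (d_n\otimes b').(d_m\otimes b).v_k &= \psi(b)\psi(b')(n-m)\big(\alpha+k+(m+n)\beta\big)v_{k+m+n}\\
&= \psi(bb')(n-m)\big(\alpha+k+(m+n)\beta\big)v_{k+m+n}\\
&= \big([d_m,d_n]\otimes bb'\big).v_k,
\end{align*}
since $\psi(bb')=\psi(b)\psi(b')$ and $C\otimes bb'$ acts by zero.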
\subsection*{(6)}
\begin{definition}
A module $V$ for $Vir_B$ is said to be a weight module if the action of $d_0$ on $V$ is diagonalizable, i.e.\ $V$ can be decomposed as $V = \displaystyle{\bigoplus_{\lambda \in \mathbb C} V_\lambda}$, where $V_\lambda=\{ v \in V: d_0v=\lambda v \}.$ The space $V_\lambda$ is called the weight space with respect to the weight $\lambda$, and the elements of $V_\lambda$ are called weight vectors of weight $\lambda$. Moreover, we say that $V$ is a highest weight (respectively, lowest weight) module if $Vir^+_B.v=0$ (respectively, $Vir^-_B.v=0$) for some non-zero weight vector $v$ of $V$.
\end{definition}
Let $\phi: Vir^0_B \to \mathbb C$ be a one dimensional representation of $Vir^0_B$. Extend $\mathbb C \phi$ to a module for $ Vir^0_B \oplus Vir^+_B$ by defining action of $Vir^+_B$ as zero. Then define the Verma module,
$$M(\phi)=U(Vir_B)\bigotimes_{U(Vir^0_B \oplus Vir^+_B)}\mathbb C \phi. $$
Clearly $M(\phi)$ is a highest weight module with highest weight $\phi(d_0)$ and $M(\phi)=\displaystyle{\bigoplus_{i\in \mathbb Z^+}}M(\phi)_{\phi(d_0)-i}.$ Note that $\widetilde{v_\phi}=1\otimes 1_\phi$ is the highest weight vector of $M(\phi)$, where $1_\phi$ is the unit in $\mathbb C \phi$. Moreover $M(\phi) \simeq U(Vir^-_B)$ as $U(Vir^-_B)$-modules. \\
Let $N(\phi)$ be the unique maximal proper sub-module of $M(\phi)$. Then
$$V(\phi)=M(\phi)/N(\phi) $$
is the irreducible highest weight module corresponding to $\phi$. It is a highest weight module with highest weight $\phi(d_0)$ and highest weight vector $v_\phi$, the image of $\widetilde{v_\phi}$ in $V(\phi)$. Moreover $V(\phi)=\displaystyle{\bigoplus_{i\in \mathbb Z^+}}V(\phi)_{\phi(d_0)-i}.$\\
Consider the module $V^\phi_{\alpha, \beta, \psi}= V(\phi)\otimes V'_{\alpha, \beta,\psi}$ for $Vir_B.$ Clearly this is a weight module for $Vir_B$ and
$$ V^\phi_{\alpha, \beta, \psi}= \displaystyle{\bigoplus_{n\in \mathbb Z}}(V^\phi_{\alpha, \beta, \psi})_{\phi(d_0)+\alpha +n},$$
where $(V^\phi_{\alpha, \beta, \psi})_{\phi(d_0)+\alpha +n}= \displaystyle{\bigoplus_{i\in \mathbb Z^+}}(V(\phi)_{\phi(d_0)-i}\otimes \mathbb C v_{n+i})$. Hence every weight space of $V^\phi_{\alpha, \beta, \psi}$ is infinite dimensional. In our discussion we always take $0\leq \mathrm{Re}\,\alpha <1$ and $\beta \neq 1$ for all modules $V^\phi_{\alpha, \beta, \psi}$.\\
\section{Main Results}
\begin{lemma}\label{l3.1}
$ V^\phi_{\alpha, \beta, \psi}$ is generated by $\{ v_\phi \otimes v_m: m \in \mathbb Z \}$ or $\{ v_\phi \otimes v_m: m \in \mathbb Z-\{0\} \}$ according as $(\alpha,\beta)\neq (0,0)$ or $(\alpha, \beta)=(0,0)$, where $v_\phi$ is the highest weight vector of $V(\phi)$.
\end{lemma}
\begin{proof}
Note that $V(\phi)=U(Vir^-_B)v_\phi$. Consequently $ V^\phi_{\alpha, \beta, \psi}$ is spanned by $\{uv_\phi \otimes v_m: u \in U(Vir^-_B)$ and $m \in \mathbb Z$ or $\mathbb Z-\{0\} \}$. Hence the lemma follows from Lemma \ref{l2.1}.
\end{proof}
\begin{proposition}\label{p3.1}
$End_{Vir_B}( V^\phi_{\alpha, \beta, \psi})\simeq \mathbb C$.
\end{proposition}
\begin{proof}
Let $T \in End_{Vir_B}( V^\phi_{\alpha, \beta, \psi})$. Then $T(v_\phi \otimes v_m)$ and $v_\phi \otimes v_m$ must have the same weight. Let \\
$T(v_\phi \otimes v_m)= \displaystyle{\sum_{i=0}^{k}}c_ix_{-i}\otimes v_{m+i}$, where $x_{-i} \in V(\phi)_{\phi(d_0)-i}$, $c_i \in \mathbb C$ for $0\leq i \leq k$ and $x_0=v_\phi.$\\
Let us choose $n \in \mathbb N$ with $n > k$ sufficiently large such that \\
\begin{equation}\label{eq3.1}
(\alpha + m+n\beta)(\alpha +n+m +n \beta) \neq 0
\end{equation}
\begin{equation}\label{eq3.2}
(\alpha+i+m+2n\beta)(\alpha + m+n\beta)(\alpha +n+m +n \beta)\neq (\alpha+m +2n\beta)(\alpha+i+m+n\beta)(\alpha+i+n+m+n\beta)
\end{equation}
for $1 \leq i \leq k$.
This is possible since equality in \eqref{eq3.1} or \eqref{eq3.2} admits only finitely many solutions for $n$.\\
Set $w= d_{2n} -\frac{(\alpha+m+2n\beta)}{(\alpha+m+n\beta)(\alpha+m+n+n\beta)}d_n^2$; then $w.(v_\phi \otimes v_m)=0$ and $w.(x_{-i}\otimes v_{m+i}) \neq 0$ (due to \eqref{eq3.2}) for $1 \leq i \leq k$. Moreover, it is easy to see that the $w.(x_{-i}\otimes v_{m+i})$ are linearly independent for $1 \leq i \leq k$. \\
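Explicitly, since $d_n.v_\phi=0$ for all $n>0$, we have $d_n.(v_\phi\otimes v_m)=(\alpha+m+n\beta)\,v_\phi\otimes v_{m+n}$ for $n>0$, so that
\begin{align*}
w.(v_\phi\otimes v_m)=\Big[(\alpha+m+2n\beta)-\frac{(\alpha+m+2n\beta)}{(\alpha+m+n\beta)(\alpha+m+n+n\beta)}\,(\alpha+m+n\beta)(\alpha+m+n+n\beta)\Big]v_\phi\otimes v_{m+2n}=0,
\end{align*}
the denominator being non-zero by \eqref{eq3.1}.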
Therefore $T(w.v_\phi \otimes v_m)=w.T(v_\phi \otimes v_m)$ gives $c_i =0$ for $1 \leq i \leq k$. Hence we have
$$T(v_\phi \otimes v_m)=c_m(v_\phi\otimes v_m) $$ for some $c_m \in \mathbb C$ and for all $m \in \mathbb Z$ or $\mathbb Z -\{0\}$ according as $(\alpha,\beta)\neq (0,0)$ or $(\alpha,\beta)=(0,0)$.\\
Now consider three cases.\\
{\bf Case I :} Let us assume $\beta \neq 0.$ Fix two arbitrary integers $m,n.$ Choose $l \in \mathbb N$ in such a way that $\alpha +m + l\beta \neq 0 $ and $\alpha + n + (l+m-n)\beta \neq 0.$ Then,
$$T(d_l.v_\phi \otimes v_m)=d_l.T(v_\phi \otimes v_m)$$
implies that $c_{m+l}=c_m. $\\
Again, $T(d_{l+m-n}.v_\phi\otimes v_n)=d_{l+m-n}.T(v_\phi \otimes v_n)$ implies that $c_{m+l}=c_n.$ Hence we have $c_m=c_{m+l}=c_n$.\\
{\bf Case II :} Let us assume $\beta =0, \alpha \neq 0.$ Fix two arbitrary integers $m,n$. Since $0\leq Re\, \alpha <1$ and $\alpha \neq 0$, we have $\alpha + m \neq 0$ and $\alpha +n \neq 0$. Choose any $l \in \mathbb N$ and consider $T(d_l.v_\phi \otimes v_m)=d_l.T(v_\phi \otimes v_m)$ and $T(d_{l+m-n}.v_\phi\otimes v_n)=d_{l+m-n}.T(v_\phi \otimes v_n)$, which give us $c_m =c_{m+l}=c_n.$\\
{\bf Case III :} Let us assume $\beta =0, \alpha=0.$ In the same way as in Case II we have $c_m=c_n$ for all $m,n \in \mathbb Z-\{0\}.$\\
Combining all three cases, we have $c_m =c $ for some $c \in \mathbb C$ and for all $m \in \mathbb Z$ or $\mathbb Z -\{0\}$ according as $(\alpha,\beta)\neq (0,0)$ or $(\alpha,\beta)=(0,0)$. Hence we have the result by Lemma \ref{l3.1}.
\end{proof}
By Proposition \ref{p3.1}, the modules $V^\phi_{\alpha, \beta , \psi}$ are indecomposable. We now look for conditions under which these modules are irreducible. We begin with a necessary and sufficient criterion for irreducibility of $V^\phi_{\alpha, \beta , \psi}$.
\begin{theorem}\label{t3.1}
For $\alpha+\beta \notin \mathbb Z,$ $V^\phi_{\alpha, \beta , \psi}$ is irreducible if and only if $V^\phi_{\alpha, \beta , \psi}$ is generated by $v_\phi \otimes v_m$ for every $m \in \mathbb Z$.
\end{theorem}
\begin{proof}
Suppose $V^\phi_{\alpha, \beta , \psi}$ is generated by $v_\phi \otimes v_m$ for every $m \in \mathbb Z$. Let $W$ be any non-zero sub-module of $V^\phi_{\alpha, \beta , \psi}$ and let $w \in W$ be a non-zero weight vector. Let
$$w= \displaystyle{\sum_{i=0}^{i=n}}x_{-i}\otimes v_{m+i}, $$
where $x_{-i} \in V(\phi)_{\phi(d_0)-i}$, for $0\leq i \leq n$.\\
We use induction on $n$ to show that $v_\phi \otimes v_k \in W$ for some $k \in \mathbb Z$. Note that if $n=0,$ then we are done.\\
Assume that $n>0$ and $x_{-n} \neq 0.$ Then there exists $b \in B$ such that $d_1 \otimes b.x_{-n} \neq 0$ or $d_2 \otimes b .x_{-n} \neq 0$. Indeed, if both were zero for all $b \in B$, then $x_{-n}$ would be a highest weight vector for $V(\phi)$ and hence a scalar multiple of $v_\phi$, so that $w$ would not be a weight vector, a contradiction. \\
Without loss of generality assume that $d_1 \otimes b.x_{-n} \neq 0$ for some $b \in B$ (the other case is analogous).\\
{\bf Case I :} Let us assume $\beta \neq 0$. Choose $l (>> n) \in \mathbb N$ in such a way that $(\alpha+m+n+\beta)\{\alpha+m+n+1+ (l-1)\beta\} \neq 0$. Let
$$X=d_l \otimes b -\frac{\alpha+m+n+ l\beta}{(\alpha+m+n+\beta)\{\alpha+m+n+1+ (l-1)\beta\}}d_{l-1}d_1\otimes b .$$
By computing the action of $X$ on $w$ we have,\\
$X.w=$\\
$\displaystyle{\sum_{i=0}^{i=n-1}} \psi(b)\lbrace(\alpha+m+i+l\beta)-\frac{(\alpha+m+i+\beta)(\alpha+m+n+l\beta)\{\alpha+m+i+1+(l-1)\beta\}}{(\alpha+m+n+\beta)\{\alpha+m+n+1+ (l-1)\beta\}} \rbrace x_{-i}\otimes v_{m+i+l}$\\
$$-\displaystyle{\sum_{i=0}^{i=n}}\frac{(\alpha+m+n+l\beta)\{\alpha+m+i+(l-1)\beta\}}{(\alpha+m+n+\beta)\{\alpha+m+n+1+ (l-1)\beta\} }d_1\otimes b. x_{-i}\otimes v_{m+i+l-1}$$
The first summation in the above expression for $X.w$ runs only up to $i=n-1$ because
$X.v_{m+n}=0.$ Moreover it is clear that $X.w$ is a weight vector of the form $y_{0}\otimes v_{s_1}+\cdots+y_{-r}\otimes v_{s_k}$ such that $r <n$ and $y_{-i}\in V(\phi)_{\phi(d_0)-i}.$ Therefore, to complete the induction it is sufficient to show that $X.w \neq 0.$\\
Note that the component of $X.w$ on $V(\phi)_{\phi(d_0)-i}\otimes \mathbb C v_{m+l+i} $ is
$$ \psi(b)\lbrace(\alpha+m+i+l\beta)-\frac{(\alpha+m+i+\beta)(\alpha+m+n+l\beta)\{\alpha+m+i+1+(l-1)\beta\}}{(\alpha+m+n+\beta)\{\alpha+m+n+1+ (l-1)\beta\}} \rbrace x_{-i}\otimes v_{m+i+l} $$
$$ -\frac{(\alpha+m+n+l\beta)\{\alpha+m+i+1+(l-1)\beta \}}{(\alpha+m+n+\beta)\{\alpha+m+n+1+ (l-1)\beta\}}d_1\otimes b. x_{-(i+1)}\otimes v_{m+i+l}.$$
If for some $0 \leq i \leq n-1 $ the vectors $d_1\otimes b. x_{-(i+1)}$ and $x_{-i}$ are linearly independent, then the component of $X.w$ on $V(\phi)_{\phi(d_0)-i}\otimes \mathbb C v_{m+l+i} $ will be zero only if the coefficients of $x_{-i}\otimes v_{m+i+l}$ and $d_1\otimes b. x_{-(i+1)}\otimes v_{m+i+l}$ are both zero, and this is possible only for finitely many values of $l$. On the other hand, if $d_1\otimes b. x_{-(i+1)}$ and $x_{-i}$ are linearly dependent for all $i$, then we write $d_1\otimes b.x_{-n}=k_0x_{-(n-1)}$ for some $k_0 \in \mathbb C$, and the coefficient of $x_{-(n-1)}\otimes v_{m+l+n-1}$ vanishes only for finitely many values of $l$. Hence in either case we can find $l$ large enough such that $X.w \neq 0$.
\\
{\bf Case II :} Let us assume $\beta =0$. Choose $l (>>n) \in \mathbb N$. Note that $\alpha + \beta \notin \mathbb Z$ and $\beta =0$ implies that $\alpha \notin \mathbb Z$. Let
$$X=d_{2l}\otimes b -\frac{1}{(\alpha+m+n+1)(\alpha+m+n+l)}d_ld_{l-1}d_1\otimes b. $$
Note that $X.v_{m+n}=0$. Using this fact we can obtain the expression for $X.w$ as, \\
$$X.w=\displaystyle{\sum_{i=0}^{i=n-1}}\psi(b)(\alpha+m+i)\lbrace 1- \frac{(\alpha+m+i+1)(\alpha+m+l+i)}{(\alpha+m+n+1)(\alpha+m+n+l)} \rbrace x_{-i}\otimes v_{m+i+2l} $$
$$ - \displaystyle{\sum_{i=0}^{i=n}}\frac{(\alpha+m+i)(\alpha+m+i+l-1)}{(\alpha+m+n+1)(\alpha+m+n+l)}d_1\otimes b.x_{-i}\otimes v_{m+i+2l-1} . $$
Therefore $X.w$ is a weight vector of the form $y_{0}\otimes v_{s_1}+\cdots+y_{-r}\otimes v_{s_k}$ such that $r <n$ and $y_{-i}\in V(\phi)_{\phi(d_0)-i}.$ Proceeding as in Case I, we can find $l$ large enough such that $X.w \neq 0$.\\
Combining both cases, we have $v_\phi \otimes v_k \in W$ for some $k \in \mathbb Z$, and hence $W=V^\phi_{\alpha, \beta , \psi}$ by hypothesis. Conversely, an irreducible module is generated by every one of its non-zero vectors. This completes the proof.
\end{proof}
\begin{corollary}\label{p3c3.1}
Suppose $\alpha \pm \beta \notin \mathbb Z$ and $b \in B$ such that $\psi(b) \neq 0$. Then $V^\phi_{\alpha , \beta, \psi}$ is irreducible if $\phi|_{d_0\otimes <b>}=0,$ where $<b>$ denotes the ideal generated by $b$ in $B$.
\end{corollary}
\begin{proof}
We assert that $d_{-1}\otimes b.v_\phi=0$. If not, then \\
$d_1\otimes a.(d_{-1}\otimes b.v_\phi)=[d_1\otimes a,d_{-1}\otimes b].v_\phi=-2d_0\otimes ab.v_\phi=-2\phi(d_0\otimes ab)v_\phi=0$ for all $a \in B$, since $ab \in <b>$.\\
Moreover $d_2\otimes a.(d_{-1}\otimes b.v_\phi)=[d_2\otimes a,d_{-1}\otimes b].v_\phi=-3d_1\otimes ab.v_\phi=0$ for all $a \in B$. Thus $d_{-1}\otimes b.v_\phi$ would be a highest weight vector of $V(\phi)$, a contradiction.\\
Now, $$d_{-1}\otimes b.(v_\phi\otimes v_{n+1})=\psi(b)(\alpha +n+1 -\beta)v_\phi \otimes v_n. $$
This implies that $U(Vir_B)(v_\phi \otimes v_n )\subseteq U(Vir_B)(v_\phi\otimes v_{n+1}) $ for all $n \in \mathbb Z$. On the other hand,
$$ d_1\otimes b.(v_\phi\otimes v_{n})=\psi(b)(\alpha + n+\beta)(v_\phi\otimes v_{n+1}) $$
implies that $U(Vir_B)(v_\phi \otimes v_n )\supseteq U(Vir_B)(v_\phi\otimes v_{n+1}) $ for all $n \in \mathbb Z$. Therefore, $U(Vir_B)(v_\phi \otimes v_m )= U(Vir_B)(v_\phi\otimes v_{n}) $ for all $n,m \in \mathbb Z.$ Now by Theorem \ref{t3.1} and Lemma \ref{l3.1}, $V^\phi_{\alpha, \beta, \psi}$ is irreducible.
\end{proof}
\begin{remark}
In particular, if $B$ is not a local ring, there exists a non-unit $b \in B$ such that $\psi(b)\neq 0$ and hence in this case $<b> \subsetneq B.$
\end{remark}
\section{Isomorphism Class}
In this section we determine when two modules $V^{\phi_1}_{\alpha_1, \beta_1,\psi_1}$ and $V^{\phi_2}_{\alpha_2, \beta_2,\psi_2}$ are isomorphic as $Vir_B$-modules.
\begin{lemma}\label{l34.1}
$V^{\phi_1}_{\alpha_1, \beta_1,\psi_1} \simeq V^{\phi_2}_{\alpha_2, \beta_2,\psi_2}$ implies that $\psi_1=\psi_2$.
\end{lemma}
\begin{proof}
To prove $\psi_1=\psi_2$, it is sufficient to prove that $ker\psi_1=ker \psi_2$, for then it is easy to see that $\psi_1=c\psi_2$ for some $c \in \mathbb C$, and $\psi_1(1)=\psi_2(1)=1$ gives us $\psi_1=\psi_2$.\\
Suppose $ker\psi_1 \neq ker\psi_2$ and let $b \in ker\psi_1 - ker\psi_2 $. Let $T$ be the isomorphism between $V^{\phi_1}_{\alpha_1, \beta_1,\psi_1}$ and $ V^{\phi_2}_{\alpha_2, \beta_2,\psi_2}$. Then for some $k \in \mathbb N$, $v_{\phi_2} \otimes v_k$ has a pre-image under $T$, and since $T$ is an isomorphism this pre-image is a weight vector. Let\\
$$ T(\displaystyle{\sum_{i=0}^{i=n}} c_ix_{-i}\otimes v_{i+p})=v_{\phi_2}\otimes v_k ,$$
for some $p \in \mathbb Z$ and $x_{-i} \in V(\phi_1)_{\phi_1(d_0)-i}$, for $0\leq i \leq n$.\\
{\bf Case I :} Let us assume $\beta_2 \neq 0.$ Choose $l (>>n) \in \mathbb N$ such that $\alpha_2+k+l\beta_2 \neq 0$. Now consider,
\begin{align}\label{a4.1}
T(d_l\otimes b.\displaystyle{\sum_{i=0}^{i=n}} c_ix_{-i}\otimes v_{i+p})=d_l\otimes b.v_{\phi_2}\otimes v_k,
\end{align}
which is absurd, since the left hand side of \ref{a4.1} is zero whereas the right hand side is non-zero.\\
{\bf Case II :} Let us assume $\beta_2 =0$. Choose $l >>n;$ then the right hand side of \ref{a4.1} becomes $\psi_2(b)(\alpha_2+k)v_{\phi_2}\otimes v_{l+k} \neq 0$, since $\alpha_2 +k =0 $ would imply $\alpha_2 \in \mathbb Z $ and hence $\alpha_2=0=k$, a contradiction. As before, the left hand side is zero, which is absurd.
\end{proof}
\begin{theorem}\label{p3t4.1}
$V^{\phi_1}_{\alpha_1, \beta_1,\psi_1} \simeq V^{\phi_2}_{\alpha_2, \beta_2,\psi_2}$ if and only if $\psi_1=\psi_2, \phi_1=\phi_2, \alpha_1=\alpha_2, \beta_1=\beta_2.$
\end{theorem}
\begin{proof}
Let $T:V^{\phi_1}_{\alpha_1, \beta_1,\psi_1} \to V^{\phi_2}_{\alpha_2, \beta_2,\psi_2}$ be the isomorphism.\\
{\bf Case I :} Let us assume $(\alpha_1,\beta_1)\neq (0,0)$. Since $T$ is an isomorphism, for all $l \in\mathbb Z$ the image of $v_{\phi_1}\otimes v_l$ is a weight vector of the same weight as $v_{\phi_1}\otimes v_l$. Let
\begin{align}\label{p3a4.2}
T(v_{\phi_1}\otimes v_l)= \displaystyle{\sum_{i=0}^{i=k}} c_ix_{-i}\otimes v_{i+p},
\end{align}
for some $p \in \mathbb Z$, $c_i \in \mathbb C$ and $x_{-i} \in V(\phi_2)_{\phi_2(d_0)-i}$, for $0\leq i \leq k.$ Since $v_{\phi_1}\otimes v_l$ and its image have the same weight, we have $\phi_1(d_0)+\alpha_1+l=\phi_2(d_0)+\alpha_2+p$.\\
For all $m,n > k$, $ T(d_m d_n.v_{\phi_1}\otimes v_l)=d_m d_n.T(v_{\phi_1}\otimes v_l)$ implies that,
\begin{align}\label{p3a4.3}
(\alpha_1+l+n\beta_1)(\alpha_1+n+l+m\beta_1)T(v_{\phi_1}\otimes v_{m+n+l})=
\end{align}
\begin{align*}
\displaystyle{\sum_{i=0}^{i=k}}c_i(\alpha_2+p+i+n\beta_2)(\alpha_2+p+i+n+m\beta_2) x_{-i}\otimes v_{i+p+m+n}.
\end{align*}
Again, for all $m,n >k$, $T(d_{m+n}.v_{\phi_1}\otimes v_l)=d_{m+n}.T(v_{\phi_1}\otimes v_l)$ implies that,
\begin{align}\label{p3a4.4}
\{\alpha_1+l+(m+n)\beta_1\}T(v_{\phi_1}\otimes v_{m+n+l})=\displaystyle{\sum_{i=0}^{i=k}}c_i (\alpha_2+p+i+(m+n)\beta_2) x_{-i}\otimes v_{i+p+m+n}.
\end{align}
From \ref{p3a4.3} and \ref{p3a4.4} for all $i$ with $c_i \neq 0$ we have,
\begin{align}\label{p3a4.5}
(\alpha_1+l+n\beta_1)(\alpha_1+l+n+m\beta_1)(\alpha_2+p+i+(m+n)\beta_2) =
\end{align}
\begin{align*}
\{\alpha_1+l+(m+n)\beta_1\}(\alpha_2+p+i+n\beta_2)(\alpha_2+p+i+n+m\beta_2).
\end{align*}
From \ref{p3a4.5} we obtain a polynomial in $m,n $, which is given by
$$\beta_1\beta_2(\beta_2-\beta_1)mn(m+n) +\{(\alpha_1+l)(\alpha_2+p+i)(\beta_2-\beta_1)+\beta_1(\alpha_2+p+i)^2-\beta_2(\alpha_1+l)^2 \}(m+n)$$
$$+\{(\alpha_1+l)\beta_2(\beta_2-1-2\beta_1)+\beta_1(\alpha_2+p+i)(1+2\beta_2-\beta_1)\}mn+\beta_1\beta_2(\alpha_2+p+i-\alpha_1-l)(m^2+n^2) $$
$$+(\alpha_1+l)(\alpha_2+p+i-\alpha_1-l)(\alpha_2+p+i)=0. $$
Since this equation holds for all $m,n >k$, so we have,
\begin{align}\label{p3a4.6}
\beta_1\beta_2(\beta_1-\beta_2)=0,
\end{align}
\begin{align}\label{p3a4.7}
(\alpha_1+l)(\alpha_2+p+i)(\beta_2-\beta_1)+\beta_1(\alpha_2+p+i)^2-\beta_2(\alpha_1+l)^2=0,
\end{align}
\begin{align}\label{p3a4.8}
(\alpha_1+l)\beta_2(\beta_2-1-2\beta_1)+\beta_1(\alpha_2+p+i)(1+2\beta_2-\beta_1)=0,
\end{align}
\begin{align}\label{p3a4.9}
\beta_1\beta_2(\alpha_2+p+i-\alpha_1-l)=0,
\end{align}
\begin{align}\label{p3a4.10}
(\alpha_1+l)(\alpha_2+p+i-\alpha_1-l)(\alpha_2+p+i)=0.
\end{align}
{\bf Sub-case I :} Let $\beta_1 =0$, then $\alpha_1+l \neq 0$, since $(\alpha_1,\beta_1)\neq (0,0)$ and $0 \leq Re\alpha_1 <1.$ From \ref{p3a4.8}, we have $\beta_2=0$, as $\beta_2 \neq 1$. If $\alpha_2+p+i = 0$, then $0 \leq Re\alpha_2 <1$ implies that $\alpha_2=p+i=0$. Then from \ref{p3a4.2}, we have
$$T(v_{\phi_1}\otimes v_l)= c_{-p}x_p\otimes v_0. $$
Then choosing a natural number $n > -p$ we have,
$$T(d_n.v_{\phi_1}\otimes v_l)=0 ,$$ which implies that $\alpha_1+l=0$, a contradiction. Therefore $\alpha_2+p+i \neq 0$. Hence from \ref{p3a4.10} and $0 \leq Re\alpha_1, Re\alpha_2 <1,$ we have $\alpha_1-\alpha_2=p+i-l=0.$\\
{\bf Sub-case II :} Let $\beta_2 = 0$; then from \ref{p3a4.8} we have $\beta_1 =0$ or $\alpha_2 +p +i =0$, as $\beta_1\neq 1$. If $\beta_1 =0 $, then by Sub-case I we have $\alpha_1=\alpha_2$. If $\beta_1 \neq 0$, then $\alpha_2 +p+i=0$ implies that $\alpha_2=p+i=0$. As in Sub-case I, $\alpha_1 \neq 0 $ would imply $\alpha_2+p+i \neq 0$; hence $\alpha_1=0$. Moreover we have,
$$ T(v_{\phi_1}\otimes v_l)= c_{-p}x_p\otimes v_0. $$
Hence for all $n>-p$ we have,
$$T(d_n.v_{\phi_1}\otimes v_l)=0 ,$$
which implies that $l+n\beta_1=0$ for all $n>-p$, a contradiction.\\
{\bf Sub-case III :} Let $\beta_1\beta_2 \neq 0$. Note that from \ref{p3a4.6} and \ref{p3a4.9} we have $\beta_1=\beta_2$, $\alpha_1=\alpha_2$.\\
Hence in all cases we have $\alpha_1=\alpha_2$, $\beta_1=\beta_2$ and $p+i=l$. Therefore $T$ is given by $ T(v_{\phi_1}\otimes v_l)= c_{l}x_{-(l-p_l)}\otimes v_l, $ for some $c_{l} \in \mathbb C^{*}$ and $p_l\in \mathbb Z$ such that $l\geq p_l$. From Lemma \ref{l3.1} we have,
$$\displaystyle{\sum_{l \in \mathbb Z}} U(Vir_B^-)(v_{\phi_1} \otimes v_l)= V^{\phi_1}_{\alpha_1,\beta_1, \psi_1} . $$
Since $T$ is an isomorphism, we have
$$ \displaystyle{\sum_{l \in \mathbb Z}} U(Vir_B^-)(x_{-(l-p_l)} \otimes v_l)= V^{\phi_2}_{\alpha_2,\beta_2, \psi_2} . $$
The last equality holds only when $p_l=l$ for all $l \in \mathbb Z$. Hence we have $T(v_{\phi_1}\otimes v_l)= c_{l}'v_{\phi_2}\otimes v_l, $ for some $c_l' \in \mathbb C^*$ and for all $l \in \mathbb Z$. Now consider the relations,
\begin{align}\label{p3a4.11}
T(d_0\otimes b.v_{\phi_1}\otimes v_l) =d_0\otimes b.T(v_{\phi_1}\otimes v_l)
\end{align}
\begin{align}\label{p3a4.12}
T(C\otimes b.v_{\phi_1}\otimes v_l) =C\otimes b.T(v_{\phi_1}\otimes v_l)
\end{align}
for all $b \in B$.
Then using Lemma \ref{l34.1}, \ref{p3a4.11} and \ref{p3a4.12} we have $\phi_1(d_0\otimes b)=\phi_2(d_0\otimes b)$ and $\phi_1(C\otimes b)=\phi_2(C\otimes b)$ for all $b \in B$.\\
{\bf Case II :} Let $(\alpha_1,\beta_1)=(0,0).$ In this case we proceed as in Case I and conclude that $T(v_{\phi_1}\otimes v_l)= c_{l}'v_{\phi_2}\otimes v_l, $ for some $c_l' \in \mathbb C^*$ and for all $l \in \mathbb Z-\{0\}$. Hence we have $ \phi_1=\phi_2, \alpha_1=\alpha_2, \beta_1=\beta_2.$\\
Therefore by Case I, Case II and Lemma \ref{l34.1} we have the result.
\end{proof}
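As an independent sanity check (not part of the original argument), the coefficient extraction above can be tested numerically: writing $A=\alpha_1+l$ and $B=\alpha_2+p+i$, the difference of the two sides of \ref{p3a4.5} collapses, for all $m,n$, to a polynomial whose coefficients agree up to an overall sign with \ref{p3a4.6}--\ref{p3a4.10}. A plain-Python spot check (variable names are ours):

```python
import random

def check_coefficients(trials=200, seed=0):
    """Spot-check that LHS - RHS of the cubic relation equals the collected
    polynomial in m, n whose coefficients, set to zero, reproduce the five
    displayed equations up to overall sign.  Here A and B abbreviate
    alpha_1 + l and alpha_2 + p + i respectively."""
    rng = random.Random(seed)
    for _ in range(trials):
        A, B, b1, b2 = (rng.uniform(-3.0, 3.0) for _ in range(4))
        m, n = rng.randint(1, 50), rng.randint(1, 50)
        lhs = (A + n * b1) * (A + n + m * b1) * (B + (m + n) * b2)
        rhs = (A + (m + n) * b1) * (B + n * b2) * (B + n + m * b2)
        c_mnmn = b1 * b2 * (b1 - b2)                          # m n (m+n)
        c_lin = A * B * (b1 - b2) + A * A * b2 - B * B * b1   # m + n
        c_mn = (B * b1 * b1 - A * b2 * b2 + A * b2 - B * b1
                + 2.0 * b1 * b2 * (A - B))                    # m n
        c_sq = b1 * b2 * (A - B)                              # m^2 + n^2
        c_0 = A * B * (A - B)                                 # constant
        poly = (c_mnmn * m * n * (m + n) + c_lin * (m + n) + c_mn * m * n
                + c_sq * (m * m + n * n) + c_0)
        assert abs((lhs - rhs) - poly) <= 1e-6 * (1.0 + abs(lhs) + abs(rhs))
    return True
```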
\begin{remark}
Corollary \ref{p3c3.1} combined with Theorem \ref{p3t4.1} gives us a collection of non-isomorphic irreducible modules for $Vir_B$ with infinite dimensional weight spaces.
\end{remark}
\vspace{2cm}
\section{Introduction}
Quantum information processing has attracted increasing attention recently due to its great potential and profound implications.
To harness the power of quantum information processing, it is crucial to verify the underlying quantum states and devices efficiently based on the accessible measurements. Unfortunately, traditional tomographic approaches are notoriously inefficient since the resource overhead increases exponentially with the system size under consideration. To overcome this problem, a number of alternative approaches have been proposed recently; see \rscite{Eisert,KlieschTheory,Theo2021,yu2021statistical} for an overview.
Among alternative approaches proposed so far, \emph{quantum state verification} (QSV) is particularly appealing because it can achieve a high efficiency based on
local operations and classical communication (LOCC) \cite{Masahito2005,Aolita2015,Many-qubit,Pallister2018Optimal,Zhu2019Efficient,Zhu-general}.
Notably, efficient verification protocols based on local projective measurements have been constructed for bipartite pure states \cite{Masahito2005,Zhu-entangled,LiHZ19,MHtwo-qubit,S.JWbipartite}, stabilizer states \cite{Stabilizer2015,Pallister2018Optimal,Kalev2019,Zhu-hypergraph,Zhu-general,LiGHZ,Tom-stabilizer}, hypergraph states \cite{Zhu-hypergraph}, weighted graph states \cite{Hayashi_weighted}, and Dicke states \cite{Liu-dicke,LiHSS21}. Moreover, the efficiency of QSV has been demonstrated in a number of experiments \cite{ExpEntangled,ExpLu,TowardsJiang,ClassicalZhang}.
Recently, the idea of QSV was generalized to \emph{quantum gate verification} (QGV) \cite{Liu2020Efficient,ZhuZ20,Zeng2019Quantum} (cf. \rscite{Hofmann2005Complementary,Daniel2013Minimum,Mayer2018Quantum,Wu_2019,Cross2020}), which enables efficient verification of various quantum gates and quantum circuits based on LOCC.
Notably, all bipartite unitaries and Clifford unitaries can be verified with resources that are independent of the system size, while the resource required to verify the generalized controlled-NOT (CNOT) gate and generalized controlled-$Z$ (CZ) gate grows only linearly with the system size. The efficiency of QGV has also been demonstrated in several experiments recently \cite{zhang2021efficient,luo2021proofofprinciple}.
So far most works on QSV and QGV have exclusively focused on the sample efficiency as the main figure of merit. By contrast, the number of experimental settings has received little attention, although this figure of merit is also of key interest to both theoretical study and practical applications. Even for bipartite pure states, it is still not clear how many measurement settings are required to construct a reliable verification protocol. The situation is even worse in the case of bipartite unitaries, not to mention the multipartite scenario. This problem becomes particularly important when it is difficult or slow to switch measurement settings, which is the case in many practical scenarios.
In this work we study systematically QSV and QGV with a focus on the number of experimental settings based on LOCC. We show that any bipartite pure state can be verified by two measurement settings based on nonadaptive local projective measurements. By contrast, at least $d$ experimental settings based on local operations are required to verify each bipartite unitary in dimension $d$, while $2d$ settings are sufficient. In addition, we introduce the concept of entanglement-free verification, which is of special interest to both theoretical study and practical applications. Moreover, we show that any entanglement-free verification protocol can be turned into a minimal-setting protocol, and vice versa.
For each two-qubit unitary, we determine the minimum number of required experimental settings explicitly. Our study shows that any two-qubit unitary can be verified using only five experimental settings, while
a generic two-qubit unitary (except for a set of measure zero) can be verified by an entanglement-free protocol based on four settings. Explicit entanglement-free protocols are constructed for CNOT, CZ, controlled-phase (C-Phase), and SWAP gates, respectively. In the course of study we clarify the properties of Schmidt coefficients of two-qubit unitaries and their implications for studying the equivalence relation under local unitary transformations, which are of interest beyond the main focus of this work.
The rest of this paper is organized as follows. In \sref{sec:QSGV}, we briefly review the basic frameworks of QSV and QGV. In \sref{sec:BipartitePureState}, we determine the minimum number of measurement settings required to verify each bipartite pure state. In \sref{sec:UMinS}, we clarify the relation between minimal-setting verification and entanglement-free verification; in addition, we derive nearly tight lower and upper bounds for the minimum number of settings required to verify each bipartite unitary. In \sref{sec:TwoQubitU}, we clarify the properties of Schmidt coefficients of two-qubit unitaries. In \sref{sec:VTwoQuibtU}, we determine the minimum number of settings required to verify each two-qubit unitary. \Sref{sec:summary} summarizes the paper. To streamline the presentation, some technical proofs are relegated to the appendixes.
\section{\label{sec:QSGV}Quantum state and gate verification}
In preparation for the later study, here we briefly review the basic frameworks of QSV \cite{Pallister2018Optimal,Zhu2019Efficient,Zhu-general} and QGV \cite{ZhuZ20,Liu2020Efficient,Zeng2019Quantum} (cf. \rscite{Hofmann2005Complementary,Daniel2013Minimum,Mayer2018Quantum}).
\subsection{\label{sec:QSV}Quantum state verification}
Consider a quantum system associated with the Hilbert space $\mathcal{H}$. A quantum device is supposed to produce the target state $|\Psi \rangle$, but actually produces the $N$ states $\rho_1,\rho_2,\dots,\rho_N$ in $N$ runs. To distinguish the two situations, we can perform a random test in each run.
Each test is determined by a test operator $E_l$, which is associated with a two-outcome measurement of the form $\{E_l, I-E_l\}$, where $I$ is the identity operator. Here the first outcome corresponds to passing the test. To guarantee that the target state can always pass the test, the test operator $E_l$ should satisfy the condition $\langle \Psi | E_l | \Psi \rangle =1$, which means $ E_l | \Psi \rangle =| \Psi \rangle$.
If the test $E_l$ is performed with probability $p_l$, then the performance of the above verification procedure is determined by the
verification operator $\Omega = \sum_l p_l E_l$. Suppose $\langle \Psi |\rho |\Psi \rangle \leq 1-\varepsilon$, then the maximal probability that $\rho$ can pass each test on average is \cite{Pallister2018Optimal,Zhu2019Efficient,Zhu-general}
\begin{equation}
\max_{\langle \Psi |\rho |\Psi \rangle \leq 1-\varepsilon} \operatorname{tr}(\Omega \rho)=1-[1-\beta(\Omega)]\varepsilon = 1-\nu(\Omega)\varepsilon,
\end{equation}
where $\beta(\Omega)$ is the second largest eigenvalue of $\Omega$, and $\nu(\Omega)=1-\beta(\Omega)$ is the spectral gap from the maximal eigenvalue. Note that a positive spectral gap is necessary and sufficient for verifying the target state reliably, assuming that the total number of tests is not limited.
Let $\varepsilon_j = 1-\< \Psi | \rho_j |\Psi \> $ be the infidelity of the state prepared in the $j$th run and let $\bar{\varepsilon} = \sum_j \varepsilon_j /N$ be the average infidelity. Suppose the states $\rho_1,\rho_2,\dots,\rho_N$ prepared in the $N$ runs are independent of each other. Then the maximal probability that these states can pass all $N$ tests is $[1-\nu(\Omega)\bar{\varepsilon}]^N$. To ensure the condition $\bar{\varepsilon}<\varepsilon $ with significance level $\delta$, the minimum number of tests required reads \cite{Pallister2018Optimal,Zhu2019Efficient,Zhu-general}
\begin{equation}
N=\left \lceil \frac{\ln \delta}{\ln [1-\nu(\Omega)\varepsilon]} \right \rceil
\approx \frac{\ln \delta^{-1}}{\nu(\Omega)\varepsilon}.
\end{equation}
Not surprisingly, a larger spectral gap means a higher efficiency.
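As a small illustration (not from the paper; the function name is ours), the bound above can be evaluated directly:

```python
import math

def num_tests(nu, eps, delta):
    """Minimum number of tests N = ceil(ln(delta) / ln(1 - nu*eps))
    needed to certify average infidelity below eps with significance
    level delta, for a verification operator with spectral gap nu."""
    assert 0 < nu <= 1 and 0 < eps < 1 and 0 < delta < 1
    return math.ceil(math.log(delta) / math.log(1.0 - nu * eps))
```

For instance, certifying $\varepsilon=0.01$ at $\delta=0.05$ with $\nu(\Omega)=1/2$ takes $598$ tests, roughly twice as many as with $\nu(\Omega)=1$.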
\subsection{\label{sec:QGV}Quantum gate verification}
Consider a quantum device that is expected to perform the unitary transformation $\mathcal{U}$ associated with the unitary operator $U$ on $\mathcal{H}$, but actually realizes an unknown quantum process $\Lambda$. In order to verify whether this quantum process is sufficiently close to the target unitary transformation, we need to construct a set $\mathscr{T}=\{|\psi_j\>\}_j$ of test states. In each run we randomly prepare a test state from the set $\mathscr{T}$ and apply the quantum process $\Lambda$. Then we verify whether the output state $\Lambda(\rho_j)$ is sufficiently close to the target output state $\mathcal{U}(\rho_j)=U\rho_j U^\dag$ by virtue of QSV as described in \sref{sec:QSV}, where $\rho_j=|\psi_j\>\<\psi_j|$ \cite{ZhuZ20,Liu2020Efficient}. By construction, the target unitary transformation can always pass each test.
Suppose the test state $|\psi_j\>$ is chosen with probability $p_j>0$; denote the verification operator for the output state $\mathcal{U}(\rho_j)$ by $\Omega_j$. Then the average probability that the process $\Lambda$ can pass each test reads \cite{ZhuZ20}
\begin{equation}
\sum_{j} p_j \operatorname{tr} [\Omega_j \Lambda(\rho_j)]. \label{pass_test}
\end{equation}
The target unitary transformation $\mathcal{U}$ can be verified reliably if $\mathcal{U}$ is the only quantum process that can pass each test with certainty. To clarify this condition, we need to introduce additional terminology. Let $\nu_j$ be the spectral gap of $\Omega_j$. The test state $|\psi_j\>$ is effective if $\nu_j>0$; the set of effective test states is denoted by $\mathscr{T}_{\mathrm{eff}}$. The verification protocol is \emph{ordinary} if $\nu_j>0$ for each $j$, in which case every test state is effective, so that $\mathscr{T}_{\mathrm{eff}}=\mathscr{T}$. Otherwise, the verification protocol is \emph{extraordinary}.
A set $\mathscr{T}=\{|\psi_j\> \}_j$ in $\mathcal{H}$ can \emph{identify} the unitary transformation $\mathcal{U}$ if the condition
\begin{align}\label{eq:LambdaU}
\Lambda(|\psi_j\>\<\psi_j|)=\mathcal{U}(|\psi_j\>\<\psi_j|),\quad \forall j
\end{align}
implies that $\Lambda=\mathcal{U}$, that is,
\begin{align}
\Lambda(\rho)=\mathcal{U}(\rho), \quad \forall \rho \in \mathscr{D}(\mathcal{H}),
\end{align}
where $\mathscr{D}(\mathcal{H})$ denotes the set of all density operators on the Hilbert space $\mathcal{H}$.
In this case, the set $\mathscr{T}$ is referred to as an \emph{identification set} (IS). It turns out the set $\mathscr{T}$ can identify $\mathcal{U}$ iff it can identify any other unitary transformation on $\mathcal{H}$ \cite{Mayer2018Quantum}, so it is not necessary to refer to a specific unitary transformation. The significance of ISs to QGV is manifested in the following lemma. Further discussions on ISs will be presented in \sref{sec:MIS}.
\begin{lemma}\label{lem:QGVreliable}
If the unitary transformation $\mathcal{U}$ can be verified reliably by a protocol based on the set $\mathscr{T}=\{|\psi_j\>\}_j$ of test states, then $\mathscr{T}$ is an IS. If the set $\mathscr{T}_{\mathrm{eff}}$ of effective test states is an IS, then the unitary transformation $\mathcal{U}$ can be verified reliably. If the verification protocol is ordinary, then $\mathcal{U}$ can be verified reliably iff $\mathscr{T}$ is an IS.
\end{lemma}
\begin{proof}
By construction, $\mathcal{U}$ can pass each test with certainty, so any quantum process $\Lambda$ that satisfies the condition in \eref{eq:LambdaU} can also pass each test with certainty. Suppose $\mathcal{U}$ can be verified reliably. Then only $\mathcal{U}$ can pass each test with certainty, which implies that $\Lambda=\mathcal{U}$ when \eref{eq:LambdaU} holds. Therefore, $\mathscr{T}$ is an IS.
Conversely, if a quantum process $\Lambda$ can pass each test with certainty, then we have $\operatorname{tr} [\Omega_j \Lambda(|\psi_j\>\<\psi_j|)]=1$ for each $|\psi_j\>\in \mathscr{T}$, which implies that
\begin{align}\label{eq:LambdaUeff}
\Lambda(|\psi_j\>\<\psi_j|)=\mathcal{U}(|\psi_j\>\<\psi_j|),\quad \forall |\psi_j\>\in \mathscr{T}_{\mathrm{eff}},
\end{align}
given that $|\psi_j\>\in \mathscr{T}_{\mathrm{eff}}$ iff $\nu_j>0$. Now suppose the set $\mathscr{T}_{\mathrm{eff}}$ of effective test states is an IS, then \eref{eq:LambdaUeff} implies that $\Lambda=\mathcal{U}$. Therefore, only the target unitary transformation $\mathcal{U}$ can pass each test with certainty, which means $\mathcal{U}$ can be verified reliably.
If the verification protocol is ordinary, then $\mathscr{T}_{\mathrm{eff}}=\mathscr{T}$, so the last statement in \lref{lem:QGVreliable} follows from the first two statements.
\end{proof}
The sample complexity of QGV has been analyzed in \rscite{ZhuZ20,Liu2020Efficient,Zeng2019Quantum} based on the idea of channel-state duality, but the details are not necessary to the current study. It turns out the verification of the unitary transformation $\mathcal{U}$ is closely tied to the verification of its Choi state, especially when the verification protocol is balanced, which means $\sum_j p_j\rho_j=I/d$ \cite{ZhuZ20}. However, verification protocols with minimal settings are in general not balanced as we shall see later. This observation shows that some important features in QGV do not have natural analogs in QSV and deserve further study.
\section{\label{sec:BipartitePureState}Verification of bipartite pure states with minimal settings}
Given a bipartite or multipartite pure state $|\Psi\rangle$, how many measurement settings are necessary to verify $|\Psi\rangle$ reliably? This problem is trivial if we can perform arbitrary entangling measurements, in which case one setting is enough. Unfortunately, it is not easy to realize entangling measurements in practice, so here we focus on verification protocols based on nonadaptive local projective measurements, which are amenable to experimental realization. This is a fundamental problem in the study of QSV that is of practical interest. However, such an optimization problem is in general very difficult, if not impossible, to solve, given that the potential choices of measurement settings are countless.
Even in the bipartite case, this problem has not been solved in the literature, although it is known that any bipartite pure state can be verified by two distinct tests based on adaptive local projective measurements \cite{LiHZ19}. Note that one test based on adaptive local projective measurements may entail many different measurement settings, so the result presented in \rcite{LiHZ19} does not resolve the current problem under consideration.
Here we show that any bipartite pure state can be verified by at most two measurement settings, thereby resolving the minimal-setting problem in the bipartite scenario completely.
\begin{theorem}\label{theorem:bipartite fewest settings}
Every bipartite pure product state can be verified by one measurement setting. Every bipartite pure entangled state can be verified by two measurement settings, but not one measurement setting.
\end{theorem}
\begin{proof}
Suppose the bipartite system is associated with the bipartite Hilbert space $\mathcal{H}_\mathrm{A}\otimes \mathcal{H}_\mathrm{B}$ of dimension $d_\mathrm{A}\times d_\mathrm{B}$.
In the Schmidt basis, any bipartite pure state in $\mathcal{H}_\mathrm{A}\otimes \mathcal{H}_\mathrm{B}$ can be written as
\begin{equation}
|\Psi\>=\sum_{j=0}^{r-1} \lambda_j |jj\>,
\end{equation}
where $r=\min\{d_\mathrm{A},d_\mathrm{B}\}$, and $\lambda_j$ are the Schmidt coefficients of $|\Psi\>$ arranged in nonincreasing order.
If $|\Psi\rangle$ is a product state, then $\lambda_j=\delta_{j0}$ and $|\Psi\rangle =|00\rangle$. In this case $|\Psi\rangle$ can be verified by a verification protocol composed of the single test $P_0=|\Psi\>\<\Psi|=|00\> \<00|$. In addition, $P_0$ can be realized by one measurement setting, that is, the projective measurement onto the Schmidt basis.
If $|\Psi\>$ is entangled, then it cannot be verified by one measurement setting based on a nonadaptive local projective measurement because the pass eigenspace of any such verification operator has dimension at least 2, which means the spectral gap is zero. To prove \thref{theorem:bipartite fewest settings}, it remains to show that $|\Psi\>$ can be verified by two measurement settings. Let
\begin{align}
P_1:=&\sum_{j=0}^{r-1} |jj\>\<jj| , \\
P_2:=&I-|u\>\<u| \otimes I + |u\>\<u| \otimes |v\>\<v|,
\end{align}
where
\begin{gather}
|u\>: = \frac{1}{\sqrt{r}} \sum_{j=0}^{r-1} |j\>, \\
|v\> := \lambda_0 |0\> + \lambda_1 |1\> +\dots + \lambda_{r-1} |r-1\>.
\end{gather}
Then $P_1$ and $P_2$ are two test projectors for $|\Psi\rangle$ that can be realized by nonadaptive local projective measurements.
To realize $P_1$, both Alice and Bob perform projective measurements on the Schmidt basis, and the test is passed if they obtain the same outcome $j$ for $j=0,1,2,\ldots, r-1$.
To realize $P_2$, Alice performs the two-outcome projective measurement $\{|u\>\<u|, I-|u\>\<u|\}$ and Bob performs the two-outcome projective measurement $\{|v\>\<v|, I-|v\>\<v|\}$; the test is passed except when Alice obtains the first outcome, while Bob obtains the second outcome.
Now we can construct a simple verification protocol for $|\Psi\>$ by performing the two tests $P_1$ and $P_2$ with probability $1/2$ each. The resulting verification operator reads $\Omega=(P_1+P_2)/2$. According to Lemma~1 in \rcite{LiHSS21}, the spectral gap of $\Omega$ is given by
$\nu(\Omega) = (1-\sqrt{q})/2>0 $ with
\begin{equation}
q=\|\bar{P}_1\bar{P}_2\bar{P}_1\|=\Bigl\|\frac{r-1}{r}\bar{P}_1\Bigr\|=\frac{r-1}{r},
\end{equation}
where $\bar{P}_j = P_j - |\Psi\>\<\Psi|$ for $j=1,2$.
Therefore, $|\Psi\>$ can be verified by the strategy $\Omega$, which can be realized by two measurement settings based on nonadaptive local projective measurements.
\end{proof}
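The spectral-gap computation above is easy to check numerically. The following NumPy sketch (illustrative only, not part of the protocol itself) builds $P_1$, $P_2$, and $\Omega$ for the arbitrary example $|\Psi\rangle = 0.8|00\rangle + 0.6|11\rangle$ (so $r=2$) and confirms that $|\Psi\rangle$ passes with certainty while the second-largest eigenvalue of $\Omega$ equals $(1+\sqrt{q})/2$ with $q=(r-1)/r$:

```python
import numpy as np

lam = np.array([0.8, 0.6])                       # Schmidt coefficients of |Psi>
r = lam.size
e = np.eye(r)
psi = sum(lam[j] * np.kron(e[j], e[j]) for j in range(r))

# P1: both parties measure in the Schmidt basis and compare outcomes.
P1 = sum(np.outer(np.kron(e[j], e[j]), np.kron(e[j], e[j])) for j in range(r))
# P2 = I - |u><u| x I + |u><u| x |v><v| = I - |u><u| x (I - |v><v|).
u = np.ones(r) / np.sqrt(r)
v = lam                                          # |v> = sum_j lam_j |j>
P2 = np.eye(r * r) - np.kron(np.outer(u, u), np.eye(r) - np.outer(v, v))

Omega = (P1 + P2) / 2
print(np.allclose(Omega @ psi, psi))             # |Psi> passes both tests

evals = np.sort(np.linalg.eigvalsh(Omega))[::-1]
q = (r - 1) / r
gap = 1 - evals[1]                               # spectral gap nu(Omega)
print(np.isclose(gap, (1 - np.sqrt(q)) / 2))
```

Both checks print `True`; changing the Schmidt coefficients leaves $q=(r-1)/r$ unchanged, in line with the computation above.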
\section{\label{sec:UMinS}Verification of unitary transformations with minimal settings}
In this section we explore verification protocols for unitary transformations with minimal settings. In addition, we introduce the concept of entanglement-free verification and clarify its connection with minimal-setting verification. Verification of bipartite unitaries is then discussed in more detail.
\subsection{\label{sec:MIS}Minimal identification sets}
Recall that a set of pure states $\mathscr{T}=\{|\psi_j\> \}_j$ in $\mathcal{H}$ is an IS if it can identify unitary transformations on $\mathcal{H}$ (cf. \sref{sec:QGV}) \cite{Mayer2018Quantum}. Here we are particularly interested in ISs with as few elements as possible.
The set $\mathscr{T}$ is a \emph{minimal identification set} (MIS) if, in addition, any proper subset is not an IS. MISs are crucial to constructing verification protocols for unitary transformations with minimal settings.
To understand the properties of ISs and MISs, we need to introduce several additional concepts.
A set of pure states $\mathscr{T}=\{|\psi_j\> \}_j$ in $\mathcal{H}$ is a spanning set if it spans $\mathcal{H}$; it is a basis if it is a spanning set that is also linearly independent.
The \emph{transition graph} of the set $\mathscr{T}$ is a graph whose vertices are in one-to-one correspondence with the states $|\psi_j\>$; two vertices $j,k$ are adjacent if $\<\psi_j|\psi_k\>\neq 0$. The set $\mathscr{T}$ is connected if its transition graph is connected; note that here the definition is different from the usual definition in topology.
The set is a \emph{connected spanning set} if it is a spanning set that is connected; the set $\mathscr{T}$ is a connected linearly independent set (CLIS) if it is a linearly independent set that is connected. A connected basis is a CLIS that is also a connected spanning set.
By definition a CLIS can contain at most $d$ states, where $d$ is the dimension of $\mathcal{H}$. Suppose the set $\mathscr{T}$ is nonempty; then a CLIS contained in $\mathscr{T}$ is maximal if it is not contained in any other CLIS contained in $\mathscr{T}$. Note that each state in $\mathscr{T}$ is contained in at least one maximal CLIS. In particular, $\mathscr{T}$ contains at least one maximal CLIS as a subset.
The following result proved in \rcite{Mayer2018Quantum} clarifies the conditions under which a set of pure states can identify unitary transformations on $\mathcal{H}$.
\begin{lemma}\label{lem:IS}
A set of pure states in $\mathcal{H}$ is an IS iff it is a connected spanning set.
\end{lemma}
By \lref{lem:IS}, at least $d$ test states are required to identify unitaries on $\mathcal{H}$.
To saturate the lower bound $d$, the test states must form a connected basis.
\begin{lemma}\label{lemma:MIS}
A set of pure states in $\mathcal{H}$ is a MIS iff it is a connected basis.
\end{lemma}
\Lref{lemma:MIS} clarifies the properties of MISs; it is a simple corollary of \lref{lem:IS} above and \lsref{lem:CLISmax} and \ref{lem:SpanBasis} below, which are proved in
Appendix~\ref{app:lem:SpanBasis}.
\begin{lemma}\label{lem:CLISmax}
Suppose $\mathscr{T}$ is a connected spanning set in $\mathcal{H}$. Then any maximal CLIS contained in $\mathscr{T}$ is a connected basis.
\end{lemma}
\begin{lemma}\label{lem:SpanBasis}
Every connected spanning set in $\mathcal{H}$ contains a subset that forms a connected basis. Every set in $\mathcal{H}$ that contains a connected spanning subset is a connected spanning set.
\end{lemma}
Suppose $\mathscr{T}$ is a connected spanning set that is composed of $k$ pure states. As an implication of \lref{lem:SpanBasis}, $\mathscr{T}$ contains a connected spanning subset that is composed of $k'$ pure states for any $k'$ with $d\leq k'\leq k$.
To illustrate the above results, here we present a connected spanning set $\mathscr{T}$ that is composed of the computational basis and one additional state \cite{Daniel2013Minimum}:
\begin{align}
\mathscr{T}=\{|j\>\}_{j=0}^{d-1} \cup \{|\varphi\>\},
\end{align}
where
\begin{equation}\label{eq:totally rotated state}
|\varphi\> = \frac{1}{\sqrt{d}} \sum_{j=0}^{d-1}|j\>.
\end{equation}
A connected basis contained in $\mathscr{T}$ can be constructed as follows,
\begin{align}\label{eq:MIS}
\mathscr{S}=\{|j\>\}_{j=1}^{d-1} \cup \{|\varphi\>\}.
\end{align}
According to \lref{lemma:MIS}, $\mathscr{S}$ is also a MIS.
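The claim that $\mathscr{S}$ is a connected basis can be checked directly: the $d$ states $|1\rangle,\ldots,|d{-}1\rangle,|\varphi\rangle$ are linearly independent, and $|\varphi\rangle$ has nonzero overlap with every other element, so the transition graph is a star and hence connected. A NumPy sketch (illustrative only; the choice $d=4$ is arbitrary):

```python
import numpy as np

d = 4                                    # e.g. two qubits, d = d_A * d_B
basis = np.eye(d)
phi = np.ones(d) / np.sqrt(d)            # the totally rotated state |varphi>
S = [basis[j] for j in range(1, d)] + [phi]

# Linear independence: the d states span the whole space.
rank_ok = np.linalg.matrix_rank(np.column_stack(S)) == d
print(rank_ok)

# Connectivity: |varphi> overlaps every |j> with j >= 1, so the
# transition graph is a star and hence connected.
conn_ok = all(abs(phi @ basis[j]) > 0 for j in range(1, d))
print(conn_ok)
```

Both checks print `True`, so $\mathscr{S}$ is a connected basis and hence a MIS by the lemma above.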
\subsection{Minimal-setting verification and entanglement-free verification} \label{sec:MSV&EFV}
Let $U$ be a unitary operator on $\mathcal{H}$ and $\mathcal{U}$ the associated unitary transformation.
Recall that a general verification protocol for $U$ (which means a verification protocol for $\mathcal{U}$) consists of a set of input test states and the verification protocol for the output state associated with each input state. For simplicity, here we assume that each test state is a pure product state, and the verification protocol for each output state is based on nonadaptive local projective measurements. Such verification protocols are most amenable to experimental realization.
We are particularly interested in the minimum number of experimental settings required to verify $U$ by ordinary verification protocols, which is denoted by $\mu(U)$ henceforth. When extraordinary verification protocols are allowed, the minimum number is denoted by $\mu_\mathrm{e}(U)$. To be specific, one experimental setting means the preparation of a pure product input state and a
nonadaptive local projective measurement on the output state. Note that the number of experimental settings required by any verification protocol is at least the number of test states involved. In conjunction with \lsref{lem:QGVreliable} and \ref{lem:IS}, this observation implies that
\begin{align}\label{eq:muULB}
\mu(U)\geq \mu_\mathrm{e}(U)\geq d
\end{align}
for any unitary operator $U$ acting on a $d$-dimensional Hilbert space. For a simple noncomposite system, the two inequalities can always be saturated, and the verification problem is trivial. In the rest of this paper we shall focus on composite systems and consider only ordinary verification protocols, in which case it is in general highly nontrivial to determine $\mu(U)$. Although it is even more difficult to determine $\mu_\mathrm{e}(U)$, our results on $\mu(U)$ provide valuable upper bounds for $\mu_\mathrm{e}(U)$, which are nearly tight in the bipartite setting.
A verification protocol for $U$ is \emph{entanglement free} if all input test states and the corresponding output states (after the action of $U$) are product states; in addition, all measurements are based on local projective measurements.
An entanglement-free protocol does not generate any entanglement in the verification procedure and hence the name. Such verification protocols are particularly appealing to both theoretical study and experimental realization.
It turns out that entanglement-free verification is intimately connected to minimal-setting verification. To clarify this point, we need to introduce some additional terminology.
Denote by $\mathrm{Prod}$ the set of pure product states; denote by $\mathrm{Prod}(U)$ the set of product states that remain product states after the action of $U$:
\begin{equation}\label{eq:ProdU}
\mathrm{Prod}(U) = \{ |\psi\> \in \mathrm{Prod}\ | \ U |\psi\> \in \mathrm{Prod} \}.
\end{equation}
The dimension of the span of the set $\mathrm{Prod}(U) $ is denoted by $d_\mathrm{Prod}(U)$:
\begin{equation}
d_\mathrm{Prod}(U) = \mathrm{dim}\ \mathrm{span} (\mathrm{Prod}(U)),
\end{equation}
which satisfies $0 \le d_\mathrm{Prod}(U) \le d$. A state $|\psi\>$ in $\mathcal{H}$ satisfies the \emph{product-state constraint} associated with $U$ if $|\psi\>\in \mathrm{Prod}(U)$. A set of states satisfies the product-state constraint if it is contained in $\mathrm{Prod}(U)$, so that each state satisfies the constraint.
An entanglement-free IS (EFIS) $\mathscr{T}$ for $U$ is an IS that satisfies the product-state constraint, which implies that $\mathscr{T}\subseteq \mathrm{Prod}(U)$.
Similarly, an entanglement-free MIS (EFMIS)
is a MIS that satisfies the product-state constraint. Note that the definition of an EFIS (EFMIS) depends on the specific unitary transformation under consideration, although the definition of an IS (MIS) is independent of a specific unitary transformation. The unitary operator $U$ can be verified by an entanglement-free protocol iff it admits an EFMIS, in which case $\mathrm{Prod}(U)$ contains an IS. \Lref{lem:TestStateProdU} and \thref{thm:minS-entFree} below
further clarify the connections among the product-state constraint as determined by $\mathrm{Prod}(U)$, minimal-setting verification, and entanglement-free verification.
The proof of \lref{lem:TestStateProdU} is presented in \aref{app:lem:TestStateProdU}.
\begin{lemma}\label{lem:TestStateProdU}
Suppose $U$ is a unitary operator acting on a composite Hilbert space $\mathcal{H}$ of dimension $d$.
Suppose $\mathscr{T}$ is the set of test states of an entanglement-free verification protocol for $U$ or an ordinary verification protocol composed of $d$ experimental settings based on local operations. Then $\mathscr{T}\subseteq \mathrm{Prod}(U)$.
\end{lemma}
\begin{theorem}\label{thm:minS-entFree}
Suppose $U$ is a unitary operator on a composite Hilbert space $\mathcal{H}$ of dimension $d$. Then the following five statements are equivalent:
\begin{enumerate}
\item $\mu(U) = d$.
\item $\mathrm{Prod}(U)$ is a connected spanning set.
\item $\mathrm{Prod}(U)$ contains a connected basis as a subset.
\item $U$ admits an EFMIS.
\item $U$ can be verified by an entanglement-free protocol.
\end{enumerate}
\end{theorem}
\begin{corollary}\label{cor:minS-entFree}
Suppose $U$ is a unitary operator on a composite Hilbert space $\mathcal{H}$ of dimension $d$. If $\mu(U) = d$ or
if $U$ can be verified by an entanglement-free protocol, then $d_\mathrm{Prod}(U)= d$.
\end{corollary}
\Crref{cor:minS-entFree} is an immediate consequence of \thref{thm:minS-entFree}.
\begin{proof}[Proof of \thref{thm:minS-entFree}]
Suppose $\mu(U)=d$. Then $U$ can be verified by an ordinary protocol composed of $d$ experimental settings that are based on local operations. Let $\mathscr{T}$ be the set of test states; then $\mathscr{T}$ forms a connected basis according to \lsref{lem:QGVreliable} and \ref{lem:IS}. In addition, $\mathscr{T}\subseteq\mathrm{Prod}(U)$ according to \lref{lem:TestStateProdU}. Therefore, $\mathrm{Prod}(U)$ is a connected spanning set according to \lref{lem:SpanBasis}, which confirms the implication $1\mathrel{\Rightarrow} 2$.
Next, suppose $\mathrm{Prod}(U)$ is a connected spanning set. Then $\mathrm{Prod}(U)$ contains a connected basis as a subset according to \lref{lem:SpanBasis}, which confirms the implication $2\mathrel{\Rightarrow} 3$.
Next, suppose $\mathrm{Prod}(U)$ contains a connected basis $\mathscr{T}$. Then $\mathscr{T}$ satisfies the product-state constraint and is a MIS according to \lref{lemma:MIS}. Therefore, $\mathscr{T}$ is an EFMIS for $U$, which confirms the implication $3\mathrel{\Rightarrow} 4$.
The implication $4\mathrel{\Rightarrow} 5$ follows from the definition, given that any EFMIS for $U$ can serve as a set of test states of an entanglement-free verification protocol.
Finally, suppose $U$ can be verified by an entanglement-free protocol; let $\mathscr{T}$ be the set of test states. Then $\mathscr{T}$ is an IS contained in $\mathrm{Prod}(U)$ by \lref{lem:QGVreliable} and is thus a connected spanning set by \lref{lem:IS}.
According to \lref{lem:SpanBasis}, $\mathscr{T}$ contains a connected basis $\mathscr{S}$, which enables us to construct a reliable verification protocol for $U$ using only $d$ experimental settings. Therefore, $\mu(U)=d$, which confirms the implication $5\mathrel{\Rightarrow} 1$ and completes the proof of \thref{thm:minS-entFree}.
\end{proof}
\subsection{Minimal settings for verifying bipartite unitaries}
In this section we focus on the verification of general bipartite unitaries and show that the minimum number of settings required to verify a generic bipartite unitary grows linearly with the total dimension.
\begin{theorem}\label{theorem:minimal settings}
Suppose $U$ is a unitary operator acting on a $d$-dimensional bipartite Hilbert space $\mathcal{H}$. Then the minimum number of experimental settings $\mu(U)$ required to verify $U$ satisfies $d \le \mu(U) \le 2d$.
\end{theorem}
\begin{proof}
The inequality $d \le \mu(U)$ follows from the general lower bound in \eref{eq:muULB}.
To prove the upper bound $\mu(U) \le 2d$, note that the MIS $\mathscr{S}$ in \eref{eq:MIS} can serve as a set of test states; in addition, all states in $\mathscr{S}$ are product states as long as the computational basis coincides with the standard product basis. According to \thref{theorem:bipartite fewest settings}, the output state associated with each input state can be verified by either one or two measurement settings based on nonadaptive local projective measurements. Therefore, $\mu(U)\leq 2d$, which completes the proof of \thref{theorem:minimal settings}.
\end{proof}
The following proposition clarifies the relation between $\mu(U)$ and $d_\mathrm{Prod}(U)$; see \aref{app:proposition:mu(U)} for a proof.
\begin{proposition}\label{pro:muUdprod}
Let $U$ be a unitary operator acting on a $d$-dimensional bipartite Hilbert space $\mathcal{H}$. If $d_\mathrm{Prod}(U)<d$, then
\begin{equation}\label{eq:mu(U)}
\mu(U) = d_\mathrm{Prod}(U) + 2[d-d_\mathrm{Prod}(U)].
\end{equation}
In the case $d_\mathrm{Prod}(U)=d$, we have $\mu(U)=d$ if the set $\mathrm{Prod}(U)$ is connected and $\mu(U)=d+1$ otherwise.
\end{proposition}
\section{\label{sec:TwoQubitU}Two-qubit unitaries}
In this section we discuss the basic properties of two-qubit unitaries that are relevant to studying the minimal-setting verification and entanglement-free verification presented in the next section. Here the discussion builds on the previous works \rscite{Kraus2001Optimal,D2002Optimal}.
\subsection{\label{sec:CanonicalForm}Canonical form of two-qubit unitaries}
Let $\mathcal{H} = \mathcal{H}_\mathrm{A} \otimes \mathcal{H}_\mathrm{B}$ be the Hilbert space associated with a two-qubit system shared by A and B. According to \rscite{Kraus2001Optimal,D2002Optimal},
any two-qubit unitary operator $U_{\mathrm{A}\mathrm{B}}$ acting on $\mathcal{H}$ can be expressed as follows,
\begin{equation}\label{eq:canonical form 1}
U_{\mathrm{A} \mathrm{B}} = (V_\mathrm{A} \otimes W_\mathrm{B})\, U\, (\tilde{V}_\mathrm{A} \otimes \tilde{W}_\mathrm{B}),
\end{equation}
where $V_\mathrm{A},W_\mathrm{B},\tilde{V}_\mathrm{A},\tilde{W}_\mathrm{B}$ are four single-qubit unitary operators,
\begin{equation}\label{eq:canonical form 2}
\begin{gathered}
U=U(\alpha_1, \alpha_2,\alpha_3)=\mathrm{e}^{-\mathrm{i} H(\alpha_1,\alpha_2,\alpha_3)},\\
H(\alpha_1,\alpha_2,\alpha_3)=\sum_{k=1}^3\alpha_k H_k, \\
0 \leq |\alpha_3| \leq \alpha_2 \leq \alpha_1 \leq \pi/4, \\
H_1=\sigma_1 \otimes \sigma_1, \quad
H_2=\sigma_2 \otimes \sigma_2,\quad
H_3=\sigma_3 \otimes \sigma_3,
\end{gathered}
\end{equation}
and $\sigma_1,\sigma_2,\sigma_3$ are the three Pauli operators.
The operator $U(\alpha_1,\alpha_2,\alpha_3)$ can further be expressed as
\begin{equation}\label{eq:U}
U(\alpha_1,\alpha_2,\alpha_3)
=\sum_{k=0}^3 \zeta_k \sigma_k \otimes \sigma_k,
\end{equation}
where $\sigma_0$ is the identity operator
and the coefficients $\zeta_k$ are given by
\begin{equation}\label{eq:zetak}
\begin{aligned}
\zeta_0 = \cos\alpha_1 \cos\alpha_2 \cos\alpha_3 - \mathrm{i} \sin\alpha_1 \sin\alpha_2 \sin\alpha_3,\\
\zeta_1 = \cos\alpha_1 \sin\alpha_2 \sin\alpha_3 - \mathrm{i} \sin\alpha_1 \cos\alpha_2 \cos\alpha_3,\\
\zeta_2 = \sin\alpha_1 \cos\alpha_2 \sin\alpha_3 - \mathrm{i} \cos\alpha_1 \sin\alpha_2 \cos\alpha_3,\\
\zeta_3 = \sin\alpha_1 \sin\alpha_2 \cos\alpha_3 - \mathrm{i} \cos\alpha_1 \cos\alpha_2 \sin\alpha_3.
\end{aligned}
\end{equation}
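The expansion above can be cross-checked numerically: since the three terms $\alpha_k H_k$ mutually commute, $U(\alpha_1,\alpha_2,\alpha_3)$ equals the product of the three individual exponentials, and this product must reproduce $\sum_k \zeta_k\, \sigma_k\otimes\sigma_k$ with the coefficients $\zeta_k$ given above. A NumPy sketch with the arbitrary parameter choice $(\alpha_1,\alpha_2,\alpha_3)=(\pi/4,\pi/6,\pi/8)$:

```python
import numpy as np

# Pauli operators sigma_0, ..., sigma_3.
s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

a = np.array([np.pi / 4, np.pi / 6, np.pi / 8])   # (alpha_1, alpha_2, alpha_3)

# U = exp(-i sum_k a_k sigma_k x sigma_k); the three terms commute, so U
# is the product of the three individual exponentials.
U = np.eye(4, dtype=complex)
for k in range(1, 4):
    sk = np.kron(s[k], s[k])
    U = U @ (np.cos(a[k - 1]) * np.eye(4) - 1j * np.sin(a[k - 1]) * sk)

c, sn = np.cos(a), np.sin(a)
zeta = [c[0]*c[1]*c[2] - 1j*sn[0]*sn[1]*sn[2],
        c[0]*sn[1]*sn[2] - 1j*sn[0]*c[1]*c[2],
        sn[0]*c[1]*sn[2] - 1j*c[0]*sn[1]*c[2],
        sn[0]*sn[1]*c[2] - 1j*c[0]*c[1]*sn[2]]
U_sum = sum(zeta[k] * np.kron(s[k], s[k]) for k in range(4))
print(np.allclose(U, U_sum))
```

The check prints `True` for any parameter choice, confirming the coefficient formulas term by term.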
According to the equation
\begin{align}
\sigma_3^\mathrm{A} U(\alpha_1,\alpha_2,-\alpha_3)\sigma_3^\mathrm{A}&=U(-\alpha_1,-\alpha_2,-\alpha_3)\nonumber\\
&=U^*(\alpha_1,\alpha_2,\alpha_3),
\end{align}
$U(\alpha_1,\alpha_2,-\alpha_3)$ is equivalent to $U^*(\alpha_1,\alpha_2,\alpha_3)$. Therefore, any two-qubit unitary operator is equivalent to $U(\alpha_1,\alpha_2,\alpha_3)$ or $U^*(\alpha_1,\alpha_2,\alpha_3)$ with
\begin{equation}\label{eq:para range}
0 \leq \alpha_3 \leq \alpha_2 \leq \alpha_1 \leq \pi/4.
\end{equation}
Since most quantities we are interested in, such as Schmidt coefficients and the minimum number of experimental settings, are invariant under local unitary transformations and complex conjugation, we can focus on $U(\alpha_1,\alpha_2,\alpha_3)$ with the parameter range in \eref{eq:para range} in the following discussion.
\subsection{\label{sec:Schmidt}Schmidt coefficients of two-qubit unitaries}
To further clarify the properties of two-qubit unitary operators, we need to find suitable invariants. Given a two-qubit unitary operator $U$ acting on the Hilbert space $\mathcal{H} = \mathcal{H}_\mathrm{A} \otimes \mathcal{H}_\mathrm{B}$,
its Choi state
\begin{equation}\label{eq:PsiU}
|\Psi_U\> := U \bigl(|\Phi\>_{\mathrm{A}\rmA'} \otimes |\Phi\>_{\mathrm{B}\rmB'}\bigr)
\end{equation}
is a four-qubit pure state on $\mathcal{H} \otimes \mathcal{H}$, where
\begin{align}
|\Phi\>_{\mathrm{A}\rmA'}=\frac{1}{\sqrt{2}} \sum_k |k\>_{\mathrm{A}}|k\>_{\mathrm{A}'},\quad |\Phi\>_{\mathrm{B}\rmB'}=\frac{1}{\sqrt{2}} \sum_k |k\>_{\mathrm{B}}|k\>_{\mathrm{B}'}
\end{align}
are two-qubit maximally entangled states shared by parties $\mathrm{A}\rmA'$ and $\mathrm{B}\rmB'$, respectively. The Schmidt coefficients (rank) of $U$ are defined as the Schmidt coefficients (rank) of $|\Psi_U\>$ with respect to the partition between $\mathrm{A}\rmA'$ and $\mathrm{B}\rmB'$. Note that the Schmidt coefficients and Schmidt rank of $U$ are invariant under local unitary transformations.
Let
\begin{align}
|\tilde{\Phi}_k\>=(\sigma_k \otimes I) |\Phi\>, \quad k=0,1,2,3.
\end{align}
Then the set
$\{ |\tilde{\Phi}_k\> \}_{k=0}^3$ forms a Bell basis, which is equivalent to the magic basis \cite{PhysRevLett.78.5022} up to overall phase factors. When $U=U(\alpha_1,\alpha_2,\alpha_3)$ is the canonical two-qubit unitary defined in \sref{sec:CanonicalForm}, by virtue of \eref{eq:U}, the Choi state $|\Psi_U\>$ can be expressed as
\begin{align}
|\Psi_U\>
&= \sum_{k=0}^3 \zeta_k |\tilde{\Phi}_k\>_{\mathrm{A}\rmA'} \otimes |\tilde{\Phi}_k\>_{\mathrm{B}\rmB'}.
\end{align}
Now it is clear that the Schmidt coefficients of $|\Psi_U\>$ with respect to the partition between $\mathrm{A}\rmA'$ and $\mathrm{B}\rmB'$ are $|\zeta_k|$ for $k=0,1,2,3$, where $\zeta_k$ are given in \eref{eq:zetak}. Therefore, the two-qubit unitary $U(\alpha_1,\alpha_2,\alpha_3)$ has Schmidt coefficients $|\zeta_k|$ for $k=0,1,2,3$, which satisfy the following normalization condition:
\begin{equation}\label{eq:SchmidtCoeffNorm}
|\zeta_0|^2+|\zeta_1|^2+|\zeta_2|^2+|\zeta_3|^2=1.
\end{equation}
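This identification of the Schmidt coefficients with $|\zeta_k|$ can be verified numerically by building the Choi state and taking singular values across the $\mathrm{AA'}|\mathrm{BB'}$ cut. A NumPy sketch (the parameters $(\alpha_1,\alpha_2,\alpha_3)=(\pi/5,\pi/6,\pi/8)$ are an arbitrary choice within the canonical range):

```python
import numpy as np

s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
t = np.array([np.pi / 5, np.pi / 6, np.pi / 8])   # (alpha_1, alpha_2, alpha_3)
c, sn = np.cos(t), np.sin(t)
zeta = np.array([c[0]*c[1]*c[2] - 1j*sn[0]*sn[1]*sn[2],
                 c[0]*sn[1]*sn[2] - 1j*sn[0]*c[1]*c[2],
                 sn[0]*c[1]*sn[2] - 1j*c[0]*sn[1]*c[2],
                 sn[0]*sn[1]*c[2] - 1j*c[0]*c[1]*sn[2]])
U = sum(zeta[k] * np.kron(s[k], s[k]) for k in range(4))

# Choi-state components: Psi[a, b, j, k] = U[(a, b), (j, k)] / 2, where
# a, b live on AB and j, k on A'B'; regroup as (A A')|(B B') and take
# singular values, which are the Schmidt coefficients of |Psi_U>.
T = (U.reshape(2, 2, 2, 2) / 2).transpose(0, 2, 1, 3)
schmidt = np.linalg.svd(T.reshape(4, 4), compute_uv=False)
print(np.allclose(np.sort(schmidt), np.sort(np.abs(zeta))))
```

The check prints `True`; the four singular values coincide with $|\zeta_k|$ for $k=0,1,2,3$.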
Note that $U^*(\alpha_1,\alpha_2,\alpha_3)$ and $U(\alpha_1,\alpha_2,\alpha_3)$ have the same Schmidt coefficients and Schmidt rank. So we can focus on the parameter range in \eref{eq:para range} when studying the Schmidt coefficients and Schmidt rank of $U(\alpha_1,\alpha_2,\alpha_3)$.
The Schmidt rank of $U(\alpha_1,\alpha_2,\alpha_3)$ is determined in \rcite{D2002Optimal} as reproduced in the following lemma, which can also be verified directly by virtue of \eref{eq:zetak}.
\begin{lemma}\label{lem:SchmidtRank}
Suppose $0 \leq \alpha_3 \leq \alpha_2 \leq \alpha_1 \leq \pi/4$. Then the Schmidt rank of $U(\alpha_1,\alpha_2,\alpha_3)$ is 1 if $\alpha_1=\alpha_2=\alpha_3=0$, is 2 if $\alpha_1>0$ and $\alpha_2=\alpha_3=0$, and is 4 if $\alpha_1\geq \alpha_2>0$.
\end{lemma}
\begin{figure}[t]
\centering
\includegraphics[scale=0.17]{fig1.png}
\caption{Contour plot of $|\zeta_0|^2$ in the $(\alpha_2,\alpha_3)$ plane, where $|\zeta_0|$ is the largest Schmidt coefficient of $U(\alpha_1=\pi/4,\alpha_2,\alpha_3)$. The other three Schmidt coefficients are determined by $|\zeta_0|^2$ according to \eref{eq:zetakSqSpecial}. All unitaries corresponding to a given contour line share the same Schmidt coefficients.
}
\label{fig:zeta0contour}
\end{figure}
The properties of Schmidt coefficients of two-qubit unitaries are summarized in \lsref{lem:SchmidtCoeffOrder}--\ref{lem:SchmidtCoeffSame} and \crref{cor:SchmidtCoeffSame} below, which are proved in \aref{app:lem:SchmidtCoeff}.
\begin{lemma}\label{lem:SchmidtCoeffOrder}
Suppose $0 \leq \alpha_3 \leq \alpha_2 \leq \alpha_1 \leq \pi/4$. Then the Schmidt coefficients of
$U(\alpha_1,\alpha_2,\alpha_3)$ satisfy the following relation:
\begin{equation}\label{eq:SchmidtCoeffOrder}
|\zeta_0| \ge |\zeta_1| \ge |\zeta_2| \ge |\zeta_3|\geq 0.
\end{equation}
The first inequality saturates iff $\alpha_1 = \pi/4$; the second inequality saturates iff $\alpha_2=\alpha_1$; the third inequality saturates iff $\alpha_1=\pi/4$ or $\alpha_3=\alpha_2$; and the last inequality saturates iff $\alpha_2=\alpha_3=0$.
\end{lemma}
\begin{lemma}\label{lem:SchmidtCoeffSpecial}
Suppose $0 \leq \alpha_3 \leq \alpha_2 \leq \alpha_1 \leq \pi/4$. Then the four Schmidt coefficients of $U(\alpha_1,\alpha_2,\alpha_3)$ satisfy $|\zeta_0| > |\zeta_1| = |\zeta_2| = |\zeta_3|>0$ iff $0<\alpha_3=\alpha_2=\alpha_1<\pi/4$.
\end{lemma}
When $\alpha_2=\alpha_1=\pi/4$, all Schmidt coefficients of the unitary operator $U(\alpha_1,\alpha_2,\alpha_3)$ are equal to $1/2$ irrespective of the value of $\alpha_3$ [cf. \eref{eq:zetak}]. Such a coincidence can also occur when $\alpha_1=\pi/4$ and $\alpha_2\leq\pi/4$,
in which case we have
\begin{equation}\label{eq:zetakSqSpecial}
\begin{aligned}
|\zeta_0|^2=|\zeta_1|^2=\frac{1}{4}[1+\cos(2\alpha_2)\cos(2\alpha_3)],\\
|\zeta_2|^2=|\zeta_3|^2=\frac{1}{4}[1-\cos(2\alpha_2)\cos(2\alpha_3)],
\end{aligned}
\end{equation}
so all Schmidt coefficients of $U(\alpha_1,\alpha_2,\alpha_3)$ are completely determined by the product $\cos(2\alpha_2)\cos(2\alpha_3)$ or any given Schmidt coefficient, as illustrated in \fref{fig:zeta0contour}.
A specific choice of two inequivalent unitary operators with the same Schmidt coefficients is shown in \aref{app:eg same coeff}.
On the other hand, the following lemma shows that such a coincidence of Schmidt coefficients cannot occur when $\alpha_1<\pi/4$.
\begin{lemma}\label{lem:SchmidtCoeffSame}
Suppose $0 \leq \alpha_3 \leq \alpha_2 \leq \alpha_1 \leq \pi/4$ and $0 \leq \alpha_3' \leq \alpha_2' \leq \alpha_1' \leq \pi/4$. Then $U(\alpha_1,\alpha_2,\alpha_3)$ and $U(\alpha_1',\alpha_2',\alpha_3')$ have the same Schmidt coefficients iff one of the following two conditions holds,
\begin{gather}
\alpha_1=\alpha_1',\quad \alpha_2=\alpha_2',\quad \alpha_3=\alpha_3'; \label{eq:SchmidtCoeffSameCon1}\\
\alpha_1=\alpha_1'=\frac{\pi}{4},\quad \cos(2\alpha_2)\cos(2\alpha_3)= \cos(2\alpha_2')\cos(2\alpha_3'). \label{eq:SchmidtCoeffSameCon2}
\end{gather}
\end{lemma}
\begin{corollary}\label{cor:SchmidtCoeffSame}
Suppose $U$ and $U'$ are two two-qubit unitary operators that have the same Schmidt coefficients $s_0,s_1,s_2,s_3$, which satisfy $s_0 > s_1 \ge s_2 \ge s_3$. Then $U'$ is equivalent to either $U$ or $U^*$ under local unitary transformations. In other words, $U'$ can be expressed as
\begin{equation}
U' = (V_\mathrm{A} \otimes W_\mathrm{B})\, \tilde{U}\, (\tilde{V}_\mathrm{A} \otimes \tilde{W}_\mathrm{B}),
\end{equation}
where $\tilde{U}=U$ or $U^*$, and $V_\mathrm{A},W_\mathrm{B},\tilde{V}_\mathrm{A},\tilde{W}_\mathrm{B}$ are suitable qubit unitary operators.
\end{corollary}
\begin{figure}[t]
\centering
\includegraphics[scale=0.17]{fig2.png}
\caption{Accessible Schmidt coefficients of two-qubit unitaries $U(\alpha_1,\alpha_2,\alpha_3)$ for the parameter range $0 \leq \alpha_3, \alpha_2, \alpha_1 \leq \pi/4$. The red-shaded region in each ternary diagram represents the set of accessible points specified by the barycentric coordinate $(\xi_1,\xi_2,\xi_3)=(|\zeta_1|^2, |\zeta_2|^2, |\zeta_3|^2)/(1-|\zeta_0|^2)$, where $|\zeta_0|$ is the largest Schmidt coefficient, and $|\zeta_1|, |\zeta_2|, |\zeta_3|$ are the other three Schmidt coefficients; cf. \eref{eq:zetak}. The left, right, and top corners of the big black triangle correspond to the coordinates $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$, respectively.
The shaded region within each blue dashed triangle represents the set of accessible points for the smaller parameter range $0 \leq \alpha_3 \leq \alpha_2 \leq \alpha_1 \leq \pi/4$, in which case $|\zeta_1|$, $|\zeta_2|$, $|\zeta_3|$ are in nonincreasing order.}
\label{fig:triangle_dis}
\end{figure}
The above analysis clarifies the properties of Schmidt coefficients of two-qubit unitary operators. Given the assumption $0 \leq \alpha_3 \leq \alpha_2 \leq \alpha_1 \leq \pi/4$, the Schmidt coefficients of $U(\alpha_1,\alpha_2,\alpha_3)$ must satisfy the conditions in \esref{eq:SchmidtCoeffNorm} and \eqref{eq:SchmidtCoeffOrder}.
However, the two conditions are not enough to guarantee the existence of a two-qubit unitary with a given set of Schmidt coefficients.
To demonstrate this point, we can determine the ranges of the four Schmidt coefficients of $U(\alpha_1,\alpha_2,\alpha_3)$ by virtue of \eref{eq:zetak}, with the result
\begin{equation}
\begin{gathered}
\frac{1}{2}\leq |\zeta_0| \leq 1, \quad 0\leq |\zeta_1|\leq \frac{1}{\sqrt{2}},\\
0\leq |\zeta_2|\leq \frac{1}{2}, \quad 0\leq |\zeta_3|\leq \frac{1}{2}.
\end{gathered}
\end{equation}
By contrast, the constraints in \esref{eq:SchmidtCoeffNorm} and \eqref{eq:SchmidtCoeffOrder} alone would imply that $0\leq |\zeta_2|\leq 1/\sqrt{3}$.
To further clarify the constraints on the Schmidt coefficients of two-qubit unitaries, it is convenient to introduce some additional variables.
Let
\begin{align}\label{eq:xi}
\xi_j=\frac{|\zeta_j|^2}{1-|\zeta_0|^2},\quad j=1,2,3.
\end{align}
Geometrically, $(|\zeta_0|^2, |\zeta_1|^2, |\zeta_2|^2, |\zeta_3|^2)$ can be regarded as the barycentric coordinate of a point in a three-dimensional probability simplex according to \eref{eq:SchmidtCoeffNorm}. The accessible Schmidt coefficients correspond to a subset in the probability simplex.
In addition, when $|\zeta_0|<1$, $(\xi_1, \xi_2, \xi_3)$ is the barycentric coordinate of a point in a two-dimensional probability simplex, which corresponds to a normalized cross section of the three-dimensional probability simplex.
\Fref{fig:triangle_dis} illustrates the accessible region of Schmidt coefficients
for six normalized cross sections associated with six distinct values of $|\zeta_0|$, where $|\zeta_0|$ is the largest Schmidt coefficient.
The shaded region within each blue dashed triangle represents
the set of accessible ordered Schmidt coefficients as determined by $(\xi_1, \xi_2, \xi_3)$
for the parameter range $0 \leq \alpha_3 \leq \alpha_2 \leq \alpha_1 \leq \pi/4$.
By contrast, the whole red-shaded region in each ternary diagram represents the set of accessible Schmidt coefficients for the larger parameter range $0 \leq \alpha_3, \alpha_2, \alpha_1 \leq \pi/4$. In the latter case, \eref{eq:SchmidtCoeffOrder} no longer applies, but we have
\begin{equation}
|\zeta_0|\geq |\zeta_j|, \quad j=1,2,3,
\end{equation}
so $|\zeta_0|$ is still the largest Schmidt coefficient.
\section{\label{sec:VTwoQuibtU}Verification of two-qubit unitaries with minimal settings}
\subsection{Product-state constraint}
To construct a minimal-setting protocol for verifying the two-qubit unitary $U(\alpha_1,\alpha_2,\alpha_3)$, we first need to
clarify the product-state constraint, which is tied to the set $\mathrm{Prod}(U)$ defined in \eref{eq:ProdU}.
To better understand the product-state constraint, it is instructive to consider the magic basis \cite{PhysRevLett.78.5022}, which is composed of the four maximally entangled states
\begin{equation}
\begin{aligned}
|\Phi_1 \rangle =\frac{1}{\sqrt{2}} (|00\rangle + |11\rangle), \;\;
|\Phi_2 \rangle =\frac{\mathrm{i}}{\sqrt{2}} (|00\rangle - |11\rangle),\\
|\Phi_3 \rangle =\frac{\mathrm{i}}{\sqrt{2}} (|01\rangle + |10\rangle), \; \;
|\Phi_4 \rangle =\frac{1}{\sqrt{2}} (|01\rangle - |10\rangle).
\end{aligned}
\end{equation}
Suppose the input state has the form $|\phi_0\rangle = \sum_{k=1}^4 \gamma_k |\Phi_k \rangle$ with $\sum_{k=1}^4 |\gamma_k|^2 =1$.
Then the concurrence \cite{PhysRevLett.78.5022} of the input state reads
\begin{equation}
C(|\phi_0 \>)= \left| \sum_{k=1}^4 \gamma_k^2 \right|.
\end{equation}
After the action of $U(\alpha_1,\alpha_2,\alpha_3)$, the output state has the expansion
\begin{equation}
|\phi \> = \sum_{k=1}^4 \mathrm{e}^{-\mathrm{i} \lambda_k} \gamma_k |\Phi_k \> ,
\end{equation}
where
\begin{equation}
\begin{aligned}
&\lambda_1 = \alpha_1-\alpha_2+\alpha_3,\\
&\lambda_2 = -\alpha_1+\alpha_2+\alpha_3,\\
&\lambda_3 =\alpha_1+\alpha_2-\alpha_3,\\
&\lambda_4 = -\alpha_1-\alpha_2-\alpha_3.
\end{aligned}
\end{equation}
The concurrence of the output state reads
\begin{equation}
C(|\phi \>)= \left| \sum_{k=1}^4 \mathrm{e}^{-2\mathrm{i} \lambda_k}\gamma_k^2 \right|.
\end{equation}
The product-state constraint demands $C(|\phi_0 \>)=0$ and $C(|\phi \>)=0$:
\begin{align}\label{eq:PSCmagic}
\sum_{k=1}^4 \gamma_k^2=0,\quad \sum_{k=1}^4 \mathrm{e}^{-2\mathrm{i} \lambda_k}\gamma_k^2=0.
\end{align}
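The two conditions in the product-state constraint can be checked numerically for a random product input: its magic-basis coefficients satisfy $\sum_k \gamma_k^2=0$, and the output-concurrence formula agrees with the independent expression $C=2|\det A|$ for the output amplitude matrix $A$. A NumPy sketch (the angles and the random seed are arbitrary choices):

```python
import numpy as np

# Magic basis: columns are |Phi_1>..|Phi_4> in the basis |00>,|01>,|10>,|11>.
M = np.array([[1,  1j,  0,  0],
              [0,  0,  1j,  1],
              [0,  0,  1j, -1],
              [1, -1j,  0,  0]], dtype=complex) / np.sqrt(2)

rng = np.random.default_rng(1)
a = rng.normal(size=2) + 1j * rng.normal(size=2); a /= np.linalg.norm(a)
b = rng.normal(size=2) + 1j * rng.normal(size=2); b /= np.linalg.norm(b)
phi0 = np.kron(a, b)                     # a random product input state

gamma = M.conj().T @ phi0                # gamma_k = <Phi_k|phi_0>
print(abs(np.sum(gamma ** 2)) < 1e-12)   # C(|phi_0>) = 0 for a product state

t = np.array([np.pi / 4, np.pi / 6, np.pi / 8])   # (alpha_1, alpha_2, alpha_3)
lam = np.array([t[0] - t[1] + t[2], -t[0] + t[1] + t[2],
                t[0] + t[1] - t[2], -t[0] - t[1] - t[2]])
C_out = abs(np.sum(np.exp(-2j * lam) * gamma ** 2))

# Cross-check against C = 2|det A| for the output amplitude matrix A.
phi = M @ (np.exp(-1j * lam) * gamma)
A = phi.reshape(2, 2)
print(np.isclose(C_out, 2 * abs(np.linalg.det(A))))
```

Both checks print `True`, consistent with the magic-basis expressions for the input and output concurrences.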
When $0 < \alpha_1+\alpha_2 < \pi/2$, \eref{eq:PSCmagic} is equivalent to the following equations:
\begin{equation}\label{eq:product-state-constraint}
\begin{aligned}
\gamma_3^2 &= r_{31} \gamma_1^2 + r_{32} \gamma_2^2,\quad
\gamma_4^2 = r_{41} \gamma_1^2 + r_{42} \gamma_2^2, \\
r_{31}
&=\exp[\mathrm{i} (2\alpha_2-2\alpha_3+\pi)] \frac{\sin(2\alpha_1+2\alpha_3)}{\sin(2\alpha_1+2\alpha_2)},\\
r_{32}
&=\exp[\mathrm{i} (2\alpha_1-2\alpha_3+\pi)] \frac{\sin(2\alpha_2+2\alpha_3)}{\sin(2\alpha_1+2\alpha_2)},\\
r_{41}
&=\exp[\mathrm{i} (-2\alpha_1-2\alpha_3+\pi)] \frac{\sin(2\alpha_2-2\alpha_3)}{\sin(2\alpha_1+2\alpha_2)},\\
r_{42}
&=\exp[\mathrm{i} (-2\alpha_2-2\alpha_3+\pi)] \frac{\sin(2\alpha_1-2\alpha_3)}{\sin(2\alpha_1+2\alpha_2)}.
\end{aligned}
\end{equation}
If the product-state constraint holds, then $\gamma_3^2$ and $\gamma_4^2$ are completely determined by $\gamma_1$ and $\gamma_2$. Taking into account the normalization condition $\sum_{k=1}^4 |\gamma_k|^2 =1$ and ignoring the overall phase factors, we can deduce that there are in general two free real parameters.
When $\alpha_1+\alpha_2=0$ or $\alpha_1+\alpha_2 = \pi/2$, \eref{eq:product-state-constraint} does not apply, in which case it is more convenient to consider the product-state constraint in the computational basis. Now any two-qubit pure product state can be expressed as
\begin{equation}
|\phi_0 \rangle =
\begin{pmatrix}
a_1 \\
a_2
\end{pmatrix}
\otimes
\begin{pmatrix}
b_1 \\
b_2
\end{pmatrix}=
\begin{pmatrix}
a_1 b_1 \\
a_1 b_2 \\
a_2 b_1 \\
a_2 b_2
\end{pmatrix}.
\end{equation}
After the action of $U(\alpha_1,\alpha_2,\alpha_3)$, the output state reads
\begin{equation}\label{eq:outputStateCB}
|\phi \rangle = U |\phi_0 \rangle =
\begin{pmatrix}
c_1 \\
c_2 \\
c_3 \\
c_4
\end{pmatrix},
\end{equation}
where
\begin{equation}
\begin{aligned}
c_1 = (\zeta_0+\zeta_3)a_1 b_1+(\zeta_1-\zeta_2)a_2 b_2, \\
c_2 = (\zeta_0-\zeta_3)a_1 b_2+(\zeta_1+\zeta_2)a_2 b_1, \\
c_3 = (\zeta_0-\zeta_3)a_2 b_1+(\zeta_1+\zeta_2)a_1 b_2, \\
c_4 = (\zeta_0+\zeta_3)a_2 b_2+(\zeta_1-\zeta_2)a_1 b_1,
\end{aligned}
\end{equation}
and $\zeta_k$ for $k=0,1,2,3$ are defined in \eref{eq:zetak}. According to \rcite{PhysRevLett.78.5022},
the concurrence $C$ of the output state reads
\begin{align}\label{eq:concurrence}
C(|\phi \>) = 2 | c_1c_4-c_2c_3 |.
\end{align}
To satisfy the product-state constraint, the concurrence $C(|\phi \>)$ should vanish, which means
\begin{equation}\label{eq:PSCstandard}
c_1c_4-c_2c_3 = 0.
\end{equation}
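As an independent numerical check (not part of the original derivation), the concurrence formula \eref{eq:concurrence} can be implemented directly in the computational basis and cross-checked against the standard definition $C=|\langle\phi^*|\sigma_y\otimes\sigma_y|\phi\rangle|$; the helper functions below are ours.

```python
import numpy as np

def concurrence(c):
    """C(|phi>) = 2|c1*c4 - c2*c3|, as in Eq. (eq:concurrence)."""
    c = np.asarray(c, dtype=complex)
    return 2 * abs(c[0] * c[3] - c[1] * c[2])

SY = np.array([[0, -1j], [1j, 0]])

def concurrence_yy(c):
    """Standard definition C = |<phi^*| sigma_y x sigma_y |phi>|."""
    c = np.asarray(c, dtype=complex)
    # note: plain transpose (no conjugation), i.e. phi^T M phi
    return abs(c @ np.kron(SY, SY) @ c)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)            # maximally entangled
prod = np.kron([1, 0], np.array([1, 1]) / np.sqrt(2))  # |0+>, a product state
```

For the Bell state both definitions give $C=1$, while the product state has $C=0$, so the vanishing of $c_1c_4-c_2c_3$ is exactly the product-state condition \eref{eq:PSCstandard}.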
\subsection{Minimal-setting and entanglement-free verification of two-qubit unitaries}
In this section we determine the minimum number of experimental settings required to verify an arbitrary two-qubit unitary and derive a simple criterion for determining whether a general two-qubit unitary can be verified by an entanglement-free protocol. Our main result is summarized in the following theorem.
\begin{theorem}\label{thm:MinStwoqubitG}
Suppose $U$ is a two-qubit unitary operator with Schmidt coefficients $s_0, s_1, s_2, s_3$ arranged in nonincreasing order. Then
\begin{equation}
\mu(U)=\begin{cases}
5 & \mbox{if } s_0>s_1=s_2=s_3>0,\\
4&\mbox{otherwise};
\end{cases}
\end{equation}
in addition, the unitary operator $U$ can be verified by an entanglement-free protocol unless $s_0>s_1=s_2=s_3>0$.
\end{theorem}
\Thref{thm:MinStwoqubitG} is a corollary of \lref{lem:SchmidtCoeffSpecial} in \sref{sec:Schmidt} and \thref{thm:MinStwoqubit} below.
Define
\begin{align}\label{eq:ent-free area}
\mathcal{S}:=&\Bigl\{ (\alpha_1,\alpha_2,\alpha_3)\Big| 0 \leq \alpha_3 \leq \alpha_2 \leq \alpha_1 \leq \frac{\pi}{4} \Bigr\},\\
\mathcal{S}_\mathrm{E}:=&\Bigl\{ (\alpha,\alpha,\alpha) \Big| 0 < \alpha< \frac{\pi}{4} \Bigr\},\quad
\mathcal{S}_\mathrm{EF}:=\mathcal{S}\setminus \mathcal{S}_\mathrm{E}.
\end{align}
\begin{theorem}\label{thm:MinStwoqubit}
Suppose $0 \leq \alpha_3 \leq \alpha_2 \leq \alpha_1 \leq \pi/4$. Then
\begin{align}\label{eq:MinStwoqubit}
\mu(U(\alpha_1,\alpha_2,\alpha_3))=\begin{cases}
4 & \mbox{if } (\alpha_1,\alpha_2,\alpha_3) \in \mathcal{S}_\mathrm{EF},\\
5 & \mbox{if } (\alpha_1,\alpha_2,\alpha_3) \in \mathcal{S}_\mathrm{E}.
\end{cases}
\end{align}
$U(\alpha_1,\alpha_2,\alpha_3)$ can be verified by an entanglement-free protocol iff $(\alpha_1,\alpha_2,\alpha_3) \in \mathcal{S}_\mathrm{EF}$.
\end{theorem}
\begin{proof}
To prove \thref{thm:MinStwoqubit}, it suffices to prove \eref{eq:MinStwoqubit}, which implies
the last statement in the theorem according to \thref{thm:minS-entFree}. To prove \eref{eq:MinStwoqubit}, we shall first construct
a four-setting entanglement-free protocol for verifying $U(\alpha_1,\alpha_2,\alpha_3)$ when $(\alpha_1,\alpha_2,\alpha_3) \in \mathcal{S}_\mathrm{EF}$. To this end we need to consider three different cases and construct an EFMIS in each case (cf. \thref{thm:minS-entFree}).
\begin{enumerate}
\item[1.] $\alpha_1=\alpha_2=\pi/4$
In this case, according to \esref{eq:outputStateCB}-\eqref{eq:PSCstandard}, the product-state constraint under the computational basis reads
\begin{equation}
a_1 a_2 b_1 b_2 \cos(2\alpha_3) =0.
\end{equation}
In addition, $|\zeta_0|=|\zeta_1|=|\zeta_2|=|\zeta_3|=1/2$ according to \eref{eq:zetak}. So a pure product state satisfies the product-state constraint if one of the reduced states is an eigenstate of $\sigma_3$. Based on this observation we can construct an EFMIS as follows:
\begin{equation}\label{eq:1st-case MIS}
\begin{aligned}
&|\phi_1\>=|0{+}\>,\quad
|\phi_2\>=|1{+}\>,\\
&|\phi_3\>=|{-}0\>,\quad
|\phi_4\>=|{+}0\>,
\end{aligned}
\end{equation}
where $|\pm\>=\frac{1}{\sqrt{2}}(|0\>\pm |1\>)$ are the two eigenstates of $\sigma_1$.
Note that, as expected, these product states remain product states after the action of $U(\alpha_1,\alpha_2,\alpha_3)$. In addition, the transition graph of these states is connected. Therefore,
$U(\alpha_1,\alpha_2,\alpha_3)$ can be verified by an entanglement-free protocol based on four experimental settings, which confirms \eref{eq:MinStwoqubit}.
\item[2.] $\alpha_1=\alpha_2=\alpha_3=0$
In this case, $U(\alpha_1,\alpha_2,\alpha_3)$ is equal to the identity, so all product states satisfy the product-state constraint, and it is easy to construct an EFMIS. Actually, the EFMIS constructed in case 1 still works. Therefore,
$U(\alpha_1,\alpha_2,\alpha_3)$ can be verified by an entanglement-free protocol based on four experimental settings, which confirms \eref{eq:MinStwoqubit}.
\item[3.] $\alpha_1>\alpha_3$ and $\alpha_2<\pi/4$.
In this case, it is more convenient to consider the magic basis. Suppose the state $|\phi_0\rangle$ has the expansion
$|\phi_0\rangle = \sum_{k=1}^4 \gamma_k |\Phi_k \rangle$ with the normalization condition $\sum_{k=1}^4 |\gamma_k|^2 =1$.
Then the product-state constraint is satisfied if the coefficients $\gamma_1^2, \gamma_2^2, \gamma_3^2, \gamma_4^2$ have the form shown in
\aref{app:case3-EFMIS}. Moreover, an EFMIS can be constructed as follows (in the magic basis):
\begin{equation}\label{eq:case3-input}
\begin{aligned}
&|\phi_{1}\>=
\begin{pmatrix}
\gamma_{1} \\ \gamma_{2} \\ \gamma_{3} \\ \gamma_{4}
\end{pmatrix},\quad
&|\phi_{2}\>=
\begin{pmatrix}
\gamma_{1} \\ -\gamma_{2} \\ \gamma_{3} \\ \gamma_{4}
\end{pmatrix}, \\
&|\phi_{3}\>=
\begin{pmatrix}
\gamma_{1} \\ \gamma_{2} \\ \gamma_{3} \\ -\gamma_{4}
\end{pmatrix},\quad
&|\phi_{4}\>=
\begin{pmatrix}
-\gamma_{1} \\ \gamma_{2} \\ \gamma_{3} \\ -\gamma_{4}
\end{pmatrix}.
\end{aligned}
\end{equation}
Therefore,
$U(\alpha_1,\alpha_2,\alpha_3)$ can be verified by an entanglement-free protocol based on four experimental settings, which confirms \eref{eq:MinStwoqubit}.
\end{enumerate}
To complete the proof of \thref{thm:MinStwoqubit}, it remains to determine $\mu(U(\alpha_1,\alpha_2,\alpha_3))$ in the case $(\alpha_1,\alpha_2,\alpha_3) \in \mathcal{S}_\mathrm{E}$, which means $0<\alpha_1=\alpha_2=\alpha_3<\pi/4$. Suppose the input state $|\phi_0\rangle$ has the expansion
$|\phi_0\rangle = \sum_{k=1}^4 \gamma_k |\Phi_k \rangle$ with $\sum_{k=1}^4 |\gamma_k|^2 =1$ in the magic basis. According to \eref{eq:product-state-constraint}, the product-state constraint amounts to the following equality:
\begin{equation}
(\gamma_1^2, \gamma_2^2, \gamma_3^2, \gamma_4^2) = (\gamma_1^2, \gamma_2^2, -\gamma_1^2- \gamma_2^2, 0),
\end{equation}
which implies that $d_{\mathrm{Prod}}(U) = 3$. So $U(\alpha_1,\alpha_2,\alpha_3)$ cannot be verified by an entanglement-free protocol according to \thref{thm:minS-entFree}. Nevertheless, $U(\alpha_1,\alpha_2,\alpha_3)$ can be verified by a five-setting protocol based on local operations, given that $\mu(U(\alpha_1,\alpha_2,\alpha_3))=5$ according to \pref{pro:muUdprod}. This result
confirms \eref{eq:MinStwoqubit} and completes the proof of \thref{thm:MinStwoqubit}.
\end{proof}
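Case 1 of the proof above can be verified numerically. The sketch below assumes the canonical form $U(\alpha_1,\alpha_2,\alpha_3)=\exp[-\mathrm{i}(\alpha_1\sigma_1\otimes\sigma_1+\alpha_2\sigma_2\otimes\sigma_2+\alpha_3\sigma_3\otimes\sigma_3)]$; this form is consistent with the CNOT and SWAP identities quoted in the Examples section, but it is an assumption here since the defining equation of $U$ lies outside this excerpt.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def U(a1, a2, a3):
    """Assumed canonical form exp[-i(a1 XX + a2 YY + a3 ZZ)]."""
    H = a1 * np.kron(X, X) + a2 * np.kron(Y, Y) + a3 * np.kron(Z, Z)
    return expm(-1j * H)

def concurrence(c):
    # Eq. (eq:concurrence) in the computational basis
    return 2 * abs(c[0] * c[3] - c[1] * c[2])

ket0, ket1 = np.eye(2)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# The EFMIS of Eq. (eq:1st-case MIS): |0+>, |1+>, |-0>, |+0>
efmis = [np.kron(ket0, plus), np.kron(ket1, plus),
         np.kron(minus, ket0), np.kron(plus, ket0)]
```

Under this convention, each EFMIS state is mapped to a product state by $U(\pi/4,\pi/4,\alpha_3)$ for every $\alpha_3$, as the vanishing output concurrence confirms.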
Next, we generalize \thref{thm:MinStwoqubit} to the whole parameter range $0\le \alpha_3,\alpha_2,\alpha_1 < 2\pi$.
Define
\begin{align}\label{eq:ent-free area2}
\tilde{\mathcal{S}}:=&\Bigl\{ (\alpha_1,\alpha_2,\alpha_3)\Big| 0 \leq \alpha_3,\alpha_2,\alpha_1 < 2\pi \Bigr\},\\
\tilde{\mathcal{S}}_\mathrm{E}:=&\Bigl\{ \Bigl(\frac{\pi}{2} k_1+\frac{\pi}{4} \pm \alpha,\frac{\pi}{2} k_2+\frac{\pi}{4} \pm \alpha,\frac{\pi}{2} k_3+\frac{\pi}{4} \pm \alpha \Bigr) \Big| \nonumber \\
& 0 < \alpha < \frac{\pi}{4}, \;
k_1,k_2,k_3=0,1,2,3 \Bigr\},\\
\tilde{\mathcal{S}}_\mathrm{EF}:=&\tilde{\mathcal{S}} \setminus \tilde{\mathcal{S}}_\mathrm{E}.
\end{align}
The following corollary is proved in \aref{app:cor:MinStwoqubit-general}.
\begin{corollary}\label{cor:MinStwoqubit-general}
Suppose $0\le \alpha_3,\alpha_2,\alpha_1 < 2\pi$. Then
\begin{align}
\mu(U(\alpha_1,\alpha_2,\alpha_3))=\begin{cases}
4 & \mbox{if } (\alpha_1,\alpha_2,\alpha_3) \in \tilde{\mathcal{S}}_\mathrm{EF},\\
5 & \mbox{if } (\alpha_1,\alpha_2,\alpha_3) \in \tilde{\mathcal{S}}_\mathrm{E}.
\end{cases}
\end{align}
$U(\alpha_1,\alpha_2,\alpha_3)$ can be verified by an entanglement-free protocol iff $(\alpha_1,\alpha_2,\alpha_3) \in \tilde{\mathcal{S}}_\mathrm{EF}$.
\end{corollary}
\Thref{thm:MinStwoqubit} and \crref{cor:MinStwoqubit-general} imply that generic two-qubit unitary transformations (except for a set of measure zero) can be verified by entanglement-free protocols based on four experimental settings. In principle we can reach arbitrarily high precision as long as sufficiently many tests can be performed.
Nevertheless, certain special unitary transformations cannot be verified by entanglement-free protocols, in which case five experimental settings are necessary. Note that the minimum number of settings is not continuous, which is expected for a discrete figure of merit. For each unitary $U$ in the latter case, we can find a nearby unitary $U'$ that can be verified by an entanglement-free protocol. In this way $U$ can be verified approximately by an entanglement-free protocol. However, the precision is limited by the entanglement infidelity between $U'$ and $U$; in addition, the target unitary transformation $U$ cannot pass all the tests with certainty.
To enhance the precision, we can find a better approximation to $U$, but the precision is still limited for any given approximation. Although any two-qubit unitary transformation can be verified with five measurement settings (only four settings in the generic case), quite often the sample efficiency can be improved by increasing the number of measurement settings. The tradeoff between the sample efficiency and the number of experimental settings deserves further study.
\subsection{Examples}\label{example}
In this section we present explicit EFMISs for several well-known two-qubit gates, from which entanglement-free verification protocols can be constructed immediately.
\subsubsection{CNOT}
The
CNOT gate is equivalent to $U(\frac{\pi}{4},0,0)$ according to the following decomposition
\begin{equation}
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix}
= (V_\mathrm{A} \otimes W_\mathrm{B})\, U\Bigl(\frac{\pi}{4},0,0\Bigr)\, (\tilde{V}_\mathrm{A} \otimes \tilde{W}_\mathrm{B}),
\end{equation}
where
\begin{equation}
\begin{aligned}
V_\mathrm{A}&=\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1 \\
\mathrm{i} & -\mathrm{i}
\end{pmatrix},&\quad
\tilde{V}_\mathrm{A}&=\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1 \\
1 & -1
\end{pmatrix}, \\
W_\mathrm{B}&=\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & \mathrm{i} \\
-\mathrm{i} & -1
\end{pmatrix},&\quad
\tilde{W}_\mathrm{B}&=
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}.
\end{aligned}
\end{equation}
To construct an entanglement-free protocol for verifying the CNOT gate, it suffices to construct an EFMIS. To this end, we can first construct an EFMIS for
$U(\frac{\pi}{4},0,0)$ and then apply a suitable local unitary transformation, although it is easy to construct an EFMIS for the CNOT gate directly.
According to \esref{eq:outputStateCB}-\eqref{eq:PSCstandard}, the product-state constraint for $U(\frac{\pi}{4},0,0)$ under the computational basis can be expressed as
\begin{equation}
(a_1^2-a_2^2)(b_1^2-b_2^2)=0.
\end{equation}
A product state satisfies the constraint iff
one of the reduced states is an eigenstate of $\sigma_1$. Based on this observation, an EFMIS can be constructed as
\begin{equation}\label{eq:input-CNOT0}
\begin{aligned}
|\phi_1\>&=|0+\>, &\quad
|\phi_2\>&=|1+\>,\\
|\phi_3\>&=|{-}0\>,&\quad
|\phi_4\>&=|{+}0\>,
\end{aligned}
\end{equation}
where $|\pm\>=(|0\>\pm |1\>)/\sqrt{2}$ are the two eigenstates of $\sigma_1$. By multiplying the local unitary operator $(\tilde{V}_\mathrm{A} \otimes \tilde{W}_\mathrm{B})^\dag$, we can construct an EFMIS for the CNOT gate as
\begin{equation}\label{eq:input-CNOT}
\begin{aligned}
|\tilde{\phi}_1\>&=|{+}{-}\>, &\quad
|\tilde{\phi}_2\>&=|{-}{-}\>,\\
|\tilde{\phi}_3\>&=|10\>,&\quad
|\tilde{\phi}_4\>&=|00\>.
\end{aligned}
\end{equation}
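Both the decomposition and the EFMIS above can be checked numerically. The sketch below assumes the canonical form $U(\tfrac{\pi}{4},0,0)=\exp[-\mathrm{i}\tfrac{\pi}{4}\sigma_1\otimes\sigma_1]$ (an assumption, since the defining equation of $U$ is outside this excerpt); with the local unitaries quoted above, the product reproduces the CNOT matrix exactly.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
# Assumed canonical form of U(pi/4, 0, 0)
U = expm(-1j * (np.pi / 4) * np.kron(X, X))

s2 = np.sqrt(2)
VA  = np.array([[1, 1], [1j, -1j]]) / s2
VAt = np.array([[1, 1], [1, -1]]) / s2        # Hadamard
WB  = np.array([[1, 1j], [-1j, -1]]) / s2
WBt = np.diag([1, -1]).astype(complex)

# Right-hand side of the quoted decomposition
CNOT = np.kron(VA, WB) @ U @ np.kron(VAt, WBt)

def concurrence(c):
    return 2 * abs(c[0] * c[3] - c[1] * c[2])

ket0, ket1 = np.eye(2)
plus = (ket0 + ket1) / s2
minus = (ket0 - ket1) / s2
# EFMIS of Eq. (eq:input-CNOT): |+->, |-->, |10>, |00>
efmis = [np.kron(plus, minus), np.kron(minus, minus),
         np.kron(ket1, ket0), np.kron(ket0, ket0)]
```

Applying the reconstructed CNOT to each EFMIS state yields a product state, consistent with the entanglement-free property.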
\subsubsection{CZ}
The CZ gate is equivalent to the CNOT gate
according to the identity
\begin{align}
\mathrm{CZ}=(I\otimes H) \mathrm{CNOT} (I\otimes H),
\end{align}
where $H$ is the Hadamard gate.
Therefore, any EFMIS for the CNOT gate can be turned into an EFMIS for the CZ gate by simply applying the local unitary operator $I\otimes H$.
For example, one EFMIS for the CZ gate can be constructed by applying
$I\otimes H$ to the states in \eref{eq:input-CNOT}, which yields
\begin{equation}\label{eq:EFMIS-CZ}
\begin{aligned}
|\phi_1\>&=|{+}1\>, &\quad
|\phi_2\>&=|{-}1\>,\\
|\phi_3\>&=|1{+}\>,&\quad
|\phi_4\>&=|0{+}\>.
\end{aligned}
\end{equation}
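Since the CZ gate is an explicit diagonal matrix, this EFMIS can be checked directly, with no extra assumptions; only the helper names below are ours.

```python
import numpy as np

CZ = np.diag([1, 1, 1, -1]).astype(complex)

def concurrence(c):
    # Eq. (eq:concurrence)
    return 2 * abs(c[0] * c[3] - c[1] * c[2])

ket0, ket1 = np.eye(2)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# EFMIS of Eq. (eq:EFMIS-CZ): |+1>, |-1>, |1+>, |0+>
efmis = [np.kron(plus, ket1), np.kron(minus, ket1),
         np.kron(ket1, plus), np.kron(ket0, plus)]
out_conc = [concurrence(CZ @ s) for s in efmis]
```

Each output state has vanishing concurrence, so the four states remain product states under CZ.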
\subsubsection{C-Phase}
The C-Phase gate with nontrivial phase $0<\varphi<2\pi$ reads
\begin{equation}
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & \mathrm{e}^{\mathrm{i} \varphi}
\end{pmatrix}.
\end{equation}
The conjugate of the C-Phase gate is equivalent to $U\bigl(\frac{\varphi}{4},0,0\bigr)$ according to the following decomposition
\begin{equation}
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & \mathrm{e}^{-\mathrm{i} \varphi}
\end{pmatrix}
= (V_\mathrm{A} \otimes W_\mathrm{B})\, U\Bigl(\frac{\varphi}{4},0,0\Bigr)\, (\tilde{V}_\mathrm{A} \otimes \tilde{W}_\mathrm{B}),
\end{equation}
where
\begin{equation}
\begin{aligned}
V_\mathrm{A}&=\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1 \\
-\mathrm{e}^{-\mathrm{i} \frac{\varphi}{2}} & \mathrm{e}^{-\mathrm{i} \frac{\varphi}{2}}
\end{pmatrix},&\!
\tilde{V}_\mathrm{A}&=\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & -1 \\
1 & 1
\end{pmatrix},\\
W_\mathrm{B}&=\frac{1}{\sqrt{2}}
\begin{pmatrix}
\mathrm{e}^{\mathrm{i} \frac{\varphi}{4}} & \mathrm{e}^{\mathrm{i} \frac{\varphi}{4}} \\
\mathrm{e}^{-\mathrm{i} \frac{\varphi}{4}} & -\mathrm{e}^{-\mathrm{i} \frac{\varphi}{4}}
\end{pmatrix},&\!
\tilde{W}_\mathrm{B}&=\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1 \\
1 & -1
\end{pmatrix}.
\end{aligned}
\end{equation}
According to \esref{eq:outputStateCB}-\eqref{eq:PSCstandard}, the product-state constraint for $U(\frac{\varphi}{4},0,0)$ under the computational basis can be expressed as
\begin{equation}
(a_1^2-a_2^2)(b_1^2-b_2^2)\sin\frac{\varphi}{2}=0.
\end{equation}
A product state satisfies the constraint if
one of the reduced states is an eigenstate of $\sigma_1$. So the states in \eref{eq:input-CNOT0} also form an EFMIS for $U(\frac{\varphi}{4},0,0)$.
By applying the local unitary operator $(\tilde{V}_\mathrm{A} \otimes \tilde{W}_\mathrm{B})^\dag$, we can construct an EFMIS for the C-Phase gate (and its conjugate) as
\begin{equation}
\begin{aligned}
|\phi_1\>&=|{-}0\>, &\quad
|\phi_2\>&=|{+}0\>,\\
|\phi_3\>&=-|1{+}\>,&\quad
|\phi_4\>&=|0{+}\>.
\end{aligned}
\end{equation}
Note that this EFMIS applies to the C-Phase gate with an arbitrary phase. Incidentally, the four states in \eref{eq:EFMIS-CZ} also form an EFMIS for the C-Phase gate with an arbitrary phase.
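Both claims, that this EFMIS works for arbitrary phase and that the states of \eref{eq:EFMIS-CZ} do as well, can be checked directly against the explicit C-Phase matrix; no canonical-form assumption is needed here.

```python
import numpy as np

def cphase(phi):
    """C-Phase gate diag(1, 1, 1, e^{i phi})."""
    return np.diag([1, 1, 1, np.exp(1j * phi)])

def concurrence(c):
    return 2 * abs(c[0] * c[3] - c[1] * c[2])

ket0, ket1 = np.eye(2)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# EFMIS for the C-Phase gate: |-0>, |+0>, -|1+>, |0+>
efmis = [np.kron(minus, ket0), np.kron(plus, ket0),
         -np.kron(ket1, plus), np.kron(ket0, plus)]
# States of Eq. (eq:EFMIS-CZ), claimed to work for arbitrary phase too
cz_efmis = [np.kron(plus, ket1), np.kron(minus, ket1),
            np.kron(ket1, plus), np.kron(ket0, plus)]
```

Sweeping $\varphi$ over several values, every state in both sets is mapped to a product state.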
\subsubsection{SWAP}
The SWAP gate is equal to $U(\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4})$ up to an overall phase factor according to the following identity
\begin{equation}
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
=\frac{1+\mathrm{i}}{\sqrt{2}} U(\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4}).
\end{equation}
Thanks to this identity, the EFMIS for $U(\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4})$ presented in \eref{eq:1st-case MIS} is also an EFMIS for the SWAP gate. In addition, any product state satisfies the product-state constraint, so any MIS composed of product states is an EFMIS for the SWAP gate.
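The phase identity above can be reproduced numerically under the canonical-form assumption $U(\alpha_1,\alpha_2,\alpha_3)=\exp[-\mathrm{i}\sum_k\alpha_k\,\sigma_k\otimes\sigma_k]$ (hedged as before, since the defining equation of $U$ is outside this excerpt); note that $(1+\mathrm{i})/\sqrt{2}=\mathrm{e}^{\mathrm{i}\pi/4}$.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def U(a1, a2, a3):
    """Assumed canonical form exp[-i(a1 XX + a2 YY + a3 ZZ)]."""
    H = a1 * np.kron(X, X) + a2 * np.kron(Y, Y) + a3 * np.kron(Z, Z)
    return expm(-1j * H)

SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)
phase = (1 + 1j) / np.sqrt(2)          # e^{i pi/4}
lhs = phase * U(np.pi / 4, np.pi / 4, np.pi / 4)
```

With this sign convention the product equals the SWAP matrix exactly, which also fixes the convention used in the earlier checks.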
\section{\label{sec:summary}Summary}
We studied systematically QSV and QGV with a focus on the number of experimental settings based on local operations. We showed that any bipartite pure state can be verified by only two measurement settings based on local projective measurements. The minimum number of experimental settings required to verify a bipartite unitary increases linearly with the total dimension.
In addition, we introduced the concept of entanglement-free verification, which does not generate any entanglement in the verification procedure.
The connection with minimal-setting verification is also clarified.
Finally, we determined the minimum number of experimental settings required to verify each two-qubit unitary. It turns out any two-qubit unitary can be verified using at most five settings based on local operations, and a generic two-qubit unitary requires only four settings.
In the course of this study we derived a number of results on two-qubit unitaries and their Schmidt coefficients, which are of independent interest. Our work significantly advances the current understanding
of QSV and QGV with respect to the number of required experimental settings, which is instructive for both theoretical studies and practical applications.
In addition, our work shows that verification protocols with minimal settings are in general not balanced and thus do not have natural analogs in QSV, which reflects a key distinction between QGV and QSV that was not recognized before.
In the future it would be desirable to generalize our results to the multipartite setting.
\section*{Acknowledgments}
This work is supported by the National Natural Science Foundation of China (Grants No.~92165109 and No.~11875110) and Shanghai Municipal Science and Technology Major Project (Grant No.~2019SHZDZX01).
% arXiv:1809.03807
\section{Introduction}
\label{sec:intro}
Bow shock pulsar wind nebulae (BSPWNe) are a particular type of pulsar wind nebulae (PWNe), usually associated with old pulsars that have emerged from their progenitor supernova remnant \citep{Bucciantini:2001,Gaensler_Slane06a}.
It has been estimated that a considerable fraction of all pulsars (from $10\%$ up to $50\%$) are born with kick velocities of the order of 100-500 km s$^{-1}$ \citep{Arzoumanian:2002}. Since the expansion of the surrounding remnant is decelerated by the sweeping up of interstellar medium (ISM), these pulsars are fated to escape their progenitor shell over typical timescales of a few tens of thousands of years, short compared with typical pulsar ages ($\sim 10^6$ years). After escaping from the remnant these pulsars interact directly with the ISM, and since the typical sound speed in the ISM is of the order of $10-100$ km s$^{-1}$, their motion is highly supersonic.
Pulsars are known to be powerful sources of cold, magnetized and ultra-relativistic outflows, with bulk Lorentz factors in the range $10^4-10^7$ \citep{Goldreich:1969,Kennel:1984,Kennel:1984a}. This wind, composed mainly of electron-positron pairs, is launched from the pulsar magnetosphere \citep{Contopoulos_Kazanas+99a,Spitkovsky06a,Tchekhovskoy_Philippov+16a,Hibschman:2001,Takata_Wang+10a,Timokhin_Arons13a,Takata_Ng+16a}, at the expense of the rotational energy of the pulsar itself. Confined and slowed down in a strong termination shock (TS) by the interaction with the ambient medium, the wind gives rise to a nebula, which may be revealed as a source of non-thermal radiation produced by the accelerated pairs interacting with the nebular magnetic field.
Due to the pulsar fast speed, the ram pressure
balance between the pulsar wind and the supersonic ISM flow (in the
reference frame of the pulsar itself) gives rise to a cometary nebula \citep{Wilkin:1996,Bucciantini:2001,Bucciantini:2002}: the pulsar wind is still slowed down by a TS, elongated behind the pulsar, beyond which the shocked wind gives rise to the PWN itself, separated from the ISM, shocked in a forward bow-shock, by a contact discontinuity (CD).
A schematic sketch of this structure can be seen in Fig.~\ref{fig:sketch}. This morphology has been confirmed by numerical simulations in various regimes \citep{Bucciantini:2002,Bucciantini_Amato+05a,Vigelius:2007,Barkov:2018}.
These nebulae can be seen in non-thermal radio/X-rays, usually in the form of long tails \citep{Arzoumanian_Cordes+04a,Kargaltsev:2017,Kargaltsev_Misanovic+08a,Gaensler_van-der-Swaluw+04a,Yusef-Zadeh_Gaensler05a,Li_Lu+05a,Gaensler05a,Chatterjee_Gaensler+05a,Ng_Camilo+09a,Hales_Gaensler+09a,Ng_Gaensler+10a,De-Luca_Marelli+11a,Marelli_De-Luca+13a,Jakobsen_Tomsick+14a,Misanovic_Pavlov+08a,Posselt_Pavlov+17a,Klingler_Rangelov+16a,Ng_Bucciantini+12a}. In some cases polarimetric information is also available, suggesting a large variety of magnetic configurations, from highly ordered to strongly turbulent, both in the head and in the tail of the nebula \citep{Ng_Bucciantini+12a, Yusef-Zadeh_Gaensler05a, Ng_Gaensler+10a}.
If the pulsar is moving through a partially ionized medium the bow shock may also be detected as H$_\alpha$ emission produced as a consequence of collisional and/or charge exchange excitations of neutral hydrogen atoms in the tail \citep{Chevalier:1980, Kulkarni_Hester88a,Cordes_Romani+93a,Bell_Bailes+95a,van-Kerkwijk_Kulkarni01a,Jones_Stappers+02a,Brownsberger:2014,Romani_Slane+17a}. Such a phenomenon is not restricted to BSPWNe but is observed also in supernova remnant shocks \citep[see, e.g.,][]{Heng:2010,Blasi-Morlino:2012,Morlino-Blasi:2014}.
Recently a second example of a BSPWN emitting in the far-UV was also detected, with the emission spatially coincident with the previously detected H$_\alpha$ one \citep{Rangelov:2017}.
In the last several years, detailed observations at different wavelengths have revealed a variety of morphologies in the pulsar vicinity, different emission patterns and shapes of tails, with puzzling outflows misaligned with the pulsar velocity.
This diversity is still poorly understood, and generically attributed to a configuration-dependent growth of shear instabilities and/or to the propagation of the pulsar in a non-uniform ISM \citep{Romani:1997,Vigelius:2007,Yoon-Heinz:2017, Toropina:2018}.
BSPWNe in particular are shown to have a peculiar \textit{head-and-shoulder} structure (shown in the sketch of Fig.~\ref{fig:sketch}): instead of evolving into a cylindrical or conical shape, the smooth bow shock in the head reveals a sideways expansion that sometimes shows a periodic structure (one of the most famous examples is the Guitar nebula, see, e.g., \citealt{Chatterjee:2002,Chatterjee:2004,van-Kerkwijk:2008}). While at present the structures in the vicinity of the bow shock head are reasonably well understood, the same cannot be said of the structures in the tail \citep{Brownsberger:2014,Pavan:2016,Klingler:2016}.
The possibility that the dynamics of PWNe can be strongly affected by mass loading was discussed by \citet{Bucciantini:2001} and \citet{Bucciantini:2002a}. There the authors have shown that a non-negligible fraction of neutral atoms is expected to penetrate into the bow shock through the shocked ISM, and then interact with the pulsar wind, modifying the predicted dynamics.
These first models provide a good description of both the hydrogen penetration length scale and the H$\alpha$ luminosity. In \citet{Morlino:2015} a quasi-1D steady-state model has been proposed to investigate the effect of mass loading on the tail morphology of BSPWNe. If a significant fraction of neutral atoms from the ISM penetrate into the bow shock, they can undergo ionization, essentially by interacting with the UV photons emitted by the relativistic plasma of the nebula.
The resulting protons and electrons interact, in turn, with the wind through its magnetic field, leading to a net mass loading of the shocked wind in the tail.
The newly formed protons and electrons are at rest in the rest frame of the ISM. Moreover they originate from the cold neutrals of the ISM which carry negligible thermal energy. Hence they do not add momentum or internal energy to the flow (they however add their rest mass energy). Given that momentum and energy are conserved, the rise in rest mass energy density causes the wind to slow down and the pressure to rise, resulting ultimately in a lateral expansion.
This effect indeed produces a shape which resembles the \textit{head-and-shoulder} morphology.
Nevertheless the analytic model by \citet{Morlino:2015} has a few shortcomings: being 1D, it does not account for the fluid dynamics in the lateral direction; the ISM ram pressure is neglected despite the supersonic motion; and, being stationary, it cannot capture possible instabilities.
In this paper we extend this model by means of 2D relativistic HD axisymmetric simulations, with the aim of validating the \textit{head-shoulder} morphology of BSPWNe predicted by the aforementioned work and extending it beyond its limitations. Although the magnetic field might influence the dynamics of the flow, its response to mass loading, and the expected opening of the tail, we limit this work to pure HD in order to avoid a further level of complexity.
This paper is organized as follows: in Sec.~\ref{sec:Nsetup} the numerical tool and setup are described; in Sec.~\ref{sec:results} we present and discuss our findings. Conclusions are then drawn in Sec.~\ref{sec:conclusion}.
\section{Numerical setup}
\label{sec:Nsetup}
The typical length scale of BSPWNe is the so-called {\it stand-off distance} $d_0$ \citep{Wilkin:1996,Bucciantini:2001, van-der-Swaluw:2003, Bucciantini_Amato+05a}. This is the distance from the pulsar of the stagnation point, where the wind momentum flux and the ISM ram pressure balance each other:
\begin{equation}\label{eq:stagnationp}
d_0 = \sqrt{\dot{E}/(4\pi c \rho_\mathrm{ISM} v_\mathrm{ISM}^2)}\,,
\end{equation}
where $\dot{E}$ is the pulsar luminosity, $\rho_\mathrm{ISM}$ is the ISM density (the ionized component), $ v_\mathrm{ISM}$ the speed of the pulsar with respect to the local medium, and $c$ the speed of light. It is then convenient to normalize all the distances in terms of $d_0$.
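For orientation, Eq.~(\ref{eq:stagnationp}) can be evaluated for fiducial parameters; the values below are our illustrative choices, not parameters quoted in the paper.

```python
import math

# Fiducial, purely illustrative parameters (not values quoted in the text):
Edot = 1.0e36          # pulsar spin-down luminosity [erg/s]
n_ism = 1.0            # ionized ISM number density [cm^-3]
m_p = 1.6726e-24       # proton mass [g]
v_psr = 3.0e7          # pulsar speed (300 km/s) [cm/s]
c_light = 2.998e10     # speed of light [cm/s]

rho_ism = n_ism * m_p
# Eq. (eq:stagnationp): stand-off distance
d0 = math.sqrt(Edot / (4 * math.pi * c_light * rho_ism * v_psr**2))
pc = 3.086e18          # parsec [cm]
d0_pc = d0 / pc
```

For these fiducial values $d_0$ comes out of order a few $\times 10^{16}$ cm (roughly $10^{-2}$ pc), which sets the physical scale behind the code-unit normalization used below.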
Numerical simulations \citep{Bucciantini:2002b,Bucciantini_Amato+05a,Bucciantini:2018}, in the absence of mass loading, show that at a distance of a few $d_0$ from the pulsar a backward collimated tail is already formed. Typical flow velocities of the shocked pulsar wind plasma in the tail are of the order of $v_0\simeq 0.8c$, while its pressure is approximately in equilibrium with the ISM. Since we are actually interested in the behavior of the tail we do not simulate the bow shock head. In practice we only study the domain shown by the highlighted yellow-dotted rectangle in Fig.~\ref{fig:sketch}.
Simulations were carried out using the numerical code PLUTO \citep{Mignone:2007}, a shock-capturing, finite-volume code for the solution of hyperbolic/parabolic systems of partial differential equations, using a second order Runge-Kutta time integrator and an HLLC Riemann solver. Given the geometry of the problem, simulations were done on a 2D axisymmetric grid with cylindrical coordinates $(r,\, z)$, assuming a vanishing azimuthal velocity $v_\phi=0$. The grid has uniform spacing along $r$ and $z$, but with different resolution along the two directions, in order to ensure the requested numerical resolution in the radial direction at the injection zone.
The fluid is described using an ideal gas equation of state (EoS) with adiabatic index $\Gamma=4/3$, appropriate for the relativistic shocked pulsar plasma. \citet{Bucciantini:2002} has shown that more sophisticated simulations with multi-fluid treatment, enabling the use of a different adiabatic coefficient for the non-relativistic component, lead only to minor deviations in the overall geometry, until efficient thermalization between the two components is reached, which physically happens on timescales much longer than the flow time in the tail of BSPWNe. For this same reason the use of {\it adaptive EoS}, like Taub's EoS \citep{Taub:1948, Mignone:2005} is not feasible in the presence of mass loading (or other form of mass contamination), because they are based on the assumption of instantaneous thermalization between the two components.
Our reference frame is moving with the pulsar. In this reference frame the ISM is seen as a uniform, unmagnetized flow moving along the positive $z-$direction with velocity $v_z=v_\mathrm{ISM}$. The unloaded pulsar tail plasma is also moving along the positive $z-$direction with velocity $v_z=0.8c$. In accord with \citet{Morlino:2015}, the unloaded tail is supposed to extend between $r=0$ and $r=1$ (in units of $d_0$).
As we will show, the parameters regulating the tail geometry are the ISM pressure $p_\mathrm{ISM}$ and the ISM Mach number $M^2 = \rho_\mathrm{ISM} v_\mathrm{ISM}^2/(\Gamma p_\mathrm{ISM})$, and not the separate values of the ISM density and velocity. At the $z=0$ boundary, in a layer extending up to $z=1.5$, we keep fixed the inflow conditions for the ISM and the pulsar wind tail, in terms of velocity, density, and pressure. The pressure in the tail is set equal to the one in the ISM, $p_{\rm tail}=p_\mathrm{ISM}$, while its density is $\rho_{\rm tail}= 10^{-2} p_{\rm tail}$, appropriate for the hot shocked relativistic gas of the PWN (we verified that lowering it further does not change the results). The velocity of the ISM is then varied in order to sample both the subsonic and the supersonic regimes, with Mach numbers ranging from 0 to $\sim 6$ (see Tab.~\ref{Tab:Runs}). Axial symmetry is imposed at $r=0$, while outflow conditions are set at the other two boundaries.
\begin{table}
\begin{center}
\begin{tabular}{cccccc}
\hline
Run & $(r_f,\, z_f)$ [$d_0$] & $(N_r,\,N_z)$ & $\rho_\mathrm{ISM}$ & $v_\mathrm{ISM}$ [$c$] & $M$ \\
\hline
S$_0$ & $(15,\,50)$ & $(320,\,320)$ & 1.00 & $0$ & $0.0$\\
S$_1$ & $(15,\,60)$ & $(320,\,320)$ & 1.00 & $0.015$ & $0.4$\\
S$_{1a}$ & $(15,\,60)$ & $(320,\,320)$ & 0.25 & $0.03$ & $0.4$\\
S$_2$ & $(15,\,60)$ & $(320,\,320)$ & 1.00 & $0.03$ & $0.8$\\
S$_{2a}$ & $(15,\,60)$ & $(320,\,320)$ & 0.25 & $0.06$ & $0.8$\\
S$_3$ & $(10,\,100)$ & $(256,\,512)$ & 1.00 & $0.07$ & $1.9$ \\
S$_4$ & $(10,\,100)$ & $(256,\,512)$ & 1.00 & $0.1$ & $2.7$\\
S$_5$ & $(10,\,100)$ & $(256,\,512)$ & 1.00 & $0.14$ & $3.8$\\
S$_6$ & $(10,\,100)$ & $(256,\,512)$ & 1.00 & $0.2$ & $5.5$\\
S$_{6a}$ & $(10,\,100)$ & $(256,\,512)$ & 4.00 & $0.1$ & $5.5$\\
S$_{6b}$ & $(10,\,100)$ & $(256,\,512)$ & 2.04 & $0.14$ & $5.5$ \\
\hline
\end{tabular}
\end{center}
\caption{List of all runs. The columns are, from left: box size $r_f$ and $z_f$ in units of $d_0$; number of grid points $N_r$ and $N_z$ in the $r$ and $z$ directions; ISM density (arbitrary units) and velocity (in units of $c$); pulsar Mach number $M$.}
\label{Tab:Runs}
\end{table}
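The tabulated Mach numbers can be reproduced from the listed densities and velocities once a value of $p_\mathrm{ISM}$ is fixed. The sketch below assumes $p_\mathrm{ISM}=10^{-3}$ in code units; this value is not quoted in the text and is inferred here, as it reproduces all tabulated $M$ to the displayed precision.

```python
import math

GAMMA = 4.0 / 3.0
P_ISM = 1.0e-3   # ISM pressure in code units: an ASSUMED value, chosen
                 # because it reproduces every tabulated Mach number

runs = {          # run: (rho_ISM, v_ISM / c, tabulated M)
    "S1":  (1.00, 0.015, 0.4), "S1a": (0.25, 0.03, 0.4),
    "S2":  (1.00, 0.03, 0.8),  "S2a": (0.25, 0.06, 0.8),
    "S3":  (1.00, 0.07, 1.9),  "S4":  (1.00, 0.1, 2.7),
    "S5":  (1.00, 0.14, 3.8),  "S6":  (1.00, 0.2, 5.5),
    "S6a": (4.00, 0.1, 5.5),   "S6b": (2.04, 0.14, 5.5),
}

def mach(rho, v, p=P_ISM):
    """M = sqrt(rho v^2 / (Gamma p)), as defined in the text."""
    return math.sqrt(rho * v**2 / (GAMMA * p))
```

This also illustrates the point made in the text: runs with different $(\rho_\mathrm{ISM}, v_\mathrm{ISM})$ but the same $M$ (e.g.\ S$_6$, S$_{6a}$, S$_{6b}$) are dynamically equivalent.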
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{bow_sketch2.pdf}
\caption{Sketch of the \textit{head-and-shoulder} structure of a BSPWN.
The simulated region is highlighted with the yellow-dotted rectangle.}
\label{fig:sketch}
\end{figure}
\subsection{Mass loading}
\label{SS:mload}
Given that mass loading is supposed to act only on the shocked pulsar wind material (as discussed in \citealt{Morlino:2015}), we also evolve a numerical tracer $\chi$ that allows us to distinguish it from the ISM.
This ensures that we can identify the bow shock tail region even when it starts to broaden.
The tracer has value 1 in the BSPWN tail and 0 for the ISM.
The additional mass is loaded uniformly along the spatial grid, wherever the condition $\chi=1$ is satisfied.
Given that the typical pulsar kick velocities in the ISM are much smaller than the typical flow speed in the tail,
we simplify the treatment of mass loading by assuming that this mass is added with zero momentum. This approximation is valid as long as the loaded tail does not slow down to speeds comparable to $v_{\rm ISM}$.
The effect of the mass loading can then be simulated with a simple modification of the equation for mass conservation (PLUTO evolves the reduced energy density that does not include the rest mass energy density) according to
\begin{equation}\label{eq:rhos}
\frac{\partial (\gamma \rho)}{\partial t} + \vec{\nabla} \cdot (\gamma \rho \vec{v})= \chi \dot{\rho}\,,
\end{equation}
where $\dot{\rho}$ is the rate of mass loading that depends on the ISM density of neutrals and the UV ionizing flux from the PWN \citep{Morlino:2015}. In the present work we adopt a value of $\dot{\rho}$ such that the distance $\lambda_{\rm rel}$ over which the inertia in the BSPWN tail doubles with respect to the equilibrium value is
\begin{equation}\label{eq:lambda}
\lambda_\mathrm{rel} = \frac{4p_\mathrm{ISM} v_0}{\dot{\rho}} = 8 d_0\,.
\end{equation}
This value has been chosen in order to optimize the simulation runs in terms of speed and size of the computational domain. Notice also that $\lambda_{\rm rel}$ is assumed to be constant in space and time while, in principle, it can change depending on the amount of neutral atoms at each spatial position; this is equivalent to requiring that only a small fraction of the ISM neutrals are ionized.
\section{Results and Discussion}
\label{sec:results}
The mass loading has a twofold effect on the tail dynamics: the density in the tail increases according to Eq.~(\ref{eq:rhos}) and the bulk velocity decreases, since the total momentum is conserved. As a consequence the tail pressure rises, and this leads to an expansion of the tail cross section, until a new steady state is reached. All the results are shown when stationarity is reached, namely when no significant evolution is observed anymore.
In Table~\ref{Tab:Runs}, we list all the runs. In Fig.~\ref{fig:STRM} we show the morphology of the tail (density and streamlines) for the representative run S$_3$. In all supersonic runs, the expansion of the tail is accompanied by the generation of an oblique shock in the ISM, also shown in the same figure. Such oblique shocks are indeed predicted whenever the contact discontinuity makes an angle with the flow that is larger than the Mach cone \citep{Morlino:2015}.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{rhoSTREAM.pdf}
\caption{Logarithmic map of the density for run S$_3$, chosen as representative, with velocity streamlines superimposed.
The effect of the radial component of the velocity on the streamlines is clearly visible. The light-red line identifies the presence of the forward shock in the ISM.}
\label{fig:STRM}
\end{figure}
In Fig.~\ref{fig:MvsT}, we show how the density, taken at a fixed position in the tail, changes in time. Starting from the initial value of $\rho_{\rm tail}$, the tail density increases until it reaches the asymptotic value of $\sim \rho_\mathrm{ISM}/4$ (this value depends on the position). We see that steady state is already reached at $t \simeq 5000\, d_0/c$, when the system approaches a stationary configuration in the entire domain. Afterward, the density remains almost constant and the tail preserves its morphology. This is a good estimate of the time it takes to reach steady state in all the cases we ran.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{plotMvsFINAL.pdf}
\caption{Variation of the mass density with time for runs S$_3$, S$_4$ and S$_6$, chosen as illustrative cases. The value is computed at $r=0.4$ as an average over a few neighboring cells around $z=60$. Starting from the initial density of the shocked wind ($\rho_0=10^{-4}\rho_\mathrm{ISM}$) the effect of the mass loading is clearly visible. As time passes, the density increases up to a saturation value that is a fraction of the ambient medium density $\rho_\mathrm{ISM}$.}
\label{fig:MvsT}
\end{figure}
In the top panel of Fig.~\ref{fig:tailsCFR} the position of the contact discontinuity, once the steady state is reached, is shown for all the configurations listed in Table~\ref{Tab:Runs}. For each simulation the contact discontinuity is identified as a contour line of the inertia ($\rho v^2$). Different runs are drawn as lines of different colors and labelled as S$_i$, for an easier identification against Table~\ref{Tab:Runs}.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{CFR_bowsG2.pdf}
\caption{Top panel: density contours for all the considered configurations S$_i$ listed in Table~\ref{Tab:Runs}.
The curve A is the analytical profile obtained from \citet{Morlino:2015}, to be compared with the S$_0$ run.
The $z$ and $r$ axes are plotted in units of the stagnation point distance $d_0$. Solid lines indicate different runs and thus different values of $v_\mathrm{ISM}$. As expected, the tail appears broader when the velocity of the ambient medium is smaller, namely when the pulsar velocity is subsonic (S$_0$ being the limit $M=0$). On the contrary, as the Mach number grows beyond unity (run S$_2$ has $M\simeq1.9$) and the pulsar enters the supersonic regime, the tail becomes narrower.
Bottom panel: the same density profiles for the analytic model and the S$_0$ run superimposed on a colored map of $v_r/v_z$. The extra broadening of S$_0$ with respect to A can be seen as the effect of the radial component of the velocity, and its related momentum, which is not present in the analytical model.}
\label{fig:tailsCFR}
\end{figure}
As a first step we begin by comparing our results with the analytical model by \citet{Morlino:2015}.
Notice that in principle the velocity of the loaded material is always equal to the ISM velocity, and this is what is assumed in the cited work, while in our simulations the extra mass is loaded with zero velocity. However, it is easy to show that, in the limit of zero loading speed, the model by \citet{Morlino:2015} admits a simple analytical solution, namely
\begin{flalign}
v(z) &= v_0 \operatorname{e}^{-z/\lambda_\mathrm{rel}}\,, && \label{eq:an_gio_v} \\
\rho_p(z) &= \left(\frac{4 p_\mathrm{ISM}}{c^2}\right)
\left( \operatorname{e}^{z/\lambda_\mathrm{rel}}-1\right)\,, && \label{eq:an_gio_rho} \\
R(z) &= R_0 \operatorname{e}^{z/(2\lambda_\mathrm{rel})} \,, && \label{eq:an_gio_R}
\end{flalign}
where $R_0=d_0$ is the position of the contact discontinuity at the initial time. This solution agrees with the more general case as long as the flow speed in the tail remains higher than the ISM speed. In our simulations we find that this condition holds in the entire domain. The function $R(z)$ is shown in the top panel of Fig.~\ref{fig:tailsCFR}, to
be compared with our numerical result for $v_\mathrm{ISM}=0$.
This should be taken as a limit for small pulsar speeds, given that for $v_\mathrm{ISM}=0$ no tail is expected in a realistic situation. Moreover this condition corresponds to the assumption, adopted in the analytic model by \cite{Morlino:2015}, that the ISM ram pressure is negligible at the interface between the pulsar wind material and the ISM, allowing us a direct comparison and check of those results.
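As an independent consistency check (ours, not part of the original derivation), one can verify symbolically that the zero-speed solution above satisfies steady-state mass conservation with loading, $\mathrm{d}(\rho v R^2)/\mathrm{d}z = \dot{\rho}\,R^2$, in units with $c=1$ and neglecting Lorentz-factor corrections; a minimal sketch:

```python
import sympy as sp

z, lam, p, v0, R0 = sp.symbols('z lambda_rel p_ISM v_0 R_0', positive=True)

# Zero-loading-speed tail profiles (Eqs. for v, rho_p, R), with c = 1
v = v0 * sp.exp(-z / lam)
rho = 4 * p * (sp.exp(z / lam) - 1)
R = R0 * sp.exp(z / (2 * lam))

# Loading rate implied by lambda_rel = 4 p_ISM v_0 / rho_dot
rho_dot = 4 * p * v0 / lam

# Steady-state mass conservation with loading: d/dz (rho v R^2) = rho_dot R^2
residual = sp.diff(rho * v * R**2, z) - rho_dot * R**2
assert sp.simplify(residual) == 0
```

The residual vanishes identically, confirming that the exponential profiles are mutually consistent with the loading rate implied by Eq.~(\ref{eq:lambda}).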
One can clearly see that the broadening of the two curves is very similar, but the analytic model starts to broaden at a slightly larger distance from the injection point. This difference is likely due to the 1D assumption of the analytical model, which neglects the transverse component of the velocity. From the bottom panel of Fig.~\ref{fig:tailsCFR} one can clearly see that the ratio between the transverse (radial) and aligned ($z$) components of the velocity, $v_r/v_z$, reaches values of the order of 0.5. Associated with this transverse velocity is a transverse ram pressure comparable to the bounding ISM pressure. This term, neglected in the work by \citet{Morlino:2015}, exerts a pressure that facilitates the opening of the system.
If we consider the behavior of the analytical solution $A$ and the numerical one S$_0$ in Fig.~\ref{fig:tailsCFR}, with reference to the ratio $v_r/v_z$, we can see that the analytic curve $A$ lies at the boundary of the region in which the $v_r$ component becomes comparable to $v_z$.
In Fig.~\ref{fig:cfrAS0} the analytical profiles for $v_z$ and the density, as given by Eqs.~(\ref{eq:an_gio_v})-(\ref{eq:an_gio_rho}), are compared with those from case S$_0$, taken along the axis of the tail.
As can be seen in Fig.~\ref{fig:STRM} the fluid quantities, like pressure and density, stay almost uniform across the tail at fixed $z$, with the obvious exception of the region in the vicinity of the contact discontinuity.
One can clearly see that the agreement with the analytical predictions is quite good, except close to the outer boundary (probably because of boundary effects) and close to the injection region, where the conditions in the tail change rapidly as a consequence of mass loading.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{plot_D_Vz.pdf}
\caption{Comparison between the analytical model by \citet{Morlino:2015} (black dot-dashed lines) and simulation S$_0$ (solid-indigo line). In the top panel the $z$ component of the velocity, in $c$ units, is given as a function of $z$, in $d_0$ units. In the bottom panel the logarithmic plot of the density (in $\rho_\mathrm{ISM}$ units) is shown.}
\label{fig:cfrAS0}
\end{figure}
In general, however, pulsars move through the ISM at supersonic speeds.
This means that the ISM ram pressure cannot be taken as negligible with respect to the thermal pressure in confining the tail. In Fig.~\ref{fig:tailsCFR}, we show the results of various runs with increasing Mach numbers, up to the value $M \sim 6$, including a case
close to the transonic regime (S$_{2a}$). As can be seen, the shape of the tail (the opening of the CD) is just a function of the Mach number and not of the ISM density and velocity separately. This ensures that our runs, with values of the ISM velocity much in excess of realistic numbers, produce correct outcomes, and that it is possible to speed up the computation without compromising the results. This moreover confirms that not only the shape of the head of the bow shock \citep{Wilkin:1996}, but also that of the tail, is just a function of the Mach number.
We cannot however rule out that the speed difference between the tail and the ISM could play a key role in the development of shear instabilities, which are only marginally observed in our runs, as evidenced by the oscillating pattern of the CD close to the outer boundary.
Looking at the morphology of the tail in the different cases we can see that, as the Mach number increases, the tail becomes less and less broad. In particular, going from $M=0$ to $M=1$ we see a change in the opening angle of about a factor of 2, while raising the Mach number further to 6 does not provide a proportionally larger collimation.
It appears that the ISM ram pressure becomes less and less effective at further collimating the tail for high $M$.
Subsonic cases are also more affected by turbulence at the contact discontinuity, that tends to destroy the tail at large distances from the bow shock head. For this reason we selected our simulation boxes for the subsonic cases in order to keep the tail coherent. The same effect is indeed much less evident for the supersonic cases, in which the tail remains coherent up to larger distances from the origin.
\section{Conclusions}
\label{sec:conclusion}
In this paper we simulated, for the first time, the tail of bow shock pulsar wind nebulae in the presence of mass loading due to neutrals in the external medium. This is bound to happen whenever a neutron star moves through a partially neutral medium, where the interaction of the neutral component of the ISM with the relativistic pulsar wind plasma results in a mass loading of the latter, modifying its dynamical evolution. This effect can be important to explain the large variety of shapes observed in BSPWNe, especially concerning the {\it head-and-shoulder} shape observed in some of them.
The effect of mass loading is studied here by means of 2D relativistic HD simulations using the PLUTO code. A dedicated module that handles the mass loading in the relativistic regime has been developed and added to the original code. We explored several configurations for pulsars moving both subsonically and supersonically through the ISM, and we assumed that the new mass is loaded with zero initial velocity. In all cases we found that when the inertia of the loaded mass starts to dominate over the inertia of the relativistic wind, the wind slows down and undergoes a sideways expansion, confirming previous findings by analytical models. However, we have shown that the ram pressure of the ISM, neglected in analytical models, can strongly modify the expansion properties, and this is particularly relevant in the highly supersonic case that characterizes known pulsars. Interestingly, the expansion of the tail is only a function of the pulsar Mach number with respect to the ISM and decreases with increasing Mach number, as a consequence of the increasing ram pressure exerted by the ISM on the contact discontinuity between the wind and the shocked ISM. For the supersonic cases we also observe the development of oblique shocks in the ISM due to the tail expansion. It might be feasible to add this effect to the analytical model, and we plan to do so in the future.
Our results however show that it is unlikely that a large radial expansion of the tail (exceeding a factor of a few) could be achieved for realistic pulsar--ISM conditions, given that pulsars generally move with high Mach number through the ISM.
On the other hand, our simulations extend up to a distance where the velocity of the wind is $\sim 0.02 c$, a value still much larger than the typical pulsar speed and such that the shear between the pulsar wind and the ISM is still strongly supersonic. In such conditions instabilities at the contact discontinuity do not grow efficiently \citep{Bodo-Mignone+:2004}. It is then possible that, as the flow keeps slowing down, reaching a value of the order of $v_{\rm ISM}$ so that the relative shear becomes subsonic, Kelvin-Helmholtz instabilities start growing efficiently, disrupting the laminar dynamics in the tail and leading to the formation of large bubbles.
In the present work we have neglected the role of the magnetic field, which, depending on its initial configuration, could become dynamically important when the plasma is compressed, as we observe in the tail. If the flow in the tail is assumed to be laminar at injection and to remain so, then 2D simulations could still be used to investigate the magnetic field dynamics, even if the parameter space increases substantially, given that the strength and geometry of the field enter as new parameters. However, it is well known that magnetic fields tend to make the flow more unstable \citep{Mizuno:2011,Mignone:2013}, and in this case the dynamics can only be handled with full 3D simulations.
\section*{Acknowledgements}
We acknowledge the author of the PLUTO code \citep{Mignone:2007} that has been used in this work for HD simulations. The authors also acknowledge support from the PRIN-MIUR project prot. 2015L5EE2Y "Multi-scale simulations of high-energy astrophysical plasmas".
\footnotesize{
\bibliographystyle{mn2e}
\section*{Highlights}
\begin{itemize}
\item New algorithm for placing hydraulic control structures in drainage
networks.
\item Attenuates downstream hydrograph by de-synchronizing tributary flows.
\item Algorithm is fast and requires only digital elevation data.
\end{itemize}
\section{Introduction}
\label{sec:intro}
In the wake of rapid urbanization, aging infrastructure and a changing climate,
effective stormwater management poses a major challenge for cities worldwide
\cite{Kerkez_2016}. Flash floods are one of the largest causes of natural
disaster deaths in the developed world \cite{Doocy_2013}, and often occur when
stormwater systems fail to convey runoff from urban areas \cite{Wright_2017}. At
the same time, many cities suffer from impaired water quality due to inadequate
stormwater control \cite{walsh_2005}. Flashy flows erode streambeds, release
sediment-bound pollutants, and damage aquatic habitats \cite{walsh_2005,
booth_1997, finkenbine_2000, wang_2001}, while untreated runoff may trigger
fish kills and toxic algal blooms \cite{sahagun_2013, wines_2014}. Engineers
have historically responded to these problems by expanding and upsizing
stormwater control infrastructure \cite{rosenberg_2010}. However, larger
infrastructure frequently brings adverse side-effects, such as dam-induced
disruption of riparian ecosystems \cite{dams_and_development_2000}, and erosive
discharges due to overdesigned conveyance infrastructure \cite{Kerkez_2016}. As
a result, recent work has called for the replacement of traditional peak
attenuation infrastructure with targeted solutions that better reduce
environmental impacts \cite{arora_2015, Hawley_2016}.
As the drawbacks of oversized stormwater infrastructure become more apparent,
many cities are turning towards decentralized stormwater solutions to regulate
and treat urban runoff while reducing adverse impacts. Green infrastructure, for
instance, uses low-impact rain gardens, bioswales, and green roofs to condition
flashy flows and remove contaminants \cite{coffman_1999, strecker_2000,
askarizadeh_2015}. \textit{Smart} stormwater systems take this idea further by
retrofitting static infrastructure with dynamically controlled valves, gates and
pumps \cite{Kerkez_2016, Bartos_2018, Mullapudi_2017, Mullapudi_2018}. By
actuating small, distributed storage basins and conveyance structures in
real-time, \textit{smart} stormwater systems can halt combined sewer overflows
\cite{Montestruque_2015}, mitigate flooding \cite{Kerkez_2016}, and improve
water quality at a fraction of the cost of new construction \cite{Kerkez_2016,
Bartos_2018}. While decentralized stormwater management tools show promise
towards mitigating urban water problems, it is currently unclear how these
systems can be designed to achieve maximal benefits at the watershed scale.
Indeed, some research suggests that when stormwater control facilities are not
designed in a global context, local best management practices can lead to
adverse system-scale outcomes---in some cases inducing downstream flows that are
more intense than those produced under unregulated conditions
\cite{Emerson_2005, petrucci_2013}.
Thus, as cities begin to experiment with decentralized stormwater control, the
question of \textit{where} to place control structures becomes crucial. While
many studies have investigated the ways in which active control can realize
system-scale benefits (using techniques like feedback control \cite{wong_2018},
market-based control \cite{Montestruque_2015}, or model-predictive control,
\cite{gelormino_1994, mollerup_2016}), the location of control structures within
the drainage network may serve an equally important function. Hydrologists have
long recognized the role that drainage network topology plays in shaping
hydrologic response \cite{kirkby_1976, gupta_1986, gupta_1988, mesa_1986,
marani_1991, troutman_1985, Mantilla_2011, Tejedor_2015a, Tejedor_2015b}. It
follows that strategic placement of hydraulic control structures can shape the
hydrograph to fulfill operational objectives, such as maximally flattening flood
waves and regulating erosion downstream. To date, however, little research has
been done to assess the problem of optimal placement of hydraulic control
structures in drainage networks:
\begin{itemize}
\item Recent studies have investigated optimal placement of green infrastructure
upgrades like green roofs, rain tanks and bioswales \cite{Zellner_2016,
schubert_2017, yao_2015, zhang_2015, norton_2015, meerow_2017, schilling_2008}. However,
these studies generally focus on quantifying the potential benefits of green
infrastructure projects through representative case studies
\cite{Zellner_2016, schubert_2017, yao_2015, zhang_2015}, and do not intend to
present a generalized framework for placement of stormwater control
structures. As a result, many of these studies focus on optimizing multiple
objectives (such as urban heat island mitigation \cite{norton_2015}, air
quality \cite{meerow_2017}, or quality of life considerations
\cite{schilling_2008}), or use complex socio-physical models and optimization
frameworks \cite{Zellner_2016}, making it difficult to draw general
conclusions about controller placement in drainage networks.
\item Studies of pressurized water distribution networks have investigated the
related problems of valve placement \cite{cattafi_2011, creaco_2010}, sensor
placement \cite{Perelman_2013}, subnetwork vulnerability assessment
\cite{Yazdani_2011}, and network sectorization \cite{Tzatchkov_2008,
Hajebi_2015}. While these studies provide valuable insights into the ways
that complex network theory can inform drinking water infrastructure design,
water distribution networks are pressure-driven and cyclic, and are thus
governed by different dynamics than natural drainage networks, which are
mainly gravity-driven and dendritic.
\item Inspiration for the controller placement problem can be drawn from recent
theoretical work into the controllability of complex networks. These studies
show that the control properties of complex systems ranging from power grids
to gene expression pathways are inextricably linked with topological
properties of an underlying network representation \cite{liu_2016}. The
location of driver nodes needed for complete controllability of a linear
system, for instance, can be determined from the maximum matching of a graph
associated with that system's state space representation \cite{Liu_2011}. For
systems in which complete control of the network is infeasible, the relative
performance of driver node configurations can be measured by detecting
controllable substructures \cite{Ruths_2014}, or by leveraging the concept of
``control energy'' from classical control theory \cite{Summers_2014, yan_2012,
yan_2015, shirin_2017}. While these studies bring a theoretical foundation
to the problem of controller placement, they generally assume linear system
dynamics, and may thus not be well-suited for drainage networks, which are
driven by nonlinear runoff formation and channel routing processes.
\end{itemize}
Despite the critical need for system-scale stormwater control, there is to our
knowledge no robust theoretical framework for determining optimal placement of
hydraulic control structures within drainage networks. To address this knowledge
gap, we formulate a new graph-theoretic algorithm that uses the network structure
of watersheds to determine the controller locations that will maximally
``de-synchronize'' tributary flows. By flattening the discharge hydrograph, our
algorithm provides a powerful method to mitigate flash floods and curtail water
quality impairments in urban watersheds. Our approach is distinguished by the
fact that it is theoretically-motivated, and links the control of stormwater
systems with the underlying structure of the drainage network. The result is a
fast, generalized algorithm that requires only digital elevation data for the
watershed of interest. More broadly, through our graph-theoretic framework we
show that network structure plays a dominant role in the control of drainage
basins, and demonstrate how the study of watersheds as complex networks can
inform more effective stormwater infrastructure design.
\section{Algorithm description}
\label{sec:meth}
Flashy flows occur when large volumes of runoff arrive synchronously at a given
location in the drainage network. If hydraulic control structures are placed at
strategic locations, flood waves can be mitigated by ``de-synchronizing''
tributary flows before they arrive at a common junction. With this in mind, we
introduce a controller placement algorithm that minimizes flashy flows by
removing regions of the drainage network that contribute disproportionately to
synchronous flows at the outlet. In our approach, the watershed is first
transformed into a directed graph consisting of unit subcatchments (vertices)
connected by flow paths (edges). Next, critical regions are identified by
computing the catchment's \textit{width function} (an approximation of the
distribution of travel times to the outlet), and then weighting each vertex in
the network in proportion to the number of vertices that share the same travel
time to the outlet. The weights are used to compute a \textit{weighted
accumulation} score for each vertex, which sums the weights of every possible
subcatchment in the watershed. The graph is then partitioned recursively based
on this weighted accumulation score, with the most downstream vertex of each
partition representing a controller location.
\begin{figure*}[htb!] \centering
\includegraphics[width=\textwidth]{img/elev_and_graph_green.jpg}
\caption[Watershed elevation and accumulation]{Left panel: Digital elevation
model (DEM) of a watershed with river network highlighted. Right panel (from
left to right, top to bottom): (i) DEM detail (colors not to scale); (ii)
flow directions; (iii) delineated subcatchment graph; (iv) adjacency matrix
representation of graph.}
\label{fig:elev_and_river}
\end{figure*}
\subsection{Definitions}
\textbf{Graph representation of a watershed:} Watersheds can be represented as
directed graphs, in which subcatchments (vertices or cells) are connected by
elevation-dependent flow paths (edges). The directed graph can be formulated
mathematically as an adjacency matrix, $A$, where for each element $a_{i,j}$,
$a_{i,j} \neq 0$ if there exists a directed edge connecting vertex $v_i$ to
$v_j$, and conversely, $a_{i,j} = 0$ if there does not exist a directed edge
connecting vertex $v_i$ to $v_j$. Nonzero edge weights can be specified to
represent travel times, distances, or probabilities of transition between
connected vertices. Flow paths between adjacent cells are established using a
routing scheme, typically based on directions of steepest descent (see Figure
\ref{fig:elev_and_river}).
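As a minimal illustration (a toy network of our own, not taken from the study), the adjacency matrix of a four-vertex drainage graph, in which vertices 0 and 1 drain into vertex 2 and vertex 2 drains into the outlet 3, can be assembled as:

```python
import numpy as np

# Directed flow paths (i, j): a_{ij} != 0 when v_i drains to v_j
edges = [(0, 2), (1, 2), (2, 3)]

A = np.zeros((4, 4))
for i, j in edges:
    A[i, j] = 1  # unit weights; travel times or probabilities also work
```

Each row of $A$ has at most one nonzero entry here, mirroring the outdegree-1 property of dendritic drainage graphs.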
In this study, we determine the connectivity of the drainage network using a
\textit{D8 routing} scheme \cite{o_callaghan_1984}. In this scheme, elevation
cells are treated as vertices in a 2-dimensional lattice (meaning that each
vertex $v_i$ is surrounded by eight neighbors $\mathcal{N}_i$). A directed link
is established from vertex $v_i$ to a neighboring vertex $v_j$ if the slope
between $v_i$ and $v_j$ is steeper than the slope between $v_i$ and all of its
other neighbors $\mathcal{N}_i \setminus v_j$ (where $v_j$ has a lower elevation
than $v_i$). The \textit{D8 routing} scheme produces a directed acyclic graph
where the indegree of each vertex is between 0 and 8, and the outdegree of each
vertex is 1, except for the watershed outlet, whose outdegree is zero. It should be noted
that other schemes exist for determining drainage network structure, such as the
\textit{D-infinity} routing algorithm, which better resolves drainage directions
on hillslopes \cite{tarboton_1997}. However, because the routing scheme is not
essential to the construction of the algorithm, we focus on the simpler
\textit{D8} routing scheme for this study. Similarly, to simplify the
construction of the algorithm, we will assume that the vertices of the watershed
are defined on a regular grid, such that the area of each unit subcatchment is
equal. Figure \ref{fig:elev_and_river} shows the result of delineating a river
network from a digital elevation model (left), along with an illustration of the
underlying graph structure and adjacency matrix representation (right).
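A minimal sketch of \textit{D8} receiver assignment on a small DEM array (our own illustrative implementation, not the one used in the study): each cell points to its steepest-descent neighbor, with diagonal neighbors penalized by their longer grid distance when evaluating the slope.

```python
import numpy as np

def d8_flow_directions(dem):
    """Assign each cell a receiver via steepest descent among its eight
    neighbors (D8). Returns flat receiver indices, or -1 where the cell
    has no lower neighbor (pits and the outlet)."""
    nrows, ncols = dem.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    dists = [np.sqrt(2), 1.0, np.sqrt(2), 1.0,
             1.0, np.sqrt(2), 1.0, np.sqrt(2)]
    receiver = -np.ones(dem.size, dtype=int)
    for i in range(nrows):
        for j in range(ncols):
            best = 0.0
            for (di, dj), d in zip(offsets, dists):
                ni, nj = i + di, j + dj
                if 0 <= ni < nrows and 0 <= nj < ncols:
                    slope = (dem[i, j] - dem[ni, nj]) / d
                    if slope > best:  # strictly downhill, steepest wins
                        best = slope
                        receiver[i * ncols + j] = ni * ncols + nj
    return receiver
```

On a plane tilted toward its lowest corner, every cell drains toward that corner, and the corner cell itself (the outlet) has no receiver.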
\textbf{Controller}: In the context of this study, a controller represents any
structure or practice that can regulate flows from an upstream channel segment
to a downstream one. Examples include retention basins, dams, weirs, gates and
other hydraulic control structures. These structures may be either passively or
actively controlled. For the validation assessment presented later in this
paper, we will examine the controller placement problem in the context of
\textit{volume capture}, meaning that controllers are passive, and that they are
large enough to completely remove flows from their upstream contributing areas.
However, the algorithm itself does not require the controller to meet these
particular conditions.
Mathematically, we can think of a controller as a cut in the graph that
removes one of the edges. This cut halts or inhibits flows across the affected
edge. Because the watershed has a dendritic structure, any cut in the network
will split the network into two sub-trees: (i) the delineated region upstream of
the cut, and (ii) all the vertices that are not part of the delineated region.
Placing controllers is thus equivalent to removing branches (subcatchments) from
a tree (the parent watershed).
\textbf{Delineation}: Delineation returns the set of vertices upstream of a
target vertex. In other words, this operation returns the contributing area of
vertex $v_i$. Expressed in terms of the adjacency matrix:
\begin{equation}
\begin{split} V_{d}(A, v_i) = \{ v_j \in V | (A^n)_{ij} \not = 0 \text{ for
some } n \leq D \}
\end{split}
\end{equation}
Where $A^n$ is the adjacency matrix $A$ raised to the $n^{th}$ power, $i$ is the
row index, $j$ is the column index, $V$ is the vertex set of $A$, and $D$ is
the graph diameter. Note that $(A^n)_{ij}$ is nonzero only if vertex $v_j$ is
located within an n-hop neighborhood of vertex $v_i$.
\textbf{Pruning}: Pruning is the complement of delineation. This operation
returns the vertex set consisting of all vertices that are not upstream of the
current vertex.
\begin{equation} V_p(A, v_i) = V \setminus V_{d}(A, v_i)
\end{equation}
Subgraphs induced by the delineated and pruned vertex sets are defined as follows:
\begin{equation}
\begin{split} A_d(A, v_i) = A(G[V_d]) \\ A_p(A, v_i) = A(G[V_p])
\end{split}
\end{equation}
Where $A(G[V])$ represents the adjacency matrix of the subgraph induced by the
vertex set $V$.
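The two operations can be sketched as follows (our own implementation; it assumes the orientation convention $a_{jk} \neq 0$ when $v_j$ drains to $v_k$, and replaces the matrix-power membership test with an equivalent reverse breadth-first search):

```python
import numpy as np

def delineate(A, i):
    """Set of vertices upstream of vertex i, i.e. all j that can reach i
    along directed flow paths. Equivalent to testing (A^n)_{ji} != 0 for
    some n up to the graph diameter, but found by reverse BFS."""
    n = A.shape[0]
    upstream, frontier = set(), {i}
    while frontier:
        # predecessors of the current frontier: all j with A[j, k] != 0
        preds = {j for k in frontier for j in range(n) if A[j, k] != 0}
        frontier = preds - upstream
        upstream |= frontier
    return upstream

def prune(A, i):
    """Complement of delineation: every vertex not upstream of i."""
    return set(range(A.shape[0])) - delineate(A, i)
```

Because the drainage graph is acyclic, the search terminates once the headwater cells are reached.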
\clearpage
\textbf{Width function}: The width function describes the distribution of travel
times from each upstream vertex to some downstream vertex, $v_i$\footnote{The
width function $H(x)$ was originally defined by Shreve (1969) to yield the
number of links in the network at a topological distance $x$ from the outlet
\cite{shreve_1969}. Because travel times may vary between hillslope and
channel links, we present a generalized formulation of the width function
here.} \cite{rodriguez_2001}. In general terms, the width function can be
expressed as:
\begin{equation}
\label{eq:prob1}
H(t, v_i) = \sum_{\gamma \in \Gamma_i} I(\gamma, t)
\end{equation}
Along with an indicator function, $I(\gamma, t)$:
\begin{equation}
\label{eq:indicator}
I(\gamma, t) =
\begin{cases}
1 & T(\gamma) = t \\
0 & \text{otherwise}
\end{cases}
\end{equation}
Where $\Gamma_i$ is the set of all directed paths to the target vertex $v_i$,
and $T(\gamma)$ is the travel time along path $\gamma$. If the travel times
between vertices are constant, the width function of the graph at vertex $v_i$
can be described as a linear function of the adjacency matrix:\footnote{While
mathematically concise, this equation is computationally inefficient. See
Section S1 in the Supplementary Information for the efficient implementation
used in our analysis.}
\begin{equation}
H(t, v_i) = (A^t \mathbf{1}) (i)
\end{equation}
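For constant travel times this can be evaluated without forming $A^t$ explicitly, advancing one hop per step (a sketch of our own; it assumes edges oriented downstream, $a_{jk} \neq 0$ when $v_j$ drains to $v_k$, so upstream counts come from products with $A^{T}$):

```python
import numpy as np

def width_function(A, i, t_max):
    """H(t, v_i) for t = 0..t_max: the number of vertices whose travel
    time to vertex i is exactly t (unit cost per edge). Evaluates the
    matrix-power formula incrementally, one product per time step."""
    x = np.ones(A.shape[0])
    H = []
    for t in range(t_max + 1):
        H.append(int(x[i]))
        x = A.T @ x  # advance one hop further upstream per iteration
    return H
```

For a toy network in which vertices 0 and 1 drain into 2 and 2 drains into the outlet 3, the width function at the outlet is $[1, 1, 2]$: the outlet itself, one vertex at lag 1, and two headwater vertices at lag 2.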
In real-world drainage networks, travel times between grid cells are not
uniform. Crucially, the travel time for channelized cells will be roughly 1-2
orders of magnitude faster than the travel time in hillslope cells
\cite{rodriguez_2001, tak_1990}. Thus, to account for this discrepancy, we
define $\phi$ to represent the ratio of hillslope to channel travel times:
\begin{equation}
\label{eq:phi}
\phi = \frac{t_h}{t_c}
\end{equation}
\begin{figure*}[t] \centering
\includegraphics[width=\textwidth]{img/weights_phi10.png}
\caption[Distance histogram and vertex weights]{Left: width function
(travel-time histogram) of the watershed, assuming that channelized travel
time is ten times faster than on hillslopes ($\phi = 10$). Right: weights
associated with each vertex of the graph. Brighter regions correspond to areas
that contribute to the peaks of the width function.}
\label{fig:hist_and_ratio}
\end{figure*}
Where $t_h$ is the travel time for hillslopes and $t_c$ is the travel time for
channels. Figure \ref{fig:hist_and_ratio} (left) shows the width function for an
example watershed, under the assumption that channel velocity is ten times
faster than hillslope velocity ($\phi = 10$). The width functions for various
values of $\phi$ are shown in Figures S1 and S2 in the Supplementary Information.
Note that when the effects of hydraulic dispersion are ignored, the width
function is equivalent to the geomorphological impulse unit hydrograph (GIUH) of
the basin \cite{rodriguez_2001}. The GIUH represents the response of the basin
to an instantaneous impulse of rainfall distributed uniformly over the
catchment; or equivalently, the probability that a particle injected randomly
within the watershed at time $t=0$ exits the watershed through the outlet at
time $t=t'$.
\textbf{Accumulation}: The accumulation at vertex $v_i$ describes the number of
vertices located upstream of $v_i$ (or alternatively, the upstream area
\cite{moore_1991}). It is equivalent to the cumulative sum of the width function
with respect to time\footnote{See Section S1 in the Supplementary Information
for the efficient implementation of the accumulation algorithm.}:
\begin{equation} C(v_i) = (\sum_{t=0}^\infty A^t \mathbf{1}) (i)
\end{equation}
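A naive sketch of this sum (truncated, since every path in a finite acyclic graph is shorter than the number of vertices; see the Supplementary Information for the efficient version) on a hypothetical toy tree:

```python
import numpy as np

# Toy drainage tree: A[i, j] = 1 means vertex j drains directly into vertex i;
# vertex 0 is the outlet, 1 and 2 drain into 0, and 3 and 4 drain into 1.
A = np.zeros((5, 5), dtype=int)
for downstream, upstream in [(0, 1), (0, 2), (1, 3), (1, 4)]:
    A[downstream, upstream] = 1

def accumulation(A):
    """C(v_i) = (sum_t A^t 1)(i): each vertex plus everything upstream of it."""
    n = A.shape[0]
    total = np.zeros(n, dtype=int)
    v = np.ones(n, dtype=int)
    for _ in range(n):  # every path in a finite acyclic graph has length < n
        total += v
        v = A @ v
    return total

print(accumulation(A))  # [5 3 1 1 1]
```

The outlet accumulates all five vertices, vertex 1 accumulates itself plus its two upstream neighbors, and the leaves accumulate only themselves.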
Figure \ref{fig:acc_and_wacc} (left) shows the accumulation at each vertex for
an example catchment. Because upstream area is correlated with mean discharge
\cite{rodriguez_2001}, accumulation is frequently used to determine
locations of channels within a drainage network \cite{moore_1991}.
\textbf{Weighting function}: To identify the vertices that contribute most to
synchronous flows at the outlet, we propose a weighting function that weights
each vertex by its rank in the travel time distribution. Let $\tau_{ij}$
represent the known travel time from a starting vertex $v_j$ to the outlet
vertex $v_i$. Then the weight associated with vertex $v_j$ can be expressed in
terms of a weighting function $W(v_i,v_j)$:
\begin{equation} w_{j} = W(v_i, v_j) = \frac{H(\tau_{ij}, v_i)}{\underset{t}{\max}\, H(t, v_i)}
\end{equation}
where $H(\tau_{ij}, v_i)$ is the width function of the outlet vertex $v_i$
evaluated at the travel time $\tau_{ij}$ from $v_j$ to $v_i$, and the
normalizing factor $\underset{t}{\max}\, H(t, v_i)$ is the maximum value of the
width function over all time steps $t$. In this formulation, vertices are
weighted by the rank
of the associated travel time in the width function. Vertices that
contribute to the maximum value of the width function (the mode of the travel
time distribution) will receive the highest possible weight (unity), while
vertices that contribute to the smallest values of the width function will
receive small weights. In other words, vertices will be weighted in proportion
to the number of vertices that share the same travel time to the outlet. Figure
\ref{fig:hist_and_ratio} shows the weights corresponding to each bin of the
travel time distribution (left), along with the weights applied to each vertex
(right). Weights for varying values of $\phi$ are shown in Figures S1 and S2 in
the Supplementary Information.
\begin{figure*}[t] \centering
\includegraphics[width=\textwidth]{img/acc_and_wacc.png}
\caption[Accumulation and weighted accumulation]{Left: accumulation (number of
cells upstream of every cell). Right: ratio of weighted accumulation to
accumulation ($C_w / C$).}
\label{fig:acc_and_wacc}
\end{figure*}
\textbf{Weighted accumulation}: Much like the \textit{accumulation} describes the number
of vertices upstream of each vertex $v_i$, the \textit{weighted accumulation} yields the
sum of the weights upstream of $v_i$. If each vertex $v_j$ is given a weight
$w_j$, the weighted accumulation at vertex $v_i$ can be defined:
\begin{equation} C_w(v_i, \mathbf{w}) = (\sum_{t=0}^\infty A^t \mathbf{w}) (i)
\end{equation}
where $\mathbf{w}$ is a vector of weights, with each weight $w_j$ associated
with a vertex $v_j$ in the graph. When the previously-defined weighting function
is used, the weighted accumulation score measures the extent to which a
subcatchment delineated at vertex $v_i$ contributes to synchronous flows at the
outlet. In other words, if the ratio of weighted accumulation to accumulation is
large for a particular vertex, this means that the subcatchment upstream of that
vertex contributes disproportionately to the peak of the hydrograph. Figure
\ref{fig:acc_and_wacc} (right) shows the ratio of weighted accumulation to
accumulation for the example catchment. The weighted accumulation provides a
natural metric for detecting the cuts in the drainage network that will
maximally remove synchronous flows, and thus forms the basis of the controller
placement algorithm.
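A self-contained sketch of the weighted accumulation on the same kind of toy tree (unit travel times assumed, names illustrative):

```python
import numpy as np

# Toy drainage tree: A[i, j] = 1 means vertex j drains directly into vertex i;
# vertex 0 is the outlet, 1 and 2 drain into 0, and 3 and 4 drain into 1.
A = np.zeros((5, 5), dtype=int)
for downstream, upstream in [(0, 1), (0, 2), (1, 3), (1, 4)]:
    A[downstream, upstream] = 1

# Vertex weights from the width function, assuming unit travel times:
tau = np.array([0, 1, 1, 2, 2])   # travel time from each vertex to the outlet
H = np.bincount(tau)              # width function: [1, 2, 2]
w = H[tau] / H.max()              # [0.5, 1, 1, 1, 1]

def weighted_accumulation(A, w):
    """C_w(v_i) = (sum_t A^t w)(i): total weight at and upstream of v_i."""
    total = np.zeros(A.shape[0])
    v = w.astype(float)
    for _ in range(A.shape[0]):   # paths in a finite acyclic graph terminate
        total += v
        v = A @ v
    return total

Cw = weighted_accumulation(A, w)
print(Cw)  # [4.5 3.  1.  1.  1. ]
```

Dividing `Cw` elementwise by the plain accumulation gives the ratio $C_w / C$ mapped in Figure 3 (right), which highlights subcatchments that contribute disproportionately to the peak.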
\subsection{Controller placement algorithm definition}
The controller placement algorithm is described as follows. Let $A$ represent
the adjacency matrix of a watershed delineated at some vertex $v_i$.
Additionally, let $k$ equal the desired number of controllers, and $c$ equal the
maximum upstream accumulation allowed for each controller. The graph is then
partitioned according to the following scheme:
\begin{enumerate}
\item Compute the width function, $H(t, v_i)$, for the graph described
by adjacency matrix $A$ with an outlet at vertex $v_i$.
\item Compute the accumulation $C(v_j)$ at each vertex $v_j$.
\item Use $H(t, v_i)$ to compute the weighted accumulation $C_w(v_j)$ at each vertex $v_j$.
\item Find the vertex $v_{opt}$, where the accumulation $C(v_{opt})$ is less
than the maximum allowable accumulation and the weighted accumulation
$C_w(v_{opt})$ is maximized:
\begin{equation} v_{opt} \gets \underset{ v_s \in V_s }{\text{argmax}}(C_{w}(v_s))
\end{equation}
where $V_s$ is the set of vertices such that vertex $v_i$ is reachable from
any vertex in $V_s$ and the accumulation $C$ at any vertex in $V_s$ is less
than $c$.
\item Prune the graph at vertex $v_{opt}$: $A \gets A_p(A, v_{opt})$
\item If the cumulative number of partitions is equal to $k$, end the
algorithm. Otherwise, start at (1) with the catchment described by the new $A$
matrix.
\end{enumerate}
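The enumerated scheme can be sketched end-to-end as a greedy loop. The sketch below assumes unit travel times and a tree-structured toy network, and stands in for (rather than reproduces) the delineation and pruning operators $V_d$ and $A_p$; all names are illustrative:

```python
import numpy as np

# Toy network: A0[i, j] = 1 means vertex j drains directly into vertex i.
# Vertex 0 is the outlet.
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (5, 6)]
A0 = np.zeros((7, 7), dtype=int)
for downstream, upstream in edges:
    A0[downstream, upstream] = 1

def travel_times(A, outlet):
    """Unit-step travel time from each vertex to the outlet (-1 if detached)."""
    tau = np.full(A.shape[0], -1)
    tau[outlet] = 0
    stack = [outlet]
    while stack:
        i = stack.pop()
        for j in np.flatnonzero(A[i]):   # j drains into i
            tau[j] = tau[i] + 1
            stack.append(j)
    return tau

def upstream_set(A, v):
    """v together with every vertex that drains (eventually) into v."""
    seen, stack = {v}, [v]
    while stack:
        i = stack.pop()
        for j in np.flatnonzero(A[i]):
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

def place_controllers(A, outlet, k, c):
    """Greedily prune the vertex with maximal weighted accumulation whose
    plain accumulation does not exceed the cap c; repeat k times."""
    A = A.copy()
    chosen = []
    for _ in range(k):
        tau = travel_times(A, outlet)
        H = np.bincount(tau[tau >= 0])           # width function
        w = H / H.max()                          # travel-time-bin weights
        best, best_score = None, -np.inf
        for j in range(A.shape[0]):
            if j == outlet or tau[j] < 0:
                continue
            up = upstream_set(A, j)
            C = len(up)                          # accumulation at j
            Cw = sum(w[tau[m]] for m in up)      # weighted accumulation at j
            if C <= c and Cw > best_score:
                best, best_score = j, Cw
        if best is None:
            break
        chosen.append(best)
        for m in upstream_set(A, best):          # prune the captured subgraph
            A[m, :] = 0
            A[:, m] = 0
    return chosen

print(place_controllers(A0, outlet=0, k=2, c=3))  # [1, 2]
```

On this toy tree the first cut lands at vertex 1, whose subcatchment aligns with the mode of the travel-time histogram; the width function is then recomputed on the pruned graph before the second cut, mirroring the restart at step (1).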
The algorithm is described formally in Algorithm \ref{alg:1}. An open-source
implementation of the algorithm in the \textit{Python} programming language is
also provided \cite{this_repo_2018}, along with the data needed to reproduce our
results. Efficient implementations of the \textit{delineation},
\textit{accumulation}, and \textit{width function} operations are provided via
the \texttt{pysheds} toolkit, which is maintained by the authors
\cite{pysheds_2018}.
\begin{figure*}[t] \centering
\includegraphics[width=\textwidth]{img/partitions_k15_phi10.png}
\caption[Partitions and stacked histogram]{Left: partitioning of the example
watershed using the controller placement algorithm. Right: stacked width
functions for each partition. The brightness expresses the priority of each
partition, with the darker partitions being prioritized over the brighter
ones.}
\label{fig:partitions}
\end{figure*}
Figure \ref{fig:partitions} shows the controller configuration generated by
applying the controller placement algorithm to the example watershed, with
$k=15$ controllers, each with a maximum accumulation of $c = 900$ (i.e. each
controller captures roughly 8\% of the catchment's land area). In the left
panel, partitions are shown in order of decreasing priority from dark to light
(i.e. darker regions are partitioned first by the algorithm). The right panel
shows the stacked width functions for each partition. The sum of the width
functions from each partition reconstitute the original width function for the
catchment. From the stacked width functions, it can be seen that the algorithm
tends to prioritize the pruning of subgraphs that align with the peaks of the
travel time distribution. Note, for instance, how the least-prioritized partitions
gravitate towards the low end of the travel-time distribution, while the
most-prioritized partitions are centered around the mode. Controller placement
schemes corresponding to different numbers of controllers are shown in Figure S3
in the Supplementary Information.
\begin{algorithm}
\begin{tcolorbox}
\KwData{\\
A directed graph described by adjacency matrix $A$\;
A target vertex $v_i$ with index $i$\;
A desired number of partitions $k$\;
The maximum accumulation at each controller, $c$\;}
\KwResult{Generate partitions for a catchment}
\vspace{6pt}
Let $\mathbf{q}$ be a vector representing all vertices in the graph\;
Let $k_c$ equal the current number of partitions\;
Let $\mathbf{\tau}$ represent a vector of travel times from each vertex to vertex $v_i$\;
Let $A$ represent the adjacency matrix of the system\;
Let $A_c$ represent the adjacency matrix for the current iteration\;
\vspace{6pt}
$A_c \gets A$\;
$k_c \gets 0$\;
\While{$k_c < k$}{
$H(t, v_i) \gets (A_c^t \mathbf{1}) (i)$\;
$C \gets (\sum_{t=0}^\infty A_c^t \mathbf{1})$\;
$\mathbf{w} \gets W(v_i, \mathbf{q})$\;
$C_{w} \gets (\sum_{t=0}^\infty A_c^t \mathbf{w})$\;
\eIf{$C(v_i) > 0$}{
$V_c \gets \{ v_m \in V | C(v_m) \leq c \}$\;
$V_s \gets V_d(A_c, v_i) \cap V_c$\;
$v_{opt} \gets \underset{ v_s \in V_s }{\text{argmax}}(C_{w}(v_s))$\;
$A_c \gets A_p(A_c, v_{opt})$\;
$k_c \gets k_c + 1$\;
}{
}
}
\caption{Controller placement algorithm}
\label{alg:1}
\end{tcolorbox}
\end{algorithm}
\section{Algorithm validation}
To evaluate the controller placement algorithm, we simulate the controlled
network using a hydrodynamic model, and compare the performance to a series of
randomized controller placement configurations. Performance is characterized by
the ``flatness'' of the flow profile at the outlet of the watershed, as measured
by both the peak discharge and the variance of the hydrograph (i.e. the extent
to which the flow deviates from the mean flow over the course of the hydrologic
response). To establish a basis for comparison, we simulate a volume capture
scenario \cite{Emerson_2005}, wherein roughly half of the total contributing
area is controlled, and each controller completely captures the discharge from
its respective upstream area.
The validation experiment is designed to test the central premises of the
controller placement algorithm: that synchronous cells can be identified from
the structure of the drainage network, and that maximally capturing these
synchronous cells will lead to a flatter overall hydrologic response. If these
premises are accurate, we expect to see two results. First, the controller
placement algorithm will produce flatter flows than the randomized control
trials. Second, the performance of the algorithm will be maximized when using a
large number of small partitions. Using many small partitions allows the
algorithm to selectively target the highly-weighted cells that contribute
disproportionately to the peak of the hydrograph. Conversely, large partitions
capture many extraneous low-weight cells that do not contribute to the peak of
the hydrograph. In other words, if increasing the number of partitions improves
the performance of the algorithm, it not only confirms that the algorithm works
for our particular experiment, but also justifies the central premises on
which the algorithm is based.
\subsection{Experimental design}
We evaluate controller configurations based on their ability to flatten the
outlet hydrograph of a test watershed when approximately 50\% of the
contributing area is controlled. This test case is chosen because it presents a
practical scenario with real-world constraints, and because it allows for direct
comparison of many different controller placement strategies. For our test case,
we use the Sycamore Creek watershed, a heavily urbanized creekshed located in
the Dallas--Fort Worth Metroplex with a contributing area of roughly 83 km$^2$
(see Figure \ref{fig:elev_and_river}). This site is the subject of a long-term
monitoring study led by the authors \cite{Bartos_2018}, and is chosen for this
analysis because (i) it is known to experience issues with flash flooding, and
(ii) it is an appropriate size for our analysis---being large enough to capture
fine-scale network topology, but not so large that computation time becomes
burdensome.
A model of the stream network is generated from a conditioned digital elevation
model (DEM) by determining flow directions from the elevation gradient and then
assigning channels to cells that fall above an accumulation threshold.
Conditioned DEMs and flow direction grids at a resolution of 3 arcseconds
(approximately 70 by 90 m) are obtained from the USGS HydroSHEDS database
\cite{Lehner_2008}. Grid cells with an accumulation greater than 100 are defined
to be channelized cells, while those with less than 100 accumulation are defined
as hillslope cells. This threshold is based on visual comparison with the stream
network defined in the National Hydrography Dataset (NHD) \cite{nhd_2013}.
Hillslope cells draining into a common channel are aggregated into
subcatchments, with a flow length corresponding to the longest path within each
hillslope, and a slope corresponding to the average slope over all flow paths in
the subcatchment. To avoid additional complications associated with modeling
infiltration, subcatchments and channels are assumed to be completely
impervious. Channel geometries are assigned to each link within the channelized
portion of the drainage network. We assume that each stream segment can be
represented by a ``wide rectangular channel'', which is generally accurate for
natural river reaches in which the stream width is large compared to the stream
depth \cite{mays_2010}. To simulate channel width and depth, we assume a power
law relation between accumulation and channel size based on an empirical
formulation from Moody and Troutman (2002) \cite{Moody_2002}:
\begin{equation}
\begin{split}
\omega = 7.2 \cdot Q^{0.50 \pm 0.02} \\
h = 0.27 \cdot Q^{0.30 \pm 0.01}
\end{split}
\end{equation}
where $\omega$ is stream width, $h$ is stream depth, and $Q$ is the mean river
discharge. Knowing the width and depth of the most downstream reach, and
assuming that the accumulation at a vertex is proportional to the mean flow, we
generate channel geometries using the mean parameter values from the above
relations.
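As a small worked example, the mean-parameter form of these relations (assuming, as in the empirical formulation cited above, $Q$ in cubic meters per second and dimensions in meters):

```python
def channel_geometry(Q):
    """Mean-parameter power laws for stream width and depth (in meters),
    with Q the mean discharge (assumed here in cubic meters per second)."""
    width = 7.2 * Q ** 0.50
    depth = 0.27 * Q ** 0.30
    return width, depth

# A hypothetical reach with a mean discharge of 4 m^3/s:
w, h = channel_geometry(4.0)
print(round(w, 2), round(h, 3))  # 14.4 0.409
```

Consistent with the wide-rectangular-channel assumption, the computed width exceeds the depth by more than an order of magnitude.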
Using the controller placement algorithm, control structures are placed such
that approximately 50$\pm$3\% of the catchment area is captured by storage
basins. To investigate the effect of the number of controllers on performance,
optimized controller strategies are generated using between $k=1$ and $k=35$
controllers. The ratio of hillslope-to-channel travel times is assumed to be
$\phi = 10$. We compare the performance of our controller placement algorithm to
randomized controller placement schemes, in which approximately 50$\pm$3\% of
the catchment area is controlled but the placement of controllers is random. For
this comparison assessment, we generate 50 randomized controller placement
trials, each using between $k=1$ and $k=24$ controllers.\footnote{While the
controller randomization code was programmed to use between 1 and 35
controllers, the largest number of controllers achieved was 24. This result
stems from the fact that the randomization algorithm struggled to achieve 30+
partitions without selecting cells that fell below the channelization
threshold (100 accumulation).}
We simulate the hydrologic response using a hydrodynamic model, and evaluate
controller placement performance based on the flatness of the resulting
hydrograph. To capture the hydrologic response under various rainfall
conditions, we simulate small, medium and large rainfall events, corresponding
to 0.5, 1.5 and 4.0 mm of rainfall delivered instantaneously over the first five
minutes of the simulation. A hydrodynamic model is used to simulate the
hydrologic response at the outlet by routing runoff through the channel network
using the dynamic wave equations \cite{swmm_2018}. The simulation performance is
measured by both the peak discharge and the total variance of the hydrograph.
The variance of the hydrograph (which we refer to as ``flashiness'') is defined
as:
\begin{equation}
\label{eq:variance}
\sigma^2 = \frac{1}{N} \sum_{i=1}^N (Q_i - \bar{Q})^2
\end{equation}
where $Q_i$ is the discharge at time step $i$, $\bar{Q}$ is the mean discharge in the storm window,
and $N$ is the number of data points in the storm window. This variance metric
captures the flow's deviation from the mean over the course of the hydrologic
response, and thus provides a natural metric for the flatness of the hydrograph.
This metric is important for water quality considerations like first flush
contamination or streambed erosion---in which the volume of transported material
(e.g. contaminants or sediments) depends not only on the maximum discharge, but
also on the duration of flow over a critical threshold \cite{Wong_2016}.
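The variance metric is straightforward to compute; for instance, on two hypothetical discharge series of equal total volume:

```python
import numpy as np

def flashiness(Q):
    """Variance of the hydrograph over the storm window."""
    Q = np.asarray(Q, dtype=float)
    return float(np.mean((Q - Q.mean()) ** 2))

# Two hydrographs with the same total volume: a flat release scores zero,
# while a flashy, peaked response scores high.
print(flashiness([2.0, 2.0, 2.0, 2.0]))  # 0.0
print(flashiness([0.0, 6.0, 2.0, 0.0]))  # 6.0
```

Because the metric penalizes any deviation from the mean flow, it rewards flattening of both the rising and falling limbs, not just peak reduction.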
Note that the validation experiment is not intended to faithfully reproduce the
precise hydrologic response of our chosen study area, but rather, to test the
basic premises of the controller placement algorithm. As such, site-specific
details---such as land cover, soil types and existing infrastructure---have been
deliberately simplified. For situations in which these characteristics exert an
important influence on the hydrologic response, one may account for these
factors by adjusting the inter-vertex travel times used in the controller
placement algorithm.
\section{Results}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{img/performance_overview.png}
\caption[Hydraulic modeling results]{Results of the hydraulic simulation
experiment for the medium storm event (1.5 mm). Top left: optimal controller
placement ($k=35$), with captured regions in red. Bottom left: hydrographs
resulting from each simulation. The uncontrolled simulation is shown in black,
while the optimized controller placement simulations are shown in red, and the
randomized controller simulations are shown in gray. Right: the overall
flashiness (variance of the hydrograph) and peak discharge for each
simulation, using the same coloring scheme.}
\label{fig:swmm}
\end{figure*}
The controller placement algorithm produces consistently flatter flows than
randomized control trials. Figure \ref{fig:swmm} shows the results of the
hydraulic simulation assessment in terms of the resulting hydrographs (bottom
left), and the overall flashiness and peak discharge of each simulation (right)
for the medium-sized (1.5 mm) storm event. The best performance is achieved by
using the controller placement algorithm with $k=35$ controllers (see Figure
\ref{fig:swmm}, top left). Comparing the overall variances and peak discharges,
it can be seen that the optimized controller placement produces flatter outlet
discharges than any of the randomized controller placement strategies.
Specifically, the optimized controller placement achieves a peak discharge that
is roughly 47\% of that of the uncontrolled case, while the randomized
simulations by comparison achieve an average peak discharge that is more than
72\% of that of the uncontrolled case. Similarly, the hydrograph variance of the
optimized controller placement is roughly 21\% of that of the uncontrolled case,
compared to 35\% for the randomized simulations on average.\footnote{Note that
the controller placement algorithm results in a longer falling limb than the
randomized trials. This result stems from the fact that the algorithm
prioritizes the removal of grid cells that contribute to the peak and rising
limb of the hydrograph, while grid cells contributing to the falling limb are
ignored. In other words, the controller placement algorithm shifts discharges
from the peak of the hydrograph to the falling limb.} When tested against
storm events of different sizes (0.5 to 4 mm of rain), the controller placement
algorithm also generally outperforms randomized control trials (see Section S4
in the Supplementary Information). However, the within-group performance varies
slightly with rain event size, which could result from the nonlinearities
inherent in wave propagation speed. Thus, while the optimized controller
placement still produces flatter flows than randomized controls, this result
suggests that the performance of the controller placement algorithm could be
further improved by tuning the assumed inter-vertex travel times to correspond
to the expected speed of wave propagation.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{img/hydrograph_num_controllers_vert.png}
\caption{Left: hydrographs associated with varying numbers of controllers (k),
using the controller placement algorithm with 50\% watershed area
removal. Right: hydrograph variance (top) and peak discharge (bottom) vs.
number of controllers. In general, using more controllers produces a flatter response.}
\label{fig:num_controllers}
\end{figure*}
Under the controller placement algorithm, the best performance is achieved by
using a large number of small-scale controllers; however, increasing the number
of controllers does not improve performance under the randomized placement scheme.
Given that increasing the number of controllers allows the algorithm to better
target highly synchronous cells, this result is consistent with the central
premise that capturing synchronous cells will lead to a flatter hydrologic
response. Figure \ref{fig:num_controllers} shows the optimized hydrologic
response for varying numbers of controllers (left), along with the overall
variance (top right) and peak discharge (bottom right). In all cases, roughly
50\% of the watershed is controlled; however, configurations using many small
controllers consistently perform better than configurations using a few large
controllers. This trend does not hold for the randomized controller placement
strategy (see Figure S8 in the Supplementary Information). Indeed, the
worst-performing randomized controller placement uses $k=6$ controllers (out of
a minimum of 1) while the best-performing randomized controller placement uses
$k=18$ controllers (out of a maximum of 24). The finding that the controller
placement algorithm converges to a (locally) optimal solution follows from the
fact that as the number of partitions increases, controllers are better able to
capture highly-weighted regions without also capturing extraneous low-weight
cells. This in turn implies that the weighting scheme used by the algorithm
accurately identifies the regions of the watershed that contribute
disproportionately to synchronized flows. Thus, in spite of various sources of
model and parameter uncertainty, the experimental results confirm the central
principles under which the controller placement algorithm operates: namely, that
synchronous regions can be deduced from the graph structure alone, and that
controlling these regions results in a flatter hydrograph compared to randomized
controls.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{img/placement_visualization.png}
\caption{Left: Best controller placement in terms of peak discharge ($k = 35$
controllers, optimized). Center: worst controller placement in terms of peak
discharge ($k = 6$ controllers, randomized). Controller locations are
indicated by black crosses, and controlled partitions are indicated by colored
regions. Right: hydrographs associated with the best and worst controller
placement strategies.}
\label{fig:placement_visualization}
\end{figure*}
In addition to demonstrating the efficacy of the controller placement algorithm,
the validation experiments reveal some general principles for organizing
hydraulic control structures within drainage networks to achieve downstream
streamflow objectives. Overall, the controller placement strategies that perform
best---whether achieved through optimization or randomization---tend to
partition the watershed axially rather than laterally. These lengthwise
partitions result in a long, thin drainage network that prevents tributary flows
from ``piling up''. Figure \ref{fig:placement_visualization} shows the
partitions corresponding to the best-performing and worst-performing controller
placement strategies with respect to peak discharge (left and center,
respectively), along with the associated hydrographs (right). While the
best-performing controller placement strategy evenly distributes the partitions
along the length of the watershed, the worst-performing controller placement
strategy controls only the most upstream half of the watershed. As a result, the
worst-performing strategy removes the largest part of the peak, but completely
misses the portion of the peak originating from the downstream half of the
watershed. In order to achieve a flat downstream hydrograph, controller
placement strategies should seek to evenly distribute controllers along the
length of the watershed.
\section{Discussion}
The controller placement algorithm presented in this study provides a tool for
designing stormwater control systems to better mitigate floods, regulate
contaminants, and protect aquatic ecosystems. By reducing peak discharge,
optimized placement of stormwater control structures may help to lessen the
impact of flash floods. Existing flood control measures often focus on
controlling large riverine floods---typically through existing centralized
assets, like dams and levees. However, flash floods may occur in small
tributaries, canals, and even normally dry areas. For this reason, flash floods
are not typically addressed by large-scale flood control measures, despite the
fact that they cause more fatalities than riverine floods in the developed world
\cite{Doocy_2013}. By facilitating distributed control of urban flash floods,
our controller placement strategy could help reduce flash flood related
mortality. Moreover, by flattening the hydrologic response, our controller
placement algorithm promises to deliver a number of environmental and water
quality benefits, such as decreased first flush contamination \cite{Wong_2016},
decreased sediment transport \cite{muschalla_2014}, improved potential for
treatment in downstream green infrastructure \cite{Kerkez_2016, Bartos_2018},
and regulation of flows in sensitive aquatic ecosystems \cite{poresky_2015}.
\subsection{Key features of the algorithm}
The controller placement algorithm satisfies a number of important operational
considerations:
\begin{itemize}
\item \textbf{Theoretically motivated}. The controller placement algorithm has
its foundation in the theory of the geomorphological impulse unit
hydrograph---a relatively mature theory supported by an established body of
research \cite{rodriguez_2001, kirkby_1976, gupta_1986, gupta_1988, mesa_1986,
marani_1991, troutman_1985}. Moreover, the algorithm works in an intuitive
way---by recursively removing the subcatchments of a watershed that contribute
most to synchronized flows. This theoretical basis distinguishes our algorithm
from other strategies that involve exhaustive optimization or direct
application of existing graph theoretical constructs (such as graph centrality
metrics).
\item \textbf{Generalizable and extensible}. Because it relies solely on network
topology, the controller placement algorithm will provide consistent results
for any drainage network---including both natural stream networks and
constructed sewer systems. Moreover, because each step in our algorithm has a
clear meaning in terms of the underlying hydrology, the algorithm can be
modified to satisfy more complex control problems (such as systems in which
specific regulatory requirements must be met).
\item \textbf{Flexible to user objectives and constraints}. The controller
placement algorithm permits specification of important practical constraints,
such as the amount of drainage area that each control site can capture, and
the number of control sites available. Moreover, the weighting function can
be adjusted to optimize for a variety of objectives (such as the overall
``flatness'' of the hydrograph, or removal of flows from a contaminated
upstream region).
\item \textbf{Parsimonious with respect to data requirements}. The controller
placement algorithm requires only a digital elevation model of the watershed
of interest. Additional data---such as land cover and existing hydraulic
infrastructure---can be used to fine-tune estimates of travel times within
the drainage network, but are not required by the algorithm itself.
\item \textbf{Fast implementation}. For the watershed examined in this study
(consisting of about 12,000 vertices), the controller placement algorithm
computes optimal locations for $k=15$ controllers in roughly
3.0 seconds (on a 2.9 GHz Intel Core i5 processor). While the
computational complexity of the algorithm is difficult to
characterize\footnote{The computational complexity of the controller
placement algorithm depends on the implementation of component functions
(such as delineation and accumulation computation), which can in turn
depend on the structure of the watershed itself.}, it is faster than other
comparable graph-cutting algorithms, such as recursive spectral bisection or
spectral clustering, both of which are $O(n^3)$ in computational complexity.
\end{itemize}
Taken together, these features make our algorithm a solution to the controller
placement problem that is suitable for research as well as for practical
applications. On
one hand, the algorithm is based in hydrologic and geomorphological theory, and
provides important insights into the connections between geomorphology and the
design of the built environment. On the other hand, the algorithm is fast,
robust, and easy-to-use, making it a useful tool for practicing engineers and
water resource managers.
\subsection{Caveats and directions for future research}
While our controller placement algorithm is robust and broadly-applicable, there
are a number of important considerations to keep in mind when applying this
algorithm to real-world problems.
\begin{itemize}
\item The controller placement algorithm implicitly assumes that rainfall is
uniform over the catchment of interest. While this assumption is justified for
small catchments in which the average spatial distribution of rainfall will be
roughly uniform, this assumption may not hold for large (e.g. continent-scale)
watersheds. Modifications to the algorithm would be necessary to account for a
non-uniform spatial distribution of rainfall.
\item The controller placement algorithm is sensitive to the chosen ratio of
hillslope to channel speeds, $\phi$. Care should be taken to select an
appropriate value of $\phi$ based on site-specific land cover and
morphological characteristics. More generally, for situations in which
differential land cover, soil types, and existing hydraulic infrastructure
play a dominating role, the performance of the algorithm may be enhanced by
adjusting inter-vertex travel times to correspond to estimated overland flow
and channel velocities.
\item Our assessment of the algorithm's performance rests on the assumption that
installed control structures (e.g. retention basins) are large enough to
capture upstream discharges. The algorithm itself does not explicitly account
for endogenous upstream flooding that could be introduced by installing new
control sites.
\item In this study, experiments were conducted only for impulsive rainfall
inputs (i.e. with a short duration of rainfall). Future work should assess the
performance of the distance-weighted controller placement strategy under
arbitrary rainfall durations.
\end{itemize}
More broadly, future research should investigate the problem of sensor placement
in stream networks using the theoretical framework developed in this paper.
While this study focuses on the problem of optimal placement of hydraulic
control structures, our algorithm also suggests a solution to the problem of
sensor placement. Stated probabilistically, the geomorphological impulse unit
hydrograph (GIUH) represents the probability that a ``particle'' injected
randomly within the watershed at time $t=0$ exits the outlet at time $t=t'$. Thus,
the peaks of the GIUH correspond to the portions of the hydrologic response
where there is the greatest amount of ambiguity about where a given ``particle''
originated. It follows that the same locations that maximally de-synchronize
flows may also be the best locations for disambiguating the locations from which
synchronous flows originated. Future experiments should investigate the ability
to estimate upstream states (e.g. flows) within the network given an outlet
discharge along with internal state observers (e.g. flow sensors) placed using
the algorithm developed in this study.
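The probabilistic reading of the GIUH above can be made concrete with a short sketch: treat the GIUH as the empirical distribution of particle travel times to the outlet, and locate its peaks, which mark the most ambiguous portions of the hydrologic response. This is an illustrative reconstruction, not the paper's code; the function name and the use of `scipy.signal.find_peaks` are our own choices.

```python
import numpy as np
from scipy.signal import find_peaks

def giuh_peaks(travel_times, dt=1.0):
    """Estimate a GIUH as the travel-time distribution of particles
    injected uniformly over the watershed, and locate its peaks.

    travel_times: 1-D array of outlet arrival times, one per particle.
    Returns (bin-center times, density, peak times)."""
    bins = np.arange(0.0, travel_times.max() + dt, dt)
    density, edges = np.histogram(travel_times, bins=bins, density=True)
    times = 0.5 * (edges[:-1] + edges[1:])
    peaks, _ = find_peaks(density)
    return times, density, times[peaks]
```

The peak times returned here would be the candidate moments of maximal "synchronization" that the placement algorithm targets.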
\section{Conclusions}
We develop an algorithm for placement of hydraulic control structures that
maximally flattens the hydrologic response of drainage networks. This algorithm
uses the geomorphological instantaneous unit hydrograph to locate subcatchments that
dominate the peaks of the hydrograph, then partitions the drainage network to
minimize the contribution of these subcatchments. We find that the controller
placement algorithm produces flatter hydrographs than randomized controller
placement trials---both in terms of peak discharge and overall variance. By
reducing the flashiness of the hydrologic response, our controller placement
algorithm may one day help to mitigate flash floods and restore urban water
quality through reduction of contaminant loads and prevention of streambed
erosion. We find that the performance of the algorithm is enhanced when using a
large number of small, distributed controllers. In addition to confirming the
central hypothesis that synchronous cells can be identified based on network
structure of drainage basins, this result lends justification to the development
of decentralized \textit{smart} stormwater systems, in which active control of
small-scale retention basins, canals and culverts enables more effective
management of urban stormwater. Overall, our algorithm is efficient, requires
only digital elevation model data, and is robust to parameter and model
uncertainty, making it suitable both as a research tool, and as a design tool
for practicing water resources engineers.
\section*{Acknowledgments}
Funding for this project was provided by the National Science Foundation (Grants
1639640 and 1442735) and the University of Michigan. We would like to thank Alex
Ritchie for exploring alternative approaches to the controller placement problem
and for his help with literature review. We would also like to thank Dr. Alfred
Hero for his advice in formulating the problem.
\section*{Declarations of interest}
Declarations of interest: none.
\section*{Data availability}
Upon publication, code and data links will be made available at:
\url{https://github.com/kLabUM/hydraulic-controller-placement}
\section{Introduction}\label{section: Introduction}
\vskip-8pt
With increasing success in reinforcement learning (RL), there is broad interest in applying these methods to real-world settings. This has brought exciting progress in offline RL and off-policy policy evaluation (OPPE). These methods allow one to leverage observed data sets collected by expert exploration of environments where, due to cost or ethical reasons, direct exploration is not feasible. Sample-efficiency, reliability, and ease of interpretation are characteristics that offline RL methods must have in order to be used in high-risk real-world applications, where data are often subject to sampling bias. In particular, there is a need for policies that shed light on the decision-making at all states and actions, and account for the uncertainty inherent in the environment and in the data collection process. In healthcare data, for example, a common bias arises: drugs are mostly prescribed to sick patients, so naive methods can lead agents to consider them harmful. Actions need to be limited to policies that are similar to the expert behavior, and sample size should be taken into account for decision-making \citep{Gottesman2019, Gottesmancorr2019}.
To address these deficits we propose an Expert-Supervised RL (ESRL) approach for offline learning based on Bayesian RL. This method yields safe and optimal policies as it learns when to adopt the expert's behavior and when to pursue alternative actions. Risk aversion might vary across applications as errors may entail a greater cost to human life or health, leading to variation in tolerance for the target policy to deviate from expert behavior. ESRL can accommodate different risk aversion levels. We provide theoretical guarantees in the form of a regret bound for ESRL, independent of the risk aversion level. Finally, we propose a way to interpret ESRL's policy at every state through posterior distributions, and use this framework to compute off-policy value function posteriors for any given policy.
While training a policy, ESRL considers the reliability of the observed data to assess whether there is substantial benefit and certainty in deviating from the behavior policy, an important task in a context of limited data. This is embedded in the method by learning a policy that chooses between the optimal action or the behavior policy based on statistical hypothesis testing. The posteriors are used to test the hypothesis that the seemingly optimal action is indeed better than the one from the behavior policy. Therefore, ESRL is robust to the quality of the behavior policy used to generate the data.
To understand the intuition for why hypothesis testing works for offline policy learning, we discuss an example. Consider a medical setting where we are interested in the best policy to treat a complex disease over time. We first assume there is a standardized treatment guideline that works well and that most physicians adopt it to treat their patients. The observed data will have very little exploration of the whole environment---in this case, meaning little use of alternative treatments. However, the state-action pairs observed will be near optimal. For any fixed state, actions not recommended by the treatment guidelines will be rare in the data set, and the posterior distributions will be dominated by the uninformative wide priors. The posteriors for the value associated with the optimal actions will incorporate more information from the data, as they are commonly observed. Thus, testing the null hypothesis that an alternative action is better than the treatment guideline will likely fail to reject the null, and the agent will conclude the physician's action is best. Unless the alternative is substantially better for a given state, the learned policy will not deviate from the expert's behavior when there is a clear standard of care.
On the other hand, if there is no treatment guideline or consensus among physicians, different doctors will try different strategies and state-action pairs will be more uniformly observed in the data. At any fixed state, some relatively good actions may have narrower posterior distributions associated with their value. Testing for the null hypothesis that a fixed action is better than what the majority of physicians chose is more likely to reject the null and point towards an alternative action in this case, as variance will be smaller across the sampled actions. Deviation from the (noisy) behavior policy will occur more frequently. Therefore, whether there is a clear care guideline or not, the method will have learned a suitable policy. A central point in Bayesian RL is that the posterior provides not just the expected value for each action, but also higher moments. We leverage this to produce interpretable policies which can be understood and analyzed within the context of the application. We illustrate this with posterior distributions and credible intervals (CI). We further propose a way to produce posterior distributions for OPPE with consistent and unbiased estimates.
\paragraph{Handling Uncertainty.} To the best of our knowledge, there is no work that has incorporated hypothesis testing directly into the policy training process. However, accounting for the uncertainty in policy estimation is a successful idea which has been widely explored in other works. Methods range from confidence interval estimation using bootstrap, to model ensembles for guiding online exploration \citep{Kaelbling1993, White2010, Kurutach2018}. For example, a simple and effective way of incorporating uncertainty is through random ensembles (REM) \cite{agarwal2019REM}. These have shown promise on Atari games, significantly outperforming Deep $Q$ networks (DQN) \citep{Mnih2015} and naive ensemble methods in the offline setting. We adopt the Bayesian framework, which has been proven successful in online RL \citep{Ghavamzadeh2015,O'Donoghue2018}, as it provides a natural way to formalize uncertainty in finite samples. Bayesian model free methods such as temporal difference (TD) learning provide provably efficient ways to explore the dynamics of the MDP \citep{Dearden1998, AsmuthJohn2012, Metelli2019}. Gaussian Process TD can be also used to provide posterior distributions with mean value and CI for every state-action pair \citep{Engel2005}. Although efficient for online exploration, TD methods require large data in high dimensional settings, which can be a challenge in complex offline applications such as healthcare. ESRL is model-based which makes it sample efficient \citep{Deisenroth2011}. Within model-based methods, the Bayesian framework allows for natural incorporation of uncertainty measures. Posterior sampling RL proposed by Strens efficiently explores the environment by using a single MDP sample per episode \citep{Strens00}. ESRL fits within this line of methods, which are theoretically guaranteed to be efficient in terms of finite time regret bounds \citep{Osband2013, Osband2016}.
\paragraph{Hypothesis Testing for Offline RL.}
Naively applying model-based RL to offline, high-dimensional tasks can degrade its performance, as the agent can be led to unexplored states where it fails to learn reliable policies. There are environments where simple approaches like behavior cloning (BC) on the offline data set are enough to ensure reliability. BC has been shown to perform quite well in offline benchmarks like RL Unplugged \citep{gulcehre2020rl}, D4RL \citep{fu2020d4rl} and Atari when the data is collected from a single noisy behavior policy \citep{fujimoto2019benchmarking}. The issue with these approaches is that nothing is gained in terms of optimality with respect to the expert, and there is no guarantee that the learned policies are safe in all states, a necessary condition when treating patients. A common strategy is to regularize the learned policy towards the behavior policy, whether directly in the state space or in the action space \citep{Kumar2019, kidambi2020, wu2019, gulcehre2020rl, fujimoto2019benchmarking}. However, there are cases where the data logging policy is a noisy representation of the expert behavior, and regularization will lead to sub-optimal actions. ESRL can detect these cases through hypothesis testing \citep{Quentin2020} to check whether improvement upon the behavior policy is feasible and, if so, incorporate new actions into the policy in accordance with the user's risk tolerance. Additionally, as opposed to the regularization hyper-parameter that one must choose for methods like Batch Constrained deep Q-learning (BCQ) \citep{gulcehre2020rl, fujimoto2019benchmarking}, the risk-aversion parameter has a direct interpretation as the significance level that the user is comfortable with for the policy to deviate from the expert behavior. It allows the method to be tailored to different scientific and business applications where one might have different tolerance for risk in the search for higher rewards.
\paragraph{Off-Policy Policy Evaluation and Interpretation.} Many of the aforementioned methods can be easily adapted for offline learning and often importance sampling is used to address the distribution shift between the behavior and target policies \citep{sutton}. However, importance sampling can yield high variance estimates in finite samples, especially in long episodes. Doubly robust estimation of the value function is proposed to address these issues. These methods will have low variance and consistent estimators if either the behavior policy or the model is correctly specified \citep{Jiang2016, WDR}. Still, in finite samples or environments with high dimensional state-action spaces, these doubly robust estimators may still not be reliable, because only a few episodes end up contributing to the actual value estimate due to the product in the importance sampling weights \citep{Gottesmancorr2019}. Additionally, having point estimates without any measure of associated uncertainty can be dangerous, as it is hard to know whether the sample size is large enough for the estimate to be reliable. To this end, we use the ESRL framework to sample MDP models from the posterior and evaluate the policy value. Our estimates are unbiased and consistent, and are equipped with uncertainty measures.
\vskip-8pt
\section{Problem Set-up}\label{section: setup}
\vskip-8pt
We are interested in learning policies that can be used in real-world applications. To develop the framework we will use the clinical example discussed in Section \ref{section: Introduction}. Consider a finite horizon MDP defined by the tuple $\langle\mathcal{S},\mathcal{A},R^M,P^M,P_0,\tau\rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $M$ is the model over all rewards and state transition probabilities with prior $f(\cdot)$, $R^M(s,a):\mathcal{S}\times\mathcal{A}\rightarrow[0,1]$ is the reward distribution for a fixed state-action pair $(s,a)$ under model $M$, with mean $\bar R^M(s,a)$, $P^M_a(s'|s)$ is the probability distribution function for transitioning to state $s'$ from state-action pair $(s,a)$ under model $M$, $\tau\in\mathbb{N}$ is the fixed episode length, and $P_0$ is the initial state distribution. The true MDP model $M^*$ has distribution $f$.
The behavior policy function is a noisy version of a deterministic policy. Going back to the clinical example there is generally a consensus of what the correct treatment is for a disease, but the data will be generated by different physicians who might adhere to the consensus to varying degrees. Thus, we model the standard of care as a deterministic policy function $\pi^0:\mathcal{S}\times\{1,\dots,\tau\}\mapsto\mathcal{A}$. The behavior policy is $\pi(s,t)=\pi^0(s,t)$ with probability (w.p.) $1-\epsilon$, and $\pi(s,t)=a$ sampled uniformly at random from $\mathcal{A}$ w.p. $\epsilon$. For a fixed $\epsilon\in[0,1]$, $\pi$ generates the observed data $\pmb D_T=\{(s_{i1},a_{i1},r_{i1},\dots,s_{i\tau },a_{i\tau},r_{i\tau})\}_{i=1}^T$ which consists of $T$ episodes (i.e. patient treatment histories), where $s_{i1}\sim P_0$ $\forall i=1,\dots,T$. Note that $\pi^0$ may generally yield high rewards, however it is not necessarily optimal and can be improved upon.
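A minimal sketch of this data-generating process (the function names and the environment interface are hypothetical; `pi0` stands for the deterministic standard of care):

```python
import random

def behavior_action(pi0, s, t, actions, eps, rng):
    """Epsilon-noisy expert: follow pi0 w.p. 1 - eps, otherwise act uniformly."""
    if rng.random() < eps:
        return rng.choice(actions)
    return pi0(s, t)

def generate_episodes(env_step, sample_s0, pi0, actions, eps, T, tau, rng):
    """Roll out T episodes of length tau to build the offline data set D_T."""
    data = []
    for _ in range(T):
        s, episode = sample_s0(), []
        for t in range(1, tau + 1):
            a = behavior_action(pi0, s, t, actions, eps, rng)
            s_next, r = env_step(s, a)
            episode.append((s, a, r))
            s = s_next
        data.append(episode)
    return data
```

With `eps=0` this reduces to exact behavior cloning of $\pi^0$; with `eps=1` it is a uniformly random logging policy.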
We'll denote a policy function by $\mu:\mathcal{S}\times\{1,\dots,\tau\}\rightarrow\mathcal{A}$. The value function associated with $\mu$ under model $M$ is $V_{\mu,t}^M(s)=\mathbb{E}_{M,\mu}\left[\sum_{j=t}^\tau\bar R^M(s_j,a_j)|s_t=s\right],$ and the action-value function is $Q^{M}_{\mu,t}(s,a)=\bar R^{M}(s,a)+ \sum_{s'\in\mathcal{S}}P^{M}_a(s'|s)V_{\mu,t+1}(s').$ At any fixed $(s,t)$, $\mu(s,t)\equiv\arg\max_a Q_{\tilde\mu,t}(s,a)$; note that we allow $\tilde\mu$ in the $Q$ function to differ from $\mu$. This distinction will be useful, as $\tilde\mu$ can be $\mu$, $\pi$, or the ESRL policy defined in Section \ref{section: ESRL}. Finally, $\pi(a|s,t)$ is the probability of $a$ given $(s,t)$ under the behavior policy.
\vskip-8pt
\section{Expert-Supervised Reinforcement Learning}\label{section: ESRL}
\vskip-8pt
We are interested in finding a policy which improves upon $\pi$. Directly regularizing the target policy to the behavior might restrict the agent from finding optimal actions, especially when $\pi$ has a high random component $\epsilon$, or $\pi^0$ is not close to optimal. Thus we want to know when to use $\mu$ versus $\pi$. This motivates the use of posterior distributions to quantify how well each state has been explored in $\pmb D_T$ and how close $\pi$ is to $\pi^0$. At every state and time $(s,t)$ in the episode we can sample $K$ MDP models from $f(\cdot|\pmb D_T)$. These samples are used to compare the quality of the behavior and target policy actions. We consider both the expected values of each action $Q_{\tilde\mu,t}(s,\pi(s,t))$ versus $Q_{\tilde\mu,t}(s,\mu(s,t))$, and their second moments for any fixed $\tilde\mu$. In particular, posterior distributions of $Q_{\tilde\mu,t}(s,a),$ $a\in\mathcal{A}$ are used to test if the value for $\mu(s,t)$ is significantly better than $\pi$. This makes the learning process robust to the quality of the behavior policy.
Next we formalize these arguments by a sampling scheme, define the ESRL policy, and state its theoretical properties.
\paragraph{Sampling $Q$ functions.}\label{section: CI estimation}
The distribution over the MDP model $f(\cdot|\pmb D_T)$ implicitly defines a posterior distribution for any $Q$ function: $Q_{\tilde\mu,t}(s,a)\sim f_Q(\cdot|s,a,t,\pmb D_T)$. As the true MDP model $M^*$ is stochastic, we want to approximate the conditional mean $Q$ value:
$
\mathbb{E}\left[Q^{M^*}_{\tilde\mu,t}(s,a)|s,a,t,\pmb D_T\right].
$
We do this by sampling $K$ MDP models $M_k$, compute $Q^{(k)}_{\tilde\mu,t}(s,a)$, $k=1,\dots,K$ and use $\hat Q_{\tilde\mu,t}(s,a)\equiv\frac{1}{K}\sum_{k=1}^KQ^{(k)}_{\tilde\mu,t}(s,a)$.
\begin{lemma}\label{lemma: q function concent}
$\hat Q_{\tilde\mu,t}(s,a)$ is consistent and unbiased for $Q^{M^*}_{\tilde\mu,t}(s,a):$
\[
\mathbb{E}\left[\hat Q_{\tilde\mu,t}(s,a)|s,a,t,\pmb D_T\right]=\mathbb{E}\left[Q^{M^*}_{\tilde\mu,t}(s,a)|s,a,t,\pmb D_T\right],
\]
\[
\hat Q_{\tilde\mu,t}(s,a)-\mathbb{E}\left[Q^{M^*}_{\tilde\mu,t}(s,a)|s,a,t,\pmb D_T\right]=O_p\left(K^{-\frac{1}{2}}\right), \forall(t,s,a).
\]
\end{lemma}
Lemma \ref{lemma: q function concent} establishes desirable properties for our $Q$ function estimation. Choosing $K=1$ yields an immediate result: every $Q^{(k)}_{\tilde\mu,t}(s,a)$ from model $M_k$ is unbiased.
The stochasticity of $M^*$ and $\pi$ suggests the mean $Q$ values for $\pi$ and $\mu$ are not enough to make a decision for whether it is beneficial to deviate from $\pi$.
Next we discuss how to directly incorporate this uncertainty assessment into the policy training through Bayesian hypothesis testing.
\paragraph{ESRL Policy Learning Through Hypothesis Testing.}
{
For a fixed $\alpha$-level, denote the ESRL policy by $\mu^{\alpha}$; we next describe the steps to learn this policy. Iterating backwards as in dynamic programming, assume we know $\mu^{\alpha}(s,j)$ $\forall s\in\mathcal{S},j\in\{t+1,\dots,\tau\}$, and that $V_{\mu^{\alpha},\tau+1}^M(s)=0,\forall s\in\mathcal{S}$. Intuitively, at any $(s,t)$ we want to assess whether there is enough information in $\pmb D_T$ to support choosing the seemingly best action $\mu$ over $\pi$. Denoting $\mu(s,t)=\arg\max_a Q_{\mu^\alpha,t}(s,a)$ as the best action if we follow the ESRL policy $\mu^{\alpha}$ onward, we formalize this with the following hypothesis:
\begin{align}\label{def: generic null}
H_0:Q^M_{\mu^{\alpha},t}(s,\mu(s,t))\le Q^M_{\mu^{\alpha},t}(s,\pi(s,t)).
\end{align}
Note that in \eqref{def: generic null}, both $Q$ functions assume the agent proceeds with the ESRL policy $\mu^\alpha$ onward. If we can reject $H_0$, then it is safe to follow $\mu$; if we fail to reject the null, this does not necessarily mean the behavior policy is better, only that there is not enough information in the data to support following $\mu$. To construct a safe ESRL policy we simply evaluate $H_0$ by computing the null probability $\mathbb{P}\left(H_0|t,s,\pmb D_T\right)$; if this is below a pre-specified risk-aversion level $\alpha$, then we can safely choose $\mu$. In other words, if the learned policy does not yield a significantly better value estimate, then we fail to reject the null and proceed with the behavior policy's action. The ESRL policy at $(s,t)$ is then
\[
\mu^{\alpha}(s,t) = \left\{
\begin{array}{ll}
\mu(s,t) & \text{ if }\quad \mathbb{P}\left(H_0|t,s,\pmb D_T\right)<\alpha, \\
\pi(s,t) & \text{else.}
\end{array}
\right.
\]
To compute $\mu^\alpha(s,t)$, we start by sampling $K$ MDP models from the posterior distribution, computing $\{Q^{(k)}_{\mu^\alpha,t}(s,a)\}_{k=1}^K$ and splitting the samples into two disjoint sets $\mathcal{I}_1,\mathcal{I}_2$. We use $\mathcal{I}_1$ to draw the policy $\hat\mu(s,t)$ based on majority voting. Then we use $\mathcal{I}_2$ to assess the null hypothesis in \eqref{def: generic null}, with estimator
$
\hat{\mathbb{P}}\left(H_0|t,s,\pmb D_T\right)
=
\frac{1}{K}\sum_{k=1}^KI\left(Q^{(k)}_{\mu^{\alpha},t}(s,\hat\mu(s,t))\le Q^{(k)}_{\mu^{\alpha},t}(s,\pi(s,t))\right).$ We next discuss convergence of the null probability estimator, and how to choose $\hat\mu(s,t)$ $\forall (s,t)\in\mathcal{S}\times\{1,\dots,\tau\}$.
}
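The test-then-act step can be sketched as follows (a simplified illustration with hypothetical names; tie-breaking and the full backward recursion of Algorithm 1 are omitted):

```python
import numpy as np

def esrl_action(Q_samples, pi_a, alpha):
    """One ESRL decision at a fixed (s, t).

    Q_samples: (K, A) posterior draws of Q_{mu^alpha, t}(s, .).
    pi_a: index of the behavior action. The draws are split into I1
    (majority vote for the candidate action) and I2 (estimate of
    P(H0 | t, s, D_T)); the candidate is adopted only when that
    probability falls below the risk-aversion level alpha."""
    Q = np.asarray(Q_samples)
    K, A = Q.shape
    I1, I2 = Q[: (K + 1) // 2], Q[(K + 1) // 2:]
    votes = np.bincount(I1.argmax(axis=1), minlength=A)
    mu_hat = int(votes.argmax())                      # majority-vote candidate
    p_null = float(np.mean(I2[:, mu_hat] <= I2[:, pi_a]))
    return (mu_hat if p_null < alpha else pi_a), p_null
```

The sample split mirrors the disjoint sets $\mathcal{I}_1,\mathcal{I}_2$: one half nominates the candidate action, the other half tests it.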
\begin{lemma}\label{lemma: Q H_0 concent}
Let $\mathbb{P}^*\left(H_0|t,s,\pmb D_T\right)$ be the null probability under true MDP $M^*$ with policy $\mu^*$,
\[
\hat{\mathbb{P}}\left(H_0|t,s,\pmb D_T\right)-\mathbb{P}^*\left(H_0|t,s,\pmb D_T\right)=O_p\left(K^{-\frac{1}{2}}\right).
\]
\end{lemma}
Lemma \ref{lemma: Q H_0 concent} guarantees that we can construct a consistent policy $\mu^\alpha$ by sampling from the MDP posterior. Two factors come into play in \eqref{def: generic null}: the difference in mean $Q$ values, and the second moments. If $Q^{M^*}_{\mu^\alpha,t}(s,\mu(s,t))$ is much higher than $Q^{M^*}_{\mu^\alpha,t}(s,\pi(s,t))$, but there are very few samples in $\pmb D_T$ for $(s,\mu(s,t))$, the wide posterior will translate into a high $\hat{\mathbb{P}}\left(H_0|t,s,\pmb D_T\right)$, leading ESRL to adopt $\pi(s,t)$. To choose $\mu(s,t)$ there must be both a substantial benefit from the new action and high certainty of that gain. How averse the user is to deviating from $\pi$ is controlled by the parameter $\alpha$: a small, risk-averse $\alpha$ will allow $\mu^\alpha$ to deviate from $\pi$ only with high certainty. When $\alpha=1$, Algorithm \ref{algorithm: ESRL} reduces to an offline version of PSRL after $T$ episodes, which uses majority voting for a robust policy. Algorithm \ref{algorithm: ESRL} collects these ideas in order to learn an ESRL policy $\mu^\alpha$. The disjoint sets $\mathcal{I}_1,\mathcal{I}_2$ ensure independence and preserve the theoretical guarantees under Assumption \ref{assumptions: alpha}.
\begin{algorithm}
\SetAlgoLined
Sample $M_k\sim f(\cdot|\pmb D_T)$ $k=1,\dots,K$, set $\mathcal{I}_1=\{1,\dots,\ceil{\frac{K}{2}}\}$, $\mathcal{I}_2=\{\ceil{\frac{K}{2}}+1,\dots,K\}$\;
Set $\hat V^{(k)}_{\tau+1}(s)\leftarrow0$ $\forall s\in\mathcal{S}$, $k=1,\dots,K$\;
Compute behavior distribution $\pi(a|s,t)$ from $\pmb D_T$, set $\pi(s,t)=\text{arg}\max_a\pi(a|s,t)$\;
\For{$t=\tau,\dots,1$}{
\For{$s\in\mathcal{S}$}{
\For{$k=1,\dots,K$}{
$\mu_k(s,t)\leftarrow\text{arg}\max_aQ^{(k)}_{\mu^{\alpha},t}(s,a)$\;
}
$\hat\mu(s,t)\leftarrow\text{maj. vote}\{\mu_k(s,t),k\in\mathcal{I}_1\}$\;
Compute $\hat{\mathbb{P}}(H_0|s,t,\pmb D_T)=\frac{1}{|\mathcal{I}_2|}\sum_{k\in\mathcal{I}_2}I\left(Q^{(k)}_{\mu^{\alpha},t}(s,\hat\mu(s,t))<Q^{(k)}_{\mu^{\alpha},t}(s,\pi(s,t))\right)$\;
\For{$k=1,\dots,K$}{
$\mu^\alpha_k(s,t)\leftarrow I\left(\hat{\mathbb{P}}(H_0|s,t,\pmb D_T)<\alpha\right)\mu_k(s,t)+I\left(\hat{\mathbb{P}}(H_0|s,t,\pmb D_T)\ge\alpha\right)\pi(s,t)$\;
$\hat V^{(k)}_t(s)\leftarrow Q^{(k)}_{\mu^{\alpha},t}(s,\mu^\alpha_k(s,t))$\;
}
$\hat\mu^\alpha(s,t)\leftarrow\text{maj. vote}\{\mu^\alpha_k(s,t),k\in\mathcal{I}_1\}$\;
$\mathcal{M}^\alpha(s,t)\leftarrow\left\{k|k\in\mathcal{I}_1,\mu^\alpha_k(s,t) = \hat\mu^\alpha(s,t)\right\}$\;
}
}
Define majority voting set: MV$^\alpha=\cap_{(s,t)}\mathcal{M}^\alpha(s,t)$\;
\uIf{$\exists k\in \text{MV}^\alpha$}{
choose $k\in \text{MV}^\alpha$ at random, set $k^{\text{MV}}\leftarrow k$
}
\Else{
Set $k^{\text{MV}}$ to most common $k\in \mathcal{M}^\alpha(s,t),\forall (s,t)$
}
Set $\mu^\alpha=\mu_{k^{MV}}$
\caption{Expert-Supervised RL}
\label{algorithm: ESRL}
\end{algorithm}
\begin{assumption}\label{assumptions: alpha}
Let $\mathbb{P}^*(H_0|s,t,\pmb D_T)$ be defined as in \eqref{def: generic null} for the true $M^*$. The chosen risk-averse parameter $\alpha\in[0,1]$ satisfies $\mathbb{P}^*(H_0|s,t,\pmb D_T)\neq\alpha$ $\forall (s,t)\in\mathcal{S}\times\{1,\dots,\tau\}$.
\end{assumption}
As $\alpha$ is set by the user, Assumption \ref{assumptions: alpha} is easily satisfied as long as $\alpha$ is chosen carefully. Let $V_{\mu^{\alpha*},1}^{M^*}(s)$ be the value under the true MDP $M^*$ and let $\mu^{\alpha*}$ be an ESRL policy which uses the null hypotheses in \eqref{def: generic null} defined under $M^*$. Then, for episode $i$ we can define the regret for $\mu^{\alpha}$ from Algorithm \ref{algorithm: ESRL} as $\Delta_i=\sum_{s_i\in\mathcal{S}}P_0(s_i)\left(V_{\mu^{\alpha*},1}^{M^*}(s_i)-V_{\mu^{\alpha},1}^{M^*}(s_i)\right)$, and the expected regret after $T$ episodes as
$\mathbb{E}[Regret(T)]=\mathbb{E}\left[\sum_{i=1}^T\Delta_i\right]$.
\begin{theorem}[Regret Bound for ESRL]\label{theorem: alg 2}
For any $\alpha\in[0,1]$ which satisfies Assumption \ref{assumptions: alpha}, Algorithm \ref{algorithm: ESRL} using $\pmb D_T$ and choosing $K=\mathcal{O}\left( T\right)$ will yield
\[
\mathbb{E}\left[Regret\left(T\right)\right]=\mathcal{O}\left(\tau S\sqrt{AT\log(SAT)}\right).
\]
\end{theorem}
Theorem \ref{theorem: alg 2} shows ESRL is sample efficient, flexible to risk aversion level $\alpha$, and robust to the quality of behavior policy $\pi$. As the regret bound is true for any level of risk aversion $\alpha$, Algorithm 1 universally converges to the oracle. This makes ESRL flexible for a wide range of applications. It also shows that ESRL is suitable to a large class of models, as the regret bound does not impose a specific form on $f$. Regarding access to $f(\cdot|\textbf{D}_T)$ for sampling MDPs in real-world problems, as data increases, dependency of results on the prior decreases, so we can use any \textit{working model} to approximate the MDP. Several models are computationally simple to sample from, and can be used for learning. For example, we use the Dirichlet/multinomial, and normal-gamma/normal conjugates for $P^M$ and $R^M$ respectively, which work well for all simulation and real data settings explored in Section \ref{section: experiments}. In fact, if a Dirichlet prior over the transitions is assumed, the regret bound in Theorem \ref{theorem: alg 2} can be improved. Chosen priors should be flexible enough to capture the dynamics and easy to sample from efficiently. Next we consider how to discern whether ESRL, or any other fixed policy, is an improvement on the behavior policy.
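The conjugate sampling step mentioned above can be sketched as follows. This is our own illustrative implementation; the hyperparameter defaults and variable names are assumptions, and we draw only the mean-reward level from the normal-gamma posterior rather than full reward distributions.

```python
import numpy as np

def sample_mdp(trans_counts, r_sum, r_sumsq, r_n, rng,
               a0=1.0, mu0=0.0, lam0=1.0, alpha0=1.0, beta0=1.0):
    """Draw one MDP from conjugate posteriors.

    trans_counts: (A, S, S) observed transition counts.
    r_sum, r_sumsq, r_n: (S, A) sufficient statistics of observed rewards.
    Transitions: Dirichlet(a0 + counts) per (a, s) row.
    Mean rewards: normal-gamma posterior update, then a posterior draw."""
    A, S, _ = trans_counts.shape
    P = np.zeros_like(trans_counts, dtype=float)
    for a in range(A):
        for s in range(S):
            P[a, s] = rng.dirichlet(a0 + trans_counts[a, s])
    n = r_n
    mean = np.divide(r_sum, np.maximum(n, 1))
    lam_n = lam0 + n
    mu_n = (lam0 * mu0 + r_sum) / lam_n
    alpha_n = alpha0 + n / 2.0
    ss = r_sumsq - n * mean ** 2                      # within-sample sum of squares
    beta_n = beta0 + 0.5 * ss + lam0 * n * (mean - mu0) ** 2 / (2.0 * lam_n)
    tau_prec = rng.gamma(alpha_n, 1.0 / beta_n)       # precision draw
    R = rng.normal(mu_n, 1.0 / np.sqrt(lam_n * tau_prec))  # mean-reward draw
    return P, R
```

Calling this $K$ times yields the posterior MDP samples $M_1,\dots,M_K$ used throughout the method.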
\vskip-8pt
\section{Off-Policy Policy Evaluation and Uncertainty Estimation}
\vskip-8pt
We now illustrate how the ESRL framework can be used to construct efficient point estimates of the value function, and their posterior distributions. Hypothesis testing can also be used to assess whether the difference in value of two policies is statistically significant (i.e. $\mu^\alpha$ vs. $\pi$).
To compute the estimated value of a given policy $\tilde\mu$, we sample $K$ models from the posterior and navigate $M_k$ using $\tilde\mu$. This yields samples $V_{\tilde\mu,1}^{(k)}\sim f_V(\cdot|\pmb D_T)$. We estimate $\mathbb{E}\left[V^{M^*}_{\tilde\mu,1}(s)|\pmb D_T\right]$ with $\hat V_{\tilde\mu}=\frac{1}{K}\sum_{k=1}^KV^{(k)}_{\tilde\mu,1}$. Note that we average over the initial states as well, as we are interested to know the marginal value of the policy. A conditional value of the policy function $V_{\tilde\mu,1}^{M^*}(s_0)$ can also be computed simply by starting all samples at a fixed state $s_0$.
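Under each sampled model the value of a fixed deterministic policy can be computed exactly by backward induction, and $\hat V_{\tilde\mu}$ is the average over models. A sketch with assumed shapes (exact evaluation per model avoids rollout noise, so the only randomness is over posterior draws):

```python
import numpy as np

def value_of_policy(P, R, mu, p0, tau):
    """Exact finite-horizon value of deterministic policy mu under one
    sampled model. P: (A, S, S), R: (S, A), mu(s, t) -> action index,
    p0: initial-state distribution. Returns E_{s ~ p0}[V_{mu,1}(s)]."""
    S = R.shape[0]
    V = np.zeros(S)
    for t in range(tau, 0, -1):
        a = np.array([mu(s, t) for s in range(S)])
        V = R[np.arange(S), a] + np.array([P[a[s], s] @ V for s in range(S)])
    return float(np.dot(p0, V))

def v_hat(sampled_mdps, mu, p0, tau):
    """Posterior value draws V^{(k)} and their average, the OPPE estimate."""
    draws = np.array([value_of_policy(P, R, mu, p0, tau)
                      for P, R in sampled_mdps])
    return draws.mean(), draws
```

The returned draws also give the posterior spread, from which credible intervals can be read off.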
\begin{theorem}\label{theorem: V function}
Let $\tilde\mu:\mathcal{S}\times\{1,\dots,\tau\}\mapsto\mathcal{A}$ be a pre-specified policy,
\[
\mathbb{E}\left[\hat V_{\tilde\mu}\bigg|\pmb D_T\right]=\mathbb{E}\left[V^{M^*}_{\tilde\mu,1}(s)\bigg|\pmb D_T\right],
\hat V_{\tilde\mu}-\mathbb{E}\left[V^{M^*}_{\tilde\mu,1}(s)\bigg|\pmb D_T\right]=O_p\left(K^{-\frac{1}{2}}\right).
\]
\end{theorem}
Theorem \ref{theorem: V function} ensures that we are indeed estimating the quantity of interest. It establishes that $\hat V_{\tilde\mu}$ is consistent and unbiased for $\sum_{s\in\mathcal{S}}P_0(s)V^{M^*}_{\tilde\mu,1}(s)$. As the MDP $M^*$ is stochastic, point estimates without measures of uncertainty are not sufficient to evaluate the quality of a policy. For example, in an application such as healthcare, there might be policies for which the second-best action (treatment) is not significantly different in terms of value, but carries fewer associated secondary risks. Including a secondary risk directly into the method might force us to make strong modeling assumptions. Therefore, testing whether such policies yield a statistically significant difference in value is important. With this information, one can devise a policy that always chooses the safest action (e.g. in clinical terms); if this yields a value equivalent to the optimal policy, then it is preferable.
\paragraph{Policy-level hypothesis testing.} Define the value function null hypothesis for two fixed policies $\tilde\mu_1,\tilde\mu_2$ as the event in which policy $\tilde\mu_1$ has a higher expected value than $\tilde\mu_2$ conditional on $\pmb D_T$: $H_0:\mathbb{E}_{s\sim P_0,M^*}\left[V_{\tilde\mu_1,1}(s)|\pmb D_T\right]>\mathbb{E}_{s\sim P_0,M^*}\left[V_{\tilde\mu_2,1}(s)|\pmb D_T\right]$. The probability of the null under the true model $M^*$ is
\begin{align*}
\mathbb{P}_{\mu}\left(H_0|\pmb D_T\right)
=
\sum_{s\in\mathcal{S}}P_0(s)\mathbb{P}\left(V^{M^*}_{\tilde\mu_1,1}(s)>V_{\tilde\mu_2,1}^{M^*}(s)\bigg|s,\pmb D_T\right).
\end{align*}
We use samples $V_{\tilde\mu_\ell}^{(k)}$, $\ell=1,2$ to estimate the probability of the null with
$
\hat{\mathbb{P}}_\mu\left(H_0|\pmb D_T\right)
=
\frac{1}{K}\sum_{k=1}^KI\left(V^{(k)}_{\tilde\mu_1,1}(s)>V^{(k)}_{\tilde\mu_2,1}(s)\right).
$ Consistency of this estimator is shown in the Appendix \ref{section: other proofs}.
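The estimator above amounts to a one-line comparison of paired posterior value draws (illustrative sketch):

```python
import numpy as np

def null_prob(v1_samples, v2_samples):
    """Estimate P(H0 | D_T): the fraction of sampled models under which
    policy mu_1 attains a strictly higher value than policy mu_2."""
    v1 = np.asarray(v1_samples, dtype=float)
    v2 = np.asarray(v2_samples, dtype=float)
    return float(np.mean(v1 > v2))
```

A small value supports concluding that $\tilde\mu_2$ (e.g. the ESRL policy) improves on $\tilde\mu_1$ (e.g. the behavior policy).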
\vskip-8pt
\section{Experiments and Application}\label{section: experiments}
\vskip-8pt
We perform several analyses to assess ESRL policy learning, sensitivity to the risk-aversion parameter $\alpha$, and value function estimation, and finally illustrate how we can interpret the posteriors within the context of the application. The code for implementing ESRL with detailed comments is publicly available\footnote{https://github.com/asonabend/ESRL}. We use the Riverswim environment \citep{Strehl2008}, and a Sepsis data set built from MIMIC-III data \citep{mimic}. We compare ESRL to several methods: a) a \textit{naive} baseline made from an ensemble of $K$ DQN models (DQNE), where we simply use the mean for selecting actions; this benchmark is meant to shed light on the empirical benefit of the hypothesis testing in ESRL. b) Since ESRL deviates from the behavior policy only when allowed by the hypothesis testing, to further investigate the benefit of hypothesis testing we implement behavior cloning (BC). c) We explore Batch Constrained deep Q-learning (BCQ), which uses regularization towards the behavior policy for offline RL \citep{gulcehre2020rl,fujimoto2019benchmarking,fujimoto2019off}. d) Finally, we also implement a strong benchmark that leverages ensembles and uncertainty estimation in the context of offline RL using random ensembles (REM) \citep{agarwal2019REM}. For Riverswim we use two 128-unit layers; for Sepsis we use 128- and 256-unit layers, respectively \citep{Raghu}. For ESRL, we use conjugate Dirichlet/multinomial and normal-gamma/normal priors and likelihoods for the transition and reward functions, respectively.
\vskip-8pt
\subsection{Riverswim}
\vskip-8pt
The Riverswim environment \citep{Strehl2008} requires deep exploration for achieving high rewards. There are 6 states and two actions: swim right or left. Only swimming left is always successful. There are only two ways to obtain rewards: swimming left while in the far left state will yield a small reward (5/1000) w.p. 1, swimming right in the far right state will yield a reward of 1 w.p. 0.6. The episode lasts 20 time points. We train policy $\pi^0$ using PSRL \citep{Osband2013} for 10,000 episodes, we then generate data set $\pmb D_T$ with $\pi$, varying both size $T$ and noise $\epsilon$. The offline trained policies are then tested on the environment for 10,000 episodes. This process is repeated 50 times.
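For reference, the chain can be sketched as below. The reward structure follows the description above; the right-action slip probabilities are our own assumption, as they are not specified in the text.

```python
import random

class RiverSwim:
    """Sketch of the Riverswim chain described in the text: 6 states,
    actions LEFT (always succeeds) and RIGHT (may fail against the current).
    Reward 0.005 for LEFT in state 0; reward 1 w.p. 0.6 for RIGHT in
    state 5. The right-action slip probabilities below are assumed."""
    LEFT, RIGHT = 0, 1

    def __init__(self, n_states=6, rng=random):
        self.n, self.rng, self.s = n_states, rng, 0

    def step(self, a):
        r = 0.0
        if a == self.LEFT:
            if self.s == 0:
                r = 0.005                        # small sure reward on far left
            self.s = max(self.s - 1, 0)
        else:
            if self.s == self.n - 1 and self.rng.random() < 0.6:
                r = 1.0                          # large stochastic reward on far right
            u = self.rng.random()                # assumed slip model for RIGHT
            if u < 0.3:
                self.s = max(self.s - 1, 0)      # pushed back by the current
            elif u < 0.6:
                pass                             # stays in place
            else:
                self.s = min(self.s + 1, self.n - 1)
        return self.s, r
```

The asymmetry (sure small reward left, risky large reward right) is what makes deep exploration necessary.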
\vskip-8pt
\begin{figure}[h!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=.14\textheight]{figures/riveswim_rewards_T200}
\caption{Mean reward for $T$=200 episodes, while varying $\epsilon$-greedy behavior policy $\pi$.}
\end{subfigure}~
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=.14\textheight]{figures/riveswim_rewards_eps05.png}
\caption{Mean reward for $\epsilon=0.05$ in the behavior policy, while varying the number of episodes $T$ in $\pmb D_T$.}
\end{subfigure}
\caption{Mean test reward per episode for policies trained offline with ESRL ($\alpha = 0.01,0.05,0.1$), DQN, DQNE, BC, BCQ, and REM on Riverswim. Optimal policy expected reward is 2.}
\label{fig: riverswim reward}
\end{figure}
\vskip-8pt
\paragraph{Policy Learning.} We first assess ESRL on Riverswim. The training set size $T$ is kept low so that the dynamics of the environment cannot be fully learned. We train an offline policy using ESRL with different risk aversion parameters $(\alpha=0.01,0.05,0.1)$. Figure \ref{fig: riverswim reward} (a) shows the mean reward for $T=200$ episodes while varying $\epsilon$. ESRL proves robust to the quality of the behavior policy. This is expected: when $\epsilon$ is low the environment is not fully explored, which yields high variance in the $Q$ posteriors and leads ESRL to fail to reject the null more often and favor the behavior policy. For noisier data-generating policies there is greater exploration of the environment, which yields narrower $Q$ function posteriors, leading ESRL to reject the null when it is indeed beneficial to do so. When the behavior policy is almost deterministic, the smaller risk aversion parameter $\alpha$ yields good results, as ESRL then almost always imitates the behavior policy. BC does well, as it estimates the expert behavior well enough regardless of the noise level. Overall, the $Q$-learning methods lack enough data to learn a good policy. Figure \ref{fig: riverswim reward} (b) compares the methods under a nearly deterministic behavior policy ($\epsilon=0.05$), so there is little exploration in $\pmb D_T$. ESRL is robust, as wide posteriors keep it from deviating from $\pi$. Methods other than BC generally fail, likely due to the lack of exploration in $\pmb D_T$. Note, however, that in real-world data the behavior policy is not necessarily optimal, in which case BC will likely underperform relative to ESRL and others when a high-noise expert policy yields a well-explored MDP; this is the case in the Sepsis results shown in Figure \ref{fig:sepsis_post} (c).
Finally, it is worth noting that REM outperforms DQNE on Sepsis but not on Riverswim. We believe this is because the DQN networks in Riverswim are smaller; REM outperforms DQNE in a more complex, higher-variance setting with more training data, such as the Sepsis setting of Section \ref{section: sepsis}.
\begin{wrapfigure}[17]{R}{0.5\textwidth}
\includegraphics[height=.165\textheight]{figures/V_estimation_riverswim_eps015_alpha_015.png}
\caption{Mean squared error and 95\% confidence bands for OPPE of an ESRL policy. We compare step importance sampling (IS), step weighted IS (WIS), a non parametric model (NPM), an NPM ensemble (NPME) and ESRL estimation.}
\label{fig: value func}
\end{wrapfigure}
Figure \ref{fig: value func} shows the mean squared error (MSE) and 95\% confidence bands for value estimation of an ESRL policy using $\pmb D_T$ while varying $T$. We compare it with sample-based estimates, step importance sampling (IS) and step weighted IS (WIS), and with model-based estimates, which use a full non-parametric model (NPM) and an NPM ensemble (NPME). The non-parametric models compute the reward and transition probability tables from observed counts. The policy is then evaluated by using the tables as an MDP model, where states are drawn using the estimated transition probability matrix. NPM uses 1,000 episodes to evaluate a policy; NPME is an average over 100 NPM estimates. On small data sets ESRL performs substantially better, as it uses the model posteriors to compensate for rarely visited states in $\pmb D_T$. Eventually the priors (which are misspecified for some state-action pairs) lose importance and ESRL converges to the non-parametric estimates. Sample-based estimates are consistently less efficient but converge to the true policy value with enough data.
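For reference, the step-wise IS and WIS baselines compared above can be sketched as follows. This is a generic implementation of the standard estimators under our own naming, not the authors' code.

```python
import numpy as np

def per_step_is(episodes, pi_e, pi_b, gamma=1.0, weighted=False):
    """Step-wise (weighted) importance sampling value estimate.

    episodes: list of [(s, a, r), ...] trajectories logged under pi_b.
    pi_e, pi_b: arrays with pi[s, a] = probability of action a in state s.
    """
    H = max(len(ep) for ep in episodes)
    num = np.zeros(H)  # weighted discounted rewards per step
    den = np.zeros(H)  # sum of importance weights per step
    for ep in episodes:
        w = 1.0
        for t, (s, a, r) in enumerate(ep):
            w *= pi_e[s, a] / pi_b[s, a]  # cumulative importance ratio
            num[t] += w * (gamma ** t) * r
            den[t] += w
    if weighted:  # WIS: normalize by the sum of weights at each step
        safe = np.where(den == 0, 1.0, den)
        return float(np.sum(num / safe))
    return float(np.sum(num) / len(episodes))
```

When the evaluation and behavior policies coincide, both estimators reduce to the empirical mean return, which is a useful sanity check.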
\vskip-8pt
\paragraph{Hypothesis testing and interpretability with $Q$ function posterior distributions.} We illustrate the interpretability of ESRL on Riverswim, as it is a simple, intuitive setting. Figure \ref{riverswim Q posteriors} shows three $Q$ function posterior distributions $f_Q(\cdot|s,t,\pmb D_T)$, each for a fixed state-time pair $(s,t)$. Display (a) shows $Q$ functions for the far left state at an advanced time point $t=17$. There is high certainty (no overlap in the posteriors) that swimming left will yield a higher reward, as left is successful w.p. 1. $Q_{17}(0,\mathit{left})$ has a wider posterior, as this $(s,a)$ pair is not common in $\pmb D_T$.
Display (b) is the most interesting: it sheds light on the utility of uncertainty measures. A naive RL method that considers only mean values would choose the optimal action according to $\mu$: swimming left. However, there is high uncertainty associated with this choice. In fact, we know that the optimal action in Riverswim at $(s,t)=(2,2)$ is $\mathit{right}$; the hypothesis test will fail to reject the null and use the behavior action, which leads to a higher expected reward. Display (c) shows $Q$ posteriors for the state furthest to the right, at $t=5$. Choosing right will be successful with high certainty: the $Q_5(5,\mathit{left})$ posterior is narrow. Swimming left will still yield a relatively high reward, as in the next time point the agent will proceed with the optimal policy (choosing right). As there is no overlap in (a) and (c), the best choice is clear, as a hypothesis test would reflect.
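The test-then-act logic described here can be illustrated with a simple Monte Carlo approximation over posterior $Q$ samples. This is a sketch of the idea in our own naming; the paper's exact test statistic may differ.

```python
import numpy as np

def risk_averse_action(q_samples, behavior_action, alpha=0.05):
    """Choose the posterior-mean-optimal action only when the posterior
    probability that it beats the behavior action is at least 1 - alpha.

    q_samples: (K, n_actions) array, one row per sampled MDP's Q(s, t, .).
    """
    mean_q = q_samples.mean(axis=0)
    candidate = int(np.argmax(mean_q))
    if candidate == behavior_action:
        return candidate
    # Monte Carlo estimate of P(Q(candidate) > Q(behavior) | data)
    p_better = np.mean(q_samples[:, candidate] > q_samples[:, behavior_action])
    return candidate if p_better >= 1 - alpha else behavior_action
```

With heavily overlapping posteriors, as in display (b), the probability estimate stays far from $1-\alpha$ and the behavior action is kept; with well-separated posteriors, as in (a) and (c), the candidate action is taken.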
\vskip-8pt
\begin{figure}
\centering
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[height=.12\textheight]{figures/Q_s0_t17.png}
\caption{$f_Q(\cdot|s=0,t=17,\pmb D_T)$}
\end{subfigure}~
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[height=.12\textheight]{figures/Q_s1_t15.png}
\caption{$f_Q(\cdot|s=2,t=2,\pmb D_T)$}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[height=.12\textheight]{figures/Q_s5_t5.png}
\caption{$f_Q(\cdot|s=5,t=5,\pmb D_T)$}
\end{subfigure}
\caption{Posterior distributions of $Q_t(s,a)$ functions for fixed $(s,t)$. We use $K=250$ MDP samples. Observed data, $\pmb D_T$ has $T=1000$ episodes, generated with $\epsilon =0.2$.}
\label{riverswim Q posteriors}
\end{figure}
\begin{figure}[!tb]
\centering
\begin{subfigure}{.35\textwidth}
\centering
\includegraphics[height=.14\textheight]{figures/Q_post_s90_t7_sepsis.png}
\caption{$Q$ function posterior distributions for $(s,t)=(90,7)$,\\ $a\in\{0,1,2,3,\pi(90,7),\mu(90,7)\}$.}
\end{subfigure}~
\begin{subfigure}{.35\textwidth}
\centering
\includegraphics[height=.14\textheight]{figures/Q_post_s5_t8_sepsis.png}
\caption{$Q$ function posterior distributions for $(s,t)=(5,8)$,\\ $a\in\{0,1,2,3,\pi(5,8),\mu(5,8)\}$.}
\end{subfigure}~
\begin{subfigure}{.35\textwidth}
\centering
\includegraphics[height=.135\textheight]{figures/value_fn_sepsis.png}
\caption{Posterior distribution for $\hat V_{\hat\mu}$,\\ for $\pi$, ESRL, BC, BCQ, DQN,\\ DQNE, REM.}
\end{subfigure}
\caption{Display (a) \& (b) show posterior distributions of $Q$ functions at fixed $(s,t)$. Display (c) shows posteriors $\hat V$ for policies: $\pi$ and $\mu^\alpha$ for $\alpha=0.01,0.05,0.1$, DQN, DQNE, BC, BCQ and REM for $K=500$.}
\label{fig:sepsis_post}
\end{figure}
\vskip-8pt
\subsection{Sepsis}\label{section: sepsis} We further test ESRL on a Sepsis data set built from MIMIC-III \citep{mimic}. Sepsis is a state of infection in which the immune system becomes overwhelmed, potentially causing tissue damage, organ failure, and death. Deciding treatments and medication dosages is a dynamic and highly challenging task for clinicians. We consider an action space representing the dosage of intravenous fluids for hypovolemia (IV fluids) and of vasopressors to counteract vasodilation. The action space $\mathcal{A}$ has size 25: a $5\times5$ grid over discretized doses of vasopressors and IV fluids. The state space is composed of 1,000 clusters estimated using K-means on a 46-feature space containing measures of the patient's physiological state. We use the negative SOFA score as a reward \citep{Raghu}, transformed to lie between 0 and 1. The data set has 12,991 episodes of 10 time steps each, with measurements at 4-hour intervals. We used 80\% of the episodes for training and 20\% for testing.
Figure \ref{fig:sepsis_post} (a) \& (b) show posterior distributions for two $(s,t)$ pairs in the Sepsis data set, hand-picked to illustrate interpretability. For simplicity we show only the best action $\mu(s,t)$, the physician's action $\pi(s,t)$, and four other low-dose actions $a\in\{0,\dots,3\}$. Display (a) shows posterior distributions for a state rarely observed in $\pmb D_T$, hence the $Q$ functions have relatively high standard errors. The expected cumulative inverse SOFA value for this state appears relatively stable no matter what action is taken. The $Q$ posteriors for $\mu$ and $\pi$ practically overlap, so there is no reason to deviate from $\pi$; this is encoded into $\mu^\alpha$ through hypothesis testing. Interpretability is useful in such cases: a physician might see that there is no difference between actions, as all will yield similar SOFA scores, and can therefore choose an action that lowers the risk of side effects.
Display (b), on the other hand, shows a common state in $\pmb D_T$: the low standard errors allow the policy to deviate from $\pi$ at any $\alpha$ level. Within this state, the actions chosen by $\pi$ and $\mu$ are usually selected, so the posteriors for their $Q$ functions are narrow, as opposed to those for $a=0,3$. The latter actions appear sub-optimal, so they are less often chosen by physicians and thus rarely seen in $\pmb D_T$.
Figure \ref{fig:sepsis_post} (c) shows the posterior distribution of the Sepsis value function for the different policies. The distribution is bi-modal: it is easier to control the SOFA scores of patients in the set of states underlying the right mode. Physicians do this well, as shown by the posterior value function for $\pi$, and ESRL picks up on this. The cluster of states in the left mode appears harder to control. We can appreciate how deviating from the physician's policy is strikingly damaging to the expected value on the test set. DQN, BCQ, DQNE, and BC generalize better but underperform relative to ESRL and REM. The data set $\pmb D_T$ is probably not large enough to generalize to the test set, given the high-dimensional state and action spaces. ESRL captures this through hypothesis testing and hardly deviates from the behavior policy. Thus, it is clear that we cannot do better than $\pi$ given the information in the data, but the posteriors suggest the need to learn safe policies, as we can do substantially worse with methods that do not account for uncertainty and safety.
\vskip-8pt
\section{Conclusion}
\vskip-8pt
We propose an Expert-Supervised RL (ESRL) framework for offline learning based on Bayesian RL. The framework can learn safe policies from observed data sets: it accounts for uncertainty in the MDP and in the data logging process to assess when it is safe and beneficial to deviate from the behavior policy. ESRL allows for different levels of risk aversion, chosen within the application context. We show an $\tilde{\mathcal{O}}(\tau S\sqrt{AT})$ Bayesian regret bound that is independent of the risk aversion level, which can thus be tailored to the environment and the noise level in the data set. The ESRL framework can be used to obtain interpretable posterior distributions for the $Q$ functions and for OPPE. These posteriors are flexible enough to account for any possible policy function and are amenable to interpretation within the context of the application. An important limitation of ESRL is that it cannot readily handle continuous state spaces, which are common in real-world applications. Another extension we are interested in exploring is the comparison of credible intervals as an alternative to the null probability estimates. We believe ESRL is a step towards bridging the gap between RL research and real-world applications.
\vskip-8pt
\section*{Broader Impact}
\vskip-8pt
We believe ESRL is a tool that can help bring RL closer to real-world applications. In particular, it will be useful in clinical settings for finding optimal dynamic treatment regimes for complex diseases, or at least for assisting in treatment decision making. ESRL's framework lends itself to being questioned by users (e.g., physicians) and sheds light on potential biases introduced by the sampling mechanism used to generate the observed data set. Additionally, the use of hypothesis testing and the accommodation of different levels of risk aversion make the method well suited to offline settings and to different real-world applications. When using ESRL, or any RL method, it is important to question the validity of the policy's decisions, the quality of the data, and the method used to derive them.
\begin{ack}
We thank Eric Dunipace for great discussions on Bayesian hypothesis testing and the reviewers for thoughtful feedback, especially regarding state-of-the-art benchmark methods. Funding in support of this work is in part provided by Boehringer Ingelheim Pharmaceuticals. Leo A. Celi is funded by the National Institute of Health through NIBIB R01 EB017205.
\end{ack}
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
An active particle (or microswimmer), be it a living cell or a synthetic swimmer, converts the internal or ambient free energy into work as it moves through a viscous fluid \citep{Lauga2009,bechinger16,gompper2020}.
From a broad hydrodynamic perspective, the physics behind the propulsion of an active swimmer can be divided into two parts: the \emph{inner} problem which concerns the generation of the propulsive thrust, and the \emph{outer} problem which focuses on how swimmers interact with their neighbouring environment through altering their surrounding fluid. While in the former accounting for the details of the mechanism behind the impetus of each specific swimmer is essential (for example through cilia \citep{blake1974} or a phoretic mechanism \citep{golestanian05,nasouri2020}), in the latter, one can use a generic approach to describe the flow field induced by the swimmer \citep{kim1991,lauga2016stresslets,nasouri2018higher}.
Specifically for self-propelling axisymmetric swimmers, this generic approach classifies the swimmers into three groups, pushers, pullers, and neutral swimmers, often referred to as the microswimming types \citep{Underhill.Graham2008,Lauga2009}. This categorization, which stems from the far-field description of the motion of a particle in a viscous fluid, relies on the fact that self-propulsion is force- and torque-free, so that the leading-order flow field induced by an active swimmer can be described solely by a symmetric force dipole, i.e. a stresslet \citep{batchelor1970stress}. Based on the strength of this force dipole, a swimmer is a puller when it generates the impetus from its front end, a pusher when the thrust originates from the rear end, and neutral when this strength is zero. Examples of pusher-type microswimmers include \emph{E.\ coli} bacteria, which utilize bundles of rotating helical filaments at their rear \citep{berke08}, and sperm cells, which propel themselves by propagating a wave along a flexible flagellum. An example of a puller-type microswimmer is \emph{Chlamydomonas reinhardtii}, which pulls in the fluid in front of it with a pair of flagella beating in a breaststroke-like fashion \citep{kantsler2013ciliary}. \emph{Volvox}, a multicellular colony of green algae, is a neutral swimmer \citep{drescher2009,Pedley.Goldstein2016}, whereas \emph{Paramecium} is a weak pusher \citep{Zhang.Jung2015}.
Swimmers of different types interact differently with their surroundings. For instance, unlike puller-like swimmers, pushers can be hydrodynamically trapped by nearby obstacles or by other pusher swimmers \citep{berke08,spagnolie2015, daddi18jpcm, sprenger2020towards}. The stresslet further determines the intensity of fluid stirring in suspensions of swimmers \citep{Lin.Childress2011}.
Although the effect of the swimming type on the interaction of a swimmer with other swimmers and boundaries has been well explored, its energetic implications are yet to be fully understood. For surface-driven spherical swimmers, it has been shown that the viscous dissipation of neutral swimmers is minimal compared to that of pushers and pullers, and so neutral swimmers are often considered the optimal type \citep{michelin2010}. However, the natural question of whether this statement holds when the swimmer is not perfectly spherical remains largely unanswered. This is the question we address in this study.
The question of energetic efficiency and optimal propulsion, i.e. minimizing the dissipation while maintaining the swimming speed or, equivalently, maximizing the swimming speed while maintaining the dissipation, is a long-standing problem.
Earlier theoretical works focused on the optimal locomotion of flagellated micro-organisms \citep{pironneau1974optimal, lighthill1975}.
In particular, the optimal shape of a periodically actuated planar flagellum deforming via a travelling wave has been derived computationally \citep{lauga2013shape} and shown to agree well with the waveform assumed by sperm cells of marine organisms.
The optimal swimming strokes and self-propulsion efficiencies of spherical and cylindrical bodies undergoing small deformations with respect to a reference shape have also been investigated \citep{shapere1987self, shapere1989efficiencies}.
Further studies considered the full optimization problem for simple mechanically actuated model microswimmers \citep{alouges2007,Nasouri.Golestanian2019}.
Generally, the quest for the optimal propulsion strategy requires the solution of both the inner and the outer problem.
The swimming efficiency of ciliated microswimmers can be determined directly by numerical simulation \citep{Ito.Ishikawa2019,Omori.Ishikawa2020}, but it is more common to use a coarse-grained approach, namely to separately calculate the dissipation in the propulsive layer and then replace this layer with an effective slip velocity when determining the external flow \citep{Keller.Wu1977,osterman2011,vilfan2012,sabass2010}. A fundamental limit on the swimming efficiency can be obtained by finding the slip profile that minimizes the external dissipation for a given swimming speed.
For spherical swimmers, by using the classical squirmer model of \citet{lighthill1952} and \citet{blake1971spherical}, one can show that the contribution of the second squirming mode (which characterizes the stresslet) to the dissipated power can only be positive. Because the swimming speed of a spherical squirmer is independent of this second mode, minimizing the dissipation requires the second mode to vanish, making the optimal swimmer a neutral one \citep{Blake1973}. However, such a simple decomposition of contributions cannot be achieved for non-spherical swimmers, and so the correlation between the dipole coefficient and the dissipation is not clearly known. Recently, using the boundary element method and numerical optimization, \citet{guo2021optimal} showed on example shapes that when the swimmer body is not fore-aft symmetric, pushers or pullers can be more efficient than neutral swimmers. In this study, we systematically investigate the relation between the stresslet and the shape of nearly spherical optimal swimmers. By employing the recently derived minimum dissipation theorem \citep{Nasouri.Golestanian2021}, we circumvent the nonlinear optimization problem and arrive at the flow field of the optimal swimmer from the flow fields of two auxiliary passive problems. Remarkably, we find that the stresslet of an optimal swimmer is solely a function of the third Legendre mode describing its shape, so that depending on the value (or sign) of this mode, the optimal swimmer can be a pusher, a puller, or neutral.
\section{The problem statement}
In this study, our aim is to determine whether an optimal nearly spherical swimmer is a puller, pusher, or neutral. To this end, we consider a swimming body of axisymmetric shape moving with a steady velocity~$V_\mathrm{A}\bm{e}_z$, where $\bm{e}_z$ is a unit vector representing the axis of symmetry.
We parameterize the surface of the swimming object in axisymmetric spherical coordinates by
\begin{align}
\label{shape_function}
r(\theta)= a \left[ 1 + \sum_{\ell=1}^\infty \alpha_\ell P_\ell(\cos\theta) \right],
\end{align}
where $a$ denotes the radius of the undeformed sphere, $\theta$ represents the polar angle with respect to $\bm{e}_z$ and
$P_\ell$ is the Legendre polynomial of degree~$\ell$ (see figure~\ref{fig:schematic}). We assume $\alpha_\ell\ll 1$, thus the particle possesses a nearly spherical shape. Note that since the first mode merely implies body translation and does not indicate any departure from the spherical shape, we set $\alpha_1=0$.
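The shape function in equation~\eqref{shape_function} is straightforward to evaluate numerically; the following sketch (with illustrative mode amplitudes of our choosing) uses NumPy's Legendre series evaluator.

```python
import numpy as np
from numpy.polynomial import legendre as L

def shape_radius(theta, alpha, a=1.0):
    """r(theta) = a * [1 + sum_l alpha_l P_l(cos(theta))], with alpha a dict
    mapping Legendre degree l (l >= 2, since alpha_1 = 0) to amplitude alpha_l."""
    coeffs = np.zeros(max(alpha) + 1)
    for l, amp in alpha.items():
        coeffs[l] = amp
    # legval evaluates the Legendre series sum_l coeffs[l] * P_l(x)
    return a * (1.0 + L.legval(np.cos(theta), coeffs))

# A slightly prolate shape with a small third-mode (fore-aft asymmetric) perturbation
r_pole = shape_radius(0.0, {2: 0.05, 3: 0.02})
```

Since $P_\ell(1)=1$ for all $\ell$, the radius at the pole $\theta=0$ is simply $a(1+\sum_\ell \alpha_\ell)$, a quick consistency check.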
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{fig1}
\caption{{Schematic of the nearly-spherical swimmer described in equation~\eqref{shape_function}. The panels on the right show the isolated contribution of the first four modes in the shape function. Black lines show the perturbed shape and the gray lines illustrate the reference unperturbed sphere.
}}
\label{fig:schematic}
\end{figure}
At the small scales of microswimmers, viscous forces dominate inertial forces, and the flow is governed by the Stokes equations $\boldsymbol{\nabla} \cdot \boldsymbol{\sigma} = \boldsymbol{0}$ and $\boldsymbol{\nabla} \cdot \vect{v} = 0$ with $\vect{v}$ denoting the flow field, $\boldsymbol{\sigma}=-p \vect{I} +\mu\left( \boldsymbol{\nabla} \vect{v} + \boldsymbol{\nabla} \vect{v}^\top\right)$ the stress field, and $p$ the pressure field. The swimmer is surface-driven and its active mechanism induces an effective tangential slip-velocity $\vect{v}^s$ on its surface, which imposes the boundary condition on the fluid velocity in the co-moving frame $\vect{v}=\vect{v}^s$. The slip profile determines the swimming velocity $V_A$ through a relationship that can be derived from the Lorentz reciprocal theorem \citep{stone1996}. The dissipated power is given by $P=-\int \vect{v}^s \cdot \vect{\sigma} \cdot \vect{n}\, dS$.
We consider the swimmer to be optimal, thus this slip profile minimizes the viscous dissipation $P$, while maintaining the swimming speed $V_\mathrm{A}$.
{
We should note that the present analytical description of microswimmers applies exclusively to non-deformable active swimmers of nearly-spherical shape.
Prime examples of these swimmers include a broad class of ciliated microorganisms or synthetic microswimmers that achieve locomotion via a thin slip layer (e.g. self-phoretic mechanisms).
}
As discussed earlier, the far-field flow generated by the force- and torque-free motion of a microswimmer has the form (to the leading order) $\vect v(\vect x)= -(3 / (8\pi \mu)) (\vect x \cdot \vect S \cdot \vect x) \, \vect x/r^5$ and is characterized by the stresslet $\vect S$. Here, since the motion is axisymmetric, the stresslet takes the simple form of
\begin{equation}
\vect{S} = 8\pi \mu a^2 V_A\, \beta
\left( \vect{e}_z \vect{e}_z - \tfrac{1}{3} \, \vect{I} \right),
\label{eq:stresslet}
\end{equation}
where $\beta$ is the dimensionless dipole coefficient \citep{batchelor1970stress,nasouri2018higher}. Under this definition, the sign of $\beta$ determines the swimming type such that $\beta < 0$ holds for pushers, $\beta > 0$ for pullers, and $\beta=0$ indicates neutral swimming. Thus, to determine the swimming type of an optimal nearly spherical swimmer, we need to find the relation between $\beta$ and $\alpha_\ell$.
Conventionally, finding the flow field surrounding an optimal swimmer requires extensive optimization schemes, often implemented by means of computational tools. Here, we instead apply a fundamental theorem that sets the lower bound on the energy dissipation of a self-propelled active microswimmer of arbitrary shape \citep{Nasouri.Golestanian2021}. It states that the flow induced by an active swimmer with minimal dissipation can be conveniently expressed as a linear superposition of the flows around two passive bodies of the same shape, satisfying no-slip and perfect-slip boundary conditions at their surfaces, respectively. This theorem relies on the fact that perfect-slip bodies require the least dissipation for motion, suggesting that a swimmer with a \textit{similar} slip profile will be more efficient. A superposition with the no-slip problem is needed to obtain a force-free flow around the active swimmer (see \citet{Nasouri.Golestanian2021} for the details of the derivation). Specifically, defining $\bm{v}_A$ as the flow field induced by the motion of the optimal swimmer, this theorem dictates
\begin{align}
\label{theorem}
\bm{v}_A=\bm{v}_\text{PS}-\bm{v}_\text{NS}
\end{align}
where $\bm{v}_\text{PS}$ is the flow field due to the motion of a passive perfect-slip body of the same shape translating with speed $V_\text{PS}=[R_\text{NS}/(R_\text{NS}-R_\text{PS})]{V}_A$, and $\bm{v}_\text{NS}$ is the flow field of its no-slip counterpart moving with speed $V_\text{NS}=[R_\text{PS}/(R_\text{NS}-R_\text{PS})] {V}_A$, with ${R}_\text{NS}$ and ${R}_\text{PS}$ being the translational drag coefficients for the no-slip and the perfect-slip body, respectively.
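The superposition weights can be checked against the spherical limit, where both drag coefficients are classical results: the Stokes drag $R_\text{NS}=6\pi\mu a$ for a no-slip sphere and the bubble drag $R_\text{PS}=4\pi\mu a$ for a perfect-slip sphere. The following is a simple numerical sanity check under these assumed limits.

```python
import numpy as np

def superposition_speeds(R_NS, R_PS, V_A):
    """Speeds of the two auxiliary passive problems in the minimum
    dissipation theorem, so that v_A = v_PS - v_NS."""
    V_PS = R_NS / (R_NS - R_PS) * V_A
    V_NS = R_PS / (R_NS - R_PS) * V_A
    return V_PS, V_NS

# Sphere of radius a: R_NS = 6*pi*mu*a (no-slip), R_PS = 4*pi*mu*a (perfect slip),
# giving V_PS = 3 V_A and V_NS = 2 V_A.
mu, a = 1.0, 1.0
V_PS, V_NS = superposition_speeds(6 * np.pi * mu * a, 4 * np.pi * mu * a, 1.0)
```

By construction $V_\text{PS}-V_\text{NS}=V_A$, so the superposed flow indeed carries the swimmer at its prescribed speed.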
Accordingly, by means of this theorem, the optimization problem is reduced to finding the flow fields of two passive systems (henceforth referred to as `PS' and `NS') and their corresponding drag coefficients. Following an asymptotic approach, we will show that the dipole coefficient takes a particularly simple form and can be expressed solely in terms of the third Legendre mode as
\begin{equation}
\beta = \frac{27}{14} \, \alpha_3 \, . \label{centralResult}
\end{equation}
Based on this, the nearly spherical optimal swimmer is classified as a pusher when $\alpha_3 < 0$, a puller when $\alpha_3 > 0$, and neutral when $\alpha_3 = 0$.
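The resulting classification rule of equation \eqref{centralResult} is immediate to implement; the following trivial illustration is ours.

```python
def swimmer_type(alpha3, tol=1e-12):
    """Classify a nearly spherical optimal swimmer from its third shape mode,
    using beta = (27/14) * alpha_3."""
    beta = 27.0 / 14.0 * alpha3
    if beta > tol:
        return beta, "puller"
    if beta < -tol:
        return beta, "pusher"
    return beta, "neutral"
```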
\section{Solution of the passive problem}
As discussed earlier, to find the flow field of the optimal active swimmer, we only need to determine the flow fields around a passive body of the same shape, once with a no-slip and once with a perfect-slip boundary condition.
Recalling that the particle is nearly spherical (i.e., $\alpha_\ell \ll 1$), we use an asymptotic approach to find the flow fields and expand all quantities in terms of the surface modes. At zeroth order (denoted by `(0)'), we recover the flow fields due to the passive motion of a spherical particle with no-slip and perfect-slip boundary conditions. The first-order correction (denoted by `(1)') is then due to the departure of the surface from the spherical shape and, by the linearity of the field equations, must be a linear superposition of the surface modes, e.g. $\vect{v}=\vect{v}^{(0)}+\sum_\ell \alpha_\ell \vect{v}^{(1)}_\ell$. In what follows, we find the zeroth- and first-order flow fields for both the NS and PS problems by applying Lamb's solution at each order separately.
We should also account for the first-order correction to the surface normal vector. At zeroth order we have $\vect{n}^{(0)} = \vect{e}_r$, and the departure from the spherical shape leads to
$\vect{n}^{(1)} = -\sum_\ell \alpha_\ell P^1_\ell (\cos\theta) \, \vect{e}_\theta$, where $P_\ell^1(\cos\theta)=dP_\ell(\cos\theta)/d\theta$ is the associated Legendre polynomial of order one. The tangent vector is given by $\vect{t}^{(0)} = \vect{e}_\theta$ and $\vect{t}^{(1)} = \sum_\ell \alpha_\ell P^1_\ell (\cos\theta) \, \vect{e}_r$.
\subsection{No-slip problem}
The solution for the flow past a nearly spherical body with a no-slip boundary is discussed in \citet{happel1983}. In the following, we derive the flow field in a form that will be convenient for the solution of the active problem in the next section. Due to the linearity of the problem, we only need to solve the flow field for a single mode of surface deformation (e.g. $\alpha_\ell$); the complete solution is then obtained by linear superposition of all modes.
In the co-moving frame of reference, the no-slip boundary condition requires vanishing velocities at the deformed surface of the object such that
\begin{equation}
\vect{v}_\text{NS} = \vect{0} \qquad \text{at} \qquad r(\theta) =a\left[ 1 + \alpha_\ell P_\ell(\cos\theta)\right] \, .
\end{equation}
This condition can be expanded perturbatively to linear order in $\alpha_\ell$ as
\begin{equation}
\left. \vect{v}^{(0)} + \alpha_\ell \left( \vect{v}^{(1)} +
a \, \frac{\partial \vect{v}^{(0)}}{\partial r} \, P_\ell(\cos\theta) \right)
\right|_{r=a} = \bm{0} \, .
\label{eq:bcns2}
\end{equation}
To find the Stokes flow that satisfies the above boundary condition, along with the condition $\vect{v}=-V_{\text{NS}} \, \vect{e}_z$ at $r\to \infty$, we use Lamb's general solution in spherical coordinates as an ansatz \citep{happel1983}. For axisymmetric problems, it simplifies to
\begin{subequations} \label{velocityComponents}
\begin{align}
\frac{v_r}{V_\text{NS}} &= -\cos\theta +
\sum_{n=1}^{\infty}
\frac{n+1}{2} \left( {n} \, A_n - 2{B_n} \left(\frac{a}{r}\right)^{2} \right) \left(\frac{a}{r}\right)^n P_n (\cos\theta) \, , \\
\frac{v_\theta}{V_\text{NS}} &=
\sin\theta +
\sum_{n=1}^{\infty}
\left( - \frac{n-2}{2} \, A_n + B_n \left(\frac{a}{r}\right)^{2} \right) \left(\frac{a}{r}\right)^n {P^1_n (\cos\theta)} \, ,
\end{align}
\end{subequations}
where $A_n$ and $B_n$ are series coefficients that must be determined from the boundary conditions.
The solution for the zeroth-order problem corresponding to an undeformed sphere can readily be obtained by imposing $v_r^{(0)} = 0$ and $v_\theta^{(0)} = 0$ at $r=a$.
This leads us to
$A_1^{(0)} = {3/2}$, $B_1^{(0)}= {1/4}$, and $A_n^{(0)} = B_n^{(0)}= 0$ for $n \ge 2$. The zeroth order flow field
\begin{equation}
\frac{v_r^{(0)}}{V_\mathrm{NS}} = -\frac{1}{2} \left( 2-\frac{3a}{r} + \frac{a^3}{r^3} \right) \cos\theta \, , \qquad
\frac{v_\theta^{(0)}}{V_\mathrm{NS}} = \frac{1}{4} \left( 4-\frac{3a}{r} - \frac{a^3}{r^3} \right) \sin\theta \, ,
\end{equation}
represents the well known flow past a no-slip sphere \citep{happel1983}.
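This zeroth-order solution is easy to verify numerically: in the co-moving frame the velocity vanishes on the sphere and tends to the uniform stream $-V_\mathrm{NS}\bm{e}_z$ far away. The following sketch, in our notation, encodes the two velocity components.

```python
import numpy as np

def noslip_flow(r, theta, a=1.0, V=1.0):
    """Zeroth-order Stokes flow past a no-slip sphere (co-moving frame):
    v_r = -(1/2)(2 - 3a/r + a^3/r^3) V cos(theta),
    v_theta = (1/4)(4 - 3a/r - a^3/r^3) V sin(theta)."""
    vr = -0.5 * (2.0 - 3.0 * a / r + (a / r) ** 3) * V * np.cos(theta)
    vt = 0.25 * (4.0 - 3.0 * a / r - (a / r) ** 3) * V * np.sin(theta)
    return vr, vt
```

Both components vanish identically at $r=a$, confirming the no-slip boundary condition.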
The boundary condition for the first-order problem \eqref{eq:bcns2} then reads $\vect{v}^{(1)} = -a \, (\partial \vect{v}^{(0)}/\partial r) \, P_\ell(\cos\theta)$ at $r=a$.
By noting that $\partial v_r^{(0)}/\partial r = 0$ at $r=a$, we find upon using appropriate orthogonality relations that only the series coefficients of order $n=\ell \pm 1$ have non-zero values.
Specifically, we find
$A_{\ell-1}^{(1)}=-A_{\ell+1}^{(1)}=-(3/2)/(2\ell+1)$, $B_{\ell-1}^{(1)}=-(3/4)(\ell-1)/(2\ell+1)$, and $B_{\ell+1}^{(1)}=(3/4)(\ell+1)/(2\ell+1)$.
The first-order correction to the flow can be evaluated by inserting these coefficients into the generic solution given in \eqref{velocityComponents}. Examples of flow patterns for the first three deformation modes are shown in the left column of figure~\ref{fig:flowFields}.
The drag force exerted on the object is determined by the force monopole as $F_\text{D} = -4\pi\mu a A_1 V_\text{NS}$.
Accordingly, the translational drag coefficient of the approximate sphere depends only on the zeroth and second Legendre modes and can be written as \citep{happel1983}
\begin{equation}
\frac{{R}_\mathrm{NS}}{6\pi\mu a}
= 1 - \frac{1}{5} \, \alpha_2 \, .
\label{eq:drag-coeff-NS}
\end{equation}
\subsection{Perfect-slip problem}
For the perfect-slip boundary condition, the impermeability and vanishing tangential stress need to be satisfied at the surface of the approximate sphere,
\begin{equation}
\vect{v}_\text{PS} \cdot \vect{n} = 0, \quad \text{and} \quad \bm{t} \cdot
\boldsymbol{\sigma}_\text{PS} \cdot \vect{n}={0}, \quad \text{at} \quad r(\theta) = a\left[1 + \alpha_\ell P_\ell(\cos\theta)\right] \, .
\end{equation}
A Taylor expansion up to linear order in $\alpha_\ell$ leads to
\begin{subequations}
\begin{align}
\left. v_r^{(0)}
+ \alpha_\ell \left( v_r^{(1)} + v_\theta^{(0)} P_\ell^1(\cos\theta)
+ a \, \frac{\partial v_r^{(0)}}{\partial r} \, P_\ell(\cos\theta)
\right) \right|_{r = a} &= 0 \, , \\
\left. \sigma_{r\theta}^{(0)} \
+ \alpha_\ell \left(
\sigma_{r\theta}^{(1)}
+ \left( \sigma_{rr}^{(0)} - \sigma_{\theta\theta}^{(0)} \right)P_\ell^1(\cos\theta)
+ a \, \frac{\partial \sigma_{r\theta}^{(0)}}{\partial r} \, P_\ell(\cos\theta)
\right) \right|_{r = a}
&= 0 \, .
\end{align}
\end{subequations}
Again, we solve the flow problem using Lamb's solution \eqref{velocityComponents} and determine the coefficients $A_n$ and $B_n$ that satisfy the above conditions. The solution for the zeroth-order problem corresponding to an undeformed sphere is obtained by requiring $v_r^{(0)} = 0$ and $\sigma_{r\theta}^{(0)} = 0$, which readily leads us to $A_1^{(0)} = 1$, $B_1^{(0)} = 0$, and $A_n^{(0)}= B_n^{(0)}= 0$ for $n \ge 2$. Thus, at the zeroth order we have
\begin{equation}
\frac{v_r^{(0)}}{V_\mathrm{PS}} = -\left( 1 - \frac{a}{r} \right) \cos\theta \, , \qquad
\frac{v_\theta^{(0)}}{V_\mathrm{PS}} = \frac{1}{2} \left( 2 - \frac{a}{r} \right)\sin\theta \, ,
\end{equation}
which, as expected, is the flow past a spherical air bubble \citep{happel1983}.
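The perfect-slip conditions can be checked in the same spirit. In fact, the tangential stress $\sigma_{r\theta}^{(0)}$ of this bubble flow vanishes identically, not only at $r=a$. A sympy sketch:

```python
import sympy as sp

r, th, a, V, mu = sp.symbols('r theta a V mu', positive=True)

# zeroth-order flow past a perfect-slip sphere (spherical bubble), scaled by V = V_PS
v_r = -V*(1 - a/r)*sp.cos(th)
v_th = sp.Rational(1, 2)*V*(2 - a/r)*sp.sin(th)

# impermeability at the surface
assert sp.simplify(v_r.subs(r, a)) == 0

# shear stress sigma_{r theta} = mu * (r d_r(v_th / r) + (1/r) d_th v_r)
sigma_rth = mu*(r*sp.diff(v_th/r, r) + sp.diff(v_r, th)/r)
assert sp.simplify(sigma_rth) == 0  # vanishes everywhere, not just at r = a
```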
Proceeding to the first order and noting that $\sigma_{r\theta}^{(0)} = \sigma_{\theta\theta}^{(0)} = 0$ everywhere in the fluid domain, we again find that all coefficients except those with $n =\ell \pm 1$ vanish. The first-order coefficients due to the deformation $\alpha_\ell$ read
\begin{subequations}
\begin{align}
A_{\ell-1}^{(1)}&=-\cfrac{(\ell+1)(\ell+2)}{(2\ell-1)(2\ell+1)} \, ,
& A_{\ell+1}^{(1)}&=\cfrac{\ell^2+\ell+3}{(2\ell+3)(2\ell+1)} \, ,\\
B_{\ell-1}^{(1)}&=- \cfrac{(\ell-1)(\ell^2+\ell+3)}{2(2\ell-1)(2\ell+1)} \, ,
& B_{\ell+1}^{(1)}&= \cfrac{\ell(\ell-1)(\ell+1)}{2(2\ell+3)(2\ell+1)} \, .
\end{align}
\end{subequations}
These coefficients determine the first-order solution for the flow field with the perfect-slip boundary condition (figure~\ref{fig:flowFields}, middle column). From the drag force $F_\text{D} = -4\pi\mu a A_1 V_\text{PS}$, we determine the drag coefficient as
\begin{equation}
\frac{{R}_\mathrm{PS}}{4\pi\mu a}
= 1 - \frac{4}{5} \, \alpha_2 \, . \label{Drag-coeff-PS}
\end{equation}
The result is consistent with the calculation for an ellipsoidal particle, where only the deformation mode $\ell=2$ is present \citep{chang2009translation}, but has a broader validity, as it shows that deformation modes beyond the second do not influence the drag coefficient in linear order.
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{fig2}
\caption{
Streamlines around a slightly deformed sphere with no-slip (left column) and perfect-slip (middle column) boundary conditions, and around the optimal active swimmer (right column), in the co-moving frame. Each row shows one deformation mode with the amplitude $\alpha_\ell=0.05$ for $\ell=2$ (top row), $\ell=3$ (middle row), and $\ell=4$ (bottom row). The colour indicates the fluid velocity, scaled by the speed of the active swimmer.
}
\label{fig:flowFields}
\end{figure}
\section{Optimal active swimmer}
Having derived the solutions of the flow problems for no-slip and perfect-slip boundary conditions, we next make use of these solutions to construct the flow field induced by a self-propelling active microswimmer with minimum dissipation, i.e. the optimal swimmer. As shown in Eq.~\eqref{theorem}, the flow field surrounding the optimal swimmer can be reconstructed by a linear superposition of the flow fields of the no-slip and perfect-slip problems, weighted by a specific combination of their drag coefficients.
\subsection{Stresslet of the optimal microswimmer}
We first evaluate the stresslet of the optimal swimmer and its dipole coefficient. Since both passive flows are expanded in terms of Lamb's solution, their superposition, too, has the same form. A comparison between the flow field in \eqref{velocityComponents} and the definition of the stresslet \eqref{eq:stresslet} shows that only the coefficient $A_2$ contributes to the stresslet.
{
Specifically, the dipolar contribution to the flow field which decays as $r^{-2}$ reads $ \frac{\boldsymbol{v}}{V} = \frac{3}{2} A_2 \left( 3\cos^2\theta -1 \right) \left( \frac{a}{r} \right)^2 \boldsymbol{e}_r$, indicating that the dipole coefficient must be $\beta = -(3/2) A_2$. Note that $A_2^{(1)} \ne 0$ only for $\ell \in \{ 1,3 \}$ and here we have set $\alpha_1=0$, so
in the perturbative expansion the dipole coefficient evaluates to }
\begin{equation}
\label{result1}
{\beta}
= -\frac{3}{2} \, \mathcal{A}_2^{(1)} \alpha_3 \, ,
\end{equation}
with
\begin{align}
\mathcal{A}_2^{(1)}=\frac{R_\text{NS}}{R_\text{NS}-R_\text{PS}} \left[ A_2^{(1)}\right]_\text{PS}-\frac{R_\text{PS}}{R_\text{NS}-R_\text{PS}}\left[ A_2^{(1)}\right]_\text{NS}
\label{eq:a2superpositon}
\end{align}
being the corresponding coefficient of the active swimmer, expressed in terms of those of the NS and PS problems.
From Eq.~\eqref{result1}, one can see that the corrections to the drag coefficients $R_\text{NS}$ and $R_\text{PS}$ do not contribute to $\beta$ at leading order. Equation \eqref{eq:a2superpositon} can therefore be evaluated with the drag coefficients of spherical particles. Remarkably, to leading order the dipole coefficient depends only on the third Legendre mode of the shape function ($\alpha_3$); all other modes have no contribution.
Inserting the values of $A_2^{(1)}$ from the NS and PS calculations into Eq.~\eqref{result1}, we arrive at the final result given in Eq.~\eqref{centralResult}.
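This last step is exact rational arithmetic and can be reproduced directly. The sketch below combines the coefficients derived above for $\ell = 3$ with the spherical drag weights $R_\text{NS}/(R_\text{NS}-R_\text{PS}) = 3$ and $R_\text{PS}/(R_\text{NS}-R_\text{PS}) = 2$, yielding $\mathcal{A}_2^{(1)} = -9/7$, i.e. $\beta = (27/14)\,\alpha_3$ at leading order:

```python
from fractions import Fraction

l = 3  # only the third shape mode contributes at leading order (alpha_1 = 0)

# first-order coefficients A_2^{(1)} = A_{l-1}^{(1)} of the two passive problems
A2_NS = Fraction(-3, 2) / (2*l + 1)                      # no-slip:      -3/14
A2_PS = Fraction(-(l + 1)*(l + 2), (2*l - 1)*(2*l + 1))  # perfect-slip: -4/7

# superposition weights from the spherical drag coefficients
# R_NS = 6 pi mu a, R_PS = 4 pi mu a  =>  weights 3 and 2
A2 = 3*A2_PS - 2*A2_NS
beta_over_alpha3 = Fraction(-3, 2)*A2

print(A2, beta_over_alpha3)  # -9/7 27/14
```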
\subsection{Flow field of the optimal microswimmer}
The full velocity field induced by the optimal active microswimmer (figure~\ref{fig:flowFields}, right column) can be obtained up to the linear order in deformation amplitudes by evaluating all coefficients in the same way as shown in Eq.~\eqref{eq:a2superpositon}. Thereby, the drag coefficients $R_\text{NS}$ and $R_\text{PS}$ need to be evaluated to linear order, as given in Eqs.~\eqref{eq:drag-coeff-NS} and \eqref{Drag-coeff-PS}. In figure~\ref{fig:labframe}, the flow fields for some nearly spherical optimal swimmers are shown in the laboratory frame.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{fig3}
\caption{
Streamlines in the laboratory frame for optimal swimmers of various shapes. The nonzero surface modes of each swimmer are given at the top of its panel. The colours in the flow field indicate its velocity scaled by the swimming speed of the active particle. The swimmer surface colours represent the slip velocity as in figure~\ref{fig:flowFields}.
}
\label{fig:labframe}
\end{figure}
\section{Conclusions}
In this study we analyzed the swimming type of nearly spherical optimal swimmers. We applied the minimum dissipation theorem \citep{Nasouri.Golestanian2021} to determine the flow field of the optimal swimmer and to show that the dipole coefficient (or the strength of stresslet) only depends to leading order on the third mode of the shape function. Thus, depending on the sign of this mode, the optimal swimmer is a puller (when positive), pusher (when negative), or neutral (when zero). Using our results, one can determine the optimal swimming type for surface-driven nearly-spherical swimmers by simply describing the shape function in terms of the Legendre expansion and calculating the third mode. Our results can also be applied to phoretic particles which use their surface activity to gain propulsion. For instance, for a chemically-active particle, the slip velocity depends on the surface coating pattern which characterizes the chemical activity and mobility rates. For a given nearly-spherical phoretic particle, one can then use our results to determine whether that surface coating minimizes the viscous dissipation. In the hydrodynamically optimal case, the dipole coefficient follows from the shape as derived here. We should note that for optimizing phoretic particles, one should also account for the dissipation in the slip layer and the energetics of the chemical reaction \citep{sabass2010,sabass2012}, which is not considered here and can be a natural extension to this work.
Our derivation demonstrates how the recently proposed theorem can enable us to find a perturbative explicit solution to a problem that would otherwise hardly be analytically tractable. It is also possible to extend the presented results by accounting for the nonlinear effect of the quadratic and higher-order terms, in which case the contribution of other surface modes will be nonzero. Beyond that, one can use the methodology discussed here to evaluate the swimming type of any optimal swimmer of any arbitrary shape, provided the flow fields for the no-slip and perfect-slip problems are known.
|
1303.7376
|
\section{Introduction}
Einsteinian gravity is self-contained and couples universally to all particles, including photons. In contrast, Newtonian gravity affects massive particles, while massless ones behave as if they felt no gravitation \cite{ND1}. Moreover, in classical gravity it is impossible to cancel the centrifugal acceleration everywhere. As Dadhich \cite{ND2} has shown, the additional attractive force due to the coupling between the gravitational and centrifugal potentials (a purely Einsteinian effect) counters the centrifugal acceleration everywhere. He also proved that particles can have angular velocity but no centrifugal radial repulsion. Dadhich further looked for the physical source that gives rise to this non-asymptotically flat space, introducing a global monopole of unit charge in an AdS spacetime. In addition, the centrifugal-force-free geometry is singular at the origin (the curvatures diverge there); however, $r = 0$ is a horizon of the spacetime.
Our purpose in this paper is to investigate the centrifugal-acceleration-free geometry and to give a different interpretation of the nature of the source of the field. In our view, Dadhich's constant $k$ represents the square of the constant acceleration of a static observer. The energy-momentum tensor corresponds to an anisotropic fluid, and its energy density and radial pressure change sign at some distance from the origin of coordinates. We further study the timelike and null geodesics and find that the test particle reaches the singularity after an infinite time, following a spiral trajectory.
\section{Anisotropic fluid stress tensor}
We write down the ``free of centrifugal acceleration'' spacetime as
\begin{equation}
ds^{2} = -g^{2}r^{2} dt^{2} + \frac{dr^{2}}{g^{2}r^{2}} + r^{2} d\Omega^{2}
\label{2.1}
\end{equation}
where $d\Omega^{2} = d\theta^{2} + \sin^{2} \theta d\phi^{2}$ stands for the metric on the unit 2-sphere and $g$ is a positive constant. Our aim is to investigate the structure of the energy-momentum tensor that leads to the metric (2.1), namely we look for $T_{ab}$ which solves Einstein's equations $G_{ab} = \kappa T_{ab}$, where $G_{ab}$ is the Einstein tensor and $\kappa = 8\pi G/c^{4}$ is Einstein's constant. We take from now on $c = 8\pi G = 1$. One obtains
\begin{equation}
T^{t}_{~t} = -\rho = 3g^{2} - \frac{1}{r^{2}},~~~ T^{r}_{~r} = p_{r} = -\rho,~~~T^{\theta}_{~\theta} = T^{\phi}_{~\phi} = 3g^{2} \equiv p_{\bot}
\label{2.2}
\end{equation}
where $p_{r}$ is the radial pressure and $\rho$ is the energy density.
It is worth noticing that Dadhich \cite{ND2} separated a $\Lambda$-term from $T^{a}_{~b}$; therefore, in his model the transverse pressures vanish. We see from (2.2) that $p_{r} = -\rho$ everywhere, whereas $p_{\bot}$ is constant and positive. In addition, $\rho$ and $p_{r}$ change sign at $r = 1/\sqrt{3}g$, and $T^{a}_{~b}$ acquires a $\Lambda$-form when $r \rightarrow \infty$. In contrast, $\rho$ and $p_{r}$ are divergent at the central singularity $r = 0$, which represents a horizon of the spacetime. We mention that there is no central mass and, therefore, the origin of coordinates is arbitrary. That was probably the reason why Dadhich used a unit-charge global monopole at $r = 0$ to justify the form of the spacetime (2.1). The central singularity is rooted in the divergence of the scalar curvature and the Kretschmann scalar
\begin{equation}
R^{a}_{~a} = -12g^{2} + \frac{2}{r^{2}},~~~~K = \frac{4}{r^{4}} (1 - 2g^{2}r^{2} + 6g^{4}r^{4})
\label{2.3}
\end{equation}
at $r = 0$. In addition, $K$ is regular at infinity, i.e. $K_{\infty} = 24g^{4}$.
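Both invariants in (2.3) can be reproduced symbolically from the metric (2.1). The sketch below uses sympy with the conventions $R^{a}_{~bcd} = \partial_{c}\Gamma^{a}_{bd} - \partial_{d}\Gamma^{a}_{bc} + \Gamma^{a}_{ce}\Gamma^{e}_{bd} - \Gamma^{a}_{de}\Gamma^{e}_{bc}$ and $R_{bd} = R^{a}_{~bad}$:

```python
import sympy as sp

t, ph = sp.symbols('t phi')
r, th = sp.symbols('r theta', positive=True)
g = sp.symbols('g', positive=True)
x = [t, r, th, ph]
n = 4

# metric (2.1): ds^2 = -g^2 r^2 dt^2 + dr^2/(g^2 r^2) + r^2 dOmega^2
G = sp.diag(-g**2*r**2, 1/(g**2*r**2), r**2, r**2*sp.sin(th)**2)
Gi = G.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(Gi[a, d]*(sp.diff(G[d, b], x[c]) + sp.diff(G[d, c], x[b])
             - sp.diff(G[b, c], x[d])) for d in range(n))/2
         for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}
def riem(a, b, c, d):
    expr = sp.diff(Gam[a][b][d], x[c]) - sp.diff(Gam[a][b][c], x[d])
    expr += sum(Gam[a][c][e]*Gam[e][b][d] - Gam[a][d][e]*Gam[e][b][c]
                for e in range(n))
    return sp.simplify(expr)

R = [[[[riem(a, b, c, d) for d in range(n)] for c in range(n)]
      for b in range(n)] for a in range(n)]

# Ricci scalar R^a_a
Ricci = sp.simplify(sum(Gi[b, d]*R[a][b][a][d]
                        for a in range(n) for b in range(n) for d in range(n)))

# Kretschmann scalar K = R_{abcd} R^{abcd} (diagonal metric)
Rdn = [[[[sum(G[a, e]*R[e][b][c][d] for e in range(n))
          for d in range(n)] for c in range(n)] for b in range(n)] for a in range(n)]
K = sp.simplify(sum(Rdn[a][b][c][d]**2 * Gi[a, a]*Gi[b, b]*Gi[c, c]*Gi[d, d]
                    for a in range(n) for b in range(n)
                    for c in range(n) for d in range(n)))

assert sp.simplify(Ricci - (-12*g**2 + 2/r**2)) == 0
assert sp.simplify(K - 4/r**4*(1 - 2*g**2*r**2 + 6*g**4*r**4)) == 0
```

The limit $r \rightarrow \infty$ of the computed $K$ also reproduces $K_{\infty} = 24g^{4}$.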
Let us consider a congruence of static observers with the velocity vector field $v^{a} = (1/gr, 0, 0, 0)$. The acceleration 4-vector is given by
\begin{equation}
a^{b} \equiv v^{a} \nabla_{a} v^{b} = (0, g^{2}r, 0, 0)
\label{2.4}
\end{equation}
The radial component $a^{r} = g^{2}r$ from (2.4) is the acceleration required to keep a particle static at $r = const.$ Because $a^{r} > 0$, the gravitational field is attractive. From (2.4) we immediately obtain $\sqrt{g_{bc}a^{b} a^{c}} = g$. In other words, the constant $g$ from the metric (2.1) is nothing but the invariant acceleration of the static observer. This interpretation is supported by the fact that for $r \gg 1/\sqrt{3}g$, the energy density $\rho \propto g^{2}$, as in Newtonian gravity, $g$ being here the intensity of the gravitational field. We also observe that the projection of $a^{b}$ on the radial direction is constant, too: $a^{b}n_{b} = g$, where $n^{b} = (0, gr, 0, 0)$.
Let us compute now the Tolman-Komar gravitational energy. We have \cite{TP, HC}
\begin{equation}
W = 2 \int(T_{ab} - \frac{1}{2} g_{ab}T)v^{a} v^{b} N\sqrt{\gamma} d^{3}x ,
\label{2.5}
\end{equation}
with $N = \sqrt{-g_{00}} = gr$ and $\gamma$ is the determinant of the spatial three-metric. By means of $T_{ab}$ from (2.2) and with the above $v^{a}$ we get
\begin{equation}
W = g^{2} r^{3}
\label{2.6}
\end{equation}
$W$ is finite throughout the spacetime (2.1) and vanishes at the singularity. Even though $\rho < 0$ in the region $r > 1/\sqrt{3}g$, $W$ is positive everywhere because the pressures carry a positive contribution in (2.5). The same result (2.6) would have been obtained had we calculated the gravitational energy by means of the formula \cite{TP1}
\begin{equation}
W = \frac{1}{4\pi} \int_{\partial V} N a_{b}n^{b} \sqrt{\sigma} d^{2}x ,
\label{2.7}
\end{equation}
where $\sigma$ is the determinant of the metric on the 2-dimensional boundary $\partial V$ of constant $r$ and $t$.
\section{Geodesics}
a) \textbf{Timelike geodesics}\\
The spacetime (2.1) is static and spherically symmetric and, therefore, we have two Killing vectors, $\xi^{a}_{(t)} = (1, 0, 0, 0)$ and $\xi^{a}_{(\phi)} = (0, 0, 0, 1)$. Taking for convenience $\theta = \pi/2$, (2.1) yields
\begin{equation}
g^{2}r^{2} \dot{t}^{2} - \frac{1}{g^{2}r^{2}}\dot{r}^{2} - r^{2} \dot{\phi}^{2} = 1,
\label{3.1}
\end{equation}
where $\dot{t} = dt/d\tau$, etc. and $\tau$ represents the proper time. From $-E = g_{ab}u^{a}\xi^{b}_{(t)}$ and $L = g_{ab}u^{a}\xi^{b}_{(\phi)}$, one obtains
\begin{equation}
\dot{t} = \frac{E}{g^{2}r^{2}},~~~~\dot{\phi} = \frac{L}{r^{2}}
\label{3.2}
\end{equation}
where $E$ and $L$ are the energy per unit mass of the test particle and, respectively, the angular momentum per unit mass. The velocity field vector is given by $u^{a} = (\dot{t}, \dot{r}, 0, \dot{\phi}$). From (3.1) we get the radial equation
\begin{equation}
\dot{r}^{2} = E^{2} - g^{2}L^{2} - g^{2}r^{2}
\label{3.3}
\end{equation}
We see that the constants of motion $E$ and $L$ appear on an equal footing in (3.3), except for the minus sign in front of $g^{2}L^{2}$. We therefore denote by $b = \sqrt{E^{2} - g^{2}L^{2}}$ (with $E > gL$) a constant parameter. Moreover, we must have $r \leq b/g$, i.e. the particle motion is restricted to a finite range of $r$. Keeping in mind that $\dot{r} = (E/g^{2}r^{2})dr/dt$, one easily finds from (3.3) that
\begin{equation}
-\frac{\sqrt{b^{2} - g^{2}r^{2}}}{b^{2}r} = \pm \frac{g^{2}}{E}t + C
\label{3.4}
\end{equation}
with $C$ a constant of integration. Taking $r = r_{max} = b/g$ at $t = 0$ as initial condition, we have $C = 0$. The radial component of the equation of motion reads
\begin{equation}
r(t) = \frac{bE}{g\sqrt{E^{2} + b^{4}g^{2}t^{2}}},
\label{3.5}
\end{equation}
where the sign in front of $t$ in (3.4) was chosen so that $dr/dt < 0$ (ingoing geodesics), consistent with the initial condition used. The radial geodesic is obtained from (3.5) by replacing $b$ with $E$ ($L = 0$). The last equation shows that the test particle needs an infinite time to reach the central singularity. The curve $r(t)$ has inflexions at $t_{i} =\pm E/\sqrt{2}b^{2}g$ ($r(t)$ is an even function and, therefore, we may take into account the whole range $-\infty < t < \infty$, comprising both signs from (3.4); in other words, the particle starts at $t = -\infty$ from $r = 0$, reaches $r_{max}$ at $t = 0$ and then falls freely to the singularity when $t \rightarrow \infty$).
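The closed-form trajectory (3.5) can be checked against the radial equation (3.3); a sympy sketch, treating $b$ as an independent positive parameter with $b^{2} = E^{2} - g^{2}L^{2}$:

```python
import sympy as sp

t = sp.symbols('t', real=True)
E, b, g = sp.symbols('E b g', positive=True)  # b^2 = E^2 - g^2 L^2, E > g L

# candidate solution (3.5) for the radial coordinate
r = b*E/(g*sp.sqrt(E**2 + b**4*g**2*t**2))

# proper-time radial velocity via dt/dtau = E/(g^2 r^2)
rdot = E/(g**2*r**2)*sp.diff(r, t)

# radial equation (3.3): rdot^2 = E^2 - g^2 L^2 - g^2 r^2 = b^2 - g^2 r^2
assert sp.simplify(rdot**2 - (b**2 - g**2*r**2)) == 0

# initial condition: r(0) = r_max = b/g
assert sp.simplify(r.subs(t, 0) - b/g) == 0
```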
As far as the angular component is concerned, we have from (3.2)
\begin{equation}
\frac{d\phi}{dt} = \frac{g^{2}L}{E}
\label{3.6}
\end{equation}
which leads to
\begin{equation}
\phi(t) = \omega t,
\label{3.7}
\end{equation}
with $\phi(0) = 0$. The constant $\omega = g^{2}L/E$ is the angular velocity of the test particle. This very simple result comes from the particular form of the metric (2.1).
We are now in a position to write down the curve $r(\phi)$. We pick up $t(\phi)$ from (3.7) and introduce it in (3.5). Hence
\begin{equation}
r(\phi) = \frac{bL}{\sqrt{g^{2}L^{2} + b^{4}\phi^{2}}},
\label{3.8}
\end{equation}
where $\phi \in [0, \infty)$. The trajectory is a spiral. The motion is not periodic, owing to the absence of the centrifugal repulsion that would keep the particle on a bound orbit.\\
b)\textbf{ Null geodesics}\\
For the null curves we obtain from (2.1) ( $ds^{2} = 0$)
\begin{equation}
g^{2}r^{2} \dot{t}^{2} - \frac{1}{g^{2}r^{2}}\dot{r}^{2} - r^{2} \dot{\phi}^{2} = 0,
\label{3.9}
\end{equation}
where $\dot{t} = dt/d\lambda$, etc. and $\lambda$ is the affine parameter along the null geodesics. We have now
\begin{equation}
\dot{r}^{2} = b^{2},
\label{3.10}
\end{equation}
namely $\dot{r} = \pm b$ which, combined with (3.2) yields
\begin{equation}
\frac{1}{r(t)} = \frac{1}{r_{0}} \pm \frac{bg^{2}}{E}t
\label{3.11}
\end{equation}
with $r_{0} = r(0)$, where ($\pm$) corresponds to the ingoing and outgoing trajectories, respectively. One observes that for the outgoing geodesics the condition $t \leq E/br_{0}g^{2} \equiv t_{max}$ must be obeyed, with $r(t_{max}) = \infty$. The curves $r(\phi)$ are given by
\begin{equation}
\frac{1}{r(\phi)} = \frac{1}{r_{0}} \pm \frac{b}{L}\phi
\label{3.12}
\end{equation}
where we have a $\phi_{max} = L/br_{0}$ for the outgoing geodesics and $r(\phi_{max}) = \infty$.
\section{The cylindrically symmetric case}
Let us now consider the cylindrically symmetric version of the metric (2.1)
\begin{equation}
ds^{2} = -g^{2}r^{2} dt^{2} + \frac{dr^{2}}{g^{2}r^{2}} + dz^{2} + r^{2} d\phi^{2}
\label{4.1}
\end{equation}
where $(r, \phi)$ are polar coordinates. The $z = 0$ hypersurface of (4.1) corresponds physically to the $\theta = \pi/2$ hypersurface of (2.1). The geometric features, however, are very different from those of (2.1): the curvature invariants are regular everywhere. Moreover, they are constant, $R^{a}_{~a} = -6g^{2},~K = 12 g^{4}$, and the Weyl tensor vanishes (see also \cite{HC1}). We have again a horizon at $r = 0$ (which is no longer a singularity) and the same expression (2.4) for the acceleration of a static observer. In contrast, the source of curvature comes from a stress tensor with constant negative energy density and positive pressures
\begin{equation}
\rho = -\frac{g^{2}}{8\pi} = -p_{r} = -p_{\phi},~~~ p_{z} = \frac{3g^{2}}{8\pi}
\label{4.2}
\end{equation}
As far as the TK energy is concerned, one obtains
\begin{equation}
W = \frac{g^{2} r^{2}}{2} \Delta z
\label{4.3}
\end{equation}
where an integration over a cylinder of radius $r$ and length $\Delta z = z_{2} - z_{1}$ was performed. Even though $\rho$ is negative, $W$ is, nevertheless, positive due to the positive pressures.
We now estimate the energy $W$ enclosed by a cylinder with $r = 1~m,~\Delta z = 1~m$ and $g = 10~m/s^{2}$. One obtains $W = (g^{2}r^{2}/2G) \Delta z \approx 10^{12} J$. For the energy density one obtains $\rho = -(c^{2}/8\pi G)g^{2} \approx -5\times 10^{27} J/m^{3}$.
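These estimates follow from restoring $G$ and $c$ in the stated expressions; a quick sketch reproducing the arithmetic quoted above:

```python
import math

G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8       # speed of light [m/s]
g = 10.0          # acceleration [m/s^2]
r, dz = 1.0, 1.0  # cylinder radius and length [m]

W = g**2 * r**2 * dz / (2*G)          # Tolman-Komar energy, Eq. (4.3) with G restored
rho = -(c**2 / (8*math.pi*G)) * g**2  # energy density as quoted in the text

print(f"W = {W:.2e} J, rho = {rho:.2e} J/m^3")  # W ~ 7.5e11 J, rho ~ -5.4e27 J/m^3
```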
\section{Conclusions}
The curious Dadhich metric with no centrifugal radial repulsion is analysed in this letter. We looked for a different interpretation of the stresses giving rise to such a spacetime, compared with the global monopole introduced by Dadhich. While the transverse pressures are constant, the energy density satisfies $\rho = - p_{r}$, and both switch sign at $r = 1/\sqrt{3}g$. They are also singular at $r = 0$ due to the divergence of the scalar curvature. It is worth mentioning that the Tolman-Komar energy is finite throughout the spacetime and vanishes at the singularity because of the pressures' contribution. It is interesting to observe that the timelike and null geodesic particles have constant angular velocity $\omega = g^{2}L/E$ and the curves $r(\phi)$ are spirals.
For the cylindrically symmetric situation the stress tensor is constant, with a negative energy density and positive pressures. In addition, the horizon at $r = 0$ is no longer a singularity and the curvature invariants are constant.
|
2010.06075
|
\section{Introduction}
Although field-programmable gate arrays (FPGAs) are known to provide a high-performance and energy-efficient solution for many applications, there is one class of applications where FPGAs are generally known to be less competitive: memory-bound applications (e.g., ~\cite{guo2019hardware,Chi2018,qiao2018high,fpga16-fpgp}). In a recent study \cite{Cong2018}, the authors report that GPUs typically outperform FPGAs in applications that require high external memory bandwidth. The Virtex-7 690T FPGA board used for the experiment reportedly has only 13~GB/s peak DRAM bandwidth, which is much smaller than the 290~GB/s bandwidth of the Tesla K40 GPU board used in the study (even though the two boards are based on the same 28~nm technology). This result is consistent with comparative studies for earlier generations of FPGAs and GPUs~\cite{Cooke2015,Cope2010}---FPGAs traditionally were at a disadvantage compared to GPUs for applications with a low reuse rate. The FPGA DRAM bandwidth was also lower than that of CPUs---Sandy Bridge E5-2670 (32~nm, similar generation as Virtex-7 in \cite{Cong2018}) has a peak bandwidth of 42~GB/s \cite{Molka2014}.
But with the recent emergence of High Bandwidth Memory 2 (HBM2)~\cite{Jedec:HBM} FPGA boards, it is possible that future FPGAs will be able to compete with GPUs when it comes to memory-bound applications. Xilinx's Alveo U50~\cite{Xilinx:U50} and U280~\cite{Xilinx:U280}, and Intel's Stratix 10 MX \cite{Intel:HBM} have a theoretical bandwidth of about 400~GB/s (two HBM2 DRAMs), which approaches that of Nvidia's Titan V GPU~\cite{Nvidia:TitanV} (650~GB/s, three HBM2 DRAMs). With such high memory bandwidth, these HBM-based FPGA platforms have the potential to allow a wider range of applications to benefit from FPGA acceleration.
One of the defining characteristics of the HBM FPGA boards is the existence of independent and distributed HBM channels (e.g., Fig.~\ref{fig:s10arch}). To take full advantage of this architecture, programmers need to determine the most efficient way to connect multiple PEs to multiple HBM memory controllers. It is worth noting that the Convey HC-1ex platform \cite{Bakos2010} also has multiple (64) DRAM channels like the HBM FPGA boards. The difference is that PEs in Convey HC-1ex issue individual FIFO requests of 64b data, whereas HBM PEs are connected to 256b/512b AXI bus interfaces. Thus, utilizing the bus burst access feature has a large impact on the performance of HBM boards. Also, Convey HC-1ex has a pre-synthesized full crossbar between PEs and DRAM, whereas FPGA programmers need to customize the interconnect in the HBM boards.
\begin{table}[h]
\caption{Effective bandwidth of memory-bound applications on Stratix 10 MX and Alveo U280 using HLS tools}
\label{tab:app_perf}
\centering
\begin{tabular}{|ccc|ccc|}
\hline
\multirow{2}{*}{App}&\multirow{2}{*}{Plat}&PC&KClk&EffBW&EffBW/PC\\
&&\#& (MHz)&(GB/s)&(GB/s)\\
\hline
\multirow{3}{*}{MV}&S10&32&418&372&11.6\\
&U50&24&300&317&13.2\\
&U280&28&274&370&13.2\\
\hline
\multirow{3}{*}{Stencil}&S10&32&260&246&7.7\\
&U50&16&300&203&12.7\\
&U280&16&293&206&12.9\\
\hline
\multirow{2}{*}{Bucket}&S10&16&287&137&8.6\\
\multirow{2}{*}{sort}&U50&16&300&36&2.3\\
&U280&16&300&36&2.3\\
\hline
\multirow{2}{*}{Binary}&S10&32&310&5.2&0.16\\
\multirow{2}{*}{search}&U50&24&300&6.6&0.27\\
&U280&28&300&7.7&0.28\\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:app_perf} shows the effective bandwidth of memory-bound applications\footnote{We only tested designs utilizing power-of-2 HBM PCs for stencil and bucket sort. The reason for using less than 32 PCs in Alveo U50/U280 will be explained in Section~\ref{sec:long_seq}. We only used 16 PCs in Stratix 10 MX for bucket sort due to high resource consumption and routing complexity (more details in Section~\ref{sec:many2many}).} we have implemented on the HBM boards. The kernels are written in C (Xilinx Vivado HLS \cite{Xilinx:VivadoHLS}) and OpenCL (Intel AOCL \cite{Intel:OpenCLProgramming}) for ease of programming and faster development cycle \cite{Lahti2019}. For dense matrix-vector multiplication (MV) and stencil, the effective bandwidth per pseudo channel (PC) is similar\footnote{The effective bandwidth of stencil and bucket sort on Stratix 10 MX has been slightly reduced (7.7\textasciitilde8.6~GB/s) due to the low kernel frequency (260\textasciitilde287~MHz). This will be further explained in Section~\ref{sec:freq_variation}.} to the boards' sequential access bandwidth (Section~\ref{sec:long_seq}). Both applications can evenly distribute the workload among the available HBM PCs, and their long sequential memory access pattern allows a single processing element (PE) to fully saturate an HBM PC's available bandwidth.
However, the effective bandwidth is far lower for bucket sort and binary search. In bucket sort, a PE distributes keys to multiple HBM PCs (one HBM PC corresponds to one bucket). Alveo U280 provides an area-efficient crossbar to facilitate this multi-PC distribution. But, as will be explained in Section~\ref{sec:opt_description}, it is difficult to enable external memory burst access to multiple PCs in the current high-level synthesis (HLS) programming environment. For binary search, its latency-bound characteristic is the main reason for the reduced bandwidth in both platforms. But whereas Stratix 10 MX allows multiple PEs to share access to an HBM PC to hide the latency, it is difficult to adopt a similar architecture in Alveo U280 due to HLS limitation (more details in Section~\ref{sec:multiPEtoPC}).
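The EffBW/PC column in Table~\ref{tab:app_perf} is simply the effective bandwidth divided by the number of PCs used; a quick sketch re-deriving it from the table values:

```python
# (App, Platform, PCs, EffBW GB/s, reported EffBW/PC GB/s) from Table 1
rows = [
    ("MV", "S10", 32, 372, 11.6), ("MV", "U50", 24, 317, 13.2),
    ("MV", "U280", 28, 370, 13.2),
    ("Stencil", "S10", 32, 246, 7.7), ("Stencil", "U50", 16, 203, 12.7),
    ("Stencil", "U280", 16, 206, 12.9),
    ("BucketSort", "S10", 16, 137, 8.6), ("BucketSort", "U50", 16, 36, 2.3),
    ("BucketSort", "U280", 16, 36, 2.3),
    ("BinSearch", "S10", 32, 5.2, 0.16), ("BinSearch", "U50", 24, 6.6, 0.27),
    ("BinSearch", "U280", 28, 7.7, 0.28),
]
for app, plat, pc, bw, per_pc in rows:
    # reported values are rounded; allow a small rounding tolerance
    assert abs(bw/pc - per_pc) <= 0.06, (app, plat)
```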
In this paper, we will first evaluate the performance of three recent representative HBM2 FPGA boards--Intel Stratix 10 MX~\cite{Intel:HBM} and Xilinx Alveo U50~\cite{Xilinx:U50} and U280~\cite{Xilinx:U280}--with various memory access patterns in Section~\ref{sec:evaluation}. Based on this evaluation, we identify several problems in using existing commercial HLS tools for HBM application development.
A novel HLS-based approach will be presented in Section~\ref{sec:opt} to improve the effective bandwidth when a PE accesses several PCs or when several PEs share access to a PC. The opportunities for future research will be presented in Section~\ref{sec:insight}.
A related paper named Shuhai~\cite{Wang2020} was recently published at FCCM'20. Shuhai is a benchmarking tool used to evaluate Alveo U280's HBM and DRAM. It measures the memory throughput and latency for various burst sizes and strides using RTL microbenchmarks. Whereas Shuhai predicted the per-PC bandwidth to scale linearly up to all PCs, we show that such scaling cannot be achieved in practice and provide an explanation. We also quantify the overhead of using an HLS-based design flow compared to Shuhai's RTL-based flow. Moreover, we not only evaluate Alveo U280 but also compare the performance and architectural differences with the Stratix 10 MX board.
We make the following contributions in this paper:
\begin{itemize}
\item
We quantify the performance of several memory-bound applications on the latest Intel and Xilinx FPGA HBM boards and identify problems in directly applying existing commercial HLS tools to develop memory-bound applications.
\item
With microbenchmarks, we analyze the cause for the performance degradation when using HLS tools.
\item
We propose a novel HLS-based solution for Alveo U280 to increase the effective bandwidth when a PE accesses several PCs or when several PEs share access to a PC.
\item
We present several insights for the future improvement of the FPGA HBM HLS design flow.
\end{itemize}
The benchmarks used in this work can be found in: \\ \href{https://github.com/UCLA-VAST/hbmbench}{https://github.com/UCLA-VAST/hbmbench}.
\section{Background}
\label{sec:HBM}
\subsection{High Bandwidth Memory 2}
High Bandwidth Memory \cite{Jedec:HBM} is a 3D-stacked DRAM designed to provide a high memory bandwidth. Each stack is composed of 2\textasciitilde8 HBM dies and 1024 data I/Os. The HBM dies are connected to a base logic die using Through Silicon Via (TSV) technology. The base logic die connects to FPGA/GPU/CPU dies through an interposer. The maximum I/O data rate improves from 1~Gbps in HBM1 to 2~Gbps in HBM2. This is partially enabled by the use of two pseudo channels (PCs) per physical channel to hide the latency \cite{Intel:HBM,Jun2017}. Sixteen PCs exist per stack, and they can be accessed independently.
\subsection{FPGA Platforms for HBM2}
\label{sec:fpga_platforms}
\subsubsection{Intel Stratix 10 MX}
\label{sec:S10MX}
The overall architecture of Intel Stratix 10 MX is shown in Fig.~\ref{fig:s10arch} \cite{Intel:HBM}. Intel Stratix 10 MX (early-silicon version) consists of an FPGA and two HBM2 stacks (8 HBM2 dies). The FPGA resource is presented in Table~\ref{tab:constraint}. The FPGA and the HBM2 dies are connected through 32 independent pseudo channels (PCs). Each PC has 256MB of capacity (8GB in total). Each PC is connected to the FPGA PHY layer through 64 data I/Os that operate at 800MHz (double data rate). The data communication between the kernels (user logic) and the HBM2 memory is managed by the HBM controller (HBMC). AXI4 \cite{ARM:AXI} and Avalon \cite{Intel:Avalon} interfaces with 256b data width are used to communicate with the kernel side.
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{fig/s10arch.png}
\caption{Intel Stratix 10 MX Architecture \cite{Intel:HBM}}
\label{fig:s10arch}
\end{figure}
\begin{table}[ht]
\caption{FPGA resource on Stratix 10 MX, Alveo U50, and Alveo U280}
\label{tab:constraint}
\centering
\begin{tabular}{|c|c c c c|}
\hline
Platform&LUT & FF & DSP & BRAM \\
\hline
S10 MX & 1.41M & 2.81M & 6.85K & 3.95K \\
Alv U50 & 872K & 1.74M & 5.96K & 2.69K \\
Alv U280 & 1.30M & 2.60M & 9.02K & 4.03K \\
\hline
\end{tabular}
\end{table}
The clock frequency of kernels may vary (capped at 450MHz) depending on the complexity of the user logic. Since the frequency of HBMCs is fixed to 400MHz, rate matching (RM) FIFOs are inserted between the kernels and the memory controllers. The ideal memory bandwidth is 410GB/s (=~256b * 32PCs * 400MHz =~64b * 32PCs * 2 * 800MHz).
\subsubsection{Xilinx Alveo U50 and U280}
\label{sec:AlvU50}
Similar to Stratix 10 MX, the Xilinx Alveo U50 consists of an FPGA and two HBM2 stacks~\cite{Xilinx:U50}. There are 32 PCs, each with 64b data I/Os and 256MB of capacity. The FPGA is composed of two super logic regions (SLRs), and its resources are shown in Table~\ref{tab:constraint}.
The two HBM2 stacks are physically connected to the bottom SLR (SLR0).
Data I/Os run at 900MHz (double data rate), and the total ideal memory bandwidth is 460GB/s (=~64b*32PCs*2*900MHz).
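The ideal-bandwidth figures quoted for both boards follow from the same formula: (I/O width) x (PC count) x 2 (double data rate) x (PHY clock). A short Python sketch, using only the numbers stated in the text, to check the arithmetic:

```python
# Sanity check of the ideal-bandwidth arithmetic quoted in the text.
def ideal_bw_gbs(io_bits, n_pcs, phy_mhz):
    """Ideal HBM2 bandwidth in GB/s: io_bits per PC, double data rate."""
    return io_bits * n_pcs * 2 * phy_mhz * 1e6 / 8 / 1e9

s10mx = ideal_bw_gbs(64, 32, 800)   # Stratix 10 MX: 64b I/O, 32 PCs, 800 MHz
u280 = ideal_bw_gbs(64, 32, 900)    # Alveo U50/U280: same I/Os, 900 MHz PHY
print(s10mx, u280)                  # 409.6 460.8 -> the quoted 410 and 460 GB/s
```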
An HBMC IP slave interfaces with the kernel (user logic) via a 256b AXI3 interface running at 450MHz~\cite{Xilinx:HBMIP}. A kernel master has a 512b AXI3 interface that may run up to 300~MHz. Groups of four AXI masters and four AXI slaves are connected through fully-connected 4$\times$4 unit crossbars (Fig.~\ref{fig:axi_crossbar}). A datapath exists across the 4$\times$4 unit crossbars, but the bandwidth may be limited by network contention among the switches~\cite{Xilinx:HBMIP}. The thermal design power (TDP) of U50 is 75W, and it may not be possible to utilize all HBM channels and FPGA logic due to this power restriction~\cite{Xilinx:U50}.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{fig/axi_crossbar.png}
\caption{AXI crossbar in Alveo U50 and U280 \cite{Xilinx:HBMIP}}
\label{fig:axi_crossbar}
\end{figure}
Alveo U280 has a similar architecture to U50 except that its FPGA is composed of three SLRs~\cite{Xilinx:U280}. Its TDP is 200W. Note that Alveo U280 also has traditional DDR DRAM---but we decided not to utilize it because the purpose of this paper is to evaluate the HBM memory. Readers are referred to \cite{Miao2019} for optimization case studies on heterogeneous external memory architectures.
\subsection{HLS Programming for HBM2}
\label{sec:hbmhls}
For Stratix 10 MX, we program kernels in OpenCL and synthesize them using Intel's Quartus \cite{Intel:Quartus} and AOCL \cite{Intel:OpenCLProgramming} 19.4 tools. For Alveo U50 and U280, we program in C and utilize Xilinx's Vitis~\cite{Xilinx:Vitis} and Vivado HLS~\cite{Xilinx:VivadoHLS} 2019.2 tools. We use dataflow programming style (C functions executing in parallel and communicating through streaming FIFOs) for Alveo kernels to achieve high throughput with small BRAM consumption \cite{Xilinx:VivadoHLS}.
\subsubsection{Accessing Multiple PCs from a PE}
\label{sec:multiPCtoPE}
On Stratix 10 MX, programmers specify the target PC for each function argument (PE's port) using the ``buffer\_location'' attribute (Fig.~\ref{fig:hls_bucket}(a)). This creates a new connection between a PE and the AXI master of a PC. Although programmer-friendly, the interconnection may consume most of the FPGA resources (Section~\ref{sec:many2many}).
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{fig/hls_bucket.png}
\caption{Assigning different destination PCs to a bucket sort top function argument (a) AOCL style for Stratix 10 MX (b) Vivado HLS style for Alveo U50/U280}
\label{fig:hls_bucket}
\end{figure}
On Alveo U50 and U280, programmers can exploit the built-in AXI crossbar to avoid high resource consumption. An example HLS code is provided in Fig.~\ref{fig:hls_bucket}(b). Given a single function (PE) with multiple arguments (ports), a programmer can assign a different PC number (PC8 and PC9) to each argument while assigning the same bundle (axi8) to all arguments. A \textit{bundle} is a Vivado HLS concept that corresponds to an AXI master. When an AXI master is connected to ports mapped to multiple PCs (see the Makefile of Fig.~\ref{fig:hls_bucket}(b)), it automatically uses the AXI crossbar. Although area-efficient, directly using such an assignment strategy for bucket sort (Table~\ref{tab:app_perf}) may result in severe performance degradation (details in Section~\ref{sec:opt_description}).
\subsubsection{Accessing a PC from Multiple PEs}
\label{sec:multiPEtoPC}
On Stratix 10 MX, ports from multiple PEs may have the same buffer\_location attribute and access the same PC. Even if a PE port's access rate is low (e.g., binary search), one could increase the bandwidth utilization by connecting several PEs to the same PC.
Alveo U50 and U280, on the other hand, allow only one read PE and one write PE to be connected to a bundle in dataflow.\footnote{The restriction is on the number of connected PEs (functions), not the number of ports (function arguments) from a PE. Multiple read/write ports from a single PE may be connected to a bundle (e.g., to access multiple PCs as in Section~\ref{sec:multiPCtoPE}).} An example of an illegal coding style, with two read PEs connected to the same bundle, is shown in Fig.~\ref{fig:hls_bsearch}. This limitation was not a problem on non-HBM boards (e.g., \cite{Alphadata:KU3,Amazon:F1}) because one could easily utilize multiple bundles to access a few (1~\textasciitilde~4) DRAM channels; assigning multiple bundles per DRAM channel allows parallel DRAM access from multiple PEs. But due to the fixed number of AXI masters on Alveo U50 and U280, the number of usable bundles is fixed to 30 for U50 and 32 for U280. This effectively limits each bundle to a single PC. Thus, a PC's bandwidth cannot be fully utilized unless one read PE and one write PE can make full use of it.
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{fig/hls_bsearch.png}
\caption{Illegal Vivado HLS coding style of two read ports from different PEs connected to the same bundle}
\label{fig:hls_bsearch}
\end{figure}
\subsection{Applications}
\label{sec:apps}
\subsubsection{Bucket Sort}
\label{sec:bsort}
We sort an array of keys by distributing them into buckets. Each bucket is stored in a single HBM PC, which allows a second stage of sorting (e.g., with merge sort) to be performed independently for each bucket. For simplicity, we assume a key is 512b/256b (Alveo U280/Stratix 10 MX) long and that an equal number of keys is sent to each bucket. We fetch keys from 8 read HBM PCs and send them to the buckets among 8 write HBM PCs. We use the many-to-many unicast architecture that will be described in Section~\ref{sec:many2many}.
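As a functional sketch of this distribution step (a Python model; routing keys by their top three bits is our illustrative choice, the text only assumes an even split across eight buckets):

```python
def bucket_distribute(keys, n_buckets=8, key_bits=512):
    """Route each key to the bucket (one write HBM PC) selected by the
    top log2(n_buckets) bits of the key."""
    shift = key_bits - (n_buckets.bit_length() - 1)  # top 3 bits for 8 buckets
    buckets = [[] for _ in range(n_buckets)]
    for k in keys:
        buckets[k >> shift].append(k)
    return buckets

# Tiny demo with 8-bit keys: a 0b111xxxxx key lands in bucket 7.
demo = bucket_distribute([0b11100001, 0b00000001, 0b10100000], key_bits=8)
```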
\subsubsection{Radix Sort}
\label{sec:rsort}
Radix sort is composed of multiple iterations---each iteration sorts based on 3 (=$\log_2 8$) bits of the key. We sort from the least significant bits to the most significant bits to ensure the stability of the sort. Similar to bucket sort, we use 8 read PCs and 8 write PCs for radix sort. We switch the input and output PCs in each iteration and send the 512b/256b keys in a ping-pong fashion.
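The pass structure can be sketched as a small Python model (the key width is shortened for illustration; the output of each pass becomes the input of the next, mirroring the ping-pong between read and write PCs):

```python
def radix_sort(keys, key_bits=6, digit_bits=3):
    """LSB-first radix sort with a stable 2^digit_bits-way scatter per pass."""
    src = list(keys)
    for p in range(key_bits // digit_bits):
        shift = p * digit_bits
        buckets = [[] for _ in range(1 << digit_bits)]
        for k in src:                            # stable scatter by current digit
            buckets[(k >> shift) & ((1 << digit_bits) - 1)].append(k)
        src = [k for b in buckets for k in b]    # "write PCs" become next input
    return src
```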
\subsubsection{Binary Search}
\label{sec:bsearch}
We perform a binary search on an array of size 16MB. Each data element is set to 512b/256b. Each PE accesses one PC, and multiple PEs execute the search independently of each other.
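Functionally, each PE runs the standard search; a Python sketch that also counts probes, since each probe corresponds to one dependent HBM read (the next address is unknown until the current read returns):

```python
def binary_search(arr, key):
    """Return (index, probes); index is -1 if key is absent. arr is sorted."""
    lo, hi, probes = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1                  # one dependent (latency-bound) memory read
        if arr[mid] == key:
            return mid, probes
        elif arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes
```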
\subsubsection{Depth-first Search}
\label{sec:dfs}
We conduct a depth-first search on a binary tree implemented as a linked list. The value of each node and the IDs of the connected nodes form a 512b/256b data element. Each PE has a stack to store the addresses of the nodes to be searched later.
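A functional Python model of the traversal (encoding each node as a (value, left child ID, right child ID) tuple is our illustrative layout of the linked structure; the explicit stack mirrors the per-PE stack in the kernel):

```python
def dfs(nodes, root=0):
    """Iterative DFS over a linked binary tree.
    nodes[i] = (value, left_id, right_id); -1 means no child."""
    stack, order = [root], []
    while stack:
        value, left, right = nodes[stack.pop()]
        order.append(value)
        if right != -1:
            stack.append(right)   # push right first so left is visited first
        if left != -1:
            stack.append(left)
    return order
```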
\begin{table}[t]
\caption{Summary of HBM2 FPGA platform evaluation result}
\label{tab:summary}
\centering
\begin{tabular}{|c c|c c c|}
\hline
\multicolumn{2}{|c|}{Platform}&S10 MX&Alv U280&Alv U280 \\
\hline
\multicolumn{2}{|c|}{Ker Lang}&OpenCL&C&RTL \cite{Wang2020}\\
\hline
\multicolumn{2}{|c|}{Seq R/W BW (GB/s)} & 357 & 388 & 425 \\
\hline
Strided & 0.25KB & 182 & 79 & 420\\
R/W BW & 1KB & 158 & 79 & 420\\
(GB/s) & 4KB & 50 & 42 & 70\\
\hline
Read seq & 128b & 215 & 141 & \\
BW top & 256b & 353 & 276 & N/A \\
arg width & 512b & 369 & 391 & \\
\hline
Read seq & 150MHz & 154 & 285 & \\
BW kernel & 200MHz & 205 & 380 & N/A \\
clk freq & 250MHz & 256 & 393 & \\
\hline
\multicolumn{2}{|c|}{Read latency (ns)} & 633 & 229 & 107 \\
\hline
M2M unicast&2$\times$2& 186 & 209 & \multirow{3}{*}{N/A} \\
16 CH BW & 4$\times$4 & 185 & 209 & \\
(GB/s) & 8$\times$8 & 175 & 96 & \\
\hline
\end{tabular}
\end{table}
\section{HBM2 FPGA Platform Evaluation}
\label{sec:evaluation}
In this section, we analyze the on-board behavior of the HBM2 memory system using a set of HLS microbenchmarks. A summary of the results is shown in Table~\ref{tab:summary}.
When applicable, we also make a quantitative comparison with the RTL-based evaluation results of Shuhai \cite{Wang2020}. Note that, except for Section~\ref{sec:long_seq}, we omit the results for Alveo U50 due to space limitations---the per-PC performance of Alveo U50 is similar to that of U280.
\subsection{Sequential Access Bandwidth}
\label{sec:long_seq}
The maximum memory bandwidth of the HBM boards is measured with the sequential access pattern shown in Fig.~\ref{fig:microbench}(a). The experiment performs a simple data copy with read \& write, read-only, and write-only operations. We use the default bus data bitwidth of 256b for Stratix 10 MX and 512b for Alveo U50 and U280.
In order to reduce the effect of kernel invocation overhead, we transfer 1~GB of data per PC. Since a single PC cannot store 1~GB of contiguous data, we repeatedly ($k$ loop) copy 64MB of data ($i$ loop).
We coalesce (flatten) the $k$ and the $i$ loops to increase the burst length and to remove the loop iteration latency of the $i$ loop.
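A functional Python model of the coalesced copy loop (the array sizes here are tiny; on the boards, one chunk is 64MB of data and the repetitions cover 1GB per PC):

```python
def copy_coalesced(src, dst, reps):
    """The k (repetition) and i (address) loops are flattened into one
    loop, so the HLS tool can infer a single long burst instead of `reps`
    shorter bursts separated by loop-entry latency."""
    n = len(src)
    for flat in range(reps * n):     # coalesced (k, i) loop
        i = flat % n
        dst[i] = src[i]
    return dst
```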
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{fig/microbench.png}
\caption{Microbenchmark code for (a) sequential access bandwidth (b) strided access bandwidth (c) read latency measurement}
\label{fig:microbench}
\end{figure}
Shuhai \cite{Wang2020} assumes that the total effective bandwidth can be estimated by multiplying the bandwidth of a single PC by the number of PCs. In practice, however, we found that it is difficult to utilize all PCs. PCs 30 and 31 cannot be used since they overlap with the PCIe static region~\cite{Xilinx:U50}. Also, Vitis was not able to complete the routing for PCs 24--29 on U50 due to the congestion near the HBM ports. Thus, we could only use 24~PCs on U50 and 30~PCs on U280 for this experiment.
Adding more computation logic slightly worsens this problem, and we can typically use 28~PCs on Alveo U280 (MV and binary search in Table~\ref{tab:app_perf}).
\begin{table}[ht]
\caption{Effective memory bandwidth with sequential access pattern (GB/s)}
\label{tab:bw_seq}
\small
\centering
\begin{tabular}{|c|c|c c c|c|}
\hline
Platform & PC\# & Read \& Write & Read only & Write only & Ideal \\
\hline
\multirow{2}{*}{S10 MX} & 32 & 357 & 353 & 354 & 410\\
& 1 & 11.2 & 11.0 & 11.1 & 12.8\\
\hline
\multirow{2}{*}{Alv U50} & 24 & 310 & 316 & 314 & 460\\
& 1 & 12.9 & 13.2 & 13.1 & 14.4\\
\hline
Alv U280 & 30 & 388 & 391 & 393 & 460\\
(This work) & 1 & 12.9 & 13.0 & 13.1 & 14.4\\
\hline
Alv U280 & 32 & 425 & N/A & N/A & 460\\
\cite{Wang2020} & 1 & 13.3 & N/A & N/A & 14.4\\
\hline
\end{tabular}
\end{table}
The overall effective bandwidth and the effective bandwidth per PC are presented in Table~\ref{tab:bw_seq}. The per-PC bandwidth is 15\% higher on U50 and U280 than on Stratix 10 MX because the Alveo boards use a faster HBM PHY clock of 900MHz (800MHz on Stratix 10 MX). The per-PC bandwidth results also show that we can obtain about 87\% of the ideal bandwidth on Stratix 10 MX and 90\% on Alveo U50/U280. The bandwidth can be saturated with read-only or write-only access on all boards.
Due to limitations in the internal memory or the application's memory access pattern, the burst length may be far shorter than 64MB. Readers may refer to \cite{Choi2016,Choi2017+,Choi2019,Park2004} on how the effective bandwidth changes depending on the burst length and the memory latency.
\subsection{Strided Access Bandwidth}
\label{sec:stridebw}
We measure the effective bandwidth when accessing data with a fixed address stride. The granularity of the data is set to 512b.
The microbenchmark is shown in Fig.~\ref{fig:microbench}(b).
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.48\linewidth]{fig/intel_stride.png}}
\subfigure[]{\includegraphics[width=0.48\linewidth]{fig/xilinx_stride.png}}
\caption{Effective memory bandwidth with varying stride (a) Stratix 10 MX (b) Alveo U280}
\label{fig:stride_bw}
\end{figure}
The result for Alveo U280 is shown in Fig.~\ref{fig:stride_bw}(b). Compared to the strided access bandwidth reported in Shuhai \cite{Wang2020} (about 420~GB/s for both 0.25KB and 1KB strides, Table~\ref{tab:summary}), the obtained effective bandwidth is about 5X lower (79~GB/s for both 0.25KB and 1KB strides). The reason can be found in how memory requests are sent to the AXI bus. In Shuhai, a new memory request is sent as soon as the AXI bus is ready to receive one. In HLS, the generated AXI master has limitations in bookkeeping the outstanding requests. Since burst access cannot be used for strided access, the number of outstanding requests becomes much larger than that of sequential access. As a result, the HLS AXI master often stalls, and the performance is limited to about 50--80~GB/s. When the stride is longer than 4KB, the effective bandwidth is limited by the latency of opening a new page in the HBM memory \cite{Wang2020}, and the bandwidth becomes similar to that of Shuhai.
The effective bandwidth of Stratix 10 MX, on the other hand, degrades more gracefully with larger stride (Fig.~\ref{fig:stride_bw}(a)). Stratix 10 MX fixes the AXI burst length to one \cite{Intel:HBM} and instead relies on multiple outstanding requests even for sequential access. Thus, we can deduce that Stratix 10 MX's HLS AXI master is designed to handle more outstanding requests, and it can better retain the effective bandwidth even with short accesses.
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.48\linewidth]{fig/intel_bitwidth.png}}
\subfigure[]{\includegraphics[width=0.48\linewidth]{fig/xilinx_bitwidth.png}}
\caption{Effective memory bandwidth with varying kernel top function argument bitwidth (a) Stratix 10 MX (b) Alveo U280}
\label{fig:bitwidth_result}
\end{figure}
\subsection{Sequential Access Bandwidth with Varying Kernel Top Function Argument Bitwidth}
We test the effective bandwidth of sequential access while varying the bitwidth of the kernel's top function argument, which changes the data width of the AXI bus being utilized. Fig.~\ref{fig:bitwidth_result}(a) shows that the Stratix 10 MX bandwidth can be saturated with a 128b argument when simultaneously reading and writing, and with a 256b data width for read-only or write-only access. On Alveo U280, however, the performance only saturates with a wider 512b data width (read or write only). The reason for the difference is that a Stratix 10 MX kernel can typically be synthesized at a higher frequency (the maximum is 450MHz for S10 MX and 300MHz for Alveo U50/U280). As a result, Alveo U50/U280 kernels require a larger bus data width to saturate the bandwidth. This suggests that Alveo U50/U280 HLS programmers need to use larger 512b data types for top function arguments and distribute the data to more PEs after breaking it down into primitive data types (e.g., short or int). The complexity of this data distribution may reduce the kernel frequency.
\subsection{Sequential Access Bandwidth with Kernel Frequency Variation}
\label{sec:freq_variation}
The HBM2 controller and the PHY layer transfer data at a fixed frequency, but the kernel clock may change depending on the complexity of the user logic. If the kernel clock is too slow, the kernel may not be able to saturate the maximum bandwidth. To obtain the bandwidth-saturating kernel clock frequency, we measure the effective bandwidth of the sequential access pattern with varying kernel clock frequency. We fix the bus data width to the default 256b for Stratix 10 MX and 512b for Alveo U280. Although Vitis allows the kernel clock frequency to be freely configured after bitstream generation, Quartus does not have such functionality. Thus, for Stratix 10 MX, we present an estimated bandwidth that scales with the kernel clock frequency and is capped by the measured sequential access bandwidth (from Table~\ref{tab:bw_seq}).
Fig.~\ref{fig:freq_result}(a) shows that the maximum bandwidth is reached at about 200MHz for read \& write and 350MHz for read-only and write-only access on Stratix 10 MX. This is consistent with the Stratix 10 MX performance results of stencil and bucket sort in Table~\ref{tab:app_perf}---these applications use read-only or write-only ports and have kernel frequencies of 260\textasciitilde287~MHz; from Fig.~\ref{fig:freq_result}(a), it is not possible to saturate the bandwidth at these frequencies.
The Alveo U280 experimental result in Fig.~\ref{fig:freq_result}(b) shows that the bandwidth is saturated at about 150MHz for read \& write and 200MHz for read-only and write-only access. The saturation point is reached at a lower frequency on Alveo U50/U280 because these boards use a larger top function argument bitwidth. The higher saturation frequency may make Intel Stratix 10 MX suffer more when complex user circuitry lowers the kernel clock. But also note that the maximum kernel frequency is 450~MHz on Stratix 10 MX and 300~MHz on Alveo U50/U280.
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.48\linewidth]{fig/intel_freq.png}}
\subfigure[]{\includegraphics[width=0.48\linewidth]{fig/xilinx_freq.png}}
\caption{Effective bandwidth with different kernel frequency (a) Stratix 10 MX (estimated) (b) Alveo U280 (experimented)}
\label{fig:freq_result}
\end{figure}
\subsection{Many-to-Many Unicast Bandwidth}
\label{sec:many2many}
Processing elements may communicate with HBM2 PCs in various patterns---including scatter, gather, and broadcast. Among them, we analyze the most complex communication pattern where all PEs communicate with all PCs simultaneously.
We quantify the FPGA resource consumption and the effective bandwidth for transferring data from 8 read PCs to 8 write PCs. Data in a read PC is written to 1\textasciitilde8 different write PCs. There are 8 PEs transferring the data in parallel. Each PE transfers data from a single read PC to a single write PC at a time (many-to-many unicast). If read data needs to be written to multiple PCs, the write PC is changed in a round-robin fashion. The complexity of the communication architecture increases from 1$\times$1 (data read from one PC is written to one PC) to 8$\times$8 (data read from one PC is written to 8 PCs). An equal amount of data is read from and written to each PC in contiguous chunks.
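A functional Python model of a single PE in this scheme (our reading of the description above: the read stream is split into equal contiguous chunks, and chunk $j$ goes to the $j$-th destination PC in round-robin order):

```python
def m2m_unicast_pe(read_pc, write_pcs):
    """Scatter one read PC's data to len(write_pcs) write PCs, one
    contiguous chunk per destination, switching destinations round-robin."""
    chunk = len(read_pc) // len(write_pcs)
    for j, pc in enumerate(write_pcs):
        pc.extend(read_pc[j * chunk:(j + 1) * chunk])
    return write_pcs
```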
\begin{table}[ht]
\caption{Effective bandwidth and resource consumption of many-to-many unicast (from 8 read PCs to 8 write PCs)}
\label{tab:crossbar}
\centering
\begin{tabular}{|c c|c|c c|}
\hline
\multirow{2}{*}{Plat}&Comm&FPGA Resource&KClk&EffBW\\
&compl&LUT / FF / DSP / BRAM & (MHz)&(GB/s)\\
\hline
& 1$\times$1 & 147K / 376K / 12 / 958 & 406 & 187\\
S10& 2$\times$2 & 157K / 422K / 12 / 1.1K & 438 & 186\\
MX& 4$\times$4 & 169K / 464K / 12 / 1.3K & 425 & 185\\
& 8$\times$8 & 193K / 559K / 12 / 1.6K & 385 & 175\\
\hline
& 1$\times$1 & 42K / 106K / 0 / 124 & 300 & 209\\
Alv& 2$\times$2 & 44K / 109K / 0 / 124 & 300 & 209\\
U280& 4$\times$4 & 47K / 112K / 0 / 124 & 300 & 209\\
& 8$\times$8 & 51K / 120K / 0 / 124 & 300 & 96\\
\hline
\end{tabular}
\end{table}
The experimental results are shown in Table~\ref{tab:crossbar}. As explained in Section~\ref{sec:multiPCtoPE}, Stratix 10 MX creates new connections from a PE to AXI masters in proportion to the number of PCs accessed from each PE. Corresponding control logic is added as well. This causes the FPGA resource consumption to increase rapidly with a more complex communication scheme. As a result, we were not able to test the bandwidth with a 16$\times$16 crossbar that accesses all 32 PCs (16 read and 16 write PCs)---routing failed due to the complexity of the custom crossbar and the large resource consumption.
The difference in resource consumption for Alveo U280 is relatively small (only 9K LUTs and 14K FFs) even though the complexity has increased from 1$\times$1 to 8$\times$8. This is attributed to the built-in crossbar that connects all user logic AXI masters to all HBM PC AXI slaves (Section~\ref{sec:AlvU50}). Table~\ref{tab:crossbar} also reveals that the effective bandwidth is maintained in 2$\times$2 and 4$\times$4 but drops rapidly in 8$\times$8. This is due to the data congestion when crossing the 4$\times$4 unit switches in the AXI crossbar.
\subsection{Memory Latency}
\label{sec:latency}
\textit{Memory latency} is defined as the round-trip delay between the time the user logic makes a memory request and the time an acknowledgement is sent back to the user logic. This is an important metric for latency-sensitive applications. We measure the read memory latency with the pointer-chasing microbenchmark shown in Fig.~\ref{fig:microbench}(c), where the data is used as the address of the next element.
The measurement results are presented in Table~\ref{tab:mem_lat}. We break down the averaged total latency into the latency caused by the HLS PE and the latency of the memory system (HBMC + HBM memory). The HLS PE latency depends on the application. On Alveo U280, the HLS PE latency was obtained by observing the waveform. We were not able to observe the waveform in the current Quartus debugging environment for Stratix 10 MX, so we present an estimate based on the loop latency in the AOCL synthesis report.
The overall latency is longer on Stratix 10 MX than on Alveo U280 because of the heavy pipelining AOCL automatically applies to improve the throughput. As a side effect, the latency of the pointer-chasing PE becomes very long (492~ns). Such long latency also causes the binary search application (Table~\ref{tab:app_perf}) to have a low effective bandwidth (5.2~GB/s). The HLS PE latency may be improved if HLS vendors provide an optional latency-sensitive (less pipelined) flow.
\begin{table}[ht]
\caption{Read memory latency measurement result}
\label{tab:mem_lat}
\centering
\begin{tabular}{|c|c c|}
\hline
& S10 MX & Alv U280 \\
\hline
Total & 633~ns & 229~ns \\
\hline
HLS PE & \textasciitilde492~ns & 47~ns \\
HBMC+HBM & \textasciitilde141~ns & 182~ns \\
\hline
\hline
HBMC+HBM \cite{Wang2020} & N/A & 107~ns \\
\hline
\end{tabular}
\end{table}
Note that, compared to the read latency measured in Shuhai~\cite{Wang2020} (107~ns), the latency of the HBMC and HBM on Alveo U280 is longer in our measurement (182~ns). This is likely because Shuhai measured page hits with the AXI crossbar disabled~\cite{Wang2020}---which removes the crossbar latency and any interaction with other AXI masters accessing the HBM in parallel.
\section{Effective Bandwidth Improvement for HBM HLS}
\label{sec:opt}
\subsection{Problem Description}
\label{sec:opt_description}
We identify two problems from the HLS-based implementation of the memory-bound applications listed in Table~\ref{tab:app_perf}. First, in bucket sort, we use the many-to-many unicast architecture (Section~\ref{sec:many2many}) to distribute the keys from input PCs to output PCs (each PC corresponds to a bucket) in parallel. However, there is a large difference between the effective bandwidth of bucket sort (36~GB/s, Table~\ref{tab:app_perf}) and the many-to-many unicast microbenchmark result (96~GB/s, Table~\ref{tab:crossbar}). This is because in the many-to-many unicast experiment, we transferred data from input PCs to output PCs in large contiguous chunks, whereas in bucket sort there is no guarantee that two consecutive keys will be sent to the same bucket. This becomes problematic since existing HLS tools do not automatically hold the data in a buffer for burst AXI access to each HBM PC. Stratix 10 MX better retains a high bandwidth for short memory accesses (Section~\ref{sec:stridebw}), so its effective bandwidth is high (125~GB/s, Table~\ref{tab:app_perf}) for bucket sort. But short memory access does become a problem on Alveo U280 due to its dependence on long burst lengths---we found that Vivado HLS conservatively sets the AXI burst length to one for the bucket sort key write.
Second, in binary search, some degradation in effective bandwidth is unavoidable because the search needs to wait for data to arrive before it can access the next address. We can alleviate this problem by connecting several PEs to the same PC for shared access and amortizing the memory latency. This technique increases the effective bandwidth from 3.1~GB/s to 5.2~GB/s on Stratix 10 MX. But, as mentioned in Section~\ref{sec:multiPEtoPC}, the HLS tool for Alveo U280 only allows up to one read and one write PE to access the same PC. As a result, the effective bandwidth could not be improved beyond 7.7~GB/s (Table~\ref{tab:app_perf}) on Alveo U280.
The two problems on Alveo U280 can be summarized as follows:
\begin{itemize}
\item
Problem 1: Suppose there is a PE that accesses multiple PCs. The order of the destination PC is random but the access address to each PC is sequential. Then how can we improve the effective bandwidth in HLS?
\item
Problem 2: Given multiple PEs with low memory access rate and an AXI master that allows read access from one PE and write access from one PE, how can we improve the effective bandwidth in HLS?
\end{itemize}
In this section, we concentrate on solving the problems listed above for Alveo U280. We provide some insights into solving the issues for Stratix 10 MX in Section~\ref{sec:insight}.
\subsection{Proposed Solution}
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{fig/bica.png}
\caption{Batched inter-channel arbitrator to 8~PCs (a) architecture (b) HLS code}
\label{fig:bica}
\end{figure}
\subsubsection{Solution 1: Batched Inter-Channel Arbitrator (BICA)}
Since Vivado HLS does not automate burst access to different PCs, we propose BICA, which batches the accesses to each PC. We instantiate multiple FIFOs to be used as batch buffers, one per PC (Fig.~\ref{fig:bica}(a)). Based on the data's target PC information, a splitter sends the data to the corresponding batch buffer (lines~1--10 in Fig.~\ref{fig:bica}(b)). To infer burst access, we make the innermost loop of the write PE read from a FIFO and write to a PC for a fixed burst length (lines~14--16 in Fig.~\ref{fig:bica}(b)). The performance increases with a longer burst length (at the cost of a larger batch buffer size), as will be shown in Section~\ref{sec:opt_exp}.
The PC splitter logic of BICA has an initiation interval (II) of 1 on Alveo U280.
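The HLS code in Fig.~\ref{fig:bica}(b) is shown as an image; the behavior can also be sketched as a functional Python model (the FIFO sizing and the end-of-stream flush of partial batches are illustrative details we added):

```python
from collections import deque

def bica(stream, n_pcs=8, blen=4):
    """Batch writes per destination PC: the splitter routes each (pc, key)
    pair into that PC's batch FIFO, and a full FIFO is drained as one
    blen-long burst transaction."""
    fifos = [deque() for _ in range(n_pcs)]
    bursts = []                                   # (pc, keys) transactions
    for pc, key in stream:                        # splitter (II = 1)
        fifos[pc].append(key)
        if len(fifos[pc]) == blen:                # full batch -> burst write
            bursts.append((pc, [fifos[pc].popleft() for _ in range(blen)]))
    for pc, f in enumerate(fifos):                # flush partial batches
        if f:
            bursts.append((pc, list(f)))
    return bursts
```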
The time $t_{BUR}$ taken to complete one burst transaction of length $BLEN$ to HBM PC can be modeled as \cite{Choi2017+,Park2004}:
\begin{equation}
t_{BUR}=BLEN*DW/BW_{max} + LAT
\label{eq:bica_mem_model}
\end{equation}
where $DW$ is the data bitwidth (512b), $BW_{max}$ is the maximum effective bandwidth (sequential access bandwidth) of one PC, and $LAT$ is the memory latency. $BW_{max}$ and $LAT$ are obtained from the microbenchmarks in Section~\ref{sec:evaluation}.
The effective bandwidth of a PC after applying BICA ($BW_{PC}$) is estimated as $BW_{PC}=BLEN*DW/t_{BUR}$, with $t_{BUR}$ given by Eq.~\ref{eq:bica_mem_model}. The overall effective bandwidth $BW_{eff}$ ($=PC_{num}*BW_{PC}$) cannot exceed the many-to-many unicast bandwidth ($BW_{mc}$) obtained in Section~\ref{sec:many2many}. $BW_{eff}$ is modeled as:
\begin{equation}
BW_{eff}=min(PC_{num}*BLEN*DW/t_{BUR},BW_{mc})
\label{eq:bica_effbw_model}
\end{equation}
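Eqs.~\ref{eq:bica_mem_model} and \ref{eq:bica_effbw_model} can be evaluated numerically. In the Python sketch below, the default parameter values (roughly one PC's sequential bandwidth from Table~\ref{tab:bw_seq}, the total read latency from Table~\ref{tab:mem_lat}, and $BW_{mc}$ from Table~\ref{tab:crossbar}) are indicative numbers, so the output is a model estimate rather than a measured result:

```python
def bica_bw_gbs(blen, dw_bits=512, bw_max=13e9, lat=229e-9,
                pc_num=8, bw_mc=96e9):
    """Modeled effective bandwidth (GB/s) after BICA, Eqs. (1)-(2).
    bw_max: per-PC sequential bandwidth (B/s); lat: read latency (s)."""
    t_bur = blen * dw_bits / 8 / bw_max + lat     # Eq. (1): one burst's time
    bw_pc = blen * dw_bits / 8 / t_bur            # per-PC bandwidth
    return min(pc_num * bw_pc, bw_mc) / 1e9       # Eq. (2): crossbar bound
```

Longer bursts amortize the fixed latency term until the crossbar bound $BW_{mc}$ is reached.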
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{fig/bipa.png}
\caption{Batched inter-PE arbitrator for two PEs (a) architecture (b) HLS code}
\label{fig:bipa}
\end{figure}
\subsubsection{Solution 2: Batched Inter-PE Arbitrator (BIPA)}
In order to allow multiple PEs to share access to a PC in the current HBM HLS environment, we propose BIPA, which arbitrates and batches the memory requests from multiple PEs. The architecture of BIPA is shown in Fig.~\ref{fig:bipa}(a). BIPA receives the addresses of memory requests in a non-blocking, round-robin fashion from the connected PEs (lines~4--8 of Fig.~\ref{fig:bipa}(b)). The memory access requests are then serialized and sent to the HBM PC. After receiving the read data from the HBM PC, the data is sent back to the PE that requested it (lines~9--15 of Fig.~\ref{fig:bipa}(b)). For write request arbitration, BIPA can be further simplified by removing the data send logic.
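The arbitration behavior can be sketched as a functional Python model (our reading of Fig.~\ref{fig:bipa}; the memory is modeled as a plain dictionary, and timing is not modeled):

```python
from collections import deque

def bipa_read(request_fifos, memory):
    """Round-robin, non-blocking read arbitration from several PEs to one
    PC: empty request FIFOs are skipped, requests are serialized toward
    the PC, and each response is returned to the requesting PE."""
    responses = [[] for _ in request_fifos]
    while any(request_fifos):
        for pe, fifo in enumerate(request_fifos):  # round-robin scan
            if fifo:                               # non-blocking: skip empty
                addr = fifo.popleft()
                responses[pe].append(memory[addr]) # serialized PC access
    return responses
```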
HLS synthesis shows that we can achieve an II of 1 for BIPA. From Eq.~\ref{eq:bica_mem_model}, we assume all memory requests have data bitwidth $DW$ and a burst length ($BLEN$) of 1. For $BW_{max}$, we use the strided access bandwidth $BW_{str}$ obtained in Section~\ref{sec:stridebw}. The bandwidth is divided by the number of PEs ($PE_{num}$) accessing BIPA due to the time sharing among PEs ($BW_{max}=BW_{str}/PE_{num}$). The latency $LAT$ is affected by the HBM memory system ($LAT_{HBM}$) and the PE overhead ($LAT_{PE}$), as mentioned in Section~\ref{sec:latency}. It should also include the address and data arbitration latency, which increases with the number of PEs. The effective bandwidth using BIPA is modeled as:
\begin{equation}
BW_{eff}=PC_{num}/[1/BW_{str}+(LAT_{HBM} + LAT_{PE} + 2*PE_{num})/DW]
\label{eq:bipa_effbw_model}
\end{equation}
\subsection{Experimental Results}
\label{sec:opt_exp}
\subsubsection{BICA}
Table~\ref{tab:resource_bicabipa} shows the resource consumption of BICA that accesses 8~PCs with 512b data and has FIFO depth of 64.
\begin{table}[ht]
\caption{FPGA resource consumption of BICA and BIPA}
\label{tab:resource_bicabipa}
\centering
\begin{tabular}{|c|c c c c|}
\hline
&LUT & FF & DSP & BRAM \\
\hline
BICA (to 8 PCs) & 5.3K & 11K & 0 & 60 \\
BIPA (from 8 PEs) & 1.4K & 1.4K & 0 & 0 \\
\hline
\end{tabular}
\end{table}
We test the performance improvement from using BICA with bucket sort (Section~\ref{sec:bsort}) and radix sort applications (Section~\ref{sec:rsort}). The result is shown in Table~\ref{tab:bica}.
The effective bandwidth increases with a longer burst length. For bucket sort, it is saturated by the many-to-many unicast bandwidth ($BW_{mc}$=96~GB/s, from Table~\ref{tab:crossbar}) at $BLEN$=64 (=4KB). The effective bandwidth of radix sort is slightly lower than that of bucket sort due to its lower kernel frequency. The average effective bandwidth improvement of the two applications is 2.4X. Note that the BRAM usage does not change with increasing burst length because the minimum depth of Alveo U280 BRAMs is 512. That is, the BRAMs in the PC batch buffers are under-utilized.
\begin{table}[ht]
\caption{Effective bandwidth and FPGA resource consumption of bucket sort and radix sort on Alveo U280}
\label{tab:bica}
\centering
\begin{tabular}{|c|c c|c|c c|}
\hline
\multirow{2}{*}{App} & \multirow{2}{*}{Arch}&B & FPGA Resource&KClk&EffBW\\
& &LEN& LUT/FF/DSP/BRAM &(MHz)&(GB/s)\\
\hline
& \multicolumn{2}{c|}{Baseline} & 42K / 108K / 0 / 124 & 300 & 36\\
Buc & & 16 & 102K / 192K / 0 / 604 & 242 & 58\\
sort & BICA & 32 & 103K / 192K / 0 / 604 & 232 & 88\\
& & 64 & 102K / 192K / 0 / 604 & 220 & 98\\
\hline
& \multicolumn{2}{c|}{Baseline} & 111K / 223K / 0 / 124 & 217 & 35\\
Rad & & 16 & 170K / 286K / 0 / 604 & 191 & 42\\
sort & BICA & 32 & 169K / 278K / 0 / 604 & 200 & 68\\
& & 64 & 169K / 278K / 0 / 604 & 180 & 75\\
\hline
\end{tabular}
\end{table}
\subsubsection{BIPA}
In Table~\ref{tab:resource_bicabipa}, we report the resource consumption of BIPA, which allows 8~PEs with 512b data to share access to a single PC. We test the performance improvement from using BIPA with binary search (Section~\ref{sec:bsearch}) and depth-first search (Section~\ref{sec:dfs}). The result is shown in Table~\ref{tab:bipa}. As more PEs are added, the resource consumption increases as well.
The effective bandwidth does not improve linearly with more PEs because of the arbitration overhead (Eq.~\ref{eq:bipa_effbw_model}). Compared to the baseline implementation of 1 PE per PC, the average effective bandwidth improvement of the two applications is 3.8X.
\begin{table}[ht]
\caption{Effective bandwidth and FPGA resource consumption of binary search and depth-first search on Alveo U280}
\label{tab:bipa}
\centering
\begin{tabular}{|c|c c|c|c c|}
\hline
\multirow{2}{*}{App} & \multirow{2}{*}{Arch}&PE & FPGA Resource&KClk&EffBW\\
& &\#& LUT/FF/DSP/BRAM &(MHz)&(GB/s)\\
\hline
& \multicolumn{2}{c|}{Baseline} & 35K / 54K / 0 / 46 & 300 & 7.7\\
\multirow{2}{*}{Bin} & & 2 & 66K / 98K / 0 / 50 & 300 & 11\\
\multirow{2}{*}{srch} & \multirow{2}{*}{BIPA} & 4 & 113K / 162K / 0 / 57 & 274 & 17\\
& & 8 & 204K / 280K / 0 / 57 & 237 & 23 \\
& & 16 & 379K / 518K /112/ 57 & 135 & 34 \\
\hline
& \multicolumn{2}{c|}{Baseline} & 35K / 70K / 0 / 88 & 300 & 7.4\\
\multirow{2}{*}{DFS}& & 2 & 63K / 123K / 0 / 106 & 300 & 11\\
& BIPA & 4 & 102K / 190K / 0 / 141 & 284 & 17\\
& & 8 & 182K / 308K / 0 / 197 & 199 & 23\\
\hline
\end{tabular}
\end{table}
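As a quick cross-check (not an additional measurement), the averaged improvement factors of 2.4X for BICA and 3.8X for BIPA can be reproduced from the effective-bandwidth entries of Tables~\ref{tab:bica} and \ref{tab:bipa}, taking the best configuration of each application:

```python
# Effective bandwidth (GB/s) transcribed from the tables: (baseline, best config).
bica = {"bucket_sort": (36, 98), "radix_sort": (35, 75)}   # best at BLEN=64
bipa = {"binary_search": (7.7, 34), "dfs": (7.4, 23)}      # best at 16 and 8 PEs

def avg_speedup(table):
    """Arithmetic mean of per-application speedups over the baseline."""
    return sum(best / base for base, best in table.values()) / len(table)

print(round(avg_speedup(bica), 1))  # 2.4
print(round(avg_speedup(bipa), 1))  # 3.8
```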
\section{Insights for HBM HLS Improvement}
\label{sec:insight}
In this section, we provide a list of insights for HLS tool vendors or researchers to further improve the existing FPGA HBM design flow.
\noindent \textit{\textbf{Insight 1}: Customizable Crossbar Template}
The FPGA resource needed to enable the many-to-many unicast architecture was kept small on Alveo U280 because of the pre-defined AXI crossbar (Section~\ref{sec:many2many}), but the effective bandwidth dropped beyond 4$\times$4 AXI masters/slaves. By increasing the complexity of the unit crossbar to 8$\times$8, or by employing an additional crossbar stage, we could obtain larger effective bandwidth at the cost of increased resource usage. On Stratix 10 MX, the area increases rapidly when accessing multiple PCs, and it was only possible to fully access up to 8 output PCs (a 16$\times$16 crossbar failed routing). On both platforms, highly optimized and customizable HBM crossbar templates from HLS vendors would significantly improve performance and reduce design time.
\smallskip
\noindent \textit{\textbf{Insight 2}: Virtual Channel for HLS}
In BICA (Section~\ref{sec:opt}), many different (and under-utilized) FIFOs are needed to enable burst access to different HBM2 PCs. But such an architecture multiplies the FIFO control logic and complicates the PnR process. One possible solution is to allow a single physical FIFO to be shared among multiple virtual channels \cite{Dally1987} in HLS. A new HLS syntax would be needed to allow FIFO accesses to carry a virtual channel tag. This would make it easier to share the FIFO control logic and thus reduce the FPGA resource consumption.
\smallskip
\noindent \textit{\textbf{Insight 3}: Low-Latency Memory Pipeline}
Since FPGA HBM platforms are likely to be used for a wide range of memory-bound applications, it is important that HBM HLS tools also support memory latency-bound applications. We have demonstrated with the pointer-chasing microbenchmark (Section~\ref{sec:latency}) and the binary search application that the deep memory pipeline of current HLS tools degrades the performance of latency-bound applications. It would help programmers if HLS tools provided the option of a less deeply pipelined memory system (possibly at the cost of slightly degraded throughput).
\smallskip
\section{Conclusion}
Our evaluation of microbenchmarks and applications shows that recent HBM2 FPGA boards can achieve an effective bandwidth of 310--390~GB/s, which allows various memory-bound applications to reach high throughput. Due to the overhead and the architectural limitations of the logic generated by HLS tools, however, this performance is sometimes difficult to achieve. The novel HLS-based optimization techniques presented in this paper improve the effective bandwidth when a PE accesses multiple PCs and when multiple PEs share access to a PC. Our study also provides insights for further improving the HBM HLS design flow.
\section{Acknowledgments}
This research is in part supported by Intel and NSF Joint Research Center on Computer Assisted Programming for Heterogeneous Architectures (CAPA) (CCF-1723773), NSF Grant on RTML: Large: Acceleration to Graph-Based Machine Learning (CCF-1937599), Xilinx Adaptive Compute Cluster (XACC) Program, and Google Faculty Award. We thank Michael Adler, Aravind Dasu, John Freeman, Lakshmi Gopalakrishnan, Mohamed Issa, Audrey Kertesz, Eriko Nurvitadhi, Manan Patel, Hongbo Rong, Oliver Tan at Intel, Thomas Bollaert, Matthew Certosimo, and David Peascoe at Xilinx for helpful discussions and suggestions.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{sec_introduction}
Spatial tessellations
are of interest because of their wide applicability.
The perhaps simplest model of a disordered cellular structure
is the Poisson-Voronoi tessellation
obtained by constructing
Voronoi cells around point-like `seeds' distributed
randomly and uniformly in space.
Whereas two- and three-dimensional Poisson-Voronoi cells are relevant for
real-life cellular structures, the higher-dimensional case has
applications in data analyses of various kinds.
An excellent overview of the many applications is given in the
monograph by Okabe {\it et al.} \cite{Okabeetal00}.
\vspace{2mm}
Beginning with the early work of Meijering \cite{Meijering53},
much theoretical effort has been spent on finding exact analytic
expressions for the basic
statistical properties of the Voronoi tessellation, in particular in
spatial dimensions $d=2$ and $d=3$.
Quantities of primary interest
are the probability $p_n(d)$ that a cell have exactly $n$
sides (in dimension $d=2$) or $n$ faces (in dimension $d=3$).
Among the very few analytic results that are available for these quantities,
there is a determination \cite{Hilhorst05a,Hilhorst05b} of the
asymptotic behavior of $p_n(2)$ in the large-$n$ limit.
That calculation also yields the asymptotic behavior of the average
area and perimeter of the two-dimensional $n$-sided cell.
Following that exact work
a heuristic theory was developed \cite{Hilhorst09},
valid again in the large-$n$ limit,
that for $d=2$ reproduces the exact
results and that may also be applied in dimension $d>2$.
In this work we will confront the predictions of this
`large-$n$ theory', as we will call it, with newly obtained Monte
Carlo data on 3D Poisson-Voronoi cells.
Large-$n$ theory is based on the idea that certain properties
of a large $n$ cell, just like those of a statistical system
in the thermodynamic limit,
acquire sharply peaked probability distributions
that may for many purposes be replaced with their averages.
We will be interested in
the most characteristic cell properties, {\it viz.}
the average volume $V_{n_F}$ and surface area $S_{n_F}$ of an ${n_F}$-faced cell,
and the average area $A_{n_E}$ and perimeter $P_{n_E}$ of an ${n_E}$-edged
cell face.
Large-$n$ theory assumes that for
${n_F}\to\infty$ the ${n_F}$-faced cell tends to a sphere and
predicts the leading asymptotic
behavior of $V_{n_F}$ and $S_{n_F}$, {\it viz.}
power laws in ${n_F}$, including their prefactor.
We here extend this theory such as to
also make predictions for $A_{n_E}$ and $P_{n_E}$ as ${n_E}\to\infty$.
It appears that in the case of the many-edged face
an important role is played by the distance, to be called $2L$,
between the seeds of the cells sharing that cell face.
We will refer to $L$ as the `focal distance' because of a
superficial
resemblance to the foci of, {\it e.g.,} an ellipse.
The extended theory provides an expression for the probability
distribution of $L$ given ${n_E}$.
It appears that whereas ${A}_{n_E}$ and ${P}_{n_E}$
increase with ${n_E}$, the average focal distance ${L}_{n_E}$ {\it decreases\,} to
zero as ${n_E}\to\infty$.
\vspace{2mm}
Monte Carlo simulation of Poisson-Voronoi cells
has a tradition that is many decades old.
A computer code developed by Brakke
\cite{Brakke8x} in the 1980's is still used today.
The quality of a Monte Carlo
simulation is first of all determined by the
number of cells that it has generated.
Recent Monte Carlo work by Mason {\it et al.} \cite{Masonetal12}
and by Lazar {\it et al.} \cite{Lazaretal13} focused on the
statistical topology of networks in two and three dimensions.
In Ref.\,\cite{Lazaretal13} Lazar {\it et al.}, using Brakke's code,
produced a data set of 250 million three-dimensional Poisson-Voronoi
cells, larger than any ever obtained before.
The simulation generates successive batches of $10^6$ cells from
$10^6$ seeds randomly and uniformly distributed in a cubic volume
with periodic boundary conditions.
The authors provided an analysis of their data%
\footnote{Available on the Internet \cite{website}.}
with strong emphasis on the identification of the frequency of
different topological cell types.
In the present work we extend the data set to
four billion ($4 \times 10^9$) three-dimensional cells.
We then compare this enlarged data set to large-$n$ theory.
We find that in all cases the Monte Carlo data are fully compatible
with the predictions of the theory.
There appear, however, to be large finite-size corrections.
We discuss to what extent the theoretical law for these subleading terms
may be inferred from the data.
\vspace{2mm}
This paper is organized as follows.
In section \ref{sec_cell}
we consider first the theory and then the Monte Carlo data for the
${n_F}$-faced cell.
In section \ref{sec_facetheory}
we extend the theory to the ${n_E}$-edged cell face and in section
\ref{sec_faceMC} we present and discuss the Monte Carlo data for
those faces.
In section \ref{sec_higher} we consider subleading terms to the asymptotic
behavior.
In section \ref{sec_discussion} we present a table with our main
results and a critical discussion of their validity.
In section \ref{sec_conclusion} we conclude.
\section{The many-faced cell}
\label{sec_cell}
\subsection{Theory and simulations}
\label{sec_thsim}
Let there be a three-dimensional Poisson-Voronoi tessellation of seed
density $\lambda$. We will take $\lambda=1$ unless stated otherwise.
Large-$n$ theory as described in Ref.\,\cite{Hilhorst09}
is directly applicable to the volume and surface area of
the three-dimensional ${n_F}$-faced cell.
We will simply state the results for these quantities and delve deeper
into the theory only in section \ref{sec_facetheory}.
When ${n_F}$ gets large, and if we assume that the cell tends towards a
sphere%
\footnote{This is a very natural idea.
The approach of large 2D cells to circles, and
higher-dimensional generalizations of this property, have been
proved rigorously in the mathematical literature
\cite{CalkaSchreiber05,HugSchneider07},
albeit under hypotheses that do not cover our case.}
of an as yet unknown radius $R_{n_F}$,
the first neighbor seeds must lie close to
a spherical surface of radius $2R_{n_F}$.
It was shown in Ref.\,\cite{Hilhorst09}
that the volume enclosed by this spherical
surface must be such that under unconstrained conditions
it would have contained on average ${n_F}$ seeds, that is,
\begin{equation}
\frac{4\pi}{3}(2R_{n_F})^3 \simeq {n_F}.
\label{relRncell}
\end{equation}
Throughout, the sign `$\simeq$' will denote
an equality valid asymptotically
in the limit ${n_F}\to\infty$.
Eq.\,(\ref{relRncell}) yields $R_{n_F}$ as a function of ${n_F}$.
The Voronoi cell of the central seed then has a volume ${V}_{n_F}$
and surface area ${S}_{n_F}$ given by%
\footnote{We let $X_n=V_n, S_n, A_n, P_n, L_n$ denote averages. When a
distinction is needed we write $X_n^{\rm th}$ for the leading order
theoretical behavior and $X_n^{\rm MC}$ for a Monte Carlo
determination of $X_n$.\label{footnote_one}}
\begin{subequations}\label{resVnSn}
\begin{equation}
V_{{n_F}}^{\rm th} = \frac{4\pi}{3} R_{n_F}^3 \simeq \frac{{n_F}}{8},
\label{resVn}
\end{equation}
\begin{equation}
S_{{n_F}}^{\rm th}= 4\pi R_{n_F}^2 \simeq
\left( \frac{9\pi}{16} \right)^{1/3} \!{n_F}^{2/3}.
\label{resSn}
\end{equation}
\end{subequations}
These theoretical averages
have been obtained without the aid of any adjustable parameter.
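The prefactors in Eqs.\,(\ref{resVnSn}) follow from Eq.\,(\ref{relRncell}) by straightforward algebra. As a minimal numerical sketch (with $\lambda=1$), one may solve (\ref{relRncell}) for $R_{n_F}$ and verify both power laws:

```python
import math

n_F = 1.0e6                                # any large n_F; the relations are exact power laws
R = (3 * n_F / (32 * math.pi)) ** (1 / 3)  # from (4*pi/3)*(2R)^3 = n_F, Eq. (relRncell)
V = (4 * math.pi / 3) * R ** 3             # cell volume, Eq. (resVn)
S = 4 * math.pi * R ** 2                   # cell surface area, Eq. (resSn)

assert abs(V - n_F / 8) < 1e-9 * V
assert abs(S - (9 * math.pi / 16) ** (1 / 3) * n_F ** (2 / 3)) < 1e-9 * S
```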
In figure \ref{fig_cell} we have presented the Monte Carlo data for
$V_{n_F}^{\rm MC}$ and $S_{n_F}^{\rm MC}$
obtained by averaging over a set of four billion ($4 \times 10^9$) cells.
Each quantity has been divided by its theoretical
large-${n_F}$ behavior (\ref{resVnSn}),
so that for both the data points are expected to tend
to unity as ${n_F}\to\infty$. These data appear to fully conform to this
limit behavior, even if the finite-$n$ corrections are still large.
We will analyze these subleading terms to the asymptotic laws
in section \ref{sec_higher}.
\vspace{2mm}
It is worth noting that Eq.\,(\ref{resVn}) generalizes
Lewis' law \cite{Lewis28}
for the average area $A^{(2)}_n$
of a two-dimensional $n$-sided cell.
This law, inspired a long time ago by the study of
epithelial cucumber cells,
hypothesizes that $A^{(2)}_n=cn$ with a coefficient $c$ estimated
in the range from 0.20 to 0.25.
An exact two-dimensional calculation
\cite{Hilhorst05b} has shown that this law effectively holds for
2D Poisson-Voronoi cells, albeit only asymptotically, as
\begin{equation}
A^{(2)}_n \simeq \frac{n}{4}\,.
\label{resA2n}
\end{equation}
The two-dimensional large-$n$ theory reproduces the exact result
(\ref{resA2n}) and this is one reason why we have confidence that the
three-dimensional relations (\ref{resVnSn}) are also exact.
\subsection{Comments}
\label{sec_sphericity}
We conclude this section by a few comments.
\vspace{2mm}
{\it 1. Balance of entropic forces.}
Expression (\ref{relRncell}) results \cite{Hilhorst09}
from a balance between
two `forces,' both of purely entropic origin and extensive in ${n_F}$.
The first one comes from the necessity -- if there is to be an
${n_F}$-faced cell -- to have ${n_F}$ first-neighbor seeds in the
vicinity of the central seed; the entropy of such a configuration
{\it increases\,} with the size of the allowable vicinity.
The second one comes from the necessity for all other seeds not to interfere,
and hence to stay
out of an exclusion volume surrounding this vicinity; the entropy of the
other seeds {\it decreases\,} with growing size of the exclusion volume.
\vspace{2mm}
{\it 2. Local and global deviations from sphericity.}
The statement that the `cell surface tends to a sphere'
may be decomposed into
(i) `the first-neighbor seeds align along a surface,' and
(ii) `this surface tends to a sphere.'
A few words are in order about both.
(i) The {\it local\,} fluctuations of the first-neighbor positions
perpendicular to their surface of alignment
is characterized by a width $w_{{n_F}}$.
The scaling of $w_{n_F}$ with ${n_F}$ results from the entropy balance;
in three dimensions $w_{n_F} \sim n_F^{-2/3}$ was found
\cite{Hilhorst09}.
(ii) How closely the surface of alignment
approaches a sphere is determined by its
{\it global\,} properties. It was shown in Ref.\,\cite{Hilhorst05b}
that the surface of the two-dimensional $n$-sided cell
(actually, a closed curve)
is subject to `elastic' deformations at the scale of the cell itself,
the elasticity being again of entropic origin.
The elastic entropy remains finite as $n\to\infty$ and
does not weigh in the entropy balance that determines
the two-dimensional $R_{n}^{(2)}$ and $w_{n}^{(2)}$.
However, the elastic modes do
contribute to the deviations of the surface from
sphericity (actually, circularity in 2D).
For finite $n$ there is no sharp distinction between (i) and (ii),
but in 2D they were shown to decouple when $n\to\infty$.
\vspace{2mm}
{\it 3. Monte Carlo evidence for the approach to sphericity.}
The fluctuations away from sphericity are still
fairly large for the values of ${n_F}$ that appear in the simulations.
Upon assuming a 3D scenario analogous to the one in 2D we conclude that
these fluctuations are due to a combination of the nonvanishing
shell width $w_{n_F}$ and the elastic deformations.
The Monte Carlo results confirm, however,
the hypothesized approach to sphericity for the following reason.
From Fig.\,\ref{fig_cell} and the known values (\ref{resVnSn}) of
$V_{n_F}^{\rm th}$ and $S_{n_F}^{\rm th}$
one sees that the ratio $6\pi^{1/2}V_{n_F}^{\rm MC}/(S_{n_F}^{\rm MC})^{3/2}$
tends to unity when ${n_F}\to\infty$.
If $S_{n_F}^{\rm MC}$ referred to a single surface enclosing a volume
$V_{n_F}^{\rm MC}$, this ratio could be unity
only if that surface enclosed the largest possible volume,
that is, if it were a sphere.
For the sharply peaked distribution of surface areas
observed in our simulations the same conclusion remains valid.
\vspace{2mm}
{\it 4. Entropy balance and elastic modes.}
The nonextensivity of the elastic entropy allows for the entropy balance
to be set up without taking into account the elastic modes, that is, by
considering the surface of alignment as a sphere right from the start.
In the same spirit, when in the next section we will
consider seed positions that align along
a toroidal surface, we will do so without
regard for the elastic deformations of that surface.
\begin{figure}
\begin{center}
\scalebox{.40}
{\includegraphics{figure_1.eps}}
\end{center}
\caption{\small Monte Carlo
averages $V_{n_F}^{\rm MC}$ and $S_{n_F}^{\rm MC}$ of
the volume and surface area, respectively,
of an ${n_F}$-faced cell, each divided by its theoretical asymptotic
behavior, Eqs.\,(\ref{resVnSn}). Both sets of data points
are predicted, therefore, to tend to unity as ${n_F}\to\infty$.
The solid red lines approach this limit value
as $\sim n^{-2/3}$ and represent our best estimates for the
next-order correction to the leading asymptotic behavior
(section \ref{sec_higher}).}
\label{fig_cell}
\end{figure}
\section{The many-edged face: theory}
\label{sec_facetheory}
\subsection{Torus}
\label{sec_facetorus}
\subsubsection{Preliminaries}
\label{sec_torus}
Let us consider an arbitrarily selected ${n_E}$-edged cell face
between two neighboring Voronoi cells.
Let the seeds
of the two cells (the `focal' seeds) have positions $\mathbf{S}_1$ and $\mathbf{S}_2$.
By a suitable choice of the origin $\mathbf{O}$ and the direction
of the $z$ axis we obtain $\mathbf{S}_1=(0,0,L)$ and $\mathbf{S}_2=(0,0,-L)$,
where $L$ is the `focal distance'.
It is a random variable whose distribution we do not know {\it a priori}.
The ${n_E}$-edged face is then located in the $xy$ plane;
a typical face is shown schematically in figure \ref{fig_OCC}.
We number its edges by $m=1,2,\ldots,{n_E}$ according to increasing
polar angle and let $\ell_m$ denote the line that prolongs the $m$th edge.
We let furthermore
$\mathbf{C}_m$ denote the projection of the origin $\mathbf{O}$ onto $\ell_m$
and $\mathbf{T}_1,\ldots,\mathbf{T}_{n_E}$ the vertices of the ${n_E}$-edged face.
The $m$th edge is common to the Voronoi cells of $\mathbf{S}_1$, $\mathbf{S}_2$, and
of a third seed whose position we call $\mathbf{F}_m$.
We will refer to the $\mathbf{F}_m$ as the `first neighbors' of the
pair $(\mathbf{S}_1,\mathbf{S}_2)$.
Figure \ref{fig_SSF} represents the plane through these three seeds,
that we will also refer to as the $m$th `first-neighbor' plane.
The three planes that perpendicularly bisect the line segments
connecting these three seeds intersect along line $\ell_m$.
This line is perpendicular to the $m$th first-neighbor plane
and intersects it in
$\mathbf{C}_m$, which is therefore equidistant to the three seeds, as shown by
the large circular arc of radius $r_m$.
As announced at the end of section \ref{sec_cell}, we are assuming
that it is safe in this discussion to neglect
the elastic deformations of the torus.
\begin{figure}
\begin{center}
\scalebox{.40}
{\includegraphics{figure_2.eps}}
\end{center}
\caption{\small Geometry in the plane (`$xy$' plane)
of the ${n_E}$-edged face shared by two cells having their
seeds in $\mathbf{S}_1$ and $\mathbf{S}_2$. The
line segment connecting these seeds is perpendicular to this plane
and is bisected by it in $\mathbf{O}$.
The $m$th edge of the face connects the vertices $\mathbf{T}_m$ and
$\mathbf{T}_{m+1}$ and lies on a line $\ell_m$.
The $\mathbf{C}_m$ are the projections of $\mathbf{O}$ onto the $\ell_m$.}
\label{fig_OCC}
\end{figure}
\begin{figure}
\begin{center}
\scalebox{.45}
{\includegraphics{figure_3.eps}}
\end{center}
\caption{\small Geometry in the first-neighbor plane
passing through the seeds $\mathbf{S}_1$, $\mathbf{S}_2$, and
$\mathbf{F}_m$. Point $\mathbf{C}_m$ is the center of the circle passing through
these three seeds. The cell face studied lies in the plane through
O perpendicular to the axis of revolution (the `$z$' axis).
Rotating the circular arc shown about this axis produces a spindle
torus: its minor radius $r_m$ is larger than its major radius $R_m$.
Each dashed line lies in a plane equidistant to two of the three seeds.}
\label{fig_SSF}
\end{figure}
\subsubsection{Large-$n$ limit}
\label{sec_toruslargen}
For the cell face of figures \ref{fig_OCC} and \ref{fig_SSF}
we now develop the following extension of the
large-$n$ theory. To simplify notation we write $n$ instead of ${n_E}$.
Let us consider the subset of faces with fixed focal distance $L$.
It is natural to assume that in the limit of large $n$ the area of the
$n$-edged face will grow without limit and that its shape
will approach a circle
of some as yet unknown radius that we will call ${\sf R}_n$.
More precisely, all $R_m/{\sf R}_n$ will tend to unity%
\footnote{Almost surely, in the mathematical sense.}
when $n\to\infty$.
According to figure \ref{fig_SSF}
there must then also be an ${\sf r}_n$ related to ${\sf R}_n$ by
\begin{equation}
{\sf r}_n^2={\sf R}_n^2+L^2
\label{relRrL}
\end{equation}
and which is such that $r_m/{\sf r}_n$ will tend to unity when $n\to\infty$.
In that limit, as $m$ varies from $1$ to $n$,
the large circular arc in figure \ref{fig_SSF} turns
around the axis of revolution and describes a torus
whose major and minor radii are ${\sf R}_n$ and ${\sf r}_n$.
Since ${\sf R}_n \leq {\sf r}_n$,
this torus has no hole and is actually a spindle torus.
The $\mathbf{F}_m$ lie close to the surface of this torus%
\footnote{The surface of a spindle torus is called an `apple'.}
in a thin shell whose width $w_n$ vanishes with growing $n$.
There can be no seeds inside this torus
as this would destroy the $n$-edgedness of the face.
\subsection{Probability ${\cal P}_n$ of occurrence of an $n$-edged face}
\label{sec_pn}
Given two adjacent cells that share an $n$-edged face,
we now ask for the probability ${\cal P}_n$
that the two focal seeds be at distance $2L$
{\it and\,} that the $n$ first neighbor seeds
be located in a toroidal shell with minor radius ${\sf r}$,
and therefore with major radius ${\sf R}=({\sf r}^2-L^2)^{1/2}$.
It will have advantages to express ${\cal P}_n$ as a function
of the independent variables ${\sf r}$ and
\begin{equation}
x = \frac{L}{{\sf r}}\,.
\label{defx}
\end{equation}
Since it is proportional to the number of
microscopic seed configurations compatible with the constraints
$(n,{\sf r},x)$, and because of the analogy with thermodynamics, we will refer
to $\,\log{\cal P}_n({\sf r},x)\,$ as an `entropy'.
We will now determine an explicit although approximate expression
for this entropy and study its variation with ${\sf r}$ and $x$.
Let us write $V_0$ for the volume of the torus with parameters ${\sf r}$ and
$L$, $S_0$ for its surface area, and
\begin{equation}
V_1=w_nS_0
\label{relV1wnS0}
\end{equation}
for the volume of the shell of width $w_n$ at the surface of the torus.
Let $\lambda$ (which may be scaled away) be the three-dimensional
seed density. We then have
\begin{equation}
{\cal P}_n({\sf r},x) \simeq \mbox{cst} \times (x{\sf r})^2\,
\frac{{\rm e}^{-\lambda V_1}(\lambda V_1)^n}{n!}\, {\rm e}^{-\lambda V_0},
\label{xp}
\end{equation}
in which, here and henceforth, `cst' stands for a constant that may each
time be a different one, and where
$(x{\sf r})^2=L^2$ is the phase space factor associated with two seeds being at
distance $2L$, the Poisson distribution
${\rm e}^{-\lambda V_1}(\lambda V_1)^n/n!$
is the probability that in a random seed distribution of density
$\lambda$ the volume $V_1$ contain exactly $n$ seeds,
and ${\rm e}^{-\lambda V_0}$ is the probability that the volume $V_0$
contain no seeds.
Equation (\ref{xp}) is obviously an approximation:
for one thing, it does not take
into account the detailed individual positions of the first neighbor
seeds in $V_1$, but only restricts them to the shell.
We will take (\ref{xp}) seriously, nevertheless, and see where it leads us.
The expressions, needed in (\ref{xp}),
for the volume $V_0$ and the surface $S_0$
of the torus with parameters ${\sf r}$ and $x=L/{\sf r}$ are
\begin{subequations}\label{xVStorus}
\begin{equation}
V_0 = 2 \pi^2 {\sf r}^3 g(x),
\label{xVtorus}
\end{equation}
\begin{equation}
S_0 = 4 \pi^2 {\sf r}^2 f(x),
\label{xStorus}
\end{equation}
\end{subequations}
in which
\begin{subequations}\label{xfg}
\begin{equation}
\pi f(x) = x + (\pi-\arcsin x)\sqrt{1-x^2},
\label{xf}
\end{equation}
\begin{equation}
\pi g(x) = \pi f(x) - \tfrac{1}{3}x^3.
\label{xg}
\end{equation}
\end{subequations}
For later use we note the small-$x$ expansions
\begin{eqnarray}\label{fgsmallx}
f(x) &=& 1 - \tfrac{1}{2}x^2 + \frac{1}{3\pi}x^3 +{\cal O}(x^4),
\nonumber\\[2mm]
g(x) &=& 1 - \tfrac{1}{2}x^2 + {\cal O}(x^4).
\end{eqnarray}
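As a consistency check, the closed forms (\ref{xVStorus})--(\ref{xfg}) can be recovered by direct solid-of-revolution integrals over the generating circle of the spindle torus (minor radius ${\sf r}$, major radius ${\sf R}={\sf r}\sqrt{1-x^2}$). The sketch below uses a plain midpoint rule:

```python
import math

def closed_forms(r, x):
    """V0 and S0 from Eqs. (xVtorus)/(xStorus) with f, g of Eqs. (xf)/(xg)."""
    f = (x + (math.pi - math.asin(x)) * math.sqrt(1 - x * x)) / math.pi
    g = f - x ** 3 / (3 * math.pi)
    return 2 * math.pi ** 2 * r ** 3 * g, 4 * math.pi ** 2 * r ** 2 * f

def by_integration(r, x, steps=100000):
    R = r * math.sqrt(1 - x * x)            # major radius; r^2 = R^2 + L^2
    # Volume: the cross-section at height z is a full disk for |z| < L
    # and an annulus for |z| >= L (the spindle torus has no hole).
    V, dz = 0.0, 2 * r / steps
    for i in range(steps):
        z = -r + (i + 0.5) * dz
        s = math.sqrt(max(r * r - z * z, 0.0))
        V += math.pi * ((R + s) ** 2 - max(R - s, 0.0) ** 2) * dz
    # Surface: revolve the arc of the generating circle with rho = R + r*cos(t) >= 0.
    t0 = math.pi - math.asin(x)             # arc endpoint where rho vanishes
    S, dt = 0.0, 2 * t0 / steps
    for i in range(steps):
        t = -t0 + (i + 0.5) * dt
        S += 2 * math.pi * (R + r * math.cos(t)) * r * dt
    return V, S

V0, S0 = closed_forms(1.0, 0.6)
Vn, Sn = by_integration(1.0, 0.6)
```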
The shell width $w_n$, also needed in (\ref{relV1wnS0}), is a function
of ${\sf r}$ and $x$ that we will determine in the next section.
\subsection{Shell width $w_n$}
\label{sec_wn}
Our determination of $w_n$ will exploit an invariance
hidden in this problem.
The $m$th edge of the face is a segment of a line $\ell_m$ that
is perpendicular to the plane of figure \ref{fig_SSF} and intersects
this plane in $\mathbf{C}_m$. Along $\ell_m$ the three Voronoi
cells of $\mathbf{S}_1$, $\mathbf{S}_2$, and $\mathbf{F}_m$, join.
The faces separating these cells are located in planes that are also
perpendicular to the plane of figure \ref{fig_SSF} and intersect it along
the dashed lines passing through $\mathbf{C}_m$.
Suppose now that seed $\mathbf{F}_m$ moves along the circular arc in figure
\ref{fig_SSF}. This will leave the position of $\mathbf{C}_m$ invariant;
hence it will leave line $\ell_m$ invariant; and since the set of
lines $\{\ell_m\}$ determines the perimeter of the face, it will leave
the face invariant.
We may therefore rotate all first neighbors $\mathbf{F}_m$ to a position with
$\theta_m=0$, that is, a position in the plane of the face, without
changing the face.
Having performed this rotation (without introducing a new symbol for
the rotated $\mathbf{F}_m$) we obtain the situation of figure \ref{fig_w}.
We are now ready to discuss the width $w_n$.
The filled black dots in figure \ref{fig_w}
are the positions after rotation of the first neighbors
$\mathbf{F}_m$. For convenience we have chosen them as the
vertices of a regular $n$-gon, supposing that this does not affect the
argument below in any essential way. The edges of the $n$-gon have
midpoints $\mathbf{M}_m$. The $\mathbf{T}_m$ are the vertices of the $n$-edged face
of interest, which is also a regular $n$-gon. The
$M_mT_m$ are the perpendicular bisectors of the
$F_mF_{m-1}$, where we write here $AB$ for the line segment
connecting the two points $\mathbf{A}$ and $\mathbf{B}$.
Suppose now that $\mathbf{F}_m$ moves along the line
through $\mathbf{F}'$ and $\mathbf{F}^{\prime\prime}$ (both points marked by filled red dots).
The midpoint $\mathbf{M}_m$ then moves along a parallel line with
corresponding points $\mathbf{M}'$ and $\mathbf{M}^{\prime\prime}$. On the left the midpoint
$\mathbf{M}_{m+1}$ executes the mirrored motion (not shown). As a consequence
line segment
$T_mT_{m+1}$ is displaced parallel to itself.
When it moves down so far that it passes
through $\mathbf{T}'$, its neighboring segments disappear; and when it moves
up so high that it passes through $\mathbf{T}^{\prime\prime}$, it disappears itself. In both
cases the face ceases to be $n$-edged. The limit points $\mathbf{T}'$ and
$\mathbf{T}^{\prime\prime}$ determine $\mathbf{F}'$ and $\mathbf{F}^{\prime\prime}$. We will identify
somewhat arbitrarily
the shell width $w_n$ with the segment length
$|F'F^{\prime\prime}|$, which we calculate as follows.
The angle between $F'F_{m-1}$ and
$F^{\prime\prime} F_{m-1}$ is identical to the one between $T'M'$ and
$T^{\prime\prime} M^{\prime\prime}$. All these angles
become very small as $n$ gets large. Neglecting higher order terms in
the angles we have
\begin{equation}
\frac{|F'F^{\prime\prime}|}{|F_mF_{m-1}|} = \frac{|T'T^{\prime\prime}|}{|T_mM_m|}\,.
\label{ratio}
\end{equation}
Upon using that the $\mathbf{F}_m$ and $\mathbf{T}_m$ are vertices
of regular polygons and substituting
$|F_mF_{m-1}|=2\pi({\sf R}+{\sf r})/n$,
$|T'T^{\prime\prime}|=3\pi{\sf R}/n$, and $|T_mM_m|={\sf r}$ we obtain
\begin{eqnarray}
w_n({\sf r},L) &=& \frac{6\pi^2{\sf R}({\sf R}+{\sf r})}{n^2{\sf r}} \nonumber\\[2mm]
&=& \frac{C}{2}(1-x^2+\sqrt{1-x^2})\frac{{\sf r}}{n^2}\,,
\label{xwn}
\end{eqnarray}
in which $C=12\pi^2$ is a constant that will play no role in what follows.
We will write
\begin{equation}
\tilde{f}(x) = \frac{1}{2}(1-x^2+\sqrt{1-x^2})f(x),
\label{deftf}
\end{equation}
so that from relations (\ref{deftf}), (\ref{relV1wnS0}),
and (\ref{xStorus}) we have
\begin{equation}
V_1 = \frac{4\pi^2 C {\sf r}^3}{n^2}\tilde{f}(x).
\label{xV1}
\end{equation}
Equations (\ref{xVtorus}) and (\ref{xV1})
are the desired expressions for $V_0$ and $V_1$.
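A short numerical sketch confirming the algebra: the first line of (\ref{xwn}) reduces to the second, and $w_n S_0$ from (\ref{relV1wnS0}) reproduces (\ref{xV1}):

```python
import math

r, x, n = 1.3, 0.4, 50
C = 12 * math.pi ** 2
R = r * math.sqrt(1 - x * x)

# Eq. (xwn): geometric form vs simplified form
w_geom = 6 * math.pi ** 2 * R * (R + r) / (n * n * r)
w_simpl = (C / 2) * (1 - x * x + math.sqrt(1 - x * x)) * r / (n * n)
assert abs(w_geom - w_simpl) < 1e-12 * w_geom

# Eq. (xV1): V1 = w_n * S0 with S0 from (xStorus) and f-tilde from (deftf)
f = (x + (math.pi - math.asin(x)) * math.sqrt(1 - x * x)) / math.pi
S0 = 4 * math.pi ** 2 * r ** 2 * f
ftilde = 0.5 * (1 - x * x + math.sqrt(1 - x * x)) * f
V1 = 4 * math.pi ** 2 * C * r ** 3 * ftilde / (n * n)
assert abs(w_geom * S0 - V1) < 1e-12 * V1
```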
\begin{figure}
\begin{center}
\scalebox{.70}
{\includegraphics{figure_4.eps}}
\end{center}
\caption{\small Geometry in the plane of the face after
all first neighbor seeds $\mathbf{F}_j$ have been rotated as explained in
the text.
The heavy line linking $\ldots,\mathbf{T}_{m-1},\mathbf{T}_m,\ldots$ is the face boundary
when the $m$th neighbor is located at $\mathbf{F}_m$.
When $\mathbf{F}_m$ moves to $\mathbf{F}^\prime$ (or to $\mathbf{F}^{\prime\prime}$),
then $\mathbf{T}_m$ moves to $\mathbf{T}^\prime$ (or to $\mathbf{T}^{\prime\prime}$).}
\label{fig_w}
\end{figure}
\subsection{Analysis of ${\cal P}_n({\sf r},x)$}
\label{sec_maximization}
Directly from Eq.\,(\ref{xp}) we have
\begin{equation}
\log{\cal P}_n({\sf r},x) \simeq -\lambda V_1 + n\log\lambda V_1 -\log n!
- \lambda V_0 + 2\log x + \frac{2}{3} \log\lambda{\sf r}^3,
\label{xlogp}
\end{equation}
which we will study as a function of its two variables.
We may simplify this expression
by noting that in the large-$n$ limit $\lambda V_1$ is
negligible with respect to $\lambda V_0$ and
$\log\lambda{\sf r}^3$ with respect to $n\log\lambda V_1$.
Some further rewriting is useful.
First, we substitute in (\ref{xlogp}) the explicit expressions
(\ref{xVtorus}) and (\ref{xV1}) for $V_0$ and $V_1$.
Second, we may discard from (\ref{xlogp})
any terms that do not depend on ${\sf r}$ or $x$
and that we may recover later by normalizing the distribution.
Then, instead of $\log{\cal P}_n$ of Eq.\,(\ref{xlogp}),
we may study $\log\bar{\cp}_n$ given by
\begin{equation}
\log\bar{\cp}_n({\sf r},x) \simeq n \log\Big( 2\pi^2\lambda{\sf r}^3\tilde{f}(x) \Big)
- 2\pi^2\lambda{\sf r}^3g(x) + 2\log x.
\label{xlogbarp}
\end{equation}
The first two terms represent two opposing entropic forces
similar to those referred to in section \ref{sec_sphericity} for the case of
the ${n_F}$-sided cell.
We are first of all interested in the variation of $\log\bar{\cp}_n$ with
${\sf r}$.
For fixed $x$, let (\ref{xlogbarp}) be maximal for
${\sf r}=\sfrhm(x)$.
Setting $\partial\log\bar{\cp}_n/\partial(2\pi^2\lambda{\sf r}^3)=0$
we obtain
\begin{equation}
2\pi^2\lambda\sfrhm^3(x)g(x) = n.
\label{solrhox1}
\end{equation}
We now note that in view of (\ref{xVtorus}) the left-hand side of the
above equation is equal to $\lambda V_0$.
Eq.\,(\ref{solrhox1}) therefore says
that the entropy
is maximized when the volume of the
torus is such that under unconstrained conditions it would have
contained $n$ seeds. This is the torus counterpart of
Eq.\,(\ref{relRncell}).
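The stationarity condition (\ref{solrhox1}) is easy to verify numerically: scanning (\ref{xlogbarp}) in ${\sf r}$ at fixed $x$ (with $\lambda=1$), the maximizer agrees with $\left(n/2\pi^2 g(x)\right)^{1/3}$. A minimal sketch:

```python
import math

n, x = 1000, 0.3
f = (x + (math.pi - math.asin(x)) * math.sqrt(1 - x * x)) / math.pi
g = f - x ** 3 / (3 * math.pi)
ftilde = 0.5 * (1 - x * x + math.sqrt(1 - x * x)) * f

def log_p(r):
    """Eq. (xlogbarp) at fixed x (lambda = 1)."""
    return (n * math.log(2 * math.pi ** 2 * r ** 3 * ftilde)
            - 2 * math.pi ** 2 * r ** 3 * g + 2 * math.log(x))

r_pred = (n / (2 * math.pi ** 2 * g)) ** (1 / 3)   # Eq. (solrhox1)
grid = [r_pred * (0.5 + 0.001 * i) for i in range(1001)]
r_best = max(grid, key=log_p)                      # grid maximizer matches r_pred
```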
For $n\to\infty$ the maximum in ${\sf r}$ corresponds to a narrow peak,
as may be shown by
an expansion of (\ref{xlogbarp}) about its maximum. The marginal distribution
of $x$, defined as the integral of $\bar{\cp}_n({\sf r},x)$ with respect to
its first argument, is therefore obtained by simply taking
${\sf r}={\sf r}_{\rm max}(x)$ in (\ref{xlogbarp}), which leads to
\begin{equation}
\bar{\cp}_n(\sfrhm(x),x) \simeq \mbox{cst} \times x^2
\left( \frac{\tilde{f}(x)}{g(x)} \right)^{\! n}.
\label{xbarpm}
\end{equation}
The ratio $\tilde{f}(x)/g(x)$ has its maximum at $x=0$.
Upon expanding for small $x$ with the aid of
(\ref{xfg}) and (\ref{deftf}) we obtain
\begin{equation}
\frac{\tilde{f}(x)}{g(x)} = 1 - \frac{3}{4} x^2 + \frac{1}{3\pi} x^3
+ {\cal O}(x^4).
\label{expfdg}
\end{equation}
The term of order $x^2$ with the negative coefficient $-3/4$ is
the only one that leaves a trace in the limit $n\to\infty$;
it stems directly from the factor $(1-x^2+\sqrt{1-x^2})/2$ in
(\ref{deftf}), which in turn comes from the shell width.
Using (\ref{expfdg}) in (\ref{xbarpm}) and letting
$n\to\infty$ we have to leading order
$\bar{\cp}_n(\sfrhm(x),x) \to \mbox{cst}\times x^2\exp(-(3n/4)x^2)$,
so that $x$ is not sharply peaked but has a well-defined distribution
on scale $n^{-1/2}$.
More precisely, in that limit the scaled variable
\begin{equation}
y= (3\pi n)^{1/2}x/4
\label{defy}
\end{equation}
has the distribution $Q(y)$ given by
\begin{equation}
Q(y) = 32\pi^{-2} y^2\exp\left( -\frac{4}{\pi}y^2 \right), \qquad y>0,
\label{xQy}
\end{equation}
where we have restored the normalization, and where $y$ is such that
its first moment is unity.
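The normalization and the unit first moment of $Q(y)$ can be confirmed by direct quadrature (a quick numerical check, not part of the derivation; the cutoff $y=10$ is safe because the integrand is negligible beyond it):

```python
import math

def Q(y):
    """Limit distribution of the scaled focal variable y, Eq. (xQy)."""
    return (32.0 / math.pi**2) * y**2 * math.exp(-4.0 * y**2 / math.pi)

# Riemann sums on [0, 10]; the integrand is negligible beyond y = 10
h = 1e-4
ys = [k * h for k in range(100001)]
norm = h * sum(Q(y) for y in ys)
mean = h * sum(y * Q(y) for y in ys)

print(round(norm, 6), round(mean, 6))  # → 1.0 1.0
```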
\vspace{3mm}
Knowing that $x$ is random on the scale $n^{-1/2}$ we have
from (\ref{fgsmallx}) that $g(x)= 1 + O(n^{-1})$ and
subsequently from (\ref{solrhox1}) the small-$x$ expansion
\begin{equation}
\sfrhm(x) = {\sf r}_n \left[ 1+{\cal O}(n^{-1})\right]
\label{solrmax}
\end{equation}
with leading order term
\begin{equation}
{\sf r}_n = \left( \frac{n}{2\pi^2} \right)^{1/3},
\label{xrn}
\end{equation}
in which we have set $\lambda=1$.
In relation (\ref{defx}) we now replace
${\sf r}$ by its leading order value ${\sf r}_n$ and obtain,
also using (\ref{xrn}),
\begin{eqnarray}
L &\simeq& (n/2\pi^2)^{1/3}x \nonumber\\[2mm]
&=& 2^{5/3}3^{-1/2}\pi^{-7/6}n^{-1/6}y.
\label{relLy}
\end{eqnarray}
This shows that $L$ varies on scale $n^{-1/6}$.
Since $y$ has unit average we now have for the average $L_n$ of $L$
the expression%
\footnote{See footnote \ref{footnote_one}.}
\begin{equation}
L_n^{\rm th} \simeq 2^{5/3}3^{-1/2}\pi^{-7/6}n^{-1/6}.
\label{resLn}
\end{equation}
Furthermore, as $n\to\infty$ the probability distribution $Q_n$
of the scaled variable $y=L/L_n^{\rm th}$
is predicted
to tend to the fixed law $Q(y)$ of Eq.\,(\ref{xQy}).
One may loosely rephrase this scaling with $n^{-1/6}$ by saying that
the many-edgedness of a cell face leads to an attractive force (of
entropic origin) between the two focal seeds.
It was not {\it a priori\,} clear to us that such a phenomenon would occur.
\vspace{3mm}
Knowing now that $L$ is distributed on scale $n^{-1/6}$,
relation (\ref{relRrL}) tells us that
${\sf r}_n$ and ${\sf R}_n$ must be equal to leading order, and hence
\begin{equation}
{\sf R}_n \simeq \left( \frac{n}{2\pi^2} \right)^{1/3}.
\label{xRn}
\end{equation}
For the shell width $w_n$ and the shell volume $V_1$
we find with the aid of (\ref{xRn}), (\ref{xwn}), (\ref{relV1wnS0}),
and (\ref{xStorus}) the scaling behavior
\begin{equation}
w_n \simeq \mbox{cst}\times n^{-5/3}, \qquad V_1 \simeq\mbox{cst}\times n^{-1},
\label{xwnaspt}
\end{equation}
where we have preferred to
denote the prefactors by `cst'
in view of the arbitrariness in the definition of $w_n$.
Eq.\,(\ref{xwnaspt}) tells us that
the shell becomes rapidly thinner as $n$ gets larger.
\vspace{3mm}
We finally return to the averages $A_n$ and $P_n$.
Having determined that for ${n_E}\to\infty$
the ${n_E}$-edged cell face tends to a circle of now known radius ${\sf R}_n$
we conclude that%
\footnote{See footnote \ref{footnote_one}.}
\begin{subequations}\label{resAnPn}
\begin{equation}
A_{n_E}^{\rm th} = \pi{\sf R}_{n_E}^2 \simeq (4\pi)^{-1/3}{n_E}^{2/3},
\label{resAn}
\end{equation}
\begin{equation}
P_{n_E}^{\rm th} = 2\pi{\sf R}_{n_E} \simeq (4\pi)^{ 1/3}{n_E}^{1/3}.
\label{resPn}
\end{equation}
\end{subequations}
These relations are analogous to the laws (\ref{resVnSn})
for the cell volume and surface area.
This completes the extension of large-$n$ theory to the ${n_E}$-edged
cell face in the limit of asymptotically large ${n_E}$.
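The closed forms above can be cross-checked against the geometric definitions. The following small numerical sanity check (with $\lambda=1$; the value $n=1000$ is arbitrary) verifies that $\pi{\sf R}_n^2$ and $2\pi{\sf R}_n$ reproduce the stated powers of $n$, and evaluates the prefactor in $L_n^{\rm th}$:

```python
import math

def R(n):
    """Leading-order circle radius, Eq. (xRn), with lambda = 1."""
    return (n / (2 * math.pi**2)) ** (1 / 3)

n = 1000.0
A = math.pi * R(n)**2        # area of the limit circle
P = 2 * math.pi * R(n)       # perimeter of the limit circle
L = 2**(5/3) * 3**(-1/2) * math.pi**(-7/6) * n**(-1/6)  # Eq. (resLn)

assert abs(A - (4*math.pi)**(-1/3) * n**(2/3)) < 1e-9   # Eq. (resAn)
assert abs(P - (4*math.pi)**( 1/3) * n**(1/3)) < 1e-9   # Eq. (resPn)
print(round(L * n**(1/6), 4))  # numerical prefactor of n^(-1/6), → 0.4821
```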
\section{The many-edged face: Monte Carlo}
\label{sec_faceMC}
The $4\times 10^9$ cells generated by Monte Carlo simulation
yielded $N_n$ cell faces of edgedness $n$, adding up to a total
of $N=\sum_nN_n= 31\,071\,027\,941$ cell faces.
The distribution $N_n$ has been presented in table \ref{table1}
together with our estimates of the fractions $f_n$ of
$n$-edged faces.
In Ref.\,\cite{Lazaretal13} several comparisons with theoretically
known data have been presented as a demonstration that the algorithm
works correctly.
Here we limit ourselves to two such tests, shown in table
\ref{table1b}.
Let $\langle{n_F}\rangle$ and $\langle{n_E}\rangle$
stand for the average facedness of a cell and the average edgedness
of a cell face, respectively.
The rms deviation of ${n_F}$ is equal to 3.318,
which leads to an estimate of the standard deviation in its Monte Carlo average
equal to $3.318/\sqrt{4\times 10^9}=0.000\,06$.
The rms deviation of ${n_E}$ is equal to 1.579,
which leads to an estimate of the standard deviation in its Monte Carlo average
equal to $1.579/\sqrt{N}=0.000\,009$.
The average values from the Monte Carlo simulations together with
these standard deviations
are shown in the first two lines of table \ref{table1b}.
The theoretical values of both averages are exactly known
(see {\it e.g.} Ref.\,\cite{Okabeetal00}) and shown in the third line.
The agreement between
the Monte Carlo values and these exact results is excellent.
\begin{table}
\begin{center}
\begin{small}
\begin{tabular}{||rrr||rrr||}
\hline
\multicolumn{1}{||r}{$n$}&
\multicolumn{1}{c}{$N_n$}&
\multicolumn{1}{c||}{$f_n$}&
\multicolumn{1}{c}{$n$}&
\multicolumn{1}{c}{$N_n$}&
\multicolumn{1}{c||}{$f_n$}\\
\hline
3 & 4\,187\,261\,126 & $0.134\,764\pm 0.000\,002$ & 12 & 11\,834\,735
&$(3.809\pm 0.002)\times 10^{-4}\,\,$\\
4 & 7\,140\,019\,564 & $0.229\,797\pm 0.000\,003$ & 13 & 2\,174\,618
&$(6.999\pm 0.005)\times 10^{-5}\,\,$\\
5 & 7\,505\,993\,048 & $0.241\,575\pm 0.000\,003$ & 14 & 342\,988
&$(1.104\pm 0.002)\times 10^{-5}\,\,$\\
6 & 5\,914\,222\,488 & $0.190\,345\pm 0.000\,003$ & 15 & 46\,869
&$(1.508\pm 0.007)\times 10^{-6}\,\,$\\
7 & 3\,621\,030\,915 & $0.116\,540\pm 0.000\,002$ & 16 & 5\,690
&$( 1.83\pm 0.03)\times 10^{-7}\,\,$\\
8 & 1\,747\,654\,056 & $0.056\,247\pm 0.000\,002$ & 17 & 613
&$( 1.97\pm 0.08)\times 10^{-8}\,\,$\\
9 & 674\,407\,674 & $0.021\,705\pm 0.000\,001$ & 18 & 41
&$( 1.3\pm 0.2)\times 10^{-9}\,\,$\\
10 & 211\,374\,682 & $0.006\,803\pm 0.000\,001$ & 19 & 7
&$( 2.3\pm 0.9)\times 10^{-10}$\\
11 & 54\,658\,826 & $0.001\,759\pm 0.000\,001$ & 20 & 1
&$( 3\pm3)\times 10^{-11}$\\
\hline
\end{tabular}
\end{small}
\end{center}
\caption{Observed numbers $N_n$ of $n$-edged cell faces
in a set of $4\times 10^9$ Monte Carlo generated 3D Poisson-Voronoi cells,
and their estimated fractions $f_n$.}
\label{table1}
\end{table}
\begin{table}
\begin{center}
\begin{small}
\begin{tabular}{|l|r@{.}l|r@{.}l|}
\hline
\multicolumn{1}{|l|}{}&
\multicolumn{2}{|c|}{ Expected number $\langle{n_F}\rangle$ }&
\multicolumn{2}{|c|}{ Expected number $\langle{n_E}\rangle$ }\\
\multicolumn{1}{|l|}{}&
\multicolumn{2}{|c|}{ of faces of a cell }&
\multicolumn{2}{|c|}{ of edges of a face }\\
\hline
Monte Carlo & \phantom{XXXX}15&535\,51 & \phantom{XXXX}5&227\,576 \\
Standard deviation & 0&000\,06 & 0&000\,009 \\
Theory & 15&535\,457 & 5&227\,573\,4 \\
\hline
\end{tabular}
\end{small}
\end{center}
\caption{Two tests of the Monte Carlo algorithm.}
\label{table1b}
\end{table}
\subsection{Examples of many-edged faces}
\label{sec_examplesMC}
In the original
Monte Carlo simulations by Lazar {\it et al.} \cite{Lazaretal13},
that comprised $0.25\times 10^9$ cells,
faces were found with edge numbers up to ${n_E}=18$.
In figure \ref{fig_fivefaces}
we show the five $18$-edged faces that occurred, superposed
such that their origins coincide.
Some faces, such as the red one, are close to circular,
but the set shows that
there is still considerable variability in shape and size;
also, the origin, which for ${n_E}\to\infty$ should be at the center of
the circle, is still fairly eccentric.
It is relevant to recall here that
these same observations held for the many-sided two-dimensional cells
studied in Ref.\,\cite{Hilhorst07}, for which nevertheless an
efficient simulation algorithm has demonstrated the convergence to a circle
at higher values of $n$.
If the blue face and the gray face
seem to have fewer than $18$ edges, this
is due to some of their vertices
coinciding at the scale of the figure.
Figure \ref{fig_rhotheta} is based on the same set of five
$18$-edged cell faces. With each face there are associated $18$
planes of the type shown in figure \ref{fig_SSF},
each one passing through the two focal seeds and through
one first neighbor seed $\mathbf{F}_m$.
In figure \ref{fig_rhotheta} we have superposed these
$5\times 18$ planes such that the points $\mathbf{C}_m$
coincide in a single point called $\mathbf{C}$
(this blurs of course the positions of the focal seeds).
The positions $(r_m,\theta_m)$ with respect to $\mathbf{C}$,
defined in figure \ref{fig_SSF},
of the first neighbor seeds $\mathbf{F}_m$ are shown.
The figure clearly shows the appearance of the hull of a spindle torus,
indicated by the circular arc. We have chosen in this figure a radius
${\sf r}_{\rm av}$ as well as
somewhat arbitrary values for $L_{\rm av}$ and ${\sf R}_{\rm av}$
such as to obtain a good visual fit.
We now recall the discussion of section \ref{sec_sphericity}
that concerned the spherical surface:
here, in a fully analogous way, the scatter of the dots about the arc is
a measure of the combined effect of
the shell width $w_n$, determined in section \ref{sec_facetheory},
and the elastic deformations, left unstudied, of the toroidal surface.
The scarcity of points as one approaches the axis of revolution is
an effect of diminishing phase space.
\begin{figure}
\begin{center}
\scalebox{.45}
{\includegraphics{figure_5.eps}}
\end{center}
\caption{\small Five $18$-sided cell faces found
in the Monte Carlo simulations of Ref.\,\cite{Lazaretal13},
superposed such that their origins coincide in a single point $\mathbf{O}$.
For each, the value $2L$ of the distance between the two focal seeds
is indicated.}
\label{fig_fivefaces}
\end{figure}
\begin{figure}
\begin{center}
\scalebox{.45}
{\includegraphics{figure_6.eps}}
\end{center}
\caption{\small Figure based on the same five $18$-sided cell faces as
shown in figure \ref{fig_fivefaces}, with the same color code.
All $5\times 18$ first-neighbor planes have been superposed such that
the $z$ axes remain parallel and the
$\mathbf{C}_m$ coincide in a single point $\mathbf{C}$,
taken here as the origin of the coordinate system.
The dots represent points of polar coordinates
$(r_m,\theta_m)$, defined in figure \ref{fig_SSF}.
In order to symmetrize the figure
the points $(r_m,-\theta_m)$ are also shown.
The hull of a spindle torus,
indicated by the circular arc, becomes clearly visible.
See text.
}
\label{fig_rhotheta}
\end{figure}
\subsection{Average area $A_n$ and perimeter $P_n$}
\label{sec_AnPn}
In figure \ref{fig_face}
we have represented our Monte Carlo averages $A_{n_E}^{\rm MC}$
and $P_{n_E}^{\rm MC}$ for the area and perimeter, respectively,
of the ${n_E}$-edged cell face,
averaged over the set of $4\times 10^9$ cells.
Each quantity has been divided by its theoretical large-${n_E}$
behavior (\ref{resAnPn}), so that for both
the data points are expected to tend
to unity as ${n_E}\to\infty$.
We emphasize again that the theory has no adjustable parameters.
The data for $A_{n_E}^{\rm MC}$ and $P_{n_E}^{\rm MC}$
appear to fully conform to the theoretical prediction,
even if the finite-${n_E}$ corrections are still large.
We will analyze these subleading terms to the asymptotic laws
in section \ref{sec_higher}.
\begin{figure}
\begin{center}
\scalebox{.40}
{\includegraphics{figure_7.eps}}
\end{center}
\caption{\small Monte Carlo
averages $A_{n_E}^{\rm MC}$ and $P_{n_E}^{\rm MC}$ of the area and
perimeter, respectively,
of an ${n_E}$-edged cell face, each divided by its theoretical asymptotic
behavior, Eqs.\,(\ref{resAnPn}).
Both sets of data points are predicted, therefore, to tend to unity as
${n_E}\to\infty$. The solid red lines approach this limit value
as $\sim n^{-1}$ and represent our best estimates for the
next-order correction to the leading
asymptotic behavior (section \ref{sec_higher}).}
\label{fig_face}
\end{figure}
\subsection{Focal distance $L$}
\label{sec_2L}
As far as we are aware, the statistics of the focal distance $L$
for given edgedness ${n_E}$ has not hitherto received any attention
in the literature, whether it be its average $L_{n_E}$ or its full
probability distribution $Q_{n_E}(L/L_{n_E}^{\rm th})$.
The theoretical result of Eq.\,(\ref{resLn}) for $L_{n_E}$
is not intuitive and
it is therefore of utmost importance that we compare the predictions
(\ref{resLn}) and (\ref{xQy}) to the Monte Carlo data.
In figure \ref{fig_Ln} we have represented the Monte Carlo average
$L_{n_E}^{\rm MC}$, divided by its theoretical large-${n_E}$ behavior
(\ref{resLn}),
so that the data points are expected to tend to unity for
${n_E}\to\infty$.
The Monte Carlo data
are fully compatible with the asymptotic limit value,
even though there appear, here as before, sizeable finite-${n_E}$ corrections.
\begin{figure}
\begin{center}
\scalebox{.45}
{\includegraphics{figure_8.eps}}
\end{center}
\caption{\small
Monte Carlo average $L_{n_E}^{\rm MC}$ of the
focal distance divided by its
theoretical asymptotic behavior (\ref{resLn}).
The data points are, therefore, predicted to tend to unity as ${n_E}\to\infty$.}
\label{fig_Ln}
\end{figure}
\begin{figure}
\begin{center}
\scalebox{.45}
{\includegraphics{figure_9.eps}}
\end{center}
\caption{\small
Monte Carlo data for the probability distributions $Q_n(L/L_n^{\rm th})$
of the focal distance $L$. The heavy black curve is the theoretical limit
distribution $Q(y)$ of Eq.\,(\ref{xQy}).}
\label{fig_QnL}
\end{figure}
\begin{figure}
\begin{center}
\scalebox{.45}
{\includegraphics{figure_10.eps}}
\end{center}
\caption{\small
Monte Carlo data for the
logarithm of the scaled probability distributions $\bar{Q}_n(L/L_n^{\rm MC})$
of the focal distance $L$. The color code is as in figure \ref{fig_QnL};
the curves for $n=3,4,5,6$ have been labeled explicitly.
These distributions all have unit average.
The heavy black curve is the theoretical limit distribution
$\log Q(y)$ of Eq.\,(\ref{xQy}).}
\label{fig_logQnL}
\end{figure}
In figure \ref{fig_QnL} we proceed to a more detailed comparison. This
figure shows, for $n=7$ through $n=14$,
the distributions $Q_n(L/L_n^{\rm th})$ of the scaled variables
$L/L_n^{\rm th}$.
We constructed this figure
by collecting the values of $L$
for each $n$ separately in bins of width $0.005$.
In order to suppress fluctuations,
we combined for the larger $n$ values
groups of neighboring bins into larger
ones: for $n=11,12,13,14$
we grouped together $2,4,8,16$ of the original bins, respectively.
There is a clear tendency for the $Q_n(y)$ to approach the theoretical
limit distribution.
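The binning and bin-merging steps can be sketched as follows (the values below are synthetic stand-ins for the measured $L$ samples, not the simulation data):

```python
import random

def histogram(values, width=0.005):
    """Collect values into bins of fixed width; returns {bin index: count}."""
    counts = {}
    for v in values:
        k = int(v / width)
        counts[k] = counts.get(k, 0) + 1
    return counts

def coarsen(counts, factor):
    """Merge groups of `factor` neighboring bins to suppress fluctuations,
    as done above for the larger n (2, 4, 8, 16 bins for n = 11,...,14)."""
    merged = {}
    for k, c in counts.items():
        merged[k // factor] = merged.get(k // factor, 0) + c
    return merged

random.seed(0)
samples = [abs(random.gauss(0.15, 0.05)) for _ in range(10_000)]
h = histogram(samples)
print(sum(h.values()), sum(coarsen(h, 4).values()))  # → 10000 10000
```

Merging preserves the total count; only the resolution of the histogram is reduced.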
In figure \ref{fig_logQnL}
we investigate the {\it shape\,} of the distributions $Q_n(y)$.
Let $\alpha = L_n^{\rm th}/L_n^{\rm MC}$ and define
rescaled distributions $\bar{Q}_n(L/L_n^{\rm MC})= \alpha Q_n(y)$,
which have unit average.
We have plotted the $\bar{Q}_n$ semilogarithmically to allow for
comparisons over a wider range of the abscissa.
It appears that the shape of the $\bar{Q}_n$ converges
rapidly to the
theoretically predicted limit given by Eq.\,(\ref{xQy}).
Hence the limiting shape of the distribution is attained well
before the average reaches its limit value.
This excellent agreement comes somewhat as a
surprise since we had no specific reasons beforehand to expect it.
In any case, the Monte Carlo data for $L$
provide ample evidence of the fact that
${L}_{n_E}/{\sf r}_{n_E} \to 0$ as ${n_E}\to\infty$,
and that therefore the limit torus
has equal major and minor radii: it is
a true doughnut but with a hole of zero diameter.
\section{Higher order terms}
\label{sec_higher}
\begin{figure}
\begin{center}
\scalebox{.40}
{\includegraphics{figure_11.eps}}
\end{center}
\caption{\small Trying to fit the next-to-leading term in the
asymptotic expansion of $S_n$ by different powers $a$. From top to
bottom $a=\tfrac{1}{3}, \tfrac{1}{6}, 0, -\tfrac{1}{6}$.}
\label{fig_surnext}
\end{figure}
\begin{figure}
\begin{center}
\scalebox{.40}
{\includegraphics{figure_12.eps}}
\end{center}
\caption{\small Trying to fit the next-to-leading term in the
asymptotic expansion of $V_n$ by different powers $a$. From top to
bottom $a=\tfrac{2}{3}, \tfrac{1}{2}, \tfrac{1}{3}, \tfrac{1}{6}$.}
\label{fig_volnext}
\end{figure}
We will let $n$ stand for either ${n_E}$ or ${n_F}$, and
$X_n$ for any of the four quantities $V_n, S_n, A_n$, and
$P_n$ studied in the preceding sections.
We there determined their leading large-$n$ behavior
$X_n^{\rm th} \simeq c_0 n^{a_0}$,
and now ask if we can go beyond that.
Each of these averages
presumably has an asymptotic expansion in powers of $n$ of the form
\begin{equation}
X_n = c_0n^{a_0} + c_1n^{a_1} + \ldots, \qquad n\to\infty,
\label{asptexpX}
\end{equation}
with coefficients $c_1,c_2,\ldots$ and powers $a_1,a_2,\ldots$
of which we have no theoretical knowledge.
We will nevertheless rely
on the idea that the only powers that one may reasonably expect
are powers of $n^{1/3}$.
We will try to determine these from the Monte Carlo data.
Our procedure will follow the definition of an asymptotic expansion:
We plot $(X_n^{\rm MC}-X_n^{\rm th})/n^a$
for selected values of $a$ and
look for the $a$ that makes this quantity tend to a constant
when $n$ gets large. That value of $a$ is then equal to $a_1$ and the
constant is equal to $c_1$.
How well this works depends in part on the accuracy of
the simulation data,
and in part on whether we are sufficiently far in the asymptotic regime,
a question to which we have no certain answer.
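The procedure can be illustrated on synthetic data with a planted expansion (the coefficients below are invented for the illustration and are not the paper's fitted values):

```python
# planted expansion: X_n = n^(2/3) - 1.7 + 0.5*n^(-1/3)
def X(n):
    return n**(2/3) - 1.7 + 0.5 * n**(-1/3)

c0, a0 = 1.0, 2/3                      # known leading term
for a in (1/3, 1/6, 0.0, -1/6):
    ratios = [(X(n) - c0 * n**a0) / n**a for n in (100, 800, 6400)]
    print(a, [round(r, 3) for r in ratios])
# only a = 0 levels off at a nonzero constant (the planted c1 = -1.7);
# larger a drives the ratio to zero, smaller a makes it diverge
```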
Let us consider first the $n$-faced cell.
The most clearcut case is provided by its surface area $S_n$, plotted in
figure \ref{fig_surnext}
for a selection of values of $a$ that also include half-integer powers of
$n^{1/3}$. This plot
seems to clearly single out $a=a_1=0$ as the next
exponent in the series (\ref{asptexpX}) for $X_n=S_n$.
Accepting this exponent value we are led to conclude that the
corresponding constant takes the value $c_1=-1.70$,
indicated by the horizontal dashed line in the figure.
In figure \ref{fig_volnext}
a similar analysis has been performed for $V_n$.
It points towards an exponent $a_1=1/3$ and a
coefficient $c_1=-0.42$.
The resulting two-term asymptotic series for $V_{n_F}$ and $S_{n_F}$
have been listed in table \ref{table2}.
The curve representing the subleading term
has been drawn in figure \ref{fig_cell} for both quantities.
\begin{table}
\begin{center}
\begin{scriptsize}
\begin{tabular}{|lllc|}
\hline
Quantity & Symbol & Leading term(s) for large $n$ & Note\\
\hline
Average surface area of an $n$-faced 3D cell & $S_n$ &
$({9\pi}/{16})^{1/3}n^{2/3} - 1.70$ & {\it a}\\
Average volume of an $n$-faced 3D cell & $V_n$ &
$n/8 - 0.42 n^{1/3}$ & {\it a}\\
Average perimeter of an $n$-edged face of a 3D cell & $P_n$&
$(4\pi)^{1/3}n^{1/3} - 2.95 n^{-2/3}$ & {\it a}\\
Average area of an $n$-edged face of a 3D cell & $A_n$ &
$(4\pi)^{-1/3}n^{2/3} - 1.53 n^{-1/3}$ & {\it a}\\
Average of the distance $L$ between the seeds of && &\\
${}$\hspace{5mm} two 3D cells sharing an $n$-edged face & $L_n$ &
$2^{5/3}3^{-1/2}\pi^{-7/6}n^{-1/6}$ & {\it b}\\
Probability distribution of $y=L/L_n$ & $Q(y)$ &
$32\pi^{-2}y^2\exp(-4y^2/\pi)$ & {\it b}\\
Average perimeter of an $n$-sided 2D cell & $P^{(2)}_n$ &
$\pi^{1/2}n^{1/2} - (5/8)\pi^{1/2}n^{-1/2}$ & {\it c}\\
Average area of an $n$-sided 2D cell & $A^{(2)}_n $ &
$n/4 - 0.6815$ & {\it c}\\
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\vspace{-3.5mm}
\begin{scriptsize}
\noindent $^a$ This work.
First term from large-$n$ theory, expected to be exact;
second term fitted.\\[-2mm]
\noindent $^b$ This work. Leading order term from large-$n$ theory.\\[-2mm]
\noindent $^c$ First term analytically exact \cite{Hilhorst05b};
second term from a high precision fit \cite{Hilhorst07}.
\end{scriptsize}
\caption{\footnotesize
Summary of predictions for the asymptotic large-$n$ behavior
of several quantities associated with Poisson-Voronoi tessellations.
The last two lines concern earlier work.}
\label{table2}
\end{table}
Let us next consider the average perimeter $P_n$ and area $A_n$ of an
$n$-edged face. Figures \ref{fig_pernext} and
\ref{fig_arenext} show the attempts to fit the asymptotic behavior.
The evidence is less convincing here than
for the case of the cell volume and surface area, and
it certainly helps to assume at this point
that the exponents are quantized as
multiples of $1/3$. The values $a_1=-2/3$ for $P_n$ and $a_1=-1/3$ for
$A_n$ appear to best fit the data, and accepting these
we obtain estimates for the coefficients, again indicated by horizontal
dashed lines.
The resulting two-term asymptotic series for $P_n$ and $A_n$
have also been listed in table \ref{table2}.
The curve representing the subleading term
has been drawn in figure \ref{fig_face} for both quantities.
\begin{figure}
\begin{center}
\scalebox{.40}
{\includegraphics{figure_13.eps}}
\end{center}
\caption{\small Trying to fit the next-to-leading term in the
asymptotic expansion of $P_n$ by different powers $a$. From top to
bottom $a=-\tfrac{1}{3}, -\tfrac{1}{2}, -\tfrac{2}{3},
-\tfrac{5}{6}$, -$1$.}
\label{fig_pernext}
\end{figure}
\begin{figure}
\begin{center}
\scalebox{.40}
{\includegraphics{figure_14.eps}}
\end{center}
\caption{\small Trying to fit the next-to-leading term in the
asymptotic expansion of $A_n$ by different powers $a$. From top to
bottom $a=0,-\tfrac{1}{6}, -\tfrac{1}{3}, -\tfrac{1}{2},
-\tfrac{2}{3}$.}
\label{fig_arenext}
\end{figure}
\section{Discussion}
\label{sec_discussion}
We have summarized
the main results of this paper in table \ref{table2}.
For comparison the two bottom lines
in this table show analogous results
obtained earlier \cite{Hilhorst05b,Hilhorst07}
for the average perimeter $P^{(2)}_n$
and area $A^{(2)}_n$ of a two-dimensional Poisson-Voronoi cell.
The status of these results, briefly indicated in the
notes at the bottom of the table, is as follows.
We basically have two reasons to believe that in three dimensions
the results from large-$n$ theory are exact for
the four quantities $V_n$, $S_n$, $A_n$, and $P_n$.
The first reason is that in two dimensions this theory
reproduces the exactly known leading order
results for $A_n^{(2)}$ and $P_n^{(2)}$. The second one is that the
theory leads to what looks like a sound basic principle:
The probability of occurrence (entropy) of an ``event''
imposing restrictions on the positions of $n$ seeds
is maximized by displacing
(with respect to a random configuration)
only those $n$ seeds,
thus evacuating a spatial region of volume $n/\lambda$ (where $\lambda$
is the seed density).
For the $n$-faced cell
this region is a sphere [Eq.\,(\ref{relRncell})],
for the $n$-edged face it is a torus [Eq.\,(\ref{solrhox1})]
with major and minor radii that for $n\to\infty$ become equal.
Large-$n$ theory, at least in its present form, does not allow for a
systematic expansion of the averages considered above in negative powers
of $n$. We have therefore based our determination of the correction
terms on fits of the Monte Carlo data,
guided by theoretical considerations.
In next-to-leading order
there is in each case a power of $n$ and a coefficient to estimate.
In the case of $V_n$ and $S_n$ these come out fairly
unambiguously.
In the case of $A_n$ and $P_n$ we have been led, in addition, by a
certain systematics that appears: just like $A_n^{(2)}$ and $P_n^{(2)}$
in two dimensions,
and for reasons that we do not at this point fully understand,
the correction terms for $A_n$ and $P_n$ turn out to differ from the
leading order behavior by integer powers of $n^{-1}$.
The focal distance $L$ is a quantity that enters in a different way
into the theory. First, in contradistinction
to the four averages discussed above,
its theoretical mean value $L^{\rm th}_n$
does not diverge with growing $n$ but tends to zero as $\sim n^{-1/6}$.
The Monte Carlo data for $L^{\rm MC}_n$
are fully compatible with this prediction;
there are again substantial finite-$n$ corrections which,
in this quantity, we have not attempted to estimate.
Secondly, it appears that even for large $n$
the probability distribution $Q_n$ of the scaled variable
$y=L/L^{\rm th}_n$ does not become sharply peaked but
approaches a well-defined limit law $Q(y)$ [Eq.\,(\ref{xQy})].
Although we had no {\it a priori\,} indication
about the reliability of these conclusions from large-$n$ theory,
the distribution $Q(y)$ appears to be in excellent agreement with the Monte Carlo data.
From the theoretical point of view it is worthwhile to recall an invariance
property exploited in section \ref{sec_wn},
{\it viz.} the fact that a cell face does not change when
any or all of the first neighbors (to its two focal seeds)
are rotated over arbitrary angles in their `first-neighbor' planes.
We suspect that this invariance may open the road to an exact
determination of the properties of the many-sided cell face.
\section{Conclusion}
\label{sec_conclusion}
We have performed and theoretically analyzed
Monte Carlo simulations of three-dimensional
Poisson-Voronoi cells. The number of cells generated, namely
$4\times 10^9$, is larger than in all earlier work.
Our method of analysis has been the heuristic `large-$n$' theory,
applicable to Voronoi cells with a large
number ${n_F}$ of faces, and to cell faces with a large number ${n_E}$ of
edges. The latter application has required a substantial extension of
the theory that we describe in this paper.
Whereas many-faced cells must be analyzed in terms of a spherical
geometry, we found that the many-edged cell face
requires the geometry of a spindle torus.
The squared major and minor radii of that torus differ by $L^2$,
where the `focal' distance $L$ is half the distance
between the seeds of the two cells sharing that face.
We were naturally led to investigate
the statistics of $L$ and found again good agreement between theory
and Monte Carlo data.
The results presented here highlight, in addition, the potential use of Monte
Carlo simulations in conjunction with large-$n$ theory as a means of gaining
insight into the properties of 3D Poisson-Voronoi cells.
\section{Introduction}\label{sec:intro}
In this survey, we will be interested in gaining insight into
asymptotic properties of chaotic group actions. There are several quite distinct points of view
on how this problem may be studied. Our approach here
is based on the analysis of higher-order correlations which characterise
random-like behaviour of observables computed along orbits. For instance, let us consider a measure-preserving
transformation $T:X\to X$ of a probability space $(X,\mu)$. Then
for functions $\phi_1,\ldots,\phi_r\in L^\infty(X)$,
the {\it correlations of order $r$} are defined as
\begin{equation}
\label{eq:corr0}
\int_X \phi_1(T^{n_1}x)\cdots \phi_r(T^{n_r}x)\, d\mu(x),\quad
n_1,\ldots, n_r \in \mathbb{N}.
\end{equation}
The transformation $T$ is called {\it mixing of order $r$} if
for all $\phi_1,\ldots,\phi_r\in L^\infty(X)$,
\begin{equation}
\label{eq:mmix0}
\int_X \phi_1(T^{n_1}x)\cdots \phi_r(T^{n_r}x)\, d\mu(x)\longrightarrow
\left(\int_X\phi_1\,d\mu\right)\cdots \left(\int_X\phi_r\,d\mu\right)
\end{equation}
as $|n_i-n_j|\to \infty$ for all $i\ne j$.
The multiple mixing property, in particular, implies that the
family of functions $\{\phi\circ T^n\}$
is quasi-independent asymptotically.
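As a toy illustration of \eqref{eq:mmix0} with $r=2$ (our own example, not one discussed in the text): the doubling map $Tx = 2x \bmod 1$ preserves Lebesgue measure on $[0,1)$ and is mixing, so correlations of a zero-mean observable decay to $(\int\phi\,d\mu)^2=0$. The names and parameters below are chosen for the sketch.

```python
import math, random

def T_iter(x, k):
    """k-th iterate of the doubling map T(x) = 2x mod 1."""
    for _ in range(k):
        x = (2 * x) % 1.0
    return x

def phi(x):
    """A zero-mean observable: the integral of cos(2*pi*x) over [0,1) is 0."""
    return math.cos(2 * math.pi * x)

random.seed(1)
N = 200_000
results = {}
for lag in (0, 4, 12):
    # Monte Carlo estimate of the order-2 correlation with n1 = 0, n2 = lag
    corr = sum(phi(x) * phi(T_iter(x, lag))
               for x in (random.random() for _ in range(N))) / N
    results[lag] = corr
    print(lag, round(corr, 3))
# lag 0 gives ~0.5 (= integral of cos^2 over [0,1)); positive lags
# fluctuate around 0 at the 1/sqrt(N) scale, as mixing predicts
```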
The study of this property was initiated by Rokhlin \cite{roh}
who showed that ergodic endomorphisms of compact abelian groups
are mixing of all orders.
In this work, Rokhlin also raised the question,
which still remains open,
whether mixing of order two implies mixing of all orders
for general measure-preserving transformations.
Kalikow \cite{kal} established this for rank-one transformations,
Ryzhikov \cite{ryz} for transformations of finite rank,
and Host \cite{host} for transformations with singular spectrum.
The multiple mixing property has been extensively studied for
flows on homogeneous spaces. Ornstein and Weiss \cite{ow} established that
the geodesic flow on compact hyperbolic surfaces is Bernoulli,
which implies that it is mixing of all orders.
Dani \cite{dani1,dani2} proved that a quite general partially hyperbolic one-parameter homogeneous flow satisfies the Kolmogorov property so that, in particular, it is mixing of all orders.
Sinai \cite{sinai} conjectured that the horocycle flow is also mixing of all orders.
This conjecture was proved by Marcus \cite{marcus}.
In fact, Marcus' work established mixing of all orders for general flows
on homogeneous spaces of semisimple groups.
Ultimately Starkov \cite{starkov}, building on the work of Mozes \cite{mozes}
and the theory of unipotent flows,
proved mixing of all orders for general mixing one-parameter flows on
finite-volume homogeneous spaces.
Quantitative estimates on higher-order correlations \eqref{eq:corr0}
are also of great importance. Using Fourier-analytic techniques,
Lind \cite{lind} proved exponential convergence of correlations
of order two for ergodic toral automorphisms,
and P\`ene \cite{pe} proved this for correlations of all orders.
Dolgopyat \cite{dolg} established a general result about exponential convergence
of correlations for partially hyperbolic dynamical systems
under the assumption of quantitative equidistribution of translates
of unstable manifolds. Gorodnik and Spatzier \cite{GS0}
showed exponential convergence of correlations of all orders for ergodic
automorphisms of nilmanifolds.
\medskip
More generally, we consider a measure-preserving action of a locally compact
group $G$ on a probability space $(X,\mu)$.
For functions $\phi_1,\ldots,\phi_r\in L^\infty(X)$,
we define the {\it correlations of order $r$} as
\begin{equation}
\label{eq:corr1}
\int_X \phi_1(g_1^{-1}x)\cdots \phi_r(g_r^{-1}x)\, d\mu(x),\quad
g_1,\ldots, g_r \in G.
\end{equation}
We assume that the group $G$ is equipped with a (proper) metric $d$.
We say that the action is {\it mixing of order $r$} if
for functions $\phi_1,\ldots,\phi_r\in L^\infty(X)$,
\begin{equation}
\label{eq:mmix}
\int_X \phi_1(g_1^{-1}x)\cdots \phi_r(g_r^{-1}x)\, d\mu(x)
\longrightarrow \left(\int_X\phi_1\,d\mu\right)\cdots \left(\int_X\phi_r\,d\mu\right)
\end{equation}
as $d(g_i,g_j)\to \infty$ for all $i\ne j$.
At present, the available results about higher-order mixing for
multi-parameter actions are limited to several particular classes of dynamical systems.
del Junco and Yassawi \cite{jy1,jy2} proved
that for finite rank actions of countable abelian groups
satisfying additional technical conditions, mixing of order two implies mixing of all orders.
It was discovered by Ledrappier \cite{led} that mixing of order two
does not imply mixing of order three in general for $\mathbb{Z}^2$-actions.
The example constructed in \cite{led} is an action
by automorphisms on a (disconnected) compact abelian group.
On the other hand,
Schmidt and Ward \cite{SW} established that
$\bb Z^k$-actions by automorphisms on compact connected abelian groups
that are mixing of order two are always mixing of all orders.
We refer to the monograph of Schmidt \cite{sch} for extensive
study of mixing properties for higher-rank abelian actions
by automorphisms of compact abelian groups.
It turns out that this problem is intimately connected
to deep number-theoretic questions that involve analysing solutions of $S$-unit equations.
Gorodnik and Spatzier \cite{GS} proved that mixing
$\bb Z^k$-actions by automorphisms on nilmanifolds are mixing of all orders.
Using Diophantine estimates on logarithms of algebraic numbers,
the work \cite{GS} also established quantitative estimates for correlations of order up to three. The problem of producing explicit quantitative bounds on general higher-order
correlations in this setting is still open, even for $\bb Z^k$-actions by toral automorphisms.
\medskip
In this paper, we provide a self-contained accessible treatment of the analysis
of higher-order correlations for measure-preserving
actions of a (noncompact) simple connected Lie group $G$
with finite centre (a less advanced reader may think of groups such as $\hbox{SL}_d(\bb R)$ or $\hbox{Sp}_{2n}(\bb R)$).
This topic has a long history going back at least to the works of Harish-Chandra (see, for instance, \cite{hc}).
Indeed, the correlations of order two can be interpreted as matrix coefficients
of the corresponding unitary representation on the space $L^2(X)$,
and quantitative estimates on matrix coefficients have been established
in the framework of representation theory
(see \S\ref{sec:decay} and \S\ref{sec:cor_2} for an extensive discussion).
This in particular leads to a surprising corollary that every ergodic
action of $G$ is always mixing. Moreover, Mozes \cite{mozes}
proved that every mixing action of $G$ is always mixing of all orders,
and Konstantoulas \cite{konst} and Bj\"orklund, Einsiedler, Gorodnik \cite{beg}
established quantitative estimates on higher-order correlations.
The main goal of these notes is to outline a proof of the following result.
Let $L$ be a connected Lie group and $\Gamma$ a discrete subgroup of $L$ with finite
covolume. We denote by $(X,\mu)$ the space $L/\Gamma$ equipped with the
invariant probability measure $\mu$.
Let $G$ be a (noncompact) simple connected higher-rank Lie group with finite centre
equipped with a left-invariant Riemannian metric $d$.
We consider the measure-preserving action of $G$ on $(X,\mu)$ given
by a smooth representation $G\to L$.
In this setting, we establish quantitative estimates on higher-order correlations:
\begin{theorem*}
Assuming that the action of $G$ on $(X,\mu)$ is ergodic,
there exists $\delta_r>0$ (depending only on $G$ and $r$) such that
for all elements $g_1,\ldots,g_r\in G$ and all
functions $\phi_1,\ldots,\phi_r\in C_c^\infty(X)$,
\begin{align}
\int_X \phi_1(g_1^{-1}x)\cdots \phi_r(g_r^{-1}x)\, d\mu(x)\label{eq:main}
=\,& \left(\int_X\phi_1\,d\mu\right)\cdots \left(\int_X\phi_r\,d\mu\right)\\
&\quad+O_{\phi_1,\ldots,\phi_r,r}\left( e^{-\delta_r D(g_1,\ldots,g_r)}\right),\nonumber
\end{align}
where
$$
D(g_1,\ldots,g_r)=\min_{i\ne j} d(g_i,g_j).
$$
\end{theorem*}
As we shall explain below, a version of this theorem also holds for rank-one groups $G$
provided that the action of $G$ on $L^2(X)$ satisfies the necessary condition of having
a spectral gap. In this case, the exponent $\delta_r$ also depends on the action.
\medskip
It turns out that analysis of correlations has several far-reaching applications,
and here we outline how to use this approach:
\begin{itemize}
\item to establish an asymptotic formula for the number of lattice points,
\item to show existence of approximate configurations in lattice subgroups,
\item to prove the Central Limit Theorem for group actions.
\end{itemize}
Other interesting applications of quantitative bounds on correlations, which we do not discuss
here, involve the Kazhdan property (T) \cite[\S V.4.1]{ht},
the cohomological equation \cite{KS,GS0,GS}, the global rigidity of actions \cite{fks},
and analysis of the distribution of arithmetic counting functions \cite{bg1,bg2}.
\medskip
This paper is based on a series of lectures given at the Tata Institute of Fundamental Research
which involved participants with quite diverse backgrounds ranging from starting PhD
students to senior researchers. When I was choosing the material, I was aiming
to make it accessible, but at the same time to give a reasonably detailed
exposition of the developed methods as well as to survey the current state of the art in the field.
This inevitably required some compromises. In particular, we assumed very little knowledge
of the theory of Lie groups, and some of the arguments are carried out only in the
case when $G=\hbox{SL}_d(\bb R)$. I hope that a less prepared reader will be able to follow
this paper by thinking that a ``connected simple Lie group with finite center'' is $\hbox{SL}_d(\bb R)$,
and advanced readers might be able to infer from
our exposition how to deal with the general case.
Besides giving a self-contained proof of the bound \eqref{eq:main},
we also state a number of more advanced results without proofs, which are indicated by
the symbol $(^*)$.
\subsection*{Organisation of the paper}
We are not aware of any direct way for proving the main bound \eqref{eq:main},
and our argument proceeds in several distinct steps.
First, in \S\ref{sec:decay} and \S\ref{sec:cor_2},
we study the behaviour of correlations of order two using representation-theoretic techniques.
In \S\ref{sec:decay} we show that the correlations of order two decay at infinity
(see Theorem \ref{th:hm}), and in \S\ref{sec:cor_2} we establish quantitative bounds
on the correlations of order two (see Theorem \ref{th:high-rank2}).
Then the main bound \eqref{eq:main} is established using an elaborate inductive argument in \S\ref{sec:cor_gen}
(see Theorem \ref{th:cor_high}).
We also discuss several applications of the established bounds for correlations:
in \S\ref{sec:counting} we derive an asymptotic formula for the number of lattice points,
in \S\ref{sec:conf} we establish existence of approximate configurations,
and in \S\ref{sec:clt} we prove the Central Limit Theorem for group actions.
{\small
$$
\xymatrixcolsep{1.4pc} \xymatrixrowsep{1.4pc}
\xymatrix{
\fbox{\begin{tabular}{c} \hbox{\bf Mixing}\\ \hbox{(\S \ref{sec:decay})}\end{tabular}} \ar[d] \ar[r] & \fbox{\begin{tabular}{c} \hbox{\bf Counting lattice points}\\ \hbox{(\S \ref{sec:counting})}\end{tabular}}\\
\fbox{\begin{tabular}{c} \hbox{\bf Quantitative mixing}\\ \hbox{(\S \ref{sec:cor_2})}\end{tabular}} \ar[d] \ar@{-->}[ru] & \fbox{\begin{tabular}{c} \hbox{\bf Configurations}\\ \hbox{(\S \ref{sec:conf})}\end{tabular}}\\
\fbox{\begin{tabular}{c} \hbox{\bf Higher-order quantitative mixing}\\ \hbox{(\S \ref{sec:cor_gen})}\end{tabular}} \ar[ru] \ar[r] & \fbox{\begin{tabular}{c} \hbox{\bf Central limit theorem}\\ \hbox{(\S \ref{sec:clt})}\end{tabular}}
}
$$
}
\subsection*{Acknowledgement}
This survey paper has grown out of the lecture series given by the author
at the Tata Institute of Fundamental Research in Spring 2017.
I would like to express my deepest gratitude to
the Tata Institute for the hospitality
and to the organisers of this programme
-- Shrikrishna Dani and Anish Ghosh -- for all their hard work on setting up this event
and making it run smoothly.
\section{Decay of matrix coefficients}\label{sec:decay}
Let $G$ be a (noncompact) connected simple Lie group with finite center
(e.g., $G=\hbox{SL}_d(\bb R)$). We consider a measure-preserving action
of $G$ on a standard probability space $(X,\mu)$.
The goal of this section is to show
a surprising result that ergodicity of any such action implies that it is mixing:
\begin{Theorem}\label{th:hm1}
Let $G$ be a (noncompact) connected simple Lie group with finite centre
and $G\times X\to X$ a measurable measure-preserving
action on a standard probability space $(X,\mu)$.
We assume that the action of $G$ on $(X,\mu)$ is ergodic (that is, the space $L^2(X)$ has no nonconstant $G$-invariant functions).
Then for all $\phi,\psi\in L^2(X)$,
$$
\int_X \phi(g^{-1}x)\psi(x)\, d\mu(x)\longrightarrow \left(\int_X \phi\, d\mu\right)
\left(\int_X \psi\, d\mu\right)
$$
as $g\to \infty$ in $G$.
\end{Theorem}
We observe that a measure-preserving action as above defines a unitary representation $\pi$ of $G$
on the space $\mathcal{H}=L^2(X)$ given by
\begin{equation}
\label{eq:rep}
\pi(g)\phi(x)=\phi(g^{-1}x)\quad \hbox{for $g\in G$ and $x\in X.$}
\end{equation}
One can also check (see, for instance, \cite[A.6]{bhv}) that
this representation is strongly continuous (that is, the map $g\mapsto \pi(g)\phi$, $g\in G$,
is continuous).
\begin{convention*}
{\rm
Throughout these notes, we always implicitly assume that
representations are strongly continuous and Hilbert spaces are separable.
}
\end{convention*}
Theorem \ref{th:hm1} can be formulated more abstractly in terms of
asymptotic vanishing of matrix coefficients of unitary representations.
\begin{Theorem}\label{th:hm2}
Let $G$ be a (noncompact) connected simple Lie group with finite center
and $\pi$ a unitary representation of $G$ on a Hilbert space $\mathcal{H}$.
Then for all $v,w\in \mathcal{H}$,
$$
\left<\pi(g)v,w\right>\to \left<P_Gv,P_G w\right>\quad \hbox{as $g\to \infty$ in $G$, }
$$
where $P_G$ denotes the orthogonal projection on the subspace of the $G$-invariant vectors.
\end{Theorem}
\medskip
The study of matrix coefficients for unitary representations of semisimple Lie
groups has a long history. In particular, this subject played an important role
in the research programme of Harish-Chandra. We refer to the monographs
\cite{war1,war2,gv,knapp} for expositions of this theory.
Explicit quantitative bounds on matrix coefficients,
which in particular imply Theorem \ref{th:hm2},
were derived
in the works of Borel and Wallach \cite{BW}, Cowling \cite{cowling}, and
Casselman and Milici\'c \cite{cm}. This initial approach to the study of asymptotic
properties of matrix coefficients used elaborate analytic arguments that
involved representing them as solutions of certain systems of PDEs.
Subsequently, Howe and Moore \cite{HM} developed a different approach
to prove Theorem \ref{th:hm2} that used
the Mautner phenomenon (cf. Theorem \ref{th:mautner} below)
and an inductive argument that derived vanishing of matrix coefficients
on the whole group from
vanishing along a sufficiently rich collection of subgroups.
We present a version of this method here.
Other treatments of Theorems \ref{th:hm1} and \ref{th:hm2} can be also found in the monographs
\cite{zim,ht,bm}.
It is worthwhile to mention that the Howe--Moore argument \cite{HM}
is not restricted just to semisimple groups, and it gives the following
general result.
Given an irreducible unitary representation $\pi$ of a group $G$, we denote by
$$
R_\pi=\{g\in G:\, \pi(g)\in \C^\times\hbox{id}\}
$$ its projective kernel.
Since $\pi$ is unitary, it is clear that
the matrix coefficients $|\left<\pi(g)v,w\right>|$ are constant on cosets of $R_\pi$.
One of the main results of \cite{HM} is asymptotic vanishing of matrix coefficients along $G/R_\pi$:
\begin{theo}\label{th:hm}
Let $G$ be a connected real algebraic group and $\pi$ an irreducible unitary representation of $G$
on a Hilbert space $\mathcal{H}$. Then for any $v,w\in \mathcal{H}$,
$$
\left<\pi(g)v,w\right>\to 0\quad \hbox{as $g\to \infty$ in $G/R_\pi$.}
$$
\end{theo}
\medskip
Now we start the proof of Theorem \ref{th:hm2}. First, we note that because of the decomposition
$$
\mathcal{H}=\mathcal{H}_G\oplus \mathcal{H}_G^\perp,
$$
where $\mathcal{H}_G$ denotes the subspace of $G$-invariant vectors, it is sufficient to prove that
for all vectors $v,w\in \mathcal{H}_G^\perp$,
$$
\left<\pi(g)v,w\right>\to 0\quad \hbox{as $g\to \infty$,}
$$
and without loss of generality, we may assume that $\mathcal{H}$ contains no
nonzero $G$-invariant vectors.
The proof will proceed by contradiction.
Suppose, to the contrary, that
$$
\left<\pi(g^{(n)})v,w\right>\not\rightarrow 0
$$
for some sequence $g^{(n)}\to \infty$ in $G$.
We divide the proof into four steps.
\bigskip
\noindent {\it \underline{Step 1:} Cartan decomposition.}
We shall use the Cartan decomposition for $G$:
$$
G=KA^+K,
$$
where $K$ is a maximal compact subgroup of $G$, and $A^+$ is a positive Weyl chamber
of a Cartan subgroup of $G$.
For instance, when $G=\hbox{SL}_d(\mathbb{R})$, this decomposition holds with
$$
K=\hbox{SO}(d)\quad\hbox{and}\quad A^+=\{\hbox{diag}(a_1,\ldots,a_d):\, a_1\ge a_2\ge \cdots\ge a_d>0\}.
$$
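In this example, the Cartan decomposition is essentially the singular value decomposition from linear algebra: any $g\in \hbox{SL}_d(\bb R)$ can be written as

```latex
$$
g=k\,\hbox{diag}(a_1,\ldots,a_d)\,\ell, \qquad k,\ell\in \hbox{SO}(d),\quad a_1\ge \cdots\ge a_d>0,
$$
```

where the $a_i$ are the singular values of $g$, that is, the square roots of the eigenvalues of $g^tg$ (possible signs are absorbed into $k$ and $\ell$ so that both factors lie in $\hbox{SO}(d)$).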
We write
$$
g^{(n)}=k^{(n)} a^{(n)}\ell^{(n)}\quad\hbox{with}\;\; \hbox{$k^{(n)},\ell^{(n)}\in K$ and $a^{(n)}\in A^+$.}
$$
Since $K$ is compact, it follows that $a^{(n)}\to\infty$ in $A^+$.
Passing to a subsequence, we may arrange that the sequences $k^{(n)}$ and $\ell^{(n)}$ converge in $K$ so that, in particular,
$$
\pi(\ell^{(n)})v\to v'\quad\hbox{and}\quad \pi(k^{(n)})^*w\to w'
$$
for some vectors $v',w'\in \mathcal{H}$.
We observe that
\begin{align*}
\left<\pi(g^{(n)})v,w\right>-\left<\pi(a^{(n)})v',w'\right>
=&\left<\pi(a^{(n)})\pi(\ell^{(n)})v,\pi(k^{(n)})^*w\right>-\left<\pi(a^{(n)})v',w'\right>\\
=&\left<\pi(a^{(n)})(\pi(\ell^{(n)})v-v'),\pi(k^{(n)})^*w\right>\\
&+
\left<\pi(a^{(n)})v',\pi(k^{(n)})^*w-w'\right>.
\end{align*}
Using that the representation $\pi$ is unitary, we deduce that
\begin{align*}
\left|\left<\pi(a^{(n)})(\pi(\ell^{(n)})v-v'),\pi(k^{(n)})^*w\right>\right|
&\le \left\| \pi(a^{(n)})(\pi(\ell^{(n)})v-v')\right\| \left\|\pi(k^{(n)})^*w\right\|\\
&= \left\| \pi(\ell^{(n)})v-v'\right\| \left\|w\right\|\to 0.
\end{align*}
Similarly, one can show that
$$
\left<\pi(a^{(n)})v',\pi(k^{(n)})^*w-w'\right>\to 0.
$$
Hence, we conclude that
$$
\left<\pi(g^{(n)})v,w\right>=\left<\pi(a^{(n)})v',w'\right>+o(1),
$$
and
$$
\left<\pi(a^{(n)})v',w'\right>\not\rightarrow 0.
$$
\bigskip
\noindent {\it \underline{Step 2:} weak convergence.}
We use the notion of `weak convergence'.
We recall that a sequence of vectors $x^{(n)}$
in a Hilbert space converges weakly to a vector $x$
if $\left<x^{(n)},y\right>\to \left<x,y\right>$ for all $y\in \mathcal{H}$.
We use the notation: $x^{(n)}\stackrel{w}{\longrightarrow}x$.
It is known that every bounded sequence has a weakly convergent subsequence.
In particular, it follows that, after passing to a subsequence,
we may arrange that
$$
\pi(a^{(n)})v'\stackrel{w}{\longrightarrow}v''
$$ for some vector $v''\in \mathcal{H}$.
Then, in particular,
$$
\left<\pi(a^{(n)})v',w'\right>\to\left<v'',w'\right>\ne 0.
$$
\bigskip
\noindent {\it \underline{Step 3:} the case when $G=\hbox{\rm SL}_2(\mathbb{R})$.}
From the previous step, we know that
$$
\left<\pi(a^{(n)})v',w'\right>\to \left<v'',w'\right>\ne 0\quad\hbox{for $a^{(n)}=\left(\begin{tabular}{cc} $t_n$ & $0$ \\ 0 & $t_n^{-1}$\end{tabular} \right)$ with $t_n\to\infty$.}
$$
We claim that the vector $v''$ is invariant under the subgroup
$$
U=\left\{u(s)= \left(\begin{tabular}{cc} 1 & $s$ \\ 0 & 1\end{tabular} \right):\, s\in \bb R\right\}.
$$
This property
will be deduced from the identity
$$
(a^{(n)})^{-1} u(s) a^{(n)}=u(s/t_n^2)\to e.
$$
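This identity is a direct matrix computation:

```latex
$$
(a^{(n)})^{-1}u(s)\,a^{(n)}
=\left(\begin{tabular}{cc} $t_n^{-1}$ & $0$ \\ $0$ & $t_n$\end{tabular} \right)
\left(\begin{tabular}{cc} 1 & $s$ \\ 0 & 1\end{tabular} \right)
\left(\begin{tabular}{cc} $t_n$ & $0$ \\ $0$ & $t_n^{-1}$\end{tabular} \right)
=\left(\begin{tabular}{cc} 1 & $s/t_n^2$ \\ 0 & 1\end{tabular} \right),
$$
```

and $u(s/t_n^2)\to e$ since $t_n\to\infty$.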
One can easily check that
$$
\pi(u(s))\pi(a^{(n)})v'\stackrel{w}{\longrightarrow}\pi(u(s))v'',
$$
so that
$$
\pi(u(s))v''= \hbox{w-lim}_{n\to \infty}\, \pi(u(s))\pi(a^{(n)})v'=\hbox{w-lim}_{n\to \infty}\,\pi(a^{(n)})\pi(u(s/t_n^2))v'.
$$
Since
$$
\|\pi(a^{(n)})\pi(u(s/t_n^2))v'-\pi(a^{(n)})v'\|=\|\pi(u(s/t_n^2))v'-v'\|\to 0,
$$
it follows that
$$
\hbox{w-lim}_{n\to \infty}\,\pi(a^{(n)})\pi(u(s/t_n^2))v'
=\hbox{w-lim}_{n\to \infty}\,\pi(a^{(n)})v'=v''.
$$
This proves that indeed the vector $v''$ is invariant under $U$.
Next, we show that the vector $v''$ is $G$-invariant.
We consider the function
$$
F(g)=\left<\pi(g)v'',v''\right>\quad\hbox{ with $g\in G$.}
$$
Since $v''$ is $U$-invariant, the function $F$ is bi-invariant under $U$.
We observe that the map $gU\mapsto ge_1$
defines an isomorphism of the homogeneous spaces $G/U$ and $\mathbb{R}^2\backslash \{0\}$.
Hence, we may consider $F$ as a function on $\mathbb{R}^2\backslash \{0\}$.
Since the $U$-orbits in $\mathbb{R}^2$ are the lines $y=c$ with $c\ne 0$ and
the points $(x,0)$, we conclude that $F$ is constant on each line $y=c$ with $c\ne 0$.
By continuity, it follows that $F$ is also constant on the line $y=0$.
For $a(t)=\left(\begin{tabular}{cc} $t$ & $0$ \\ 0 & $t^{-1}$\end{tabular} \right)$,
$$
\left<\pi(a(t))v'',v''\right>=F(a(t)e_1)= F(te_1)=F(e_1)=\|v''\|^2.
$$
Since this gives the equality in the Cauchy--Schwarz inequality,
the vectors $\pi(a(t)) v''$ and $v''$ must be collinear, and
we deduce that $\pi(a(t))v''=v''$, so that the vector $v''$
is also invariant under the subgroup $A=\{a(t)\}$.
Hence, the function $F$ is also constant on $AU$-orbits in $\mathbb{R}^2$.
Since the half-spaces $\{y>0\}$ and $\{y<0\}$ are single $AU$-orbits,
it follows from the continuity of $F$ that this function is identically constant,
that is,
$$
F(g)=\left<\pi(g)v'',v''\right>=\|v''\|^2\quad\hbox{for all $g\in G$.}
$$
This gives the equality in the Cauchy--Schwarz inequality, and as before
we deduce that $\pi(g)v''=v''$ for all $g\in G$. However, we have assumed that
there are no nonzero $G$-invariant vectors. This gives a contradiction, and
completes the proof of the theorem in the case $G=\hbox{SL}_2(\bb R)$.
\medskip
We note that the above argument, in fact, implies
the following ``Mautner property'' of unitary representations of $\hbox{SL}_2(\bb R)$:
every $U$-invariant vector is always $\hbox{SL}_2(\bb R)$-invariant.
More generally, one says that a closed subgroup $H$ of a topological group $G$
has the {\it Mautner property} if for every unitary representation of $G$,
$H$-invariant vectors are also invariant under $G$. Subgroups
satisfying this property have appeared in a work of Segal and von Neumann \cite{seg_new},
and Mautner \cite{mau} used this phenomenon to study ergodicity of the geodesic flow
on locally symmetric spaces. The following general version of the Mautner property
was established by Moore \cite{m0}:
\begin{theo}\label{th:mautner}
Let $G$ be a (noncompact) simple connected Lie group with finite center.
Then every noncompact closed subgroup of $G$ has the Mautner property.
\end{theo}
Subsequently, more general versions of this result were proved
by Moore \cite{m00}, Wang \cite{wang1,wang2}, and Bader, Furman, Gorodnik, Weiss \cite{bfgw}.
\bigskip
\noindent {\it \underline{Step 4:} inductive argument.}
Our next task is to develop an inductive argument which
allows us to deduce asymptotic vanishing of matrix coefficients
from vanishing along smaller subgroups.
We give a complete proof when $G=\hbox{SL}_d(\bb R)$,
but similar ideas can be also extended to general semisimple Lie groups
using their structure theory.
For $a\in A^+$, the simple roots are given by
$$
\alpha_i(a)=a_i/a_{i+1}\quad\hbox{ for $i=1,\ldots,d-1$.}
$$
The functions $\alpha_i$, $i=1,\ldots,d-1$, provide a coordinate system on $A^+$.
Given a sequence $a^{(n)}\in A^+$ such that $a^{(n)}\to \infty$, we have
$\max_i \alpha_i(a^{(n)})\to \infty$.
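Indeed, on $A^+$ each $\alpha_i(a)\ge 1$ and $\prod_{i=1}^{d-1}\alpha_i(a)=a_1/a_d$, so

```latex
$$
\max_{1\le i\le d-1}\alpha_i(a^{(n)})
\ge \left(\frac{a^{(n)}_1}{a^{(n)}_d}\right)^{1/(d-1)}\longrightarrow\infty,
$$
```

because $a^{(n)}\to\infty$ in $A^+$ exactly when $a^{(n)}_1/a^{(n)}_d\to\infty$ (note that $a_1\ge 1\ge a_d$ since the determinant is one).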
After passing to a subsequence, we may assume that $\alpha_i(a^{(n)})\to \infty$
for some $i$. We introduce the subgroup
$$
U_i=\left\{\left(
\begin{tabular}{cc}
$I_i$ & $u$ \\
$0$ & $I_{d-i}$
\end{tabular}
\right):\, u\in \hbox{M}_{i, d-i}(\mathbb{R}) \right\}.
$$
For $a\in A^+$,
$$
a^{-1}\left(
\begin{tabular}{cc}
$I_i$ & $(u_{lk})$ \\
$0$ & $I_{d-i}$
\end{tabular}
\right) a=
\left(
\begin{tabular}{cc}
$I_i$ & $\left(\frac{a_k}{a_l}u_{lk}\right)$ \\
$0$ & $I_{d-i}$
\end{tabular}
\right).
$$
Since for $l\le i <k$,
$$
\frac{a_l}{a_k}=\frac{a_l}{a_{l+1}}\cdots \frac{a_{k-1}}{a_k}\ge \frac{a_i}{a_{i+1}}=\alpha_i(a),
$$
it follows that $a^{(n)}_l/a^{(n)}_k\to\infty$. Hence, for $g\in U_i$,
$$
(a^{(n)})^{-1} ga^{(n)}\to e.
$$
Using this property, we may argue exactly as in Step 3 to conclude that
the vector $v''$ is $U_i$-invariant.
For $1\le l\le i$ and $i+1\le k\le d$,
we denote by $U_{lk}$ the corresponding one-parameter unipotent subgroup of $U_i$.
We observe that $U_{lk}$ can be embedded in an obvious way as a subgroup
of the group $G_{lk}\simeq \hbox{SL}_2(\bb R)$ contained in $G$.
Since the vector $v''$ is invariant under $U_{lk}$,
it follows from Step 3 that it is also invariant under $G_{lk}$
when $1\le l\le i$ and $i+1\le k\le d$.
Finally, we check that these groups $G_{lk}$ generate $G=\hbox{SL}_d(\bb R)$,
so that the vector $v''$ is $G$-invariant.
This gives a contradiction and completes the proof of the theorem.
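For the generation step, one can argue with elementary matrices. Writing $E_{lk}(s)$ for the unipotent matrix with entry $s$ in position $(l,k)$, each group $G_{lk}$ with $l\le i<k$ contains both one-parameter groups $E_{lk}(s)$ and $E_{kl}(s)$, and the standard commutator identity

```latex
$$
E_{lk}(s)\,E_{km}(t)\,E_{lk}(s)^{-1}\,E_{km}(t)^{-1}=E_{lm}(st),
\qquad \hbox{$l,k,m$ distinct,}
$$
```

produces the remaining elementary matrices $E_{lm}$ with $l,m\le i$ or $l,m>i$ (choosing $k$ on the other side of $i$); since the elementary matrices generate $\hbox{SL}_d(\bb R)$, the groups $G_{lk}$ generate $G$.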
\section{Application: counting lattice points}\label{sec:counting}
Given a lattice $\Lambda$ in the Euclidean space $\mathbb{R}^d$ and
a Euclidean ball $B$ in $\mathbb{R}^d$, one can show using a simple geometric argument
that
$$
|\Lambda\cap B|\sim \frac{\operatorname{vol}(B)}{\operatorname{vol}(\mathbb{R}^d/\Lambda)}\quad \hbox{as $\operatorname{vol}(B)\to\infty.$}
$$
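The geometric argument is the standard tiling estimate: if $F$ is a bounded fundamental domain for $\Lambda$ containing $0$ and of diameter $\rho$, then the translates $\lambda+F$ with $\lambda\in \Lambda\cap B$ are disjoint, cover the inner neighbourhood $B^{-\rho}$, and are contained in the outer neighbourhood $B^{+\rho}$. Hence,

```latex
$$
\frac{\operatorname{vol}(B^{-\rho})}{\operatorname{vol}(\mathbb{R}^d/\Lambda)}
\le |\Lambda\cap B|\le
\frac{\operatorname{vol}(B^{+\rho})}{\operatorname{vol}(\mathbb{R}^d/\Lambda)},
$$
```

and for Euclidean balls $\operatorname{vol}(B^{\pm\rho})/\operatorname{vol}(B)\to 1$ as $\operatorname{vol}(B)\to\infty$.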
This result also holds for more general families of domains satisfying some regularity
assumptions. The analogous lattice counting problem for the hyperbolic space is more difficult
because of the exponential volume growth of the hyperbolic balls, and proving an asymptotic
formula even without an error term requires analytic tools.
The hyperbolic lattice point counting problem was studied by Delsarte \cite{del},
Huber \cite{hub}, and Patterson \cite{pat}.
These works used spectral expansion of the counting functions
in terms of the eigenfunctions of the Laplace-Beltrami operator.
Margulis in his PhD thesis \cite{mar0,mar} discovered that the lattice point counting problem on manifolds of variable negative curvature
can be solved using solely the mixing property.
Bartels \cite{bart} proved an asymptotic formula
for the number of lattice points in connected semisimple Lie groups
using a version of Theorem \ref{th:hm1}.
Subsequently, this approach was generalised to counting
lattice orbit points on affine symmetric varieties
by Duke, Rudnick, Sarnak \cite{drs} and Eskin, McMullen \cite{em}.
We refer to the survey \cite{bab} for a comprehensive discussion of
the lattice point counting problems.
\medskip
In this section, we consider a more general counting
problem for lattice points in locally compact groups.
In particular, we prove the following result:
\begin{Theorem}\label{th:count0}
Let $G$ be a (noncompact) connected simple matrix Lie group with finite centre, and let the sets
\begin{equation}\label{eq:norm_ball}
B_t=\{g\in G:\, \|g\|<t \}
\end{equation}
be defined by a norm on the space of matrices.
Then for any lattice subgroup $\Gamma$ of $G$,
$$
|\Gamma\cap B_t|\sim \frac{\hbox{\rm vol}(B_t)}{\hbox{\rm vol}(G/\Gamma)}\quad \hbox{as $t\to\infty$.}
$$
\end{Theorem}
More generally, we establish an asymptotic counting formula in the setting of locally compact groups satisfying a certain mixing assumption.
Let $G$ be a locally compact second countable group and $\Gamma$ a lattice subgroup in $G$.
We fix a Haar measure $m$ on $G$, which also induces a measure $\mu$
on the factor space $X=G/\Gamma$ by
$$
\int_{G/\Gamma} \Big(\sum_{\gamma\in \Gamma}\psi(g\gamma)\Big)\,d\mu(g\Gamma)=\int_G \psi \, dm
\quad\hbox{for $\psi\in C_c(G/\Gamma).$}
$$
We normalise the measure $m$ so that $\mu(X)=1$. Then we obtain a continuous
measure-preserving action of $G$ on the probability space $(X,\mu)$.
We say that a family of bounded measurable sets $B_t$ in $G$ is {\it well-rounded} (cf. \cite{em}) if
for every $\delta>1$, there exists a symmetric neighbourhood $\mathcal{O}$ of
identity in $G$ such that
\begin{equation}
\label{eq:well_round}
\delta^{-1}\, m\left(\bigcup_{g_1,g_2\in \mathcal{O}} g_1 B_t g_2 \right)\le m(B_t)\le \delta\, m\left(\bigcap_{g_1,g_2\in \mathcal{O}} g_1 B_t g_2 \right)
\end{equation}
for all $t$.
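To illustrate the definition in the simplest (abelian) case, take $G=\bb R^d$, $B_t=\{\|x\|<t\}$ with $t\ge 1$, and $\mathcal{O}=\{\|x\|<\varepsilon\}$. Then $g_1B_tg_2=B_t+g_1+g_2$, so that

```latex
$$
B_{t-2\varepsilon}\subseteq \bigcap_{g_1,g_2\in \mathcal{O}} g_1B_tg_2
\quad\hbox{and}\quad
\bigcup_{g_1,g_2\in \mathcal{O}} g_1B_tg_2\subseteq B_{t+2\varepsilon},
$$
```

and $m(B_{t\pm2\varepsilon})/m(B_t)=(1\pm 2\varepsilon/t)^d$, which lies in $[\delta^{-1},\delta]$ for all $t\ge 1$ once $\varepsilon$ is chosen small enough in terms of $\delta$.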
\begin{Theorem}\label{th:count}
Let $G$ be a locally compact second countable group, and let $B_t$ be a family of
well-rounded compact sets in $G$ such that $m(B_t)\to \infty$.
Let $\Gamma$ be a lattice subgroup in $G$ such that the action of $G$ on the space $G/\Gamma$ is mixing.
Then
$$
|\Gamma\cap B_t|\sim m(B_t)\quad \hbox{as $t\to\infty$.}
$$
\end{Theorem}
It follows from a Fubini-type argument that the action of $G$ on $X=G/\Gamma$
is ergodic (i.e., every almost everywhere invariant function is constant almost everywhere).
Hence, when $G$ is a connected simple Lie group with finite centre,
it follows from Theorem \ref{th:hm1} that the action of $G$ on $X=G/\Gamma$ is mixing of order two. One can also check that the regularity condition \eqref{eq:well_round}
is satisfied for the norm balls \eqref{eq:norm_ball} (see \cite{drs},\cite{em}).
Hence, Theorem \ref{th:count0} follows from Theorem \ref{th:count}.
\medskip
We start the proof of Theorem \ref{th:count}
by realising the counting function as a function on the homogeneous space $X=G/\Gamma$.
We set
\begin{equation}
\label{eq:ft}
F_t(g_1,g_2)=\sum_{\gamma\in\Gamma} \chi_{B_t}(g_1\gamma g_2^{-1}).
\end{equation}
In the first part of the argument,
we do not impose any regularity assumptions
on the compact domains $B_t$ and just assume that $m(B_t)\to\infty$.
Since
$$
F_t(g_1\gamma_1,g_2\gamma_2)=F_t(g_1,g_2)\quad\hbox{for all $g_1,g_2\in G$ and $\gamma_1,\gamma_2\in\Gamma$,}
$$
$F_t$ defines a function on $G/\Gamma\times G/\Gamma$.
We note that $F_t(e,e)=|\Gamma\cap B_t|$, so that it remains to investigate
the asymptotic behaviour of $F_t$ at the identity coset.
The crucial connection between the original counting problem and estimating correlations
is provided by the following computation.
For a real-valued test-function $\phi\in C_c(G/\Gamma)$, we obtain that
\begin{align*}
\left<F_t,\phi\otimes\phi\right>
&=\int_{G/\Gamma\times G/\Gamma} F_t(g_1,g_2)\phi(g_1)\phi(g_2)\, d\mu(g_1\Gamma)d\mu(g_2\Gamma)\\
&=\int_{G/\Gamma\times G/\Gamma} \left(\sum_{\gamma\in\Gamma} \chi_{B_t}(g_1\gamma g_2^{-1})\right) \phi(g_1)\phi(g_2)\, d\mu(g_1\Gamma)d\mu(g_2\Gamma)\\
&=\int_{G/\Gamma\times G/\Gamma} \left(\sum_{\gamma\in\Gamma} \chi_{B_t}(g_1(g_2\gamma)^{-1})\right) \phi(g_1)\phi(g_2)\, d\mu(g_1\Gamma)d\mu(g_2\Gamma)\\
&=\int_{G/\Gamma\times G} \chi_{B_t}(g_1g_2^{-1}) \phi(g_1)\phi(g_2)\, d\mu(g_1\Gamma)dm(g_2).
\end{align*}
We denote by $\pi$ the unitary representation of $G$ on $L^2(G/\Gamma)$ defined as in \eqref{eq:rep}. Using a change of variables $b=g_1g_2^{-1}$, we deduce that
\begin{align*}
\left<F_t,\phi\otimes\phi\right>
&=\int_{G/\Gamma\times G} \chi_{B_t}(b) \phi(g_1)\phi(b^{-1}g_1)\, d\mu(g_1\Gamma)dm(b)\\
&= \int_{B_t} \left<\pi(b)\phi,\phi\right>\, dm(b).
\end{align*}
According to our assumption,
$$
\left<\pi(b)\phi,\phi\right>\longrightarrow \left(\int_{G/\Gamma} \phi\, d\mu\right)^2\quad \hbox{as $b\to\infty$.}
$$
Hence, since $m(B_t)\to \infty$, it follows that
\begin{equation}
\label{eq:weak}
\frac{\left<F_t,\phi\otimes\phi\right>}{m(B_t)}=
\frac{1}{m(B_t)}\int_{B_t} \left<\pi(b)\phi,\phi\right>\, dm(b)
\longrightarrow \left(\int_{G/\Gamma} \phi\, d\mu\right)^2
\end{equation}
as $t\to\infty$.
We note that \eqref{eq:weak} holds for any functions $F_t$ defined in terms of compact subsets
$B_t$ such that $m(B_t)\to\infty$.
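Let us justify the last convergence in \eqref{eq:weak}. Given $\varepsilon>0$, the set $K_\varepsilon$ of $b\in G$ with $|\left<\pi(b)\phi,\phi\right>-(\int_{G/\Gamma}\phi\,d\mu)^2|\ge \varepsilon$ is contained in a compact set by the mixing assumption, and $|\left<\pi(b)\phi,\phi\right>|\le \|\phi\|_2^2$, so that

```latex
$$
\left|\frac{1}{m(B_t)}\int_{B_t} \left<\pi(b)\phi,\phi\right> dm(b)
-\left(\int_{G/\Gamma}\phi\,d\mu\right)^2\right|
\le \frac{2\|\phi\|_2^2\, m(K_\varepsilon)}{m(B_t)}+\varepsilon,
$$
```

and the first term tends to zero since $m(B_t)\to\infty$.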
\medskip
Our next task is to upgrade the weak convergence of the functions $F_t$ established in \eqref{eq:weak} to pointwise convergence. For this step, we use the regularity assumption
\eqref{eq:well_round} on the domains $B_t$.
We take any $\delta>1$ and choose the neighbourhood $\mathcal{O}$ of identity in $G$
as in \eqref{eq:well_round}. We set
$$
B_t^+=\bigcup_{g_1,g_2\in \mathcal{O}} g_1 B_t g_2
\quad\hbox{and}\quad
B_t^-=\bigcap_{g_1,g_2\in \mathcal{O}} g_1 B_t g_2,
$$
and consider the corresponding functions $F_t^+$ and $F_t^-$ defined as in \eqref{eq:ft}.
It follows from \eqref{eq:well_round} that
\begin{equation}\label{eq:well2}
\delta^{-1}\, m(B_t^+)\le m(B_t)\le \delta\, m(B_t^-).
\end{equation}
In particular, $m(B_t^\pm)\to \infty$.
We take a nonnegative function $\tilde \phi\in C_c(G)$ such that
$$
\hbox{supp}(\tilde \phi)\subset \mathcal{O}\quad\hbox{and}\quad \int_G\tilde \phi\, dm=1,
$$
and define
a function $\phi\in C_c(G/\Gamma)$ as $\phi(g)=\sum_{\gamma\in\Gamma} \tilde \phi(g\gamma)$.
Then
\begin{align*}
\left<F^+_t,\phi\otimes \phi\right>&=\int_{G/\Gamma\times G/\Gamma} F^+_t(g_1,g_2)\phi(g_1)\phi(g_2)\, d\mu(g_1\Gamma)d\mu(g_2\Gamma)\\
&=\int_{G/\Gamma\times G/\Gamma} F^+_t(g_1,g_2) \left( \sum_{\gamma_1,\gamma_2\in\Gamma} \tilde \phi(g_1\gamma_1)\tilde \phi(g_2\gamma_2)\right)\, d\mu(g_1\Gamma)d\mu(g_2\Gamma)\\
&=\int_{G/\Gamma\times G/\Gamma} \left( \sum_{\gamma_1,\gamma_2\in\Gamma} F^+_t(g_1\gamma_1,g_2\gamma_2)\tilde \phi(g_1\gamma_1)\tilde \phi(g_2\gamma_2)\right)\, d\mu(g_1\Gamma)d\mu(g_2\Gamma)\\
&=\int_{G\times G} F^+_t(g_1,g_2)\tilde \phi(g_1)\tilde \phi(g_2)\, dm(g_1)dm(g_2).
\end{align*}
We observe that when $g_1,g_2\in\mathcal{O}$,
$$
F^+_t(g_1,g_2)=\sum_{\gamma\in \Gamma} \chi_{g_1^{-1}B^+_t g_2}(\gamma)\ge
\sum_{\gamma\in \Gamma} \chi_{B_{t}}(\gamma)=|\Gamma\cap B_{t}|,
$$
so that since $\hbox{supp}(\tilde \phi)\subset \mathcal{O}$, we obtain that
$$
\left<F^+_t,\phi\otimes \phi\right>\ge |\Gamma\cap B_t| \left(\int_G \tilde \phi \, dm\right)^2\ge |\Gamma\cap B_{t}|.
$$
Hence, it follows from \eqref{eq:weak} and \eqref{eq:well2} that
$$
\limsup_{t\to\infty} \frac{|\Gamma\cap B_{ t}|}{m(B_t)}\le
\delta\limsup_{t\to\infty} \frac{\left<F^+_{t},\phi\otimes \phi\right>}{m(B^+_t)}
=\delta
$$
for all $\delta>1$.
A similar argument applied to the function $F_t^-$ gives
$$
\liminf_{t\to\infty} \frac{|\Gamma\cap B_{ t}|}{m(B_t)}\ge
\delta^{-1}\liminf_{t\to\infty} \frac{\left<F^-_{t},\phi\otimes \phi\right>}{m(B^-_t)}
=\delta^{-1}
$$
for all $\delta>1$. This implies Theorem \ref{th:count}.
\medskip
It is worthwhile to mention that
Gorodnik and Nevo \cite{gn1,gn2} showed that
the asymptotic formula for counting lattice points can be deduced solely from
an ergodic theorem for averages along the sets $B_t$ on the space $X=G/\Gamma$; ergodic theorems hold in far greater generality than the mixing property.
\section{Quantitative estimates on matrix coefficients}\label{sec:cor_2}
The goal of this section is to establish quantitative estimates on
matrix coefficients for unitary representations $\pi$ of
higher-rank simple groups $G$ (for instance, for $G=\hbox{SL}_d(\bb R)$
with $d\ge 3$). It is quite remarkable that this quantitative bound
for higher-rank groups holds uniformly for all representations without invariant vectors.
A quantitative bound on matrix coefficients may only hold on a proper subset of vectors,
and to state such a bound, we introduce the notion of $K$-finite vectors.
Let $K$ be a maximal compact subgroup of $G$.
By the Peter--Weyl Theorem,
a unitary representation $\pi|_K$
splits as a sum of finite-dimensional irreducible representations.
A vector $v$ is called {\it $K$-finite} if the span of $\pi(K)v$ is of
finite dimension.
We set
$$
d_K(v)=\dim \left<\pi(K)v\right>.
$$
The space of $K$-finite vectors is dense in the representation space.
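For instance, when $K=\hbox{SO}(2)$, the space $\mathcal{H}$ decomposes as $\bigoplus_{n\in\bb Z}\mathcal{H}_n$, where $K$ acts on $\mathcal{H}_n$ by the character $k(\theta)\mapsto e^{in\theta}$. A vector $v=\sum_n v_n$ is then $K$-finite precisely when only finitely many components $v_n$ are nonzero, and

```latex
$$
d_K(v)=\#\{n\in \bb Z:\, v_n\ne 0\}.
$$
```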
With this notation, we prove:
\begin{Theorem}
\label{th:high-rank}
Let $G$ be a (noncompact) connected simple higher-rank matrix Lie group with finite centre
and $K$ a maximal compact subgroup of $G$.
Then there exist $c,\delta>0$ such that for any unitary representation $\pi$ of
$G$ on a Hilbert space $\mathcal{H}$ without nonzero $G$-invariant vectors,
the following estimate holds: for all elements $g\in G$ and all $K$-finite vectors $v,w\in \mathcal{H}$,
$$
|\left<\pi(g)v,w\right>|\le c\, d_K(v)^{1/2}d_K(w)^{1/2}\|v\|\|w\|\, \|g\|^{-\delta}.
$$
\end{Theorem}
As we already remarked in Section \ref{sec:decay},
asymptotic properties of matrix coefficients for semisimple Lie groups
have been studied extensively starting with the foundational works of Harish-Chandra
(see, for instance, \cite{hc}).
Explicit quantitative bounds on matrix coefficients
have been obtained, in particular, in the works of
Borel and Wallach \cite{BW}, Cowling \cite{cowling}, Howe \cite{h},
Casselman and Milici\'c \cite{cm},
Cowling, Haagerup, and Howe \cite{chh},
Li \cite{li}, Li and Zhu \cite{li2}, and Oh \cite{oh1,oh2}.
Here we follow the elegant elementary approach of Howe and Tan \cite{ht}
to prove Theorem \ref{th:high-rank}.
\medskip
We start our investigation by analysing the unitary representations of the semidirect product
$$
L=\hbox{SL}_2(\bb R)\ltimes \bb R^2.
$$
We shall use the following notation:
\begin{align}
S&=\hbox{SL}_2(\bb R), \nonumber \\
K_S&=\hbox{SO}(2)=\left\{k(\theta)=
\left( \begin{tabular}{cc} $\cos\theta$ & $-\sin\theta$ \\ $\sin\theta$ & $\cos\theta$ \end{tabular}\right):\, \theta\in [0,2\pi)\right\}, \nonumber\\
A_S&=\left\{a(t)=\left( \begin{tabular}{cc} $t$ & $0$ \\ $0$ & $t^{-1}$ \end{tabular}\right):\, t>0\right\},\label{eq:notation}\\
U_S&=\left\{u(s)=\left(\begin{tabular}{cc} 1 & $s$ \\ 0 & 1\end{tabular} \right):\,
s\in \bb R\right\}.\nonumber
\end{align}
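In this notation, every element of $S$ factors as $k(\theta_1)a(t)k(\theta_2)$ with $t\ge 1$; this Cartan decomposition is used repeatedly below. As a purely numerical illustration (not part of the argument), it can be obtained from a singular value decomposition; the sketch below assumes NumPy is available:

```python
import numpy as np

def k(theta):
    """Rotation matrix k(theta) in SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def a(t):
    """Diagonal matrix a(t) = diag(t, 1/t) in A_S."""
    return np.array([[t, 0.0], [0.0, 1.0 / t]])

def u(s):
    """Upper unipotent matrix u(s) in U_S."""
    return np.array([[1.0, s], [0.0, 1.0]])

def cartan_sl2(g):
    """Cartan decomposition g = k(theta1) a(t) k(theta2) with t >= 1,
    computed from the singular value decomposition of g."""
    U, S, Vt = np.linalg.svd(g)
    # force U and Vt into SO(2): flip signs if the determinant is -1
    if np.linalg.det(U) < 0:
        U[:, 1] *= -1
        Vt[1, :] *= -1
    theta1 = np.arctan2(U[1, 0], U[0, 0])
    theta2 = np.arctan2(Vt[1, 0], Vt[0, 0])
    # since det g = 1, the singular values are t and 1/t with t >= 1
    return theta1, S[0], theta2
```

Since $\det g=1$, the two singular values multiply to $1$, so the largest one plays the role of $t$.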
\begin{Prop}\label{p:howe}
Let $\pi$ be a unitary representation of $L$ on a Hilbert space $\mathcal{H}$
without nonzero $\mathbb{R}^2$-invariant vectors.
Then for all vectors $v, w$ belonging to a $K_S$-invariant dense subspace of $\mathcal{H}$,
\begin{equation}
\label{eq:bound0}
|\left<\pi(a(t))v,w\right>|\le c(v,w)\, t^{-1} \quad \hbox{when $t\ge 1$.}
\end{equation}
\end{Prop}
\begin{proof}
We consider the restricted representation $\pi|_{\mathbb{R}^2}$,
which can be decomposed with respect to the irreducible one-dimensional unitary representations of $\bb R^2$
--- the unitary characters of $\bb R^2$:
$$
r\mapsto \chi_z(r)=e^{i\left<z,r\right>},\quad z\in\bb R^2.
$$
Namely, there exists a Borel projection-valued measure $P$ on $\bb R^2$ such that
$$
\pi(r)=\int_{\bb R^2}\chi_z(r)\, dP_z\quad\hbox{ for $r\in \bb R^2$. }
$$
We shall use that the measure $P$ satisfies an equivariance property with respect
to the action of $S$: since
$$
\pi(g)\pi(r)\pi(g)^{-1}=\pi(g(r))\quad\hbox{for $g\in S$ and $r\in \bb R^2$,}
$$
and
$$
\pi(g(r))=\int_{\bb R^2}\chi_z(g(r))\, dP_z=\int_{\bb R^2}\chi_{g^{t}(z)}(r)\, dP_z,
$$
it follows that
\begin{equation}
\label{eq:equiv}
\pi(g)P_B\pi(g)^{-1}=P_{(g^t)^{-1}B}\quad \hbox{ for Borel $B\subset \bb R^2$ and $g\in S$.}
\end{equation}
For $s>1$, we set
$$
\Omega_s=\{r\in\bb R^2:\, s^{-1}\le \|r\|\le s \},
$$
and consider the closed subspace $\mathcal{H}_s=\hbox{Im}(P_{\Omega_s})$.
Since the set $\Omega_s$ is invariant under $K_S$, it follows
from \eqref{eq:equiv} that the subspace $\mathcal{H}_s$ is $K_S$-invariant.
Using that the projection-valued measure $P$ is strongly continuous,
we deduce that for all $v\in \mathcal{H}$,
$$
P_{\Omega_s}v\to P_{\bb R^2\backslash \{0\}}v=v-P_{0}v\quad\hbox{ as $s\to\infty$.}
$$
Moreover, since we assumed that there are no nonzero $\bb R^2$-invariant vectors, $P_0=0$.
This shows that $\cup_{s>1} \mathcal{H}_s$ is dense in $\mathcal{H}$.
Using that the span of $K_S$-eigenvectors is dense in $\mathcal{H}_s$,
it remains to show that \eqref{eq:bound0} holds
for all $K_S$-eigenvectors in $\mathcal{H}_s$ with $s>1$.
We recall that $P_{B_1}P_{B_2}=P_{B_1\cap B_2}$ for Borel $B_1,B_2\subset \bb R^2$.
This allows us to compute that for $v,w\in \mathcal{H}_s$,
\begin{align*}
\left<\pi(a(t))v,w\right>&=\left<\pi(a(t))P_{\Omega_s}v, P_{\Omega_s}w\right>=
\left<P_{a(t)^{-1}\Omega_s}\pi(a(t))v, P_{\Omega_s}w\right>\\
&=\left<P_{a(t)^{-1}\Omega_s}\pi(a(t))v, P_{a(t)^{-1}\Omega_s}P_{\Omega_s}w\right>\\
&=\left<P_{a(t)^{-1}\Omega_s}\pi(a(t))v, P_{a(t)^{-1}\Omega_s\cap\Omega_s}w\right>\\
&=\left<P_{a(t)^{-1}\Omega_s\cap \Omega_s}\pi(a(t))v, P_{a(t)^{-1}\Omega_s\cap\Omega_s}w\right>\\
&=\left<\pi(a(t)) P_{\Omega_s\cap a(t)\Omega_s}v, P_{a(t)^{-1}\Omega_s\cap\Omega_s}w\right>.
\end{align*}
Hence, by the Cauchy--Schwarz inequality,
\begin{align}\label{eq:cs}
|\left<\pi(a(t))v,w\right>|\le
\|P_{\Omega_s\cap a(t)\Omega_s}v\|\, \|P_{a(t)^{-1}\Omega_s\cap\Omega_s}w\|.
\end{align}
We observe that the region $\Omega_s\cap a(t)\Omega_s$ is contained in two sectors of angle
$$
\alpha\le 2\sin^{-1}(s^2/t)
$$
(see Figure \ref{f:ellip}).
\begin{figure}[b]
\includegraphics[width=0.6\linewidth]{ellip.png}
\caption{Estimating the angle $\alpha$.} \label{f:ellip}
\end{figure}
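The angle bound can be sanity-checked numerically: a point $r=(x,y)$ of $\Omega_s\cap a(t)\Omega_s$ satisfies $\|r\|\ge s^{-1}$ and $\|a(t)^{-1}r\|\le s$, hence $|y|\le s/t$ and $|\sin\theta|=|y|/\|r\|\le s^2/t$. A small Monte Carlo sketch (an illustration only, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
s, t = 2.0, 100.0

def a_inv(pts, t):
    """Apply a(t)^{-1} = diag(1/t, t) to an array of points."""
    return pts * np.array([1.0 / t, t])

# sample points uniformly in the square [-s, s]^2
pts = rng.uniform(-s, s, size=(200000, 2))

norm = np.linalg.norm(pts, axis=1)
norm_inv = np.linalg.norm(a_inv(pts, t), axis=1)

# points lying in Omega_s and in a(t)Omega_s simultaneously
mask = (norm >= 1 / s) & (norm <= s) & (norm_inv >= 1 / s) & (norm_inv <= s)
hits = pts[mask]

# the sine of the angle each such point makes with the horizontal axis
sines = np.abs(hits[:, 1]) / np.linalg.norm(hits, axis=1)
```

Every sampled point of the intersection satisfies $|\sin\theta|\le s^2/t$, confirming the two-sector picture of Figure \ref{f:ellip}.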
We take $\theta_m=2\pi/m$ such that $\theta_{m+1}\le\alpha<\theta_m$, and consider the partition
\begin{equation}
\label{eq:decomp0}
\bb R^2\backslash \{0\}=\bigsqcup_{i=1}^m S_i
\end{equation}
into sectors such that
\begin{equation}
\label{eq:contain}
\Omega_s\cap a(t)\Omega_s\subset S_1.
\end{equation}
We note that $k_{\theta_m}(S_i)=S_{(i+1)\,\hbox{\tiny mod}\, m}$.
Now we suppose that $v$ is an eigenvector of $K_S$, that is,
$\pi(k_\theta)v=e^{i\lambda\theta}v$ for some $\lambda\in \bb R$. Then
using \eqref{eq:equiv}, we deduce that
$$
\pi(k_{\theta_m})P_{S_i}v=P_{k_{\theta_m}S_i}\pi(k_{\theta_m})v=e^{i\lambda\theta_m} P_{S_{(i+1)\,\hbox{\tiny mod}\, m}}v.
$$
Hence, it follows that
$$
\|P_{S_i}v\|=\|P_{S_{(i+1)\,\hbox{\tiny mod}\, m}}v\|.
$$
From \eqref{eq:decomp0}, we obtain
the orthogonal decomposition
$$
v=P_{\bb R^2\backslash \{0\}}v=\sum_{i=1}^m P_{S_i} v,
$$
so that
$$
\|v\|^2=\sum_{i=1}^m \|P_{S_i}v\|^2\quad\hbox{and}\quad \|P_{S_i}v\|=m^{-1/2}\|v\|.
$$
It follows from the inclusion \eqref{eq:contain} that
$$
\|P_{\Omega_s\cap a(t)\Omega_s}v\|\le \|P_{S_1}v\|\ll \left(\sin^{-1}(s^2/t)\right)^{1/2} \|v\|\ll_s t^{-1/2}\|v\|.
$$
A similar argument also gives that when $w$ is an eigenvector of $K_S$,
$$
\|P_{a(t)^{-1}\Omega_s\cap\Omega_s}w\|\ll_s t^{-1/2}\|w\|.
$$
Finally, we conclude from \eqref{eq:cs} that
$$
|\left<\pi(a(t))v,w\right>|\ll_s t^{-1}\|v\|\|w\|,
$$
which proves the proposition.
\end{proof}
The above argument has been generalised by Konstantoulas \cite{konst} to give
bounds on correlations of higher orders.
\medskip
Proposition \ref{p:howe} gives the best possible bound in terms of the parameter $t$,
but the drawback is that the dependence on the vectors $v,w$ is not explicit. Our goal will
be to derive a more explicit estimate for matrix coefficients.
We observe that Proposition \ref{p:howe} implies an integrability estimate
on the functions $g\mapsto \left<\pi(g)v,w\right>$, $g\in G$.
This will eventually allow us to reduce our study to the case of the regular representation.
We recall that the invariant
measure on $S=\hbox{SL}_2(\bb R)$
with respect to the Cartan decomposition $S=K_SA_SK_S$ is given by
$$
\int_S f(g)\,dg=\int_{[0,2\pi)\times [1,\infty)\times [0,2\pi)} f(k({\theta_1})a(t)k({\theta_2}))(t^2-t^{-2})\, d\theta_1 \frac{dt}{t}d\theta_2
$$
for $f\in C_c(S)$. Hence, from Proposition \ref{p:howe}, we deduce:
\begin{Cor}\label{c:integral}
With the notation as in Proposition \ref{p:howe},
for vectors $v,w$ belonging to a dense $K_S$-invariant subspace of $\mathcal{H}$,
the functions $g\mapsto \left<\pi(g)v,w\right>$, $g\in S$, are $L^{2+\epsilon}$-integrable
for all $\epsilon>0$.
\end{Cor}
We say that a unitary representation $\pi$ of a group $G$ is {\it $L^p$-integrable}
if the matrix coefficients $g\mapsto \left<\pi(g)v,w\right>$, $g\in G$,
belong to $L^p(G)$ for vectors $v,w$ from a dense subset.
\medskip
The following result concerns representations of a general locally compact group $G$.
We define the {\it regular representation} $\lambda_G$ on $L^2(G)$ by
$$
\lambda_G(g)\phi(x)=\phi(xg)\quad \hbox{for $\phi\in L^2(G).$}
$$
\begin{Prop}\label{p:l2}
Let $\rho$ be a unitary representation of $G$ on a Hilbert space $\mathcal{H}$.
We assume that the functions $g\mapsto \left<\rho(g)v,w\right>$
belong to $L^2(G)$ for vectors $v,w$ belonging to a dense subspace of $\mathcal{H}$.
Then there exists an isometric embedding
$$
\mathcal{I}:\mathcal{H}\to \oplus_{n\ge 1} L^2(G)
$$
such that
for every $g\in G$ the following diagram commutes:
\[\xymatrixcolsep{6pc}\xymatrix{
\mathcal{H} \ar[d]_{\rho(g)} \ar[r]^{\mathcal{I}} & {\bigoplus}_{n\ge 1} L^2(G) \ar[d]^{\oplus_{n\ge 1}\lambda_G(g)}\\
\mathcal{H} \ar[r]^{\mathcal{I}} & \bigoplus_{n\ge 1} L^2(G)
}\]
\end{Prop}
\begin{proof}
Let $\mathcal{H}_0$ be a countable orthonormal basis of $\mathcal{H}$, contained in the given dense subspace, so that
the functions $g\mapsto \left<v,\rho(g)w\right>$ belong to $L^2(G)$ for all $v,w\in \mathcal{H}_0$.
For $v,w\in \mathcal{H}$, we set
$$
f_{v,w}(x)=\left<v,\rho(x)w\right>,\quad x\in G,
$$
and define the map
$$
\mathcal{I}:\left<\mathcal{H}_0\right> \to {\bigoplus}_{n\ge 1} L^2(G): \, w\mapsto \left(f_{v,w}: v\in\mathcal{H}_0\right).
$$
Since
$$
\lambda_G(g)f_{v,w}=f_{v,\rho(g)w}\quad\hbox{for $g\in G$ and $v,w\in \mathcal{H}$,}
$$
we conclude that
$$
\mathcal{I}\circ \rho(g)=\big(\oplus_{n\ge 1}\lambda_G(g)\big)\circ \mathcal{I}\quad\hbox{for $g\in G$.}
$$
Moreover, since $\mathcal{H}_0$ forms an orthonormal basis of $\mathcal{H}$,
$$
\sum_{v\in\mathcal{H}_0} \|f_{v,w}\|_{2}^2= \|w\|^2\quad\hbox{for $w\in \mathcal{H}_0$,}
$$
it follows that
$$
\|\mathcal{I} w\|=\|w\|\quad \hbox{for $w\in \mathcal{H}_0$}.
$$
Hence, one can check that $\mathcal{I}$ extends to an isometric embedding, as required.
\end{proof}
Although Proposition \ref{p:l2} cannot be applied directly to the representation $\pi$ appearing in Proposition \ref{p:howe} (its matrix coefficients are only known to be $L^{2+\epsilon}$-integrable),
we deduce the following corollary about its tensor square $\pi\otimes \pi$.
\begin{Cor}\label{c:embed}
Let $\pi$ be a unitary representation of $L$ as in Proposition \ref{p:howe}.
Then the representation $(\pi\otimes \pi)|_S$ embeds as
a subrepresentation of $\bigoplus_{n\ge 1} \lambda_S$.
\end{Cor}
\begin{proof}
According to Corollary \ref{c:integral}, the functions
$g\mapsto \left<\pi(g)v,w\right>$, $g\in S$, are $L^p$-integrable for all $p>2$
when $v,w$ belong to a suitable orthonormal basis $\mathcal{H}_0$ of $\mathcal{H}$.
Then it follows from the Cauchy--Schwarz inequality that for
$v_1,v_2,w_1,w_2\in \mathcal{H}_0$, the functions
$$
\left<(\pi\otimes \pi)(g)(v_1\otimes v_2),w_1\otimes w_2\right>
=\left<\pi(g)v_1,w_1\right>
\left<\pi(g)v_2,w_2\right>
$$
belong to $L^2(S)$. Hence, the claim is implied by Proposition \ref{p:l2}.
\end{proof}
The above result ultimately reduces our original problem
regarding representations $\pi|_S$ to the study of matrix coefficients
for the regular representation $\lambda_S$. It turns out that the matrix coefficients of the latter representation can be estimated in terms of an explicit
function that we now introduce. In fact, this is true
for general connected semisimple Lie groups $G$. We
recall that in this setting, there is the {\it Iwasawa decomposition}
$$
G=UAK,
$$
where $K$ is a maximal compact subgroup of $G$, $A$ is a Cartan subgroup, and
$U$ is the subgroup generated by the positive root subgroups.
The invariant measure with respect to the Iwasawa decomposition is given by
\begin{equation}
\label{eq:iwa_measure}
\int_G f(g)\, dg=\int_{U\times A\times K} f(uak)\Delta(a)\, du da dk\quad\hbox{for $f\in C_c(G)$,}
\end{equation}
where $\Delta$ denotes the modular function of the group $UA$, and
$du$, $da$, and $dk$ denote the invariant measures on the corresponding factors.
For example, for the group $S=\hbox{SL}_2(\bb R)$, using the notation \eqref{eq:notation},
we have the Iwasawa decomposition $S=U_SA_SK_S$, and the modular function is
given by $\Delta(a(t))=t^{-2}$.
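For $S=\hbox{SL}_2(\bb R)$ the Iwasawa factors admit closed formulas: the bottom row of $u(s)a(t)k(\theta)$ equals $t^{-1}(\sin\theta,\cos\theta)$, which determines $t$ and $\theta$ from $g$, and $s$ is then read off. A numerical sketch (illustration only, assuming NumPy):

```python
import numpy as np

def iwasawa_sl2(g):
    """Iwasawa decomposition g = u(s) a(t) k(theta) for g in SL(2,R).

    The bottom row of u(s)a(t)k(theta) equals t^{-1}(sin theta, cos theta),
    which determines t and theta; s is read off from g k(theta)^{-1} a(t)^{-1}.
    """
    c, d = g[1, 0], g[1, 1]
    t = 1.0 / np.hypot(c, d)
    theta = np.arctan2(c, d)
    k = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    a = np.diag([t, 1.0 / t])
    un = g @ np.linalg.inv(k) @ np.linalg.inv(a)   # upper unipotent factor
    return un[0, 1], t, theta
```

Since $g$ and $a(t)k(\theta)$ share the same bottom row, the remaining factor $g\,(a(t)k(\theta))^{-1}$ automatically has bottom row $(0,1)$ and determinant $1$, hence is upper unipotent.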
The product map $U\times A\times K\to G$ defines a diffeomorphism,
and for $g\in G$, we denote by ${\sf u}(g)\in G$, ${\sf a}(g)\in A$, and ${\sf k}(g)\in K$
the unique elements such that
$$
g={\sf u}(g){\sf a}(g){\sf k}(g).
$$
The {\it Harish-Chandra function} is defined as
$$
\Xi(g)=\int_{K} \Delta ({\sf a}(kg))^{-1/2}\, dk \quad\hbox{ for $g\in G$.}
$$
It is easy to check that the function $\Xi$ is bi-$K$-invariant.
In the case when $S=\hbox{SL}_2(\bb R)$, the Harish-Chandra function
can be explicitly computed as
$$
\Xi(a(t))=\frac{1}{2\pi}\int_{0}^{2\pi}(t^{-2}\cos^2\theta+ t^2 \sin^2\theta)^{-1/2}\, d\theta.
$$
Moreover, one can check that for all $\epsilon>0$,
\begin{equation}
\label{eq:xi_b}
\Xi(a(t))\ll_\epsilon t^{-1+\epsilon}\quad\hbox{when $t\ge 1$.}
\end{equation}
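The bound \eqref{eq:xi_b} can be checked numerically from the integral formula above; the sketch below (an illustration, assuming NumPy and SciPy are available) computes $\Xi(a(t))$ by quadrature and verifies that $t\,\Xi(a(t))$ grows at most logarithmically:

```python
import numpy as np
from scipy.integrate import quad

def xi(t):
    """Harish-Chandra function of SL(2,R) at a(t), computed by numerical
    integration; the integrand has period pi and is symmetric about
    theta = pi/2, so one quarter period suffices."""
    f = lambda th: (t**-2 * np.cos(th)**2 + t**2 * np.sin(th)**2) ** -0.5
    val, _ = quad(f, 0.0, np.pi / 2, limit=500)
    return 4.0 * val / (2.0 * np.pi)
```

At $t=1$ the integrand is identically $1$, so $\Xi(a(1))=1$, and the computed values decay like $t^{-1}\log t$, consistent with \eqref{eq:xi_b}.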
Surprisingly, it turns out that matrix coefficients of general $K$-eigenfunctions
in $L^2(G)$ can be explicitly estimated in terms of the Harish-Chandra function:
\begin{Prop}
\label{p:harish}
For all $K$-eigenfunctions $\phi,\psi\in L^2(G)$,
$$
|\left<\lambda_G(g)\phi,\psi\right>|\le \Xi(g)\|\phi\|_2\|\psi\|_2.
$$
\end{Prop}
The following argument is a version of Herz's majoration principle \cite{herz}.
\begin{proof}[Proof of Proposition \ref{p:harish}]
Since $|\left<\lambda_G(g)\phi,\psi\right>|\le \left<\lambda_G(g)|\phi|,|\psi|\right>$
and the absolute value of a $K$-eigenfunction is $K$-invariant,
we may replace $\phi$ and $\psi$ by $|\phi|$ and $|\psi|$ and assume without loss
of generality that $\phi,\psi\ge 0$ and that $\phi$ and $\psi$ are $K$-invariant.
Then, because of the Cartan decomposition $G=KAK$ and the bi-$K$-invariance of $\Xi$, it is sufficient to prove the estimate when $g=a\in A$.
Using the decomposition of the invariant measure on $G$ given by \eqref{eq:iwa_measure},
we obtain that
\begin{align*}
\left<\lambda_G(a)\phi,\psi\right>= \int_G \phi(ga)\psi(g)\, dg
=\int_{U\times A\times K} \phi(ubka)\psi(ubk)\Delta(b)\, dudbdk.
\end{align*}
Then by the Cauchy--Schwarz inequality,
$$
\left<\lambda_G(a)\phi,\psi\right>\le \int_K
\left(\int_{U\times A} \phi^2(ubka)\Delta(b)dudb\right)^{1/2}
\left(\int_{U\times A} \psi^2(ubk)\Delta(b)dudb\right)^{1/2}\, dk.
$$
Using that $\psi$ is $K$-invariant, we obtain that
\begin{align*}
\int_{U\times A} \psi^2(ubk)\Delta(b)dudb
=\int_{U\times A\times K} \psi^2(ubk)\Delta(b)dudbdk=\|\psi\|_2^2.
\end{align*}
To estimate the other term, we write
\begin{align*}
ubka=ub\cdot {\sf u}(ka){\sf a}(ka){\sf k}(ka)=ub{\sf u}(ka)b^{-1}\cdot b{\sf a}(ka)\cdot {\sf k}(ka).
\end{align*}
Since $\phi$ is $K$-invariant, using the invariance of the integrals, we deduce that
\begin{align*}
&\int_K
\left(\int_{U\times A} \phi^2(ubka)\Delta(b)dudb\right)^{1/2}
dk\\
=&
\int_K
\left(\int_{U\times A} \phi^2(ub{\sf u}(ka)b^{-1}\cdot b{\sf a}(ka))\Delta(b)dudb\right)^{1/2}
dk\\
=&
\int_K
\left(\int_{U\times A} \phi^2(u\cdot b)\Delta(b{\sf a}(ka)^{-1})dudb\right)^{1/2}
dk\\
=& \left(\int_K \Delta({\sf a}(ka))^{-1/2}dk\right)
\left(\int_{U\times A} \phi^2(ub)\Delta(b)dudb\right)^{1/2}.
\end{align*}
Finally, because of the $K$-invariance of $\phi$,
\begin{align*}
\int_{U\times A} \phi^2(ub)\Delta(b)dudb
=\int_{U\times A\times K} \phi^2(ubk)\Delta(b)dudbdk=\|\phi\|_2^2
\end{align*}
so that
\begin{align*}
\int_K
\left(\int_{U\times A} \phi^2(ubka)\Delta(b)dudb\right)^{1/2}
dk=\Xi(a)\|\phi\|_2.
\end{align*}
This implies the required estimate.
\end{proof}
Using Proposition \ref{p:harish}, we deduce our main result about
representations of the group $L=\hbox{\rm SL}_2(\bb R)\ltimes \bb R^2$:
\begin{Theorem}\label{thy:semi}
Let $\pi$ be a unitary representation of $L=\hbox{\rm SL}_2(\bb R)\ltimes \bb R^2$ on a Hilbert space $\mathcal{H}$
such that there are no nonzero $\bb R^2$-invariant vectors.
Then for all elements $g\in \hbox{\rm SL}_2(\bb R)$ and all $\hbox{\rm SO}(2)$-finite vectors $v,w\in \mathcal{H}$,
$$
|\left<\pi(g)v,w\right>|\le d_{\hbox{\tiny\rm SO}(2)}(v)^{1/2}d_{\hbox{\tiny\rm SO}(2)}(w)^{1/2} \|v\|\|w\|\, \Xi(g)^{1/2}.
$$
\end{Theorem}
More general results giving quantitative bounds for representations
of semidirect products have been established by Oh \cite{oh2} and Wang \cite{wang}.
\medskip
In relation to Proposition \ref{p:harish}, we mention that
Cowling, Haagerup, and Howe \cite{chh} discovered that the bound in Proposition \ref{p:harish}
holds more generally for any representation which is $L^{2+\epsilon}$-integrable
for all $\epsilon>0$:
\begin{theo}
\label{th:chh}
Let $G$ be a semisimple real algebraic group and $\pi$
a unitary representation of $G$ on a Hilbert space $\mathcal{H}$
which is $L^{2+\epsilon}$-integrable for all $\epsilon>0$.
Then for all elements $g\in G$ and all $K$-finite vectors $v,w\in \mathcal{H}$,
$$
|\left<\pi(g)v,w\right>|\le d_K(v)^{1/2}d_K(w)^{1/2} \|v\|\|w\|\, \Xi(g).
$$
\end{theo}
In view of Corollary \ref{c:integral}, Theorem \ref{th:chh}
applies to the setting of Theorem \ref{thy:semi} and implies a bound which
is essentially optimal in terms of the decay rate along $G$.
\begin{theo}\label{thy:semi1}
Let $\pi$ be a unitary representation of $L=\hbox{\rm SL}_2(\bb R)\ltimes \bb R^2$ on a Hilbert space $\mathcal{H}$
such that there are no nonzero $\bb R^2$-invariant vectors.
Then for all elements $g\in \hbox{\rm SL}_2(\bb R)$ and all $\hbox{\rm SO}(2)$-finite vectors $v,w\in \mathcal{H}$,
$$
|\left<\pi(g)v,w\right>|\le d_{\hbox{\tiny\rm SO}(2)}(v)^{1/2}d_{\hbox{\tiny\rm SO}(2)}(w)^{1/2} \|v\|\|w\|\, \Xi(g).
$$
\end{theo}
Here we only prove the weaker bound given by Theorem \ref{thy:semi}:
\begin{proof}[Proof of Theorem \ref{thy:semi}]
First, we consider the case when $v$ and $w$ are eigenvectors of $K_S=\hbox{SO}(2)$.
We recall that by Corollary \ref{c:embed}, the representation $(\pi\otimes \pi)|_S$,
where $S=\hbox{\rm SL}_2(\bb R)$,
embeds as a subrepresentation of $\bigoplus_{n\ge 1}\lambda_S$.
Hence, it follows from Proposition \ref{p:harish} that
for $g\in S$,
$$
|\left<\pi(g)v,w\right>|=
|\left<(\pi\otimes \pi)(g)(v\otimes v),w\otimes w\right>|^{1/2}\le \|v\|\|w\|\,\Xi(g)^{1/2}.
$$
In general, we write $v=\sum_{i=1}^n v_i$ and $w=\sum_{j=1}^m w_j$,
where the $v_i$'s and $w_j$'s are orthogonal $K_S$-eigenvectors, so that $n\le d_{K_S}(v)$ and $m\le d_{K_S}(w)$. Then for $g\in S$,
\begin{align*}
|\left<\pi(g)v,w\right>|&\le \sum_{i=1}^n\sum_{j=1}^m |\left<\pi(g)v_i,w_j\right>|\le
\left(\sum_{i=1}^n \|v_i\|\right) \left(\sum_{j=1}^m \|w_j\|\right) \Xi(g)^{1/2}\\
&\le n^{1/2} \left(\sum_{i=1}^n \|v_i\|^2\right)^{1/2}\, m^{1/2}\left(\sum_{j=1}^m \|w_j\|^2\right)^{1/2}\, \Xi(g)^{1/2}\\
&\le d_{K_S}(v)^{1/2}d_{K_S}(w)^{1/2} \|v\|\|w\|\, \Xi(g)^{1/2}.
\end{align*}
This proves the theorem.
\end{proof}
Now we can derive uniform bounds for matrix coefficients of higher-rank simple Lie groups and prove Theorem \ref{th:high-rank}:
\begin{proof}[Proof of Theorem \ref{th:high-rank}]
We give a proof of the theorem for
$$
G=\hbox{SL}_d(\bb R)\supset K=\hbox{SO}(d).
$$
Because of the Cartan decomposition
$$
G=KAK\quad\hbox{where $A=\{\hbox{diag}(a_1,\ldots,a_d):\, a_1,\ldots, a_d>0,\, a_1\cdots a_d=1\}$,}
$$
it is sufficient to prove this estimate when $g=a\in A$.
We consider the subgroup $L=S\ltimes \bb R^2$,
where $S=\hbox{SL}_2(\bb R)$, embedded in the top left corner of $G$ (it is here that the assumption $d\ge 3$ is used). It follows from Theorem \ref{th:hm2} that for all $v,w\in \mathcal{H}$,
$$
\left<\pi(g)v,w\right>\to 0\quad \hbox{as $g\to \infty$ in $G$.}
$$
In particular, it follows that there are no nonzero $\bb R^2$-invariant vectors.
Hence, Theorem \ref{thy:semi} can be applied to the representation $\pi|_L$.
We write $a\in A$ as $a=a'a''$ with
\begin{align*}
a'&=\hbox{diag}\left((a_1/a_2)^{1/2}, (a_2/a_1)^{1/2}, 1,\ldots,1\right),\\
a''&=\hbox{diag}\left((a_1a_2)^{1/2}, (a_1a_2)^{1/2}, a_3,\ldots,a_d\right).
\end{align*}
We note that $a'\in A_S\subset L$ and $a''$ commutes with $S$.
In particular, it commutes with $K_S=\hbox{SO}(2)$.
This implies that the vector $\pi(a'')v$ is $K_S$-finite,
and
$$
d_{K_S}(\pi(a'')v)\le d_{K_S}(v)\le d_{K}(v).
$$
Hence, we deduce from Theorem \ref{thy:semi} and \eqref{eq:xi_b} that
\begin{align*}
|\left<\pi(a)v,w\right>|&=|\left<\pi(a')\pi(a'')v,w\right>|
\le d_{K_S}(\pi(a'')v)^{1/2}d_{K_S}(w)^{1/2} \|v\|\|w\|\, \Xi(a')^{1/2}\\
&\ll_\epsilon d_{K}(v)^{1/2}d_{K}(w)^{1/2} \|v\|\|w\|\, \left(\frac{a_1}{a_2}\right)^{-1/4+\epsilon}
\end{align*}
for all $\epsilon>0$.
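The decomposition $a=a'a''$ and the commutation of $a''$ with the embedded $\hbox{SL}_2(\bb R)$ block are elementary matrix identities; a quick numerical check (illustration only, assuming NumPy), here for $d=4$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

# a random element of A: positive diagonal entries with product 1
x = rng.uniform(0.5, 2.0, size=d)
a = np.diag(x / x.prod() ** (1.0 / d))
a1, a2 = a[0, 0], a[1, 1]

# a' lies in the A_S part of the embedded SL_2, a'' is the complement
a_p = np.diag([(a1 / a2) ** 0.5, (a2 / a1) ** 0.5] + [1.0] * (d - 2))
a_pp = np.diag([(a1 * a2) ** 0.5, (a1 * a2) ** 0.5]
               + [a[i, i] for i in range(2, d)])

# an element of K_S = SO(2) embedded in the top left corner
th = 0.7
R = np.eye(d)
R[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
```

The factorisation $a=a'a''$, the relation $\det(a'|_{2\times 2})=1$, and the commutation $a''R=Ra''$ (since $a''$ is scalar on the top $2\times 2$ block) can all be verified directly.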
The same argument can be applied to other embeddings of $\hbox{SL}_2(\bb R)\ltimes \bb R^2$
into $\hbox{SL}_d(\bb R)$. This gives the bound
\begin{align*}
|\left<\pi(a)v,w\right>|
&\ll_\epsilon d_{K}(v)^{1/2}d_{K}(w)^{1/2} \|v\|\|w\|\, \left(\max_{i\ne j }\frac{a_i}{a_j}\right)^{-1/4+\epsilon}
\end{align*}
for all $\epsilon>0$, and proves the theorem.
\end{proof}
\medskip
It is useful for applications to have estimates such as Theorem \ref{th:high-rank}
expressed in terms of H\"older norms or Sobolev norms of smooth vectors,
as in the works of Moore \cite{m}, Ratner \cite{rat}, and
Katok and Spatzier \cite{KS}.
Given a unitary representation $\pi$ of $G$
on a Hilbert space $\mathcal{H}$, one can also define an action of the Lie algebra
$\hbox{Lie}(G)$ on a dense subspace $\mathcal{V}$ of $\mathcal{H}$ that satisfies
$$
\pi(\mathcal{X})v=\frac{d}{dt}\pi(\exp(t\mathcal{X}))v|_{t=0}\quad \hbox{for $\mathcal{X}\in \hbox{Lie}(G)$ and $v\in\mathcal{V}$.}
$$
We fix an (ordered) basis $(\mathcal{X}_1,\ldots,\mathcal{X}_n)$ of the Lie algebra
$\hbox{Lie}(G)$. Then the Sobolev norm of order $\ell$ is defined as
\begin{equation}
\label{eq:sobolev}
S_\ell(v)^2=\sum_{(i_1,\ldots,i_\ell)} \left\|\pi(\mathcal{X}_{i_1})\ldots \pi(\mathcal{X}_{i_\ell})v\right\|^2.
\end{equation}
With this notation, we prove:
\begin{Theorem}
\label{th:high-rank2}
Let $G$ be a (noncompact) connected simple higher-rank matrix Lie group with finite centre
and $K$ a maximal compact subgroup of $G$.
Then there exist $c,\delta,\ell>0$ such that for any unitary representation $\pi$ of
$G$ on a Hilbert space $\mathcal{H}$ without nonzero $G$-invariant vectors,
$$
|\left<\pi(g)v,w\right>|\le c\, S_\ell(v) S_\ell(w)\, \|g\|^{-\delta}\quad\hbox{for all $g\in G$ and all $v,w\in \mathcal{V}$}.
$$
\end{Theorem}
\begin{proof}
The proof will require several more advanced facts about representations of semisimple groups.
We indicate how to complete the proof using these facts when $\pi$ is irreducible.
Then the general case will follow by using the integral decomposition.
We decompose $\mathcal{H}$ as a direct sum of irreducible representations of the maximal compact subgroup $K$. This gives the decomposition
\begin{equation}
\label{eq:decomp}
\mathcal{H}=\bigoplus_{\sigma\in \hat K} \mathcal{H}_\sigma,
\end{equation}
where $\hat K$ denotes the unitary dual of $K$, and $\mathcal{H}_\sigma$ is the direct sum of
the irreducible components isomorphic to $\sigma$. There is the Casimir operator
$\mathcal{C}$ of $K$,
a second-order differential operator commuting with the action of $K$.
It leaves each of the subspaces $\mathcal{H}_\sigma$ invariant.
Moreover, one can deduce from the Schur Lemma that
$$
\mathcal{C}|_{\mathcal{H}_\sigma}=\lambda_\sigma\, \hbox{id}_{\mathcal{H}_\sigma}
$$
for some $\lambda_\sigma>0$.
The eigenvalues $\lambda_\sigma$ and the dimensions $\dim(\sigma)$ can be computed
explicitly in the representation theory of compact groups, and one can verify that
\begin{align*}
\dim(\sigma)\le \lambda_\sigma^{c_1}\quad\hbox{and}\quad
\sum_{\sigma\in\hat K} \dim(\sigma)^{-c_2}<\infty
\end{align*}
for some $c_1,c_2>0$.
We shall also use a result of Harish-Chandra regarding admissibility of
irreducible unitary representations (see, for instance, \cite{war1}) that gives the bound
$$
\dim (\mathcal{H}_\sigma)\le \dim(\sigma)^2.
$$
Now utilising these estimates, we proceed with the proof of the theorem.
We decompose the vectors with respect to the decomposition
\eqref{eq:decomp} and deduce from Theorem \ref{th:high-rank} that
\begin{align*}
|\left<\pi(g)v,w\right>|&\le \sum_{\sigma,\tau\in \hat K}|\left<\pi(g)v_\sigma,w_\tau\right>|\\
&\ll \left(\sum_{\sigma\in \hat K} \dim(\mathcal{H}_\sigma)^{1/2}\|v_\sigma\|\right)
\left(\sum_{\tau\in \hat K} \dim(\mathcal{H}_\tau)^{1/2}\|w_\tau\|\right)\, \|g\|^{-\delta}.
\end{align*}
Since the above sums can be estimated as
\begin{align*}
\sum_{\sigma\in \hat K} \dim(\mathcal{H}_\sigma)^{1/2}\|v_\sigma\|
&\le \sum_{\sigma\in \hat K} \dim(\sigma) \lambda_\sigma^{-s}\left\|\pi(\mathcal{C})^s v_\sigma\right\|\\
&\le \sum_{\sigma\in \hat K} \dim(\sigma)^{1-s/c_1} \left\|\pi(\mathcal{C})^s v_\sigma\right\|
\\
&\le \left(\sum_{\sigma\in\hat K} \dim(\sigma)^{-2(s/c_1-1)}\right)^{1/2}
\left(\sum_{\sigma\in\hat K} \left\| \pi(\mathcal{C})^sv_\sigma\right\|^2\right)^{1/2} \\
&\ll \left\|\pi(\mathcal{C})^s v\right\|
\end{align*}
for sufficiently large $s$, this implies the theorem.
\end{proof}
We note that if the assumption that $G$ is of higher rank
is removed, the statements of Theorems \ref{th:high-rank} and \ref{th:high-rank2} are not true.
Although we know from Theorem \ref{th:hm2} that
$$
\left<\pi(g)v,w\right>\to 0 \quad \hbox{as $g\to \infty$ in $G$,}
$$
there are unitary representations without invariant vectors
for which no uniform estimate on the decay of matrix coefficients holds.
For example, in the case of $\hbox{SL}_2(\bb R)$, the complementary series
representations provide examples with arbitrarily slow decay rates.
Nonetheless, it is known that for every nontrivial irreducible representation $\pi$, there exist $c(\pi),\delta(\pi)>0$ such that
$$
|\left<\pi(g)v,w\right>|\le c(\pi)\, S_\ell(v) S_\ell(w)\, \|g\|^{-\delta(\pi)}.
$$
Moreover, this bound also holds for any unitary representation $\pi$
which is isolated from the trivial representation in the sense of the Fell topology.
\medskip
Theorem \ref{th:high-rank2} can be applied to finite-volume
homogeneous spaces $X$ of $G$. Indeed, a simple Fubini-type argument
implies that the only $G$-invariant vectors of the corresponding unitary representation of $G$
on $L^2(X)$ are the constant functions, so that
the bound of Theorem \ref{th:high-rank2} applies to all
functions in $L_0^2(X)$, which denotes the subspace of functions with zero integral. Even when $G$ has rank one, one can show that the unitary
representation of $G$ on $L^2_0(X)$ is isolated from the trivial representation
(see, for instance, \cite[Lemma~3]{bekka})
so that the quantitative bounds on correlations hold in this case as well.
\begin{theo}
\label{th:high-rank3}
Let $G$ be a (noncompact) connected simple matrix Lie group with finite centre, and let $(X,\mu)$ be a homogeneous space of $G$ with invariant probability measure $\mu$.
Then there exist $\delta,\ell>0$ such that for
all elements $g\in G$ and all functions $\phi,\psi\in C_c^\infty(X)$,
$$
\int_X \phi(g^{-1}x)\psi(x)\,d\mu(x) =\left(\int_X\phi\, d\mu\right)\left(\int_X\psi\, d\mu\right)+
O\left( S_\ell(\phi) S_\ell(\psi)\, \|g\|^{-\delta}\right).
$$
\end{theo}
Maucourant \cite{mauc}
used estimates on the correlations from Theorem \ref{th:high-rank3}
to prove a version of Theorem \ref{th:count} that gives an asymptotic formula for the
number of lattice points with an error term.
\medskip
In conclusion,
we note that in Theorems \ref{th:high-rank}, \ref{th:high-rank2} and \ref{th:high-rank3},
the matrix norm $\|\cdot\|$
can be estimated in terms of a (left) invariant Riemannian metric $d$ on $G$ as
$$
e^{c_1 d(g,e)}\le \|g\|\le e^{c_2 d(g,e)}\quad\hbox{for all $g\in G$.}
$$
In particular, in Theorem \ref{th:high-rank3}, this gives the error term
$$
O\left( S_\ell(\phi) S_\ell(\psi)\, e^{-\delta'd(g,e)}\right)
$$
for some $\delta'>0$.
\section{Bounds on higher-order correlations}\label{sec:cor_gen}
Building on the results from Section \ref{sec:cor_2}, we intend to establish
quantitative estimates on correlations of arbitrary order. We follow the argument
of Bj\"orklund, Einsiedler, Gorodnik \cite{beg}.
Throughout this section, $G$ denotes a (noncompact) connected simple matrix Lie group
with finite centre. We consider a measure-preserving action of $G$ on
a standard probability space $(X,\mu)$. To simplify notation,
we set
$$
(g\cdot \phi)(x)=\phi(g^{-1}x)\quad\hbox{for $g\in G$ and $\phi\in L^\infty(X)$.}
$$
Our goal is to estimate the correlations
$$
\mu((g_1\cdot \phi_1)\cdots (g_r\cdot \phi_r))=\int_X \phi_1(g_1^{-1}x)\cdots \phi_r(g_r^{-1}x)\, d\mu(x)
$$
for suitable functions $\phi_1,\ldots,\phi_r$ on $X$.
We shall assume that we know how to estimate correlations of order two.
Namely, we assume that there exist a subalgebra $\mathcal{A}$ of $L^\infty(X)$
and $\delta>0$ such that for all functions $\phi_1,\phi_2\in\mathcal{A}$
and all $g\in G$,
\begin{equation}
\label{eq:m}
\mu((g\cdot \phi_1)\, \phi_2)=\mu(\phi_1)\mu(\phi_2)+O\big(S_\ell(\phi_1)S_\ell(\phi_2)\, \|g\|^{-\delta}\big),
\end{equation}
where $S_\ell$ denotes a norm on the algebra $\mathcal{A}$.
The precise definition of the family of norms
$$
S_1\le S_2\le \cdots \le S_\ell\le \cdots
$$
will not be important for our arguments. We shall only use that these norms
satisfy the following properties:
\begin{enumerate}
\item[(${\rm N}_1$)] there exists $\ell_1$ such that
$$
\|\phi\|_{L^\infty}\ll S_{\ell_1}(\phi),
$$
\item[(${\rm N}_2$)] there exists $\ell_2$ such that
$$
\|g\cdot \phi-\phi\|_{L^\infty}\ll \|g-e\|\, S_{\ell_2}(\phi)\quad\hbox{for all $g\in G$,}
$$
\item[(${\rm N}_3$)] for all $\ell$, there exists $\sigma_\ell>0$ such that
$$
S_\ell(g\cdot \phi)\ll_\ell \|g\|^{\sigma_\ell} \, S_\ell(\phi)\quad\hbox{for all $g\in G$,}
$$
\item[(${\rm N}_4$)] for every $\ell$, there exists $\ell'$ such that
$$
S_\ell(\phi_1\phi_2)\ll_\ell S_{\ell'}(\phi_1) S_{\ell'}(\phi_2).
$$
\end{enumerate}
\medskip
For instance, when $X=L/\Gamma$ where $L$ is a connected Lie group and $\Gamma$
a discrete cocompact subgroup, it follows from a version of the Sobolev embedding theorem
that the Sobolev norms defined in \eqref{eq:sobolev} satisfy these properties.
More generally, when $\Gamma$ is a discrete subgroup of finite covolume,
one can also introduce a family of norms majorating the usual Sobolev norms and
satisfying these properties (see, for instance, \cite[\S3.7]{emv}). In particular, it follows from Section \ref{sec:cor_2}
that the bound \eqref{eq:m} holds in this setting.
\medskip
The main result of this section is the following:
\begin{Theorem} \label{th:cor_high}
For every $r\ge 2$, there exist $\delta_r,\ell_r>0$ such that
for all elements $g_1,\ldots,g_r\in G$ and all functions $\phi_1,\ldots,\phi_r\in \mathcal{A}$,
\begin{align*}
\mu((g_1\cdot \phi_1)\cdots (g_r\cdot \phi_r))=\, & \mu(\phi_1)\cdots \mu(\phi_r)\\
&+O_r\big( S_{\ell_r}(\phi_1)\cdots S_{\ell_r}(\phi_r)\, N(g_1,\ldots,g_r)^{-\delta_r}\big),
\end{align*}
where
$$
N(g_1,\ldots,g_r)=\min_{i\ne j} \|g_i^{-1}g_j\|.
$$
\end{Theorem}
We first explain the strategy of the proof of Theorem \ref{th:cor_high}.
It will be convenient to consider the correlation of order $r$ as a measure on the product
space $X^r$: we introduce a measure $\eta=\eta_{g_1,\ldots,g_r}$ on $X^r$
defined by
$$
\eta(\phi)=\int_X\phi(g_1^{-1}x,\ldots, g_r^{-1}x)\, d\mu(x)\quad\hbox{for $\phi\in L^\infty(X^r)$.}
$$
Theorem \ref{th:cor_high} amounts to showing that
the measure $\eta$ is ``approximately'' equal to the product
measure $\mu^r$ on $X^r$.
Our argument will proceed by induction on the number of factors.
Let us take a nontrivial partition $\{1,\ldots,r\}=I\sqcup J$
which defines the projection maps $X^r\to X^I$ and $X^r\to X^J$.
We obtain the measures $\eta_I$ and $\eta_J$ on $X^I$ and $X^J$ respectively
that are the projections of the measure $\eta$. Formally, these measures are defined by
\begin{align*}
\eta_I(\phi)&=\eta(\phi\otimes 1)\quad\hbox{for $\phi\in L^\infty(X^I)$,}\\
\eta_J(\psi)&=\eta(1\otimes \psi)\quad\hbox{for $\psi\in L^\infty(X^J)$.}
\end{align*}
Ultimately, our proof will involve comparing the diagrams:
$$
\xymatrixcolsep{1.3pc} \xymatrixrowsep{1.2pc}\
\xymatrix{
&& X^r \ar@/_1pc/[lldd] \ar@/^1pc/[rrdd] \\
&& \eta \ar@/_/[ld] \ar@/^/[rd] \ar@{.>}[u] &&\\
X^I & \eta_I \ar@{.>}[l] && \eta_J \ar@{.>}[r] & X^J
}\;\;
\xymatrix{
&& X^r \ar@/_1pc/[lldd] \ar@/^1pc/[rrdd] && \\
&& \mu^r \ar@/_/[ld] \ar@/^/[rd] \ar@{.>}[u] &&\\
X^I & \mu^I \ar@{.>}[l] && \mu^J \ar@{.>}[r] & X^J
}
$$
We may assume by induction that
$$
\eta_I\approx \mu^I\quad\hbox{ and } \quad \eta_J\approx \mu^J
$$
and need to show that
\begin{equation}
\label{eq:approx}
\eta\approx \mu^r= \mu^I\otimes \mu^J.
\end{equation}
To establish this estimate, we use that the measure $\eta$ is invariant under the subgroup
$$
D=\{(g_1^{-1}hg_1,\ldots, g_r^{-1}hg_r): \, h\in G\}.
$$
We take a one-parameter subgroup
$$
h(t)=(\exp(tZ_1),\ldots,\exp(tZ_r))
$$
in $D$, which can also be written as
$$
h(t)=(h_I(t),h_J(t))
$$
for one-parameter subgroups acting on $X^I$ and $X^J$.
We note that the measure $\eta_I$ is $h_I(t)$-invariant,
and the measure $\eta_J$ is $h_J(t)$-invariant.
We consider the averaging operator
\begin{equation}
\label{eq:P_T}
P_T:L^\infty(X^I)\to L^\infty(X^I):\, \phi \mapsto \frac{1}{T}\int_0^T \phi(h_I(t)x)\, dt
\end{equation}
that preserves the measure $\eta_I$. Given functions $\phi_1,\ldots,\phi_r\in L^\infty(X)$,
we write
$$
\phi_I=\otimes_{i\in I} \phi_i\quad\hbox{and} \quad \phi_J=\otimes_{i\in J} \phi_i.
$$
To simplify notation, we write $S_\ell(\phi_I)=\prod_{i\in I} S_\ell(\phi_i)$ below.
We establish \eqref{eq:approx} using the following key estimate:
\begin{align}
|\eta(\phi_I\otimes \phi_J)-\mu^r(\phi_1\otimes \cdots\otimes \phi_r)|=&\;
|\eta(\phi_I\otimes \phi_J)-\mu^I(\phi_I)\mu^J(\phi_J)| \label{eq:key}\\
\le &\;\; |\eta(\phi_I\otimes \phi_J)-\eta(P_T\phi_I\otimes \phi_J)| \tag{I}\\
&\;+|\eta(P_T\phi_I\otimes \phi_J)-\eta_I(\phi_I)\eta_J(\phi_J)| \tag{II}\\
&\;+|\eta_I(\phi_I)\eta_J(\phi_J)-\mu^I(\phi_I)\mu^J(\phi_J)|. \tag{III}
\end{align}
We will estimate the terms (I), (II), and (III) separately for a carefully chosen partition $\{I,J\}$ and
a carefully chosen one-parameter subgroup $h(t)$. To simplify notation,
we shall assume that for all $i=1,\ldots,r$,
$$
S_{\ell'}(\phi_i)\le 1
$$
for a fixed sufficiently large $\ell'$. In particular,
it also follows from properties (${\rm N}_1$) and (${\rm N}_2$) of the norms that
for all $i=1,\ldots,r$ and $g\in G$,
\begin{equation}
\label{eq:bbound}
\|\phi_i\|_{L^\infty}\ll 1\quad\hbox{and}\quad \|g\cdot \phi_i-\phi_i\|_{L^\infty}\ll \|g-e\|.
\end{equation}
It will be convenient to replace the matrix norm by a different norm defined
in terms of the adjoint representation $\hbox{Ad}:G\to \hbox{GL}(\hbox{Lie}(G))$
with
$$
\hbox{Ad}(g):X\mapsto gX g^{-1}\quad\hbox{for $X\in \hbox{Lie}(G)$.}
$$
We fix a norm on the Lie algebra $\hbox{Lie}(G)$ and set
$$
\|g\|=\max\left\{\|\hbox{Ad}(g)Z\|: \|Z\|=1\right\}.
$$
It is not hard to check that for any $g\in G$,
\begin{align*}
&\|g\|\ge 1\quad\hbox{and}\\
&\|g\|=\|\hbox{Ad}(g)Z\|\quad\hbox{for some nilpotent $Z$ with $\|Z\|=1$.}
\end{align*}
Now we describe the choice of the one-parameter subgroup $h(t)$ that we use.
Let
$$
Q=\max_{i\ne j} \|g_i^{-1}g_j\|\quad\hbox{and}\quad q=\min_{i\ne j} \|g_i^{-1}g_j\|\ge 1.
$$
We take $i_1\ne i_s$ such that
$$
Q=\|g_{i_1}^{-1}g_{i_s}\|=\|\hbox{Ad}(g_{i_1}^{-1}g_{i_s})Z\|
$$
for some nilpotent $Z$ with $\|Z\|=1$. Then
$$
\|\hbox{Ad}(g_{i_1}^{-1}g_{i_s})Z\|\ge \|\hbox{Ad}(g_{i}^{-1}g_{j})Z\|\quad\hbox{for all $i\ne j$.}
$$
For a suitable choice of indices, we obtain that
$$
\|\hbox{Ad}(g_{i_1}^{-1}g_{i_s})Z\|\ge \|\hbox{Ad}(g_{i_2}^{-1}g_{i_s})Z\|\ge \cdots \ge \|\hbox{Ad}(g_{i_r}^{-1}g_{i_s})Z\|.
$$
In fact, after relabelling, we may assume that
$$
\|\hbox{Ad}(g_{1}^{-1}g_{s})Z\|\ge \|\hbox{Ad}(g_{2}^{-1}g_{s})Z\|\ge \cdots \ge \|\hbox{Ad}(g_{r}^{-1}g_{s})Z\|.
$$
We note that
$$
\|\hbox{Ad}(g_{r}^{-1}g_{s})Z\|\le \|\hbox{Ad}(g_{s}^{-1}g_{s})Z\|=1.
$$
We set
$$
Z_j=\frac{\hbox{Ad}(g_{j}^{-1}g_{s})Z}{\|\hbox{Ad}(g_{1}^{-1}g_{s})Z\|}\quad\hbox{and}\quad
w_j=\|Z_j\|.
$$
Then
\begin{equation}
\label{eq:w}
1=w_1\ge w_2\ge \cdots \ge w_r\quad\hbox{and}\quad w_r\le Q^{-1}\le q^{-1}.
\end{equation}
We take
$$
I=\{1,\ldots,p\}\quad\hbox{and}\quad J=\{p+1,\ldots,r\},
$$
where the index $p$ will be specified later.
We observe that with these choices, the one-parameter subgroups $h_I(t)$ and $h_J(t)$
satisfy the following properties with some exponents $a,b>0$,
\begin{enumerate}
\item[(a)] $\|h_J(t)\cdot \phi_J -\phi_J\|_{L^\infty} \ll w_{p+1}|t|$,
\item[(b)] $S_\ell (h_I(t)\cdot \phi_I)\ll \max(1,|t|)^a$,
\item[(c)] $|\mu^I((h_I(t)\cdot \phi_I) \phi_I)-\mu^I(\phi_I)^2|\ll \max(1,w_p|t|)^{-b}$.
\end{enumerate}
Indeed, (a) can be deduced from the property (${\rm N}_2$) of the Sobolev norms,
(b) --- from the property (${\rm N}_3$), and (c) --- from the bound \eqref{eq:m}
on correlations of order two.
\medskip
Now we proceed to estimate \eqref{eq:key}.
Our argument proceeds by induction on $r$, and we suppose that
we have established existence of $E=E(g_1,\ldots,g_r)$ such that for all
proper subsets $L$ of $\{1,\ldots,r\}$ and functions $\psi_1,\ldots,\psi_r\in \mathcal{A}$,
\begin{equation}
\label{eq:induction}
|\eta_L(\psi_L)-\mu^L(\psi_L)|\le E\, S_\ell(\psi_L).
\end{equation}
We estimate each of the terms (I), (II), (III) appearing in \eqref{eq:key}.
We note that it follows immediately from the assumption \eqref{eq:induction} and \eqref{eq:bbound}
that
\begin{equation}
\label{eq:III}
|\eta_I(\phi_I)\eta_J(\phi_J)-\mu^I(\phi_I)\mu^J(\phi_J)|\ll E.
\end{equation}
This provides an estimate for the term (III).
\medskip
To estimate the term (I), we observe that
$$
\eta(P_T\phi_I\otimes \phi_J)=\eta\left(\frac{1}{T}\int_0^T (h_I(t)\cdot \phi_I)\otimes \phi_J\, dt \right).
$$
Using that the measure $\eta$ is invariant under $h(t)=(h_I(t),h_J(t))$,
we obtain that
$$
\eta(\phi_I\otimes \phi_J)=\eta\left(\frac{1}{T}\int_0^T (h_I(t)\cdot \phi_I)\otimes (h_J(t)\cdot\phi_J)\, dt \right).
$$
Hence, the term (I) can be estimated as
\begin{align} \label{eq:I}
&|\eta(\phi_I\otimes \phi_J)-\eta(P_T\phi_I\otimes \phi_J)|\\
\nonumber \le&\,
\eta\left(\frac{1}{T}\int_0^T \big| (h_I(t)\cdot \phi_I)\otimes (h_J(t)\cdot\phi_J)
-(h_I(t)\cdot \phi_I)\otimes \phi_J\big|\, dt \right)\\
\nonumber \le&\, \frac{1}{T}\int_0^T \big\|(h_I(t)\cdot \phi_I)\otimes (h_J(t)\cdot\phi_J -\phi_J)\big\|_{L^\infty}\, dt\\
\nonumber \le & \, \|\phi_I\|_{L^\infty}\cdot \max_{0\le t\le T} \|h_J(t)\cdot\phi_J-\phi_J\|_{L^\infty}
\ll w_{p+1} T,
\end{align}
where we used \eqref{eq:bbound} and (a).
\medskip
To estimate the term (II), we use that
$$
\eta_I(\phi_I)\eta_J(\phi_J)=\eta_I(\phi_I)\eta(1\otimes \phi_J)=\eta(\eta_I(\phi_I)\otimes \phi_J).
$$
We first show that the term (II) can be estimated in terms of the quantity
$$
D_T(\eta_I)=\eta_I\left( |P_T\phi_I-\eta_I(\phi_I)|^2\right)^{1/2}.
$$
Indeed, we obtain that
\begin{align*}
|\eta(P_T\phi_I\otimes \phi_J)-\eta_I(\phi_I)\eta_J(\phi_J)|
&=|\eta((P_T\phi_I-\eta_I(\phi_I))\otimes \phi_J)|\\
&\le \eta\left(|P_T\phi_I-\eta_I(\phi_I)|\otimes |\phi_J|\right)\\
&\le \eta\left(|P_T\phi_I-\eta_I(\phi_I)|\right)\|\phi_J\|_{L^\infty}\\
&\ll D_T(\eta_I)
\end{align*}
by \eqref{eq:bbound} and the Cauchy--Schwarz inequality.
To deal with $D_T(\eta_I)$, we use that it can be approximated by
$$
D_T(\mu^I)=\mu^I\left( |P_T\phi_I-\mu^I(\phi_I)|^2\right)^{1/2}.
$$
The corresponding estimate is given by:
\begin{Lemma}\label{l:D}
$|D_T(\eta_I)-D_T(\mu^I)|\ll T^{a/2} E^{1/2}$.
\end{Lemma}
\begin{proof}
Using the inequality $|x-y|\le \sqrt{|x^2-y^2|}$ with $x,y\ge 0$, we obtain that
$$
|D_T(\eta_I)-D_T(\mu^I)|\le \sqrt{|D_T(\eta_I)^2-D_T(\mu^I)^2|}.
$$
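Indeed, for $x\ge y\ge 0$ this elementary inequality follows from
$$
(x-y)^2\le (x-y)(x+y)=x^2-y^2.
$$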
Expanding the averaging operator \eqref{eq:P_T} and changing the order of integration, we deduce that
\begin{align*}
D_T(\eta_I)^2&=\int_{X^I} |P_T\phi_I-\eta_I(\phi_I)|^2\, d\eta_I\\
&=\frac{1}{T^2}\int_0^T\int_0^T \big( \eta_I ( (h_I(s-t)\cdot\phi_I) \phi_I)-\eta_I(\phi_I)^2 \big)\, dsdt.
\end{align*}
Similarly,
\begin{align*}
D_T(\mu^I)^2
&=\frac{1}{T^2}\int_0^T\int_0^T \big( \mu^I ( (h_I(s-t)\cdot\phi_I) \phi_I)-\mu^I(\phi_I)^2 \big)\, dsdt.
\end{align*}
Hence,
\begin{align*}
&|D_T(\eta_I)^2-D_T(\mu^I)^2|\\
\le&\, \frac{1}{T^2}\int_0^T\int_0^T \Big(
\big|\eta_I ( (h_I(s-t)\cdot\phi_I) \phi_I)-\mu^I ( (h_I(s-t)\cdot\phi_I) \phi_I) \big|
\\
&\quad\quad\quad\quad\quad\quad+
\big|\eta_I(\phi_I)^2-\mu^I(\phi_I)^2\big|
\Big)\, dsdt.
\end{align*}
The first term inside the integral is estimated using \eqref{eq:induction} as
\begin{align*}
&\ll E\, S_\ell \big((h_I(s-t)\cdot \phi_I)\phi_I\big)\ll E\,
S_{\ell'} (h_I(s-t)\cdot \phi_I)\, S_{\ell'}(\phi_I)\\
&\ll E\, \max(1,|s-t|)^a,
\end{align*}
where we used (${\rm N}_4$), (b), and \eqref{eq:bbound}.
The second term inside the integral is estimated using \eqref{eq:induction} and \eqref{eq:bbound} as
\begin{align*}
=|\eta_I(\phi_I)-\mu^I(\phi_I)|\cdot |\eta_I(\phi_I)+\mu^I(\phi_I)|\le E\cdot 2\|\phi_I\|_{L^\infty}\ll E.
\end{align*}
Finally, the lemma follows from the bound
$$
\frac{1}{T^2}\int_0^T\int_0^T \max(1,|s-t|)^a\, dsdt\ll T^a.
$$
\end{proof}
It follows from (c) that
\begin{align*}
D_T(\mu^I)^2
&=\frac{1}{T^2}\int_0^T\int_0^T \big( \mu^I ( (h_I(s-t)\cdot\phi_I) \phi_I)-\mu^I(\phi_I)^2 \big)\, dsdt\\
&\ll
\frac{1}{T^2}\int_0^T\int_0^T \max(1,w_p|s-t|)^{-b}\, dsdt \ll (w_p T)^{-b}.
\end{align*}
Hence, we conclude from Lemma \ref{l:D} that the term (II) can be estimated as
\begin{equation}
\label{eq:II}
|\eta(P_T\phi_I\otimes \phi_J)-\eta_I(\phi_I)\eta_J(\phi_J)|\ll \max \big( T^{a/2}E^{1/2}, (w_pT)^{-b/2}\big).
\end{equation}
\medskip
Now combining the bounds \eqref{eq:I}, \eqref{eq:II}, and \eqref{eq:III},
we deduce from \eqref{eq:key} that for all $T\ge 1$,
$$
|\eta(\phi_I\otimes \phi_J)-\mu^I(\phi_I)\mu^J(\phi_J)|\ll
\max \big(w_{p+1} T, T^{a/2}E^{1/2}, (w_pT)^{-b/2}\big).
$$
This estimate will be used to complete the proof of Theorem \ref{th:cor_high} by induction on $r$.
We suppose that \eqref{eq:induction} holds with $E=q^{-\tau}$ for some $\tau>0$.
We have to pick the index $p$ and the parameter $T$ to minimise
\begin{equation}
\label{eq:max}
\max \big(w_{p+1} T, T^{a/2}q^{-\tau/2}, (w_pT)^{-b/2}\big).
\end{equation}
We seek a bound which is uniform on the parameters $w_1,\ldots,w_r$ satisfying \eqref{eq:w}.
We take $\theta>0$ with $\theta<(r-1)^{-1}$.
Then since $w_r\le q^{-1}$, all the $r$ points
$$
1,q^{-\theta},\ldots, q^{-(r-1)\theta}
$$
are contained in the union of $r-1$ intervals
$$
[w_r,w_{r-1}],\ldots, [w_2,w_1].
$$
Hence, by the Pigeonhole Principle, there exist $p$ and $i$ such that
$$
w_{p+1}\le q^{-(i+1)\theta} < q^{-i\theta}\le w_p.
$$
Taking $T=q^{(i+1/2)\theta}$, we obtain that \eqref{eq:max} is
estimated by $q^{-\tau'}$ with $\tau'>0$.
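Indeed, with these choices of $p$, $i$, and $T$, the first two terms in \eqref{eq:max} are directly bounded by negative powers of $q$:
$$
w_{p+1}T\le q^{-(i+1)\theta}\,q^{(i+1/2)\theta}=q^{-\theta/2},\qquad
(w_pT)^{-b/2}\le \big(q^{-i\theta}\,q^{(i+1/2)\theta}\big)^{-b/2}=q^{-b\theta/4},
$$
and, since $i\le r-2$,
$$
T^{a/2}q^{-\tau/2}=q^{(a(i+1/2)\theta-\tau)/2}\le q^{(a(r-3/2)\theta-\tau)/2},
$$
which is also a negative power of $q$ once $\theta$ is chosen sufficiently small in terms of $a$, $\tau$, and $r$.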
This completes the proof of Theorem \ref{th:cor_high}.
\section{Application: existence of configurations}\label{sec:conf}
Analysis of higher-order correlation can be used to study existence of
combinatorial configurations. Perhaps, the most striking example
of this is the Szemer\'edi theorem \cite{sz} which states that any
subset of the integers of positive upper density contains arbitrarily long
arithmetic progressions. Furstenberg \cite{furst} discovered that this problem
can be modelled using dynamical systems. His approach is based on the ``Furstenberg Correspondence Principle''
which associates to a subset of positive density in $\mathbb{Z}$ a shift-invariant measure
on the space $\{0,1\}^{\bb Z}$. The crux of Furstenberg's proof \cite{furst} of
the Szemer\'edi theorem is the following result which implies
nonvanishing of higher-order correlations.
\begin{theo}\label{th:furst}
Let $T:X\to X$ be a measure-preserving transformation of a probability space $(X,\mu)$.
Then for every nonnegative $\phi\in L^\infty(X)$ which is not zero almost everywhere,
$$
\liminf_{N\to\infty}\frac{1}{N}\sum_{i=0}^{N-1} \int_X \phi(x)\phi(T^{i}x)\cdots \phi (T^{(r-1)i}x)\, d\mu(x)>0.
$$
\end{theo}
This result was generalised by Furstenberg and Katznelson \cite{fk} to systems
of commuting transformations, which allowed them to prove the following generalisation
of the Szemer\'edi theorem.
\begin{theo}\label{th:sz}
Let $\Omega$ be a subset of $\bb Z^d$ of positive upper density. Then
for any finite subset $F$ of $\bb Z^d$, there exist $a\in \mathbb{Z}^d$ and $t\in \bb N$
such that
$$
a+tF\subset \Omega.
$$
\end{theo}
We recall that a set $\Omega$ has {\it positive upper density} if there exists
a sequence of boxes $B_n$ with lengths of all sides going to infinity such that
$$
\limsup_{n\to \infty} \frac{|\Omega\cap B_n|}{|B_n|}>0.
$$
Existence of configurations in subsets of the Euclidean space $\bb R^d$ has also been
extensively studied. The following result was proved by Furstenberg, Katznelson, and Weiss \cite{fkw} for $d=2$ using ergodic-theoretic
techniques and by Bourgain \cite{bur} in general using Fourier analysis.
\begin{theo}\label{th:conf1}
Let $\Omega$ be a subset of positive density in $\bb R^d$, and let $F=\{0,x_1,\ldots,x_{d-1}\}$ be a set of points in $\bb R^d$ in general position. Then there exists $t_0$ such that for
every $t\ge t_0$, the set $\Omega$ contains an isometric copy of $tF$.
\end{theo}
It was shown by Bourgain \cite{bur} and Graham \cite{gr} that an analogue of
this theorem fails for general configurations.
Nonetheless, one may ask whether the set $\Omega$ contains approximate
configurations. This was settled by
Furstenberg, Katznelson, and Weiss \cite{fkw}
for configurations of three points and by Ziegler \cite{zie} in general:
\begin{theo}\label{th:conf2}
Let $\Omega$ be a subset of positive density in $\bb R^d$ and
$x_1,\ldots,x_{r-1}\in\bb R^d$. Then for every $\epsilon>0$, there exists $t_0$ such that
for every $t\ge t_0$, one can find $(y_0,y_1,\ldots,y_{r-1})\in\Omega^{r}$ and an isometry $I$ of $\bb R^d$ such that
$$
d(0,I(y_0))<\epsilon\quad\hbox{and}\quad d(tx_i,I(y_i))<\epsilon\quad\hbox{for $i=1,\ldots,r-1$.}
$$
\end{theo}
The proof of Theorem \ref{th:conf2} requires more detailed analysis of the averages
of correlations
\begin{equation}
\label{eq:limit}
\frac{1}{N}\sum_{i=0}^{N-1} \int_X \phi_0(x)\phi_1(T^{i}x)\cdots \phi_{r-1} (T^{(r-1)i}x)\, d\mu(x)
\end{equation}
for $\phi_0,\ldots,\phi_{r-1}\in L^\infty(X)$.
While the case $r=3$ can be reduced to investigating translations on
compact abelian groups, the general case presented a significant challenge
that was resolved in the ground-breaking works of Host and Kra \cite{hk} and Ziegler \cite{zie_factor}.
These works developed a comprehensive method that made it possible to understand
limits in $L^2(X)$ of the averages \eqref{eq:limit}. It turns out that this reduces
to analysing
this limit for the so-called characteristic factors, which are shown to be
inverse limits of dynamical systems given by translations on nilmanifolds.
Thus, remarkably, to investigate the general limits of the averages \eqref{eq:limit},
it suffices to deal with these limits for nilsystems.
We also mention that
Leibman \cite{lei} and Ziegler \cite{zie_0} established existence of the limit
of \eqref{eq:limit} for translations on nilmanifolds.
\medskip
More generally, let us consider a locally compact group $G$ equipped with a left-invariant metric. Given a ``large'' subset of $G$, we would like to show that
it approximately contains an isometric copy of a given configuration $(g_1,\ldots,g_r)\in G^r$.
It is not clear what a natural notion of largeness in $G$ is, especially when the group $G$
is not amenable. In any case, one definitely views a subgroup $\Gamma$ of $G$ with finite covolume as being ``large''. We will be interested in investigating
how rich the set of configurations $(\gamma_1,\ldots, \gamma_r)\in \Gamma^r$ is. In particular, one may wonder whether general configurations
$(g_1,\ldots,g_r)\in G^r$ can be approximated by isometric copies of
the configurations $(\gamma_1,\ldots, \gamma_r)\in \Gamma^r$ (see Figure \ref{f:conf}),
namely, whether for every $\epsilon>0$, there exists an isometry $I:G\to G$
such that
\begin{align}\label{eq:conf}
d(g_i,I(\gamma_i))<\epsilon\quad\hbox{for $i=1,\ldots,r$}.
\end{align}
\begin{figure}[h]
\includegraphics[width=0.7\linewidth]{conf.png}
\caption{Existence of approximate configurations.}\label{f:conf}
\end{figure}
It was observed by Bj\"orklund, Einsiedler, and Gorodnik \cite{beg}
that the estimates on higher-order correlations (Theorem \ref{th:cor_high}) can be used to solve this problem in an optimal way when
$G$ is a connected simple Lie group with finite center, and $\Gamma$ is a discrete subgroup of $G$ with finite covolume.
It is clear that since $\Gamma$ is discrete, the approximation
\eqref{eq:conf} cannot hold when the points $g_i$ are too ``clustered''
together. To address this issue, we introduce the notion of {\it width}:
for $(g_1,\ldots,g_r)\in G^r$, we set
$$
{\sf w}(g_1,\ldots,g_r)=\min_{i\ne j} d(g_i,g_j).
$$
We shall show that \eqref{eq:conf} can be established provided that
the points are sufficiently spread out in terms of $\epsilon$.
\begin{Theorem}\label{th:conf}
For every $r\ge 2$, there exist $c_r,\epsilon_r>0$ such that for all tuples
$(g_1,\ldots,g_r)\in G^r$ satisfying
\begin{equation*}
{\sf w}(g_1,\ldots,g_r)\ge c_r\log(1/\epsilon)\quad \hbox{with $\epsilon\in (0,\epsilon_r)$},
\end{equation*}
there exist a tuple $(\gamma_1,\ldots,\gamma_r)\in \Gamma^r$
and $g\in G$ such that
$$
d(g_i,g\cdot \gamma_i)<\epsilon\quad\hbox{for $i=1,\ldots,r$.}
$$
\end{Theorem}
Let us illustrate Theorem \ref{th:conf} by an example of the orbit $\Gamma \cdot i$ in the hyperbolic plane $\mathbb{H}^2$ for $\Gamma=\hbox{PSL}_2(\mathbb{Z})$.
For $g\in \hbox{PSL}_2(\mathbb{R})$,
$$
d(g i,i)=\cosh^{-1} (\|g\|^2/2)
$$
where $\|\cdot\|$ is the Euclidean norm. In this case, Theorem \ref{th:conf} with $r=2$
reduces to showing that any distance $D>0$ can be approximated by distances from the set
$$
\Delta=\{\cosh^{-1} ((a^2+b^2+c^2+d^2)/2):\,\, a,b,c,d\in\bb Z,\, ad-b c=1 \}.
$$
Namely, when $D\ge c_2 \log(1/\epsilon),$
there exists $\delta\in \Delta$ such that
$|D-\delta|<\epsilon.$
On the other hand, one can show that the set $\Delta$ is not $\epsilon$-dense in an
interval $[a_\epsilon,\infty)$ with $a_\epsilon=o(\log(1/\epsilon))$ as $\epsilon\to 0^+$.
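As a quick numerical illustration (not part of the argument; the entry bound \texttt{max\_entry} is an arbitrary truncation introduced only for this sketch), one can enumerate integer matrices of determinant one with small entries and list the resulting distances in $\Delta$:

```python
import itertools
import math

def distance_set(max_entry):
    """Distances cosh^{-1}((a^2+b^2+c^2+d^2)/2) arising from integer
    matrices (a b; c d) with ad - bc = 1 and |entries| <= max_entry."""
    dists = set()
    rng = range(-max_entry, max_entry + 1)
    for a, b, c, d in itertools.product(rng, repeat=4):
        if a * d - b * c == 1:
            # squared Euclidean norm of the matrix, halved, fed to acosh
            dists.add(math.acosh((a * a + b * b + c * c + d * d) / 2))
    return sorted(dists)

# the identity matrix contributes distance 0; increasing max_entry
# produces more (but increasingly sparse) distances
print(distance_set(2))
```

Increasing `max_entry` makes the low end of $\Delta$ denser, in line with the approximation statement above.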
\begin{proof}[Proof of Theorem \ref{th:conf}]
We consider the action of $G$ on the space $X=G/\Gamma$ equipped
with the normalised invariant measure $\mu$ and
apply Theorem \ref{th:cor_high} to a suitably chosen family of test
functions supported on $X$. We take nonnegative $\tilde \phi_\epsilon\in C_c^\infty(G)$
such that
$$
\hbox{supp}(\tilde \phi_\epsilon)\subset B_\epsilon(e),\quad
\mu(\tilde \phi_\epsilon)=1,\quad
S_\ell (\tilde \phi_\epsilon) \ll \epsilon^{-\alpha},
$$
for some fixed $\alpha>0$ depending only on $\ell$ and $G$.
Such a family of functions can be constructed using
a local coordinate system in a neighbourhood of the identity in $G$. We set
$$
\phi_\epsilon (g\Gamma)=\sum_{\gamma\in \Gamma} \tilde \phi_\epsilon(g\gamma),\quad g\in G,
$$
which defines a function in $C_c^\infty(G/\Gamma)$.
Then Theorem \ref{th:cor_high} gives that
\begin{align*}
\mu((g_1\cdot \phi_\epsilon)\cdots (g_r\cdot \phi_\epsilon))=1
+ O_r \left( e^{-\delta\, {\sf w}(g_1,\ldots,g_r)} \epsilon^{-\alpha r}\right)
=1
+ O_r \left( \epsilon^{c_r\delta -\alpha r }\right).
\end{align*}
If we take $c_r>\alpha r/\delta$, then it follows from this estimate that
for all sufficiently small $\epsilon$,
$$
\mu((g_1\cdot \phi_\epsilon)\cdots (g_r\cdot \phi_\epsilon))>0.
$$
Since
$$
\mu((g_1\cdot \phi_\epsilon)\cdots (g_r\cdot \phi_\epsilon))=\int_{G/\Gamma} \left(\sum_{\gamma_1,\ldots,\gamma_r\in \Gamma} \tilde\phi_\epsilon (g_1^{-1}g\gamma_1)\cdots \tilde\phi_\epsilon (g_r^{-1}g\gamma_r)\right)\, d\mu(g\Gamma),
$$
it follows that there exist
$(\gamma_1,\ldots,\gamma_r)\in \Gamma^r$ and $g\in G$ such that
$$
g_i^{-1}g\gamma_i\in \hbox{supp}(\tilde \phi_\epsilon)\subset B_\epsilon(e)\quad\hbox{for $i=1,\ldots,r$},
$$
so that
$$
d(g_i, g\gamma_i)=d(g_i^{-1}g\gamma_i,e)<\epsilon\quad\hbox{for $i=1,\ldots,r$},
$$
as required.
\end{proof}
\section{Application: Central Limit Theorem}\label{sec:clt}
Suppose that the time evolution of a physical system is given by a one-parameter flow $T_t:X\to X$ on the phase space $X$. Observables of this system are represented by functions $\phi$
on $X$ so that studying the transformation of this system
as time progresses involves the analysis of the values
$\phi(T_tx)$ with $t\ge 0$ and $x\in X$. Often these values fluctuate quite erratically
which makes it difficult to understand them in deterministic terms.
Instead, one might attempt to study their statistical properties.
Formally, we consider $\{\phi\circ T_t:\, t \ge 0\}$ as a family of random
variables on $X$. For chaotic flows, this family typically exhibits quasi-independence
properties, and it is natural to expect that it satisfies probabilistic limit laws
known for independent random variables.
For instance, we mention one of the first results in this direction
which was proved by Sinai \cite{S}:
\begin{Theorem}\label{th:sinai}
Let $g_t:T^1(M)\to T^1(M)$ be the geodesic flow
on a compact manifold $M$ with constant negative curvature. Then for any $\phi\in C^{1+\alpha}(T^1(M))$
with zero integral, the family of functions
$$
F_t(x)=t^{-1/2}\int_0^t \phi(g_sx)\,ds
$$
converges in distribution to the Normal Law as $t\to\infty$; that is, for all $\xi\in \mathbb{R}$,
$$
\frac{\hbox{\rm vol}\big(\{x\in T^1(M): F_t(x)<\xi\}\big)}{\hbox{\rm vol}(T^1(M))}\longrightarrow \hbox{\rm Norm}_{\sigma(\phi)}(\xi)\quad\hbox{as $t\to\infty$, }
$$
where
\begin{align*}
\hbox{\rm Norm}_{\sigma}(\xi)&=
(\sqrt{2\pi}\sigma)^{-1}\int_{-\infty}^\xi e^{-s^2/(2\sigma^2)}\, ds
\end{align*}
denotes the Normal Distribution with standard deviation $\sigma$.
\end{Theorem}
Validity of the Central Limit Theorem for one-parameter dynamical systems
has been extensively studied in the last decades, and
we refer to surveys \cite{D,Den,gou,lB2,v} for an introduction
to this vast area of research. However, very little was known
about the distribution of averages for more general group actions.
In this section, we present a method developed by Bj\"orklund and Gorodnik \cite{bg}
for proving the Central Limit Theorem, which is based on the quantitative estimates
for higher-order correlations established in the previous sections.
\medskip
Let us consider an action of a group $H$ on a standard probability space $(X,\mu)$.
Given a function $\phi$ on $X$, we consider the family of its translations
$$
(h\cdot\phi)(x)=\phi(h^{-1}x)\quad\hbox{with $h\in H$.}
$$
One may think about $\{h\cdot\phi:\,h\in H\}$ as a collection of
identically distributed random variables on the probability space $(X,\mu)$.
When the action exhibits chaotic behaviour, it is natural to expect
that these random variables are quasi-independent in a suitable sense,
which leads to the question of whether these random variables satisfy
analogues of the standard probabilistic laws such as, for instance,
the Central Limit Theorem, the Law of Iterated Logarithms, etc.
Here we prove a general Central Limit Theorem for group actions.
From the perspective of these notes, the chaotic
nature of group actions is reflected in the asymptotic behaviour
of the higher-order correlations. We demonstrate that
quantitative estimates on correlations imply the Central Limit Theorem.
Although we do not pursue this direction here, we mention that this approach
has found interesting applications in Number Theory
to study the distribution of arithmetic counting functions
(see \cite{bg1,bg2}).
Let $H$ be a (noncompact) locally compact group equipped with a left-invariant metric $d$.
We consider a measure-preserving action of $H$ on a standard probability space $(X,\mu)$.
We assume that this action is mixing of all orders in the following
quantitative sense. There exists a subalgebra $\mathcal{A}$ of $L^\infty(X)$
equipped with a family of norms
$$
S_1\le S_2\le \cdots \le S_\ell\le \cdots
$$
satisfying the following properties:
\begin{enumerate}
\item[(${\rm N}_1$)] there exists $\ell_1$ such that
$$
\|\phi\|_{L^\infty}\ll S_{\ell_1}(\phi),
$$
\item[(${\rm N}_3$)] for all $\ell$, there exists $\sigma_\ell>0$ such that
$$
S_\ell(g\cdot \phi)\ll_\ell e^{\sigma_\ell\, d(g,e)} \, S_\ell(\phi)\quad\hbox{for all $g\in G$,}
$$
\item[(${\rm N}_4$)] for every $\ell$, there exists $\ell'$ such that
$$
S_\ell(\phi_1\phi_2)\ll_\ell S_{\ell'}(\phi_1) S_{\ell'}(\phi_2).
$$
\end{enumerate}
We suppose that for every $r\ge 2$
there exist $\delta_r,\ell_r>0$
such that for all elements $h_1,\ldots,h_r\in H$ and all functions $\phi_1,\ldots,\phi_r\in \mathcal{A}$,
\begin{align}\label{eq:mix_end}
\mu((h_1\cdot \phi_1)\cdots (h_r\cdot \phi_r))= \,&\mu(\phi_1)\cdots \mu(\phi_r)\\
&+
O_r\left( S_{\ell_r}(\phi_1)\cdots S_{\ell_r}(\phi_r)\, e^{-\delta_r D(h_1,\ldots,h_r)}\right), \nonumber
\end{align}
where
$$
D(h_1,\ldots,h_r)=\min_{i\ne j} d(h_i,h_j).
$$
We shall additionally assume that the group $H$ has {\it subexponential growth},
which means that the balls
$$
B_t=\{h\in H:\, d(h,e)<t\}
$$
satisfy
\begin{equation}
\label{eq:subexp}
\frac{\log\hbox{vol}(B_t)}{t}\to 0\quad\hbox{as $t\to\infty$.}
\end{equation}
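For instance (a standard example, added here for illustration), for $H=\bb R^k$ or $\bb Z^k$ with the Euclidean metric, $\hbox{vol}(B_t)\asymp t^k$, so that
$$
\frac{\log\hbox{vol}(B_t)}{t}\asymp \frac{k\log t}{t}\to 0\quad\hbox{as $t\to\infty$,}
$$
while groups of exponential growth, such as nonabelian free groups or noncompact semisimple Lie groups, fail \eqref{eq:subexp}.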
Our main result is the following:
\begin{Theorem}\label{th:clt}
For every $\phi\in \mathcal{A}$ with integral zero,
the family of functions
\begin{equation}
\label{eq:f_t}
F_t(x)=\hbox{\rm vol}(B_t)^{-1/2} \int_{B_t} \phi(h^{-1}x)\,dh
\end{equation}
converges in distribution as $t\to \infty$ to the Normal Law with mean zero and the variance
$$
\sigma(\phi)^2=\int_H \left<h\cdot \phi,\phi\right>dh.
$$
Explicitly, this means that for every $\xi\in \mathbb{R}$,
$$
\mu\big(\left\{x\in X:\, F_t(x)<\xi \right\}\big)\longrightarrow \hbox{\rm Norm}_{\sigma(\phi)}(\xi)\quad\hbox{as $t\to \infty$.}
$$
\end{Theorem}
We remark that the condition that $H$ has subexponential growth is important in Theorem \ref{th:clt}.
Indeed, Gorodnik and Ramirez \cite{gor_r} constructed examples of actions of rank-one simple Lie groups
on homogeneous spaces which are exponentially mixing of all orders, but do not satisfy
the Central Limit Theorem.
In particular, Theorem \ref{th:clt} immediately implies the following
results about higher-rank abelian actions on homogeneous spaces.
\begin{Cor}\label{cor:cartan}
Let $G$ be a (noncompact) connected simple matrix Lie group with finite centre
and $H$ a (noncompact) closed subgroup of a Cartan subgroup of $G$.
Then a measure-preserving action of $H$ on a finite-volume homogeneous space $X$ of $G$
satisfies the Central Limit Theorem. Namely, for every $\phi\in C_c^\infty(X)$
with zero integral, the family of functions
$$
F_t=\hbox{\rm vol}(B_t)^{-1/2} \int_{B_t} (h\cdot \phi)\,dh
$$
converges in distribution to the Normal Law as $t\to\infty$.
\end{Cor}
We note that when $M$ is a compact surface with constant negative curvature,
its unit tangent bundle can be realised as
$$
T^1(M)\simeq \hbox{PSL}_2(\bb R)/\Gamma,
$$
where $\Gamma$ is a discrete cocompact subgroup of $\hbox{PSL}_2(\bb R)$, and
the geodesic flow is given by
$$
g_t:x\mapsto
\left(
\begin{array}{cc}
e^{t/2} & 0 \\
0 & e^{-t/2}
\end{array}
\right)
x\quad \hbox{for $x\in \hbox{PSL}_2(\bb R)/\Gamma.$}
$$
Hence, our method also provides a new proof of Theorem \ref{th:sinai}.
\medskip
It is well-known from probability theory that in order to establish that
a family of bounded random variables $X_t$ converges in distribution
to a normal random variable $N$, it is sufficient to establish convergence of
all moments, that is, that for all $r\ge 1$
$$
\mathbb{E}(X_t^r)\to \mathbb{E}(N^r) \quad\hbox{as $t\to\infty$.}
$$
We essentially follow this route, but it will be more convenient
to work with cumulants instead of moments.
Given random variables $X_1,\ldots, X_r$,
the joint {\it cumulant} is defined as
$$
\operatorname{cum}(X_1,\ldots,X_r)=(-i)^r\frac{\partial^r}{\partial z_1\cdots \partial z_r}
\log \mathbb{E} \left[ e^{i\sum_{k=1}^r z_k X_k} \right] \Big|_{z_1=\cdots=z_r=0}.
$$
It is useful to keep in mind that the joint cumulants can be expressed
in terms of joint moments and conversely (see, for instance, \cite{ls}):
\begin{align*}
\operatorname{cum}(X_1,\ldots,X_r)&=
\sum_{P\in\mathcal{P}_r} (-1)^{|P|-1} (|P|-1)!\,{\prod}_{I\in P} \mathbb{E}\left({\prod}_{i\in I }X_i\right),\\
\mathbb{E}(X_1\cdots X_r)&=\sum_{P\in \mathcal{P}_r } {\prod}_{I\in P} \operatorname{cum}(X_i:\, i\in I),
\end{align*}
where the sums are taken over the set $\mathcal{P}_r$ consisting of all partitions of $\{1,\ldots,r\}$.
Hence, studying cumulants is essentially equivalent to studying moments.
However, it turns out that cumulants have several very convenient
additional vanishing properties that will be crucial for our argument:
\begin{itemize}
\item If there exists a nontrivial partition $\{1,\ldots,r\}=I\sqcup J$
such that $\{X_i:i\in I\}$ and $\{X_i:i\in J\}$ are independent, then
\begin{equation}
\label{eq:indep}
\operatorname{cum}(X_1,\ldots,X_r)=0.
\end{equation}
\item If $N$ is a normal random variable, then the cumulants of order at least three satisfy
$$
\operatorname{cum}(N,\ldots,N)=0.
$$
\end{itemize}
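For instance, for $r=2$ the cumulant is the covariance,
$$
\operatorname{cum}(X_1,X_2)=\mathbb{E}(X_1X_2)-\mathbb{E}(X_1)\mathbb{E}(X_2),
$$
which visibly vanishes for independent $X_1,X_2$, and the second property can be read off from the characteristic function of a centered normal $N$ with variance $\sigma^2$:
$$
\log \mathbb{E}\left[e^{izN}\right]=-\frac{\sigma^2z^2}{2},
$$
which is quadratic in $z$, so that all its derivatives of order at least three vanish.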
\medskip
Now we adapt this probabilistic notation to our setting.
For functions $\phi_1,\ldots,\phi_r\in L^\infty(X)$ and a subset $I\subset \{1,\ldots, r\}$,
we set
$$
\phi_I={\prod}_{i\in I} \phi_i.
$$
We use the convention that $\phi_\emptyset=1$.
Then we define the joint {\it cumulant} of $\phi_1,\ldots,\phi_r$ as
$$
\operatorname{cum}_r(\phi_1,\ldots,\phi_r)=\sum_{P\in\mathcal{P}_r} (-1)^{|P|-1} (|P|-1)!\,{\prod}_{I\in P} \mu(\phi_I).
$$
For a function $\phi\in L^\infty(X)$, we also set
$$
\operatorname{cum}_r(\phi)=\operatorname{cum}_r(\phi,\ldots,\phi).
$$
The following proposition,
which is essentially equivalent to the more widely known Method of Moments,
provides a convenient criterion for proving the Central Limit Theorem.
\begin{Prop}\label{p:cum}
Let $F_t\in L^\infty(X)$ be a family of functions such that as $t\to\infty$,
\begin{align}
\mu(F_t)&\to 0, \label{eq:c1}\\
\|F_t\|_{L^2}&\to \sigma, \label{eq:c2} \\
\operatorname{cum}_r(F_t)&\to 0\quad\hbox{for all $r\ge 3$.} \label{eq:c3}
\end{align}
Then
for every $\xi\in \mathbb{R}$,
$$
\mu\big(\left\{x\in X:\, F_t(x)<\xi \right\}\big)\longrightarrow \hbox{\rm Norm}_{\sigma}(\xi)
\quad\hbox{as $t\to \infty$.}
$$
\end{Prop}
Estimates on cumulants were also used by Cohen and Conze \cite{CC1,CC2,CC3} to prove the Central Limit Theorem for $\bb Z^k$-actions by automorphisms of compact abelian groups.
\medskip
We begin the proof of Theorem \ref{th:clt}.
In view of Proposition \ref{p:cum}, it remains to verify that the family of functions $F_t$ defined in \eqref{eq:f_t}
satisfies \eqref{eq:c1}, \eqref{eq:c2}, and \eqref{eq:c3}. The first condition
is immediate, and the second is verified as follows.
We observe that
\begin{align*}
\|F_t\|_{L^2}^2 &= \hbox{vol}(B_t)^{-1}\int_{B_t\times B_t} \left<h_1\cdot \phi,h_2\cdot \phi\right>\, dh_1dh_2\\
&=\hbox{vol}(B_t)^{-1}\int_{H\times H} \chi_{B_t}(h_1)\chi_{B_t}(h_2)\left<(h_1^{-1}h_2)\cdot \phi, \phi\right>\, dh_1dh_2\\
&=\int_{H} \frac{\hbox{vol}(B_t\cap B_t h^{-1})}{\operatorname{vol}(B_t)}
\left<h\cdot \phi, \phi\right>\, dh.
\end{align*}
It is not hard to check using the subexponential growth property \eqref{eq:subexp} that
the balls $B_t$ satisfy the F\o lner property, that is, for all $h\in H$,
$$
\frac{\hbox{vol}(B_t\cap B_t h^{-1})}{\operatorname{vol}(B_t)}\to 1\quad\hbox{as $t\to\infty$}.
$$
Moreover, it follows from \eqref{eq:mix_end} with $r=2$ that
the function $h\mapsto \left<h\cdot \phi, \phi\right>$ is in $L^1(H)$.
Thus, using the Dominated Convergence Theorem, we deduce that
$$
\|F_t\|_{L^2}^2\to \int_H \left<h\cdot \phi, \phi\right>\, dh\quad\hbox{as $t\to\infty$.}
$$
This implies \eqref{eq:c2}.
\medskip
Verification of \eqref{eq:c3} is the most challenging part of the proof
because it requires showing asymptotic vanishing of the cumulants
$$
\operatorname{cum}_r(F_t)=\hbox{vol}(B_t)^{-r/2}\int_{B_t^r} \operatorname{cum}_r(h_1\cdot \phi,\ldots,h_r\cdot \phi)\, d{h},
$$
which amounts to more than square-root cancellation in this integral.
The first crucial input for estimating $\operatorname{cum}_r(F_t)$ comes from the bound
on correlations \eqref{eq:mix_end}. However, these bounds are only useful
for certain ranges of tuples $h=(h_1,\ldots,h_r)$.
To utilise the bound \eqref{eq:mix_end} most efficiently,
we introduce a decomposition of the product $H^r$ into a union
of domains where the components $h_i$ are either separated or clustered
on suitable scales. For subsets $I,J\subset \{1,\ldots,r\}$ and $h=(h_1,\ldots,h_r)\in H^r$,
we set
\begin{align*}
d^I(h)&=\max\{d(h_i,h_j):\, i,j\in I\},\\
d_{I,J}(h)&=\min\{d(h_i,h_j):\, i\in I, j\in J\},
\end{align*}
and for a partition $Q\in\mathcal{P}_r$, we set
\begin{align*}
d^Q(h)&=\max\{ d^I(h):\, I\in Q\},\\
d_Q(h)&=\min\{d_{I,J}(h):\, I\ne J\in Q\}.
\end{align*}
Using this notation, we define for $0\le \alpha\le\beta$,
\begin{align*}
\Delta_Q(\alpha,\beta)&=\{h\in H^r:\, d^Q(h)\le \alpha,\ d_Q(h)>\beta \},\\
\Delta(\beta)&=\{h\in H^r:\, d(h_i,h_j)\le \beta \quad\hbox{for all $i,j$}\}.
\end{align*}
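For instance, when $r=3$ and $Q=\{\{1,2\},\{3\}\}$,
$$
\Delta_Q(\alpha,\beta)=\left\{h\in H^3:\ d(h_1,h_2)\le \alpha,\ d(h_1,h_3)>\beta,\ d(h_2,h_3)>\beta\right\},
$$
so that $h_1,h_2$ form a cluster of diameter at most $\alpha$ which is $\beta$-separated from $h_3$.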
For $h=(h_1,\ldots,h_r)\in \Delta_Q(\alpha,\beta)$, we think about components
$h_i$ with $i$ in the same atom of $Q$ as ``clustered'' and about
$h_i$ with $i$ in different atoms of $Q$ as ``separated'' (see Figure \ref{f:tuples}).
\begin{figure}[h]
\includegraphics[width=0.7\linewidth]{sep.png}
\caption{Tuples in the sets $\Delta_Q(\alpha,\beta)$.}\label{f:tuples}
\end{figure}
These features allow us to estimate $\operatorname{cum}_r(h_1\cdot \phi,\ldots,h_r\cdot \phi)$
on the sets $\Delta_Q(\alpha,\beta)$:
\begin{Prop}\label{p:bound}
There exist $\delta_r,\sigma_r>0$ such that for every
$0\le\alpha\le \beta$, $Q\in\mathcal{P}_r$ with $|Q|\ge 2$, and
$(h_1,\ldots,h_r)\in \Delta_Q(\alpha,\beta)$,
$$
\operatorname{cum}_r(h_1\cdot \phi,\ldots,h_r\cdot \phi)\ll_{r,\phi} e^{-\delta_r \beta-\sigma_r\alpha}.
$$
\end{Prop}
\begin{proof}
The proof will exploit a certain cancellation property of cumulants.
For $Q\in \mathcal{P}_r$ and $\phi_1,\ldots,\phi_r\in L^\infty(X)$,
we define the conditional cumulant as
$$
\operatorname{cum}_r(\phi_1,\ldots,\phi_r|Q)=\sum_{P\in\mathcal{P}_r} (-1)^{|P|-1} (|P|-1)!\prod_{I\in P}
\prod_{J\in Q} \mu(\phi_{I\cap J}).
$$
One can show that
when the partition $Q$ is nontrivial (i.e., $|Q|\ge 2$),
\begin{equation}
\label{eq:cum_cond}
\operatorname{cum}_r(\phi_1,\ldots,\phi_r|Q)=0.
\end{equation}
This fact is an analogue of \eqref{eq:indep}; it is not a probabilistic
property, but rather a combinatorial cancellation feature of the cumulant
sums, and we refer, for instance, to \cite{bg}
for a self-contained proof of \eqref{eq:cum_cond}.
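Since \eqref{eq:cum_cond} is a purely combinatorial identity, it can be checked by direct enumeration. The following sketch (our own illustrative code; the helper names \texttt{partitions} and \texttt{conditional\_cumulant} are not from the paper) assigns arbitrary rational values to the ``moments'' $\mu(\phi_S)$ and verifies that the conditional cumulant vanishes for every nontrivial partition $Q$:

```python
# Numerical check of the cancellation (eq:cum_cond): for any nontrivial
# partition Q and *arbitrary* values mu(S) assigned to subsets S (with
# mu(emptyset) = 1), the conditional cumulant vanishes identically.
import itertools, math, random
from fractions import Fraction

def partitions(elements):
    """Yield all set partitions of a list of elements."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for part in partitions(rest):
        # put `first` into an existing atom...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ...or into a new atom of its own
        yield [[first]] + part

def conditional_cumulant(r, Q, mu):
    """sum_P (-1)^{|P|-1} (|P|-1)! prod_{I in P} prod_{J in Q} mu(I cap J)."""
    total = Fraction(0)
    for P in partitions(list(range(r))):
        term = Fraction((-1) ** (len(P) - 1) * math.factorial(len(P) - 1))
        for I in P:
            for J in Q:
                term *= mu[frozenset(I) & frozenset(J)]
        total += term
    return total

r = 4
random.seed(0)
# arbitrary "moments": one rational value per subset, with mu(empty) = 1
mu = {frozenset(S): Fraction(random.randint(1, 9), random.randint(1, 9))
      for k in range(1, r + 1) for S in itertools.combinations(range(r), k)}
mu[frozenset()] = Fraction(1)

for Q in partitions(list(range(r))):
    if len(Q) >= 2:  # nontrivial partition
        assert conditional_cumulant(r, [frozenset(J) for J in Q], mu) == 0
```

The exact rational arithmetic makes the vanishing an identity rather than a numerical coincidence.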
In order to bound $\operatorname{cum}_r(h_1\cdot \phi,\ldots,h_r\cdot \phi)$,
we shall show that when $(h_1,\ldots,h_r)\in \Delta_Q(\alpha,\beta)$,
$$
\operatorname{cum}_r(h_1\cdot \phi,\ldots,h_r\cdot \phi)\approx \operatorname{cum}_r(h_1\cdot \phi,\ldots,h_r\cdot \phi|Q)
$$
which reduces to verifying that for $I\in P$,
$$
\mu\left({\prod}_{i\in I} h_i\cdot \phi \right)\approx
{\prod}_{J\in Q} \mu\left({\prod}_{i\in I\cap J} h_i\cdot \phi \right).
$$
This is where the full strength of the estimate \eqref{eq:mix_end} on higher-order correlations comes into play.
For each $J\in Q$, we pick $h_J$ to be one of the $h_j$ with $j\in I\cap J$. Then
$$
\mu\left({\prod}_{i\in I} h_i\cdot \phi \right)=
\mu\left({\prod}_{J\in Q} h_{J} \Phi_J \right),
$$
where $\Phi_J=\prod_{i\in I\cap J} (h_J^{-1}h_i)\cdot \phi$.
Since $(h_1,\ldots,h_r)\in \Delta_Q(\alpha,\beta)$, we have
\begin{align*}
&d(h_J^{-1}h_i,e)=d(h_i,h_J)\le \alpha & \hbox{for $i\in I\cap J$, $J\in Q$},\\
&d(h_{J_1},h_{J_2})>\beta & \hbox{for $J_1\ne J_2\in Q$.}
\end{align*}
Hence, it follows from \eqref{eq:mix_end} that
$$
\mu\left({\prod}_{J\in Q} h_{J} \Phi_J \right)
={\prod}_{J\in Q} \mu(\Phi_J)+O_r\left({\prod}_{J\in Q} S_{\ell_r}(\Phi_J) e^{-\delta_r\beta}\right),
$$
and by the properties (${\rm N}_3$) and (${\rm N}_4$) of the norms,
$$
S_{\ell}(\Phi_J)\ll_\ell {\prod}_{i\in I\cap J} S_{\ell'}((h_J^{-1}h_i)\cdot \phi)
\ll_{\ell',\phi} e^{r\sigma_\ell \alpha}.
$$
This implies that for some $\sigma_r>0$,
$$
\mu\left({\prod}_{i\in I} h_i\cdot \phi \right)=
{\prod}_{J\in Q} \mu\left({\prod}_{i\in I\cap J} h_i\cdot \phi \right)
+O_{r,\phi}\left( e^{-(\delta_r \beta-\sigma_r\alpha)} \right),
$$
which implies the proposition.
\end{proof}
We shall use the following decomposition of the space of tuples $H^r$.
\begin{Prop}\label{prop_decom}
Given parameters
$$
0=\beta_0<\beta_1< 3\beta_1\le\beta_2<\cdots <\beta_{r-1}<3\beta_{r-1}\le\beta_r,
$$
we have the decomposition
$$
H^r=\Delta(\beta_r)\cup \left( \bigcup_{j=0}^{r-1} \bigcup_{Q:\,|Q|\ge 2} \Delta_Q(3\beta_j,\beta_{j+1}) \right).
$$
\end{Prop}
The proof of Proposition \ref{prop_decom} uses the following lemma:
\begin{Lemma}\label{l:coarse}
Let $Q\in \mathcal{P}_r$ with $|Q|\ge 2$ and $0\le\alpha\le \beta$.
Suppose that for $h\in H^r$,
$$
d^Q(h)\le \alpha\quad\hbox{and}\quad d_Q(h)\le \beta.
$$
Then there exists a partition $Q_1$ which is strictly coarser than $Q$ such that
$$
d^{Q_1}(h)\le 3\beta.
$$
\end{Lemma}
\begin{proof}
We observe that the sets $\{h_i:\, i\in I\}$ with $I\in Q$ have diameters at most $\alpha$,
and the distance between at least two of these sets is at most $\beta$.
We define the new partition $Q_1$ by merging two atoms of $Q$ whose corresponding
sets lie within distance $\beta$ of each other. This gives a strictly coarser partition.
It follows from the triangle inequality that
the diameters of the sets $\{h_i:\, i\in J\}$ with $J\in Q_1$
are at most $2\alpha+\beta\le 3\beta$. This implies that $d^{Q_1}(h)\le 3\beta$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop_decom}]
Let us take an arbitrary $h\in H^r$ and set $Q_0=\{\{1\},\ldots,\{r\}\}$.
If $h\in \Delta_{Q_0}(0,\beta_1)$, then $h$ belongs to the union (the term with $j=0$),
so we may suppose that $h\not\in \Delta_{Q_0}(0,\beta_1)$. Since $d^{Q_0}(h)=0$,
this forces $d_{Q_0}(h)\le \beta_1$. Hence, it follows from Lemma \ref{l:coarse}
that there exists a partition $Q_1$ coarser than $Q_0$ such that
$d^{Q_1}(h)\le 3\beta_1$. If $d_{Q_1}(h)>\beta_2$, then $h\in \Delta_{Q_1}(3\beta_1,\beta_2)$
and $h$ belongs to the union. On the other hand, if $d_{Q_1}(h)\le \beta_2$,
we apply Lemma \ref{l:coarse} again to conclude that there exists a partition $Q_2$
coarser than $Q_1$ such that $d^{Q_2}(h)\le 3\beta_2$.
Continuing this argument, we deduce that after at most $r$ steps
either $h$ belongs to one of the sets $\Delta_{Q_j}(3\beta_j,\beta_{j+1})$ with $|Q_j|\ge 2$,
or we arrive at $Q_i=\{\{1,\ldots,r\}\}$ with $d^{Q_i}(h)\le 3\beta_i\le\beta_r$.
In the latter case, we deduce that $h\in\Delta(\beta_r)$. This proves
the required decomposition.
\end{proof}
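Proposition \ref{prop_decom} is elementary enough to test numerically. The sketch below (our own illustrative code, not from the paper) assumes the toy case $H=\mathbb{R}$ with $d(x,y)=|x-y|$ and the admissible choice $\beta_j=3^{j-1}$ for $j\ge 1$ (so that $3\beta_j\le\beta_{j+1}$), and checks that random tuples always land in one of the sets of the decomposition:

```python
# Sanity check of the decomposition of H^r in the toy case H = R with
# d(x, y) = |x - y| and beta_j = 3^{j-1}.  All helper names are illustrative.
import itertools, random

def partitions(elements):
    """Yield all set partitions of a list of elements."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def d_sup(h, Q):   # d^Q(h): largest diameter of an atom
    return max(max(abs(h[i] - h[j]) for i in I for j in I) for I in Q)

def d_sep(h, Q):   # d_Q(h): smallest distance between distinct atoms
    return min(abs(h[i] - h[j]) for I, J in itertools.combinations(Q, 2)
               for i in I for j in J)

r = 4
betas = [0.0] + [3.0 ** j for j in range(r)]        # beta_0..beta_r = 0,1,3,9,27
Qs = [Q for Q in partitions(list(range(r))) if len(Q) >= 2]

random.seed(1)
for _ in range(2000):
    h = [random.uniform(0.0, 100.0) for _ in range(r)]
    in_Delta = max(abs(x - y) for x in h for y in h) <= betas[r]
    in_some_DeltaQ = any(
        d_sup(h, Q) <= 3.0 * betas[j] and d_sep(h, Q) > betas[j + 1]
        for j in range(r) for Q in Qs)
    assert in_Delta or in_some_DeltaQ   # every tuple is covered
```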
Now we are ready to complete the proof of Theorem \ref{th:clt}.
As we have already remarked, it remains to prove \eqref{eq:c3}.
Using the decomposition established in Proposition \ref{prop_decom}, we deduce that
\begin{align*}
\operatorname{cum}_r(F_t)=&\,\operatorname{vol}(B_t)^{-r/2}\int_{B_t^r} \operatorname{cum}_r(h_1\cdot \phi,\ldots,h_r\cdot \phi)\, d{h}\\
\ll_r &\, \operatorname{vol}(B_t)^{-r/2}\Big(\operatorname{vol}(B_t^r\cap \Delta(\beta_r))\|\phi\|^r_{L^\infty} \\
&\quad\quad\quad\quad\quad\;+\max_{j,\, Q:|Q|\ge 2}
\int_{B_t^r\cap\Delta_Q(3\beta_j,\beta_{j+1})} |\operatorname{cum}_r(h_1\cdot \phi,\ldots,h_r\cdot \phi)|\, dh \Big).
\end{align*}
It follows from invariance of the volume on $H$ that
$$
\operatorname{vol}(B_t^r\cap \Delta(\beta_r))
\le \int_{B_t} \operatorname{vol}(B(h,\beta_r))^{r-1}\, dh=\operatorname{vol}(B_t)\operatorname{vol}(B_{\beta_r})^{r-1}.
$$
Hence, using Proposition \ref{p:bound}, we conclude that
$$
\operatorname{cum}_r(F_t)\ll_{r,\phi} \operatorname{vol}(B_t)^{1-r/2}\operatorname{vol}(B_{\beta_r})^{r-1}+\operatorname{vol}(B_t)^{r/2}\Big(\max_j e^{-(\delta_r\beta_{j+1}-3\sigma_r\beta_j)}\Big).
$$
For a parameter $\theta>0$, we choose $\beta_j$'s recursively as
$$
\beta_0=0,\, \beta_{j+1}=\max\{3\beta_j, \delta_r^{-1}(\theta +3\sigma_r \beta_j) \}.
$$
Then $\beta_r\le c_r\theta$ for some $c_r>0$. Since this choice guarantees that $\delta_r\beta_{j+1}-3\sigma_r\beta_j\ge\theta$ for every $j$, we obtain
$$
\operatorname{cum}_r(F_t)\ll_{r,\phi} \operatorname{vol}(B_t)^{1-r/2}\operatorname{vol}(B_{c_r\theta})^{r-1}+\operatorname{vol}(B_t)^{r/2} e^{-\theta}.
$$
We take $\theta=r\log\operatorname{vol}(B_t)$. Then it follows from
the subexponential growth condition \eqref{eq:subexp} that when $r\ge 3$,
$$
\operatorname{cum}_r(F_t)\to 0\quad \hbox{as $t\to\infty$.}
$$
This completes the proof of Theorem \ref{th:clt}.
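The recursive choice of $\beta_j$ above can also be checked directly. With illustrative values of $\delta_r$ and $\sigma_r$ (not from the paper), the sketch below verifies that the recursion enforces $\delta_r\beta_{j+1}-3\sigma_r\beta_j\ge\theta$ at every step while keeping $\beta_r$ linear in $\theta$, i.e., $\beta_r\le c_r\theta$:

```python
# Check of the recursion beta_0 = 0, beta_{j+1} = max{3 beta_j,
# (theta + 3 sigma beta_j)/delta}, with illustrative delta_r, sigma_r.
delta, sigma, r = 0.2, 0.5, 5

def betas(theta):
    b = [0.0]
    for j in range(r):
        b.append(max(3.0 * b[j], (theta + 3.0 * sigma * b[j]) / delta))
    return b

c_r = betas(1.0)[-1]          # the constant c_r, by homogeneity in theta
for theta in [1.0, 10.0, 100.0]:
    b = betas(theta)
    # the exponent delta*beta_{j+1} - 3*sigma*beta_j is at least theta
    assert all(delta * b[j + 1] - 3.0 * sigma * b[j] >= theta - 1e-9
               for j in range(r))
    # beta_r scales linearly in theta: beta_r = c_r * theta
    assert abs(b[-1] - c_r * theta) <= 1e-6 * c_r * theta
```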
% arXiv:1601.02849
\section{Introduction}
Stars with initial masses $\geq$20\,$M_{\odot}$ experience the
short-lived luminous blue variable (LBV) stage \cite{Conti1984},
which is among the most interesting evolutionary stages of massive
stars in its observational manifestations and perhaps the most
important in the evolutionary sense
\cite{HD1994,L1994,vG2001,Groh2013}. During this stage a massive
star exhibits irregular spectroscopic and photometric variability
on time-scales from years to decades or longer, which is reflected
in changes of the stellar type from late O/early B supergiants to
A/F-type ones (see e.g. \cite{Stahl2001,Groh2009}) and changes in
the brightness by several magnitudes. At the brightness maximum,
LBVs could be confused with supernovae (e.g.
\cite{Good1989,VDyk2002}), and it is believed that some LBVs could
be the direct progenitors of supernovae (e.g.
\cite{KV2006,Groh2013}). The LBV stars experience episodes of
enhanced, sometimes eruptive, mass loss, so that most of them (see
\cite{Clark2005,KGB2015a} and Table\,\ref{LBV}) are surrounded by
compact ($\sim 0.1-1$ pc in diameter) shells with a wide range of
morphologies (e.g. \cite{Nota1995,Weis2001,GK2010c}).
The LBV phenomenon is still ill-understood, which is mostly
because the LBV stars are very rare objects. The recent census of
Galactic confirmed and candidate LBVs (cLBVs) presented in
\cite{Vink2012} lists only 13 and 25 stars, respectively. The
discovery of additional LBVs would, therefore, be of great
importance for understanding their evolutionary status and their
connection to other massive transient stars, as well as for
unveiling the driving mechanism(s) of the LBV phenomenon.
Detection of LBV-like shells can be considered an indication
that their associated stars are massive and evolved, and can therefore
be used to select candidate massive stars for
follow-up spectroscopy. Because of the huge interstellar
extinction in the Galactic plane, the most effective channel for
the detection of circumstellar shells is through imaging with
modern infrared (IR) telescopes. Application of this approach
using the {\it Spitzer Space Telescope} and {\it Wide-field
Infrared Survey Explorer} ({\it WISE}) resulted in the discovery of
hundreds of such shells whose central stars could be LBVs or other
types of evolved massive stars
(\cite{GK2010c,Wach2010,Miz2010,GK2011}). Indeed, follow-up
optical and IR spectroscopy of these central stars led to the
discovery of dozens of new cLBV, blue supergiant and Wolf-Rayet
(WR) stars in the Milky Way
(\cite{G2009,GK2010a,GK2010b,GK2010c,Wach2010,Wach2011,GK2012,Str2012a,Str2012b,Burgm2013,Flag2014,
GM2014,GC2014,GK2014,KGB2015a,GK2015a,GK2015b,KGB2015b}). Because
of reddening many of the central stars are very dim in the optical,
which necessitates the use of 8--10-m class telescopes like
the Southern African Large Telescope (SALT). Here we report the
results of optical spectroscopy with the SALT of 54 central stars
of compact mid-IR nebulae discovered with {\it Spitzer} and {\it
WISE}.
\begin{figure*}
\begin{center}
\includegraphics[angle=0,width=14cm,height=16.0cm,clip=]{LBV_spectra.eps}
\caption{The observed SALT spectra of a dozen emission-line stars from
our sample of candidate evolved massive stars revealed with {\it Spitzer}
and {\it WISE}.}
\label{spec}
\end{center}
\end{figure*}
\section{Observations}
\label{obs}
The SALT observations were carried out in 2010--2015 with the
Robert Stobie Spectrograph (RSS; \cite{Burg2013}) in the long-slit
mode. In most cases, the spectra covered the range of 4200$-$7300
\AA. The primary reduction of the data was done with the SALT
science pipeline. After that, the bias- and gain-corrected and
mosaicked long-slit data were reduced in the way described in
\cite{Kniazev2008}. Examples of fully reduced 1D spectra of a
dozen emission-line stars are shown in Figure~\ref{spec}, while
Figure\,\ref{neb} presents the mid-IR images of circumstellar
nebulae associated with these stars. The list of all observed
targets is given in Table\,\ref{list}.
\begin{figure*}
\begin{center}
\includegraphics[angle=0,width=13.0cm,clip=]{LBV-1.eps}
\includegraphics[angle=0,width=13.0cm,clip=]{LBV-2.eps}
\caption{Mid-IR images of circumstellar nebulae around stars whose
spectra are shown in Figure\,1. All but one of these nebulae
were discovered with {\it Spitzer} at 24\,$\mu$m. The nebula around
J115443-631331 was discovered with {\it WISE} at 22\,$\mu$m. The nebula
shown in the right panel of the second row contains two stars near the
geometrical centre. One of them (marked by a white circle) is the cLBV
J154842-550742/MN30. The second one (marked by a black circle) is
the WC9 star J154842-550755. The coordinates are in units of RA(J2000) and
Dec.(J2000) on the horizontal and vertical scales,
respectively.}
\label{neb}
\end{center}
\end{figure*}
To search for possible spectroscopic and photometric variability
of the newly identified cLBVs, we obtained additional spectra with
the SALT and performed photometric monitoring of these stars with
the 76-cm telescope of the South African Astronomical Observatory.
\section{Results}
\label{res}
We carried out optical SALT spectroscopy of 54 candidate evolved
massive stars. The first results of our observing program were
presented in
\cite{GK2012,Todt2013,GC2014,GK2014,GK2015a,KGB2015a,GK2015b}.
Table\,\ref{list} summarizes the (mostly preliminary) spectral
classification of the observed targets. We detected about two
dozen emission-line stars, of which 15 were classified
as cLBVs. Subsequent spectroscopic and photometric monitoring of
these stars allowed us to confirm the LBV status of four of them
(see Tables\,\ref{list} and
\cite{GK2014,KGB2015a,GK2015b,KGB2015b}). Figure\,\ref{Wray} shows
the evolution of the spectrum of Wray\,16-137 (one of the four newly
identified Galactic bona fide LBVs) in 2011--2014. One can see
that the He\,{\sc i} emission lines have almost disappeared, while
numerous Fe\,{\sc ii} emissions have become prominent. These changes,
along with a significant brightness increase of the star (by about 1
mag over three years), indicate that Wray\,16-137 is currently
experiencing an S\,Dor-like outburst.
\begin{table*}
\begin{center}
\caption{The observed central stars of mid-IR nebulae and their
(mostly preliminary) spectral classification. The stars whose
spectra and nebulae are presented in Figures\,1 and 2 are
starred.} \label{list} \small
\begin{tabular}{lll||lll} \hline
\multicolumn{1}{c}{ Object } & \multicolumn{1}{c}{ Name } & \multicolumn{1}{c||}{
Type } & \multicolumn{1}{c}{ Object } & \multicolumn{1}{c}{ Name } &
\multicolumn{1}{c}{ Type } \\
\multicolumn{1}{c}{ (1) } & \multicolumn{1}{c}{ (2) } & \multicolumn{1}{c||}{ (3) } &
\multicolumn{1}{c}{ (1) } & \multicolumn{1}{c}{ (2) } &
\multicolumn{1}{c}{ (3) } \\
\hline
J045304-692352 & BAT99\,3a & WN3b+O6\,V$^a$ & J153856-563722 & MN25 & OB \\
J052848-705105 & & OB & J154527-535602 & MN26 & OB \\
J052412-683011 & LHA\,120\,N & OB & J154842-550742$^\ast$ & MN30 & cLBV$^i$ \\
J071810-265124 & CD$-$26\,4148 & OB & J154842-550755 & & WC9$^j$ \\
J091714-495502 & & M & J161132-512906 & MN40 & OB \\
J100122-550046 & & OB & J161517-505219$^\ast$ & & cLBV$^k$ \\
J103638-580012 & & OB & J163239-494213 & MN44 & LBV$^l$ \\
J104823-593226$^\ast$ & HD\,93795 & A & J164316-460042$^\ast$ & MN46 & cLBV$^m$ \\
J110340-592559 & HD\,96042 & OB & J164937-453559$^\ast$ & MN48 & LBV$^n$ \\
J111428-611820 & Wray\,15-780 & OB & J170723-395651 & MN50 & M$^o$ \\
J114418-624520$^\ast$ & MN1 & cLBV$^b$ & J171307-384734 & CD$-$38\,11646 & OB \\
J115443-631331$^\ast$ & & cLBV & J172031-330949 & WS2 & cLBV$^p$ \\
J120058-631259 & MN2 & OB & J173753-302311 & & OB \\
J124626-632427 & WRAY\,17-56 & PN$^c$ & J173918-312424 & MN59 & OB$^q$ \\
J131004-631130$^\ast$ & MN7 & cLBV & J174359-302838$^\ast$ & MN64 & cLBV$^r$ \\
J131028-621331 & & OB & J174627-302001 & & OB \\
J131043-631745$^\ast$ & MN8 & cLBV$^d$ & J180433-210326 & HD\,313642 & A \\
J131933-623844 & MN10 & OB$^e$ & J180612-211745 & MN74 & OB \\
J132647-615924 & & M & J180823-221939 & ALS\,4684 & OB \\
J133628-634538 & WS1 & LBV$^f$ & J182721-132209 & & OB \\
J133654-632552 & & OB & J183217-091614 & & OB$^s$ \\
J135015-614855 & Wray\,16-137 & LBV$^g$ & J183528-064415 & & OB \\
J140707-652934 & CPD$-$64\,2731 & O & J184159-051539$^\ast$ & MN84 & cLBV$^t$ \\
J143111-610202 & MN14 & OB & J184246-031317 & & [WN5]$^u$ \\
J151342-585318 & MN17 & OB & J185404+033544 & & M \\
J151641-582226 & MN18 & B1\,Ia$^h$ & J190421+060001 & MN100 & OB$^v$ \\
J151959-572415 & MN19 & OB & J190624+082201$^\ast$ & MN101 & cLBV$^w$ \\
\hline \multicolumn{6}{p{15cm}}{{\it Notes}: $^a$ \cite{GC2014}; $^b$
classified as an Oe/WN in \cite{Wach2010}; $^c$ originally
classified as a PN in \cite{Par2006}; $^d$ classified as an Oe/WN
in \cite{Wach2010}; $^e$ originally classified as an OB in
\cite{Wach2011}; $^f$ \cite{GK2012,KGB2015a}; $^g$ \cite{GK2014};
$^h$ \cite{GK2015a}; $^i$ classified as a Be/B[e]/LBV in
\cite{Wach2011}; $^j$ originally classified as a WC9 in
\cite{Wach2011}; $^k$ classified as a star in transition from AGB
to PN in \cite{Van1989}; $^l$ \cite{GK2015b}; $^m$ \cite{GK2010c};
$^{n}$ \cite{KGB2015b}; $^o$ classified as a M1\,I in
\cite{Wach2010}; $^p$ \cite{GK2012}; $^q$ originally classified as
an OB in \cite{Wach2010}; $^r$ classified as a Be in
\cite{Wach2010} and as an OB in \cite{Wach2011}; $^s$ classified
as a BA in \cite{Wach2011}; $^t$ classified as a Be/B[e]/LBV in
\cite{Wach2010} and as a cLBV in \cite{Str2012b}; $^u$ originally
classified as a WN6 in \cite{Wach2010} and then re-classified as a
[WN5] in \cite{Todt2013}; $^v$ classified as a FG in
\cite{Wach2011}; $^w$ classified as B[e]/LBV in \cite{Wach2011}
and as a cLBV in \cite{Str2012b}.}
\end{tabular}
\end{center}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[angle=270,width=14.0cm,clip=]{Wray16-137.eps}
\caption{Evolution of the (normalized) spectrum of the new Galactic
bona fide LBV Wray 16-137 in 2011--2014 (adopted from \cite{GK2014}).}
\label{Wray}
\end{center}
\end{figure*}
We also discovered a rare WN-type central star of a planetary
nebula (Abell\,48), which is the second known example of [WN]
stars \cite{Todt2013}. Thanks to the high angular resolution of
{\it Spitzer} images (6 arcsec at 24\,$\mu$m), we detected a new
circular shell in the Large Magellanic Cloud. Follow-up
spectroscopy of its central star with SALT (and several other
telescopes) resulted in the discovery of a new WR star in a close,
eccentric binary system with an O6\,V star \cite{GC2014}.
The majority of the remaining targets were tentatively classified
as OB, A and M stars.
Finally, we present in Table\,\ref{LBV} the current census of the
Galactic bona fide LBVs (eighteen stars in total). The objects
with detected circumstellar nebulae are starred. As follows from
the table, 72 per cent of the Galactic confirmed LBVs are
associated with nebulae. This provides further evidence that
the detection of compact mid-IR shells is a powerful tool for
identifying (candidate) LBVs. Searches for new mid-IR nebulae
continue, with further discoveries of LBVs and other related stars
anticipated.
\begin{table*}
\caption{Current census of the Galactic bona fide LBVs. The
objects with detected circumstellar nebulae are starred.}
\label{LBV}
\begin{tabular}{p{2.8cm}p{2.4cm}p{3.0cm}p{5.2cm}}
\hline HR\,Car$^\ast$ & $\eta$\,Car$^\ast$ & AG\,Car$^\ast$ & Wray\,15-751$^\ast$ \\
$[$GKM2012$]$\,WS1$^\ast$ & Wray\,16-137$^\ast$ & $[$GKF2010$]$\,MN44$^\ast$ &
Cl*\,Westerlund\,1\,W\,243 \\
$[$GKF2010$]$\,MN48$^\ast$ & HD\,160529 & GCIRS\,34W & $[$MMC2010$]$\,LBV\,G0.120$-$0.048$^\ast$ \\
qF\,362 & HD\,168607 & MWC\,930$^\ast$ & G24.73+0.69$^\ast$ \\
AFGL\,2298$^\ast$ & P\,Cyg$^\ast$ & & \\
\hline
\end{tabular}
\end{table*}
\acknowledgments{
The observations reported in this paper were obtained with the SALT
programs \mbox{2010-1-RSA\_OTH-001}, \mbox{2011-3-RSA\_OTH-002},
\mbox{2013-1-RSA\_OTH-014}, \mbox{2013-2-RSA\_OTH-003} and
\mbox{2015-1-SCI-017}. AYK acknowledges support from the National
Research Foundation (NRF) of South Africa. VVG acknowledges the
Russian Science Foundation grant 14-12-01096.
This work was partially supported by the Russian Foundation
for Basic Research grant 16-02-00148.
We are grateful to the referees for their useful suggestions on the manuscript.
}
% arXiv:1901.06427
\section{Introduction}
Suspensions of micron-sized rigid colloidal particles can be investigated by an ever-expanding set of experimental tools such as optical tweezers and externally applied gravitational, magnetic and electric fields. Passive and active colloidal suspensions studied in the lab or in nature are almost always confined to be near one or more physical boundaries such as microscope slides \cite{ConfinedSphere_Sedimented,BoomerangDiffusion}. The confinement geometry strongly influences the underlying hydrodynamics of micro-particle suspensions due to the long-ranged nature of hydrodynamic interactions in the steady Stokes regime \cite{AutophoreticSpheres_Adhikari}, and boundaries may also play a key role in propulsion mechanisms in active suspensions \cite{ActiveDimers_EHD}. In recent years, precise and detailed measurements have been performed on quasi--two-dimensional (quasi--2D) colloidal crystal monolayers in order to interrogate kinetic friction at the microscopic scale \cite{Bohlein2011,ColloidSheetFriction}, but numerical simulations of this kind of system with proper accounting for hydrodynamic interactions are lacking. In this paper we introduce a numerical method that can simulate Brownian suspensions of rigid colloidal particles of complex shape in fully confined rectangular domains, with arbitrary combinations of periodic, no-slip, or free-slip boundary conditions along different dimensions.
Designing scalable simulation techniques for Brownian suspensions of many passive or active non-spherical colloids in confined domains is still an outstanding challenge in the field. While a number of existing methods can efficiently handle triply-periodic domains using Ewald techniques \cite{BrownianDynamics_OrderNlogN,StokesianDynamics_Brownian,FluctuatingFCM_DC,SpectralSD}, many experiments are carried out in some form of tight confinement, such as the dynamics of particles pressed between two microscope slides \cite{ColloidsInTightSlit,BoomerangDiffusion} or flowing through a microchannel. Stokesian Dynamics (SD) has become an industry standard in chemical engineering circles for Brownian suspensions \cite{BrownianDynamics_OrderNlogN,StokesianDynamics_Brownian,SD_SpectralEwald,SpectralSD}, and the method has been adapted to non-spherical rigid particles \cite{RigidBody_SD,StokesianDynamics_Rigid,StokesianDynamics_Confined}, as well as particles in partial or full confinement \cite{StokesianDynamics_Confined,StokesianDynamics_Wall,StokesianDynamics_Slit}. However, traditional SD and recent closely-related boundary-integral \cite{AutophoreticSpheres_Adhikari,BoundaryIntegralWall_Adhikari} `implicit-fluid' methods remain limited to spherical particles in specific geometries, for which a (grand) mobility tensor can be constructed explicitly. Furthermore, existing methods only have computational complexity that scales linearly in the number of particles for periodic boundary conditions, for which Fast Fourier Transforms (FFTs) accelerate the many-body computations \cite{BrownianDynamics_OrderNlogN,StokesianDynamics_Brownian,FluctuatingFCM_DC,SpectralSD}.
Fluctuating Lattice Boltzmann simulations have also been used for suspensions \cite{LB_SoftMatter_Review}. These techniques, however, require introducing artificial fluid compressibility and fluid inertia, which imposes a severe restriction on the time step size in order to achieve a physically-realistic Schmidt number.
The need to explicitly construct and compute Green's functions for nontrivial boundary conditions can be avoided by using a grid-based solver for the fluctuating Stokes equations in order to compute the action of the Green's function \emph{and} generate Brownian forces.
The authors of \cite{SELM_FEM} presented a Finite Element Method (FEM) capable of simulating Brownian suspensions in very general, confined domains. This method, however, was limited to minimally resolved (point-like) spherical particles and did not correctly account for the stochastic drift term that appears in the overdamped Langevin equations when the mobility is configuration dependent. The authors of \cite{DECORATO_FEM} proposed a FEM scheme with body-fitted grids that is capable of simulating Brownian particles with arbitrary shape in confined domains. However, the method requires complex remeshing at every time step, and the temporal integrator used in this work requires solving expensive resistance problems. Because of this, the body-fitted FEM approach does not scale well in the number of particles and is in practice limited to one or a few individual particles.
The Rigid-Body Fluctuating Immersed Boundary Method (RB-FIB) method presented in this work is, to our knowledge, the first method that can simulate Brownian suspensions of rigid particles with arbitrary shape in fully confined domains with controllable accuracy and in computational time that scales linearly in the number of particles (at finite packing fractions). The method is built on contributions from a number of past works by some of us. In \cite{BrownianBlobs}, the authors presented a Fluctuating Immersed Boundary (FIB) method which could simulate fluctuating suspensions of minimally resolved spheres (or blobs) in general physical domains. In \cite{RigidMultiblobs}, the authors presented a rigid multiblob method to simulate \emph{deterministic} suspensions in general domains by constructing complex particle shapes out of agglomerates of minimally resolved spheres/blobs, termed \emph{rigid multiblobs}. Both \cite{BrownianBlobs} and \cite{RigidMultiblobs} employ the Immersed Boundary (IB) method to handle the fluid-particle coupling. IB methods provide a low-accuracy but inexpensive and flexible alternative to body-fitted grid-based methods since no remeshing is required as particles move around in the domain.
In \cite{BrownianMultiblobSuspensions}, some of us presented an efficient temporal integration scheme to simulate the dynamics of Brownian suspensions with many rigid particles of arbitrary shape confined above a single no-slip wall. This method relies on the simple geometry of the physical domain for which an explicit form for the hydrodynamic mobility operator is available. Here we present an amalgamation of these past approaches: we develop a generalization of the temporal integration of \cite{BrownianMultiblobSuspensions} that fits the IB framework used in \cite{RigidMultiblobs} to handle the hydrodynamic interactions including with boundaries, and use the fluctuating hydrodynamics approach proposed in \cite{BrownianBlobs} to account for Brownian motion.
We develop a novel Split Euler--Maruyama (SEM) temporal integration scheme to capture the stochastic drift which strongly affects even a single particle. The SEM scheme modifies the preconditioned Krylov method of \cite{RigidMultiblobs} to maintain its linear scaling but includes the necessary stochastic contributions to the rigid-body dynamics.
In \cite{Bohlein2011} the authors experimentally observed soliton wave patterns in a driven colloidal monolayer moving above a bottom substrate. The bottom wall is patterned with a periodic potential meant to mimic surface roughness. The monolayer is forced into quasi-2D confinement by laser-induced forces, and driven by the flow generated by moving the bottom wall. Brownian motion is crucial in activating the transitions of the colloids between the minima of the patterned potential, and must be captured accurately to resolve the dynamics of the monolayer. In section \ref{lattice}, we use the RB-FIB method to numerically investigate a modified version of the experiment performed in \cite{Bohlein2011} where we confine the monolayer in a thin microchannel. We observe novel wave patterns in the colloidal monolayer which emerge due to the physical confinement. While simulations of this type have been performed \cite{ColloidSheetFriction,LatticeNoHydro}, our work is, to our knowledge, the first which includes an accurate treatment of the hydrodynamics.
The rest of this work is organized as follows. In section \ref{background} we give the continuous formulation of the problem and introduce some relevant notation. In section \ref{DF} we formulate the spatial discretization of the continuum equations. In section \ref{tint} we introduce the SEM scheme as an efficient temporal discretization that maintains discrete fluctuation dissipation balance. To numerically validate our scheme, we consider several test cases. Appendix \ref{sec:boom} considers a boomerang-shaped particle in a slit channel, and confirms that the RB-FIB method is first-order weakly accurate for expectations with respect to the Gibbs-Boltzmann equilibrium distribution. Section \ref{twoSphere} considers two spherical particles trapped in a tight cuboidal box to examine the effect of spatial resolution on dynamic statistics. In section \ref{lattice} we study the transition from static to dynamic friction for a suspension of many spherical colloids confined to a quasi--two-dimensional slit channel, and hydrodynamically driven across a periodic substrate potential by translating the microchannel with constant velocity. We conclude with a summary and discussion of future directions in section \ref{sec:conc}.
\section{Continuum Formulation} \label{background}
The fluctuating Stokes equations in a physical domain $\Omega$ can be written as \cite{FluctHydroNonEq_Book}
\begin{align} \label{FHD}
\rho \partial_{t} \V{v} &= \M{\nabla} \cdot \V{\Sigma} + \V{g} = \M{\nabla} \cdot \left( \V{\sigma} + \sqrt{2 k_{B} T \eta} \sM{Z} \right) + \V{g}, \\
\M{\nabla} \cdot \V{v} &= 0, \nonumber
\end{align}
where the fluid has density $\rho$, shear viscosity $\eta$, and temperature $T$, all of which we take to be constant in this work. Denoting Cartesian coordinates with $\V{x} \in \Omega$ and time with $t$, $\V{g}\left(\V{x},t \right)$ represents a fluid body force and $\V{\sigma} = -\pi \M{I} + \eta \left( \M{\nabla} \V{v} + \M{\nabla} \V{v}^{T}\right)$ is the dissipative component of the fluid stress tensor, where $\V{v}\left(\V{x},t \right) $ is the fluid velocity, and $\pi\left(\V{x},t \right)$ is the pressure. The \emph{stochastic stress tensor} $\sqrt{2 k_{B} T \eta} \sM{Z}$ accounts for the fluctuating contribution to the fluid stress and ensures fluctuation dissipation balance, where $\sM{Z}\left(\V{x},t \right)$ is a symmetric random Gaussian tensor whose components are delta-correlated in space and time,
\[
\av{\mathcal{Z}_{ij}(\V{x},t)\mathcal{Z}_{kl}(\V{x}',t')} = \left( \delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk} \right) \delta(t-t')\delta(\V{x} - \V{x}').
\]
We consider $N_b$ arbitrarily shaped rigid particles (bodies) $\mathcal{B}_p$, $1 \leq p \leq N_b$, suspended in the fluctuating Stokesian fluid. Because the particles are treated as rigid, their position in space can be completely described by the Cartesian location $\V{q}_p(t)$ of a representative tracking point, and an orientation $\V{\theta}_p(t)$ relative to a reference configuration. The bulk of our discussion will be independent of how one wishes to describe the orientation of the particles; in practice, however, we use unit quaternions, as described in \cite{BrownianMultiBlobs}. The composite configuration of particle $p$ will be denoted by $\V{Q}_p = \left[\V{q}_p, \V{\theta}_p \right]$.
Rigid body $p$ moves with translational velocity $\V{u}_p = d\V{q}_p/dt$ and rotates with angular velocity $\V{\omega}_p$ around the tracking point. The composite velocity $\V{U}_p = \left[\V{u}_p, \V{\omega}_p \right]$ can be used to express the velocity of an arbitrary point $\V{r} \in \partial \mathcal{B}_p$ on the surface of the particle to give the \emph{no-slip condition}
\begin{equation}\label{rigidBC}
\V{v}\left( \V{r} \right) = \V{u}_{p} + \left( \V{r} - \V{q}_{p} \right) \times \V{\omega}_{p} + \breve{\V{u}}_p.
\end{equation}
Here $\breve{\V{u}}_p$ is an apparent slip between the fluid and the particle surface which may be freely prescribed up to the condition that the integral of the slip velocity vanish over the surface of the particle \cite{FBIM}. The free slip velocity can be used to account for active layers formed by, say, electrokinetic flows or beating flagella.
Newton's second law gives the acceleration of particle $p$ in terms of the applied force $\V{f}_{p}$ and the applied torque $\V{\tau}_{p}$,
\begin{align}
m_{p} \frac{\partial \V{u}_p}{\partial t} &= \V{f}_{p} - \displaystyle \int_{\partial \mathcal{B}_{p}} \left(\V{\Sigma} \cdot \V{n} \right) d A( \V{r}), \label{CFTB1} \\
\M{I}_{p} \frac{\partial \V{\omega}_p}{\partial t} &= \V{\tau}_{p} - \displaystyle \int_{\partial \mathcal{B}_p} \left( \V{r} - \V{q}_{p} \right) \times \left(\V{\Sigma} \cdot \V{n} \right) d A( \V{r}), \label{CFTB2}
\end{align}
where $\V{n}\left( \V{r}\right) $ is the outward pointing surface normal vector, and $m_{p}$ and $\M{I}_{p}$ are the particle's mass and inertia tensor, respectively.
In this work we are interested in the \emph{overdamped} or steady Stokes limit of the dynamics, which is the one relevant for colloidal suspensions due to the very large Schmidt numbers and very small Reynolds numbers. In the \emph{absence} of thermal fluctuations, the steady Stokes or inertia-less limit of equations \eqref{CFTB1}-\eqref{CFTB2}, \eqref{rigidBC}, and \eqref{FHD} is taken by simply deleting the inertial terms to obtain
\begin{align}
- \eta \M{\nabla}^2 \V{v} + \M{\nabla} \pi = \V{g}, \ &\M{\nabla} \cdot \V{v} = 0 \label{DetC1} \text{ in } \Omega \setminus \left\lbrace \cup_p \mathcal{B}_p \right\rbrace, \\
\V{v}(\V{r}) &= \V{u}_{p} + \V{\omega}_{p} \times \left( \V{r} - \V{q}_{p} \right) + \breve{\V{u}}_p: \hspace{0.5cm} \forall p, \forall \V{r} \in \partial \mathcal{B}_{p}, \label{DetC2} \\
\V{f}_{p} = \displaystyle \int_{\partial \mathcal{B}_{p}} \V{\sigma} \cdot \V{n} \left(\V{r}\right) d A( \V{r}), \ \ \V{\tau}_{p} &= \displaystyle \int_{\partial \mathcal{B}_p} \left( \V{r} - \V{q}_{p} \right) \times \left( \V{\sigma} \cdot \V{n} \right) \left(\V{r}\right) d A( \V{r}):\hspace{0.5cm} \forall p. \label{DetC3}
\end{align}
The solution to the linear system of equations \eqref{DetC1}--\eqref{DetC3} can be expressed using a symmetric, positive semi-definite \emph{body mobility matrix} $\sM N(\V{Q})$ which is a function of the composite configuration $\V{Q} = \left[\V{q}_{1}, \V{\theta}_{1}, \ldots ,\V{q}_{N_b}, \V{\theta}_{N_b} \right]$. The body mobility matrix acts on the external forces and torques applied to each particle $\V{F} = \left[ \V{f}_{1}, \V{\tau}_{1}, \ldots ,\V{f}_{N_b}, \V{\tau}_{N_b} \right]$ to produce a composite vector $\V{U} = \left[ \V{u}_{1}, \V{\omega}_{1}, \ldots, \V{u}_{N_b}, \V{\omega}_{N_b} \right]$ of the rigid body velocities of the particles, $\V{U}=\sM N \V{F}$.
If we \emph{include} the thermal fluctuations, the overdamped limit of equations \eqref{CFTB1}-\eqref{CFTB2}, \eqref{rigidBC}, and \eqref{FHD} is the overdamped Langevin equation \cite{LangevinDynamics_Theory,FBIM}
\begin{align}
\V{U} &= \sM N \V{F} + \sqrt{2 k_{B} T} \ \sM N^{1/2} \diamond \sM{W} \label{LangevinNd} \\
&= \sM N \V{F} + k_{B} T \left(\partial_{\V{Q}} \cdot \sM N \right) + \sqrt{2 k_{B} T} \ \sM N^{1/2} \sM{W} \label{LangevinN}
\end{align}
where $\diamond$ is the kinetic stochastic product \cite{KineticStochasticIntegral_Ottinger}, and $\sM{W}(t)$ is a collection of independent Wiener processes. Equation \eqref{LangevinN} is the conversion of \eqref{LangevinNd} to Ito form, by introducing the term $k_{B} T \left(\partial_{\V{Q}} \cdot \sM N \right)$ which we call the `thermal' or `stochastic' drift \footnote{The precise mathematical interpretation of this notation when $\V{Q}$ includes particle orientations expressed in terms of quaternions is explained in \cite{BrownianMultiBlobs}.}. Fluctuation dissipation balance is maintained in \eqref{LangevinN} through the random vector $\sM N^{1/2} \sM{W}$ such that $ \sM N^{1/2} \left(\sM N^{1/2}\right)^{\star} = \sM N$, where star denotes an $L_2$ adjoint (i.e., a conjugate transpose for matrices). Note that the dynamics of the particle configurations $d\V{Q}/dt$ can be directly expressed in terms of $\V{U}$ using the quaternion representation of the particle orientations, as discussed in \cite{BrownianMultiBlobs}.
Numerical methods for temporal integration of \eqref{LangevinN} are discussed in \cite{BrownianMultiblobSuspensions}. These methods require an efficient way to generate both the deterministic and Brownian velocities of the particles. Specifically, efficient Brownian dynamics for rigid bodies requires computing an Euler--Maruyama approximation of the apparent linear and angular velocities over a time step of duration $\D{t}$,
\begin{equation} \label{UEMdef}
\V{U}_{\text{EM}} = \sM N \V{F} + \sqrt{\frac{2 k_{B} T}{\D{t}}} \ \sM N^{1/2} \V{W},
\end{equation}
where $\V{W}$ is a collection of independent standard Gaussian random variables. Much of the strength and flexibility of the RB-FIB method we introduce in this work comes from the fact that we compute $\V{U}_{\text{EM}}$ by discretizing a semi--continuum formulation of the overdamped particle dynamics (temporarily neglecting the thermal drift) instead of relying on explicit representations of Green's functions as in \cite{BrownianMultiblobSuspensions,FBIM}. The missing stochastic drift term in \eqref{LangevinN} involving $\partial_{\V{Q}} \cdot \sM N$ can be obtained in expectation by adding a correction to the Euler--Maruyama method, as we explain in detail in Section \ref{tint}.
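As a concrete illustration, the Euler--Maruyama update \eqref{UEMdef} can be sketched in a few lines of NumPy for a toy, configuration-independent body mobility matrix. The matrix and all parameter values below are arbitrary stand-ins, and a dense Cholesky factor is used for $\sM N^{1/2}$ (any factor satisfying $\sM N^{1/2} \left(\sM N^{1/2}\right)^{\star} = \sM N$ maintains fluctuation dissipation balance):

```python
import numpy as np

rng = np.random.default_rng(0)
kBT, dt = 4.1e-3, 1e-3            # thermal energy and time step (toy units)

# Toy symmetric positive-definite body mobility matrix N (6 DOFs = one body)
A = rng.standard_normal((6, 6))
N = A @ A.T + 6 * np.eye(6)

# A Cholesky factor is one valid choice of N^{1/2}
Nhalf = np.linalg.cholesky(N)
assert np.allclose(Nhalf @ Nhalf.T, N)

F = rng.standard_normal(6)        # applied forces and torques
W = rng.standard_normal(6)        # independent standard Gaussians
U_EM = N @ F + np.sqrt(2 * kBT / dt) * (Nhalf @ W)
```

In the actual method $\sM N$ is never formed; the point of the sketch is only the structure of the update and the factorization property.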
Temporarily neglecting the terms that contribute to the stochastic drift, the overdamped limit of equations \eqref{CFTB1}-\eqref{CFTB2}, \eqref{rigidBC}, and \eqref{FHD} can be obtained quite simply by deleting all of the inertial terms and replacing the space-time white noise field $\sM{Z}(\V{x},t)$ with a white-in-space random Gaussian field $\V{Z}(\V{x})/\sqrt{\D{t}}$. In the immersed boundary approach we use here, we extend the fluid equation over the whole domain, including inside the bodies, since \eqref{rigidBC} is satisfied for all $\V{r}\in \mathcal{B}_p$ including the rigidly moving interior \cite{RigidMultiblobs}. This gives a system of semi--continuum linear equations for $\V{U}_{\text{EM}} = \left[ \V{u}^{\text{EM}}_{1}, \V{\omega}^{\text{EM}}_{1}, \ldots \V{u}^{\text{EM}}_{N_b}, \V{\omega}^{\text{EM}}_{N_b} \right]$,
\begin{align}
-\eta \M{\nabla}^2 \V{v} + \M{\nabla} \pi = \sqrt{\frac{2 k_{B} T \eta}{\D{t}}}& \M{\nabla} \cdot \V{Z} + \displaystyle \sum_p \displaystyle \int_{\partial \mathcal{B}_{p}} \delta \left(\V{x} -\V{r} \right) \V{\lambda}\left(\V{r}\right) d A( \V{r}) + \V{g}: \hspace{0.5cm} \forall \V{x} \in \Omega \label{LangevinC1} \\
\M{\nabla} \cdot \V{v} &= 0 : \hspace{0.5cm} \forall \V{x} \in \Omega \label{LangevinC2} \\
\displaystyle \int_{\Omega} \delta \left(\V{x} -\V{r} \right) \V{v}\left( \V{x} \right) d V( \V{x}) &= \V{u}^{\text{EM}}_{p} + \V{\omega}^{\text{EM}}_{p} \times \left( \V{r} - \V{q}_{p} \right) + \breve{\V{u}}_p: \hspace{0.5cm} \forall p, \forall \V{r} \in \partial \mathcal{B}_{p}, \label{LangevinC3} \\
\V{f}_{p} = \displaystyle \int_{\partial \mathcal{B}_{p}} \V{\lambda}\left(\V{r}\right) d A( \V{r}), \ \ \V{\tau}_{p} &= \displaystyle \int_{\partial \mathcal{B}_p} \left( \V{r} - \V{q}_{p} \right) \times \V{\lambda}\left(\V{r}\right) d A( \V{r}):\hspace{0.5cm} \forall p. \label{LangevinC4}
\end{align}
We do not add a superscript `EM' to the velocity and pressure here with the understanding that in the overdamped limit they are just auxiliary variables used to obtain the motion of the particles. Here $\V{\lambda}$ is the jump in the fluid stress across the boundary of the particles, $\V{\lambda} \equiv \llbracket \V{\Sigma} \cdot \V{n} \rrbracket$, which can be simply identified as the traction force when $\breve{\V{u}}=\V{0}$; see Appendix A in \cite{RigidMultiblobs} for an explanation of why the same formulation works even when the apparent slip $\breve{\V{u}}$ is nonzero. The use of a Dirac--delta distribution to restrict quantities to their values on the surface of the bodies $\V{r} \in \partial \mathcal{B}_{p}$ is the basis for the immersed boundary spatial discretization described next.
\section{Discrete Formulation} \label{DF}
In this section we describe an efficient and robust means of discretizing the problem formulated in section \ref{background}. We will use standard finite difference and immersed boundary methods to construct matrix discretizations of the differential and integral operators appearing in \eqref{LangevinC1}--\eqref{LangevinC4}. We will briefly review efficient methods to solve the large-scale linear system which arises in the discretized equations; details can be found in \cite{RigidIBM,RigidMultiblobs}.
\subsection{Discrete Fluctuating Stokes Equations}
To begin discretizing equations \eqref{LangevinC1}--\eqref{LangevinC4} we consider the discretization of the fluctuating Stokes equations without any suspended particles. Specifically, we will temporarily ignore the effect of the rigid bodies on the fluid in equations \eqref{LangevinC1}-\eqref{LangevinC2} by setting $\V{\lambda} = 0$. Importantly, all of the discussion in this section will remain agnostic to the choice of physical boundary conditions on $\Omega$.
We discretize the Stokes equations on a regular Cartesian `Eulerian' grid of cells with volume $\D{V}=h^d$, where $h$ is the grid spacing and $d$ is the space dimension. Velocity variables are staggered on the faces of the grid relative to the cell-centered pressure variables. The infinite dimensional white noise field $\V{Z}$ is spatially discretized as $\V{W}$, a collection of random Gaussian variables generated on the faces and nodes of the fluid grid, as described in \cite{LLNS_Staggered}. We use staggered discretizations of the vector divergence $\M{D}$ and scalar gradient $\M{G}$ operators that obey the adjoint relation $\M{G} = -{\M{D}}^{\star}$. We also define discrete tensor divergence $\pmb{\mathds{D}}$ and vector gradient $\pmb{\mathds{G}}=-\pmb{\mathds{D}}^{\star}$ operators, which account for the imposed boundary conditions on $\partial\Omega$. The discrete scalar Laplacian is $\M{L} = \M{D} \M{G}$ and the discrete vector Laplacian is $\pmb{\mathbb{L}} = \pmb{\mathds{D}} \pmb{\mathds{G}}$ in order to satisfy a discrete fluctuation dissipation balance principle \cite{LLNS_Staggered}.
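The adjoint construction $\M G = -\M{D}^{\star}$, $\M L = \M D \M G$ can be illustrated on a one-dimensional periodic grid; the following is a minimal sketch, not the full three-dimensional staggered discretization:

```python
import numpy as np

n, h = 8, 0.5
# Divergence of face velocities at cell i: (D v)_i = (v_{i+1} - v_i) / h,
# where v_j lives on face j of a periodic 1D grid.
D = (np.roll(np.eye(n), 1, axis=1) - np.eye(n)) / h
G = -D.T               # gradient defined as the negative adjoint of D
L = D @ G              # discrete Laplacian L = D G

assert np.allclose(L, L.T)                   # symmetric
assert np.allclose(L @ np.ones(n), 0.0)      # constants in the null space
assert np.allclose(np.diag(L), -2.0 / h**2)  # standard 3-point stencil
```

Defining the gradient as the negative adjoint of the divergence, rather than discretizing it independently, is what makes the composite Laplacian automatically symmetric, which in turn is needed for discrete fluctuation dissipation balance.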
With $\V{\lambda} = 0$, we discretize equations \eqref{LangevinC1}-\eqref{LangevinC2} as
\begin{align} \label{DFHD}
- \eta \pmb{\mathbb{L}} \V{v} + \M{G} \pi &= \V{g} + \left(\frac{2 k_{B} T \eta}{\D{V} \D{t}}\right)^{1/2} \pmb{\mathds{D}} \M{W} = \V{f}, \\
\M{D} \V{v} &= 0. \nonumber
\end{align}
This maintains discrete fluctuation dissipation balance even in the presence of physical boundaries. It is convenient at this point to introduce the symmetric positive-semidefinite discrete Stokes solution operator
$$-\sM{L}^{-1} = \frac{1}{\eta}\left(\pmb{\mathbb{L}}^{-1}-\pmb{\mathbb{L}}^{-1}\M G\left(\M D\,\pmb{\mathbb{L}}^{-1}\M G\right)^{-1}\M D\,\pmb{\mathbb{L}}^{-1}\right),$$
such that the solution to \eqref{DFHD} can be written as
$\V{v} = \sM{L}^{-1} \V{f}.$
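This Schur-complement formula is easy to sanity check with dense linear algebra. In the sketch below (a toy model, not the actual staggered-grid operators), a random symmetric negative definite matrix stands in for $\pmb{\mathbb{L}}$ and a random full-rank matrix for $\M D$; the projection formula is compared against a direct solve of the saddle-point Stokes system:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 1.3
n, m = 12, 5                      # velocity and pressure DOFs (toy sizes)

R = rng.standard_normal((n, n))
Lap = -(R @ R.T + n * np.eye(n))  # stand-in for the vector Laplacian (SND)
D = rng.standard_normal((m, n))   # stand-in discrete divergence (full rank)
G = -D.T                          # discrete gradient, G = -D^T

# Schur-complement (projection) form of the Stokes solution operator
Lapinv = np.linalg.inv(Lap)
Sinv = -(Lapinv
         - Lapinv @ G @ np.linalg.inv(D @ Lapinv @ G) @ D @ Lapinv) / eta

f = rng.standard_normal(n)
v = Sinv @ f

# v must agree with a direct solve of  -eta*Lap v + G p = f,  D v = 0
K = np.block([[-eta * Lap, G], [D, np.zeros((m, m))]])
vp = np.linalg.solve(K, np.concatenate([f, np.zeros(m)]))
assert np.allclose(v, vp[:n])
assert np.allclose(D @ v, 0)      # the result is discretely divergence-free
```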
\subsection{Immersed Boundary Method for Rigid Bodies}
To mediate the fluid-structure interaction we use a discrete approximation $\delta_h$ to the Dirac delta distribution appearing in \eqref{LangevinC1}--\eqref{LangevinC4}. In the results presented here $\delta_h$ is the 6-point kernel developed in \cite{New6ptKernel}. While the discretization on the grid introduces numerical artifacts in general \cite{IBM_PeskinReview}, the 6-point kernel we use here was specifically designed with grid invariance and smoothness in mind \cite{New6ptKernel}.
Throughout the rest of this work, we will represent a body $\mathcal{B}_p$ as a rigid agglomerate of markers or \emph{blobs} with positions $\V{r}^{p}_{i} \in \partial\mathcal{B}_p$ and we refer to the collection of these points as the `Lagrangian' grid. The ideal spacing between the blobs $s \sim h$ is related to the meshwidth $h$ used in discretizing the fluid equations \eqref{DFHD}, as discussed in detail in section IV of \cite{RigidMultiblobs}. We discretize $\V{\lambda}$ on the Lagrangian grid as a collection of force vectors $\V{\lambda}_{i}^{p} \approx \V{\lambda}\left( \V{r}^{p}_{i} \right) \D{A}\left( \V{r}^{p}_{i} \right)$. It is important to note that the discrete $\V{\lambda}_{i}^{p}$ has units of force, unlike the continuum $\V{\lambda}\left( \V{r} \right)$, which is a force density. Recall that the fluid velocity $\V{v}$ is defined on the centers of the faces $\V{x}_\alpha$ of the Eulerian grid.
To discretize equations \eqref{LangevinC1}--\eqref{LangevinC4} we use simple trapezoidal rule quadratures to approximate the integrals as appropriate sums over Eulerian or Lagrangian grid points. This leads us to define the \emph{spreading operator} $\sM S$ and the \emph{interpolation operator} $\sM J$ as
\begin{align}
\left( \sM J \V{v} \right)_i^p &= \displaystyle \sum_{\V{x}_\alpha \in \Omega } \delta_{h}\left(\V{x}_{\alpha} - \V{r}^p_{i} \right) \V{v}\left( \V{x}_{\alpha} \right) \approx \displaystyle \int_{\Omega} \delta \left( \V{x} - \V{r}_i^p \right) \V{v}\left( \V{x} \right) d V(\V{x}), \\
\left( \sM S \V{\lambda} \right)_\alpha &= \frac{1}{\D{V}} \displaystyle \sum_p \sum_{\V{r}^{p}_{i} } \delta_{h}\left( \V{x}_\alpha - \V{r}^p_{i} \right) \V{\lambda}_{i}^p \approx \displaystyle \sum_p \int_{\partial \mathcal{B}_{p}} \delta \left( \V{x}_{\alpha} - \V{r} \right) \V{\lambda}\left( \V{r} \right) d A(\V{r}).
\end{align}
It is important to note that $\sM J$ and $\sM S$ satisfy the adjoint relation $\sM J = \D{V} \sM S^{\star}$. This property ensures discrete conservation of energy \cite{IBM_PeskinReview} as well as discrete fluctuation dissipation balance. The definitions of $\sM S$ and $\sM J$ are modified when the support of the kernel overlaps with a physical boundary of $\Omega$. Appropriate ghost points across the physical boundary are used which ensures the effects of the boundary are incorporated into the spreading and interpolation operations while preserving the adjoint property, as described in Appendix D of \cite{RigidIBM}.
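The adjoint relation $\sM J = \D{V} \sM S^{\star}$ is easy to verify in a toy one-dimensional setting. The sketch below uses a simple two-point `hat' kernel as a stand-in for the 6-point kernel of the paper, with a normalization chosen for the sketch (weights summing to one), which may differ from the implementation's convention:

```python
import numpy as np

def phi_hat(r):
    # 2-point "hat" kernel (stand-in for the 6-point kernel); its shifted
    # copies sum to one, so constant and linear fields interpolate exactly
    return np.maximum(1.0 - np.abs(r), 0.0)

n, h = 16, 0.25
dV = h                               # "volume" of a 1D grid cell
x = h * np.arange(n)                 # Eulerian grid (1D, markers interior)
X = np.array([1.13, 2.47])           # Lagrangian marker positions

# Interpolation matrix J: row k holds the kernel weights of marker k
J = phi_hat((x[None, :] - X[:, None]) / h)
S = J.T / dV                         # spreading operator

assert np.allclose(J, dV * S.T)      # the adjoint relation J = dV S*
assert np.allclose(J.sum(axis=1), 1.0)  # constants interpolated exactly
assert np.allclose(J @ x, X)            # ... and so are linear fields
```

Because $\sM S$ is defined as the (scaled) transpose of $\sM J$ rather than discretized separately, the adjoint property holds to roundoff by construction.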
We discretize the integrals giving the total force and torque in \eqref{LangevinC4} using simple trapezoidal quadrature to define the geometric matrix $\sM K(\V{Q})$ \cite{RigidMultiblobs_Swan},
\begin{align}
\left(\sM K \V{U} \right)_i^p &= \V{u}_{p} + \V{\omega}_{p} \times \left( \V{r}^{p}_i - \V{q}_{p} \right),\\
\left(\sM K^T \V{\lambda} \right)_p &= \begin{bmatrix}
\displaystyle \sum_{\V{r}^{p}_{i} } \V{\lambda}^p_i \\
\displaystyle \sum_{\V{r}^{p}_{i} } \left( \V{r}^{p}_i - \V{q}_{p} \right) \times \V{\lambda}^p_i
\end{bmatrix}
\approx \begin{bmatrix}
\displaystyle \int_{\partial \mathcal{B}_{p}} \V{\lambda}\left(\V{r}\right) d A(\V{r}) \\
\displaystyle \int_{\partial \mathcal{B}_p} \left( \V{r} - \V{q}_{p} \right) \times \V{\lambda}\left(\V{r}\right) d A(\V{r})
\end{bmatrix}.
\end{align}
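A minimal dense construction of $\sM K$ and its transpose for a single body can be sketched as follows; the blob positions are chosen arbitrarily, and the rigid-body velocity is taken as $\V{u} + \V{\omega} \times (\V{r}_i - \V{q})$ so that $\sM K^T$ is the literal matrix transpose:

```python
import numpy as np

rng = np.random.default_rng(2)
q = np.array([0.5, -0.2, 1.0])              # tracking point of the body
r = q + 0.1 * rng.standard_normal((8, 3))   # blob positions on the body

def cross_mat(a):
    # matrix form of the cross product, cross_mat(a) @ b == a x b
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0]])

# K maps [u, omega] to rigid-body blob velocities: u + omega x (r_i - q)
K = np.vstack([np.hstack([np.eye(3), -cross_mat(ri - q)]) for ri in r])

u, om = rng.standard_normal(3), rng.standard_normal(3)
V = K @ np.concatenate([u, om])
assert np.allclose(V.reshape(-1, 3), u + np.cross(om, r - q))

# K^T sums blob forces into a total force and torque on the body
lam = rng.standard_normal((8, 3))
FT = K.T @ lam.ravel()
assert np.allclose(FT[:3], lam.sum(axis=0))
assert np.allclose(FT[3:], np.cross(r - q, lam).sum(axis=0))
```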
\subsection{The Discrete System}
We can now compactly state the spatially discretized system \eqref{LangevinC1}--\eqref{LangevinC4} as
\begin{align}
- \eta \pmb{\mathbb{L}} \V{v} + \M{G} \pi &= \V{g} + \sM S \V{\lambda} + \left(\frac{2 k_{B} T \eta}{\D{V} \D{t}}\right)^{1/2} \pmb{\mathds{D}} \M{W}, \label{LangevinD1} \\
\M{D} \V{v} &= 0, \label{LangevinD2} \\
\sM J \V{v} &= \sM K \V{U}_{\text{EM}} + \breve{\V u}, \label{LangevinD3} \\
\sM K^{T} \V{\lambda} &= \V{F}. \label{LangevinD4}
\end{align}
For simplicity in the following discussion, we will take the fluid body force $\V{g}=\V{0}$.
Using $\sM S$ and $\sM J$ we may define a regularized, symmetric, positive semi-definite blob-blob mobility matrix $\sM M = \sM J \sM L^{-1} \sM S$, where we recall that $\sM L^{-1}$ denotes the discrete Stokes solution operator. The block $\sM M_{ij}$ gives the pairwise mobility matrix between two blobs $\V{r}_{i}$ and $\V{r}_{j}$ \cite{BrownianBlobs,RigidIBM,RigidMultiblobs},
\begin{equation} \label{MobDefn}
\sM M_{ij} \approx \displaystyle \iint_{\Omega \times \Omega} \delta_{h}\left(\V{x} - \V{r}_{i} \right) \Set{G} \left( \V{x},\V{y} \right) \delta_{h}\left( \V{y} - \V{r}_{j} \right) d V(\V{y}) d V(\V{x})
\end{equation}
where $\Set{G}$ is the Green's function for the Stokes equations in $\Omega$ with the specified boundary conditions. For two markers/blobs that are sufficiently far apart in a sufficiently large domain, $\sM M_{ij}$ approximates the mobility for a pair of spheres of radius $a \sim h$\footnote{Specifically, $a=1.47 h$ for the 6-point kernel used in this work \cite{RigidIBM,RigidMultiblobs}.}. When the Green's function is available analytically, $\sM M$ can be computed explicitly; here we handle more general boundary conditions by solving the discretized steady Stokes equations numerically to compute the action of $\Set{G}$.
We may use the definition of $\sM M$ to eliminate the velocity and pressure from \eqref{LangevinD1}--\eqref{LangevinD4} to obtain the reduced linear system
\begin{align}
\sM M \V{\lambda} &= \sM K \V{U}_{\text{EM}} + \breve{\V u} -\sqrt{\frac{2 k_{B} T}{\D{t}}} \sM M^{1/2} \V{W}, \label{LangevinM1} \\
\sM K^{T} \V{\lambda} &= \V{F}. \label{LangevinM2}
\end{align}
Here we can identify the matrix $\sM M^{1/2} = \sqrt{\eta/\D{V}} \sM J \sM L^{-1} \pmb{\mathds{D}}$ as was done in \cite{BrownianBlobs}, such that $\sM M^{1/2} \left(\sM M^{1/2}\right)^{\star} = \sM M$. Equations \eqref{LangevinM1}--\eqref{LangevinM2} are identical to equation (9) in \cite{BrownianMultiblobSuspensions}; the only difference here is that we do not explicitly have access to $\sM M$ because we do not necessarily know the Green's function $\Set{G}$. Following \cite{BrownianMultiblobSuspensions}, the solution of \eqref{LangevinM1}--\eqref{LangevinM2} can be written in terms of the body mobility matrix $\sM N = \left( \sM K^{T} \sM M^{-1} \sM K \right)^{-1}$,
\begin{equation} \label{incompleteN}
\V{U}_{\text{EM}} = \sM N \V{F} - \sM N \sM K^{T} \sM M^{-1} \breve{\V u} + \sqrt{\frac{2 k_{B} T}{\D{t}}} \sM N^{1/2} \M{W},
\end{equation}
where we have identified $\sM N^{1/2} \equiv \sM N \sM K^T \sM M^{-1} \sM M^{1/2}$, such that $\sM N^{1/2} \left(\sM N^{1/2}\right)^{\star}=\sM N$.
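The algebraic identities above can be checked with small dense stand-ins for $\sM M$ and $\sM K$ (in the actual method neither matrix is ever formed explicitly); the sketch below verifies that $\V{U} = \sM N \V{F}$ solves the deterministic reduced system and that the proposed $\sM N^{1/2}$ satisfies the factorization property:

```python
import numpy as np

rng = np.random.default_rng(3)
nblob_dof, nbod = 24, 2                    # toy sizes
A = rng.standard_normal((nblob_dof, nblob_dof))
M = A @ A.T + nblob_dof * np.eye(nblob_dof)  # stand-in blob mobility (SPD)
K = rng.standard_normal((nblob_dof, 6 * nbod))  # stand-in geometric matrix

Minv = np.linalg.inv(M)
N = np.linalg.inv(K.T @ Minv @ K)          # body mobility N = (K^T M^-1 K)^-1

# U = N F solves the deterministic reduced system  M lam = K U,  K^T lam = F
F = rng.standard_normal(6 * nbod)
U = N @ F
lam = Minv @ K @ U
assert np.allclose(K.T @ lam, F)

# N^{1/2} = N K^T M^{-1} M^{1/2} satisfies N^{1/2} (N^{1/2})^T = N
Mhalf = np.linalg.cholesky(M)
Nhalf = N @ K.T @ Minv @ Mhalf
assert np.allclose(Nhalf @ Nhalf.T, N)
```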
We have now shown that we can compute $\V{U}_{\text{EM}}$ given by \eqref{incompleteN} efficiently by solving
the discrete linear system \eqref{LangevinD1}--\eqref{LangevinD4}, which can be done with a complexity linear in the number of particles thanks to the preconditioned GMRES solver developed in \cite{RigidMultiblobs}.
All that remains to perform Brownian Dynamics is an efficient means of computing the stochastic drift term $k_{B} T \left(\partial_{\V{Q}} \cdot \sM N \right)$ in \eqref{LangevinN}, as we discuss next.
\section{Temporal Integration} \label{tint}
In this section, we develop a temporal integration scheme for equation \eqref{LangevinN} which efficiently captures the contribution from the stochastic drift term $k_{B} T \left(\partial_{\V{Q}} \cdot \sM N \right)$. Specifically, we use \emph{random finite differences} (RFDs) \cite{BrownianBlobs,BrownianMultiBlobs,MagneticRollers} to compute terms that will be included on the right hand side of equations \eqref{LangevinD1}--\eqref{LangevinD4} to account for the stochastic drift in expectation.
The algorithm we develop here requires the solution of an additional linear system similar to \eqref{LangevinD1}--\eqref{LangevinD4} each time step in order to capture the stochastic drift. This is still more efficient than the classical Fixman scheme, which requires solving a resistance problem; resistance problems are considerably more expensive than mobility problems when iterative methods are used \cite{RigidMultiblobs,libStokes}.
\subsection{Random Finite Differences}
Solving equations \eqref{LangevinD1}--\eqref{LangevinD4} without any modifications yields an efficient means of computing $\V{U}_{\text{EM}}$, and all that remains to simulate equation \eqref{LangevinN} is a means of computing $k_{B} T \left(\partial_{\V{Q}} \cdot \sM N \right)$. In past work \cite{BrownianBlobs,BrownianMultiBlobs,MagneticRollers}, some of us have proposed to use random finite differences (RFDs) to generate this term as follows.
Consider two Gaussian random vectors $\Delta \V{P}$ and $\Delta \V{Q}$, such that $\av{\Delta \V{P}} = \av{\Delta \V{Q}} = 0$ and $\av{\Delta \V{P} \Delta \V{Q}^{T}} = \M{I}$. For an arbitrary matrix $\sM{R}\left(\V{Q}\right)$, it holds that
\begin{align} \label{RFD}
\partial_{\V{Q}} \cdot \sM{R} \left( \V{Q} \right) &= \lim_{\delta \rightarrow 0} \frac{1}{\delta}\av{\left[\sM{R} \left( \V{Q} + \delta \Delta \V{Q} \right) - \sM{R} \left( \V{Q} \right)\right] \Delta \V{P}}\\
&= \lim_{\delta \rightarrow 0} \frac{1}{\delta}\av{\left[\sM{R} \left( \V{Q} + \frac{\delta}{2} \Delta \V{Q} \right) - \sM{R} \left( \V{Q} - \frac{\delta}{2} \Delta \V{Q} \right)\right] \Delta \V{P}},
\end{align}
where $\av{\cdot}$ denotes an average over realizations of the random vectors.
The limits in equations \eqref{RFD} may be approximated by simply choosing a small value for $\delta$, at the cost of introducing a truncation error of $\mathcal{O}\left( \delta \right)$ if the one--sided difference is used (first line) or $\mathcal{O}\left( \delta^2 \right)$ if the centered difference is used (second line). The value of $\delta$ should be chosen to balance the magnitude of the truncation error in the RFD against any numerical error associated with the application of (multiplication with) $\sM{R}$. Specifically, we take $\delta=\epsilon^{1/2}$ for the one-sided difference and $\delta=\epsilon^{1/3}$ for the centered difference, where $\epsilon$ is the relative error with which matrix-vector products with $\sM{R}$ are computed.
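The RFD identity can be demonstrated on a toy matrix function with a known divergence. In the sketch below we take $\sM{R}(\V{Q}) = \V{Q}\V{Q}^T$, for which $\partial_{\V{Q}} \cdot \sM{R} = (d+1)\V{Q}$, and choose $\Delta \V{P} = \Delta \V{Q}$ standard Gaussian so that $\av{\Delta \V{P} \Delta \V{Q}^{T}} = \M{I}$; the Monte Carlo average is compared against the analytic result to within statistical error:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
Q = rng.standard_normal(d)

def R(Q):
    # toy matrix R_ij(Q) = Q_i Q_j, with divergence over the second
    # index equal to (d + 1) Q -- an easy analytic reference
    return np.outer(Q, Q)

delta, nsamp = 1e-3, 50_000
est = np.zeros(d)
for _ in range(nsamp):
    dQ = rng.standard_normal(d)  # Delta P = Delta Q, so <dP dQ^T> = I
    est += (R(Q + 0.5 * delta * dQ) - R(Q - 0.5 * delta * dQ)) @ dQ / delta
est /= nsamp

exact = (d + 1) * Q
assert np.linalg.norm(est - exact) < 0.1 * np.linalg.norm(exact)
```

For this quadratic $\sM{R}$ the centered difference is exact in $\delta$, so the residual error is purely statistical and decays like $1/\sqrt{n_{\text{samp}}}$.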
A simple modification of the Euler--Maruyama scheme is to use a RFD with $\sM{R} \equiv \sM N$ to account for the stochastic drift \cite{BrownianMultiBlobs}. Since application of $\sM N$ requires solving \eqref{LangevinD1}--\eqref{LangevinD4} iteratively with some loose relative tolerance $\epsilon$, the required value of $\delta$ would be relatively large especially for the one-sided difference. Using the two-sided difference requires solving two additional mobility problems per time step, which is quite expensive. We now propose an alternative approach.
\subsection{The Split--Euler--Maruyama (SEM) Scheme}
Our goal is to design a means of computing the stochastic drift with as few linear solves as possible. In our prior work \cite{BrownianMultiblobSuspensions}, we accomplished this by expanding $\partial_{\V{Q}} \cdot \sM N$ using the chain rule,
\begin{align} \label{RFDsplit}
&\partial_{\V{Q}} \cdot \sM N = \partial_{\V{Q}} \cdot \left(\sM K^{T} \sM M^{-1} \sM K \right)^{-1} = \nonumber \\
&-\sM N \left( \partial_{\V{Q}} \sM K^{T} \right) \colon \sM M^{-1} \sM K \sM N - \sM N \sM K^{T} \sM M^{-1} \left( \partial_{\V{Q}} \sM M \right) \colon \sM M^{-1} \sM K \sM N + \sM N \sM K^{T} \sM M^{-1} \left( \partial_{\V{Q}} \sM K \right) \colon \sM N.
\end{align}
In this work we do not have explicit access to $\sM M$ so we carry this expansion one step further as done in \cite{BrownianBlobs},
\begin{align} \label{divM}
\partial_{\V{Q}} \cdot \sM M = \partial_{\V{Q}} \cdot \left(\sM J \sM L^{-1}\sM S\right) = \left( \partial_{\V{Q}} \sM J \right) \colon \sM{L}^{-1} \sM S + \sM J \sM{L}^{-1} \left( \partial_{\V{Q}} \cdot \sM S \right).
\end{align}
Our aim is to generate each term in \eqref{RFDsplit} and \eqref{divM} through a separate RFD on $\sM J$, $\sM S$, $\sM K$, or $\sM K^{T}$. This is particularly advantageous as these operators can all be applied efficiently to within roundoff tolerance in linear time, without requiring linear solvers.
The Euler--Maruyama--Traction (EMT) scheme proposed in \cite{BrownianMultiblobSuspensions} can be adapted to the present context to give the \emph{Split--Euler--Maruyama} (SEM) scheme outlined in Algorithm \ref{alg:sem}. In the first step of this algorithm we generate random forces and torques on each body
\begin{equation}
\V{W}^{\text{FT}} = k_{B} T \begin{bmatrix}
\frac{1}{L_p}\V{W}_{p}^{\text{f}} \\
\V{W}_{p}^{\tau}
\end{bmatrix},
\end{equation}
where $\V{W}_{p}^{\text{f}/\tau}$ are standard Gaussian random variables generated independently for each body $p$. Here $L_p$ is a length scale for body $p$, which we take as the maximum pairwise distance between blobs on a body. In step \ref{step:solve1} of Algorithm \ref{alg:sem} we solve a mobility problem with $\V{W}^{\text{FT}}$ as the applied force/torque on each body, to obtain the random variables
\begin{align} \label{ulvRFD}
\V{U}^{RFD} &= \sM N \V{W}^{\text{FT}}, \\
{\V{\lambda}}^{RFD} &= \sM M^{-1} \sM K \sM N \V{W}^{\text{FT}}, \\
\V{v}^{RFD} &= - \sM L^{-1} \sM S \sM M^{-1} \sM K \sM N \V{W}^{\text{FT}}.
\end{align}
Defining a random translational and rotational displacement for body $p$
\begin{equation}
\Delta \V{Q}_p = \begin{bmatrix}
L_p \V{W}_{p}^{\text{f}} \\
\V{W}_{p}^{\tau}
\end{bmatrix},
\end{equation}
gives two randomly displaced positions\footnote{Here for simplicity of notation we use addition to denote a random rotation of the body by an oriented angle $(\delta/2) \V{W}_{p}^{\tau}$ even though in practice this is realized as a quaternion multiplication in three dimensions \cite{BrownianMultiBlobs}.} of body $p$,
\begin{equation}
\V{Q}_p^{\pm} = \V{Q}_p \pm \frac{\delta}{2} \Delta \V{Q}_p.
\end{equation}
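Note that the length scales $L_p$ in $\V{W}^{\text{FT}}$ and $\Delta \V{Q}$ are chosen to cancel in the correlation between the two; since $\V{W}_{p}^{\text{f}}$ and $\V{W}_{p}^{\tau}$ are independent with unit covariance,

```latex
\[
\av{\V{W}^{\text{FT}}_{p} \left( \Delta \V{Q}_{p} \right)^{T}}
= k_{B} T \av{ \begin{bmatrix} \frac{1}{L_p} \V{W}_{p}^{\text{f}} \\ \V{W}_{p}^{\tau} \end{bmatrix}
\begin{bmatrix} L_p \V{W}_{p}^{\text{f}} \\ \V{W}_{p}^{\tau} \end{bmatrix}^{T} }
= k_{B} T \, \M{I},
\]
```

which is precisely the correlation $\av{\Delta \V{P} \Delta \V{Q}^{T}} = \M{I}$ required by \eqref{RFD}, scaled by the prefactor $k_{B} T$ that appears in the stochastic drift.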
We can produce the desired drift term through the following random finite differences on the matrices $\sM K^{T}$ and $\sM K$:
\begin{align}
\V{D}^{\sM K^{T}} &= \frac{1}{\delta} \left[ \sM K^{T} \left(\V{Q}^{+}\right) - \sM K^{T} \left(\V{Q}^{-}\right) \right] {\V{\lambda}}^{RFD} \approx \left( \partial_{\V{Q}} \sM K^{T} \right) \colon \sM M^{-1} \sM K \sM N \left(\V{W}^{\text{FT}} \Delta \V{Q}^{T} \right), \label{allDemBoysa} \\
\V{D}^{\sM K} &= \frac{1}{\delta} \left[ \sM K \left(\V{Q}^{+}\right) - \sM K \left(\V{Q}^{-}\right) \right] \V{U}^{RFD} \approx \left( \partial_{\V{Q}} \sM K \right) \colon \sM N \left(\V{W}^{\text{FT}} \Delta \V{Q}^{T} \right), \label{allDemBoysb}
\end{align}
which are analogous to the quantities $\V{D}^{F}$ and $\V{D}^{S}$ computed in Algorithm 1 of \cite{BrownianMultiblobSuspensions}.
In the EMT scheme, $\V{D}^{S}$ is obtained from an RFD performed directly on $\sM M$ (see Algorithm 1 of \cite{BrownianMultiblobSuspensions}); in the SEM scheme we instead use \eqref{divM} to compute it as a sum of RFDs on the interpolation and spreading operators,
\begin{align}
\V{D}^{\sM J} &= -\frac{1}{\delta} \left[ \sM J \left(\V{Q}^{+}\right) - \sM J \left(\V{Q}^{-}\right) \right] \V{v}^{RFD} \approx \left( \partial_{\V{Q}} \sM J \right) \colon \sM{L}^{-1} \sM S \sM M^{-1} \sM K \sM N \left(\V{W}^{\text{FT}} \Delta \V{Q}^{T} \right), \label{allDemBoys2a}\\
\V{D}^{\sM S} &= \frac{1}{\delta} \sM J \sM L^{-1} \left[\sM S \left(\V{Q}^{+}\right) - \sM S \left(\V{Q}^{-}\right) \right] {\V{\lambda}}^{RFD} \approx \sM J \sM{L}^{-1} \left( \partial_{\V{Q}} \sM S \right) \colon \sM M^{-1} \sM K \sM N \left(\V{W}^{\text{FT}} \Delta \V{Q}^{T} \right). \label{allDemBoys2b}
\end{align}
Note that the computation of $\V{D}^{\sM S}$ in step \ref{alg:uncon} of Algorithm \ref{alg:sem} requires an additional application of $\sM L^{-1}$ and therefore an additional unconstrained Stokes solve. This does not add much additional complexity to the computation because unconstrained (fluid only) Stokes systems of the form \eqref{DFHD} have fewer degrees of freedom and are far better conditioned than constrained (fluid + rigid bodies) systems of the form \eqref{LangevinD1}--\eqref{LangevinD4} \cite{RigidMultiblobs}.
To produce the correct drift term, we add $\V{D}^{\sM K} - \V{D}^{\sM J} - \V{D}^{\sM S}$ as a random slip and add $\V{D}^{\sM K^{T}}$ as a random force on the right hand side of the linear system in step \ref{step:solve2} of Algorithm \ref{alg:sem}. This generates an additional contribution to the velocity of the rigid particles, $\V{U}^{n} = \V{U}_{\text{EM}} + \V{U}_{\text{Drift}}$, where
\begin{align} \label{DriDef}
\V{U}_{\text{Drift}} &= \sM N \V{D}^{\sM K^{T}} + \sM N \sM K^{T} \sM M^{-1} \left( -\V{D}^{\sM K} + \V{D}^{\sM J} + \V{D}^{\sM S} \right) \nonumber \\
&=\partial_{\V{Q}} \sM N \colon \left(\V{W}^{\text{FT}} \Delta \V{Q}^{T} \right).
\end{align}
We used equations \eqref{allDemBoysa}--\eqref{allDemBoys2b} as well as equation \eqref{RFDsplit} to simplify from the first to the second line in \eqref{DriDef}. On average (i.e. in expectation) this will produce the desired drift term,
\begin{equation}
\av{\V{U}_{\text{Drift}}} = (k_B T) \partial_{\V{Q}} \cdot \sM N,
\end{equation}
as shown in more detail in Appendix \ref{appendix}.
\begin{algorithm}
\caption{Split RFD Euler--Maruyama (SEM) scheme \label{alg:sem}}
\begin{enumerate}
\item Generate random forces and torques for all bodies $p$,
\[
\V{W}_p^{\text{FT}} = k_{B} T \begin{bmatrix}
\frac{1}{L_p}\V{W}_p^{\text{f}} \\
\V{W}_p^{\tau}
\end{bmatrix}.
\]
\item Solve the constrained Stokes system,\label{step:solve1}
\begin{equation*}
\begin{bmatrix}
-\eta \pmb{\mathbb{L}} & \M{G} & -\sM S^{n} & 0 \\
-\M{D} & 0 & 0 & 0 \\
-\sM J^{n} & 0 & 0 & -\sM K^n \\
0 & 0 & -(\sM K^n)^{T} & 0
\end{bmatrix}
\begin{bmatrix}
\V{v}^{RFD} \\
\pi^{RFD} \\
{\V{\lambda}}^{RFD} \\
\V{U}^{RFD}
\end{bmatrix} =
\begin{bmatrix}
0 \\
0 \\
0 \\
\V{W}^{\text{FT}}
\end{bmatrix}.
\end{equation*}
\item Generate randomly-displaced configurations for all bodies $p$,
\begin{align*}
\V{q}_p^{\pm} &= \V{q}_p^{n} \pm \frac{\delta}{2} L_p \V{W}_p^{\text{f}} \\
\V{\theta}_p^{\pm} &= \text{Rotate}\left(\V{\theta}_p^{n},\pm \frac{\delta}{2} \V{W}_p^{\tau} \right).
\end{align*}
\item Solve the unconstrained Stokes system \label{alg:uncon}
\begin{equation*}
\begin{bmatrix}
-\eta \pmb{\mathbb{L}} & \M{G} \\
-\M{D} & 0
\end{bmatrix}
\begin{bmatrix}
\V{v}^{\#} \\
\pi^{\#}
\end{bmatrix} =
\begin{bmatrix}
\frac{1}{\delta} \left[\sM S \left(\V{Q}^{+}\right) - \sM S \left(\V{Q}^{-}\right) \right] {\V{\lambda}}^{RFD} \\
0
\end{bmatrix}.
\end{equation*}
\item Compute the random finite differences\label{alg:RFDbits}
\begin{align*}
\V{D}^{\sM K^{T}} &= \frac{1}{\delta} \left[ \sM K^{T} \left(\V{Q}^{+}\right) - \sM K^{T} \left(\V{Q}^{-}\right) \right] {\V{\lambda}}^{RFD} \\
\V{D}^{\sM K} &= \frac{1}{\delta} \left[ \sM K \left(\V{Q}^{+}\right) - \sM K \left(\V{Q}^{-}\right) \right] \V{U}^{RFD} \\
\V{D}^{\sM J} &= -\frac{1}{\delta} \left[ \sM J \left(\V{Q}^{+}\right) - \sM J \left(\V{Q}^{-}\right) \right] \V{v}^{RFD} \\
\V{D}^{\sM S} &= \sM J \V{v}^{\#}.
\end{align*}
\item Compute the velocities of the rigid bodies by solving the constrained Stokes system\label{step:solve2}
\begin{equation*}
\begin{bmatrix}
-\eta \pmb{\mathbb{L}} & \M{G} & -\sM S^{n} & 0 \\
-\M{D} & 0 & 0 & 0 \\
-\sM J^{n} & 0 & 0 & -\sM K^n \\
0 & 0 & -(\sM K^n)^{T} & 0
\end{bmatrix}
\begin{bmatrix}
\V{v}^{n} \\
\pi^{n} \\
{\V{\lambda}}^{n} \\
\V{U}^{n}
\end{bmatrix} =
\begin{bmatrix}
\sqrt{\frac{2 \eta k_B T}{\D{t}\D{V}}} \pmb{\mathds{D}} \V{W} \\
0 \\
\V{D}^{\sM K} - \V{D}^{\sM J} - \V{D}^{\sM S} \\
-\V{F}^{n} + \V{D}^{\sM K^T}
\end{bmatrix}.
\end{equation*}
\item Update the positions and orientations of all bodies $p$,
\begin{align*}
\V{q}_p^{n+1} &= \V{q}_p^{n} + \D{t} \V{U}_p^{n} \\
\V{\theta}_p^{n+1} &= \text{Rotate}\left(\V{\theta}_p^{n}, \D{t} \V{\omega}_p^{n} \right).
\end{align*}
\end{enumerate}
\end{algorithm}
\section{Numerical Results}\label{numerical}
To numerically investigate the RB-FIB algorithm, we have implemented it in the IBAMR code \cite{IBAMR}, freely available at \url{https://github.com/IBAMR}, by modifying existing codes developed for deterministic Stokesian suspensions in \cite{RigidMultiblobs}. In particular, we have reused the existing linear solvers in steps \ref{step:solve1} and \ref{step:solve2} of Algorithm \ref{alg:sem}. The scaling and convergence of the numerical linear algebra routines, as well as optimal parameters for the outer and inner Krylov and multigrid iterative solvers are discussed in \cite{RigidMultiblobs}. In all of the following numerical examples (including the Appendix), we use an absolute tolerance proportional to the time step size in the outer FGMRES solver required by steps \ref{step:solve1} and \ref{step:solve2} of the SEM scheme, as well as in the unconstrained GMRES solve required by step \ref{alg:uncon}.
As recommended for the 6-point immersed boundary kernel in \cite{RigidMultiblobs}, we take the spacing of the Lagrangian blobs $s \approx 3 h$, where $h$ is the Eulerian grid spacing. Time steps which generate unphysical configurations (such as a blob overlapping the wall) are rejected and the step repeated. However, these instances are very rare because we use repulsive potentials to prevent particle-particle and particle-wall overlaps, and because we employ the modifications of $\sM S$ and $\sM J$ near the boundaries introduced in Appendix D in \cite{RigidIBM}. In all of the following examples, the fluid is water at room temperature $T = 300$ K and viscosity $\eta = 1$ mPa s.
Data for the examples studied in this section was gathered on Northwestern University's QUEST computing cluster. Multiple independent trajectories are run for each case considered in both of the following examples as well as for Appendix \ref{sec:boom}, where we take as many time steps as would complete in a fixed amount of computation time (typically on the order of one week). We accounted for the variable lengths of the runs by using means and variances weighted by the trajectory length when computing relevant statistics. The error bars included in the figures of this section show $95\%$ confidence intervals, computed using the weighted means and variances.
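The length-weighted averaging described above can be sketched as follows. This is a minimal illustration in Python; the function name and the toy data are ours, not part of the IBAMR implementation.

```python
import numpy as np

def weighted_stats(values, lengths):
    """Combine per-trajectory estimates of an observable using
    trajectory lengths as weights (longer runs count more)."""
    v = np.asarray(values, dtype=float)
    w = np.asarray(lengths, dtype=float)
    w = w / w.sum()                      # normalized weights
    mean = np.sum(w * v)                 # length-weighted mean
    var = np.sum(w * (v - mean) ** 2)    # length-weighted variance
    return mean, var

# Illustrative data: three runs, the last twice as long as the others
m, s2 = weighted_stats([1.0, 2.0, 4.0], [100, 100, 200])
```

The weighted variance treats each trajectory's estimate as one sample; the confidence intervals quoted in the figures then follow from these weighted means and variances.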
In Appendix \ref{sec:boom} we examine a single colloidal boomerang suspended in a slit channel to demonstrate the first order weak accuracy of the SEM scheme and validate our implementation. This appendix also demonstrates the viability of the RB-FIB method for arbitrary particle shapes and shows the importance of numerically capturing the stochastic drift term in \eqref{LangevinN}.
In Sections \ref{twoSphere} and \ref{lattice} we numerically investigate how tight physical confinement affects hydrodynamic coupling between particles. In Section \ref{twoSphere} we use a simple example of two spheres confined in a cuboid in order to establish the spatial resolution required for the RB-FIB method to capture dynamic statistics with sufficient accuracy. In Section \ref{lattice} we numerically investigate a variant of an experiment reported in \cite{Bohlein2011} that measured the friction forces and resulting wave patterns in a hydrodynamically driven colloidal monolayer. In our setup the monolayer is confined in a narrow slit channel and bounded in the lateral directions by walls, mimicking an experiment performed in a microfluidic channel. Using the RB-FIB method we find some novel behavior in the propagation of the waves through the monolayer.
\subsection{Two Spheres in a Tight Cavity}\label{twoSphere}
In this section we investigate how spatial resolution affects the accuracy of the RB-FIB method. We simulate two neutrally buoyant spheres of hydrodynamic radius $R_{h} = 0.656 \ \mu$m, tightly confined in a $3.478 \times 1.739 \times 1.739 \ \mu \text{m}^3$ rectangular box. These physical dimensions ensure that the spheres are almost always in near contact with each other and/or a physical boundary, allowing us to highlight that the RB-FIB method can tackle problems in which confinement plays an important role in the dynamics.
To reduce the severe time step size restriction required to ensure that there are no particle-particle or particle-wall overlaps, we introduce a soft repulsive potential between the two particles as well as between the particles and the wall of the form
\begin{equation}\label{Usoft}
\Phi(r) = \Phi_0
\begin{cases}
1 + \frac{d-r}{b} & r < d \\
\text{exp}\left( \frac{d-r}{b} \right) & r \geq d
\end{cases}.
\end{equation}
For the interparticle repulsion, $d = 2 R_H$ and $r$ is the distance between the particle centers. For the repulsion between a particle and a wall, $d = R_H$ and $r$ is the distance between a particle center and a wall. We take $b = 0.1 R_H$ and $\Phi_0 = 4 k_B T$ for both the particle-particle and particle-wall potentials as in \cite{MagneticRollers}, as this choice ensures that the time scale associated with the steric repulsion is not much smaller than the diffusive time scale, while also maintaining a low probability of unphysical configurations. We use a dimensionless time step size $\Delta \tau = \frac{k_B T}{6 \pi \eta R_{h}^3} \D{t} = 0.0044$ as this was found to be small enough to ensure that the temporal integration errors are smaller than the statistical errors.
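For reference, the potential \eqref{Usoft} can be transcribed directly; the sketch below (in Python, with illustrative parameter values) is not part of the production code. The two branches match in both value and slope at $r = d$, so the resulting force is continuous.

```python
import math

def soft_potential(r, d, b, phi0):
    """Soft repulsive potential of the form above: linear for r < d,
    exponentially decaying for r >= d (C^1 continuous at r = d)."""
    if r < d:
        return phi0 * (1.0 + (d - r) / b)
    return phi0 * math.exp((d - r) / b)

# Illustrative values: d = 2 R_H, b = 0.1 R_H, phi0 = 4 (in units of k_B T)
R_H = 0.656                       # hydrodynamic radius in microns
phi_contact = soft_potential(2 * R_H, 2 * R_H, 0.1 * R_H, 4.0)  # equals phi0
```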
We discretize the domain using a grid spacing $h = 0.0543 \times (1,2,4) \mu$m and discretize the particles with $162, 42, 12$ approximately equally spaced blobs respectively, as shown in figure \ref{fig:TwoSphere}. The geometric radius of the particles is determined according to table I in \cite{RigidMultiblobs} in order to maintain a constant hydrodynamic radius $R_{h} = 0.656 \mu$m as the resolution is refined. The physical domain is taken to be of dimensions $2L \times L \times L$ where $L = 8 \times 0.0543 \mu$m.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{./Long_Stack.pdf}
\caption{Comparison of different spatial resolutions for two tightly confined spherical colloids where each case is shown in an adjacent cell. Blobs are depicted with their appropriate hydrodynamic radii and the grid spacing of the Eulerian mesh can be seen on the walls of each cell. From left to right, the grid spacing is $h = 0.0543 \times (4,2,1) \mu$m and the spheres are discretized using $12,42,162$ blobs.}
\label{fig:TwoSphere}
\end{figure}
In Appendix \ref{sec:boom} we study the temporal integration errors by examining the marginals of the equilibrium (static) Gibbs--Boltzmann (GB) distribution for a boomerang confined in a slit channel; we performed similar tests for the two-sphere example studied here and found negligible errors in the static distributions for all resolutions. This is expected for any equilibrium average for sufficiently small time step sizes because only dynamic quantities are affected by the resolution of the hydrodynamics. Here we investigate dynamic statistics in the form of the equilibrium mean squared displacement (MSD) block matrix with tensor blocks
\begin{equation}
\V{\text{MSD}}_{pr}\left(t\right) = \av{\left( \V{q}_p (t) - \V{q}_p (0) \right) \left( \V{q}_r (t) - \V{q}_r (0) \right)^{T} },
\end{equation}
where the average is taken over equilibrium trajectories and the subscript $p,r = 1,2$ denotes the particle.
To compute the components of the MSD we use the SEM scheme to simulate several independent equilibrium trajectories for each of the three resolutions.
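As an illustration of this estimator (not the actual post-processing code), the MSD blocks can be computed from a stored trajectory with time-origin averaging; the array name and shape conventions here are our assumptions.

```python
import numpy as np

def msd_blocks(traj, lags):
    """MSD tensor blocks from an equilibrium trajectory.

    traj : array of shape (T, P, 3), positions of P particles at T times
    lags : iterable of integer time lags
    Returns {lag: array of shape (P, P, 3, 3)}, averaged over time origins.
    """
    T = traj.shape[0]
    out = {}
    for lag in lags:
        d = traj[lag:] - traj[:T - lag]          # displacements over this lag
        # outer products over Cartesian components, averaged over origins
        out[lag] = np.einsum('tpi,trj->prij', d, d) / (T - lag)
    return out
```

The diagonal blocks ($p = r$) give the self MSD shown in Fig. \ref{fig:TwoSphereMSD}, while the off-diagonal blocks give the cross correlations.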
In Fig. \ref{fig:TwoSphereMSD} we compare the components of the MSD for different spatial resolutions. Because of the symmetry in the problem, $\V{\text{MSD}}_{11} = \V{\text{MSD}}_{22}$ and $\V{\text{MSD}}_{pp}^{yy} = \V{\text{MSD}}_{pp}^{zz} \equiv \V{\text{MSD}}_{pp}^{\perp}$, so we show the average of the equivalent components of the MSD tensor.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{./TwoSphereMSD.pdf}
\caption{Components of the self MSD ($\V{\text{MSD}}_{11} = \V{\text{MSD}}_{22}$) normalized by the free space mobility for different spatial resolutions (see legend). Dashed lines indicate $95\%$ confidence intervals. Also shown as a dashed black line is the normalized short time diffusion component computed according to \eqref{SErel}. The dotted black line shows the asymptotic long-time value as computed using \eqref{asymRef}. The insets in panels (a),(b) show the respective components of the MSD at short times. (a) MSD in the $\parallel$ (long) direction for the two spheres. (b) MSD in the $\perp$ ($y,z$) directions.}
\label{fig:TwoSphereMSD}
\end{figure}
Because the particles are completely confined, every component of the translational MSD approaches an asymptote
\begin{equation} \label{asymRef}
\lim_{t \rightarrow \infty} \V{\text{MSD}}_{pr}\left(t\right) = \av{(\V{z}^p_1 - \V{z}^p_2) (\V{z}^r_1 - \V{z}^r_2)^{T}},
\end{equation}
where $\V{z}^{p}_{1}, \V{z}^{p}_{2}$ are two independent samples from the equilibrium distribution of particle $p$, generated from an MCMC method. The appropriate asymptotes are plotted in Figs. \ref{fig:TwoSphereMSD}(a),(b) and we can see that the correct asymptotic MSD is approached using all three resolutions considered. This is again expected because the asymptotic MSD is controlled by the Gibbs-Boltzmann equilibrium distribution and not by the (hydro)dynamics.
The Stokes--Einstein relation gives the short time diffusion tensor
\begin{equation} \label{SErel}
\V{D}_{pr} = \lim_{t \rightarrow 0} \frac{\V{\text{MSD}}_{pr} \left(t\right)}{t} = 2 k_{B} T \av{\sM N_{pr}^{(tt)}}_{GB},
\end{equation}
where the superscript in $\av{\sM N_{pr}^{(tt)}}_{GB}$ refers to the translation--translation block of the body mobility matrix $\sM N$ and $\av{\cdot}_{GB}$ denotes an average with respect to the Gibbs--Boltzmann (GB) distribution. To compute $\av{\sM N}_{GB}$ we use $642$ blobs to discretize each sphere ($h = 0.5 \times 0.0543 \mu$m) and compute a sample mean of $\sM N$ over equilibrium configurations sampled using a Markov Chain Monte Carlo (MCMC) method \footnote{We generate enough samples to measure each component of $\av{\sM N_{pr}^{(tt)}}_{GB}$ to within $1 \%$ statistical error with $95\%$ confidence.}.
The insets of figure \ref{fig:TwoSphereMSD} show that the short-time Stokes--Einstein relation \eqref{SErel} is accurately maintained for both of the finer resolutions but not for the coarsest resolution. Theoretical results are unavailable for the self-diffusion coefficient at intermediate times. We see in Figs. \ref{fig:TwoSphereMSD}(a),(b) close agreement between the two higher resolutions ($162$ and $42$ blobs), but with some visible deviations for the lowest resolution case ($12$ blobs), indicating insufficient resolution.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{./TwoSphereMSDcross.pdf}
\caption{Components of the cross MSD ($\V{\text{MSD}}_{12} = \V{\text{MSD}}_{21}$) normalized by the free space mobility for different spatial resolutions (see legend). Dashed lines indicate $95\%$ confidence intervals. Also shown as a dashed black line is the normalized short time diffusion component computed according to \eqref{SErel}. The dotted black line shows the asymptotic long-time value as computed using \eqref{asymRef}. The insets zoom in on the short-time behavior. (a) MSD in the $\parallel$ (long) direction. (b) MSD in the $\perp$ ($y,z$) directions.}
\label{fig:TwoSphereMSDcross}
\end{figure}
The authors of \cite{TwoSphereCouple} found analytically and experimentally that two nearby colloidal spheres were strongly hydrodynamically coupled but the coupling weakens significantly near a confining boundary. In our example we see a competition of influence: the two spheres are always close to each other and hence their motion should be coupled, but they are also always close to a physical boundary which would decouple their motion. To get a clearer picture of this competition, Fig. \ref{fig:TwoSphereMSDcross} shows the non-vanishing components (the $xx$ or $\parallel$ and $yy,zz$ or $\perp$ components) of $\V{\text{MSD}}_{1,2}$. The inset of Fig. \ref{fig:TwoSphereMSDcross}(a) shows that the short time diffusive motion in the $\parallel$ direction between the two spheres (given by \eqref{SErel}) is very weakly coupled. However, the full panel of Fig. \ref{fig:TwoSphereMSDcross}(a) shows a much stronger coupling in the parallel motion of the spheres for longer times. Therefore, the influence of another nearby particle eventually dominates over the influence of the walls, which initially decouples the particles' motion. In Fig. \ref{fig:TwoSphereMSDcross}(b) we see that the short time diffusion in the $\perp$ ($y,z$) directions between the spheres is noticeably anti-correlated. After $t \approx 4$ s the behavior of the MSD inflects and decays to nearly zero for larger times, and the perpendicular motion of the spheres effectively decouples due to the confinement. The results in Fig. \ref{fig:TwoSphereMSDcross} indicate that 12 blobs per sphere is not sufficient to accurately predict the time correlation functions for more than one particle, and at least 42 blobs per sphere are required for particles this close to each other and the walls.
\subsection{Friction in a Colloidal Monolayer}\label{lattice}
In \cite{Bohlein2011}, the authors performed an experiment in which a colloidal monolayer was hydrodynamically driven across a bottom wall on which a substrate potential was generated by optical traps. This potential mimics the effect of corrugation of the wall, which is a key contributor to the effective friction with the wall. In \cite{Bohlein2011} the system was kept quasi--two--dimensional by forcing the monolayer to remain near the wall using a vertically incident laser to form a confining potential that dramatically reduced out of plane motion. The colloids used in the experiment were negatively charged polystyrene spheres suspended in water. Due to their negative charge, the particles spontaneously formed a stable 2D triangular crystal \cite{YukawaLattice20KT}. The corrugation potential used in the experiments reported in \cite{Bohlein2011} took the form of a periodic lattice with 3-fold symmetry around its minima. The colloidal crystal and the corrugation potential are called commensurate if the lattice constants agree and incommensurate otherwise; both cases were considered in \cite{Bohlein2011}.
In the experiments reported in \cite{Bohlein2011}, the sample cell was translated to generate a flow field and hence a fluid drag force on the colloids. This lateral drag force on the crystal served as a control parameter and, under commensurate conditions, the authors observed a critical translation velocity (and hence lateral force) at which the colloidal crystal became unpinned from the corrugation potential. This critical value represents the transition of the crystal from static friction to kinetic as it becomes unpinned. Just above this critical velocity, they observed localized density variations in the colloidal crystal taking the form of traveling kink solitons.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{./Colloid_Lattice_Cartoon.pdf}
\caption{A diagram of a typical simulation configuration that we consider in this section. Quasi--two dimensionality is achieved through screened Coulombic interactions between the particles (represented here using their 42--blob discretizations) and the two walls in the $z$-direction. Walls at the boundaries in the $y$-direction are also shown. The physical boundaries themselves are represented as corrugated sheets (transparency is added in the $xy$-direction for visual clarity) to represent the periodic substrate potential used in the simulations (note however that this is simply for visualization and the physical boundaries are indeed flat). A drag force is applied to the colloidal monolayer by prescribing a wall velocity $v_{\text{wall}}$ in the positive $x$ direction (flow direction shown as a black arrow). Boundaries of the domain in the $x$-direction are taken to be periodic to allow for continuous movement of the monolayer.}
\label{fig:lattice}
\end{figure}
In this section, we will use the RB-FIB method to numerically investigate a novel variant of this experiment wherein the quasi--two--dimensionality of the problem (essential for the existence of a stable 2D monolayer) is achieved by confining the monolayer in a slit channel between two walls, see illustration in figure \ref{fig:lattice}. This type of confinement is easy to realize in experiments using a microchannel. Additionally, this example serves to demonstrate an important feature of the RB-FIB method: not only can it account for tight confinement, but it can also easily account for external flow fields generated by the imposed boundary conditions.
\subsubsection{Physical Parameters}
The electrostatic repulsion between the colloids is accounted for by a pairwise Yukawa potential of the form
\begin{equation} \label{YukawaPot}
U_{\text{pair}}(r) = U_0 \frac{\exp\left(\frac{D_H - r}{\lambda_D} \right)}{r/ D_H},
\end{equation}
where $U_0$ is the repulsion strength, $\lambda_D$ is the Debye screening length, and $r$ is the distance between particle centers. We set the screening length $\lambda_D=0.16 \mu$m to the value reported in \cite{Bohlein2011}. The repulsion strength $U_0$ is a free parameter which must be chosen to ensure that a stable colloidal crystal is formed in the absence of a trapping potential, and that this crystal remains stable in the presence of the thermal fluctuations and background flow. In agreement with \cite{YukawaLattice20KT,YukawaLattice80KT}, we find that choosing $20 k_{B} T \leq U_0 \leq 80 k_{B} T$ is reasonable as the results presented later in this section were not strongly affected by taking $U_0$ to be either of these extremes; we will fix $U_0 = 20 k_{B} T$ henceforth. To reduce the probability of blob--wall overlaps we include a soft repulsive potential between each blob and all of the physical boundaries of the domain. This potential takes the form of \eqref{Usoft}, where $d = a$, $b = 0.1 a$, and $\Phi_0 = 4 k_B T$, and $r$ is the distance between a blob and a wall.\footnote{For spherical particles we could have put a wall-repulsive potential on each sphere's center rather than each blob and avoided small spurious torques on the particles. However this would not generalize easily to arbitrary particle shapes.} We use a dimensionless steric time step size $\Delta \tau = \D{t} \left(\Phi_0/(6 \pi \eta a^2 b)\right) = 0.138$ which we found to be sufficiently small to make particle--wall overlaps infrequent.
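The Yukawa repulsion \eqref{YukawaPot} reduces to $U_0$ at contact ($r = D_H$) and decays over the screening length; a direct transcription (illustrative only, with the parameter values quoted above):

```python
import math

def yukawa(r, U0, D_H, lam_D):
    """Screened electrostatic (Yukawa) pair repulsion between colloids."""
    return U0 * math.exp((D_H - r) / lam_D) / (r / D_H)

# Parameters from the text: U0 = 20 k_B T, R_H = 1.95 um, lambda_D = 0.16 um
D_H = 2 * 1.95                              # particle diameter in microns
u_contact = yukawa(D_H, 20.0, D_H, 0.16)    # equals U0 at contact
```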
To capture the effects of corrugation, each colloid feels the potential \cite{ColloidSheetFriction}
\begin{equation}
U_{\text{corr}}\left(\V{X}_{2D}\right) = \frac{U_{0}^{\text{corr}}}{2} \left\{ 3 - \cos \left( \left[ \V{k}_1 - \V{k}_2 \right] \cdot \V{X}_{2D} \right) - \cos \left( \V{k}_1 \cdot \V{X}_{2D} \right)- \cos \left( \V{k}_2 \cdot \V{X}_{2D} \right) \right\},
\end{equation}
where a particle's $y$--symmetrized 2D position is $\V{X}_{2D} = [x, y-L_y / 2]^{T}$.
Here the scaled lattice directions are
\begin{equation}
\V{k}_1 = \frac{4 \pi}{\sqrt{3} s_{\text{Lattice}}} \begin{bmatrix}
-\frac{\sqrt{3}}{2} \\
-\frac{1}{2}
\end{bmatrix}, \hspace{1cm}
\V{k}_2 = \frac{4 \pi}{\sqrt{3} s_{\text{Lattice}}} \begin{bmatrix}
-\frac{\sqrt{3}}{2} \\
\frac{1}{2}
\end{bmatrix}.
\end{equation}
Following \cite{Bohlein2011}, we take the average particle separation to be $s_{\text{Lattice}} = 5.7 \mu$m and the particle radius $R_H = 1.95 \ \mu$m. In the absence of a substrate potential the spacing of the colloidal crystal is controlled by the number of particles and the domain dimensions. To ensure that the spacing of the substrate potential $s_{\text{Lattice}}$ is commensurate with the spacing in the colloidal crystal, we use 272 particles and take $L_x = 16 s_{\text{Lattice}}$, $L_y = 15 s_{\text{Lattice}}$. The domain is taken to be periodic in the direction of the applied flow field ($x$ direction) and no--slip boundaries are used in every other direction. While the periodicity in the $x$ direction introduces some unphysical artifacts, $L_x$ is large enough to produce kink solitons, which were found to have a support of $\approx 8 s_{\text{Lattice}}$ \cite{Bohlein2011}. The width of the domain in the $z$ direction is taken to be $L_z = 1.28 D_H$, where $D_H = 2 R_H$ is the particle diameter. This is to ensure the fairly strict quasi--two dimensionality required for colloidal crystals to form.
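To make the lattice geometry concrete, the substrate potential and scaled lattice directions above can be evaluated as follows. This is a sketch; only $s_{\text{Lattice}}$ is taken from the text, and $U_0^{\text{corr}}$ is left as a free parameter. Note that $U_{\text{corr}}$ vanishes at the origin and at every lattice site, e.g. at $(s_{\text{Lattice}}, 0)$.

```python
import numpy as np

S_LAT = 5.7   # average particle separation in microns (from the text)
k1 = (4 * np.pi / (np.sqrt(3) * S_LAT)) * np.array([-np.sqrt(3) / 2, -0.5])
k2 = (4 * np.pi / (np.sqrt(3) * S_LAT)) * np.array([-np.sqrt(3) / 2,  0.5])

def u_corr(X, U0c=1.0):
    """Periodic substrate potential with 3-fold symmetry around its minima."""
    X = np.asarray(X, dtype=float)
    return 0.5 * U0c * (3.0 - np.cos((k1 - k2) @ X)
                            - np.cos(k1 @ X) - np.cos(k2 @ X))
```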
To drive the colloidal monolayer, we move the top, bottom and side walls with velocity $v_{\text{wall}}$ along the $x$ axis, i.e., we impose a fluid velocity $\left(v_{\text{wall}},0,0\right)$ on all walls as a boundary condition for the Stokes equations
\footnote{In this simple case, the flow created by the wall slip is simple constant plug flow, so one could move/shift the corrugation potential with velocity $\left(-v_{\text{wall}},0,0\right)$ instead of imposing a velocity on the walls. However, in more general situations, e.g., (time-dependent) pressure-driven flow in square channels, it is much simpler and more flexible to let the Stokes solver compute the fluid flow generated by the boundaries.}.
We non--dimensionalize the control parameter $v_{\text{wall}}$ using the work required to move one colloid one lattice site in an unbounded domain,
\begin{equation}
W_{\text{wall}} = \frac{6 \pi \eta R_H v_{\text{wall}} \cdot s_{\text{Lattice}} }{U_{0}^{\text{corr}}}.
\end{equation}
In what follows, we vary $W_{\text{wall}} = \left\{ 1.63, 1.86, 2.1, 2.33, 2.56\right\}$ and investigate the dynamics of the colloidal monolayer shown in figure \ref{fig:lattice}.
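For orientation, this nondimensionalization can be inverted to estimate the physical wall speed corresponding to a given $W_{\text{wall}}$. In the sketch below only $\eta$, $R_H$, and $s_{\text{Lattice}}$ come from the text; the corrugation strength is a purely hypothetical placeholder, since its numerical value is not fixed here.

```python
import math

ETA = 1e-3        # Pa s, water at room temperature (from the text)
R_H = 1.95e-6     # m, hydrodynamic radius
S_LAT = 5.7e-6    # m, lattice spacing

def W_of_v(v_wall, U0_corr):
    """Dimensionless driving W = 6*pi*eta*R_H*v_wall*s_Lattice / U0_corr."""
    return 6 * math.pi * ETA * R_H * v_wall * S_LAT / U0_corr

def v_of_W(W_wall, U0_corr):
    """Wall speed (m/s) realizing a given dimensionless driving."""
    return W_wall * U0_corr / (6 * math.pi * ETA * R_H * S_LAT)

kBT = 1.380649e-23 * 300.0        # J, thermal energy at T = 300 K
U0c = 100.0 * kBT                 # hypothetical corrugation strength
v = v_of_W(2.10, U0c)             # speed at one of the values studied
```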
\subsubsection{Discretization Parameters}
We discretize the colloids using 42 blobs, as this resolution was found in Section \ref{twoSphere} to provide sufficient accuracy for the dynamics. The fluid grid spacing is $h=s_{\text{Lattice}}/16$ so that the substrate potential aligns with the periodic domain size in the $x$ direction. The blob spacing $s = 0.95 \ \mu$m was chosen so that the particles' hydrodynamic radius is $R_H = 1.95 \ \mu$m (geometric radius $R_{G} = 1.7380 \ \mu$m) and $s \approx 3 h$, as recommended in \cite{RigidMultiblobs}.
\subsubsection{Numerical Observations of Kinks in a Colloidal Monolayer}
In this section we vary the dimensionless wall velocity $W_{\text{wall}}$ to investigate the critical value at which static friction is broken and the colloidal monolayer begins to slide through the thin `corrugated' channel via kink solitons. To measure the point at which the colloidal monolayer breaks free from the corrugated substrate potential, we simply determine whether any particle displaces more than $s_{\text{Lattice}}$ from its initial position. Partial depinning of the colloidal crystal is observed at $W_{\text{wall}} = 1.86$, wherein only some of the particles are sufficiently displaced. Full depinning is observed for all $W_{\text{wall}} > 1.86$, wherein all of the particles are sufficiently displaced after some amount of time. Brownian motion is crucial to the formation and propagation of kinks. While it is certainly possible for the monolayer to depin in the absence of thermal fluctuations, the tiny `kicks' provided by the fluctuating fluid activate a particle's transitions between potential wells; indeed, no depinning was observed in complementary deterministic simulations for any of the values of $W_{\text{wall}}$ considered.
As noted in \cite{Bohlein2011}, a kink is formed when one particle escapes the potential well it is confined to through a combination of thermal forces as well as hydrodynamic drag from the background flow. Once a particle escapes, it enters the neighboring potential well in the direction of the flow. Once confined to this new well, which also typically has a particle trapped in it, a combination of Yukawa and steric repulsion forces the original occupant of the well into its neighboring well, and the process repeats. We identify as a kink the propagation of particles escaping their well along the direction of flow.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{./Colloid_Kink_multirow.pdf}
\caption{(a) Three rows of colloidal particles in the confined monolayer, highlighted in different color gradients. The rows are at a distance of one (yellow and green), two (green and blue), or three (yellow and blue) rows apart. Note that other particles as well as the top and side walls are not drawn here for visual clarity. (b) Trajectories in the $x$ direction of the rightmost row (blue) of particles shown in panel (a). The color of the trajectories corresponds to the particle color. The black contours which are orthogonal to the particle trajectories are traveling kinks. These contours connect the local maxima of the velocity of each particle, averaged over 50 time steps in order to filter out high frequency fluctuations due to thermal motion. (c) Comparison of the kink propagation (black lines in panel (b)) for each of the three rows of particles, showing only a small degree of coordination between different rows.}
\label{fig:kink}
\end{figure}
Figure \ref{fig:kink} shows the propagation of kinks in three nearby rows of particles along the direction of flow; see the SI for a movie. Figure \ref{fig:kink}(b) shows the $x$ position of each particle from the `blue' row over time. The transverse black contour lines show local extrema in the velocities of the particles (averaged over 50 time steps to filter out the Brownian velocities), which are seen to correspond to the times at which a particle jumps to another lattice site. The bends in the black velocity contours show a finite speed of propagation for the kink and the `S' shaped profile is due to the periodicity in the $x$ direction. Figure \ref{fig:kink}(c) shows the contours of the maximum velocity for the particle rows highlighted in panel (a). The prevailing `S' shape in all of the contours in panel (c) demonstrates that there are propagating kinks in all of the rows of particles in the monolayer. However, it is difficult to appreciate the correlations in the motion of the kinks in nearby rows in these results.
\begin{figure}
\centering
\subfloat{\includegraphics[width=\textwidth]{./Kink_Displacements_Top.pdf}}
\hfill
\subfloat{\includegraphics[width=\textwidth]{./Kink_Displacements_Bottom.pdf}}
\caption{The top row of panels shows the positions of the colored particles at three different times for $W_{\text{wall}} = 2.10$. The hopping (displacement) times for the colored particles are shown in the bottom row of panels for each particle for different values of $W_{\text{wall}}$. The row number on the $x$ axes corresponds to the $y$ position of the particles, increasing from left to right, and a marker is placed along the time axis whenever the given particle displaces more than a lattice width $s_{\text{Lattice}}$. The first, second, third, etc. hopping times of the particles are connected with a black line. The black lines develop a clear `M' shape in time for each case considered, indicating that the particles near the $y$ boundaries displace first, followed by the particles in the middle.}
\label{fig:kink_wave}
\end{figure}
In figure \ref{fig:kink_wave} we investigate how kinks influence other kinks in the direction transverse to their motion. The authors of \cite{Bohlein2011} observed that kinks also extend perpendicular to the direction of the force with a small lag in time. Their system, however, was large enough to be considered unconfined in the $y$ direction. Here we examine the correlation of kinks between rows of particles, where a row is defined by binning the particles' $y$ coordinates with bin width $2 s_{\text{Lattice}}$. By selecting a representative particle from each row in the monolayer, as seen in the top panels of figure \ref{fig:kink_wave}, we can track the hopping or `displacement' times for each row. That is, we track the time at which the representative particle of a row has been displaced by an integer multiple of $s_{\text{Lattice}}$. The bottom row of panels in Fig. \ref{fig:kink_wave} shows the hopping times for representative particles shown in the top row of panels, where the color of the markers corresponds to the color of the particle. The displacement times of the particles are grouped based on how many lattice sites the particles have displaced. That is, every particle's first displacement time is connected, as is every particle's second, and so on.
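The hopping-time bookkeeping described above amounts to recording when the unwrapped displacement of a representative particle first crosses successive integer multiples of $s_{\text{Lattice}}$; a minimal sketch, with illustrative names:

```python
import numpy as np

def hopping_times(x_traj, dt, s_lat):
    """Times at which a particle's unwrapped x displacement first exceeds
    the 1st, 2nd, ... integer multiple of the lattice spacing s_lat."""
    disp = np.asarray(x_traj, dtype=float) - x_traj[0]
    times, n = [], 1
    for step, d in enumerate(disp):
        while d >= n * s_lat:     # may record several hops at one step
            times.append(step * dt)
            n += 1
    return times
```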
The bottom row of panels in Fig. \ref{fig:kink_wave} shows that for every value of $W_{\text{wall}} > 1.63$, the first rows to be displaced are the rows closest to the walls which bound the domain in the $y$ direction. For $W_{\text{wall}} = 1.86$, the only particles which become displaced are those in the rows nearest these walls and their immediate neighbors. This is likely because of the additional hydrodynamic screening provided by the walls, as we investigated in Section \ref{twoSphere}. By analogy with what was observed in \cite{Bohlein2011}, one might expect that the next rows to be displaced are the immediate neighbors of the rows nearest the walls, and then their neighboring rows, and so forth, with the middle rows displacing last. To the contrary, we see that one of the middle rows displaces soon after the displacement of the rows nearest the walls. This is a surprising result and may stem from the lateral ($y$) confinement of the system. A more thorough investigation of the unusual collective dynamics of kinks in this system is deferred to future work.
\section{Conclusion} \label{sec:conc}
In this work, we described the Rigid Body Fluctuating Immersed Boundary (RB-FIB) method to simulate the Brownian dynamics of arbitrarily shaped rigid particles in fully confined domains. The fluctuating solvent was treated explicitly, allowing for well-known staggered finite difference schemes to be used and general boundary conditions to be applied on the boundaries of the computational domain. We designed an efficient Split--Euler--Maruyama temporal integrator that uses a nontrivial combination of random finite differences to capture the stochastic drift appearing in the overdamped Langevin equation. We implemented this method in the IBAMR software infrastructure, which is freely available at \url{https://github.com/IBAMR}.
We studied the dynamical correlation functions of two tightly confined spheres in close proximity to each other and physical walls, and examined the effect of spatial resolution on the accuracy of both the short- and long-time equilibrium mean-square displacement.
We also used the RB-FIB method to model a quasi--2D colloidal crystal confined in a narrow slit channel. The layer was hydrodynamically driven across a commensurate periodic substrate potential mimicking the effect of a corrugated wall. We observed partial and full depinning of the colloidal monolayer from the substrate potential above a certain wall speed, consistent with the transition from static to kinetic friction observed in \cite{Bohlein2011}. Further, we observed the propagation of kink solitons parallel to the direction of flow. These kinks extended along the colloidal monolayer in the direction transverse to the flow. We observed a curious `M' pattern in the particle displacements across the domain wherein particles nearest the boundaries of the domain \emph{and} particles in the middle of the domain are the first to be displaced.
The SEM scheme presented here is based on the Euler--Maruyama method and as such is first order weakly accurate (as shown both numerically and theoretically in Appendix \ref{sec:boom} and \ref{appendix}), as well as only first order deterministically accurate. Higher order deterministic accuracy has been shown to be very beneficial in designing temporal integrators for Brownian dynamics of rigid particles in half-space domains \cite{BrownianMultiblobSuspensions}. The SEM scheme can be extended to deterministically second-order accurate Adams--Bashforth, trapezoidal and midpoint variants, but whether or not the additional computational cost incurred by these methods is justified should be investigated in future work.
The RB-FIB method we present here has notable advantages over other methods. First, it can handle a variety of combinations of boundary conditions in different directions seamlessly. Second, it can handle colloidal particles of complex shapes with varying levels of spatial resolution (fidelity), i.e., with controllable accuracy. Third, the method scales linearly in the number of particles and fluid grid cells. Fourth, the explicit solvent approach also facilitates coupling to additional physics, including elastic bodies handled using the immersed boundary method, non-Newtonian or multicomponent solvents, and electrohydrodynamics.
However, there are also some disadvantages compared to other methods. First, because the method uses an explicit representation of the fluid domain, infinite domains cannot be considered in the present formulation. Second, because the method scales linearly in the number of \emph{fluid} and particle degrees of freedom, phenomena involving only a few particles and large fluid domains are particularly inefficient to simulate using the RB-FIB method. Third, lubrication forces between nearly touching particles are not accurately handled for realistic number of blobs per particle, and the method becomes less efficient (in terms of both memory use and computing time) when there are many blobs per particle. Some of these shortcomings can be addressed by using explicit Green's functions \cite{BrownianMultiblobSuspensions}, or specializing to spherical particles \cite{Galerkin_Wall_Spheres,AutophoreticSpheres_Adhikari,BrownianDynamics_OrderNlogN}.
One major advantage of the RB-FIB method not exploited in this work is its ability to include fluid body forces in the momentum equation. This allows for the inclusion of a range of multiphysics phenomena whose coupling with the fluid is via a fluid body force. One example is electrohydrodynamic phenomena, for which the body force is the divergence of the Maxwell stress tensor \cite{Bhalla2014FullyRI}. Since electrostatic fields are non-dissipative, no additional effort is needed to account for thermal fluctuations. However, charge separation creates a thin Debye layer near solid surfaces which is difficult to resolve numerically, but can be approximated using asymptotics \cite{ElectroneutralAsymptotics_Yariv}. The ability of the RB-FIB method to prescribe active slip velocities on the surfaces of the particles \emph{and} the boundaries can be used to account for electroosmotic slip flows without resolving the Debye layers.
\begin{acknowledgments}
This work was supported by the MRSEC Program of the National Science Foundation under Award Number DMR-1420073.
This work was also partially supported by the National Science Foundation under collaborative
award DMS-1418706 and by DMS-1418672. We thank Northwestern University's Quest high performance computing service for the resources used to perform the simulations in this work. Brennan Sprinkle and Aleksandar Donev were supported in part by the Research Training Group in Modeling and Simulation funded by the National Science Foundation via grant RTG/DMS-1646339. Amneet Bhalla acknowledges research support provided by the San Diego State University.
\end{acknowledgments}
\section{Introduction}
The generation of realistic examples of everyday objects is a challenging and interesting problem which relates to several research fields such as geometry, computer graphics, and computer vision. The ability to capture the essence of a class of objects is key to the task of generating diverse datasets which may be used in turn during the training of many machine learning based algorithms. The main challenge posed by the task of data generation is to construct a model that is able to generalize to many variations while still maintaining high detail and quality. Furthermore, the challenge of generating geometric data is even greater, since both the geometry and texture of an object must be synthesized while taking into account the underlying relations between them.
In this work, we propose to learn the latent space of 3D textured objects. We focus our efforts on human faces, and show that by using a canonical transformation that maps geometric data to images, we are able to learn the distribution of such images via the GAN framework. By representing both the texture and geometry of the face as transformed geometric images, we can learn the underlying distribution of faces, and later generate new faces at will. The generation of realistic human faces is a useful tool with applications in face recognition, puppetry, reconstruction, and rendering. Our main contributions are the proposal of a new model for 3D human faces which is composed in the 2D image domain, as well as the modeling of the relation between texture and geometry, further improving realism. By generating geometries and textures using state-of-the-art GANs, it is possible to create highly detailed data samples while maintaining the ability to generalize to unseen data, two desirable properties that are often at odds.
While deep learning and convolutional networks have revolutionized many fields in recent years, they have been mostly employed on structured data which is intrinsically ordered. Arranged data such as audio, video, images, and text can be processed according to the order of samples, frames, pixels or words. This inherent ordering permits the application of convolution operations which are the main building block of convolutional networks, a powerful and popular variant of deep networks. Contrary to typical parameterized data, geometric data represented by two dimensional manifolds lacks an intrinsic parameterization and is therefore more difficult to process via convolutional networks. This important class of data is crucial to the task of modeling our world as most solid objects can be represented by a closed manifold accompanied by a texture overlay.
Recently, geometric data has grown dramatically in availability as more accurate and affordable acquisition devices have come into use. This abundance of data has attracted the attention of the computer vision and machine learning communities, leading to many new approaches for modeling and processing of geometries. One family of techniques for geometric data processing aims to define new operators which can be applied directly to the manifold and are able to replace, to some extent, the convolution operation within the processing pipeline. Other methods attempt to process geometries in the spectral domain or represent them in voxel space. These families of methods each have their merits but suffer from other issues such as loss of generality and memory inefficiency. In contrast, we propose to transform our geometric data via a canonical mapping into two-dimensional gridded data. This allows us to process the geometric data as images. While this approach on its own is not new, we show that by careful construction of the transformed dataset we are able to harness the power of convolutional networks with little loss of data fidelity. Furthermore, we are able to design our transformation process in order to control the distortion, thus reducing it in important areas while spreading it to the non-essential areas of the data. Finally, we propose to encode both the geometry and texture as mapped images, which means the processing pipeline remains identical for both cases.
\section{Related work}
Data augmentation is a common practice within the machine learning community. By applying various transformations to existing data samples, it is possible to simulate a much larger dataset than is available and introduce robustness to transformations. A more advanced method for data augmentation takes into account the geometry of the scene. The technique, which we term geometric data augmentation, consists of a geometry recovery stage, after which a transformation is performed on the geometry, and finally a new image is created by projecting the geometry. In \cite{masi2016we}, the authors show that by performing geometric data augmentation on a dataset of facial images they are able to reach state-of-the-art results on difficult facial recognition benchmarks. Despite its proven usefulness, geometric augmentation still lacks the ability to create completely new data samples outside the scope of the dataset.
A complementary method to data augmentation is data generation. By constructing a high-quality model for data generation, it is possible to produce an infinitely large dataset. In addition, some models may permit control over the characteristics of each data sample. Within the domain of faces, this would mean control over parameters such as age, gender, expression, pose, and lighting conditions. When dealing with image data, a recent popular approach is to use a GAN \cite{goodfellow2014generative}, which is in essence a neural network with a trainable loss function. While this class of methods is well suited for images, reformulation in the context of geometry is more challenging, and several competing approaches exist in this field. \cite{Gecer_2018_ECCV} and \cite{shrivastava2017learning} propose to construct samples from a low-quality linear model, and then use a GAN in order to enforce the realism of the data. \cite{Litany_2018_CVPR} and \cite{ranjan2018generating} both propose the use of convolutional autoencoders which are trained on pre-aligned geometric data. These methods, however, do not take the model texture into account. In addition, \cite{wu2016learning} used the popular voxel grid representation for geometries, and are able to generate 3D objects using this notion. This method, however, is memory inefficient and in practice can produce only coarse geometries.
In addition to data augmentation and generation, pose normalization aims to decouple the subject's identity from other factors, such as expression and pose, which may confuse a classifier. This can be done either by reconstructing and manipulating the facial geometry or by performing normalization directly in the image domain. While \cite{chu20143d} and \cite{bas20173d} leverage a geometric representation in order to transform the data, \cite{tran2017disentangled} and \cite{huang2017beyond} are able to frontalize faces directly in the image domain as part of their pipeline. Although these methods help the training process by limiting data variation, they still do not explicitly model new data samples, which is our ultimate goal.
An additional method for geometrically manipulating facial data which has gained success is geometric reconstruction from a single image. One popular family of methods aims to fit a parametric model to an image. This idea was first introduced by \cite{blanz1999morphable} and has since been extended by works such as \cite{booth20173d}. An approach which involves regressing the coefficients of a given model via a deep network was popularized by \cite{richardson20163d} and extended by \cite{richardson2017learning} and \cite{tran2017regressing}. More recently, methods which are not restricted to a specific model, or which attempt to learn the model during training time, such as \cite{sela2017unrestricted}, \cite{Tewari_2018_CVPR} and \cite{tran2018nonlinear}, have been able to move beyond the restrictive assumptions of linear models such as the 3DMM. Complementary efforts such as
\cite{Deng_2018_CVPR} propose to reconstruct occluded texture regions in order to obtain a full textured reconstruction from challenging poses as well. Another recent work by \cite{saito2017photorealistic} focuses on improving the quality of facial texture used in reconstructed faces in order to improve realism. An additional complementary approach, proposed by \cite{Guler2016DenseReg}, is to learn a direct mapping from an image to a template model. All of the above approaches, while useful, are based on fitting some geometry to a given image by relying on an underlying geometric model. This model, however, is not explicitly used to generate novel faces but rather to reconstruct existing ones.
Our most direct competition comes from several works in the field of facial generative modeling. The seminal work by \cite{blanz1999morphable}, which pioneered the field almost two decades ago, is still widely used within many methods, some of which were mentioned above. The proposed linear 3D Morphable Model is extremely flexible; however, it has the drawback of using a small number of PCA vectors, which limits its ability to represent highly detailed models. A recent large-scale effort by \cite{booth20163d} and \cite{booth2018large} has produced the largest publicly known 3DMM by scanning $10k$ subjects and using their scans to construct the model. In contrast to linear models, much more complex relations can be captured by training deep networks to take the part of data generators. To this end, \cite{Tewari_2018_CVPR} and \cite{tran2018nonlinear} were able to jointly learn a reconstruction encoder while also learning the facial model itself. Given the trained model, one could plausibly generate faces; however, the authors have not shown any experiments to this effect. \cite{ranjan2018generating}, on the other hand, employed mesh autoencoders to construct new facial geometries; however, this method does not produce texture and was trained on a limited dataset of very few subjects. In this work, we propose a new GAN-based facial geometric generative model and analyze its ability to extend to new identities. We also relate the geometric and texture models, which are intrinsically correlated, and discuss different ways of exploiting this correlation.
\section{3D Morphable Model}
\label{sec:3dmm}
One of the early attempts to capture facial geometry and photometry (texture) by a linear low dimensional space is the Blanz and Vetter \cite{blanz1999morphable} {\em 3D Morphable Model} (3DMM).
Using the 3DMM, textures and geometries of faces can be synthesized as a linear combination of the elements of an orthogonal basis.
The basis is constructed from a collection of facial scans and by applying the principal component analysis after alignment of the faces.
That is, the basis construction process relies on a vertex-to-vertex alignment of the facial scans, which is achieved by computationally finding a dense correspondence between each scan and a template model.
The aligned vertices provide a set of spatial and texture coordinates which are then decomposed into the principal components of the set.
Once the basis is constructed, it is possible to represent each face by projecting it onto the first $k$ components of both the geometry and the texture bases.
This linear model was used to reconstruct 3D faces from 2D images;
Blanz and Vetter \cite{blanz1999morphable} took an analysis-by-synthesis approach, which attempts to fit a projected surface model embedded in $\mathbb{R}^3$ into a given 2D image.
This was established by constructing a fully differentiable parametric image formation pipeline, and performing a gradient descent procedure optimizing for an image to image loss on the model parameters.
The parameters consist of the geometry and texture models coefficients of the face, as well as the lighting and pose parameters.
This process results in a set of coefficients which encode the geometry and texture of any given face up to their projections on the principal components basis, effectively reconstructing the curved surface structure and the photometry of the given image of a face.
\subsection{Model Construction}
According to the 3DMM model, each face is represented as an ordered set of $m$ geometric coordinates \mbox{$g = (\hat x^1, \hat y^1, \hat z^1, \hat x^2,\ldots,\hat y^m, \hat z^m) \in \mathbb{R}^{3m}$} and texture coordinates in RGB space\\ \mbox{$t = (\hat r^1, \hat g^1, \hat b^1, \hat r^2,\ldots,\hat g^m, \hat b^m) \in \mathbb{R}^{3m}$}.
Given a set of $n$ faces, each represented by geometry $g_i$ and texture $t_i$ vectors, construct the $3m \times n$ matrices $G$ and $T$ by column wise concatenation of all geometric coordinates and all corresponding texture coordinates.
Since the alignment process ensures an ordered universal representation of all faces, Principal Component Analysis (PCA) \cite{jolliffe1986principal} can be applied to extract the optimal first $k$ orthogonal basis components in terms of $L_2$ reconstruction error.
To that end, denote by $V_g$ and $V_t$ the $3m \times n$ matrices that contain the left singular vectors of $\Delta G = G - \mu_g\mathbbm{1}^T$ and $\Delta T = T - \mu_t\mathbbm{1}^T$, respectively, where $\mu_g$ and $\mu_t$ are the average geometry and texture of the faces and $\mathbbm{1}$ is a vector of ones.
By ordering $V_g$ and $V_t$ according to the magnitude of the singular values in a descending order,
the texture and the geometric coordinates of each given face can be approximated by the linear combination
\begin{equation}
g_i = \mu_g + V_g\alpha_{g_i},\ \ \ t_i = \mu_t + V_t\alpha_{t_i},
\label{eq:3dmm_model}
\end{equation}
where $\alpha_{g_i}$ and $\alpha_{t_i}$ are the coefficients vectors, obtained by $\alpha_{g_i} = V_g^T(g_i - \mu_g)$ and $\alpha_{t_i} = V_t^T(t_i - \mu_t)$.
Following this formulation, it is possible to use such a model to generate new faces by randomly selecting the geometry and texture coefficients and plugging them into \autoref{eq:3dmm_model}.
According to \cite{blanz1999morphable}, the distribution of the coefficients can be approximated as a multivariate normal distribution, such that the probability for a coefficient vector $\alpha$ is given by
\begin{equation}
P(\alpha) \sim \exp\left \{-\frac{1}{2}\alpha^T\Sigma^{-1}\alpha\right \},
\label{eq:3dmm_dist}
\end{equation}
where $\Sigma$ is a covariance matrix that can be empirically estimated from the data, and is generally assumed to be diagonal.
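The construction and synthesis steps above can be sketched in a few lines of numpy. The random matrix below is a hypothetical stand-in for the aligned scan data, and all sizes are toy values; a real 3DMM would use on the order of $10^5$ vertices and thousands of scans.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for n aligned scans of m vertices each.
m, n, k = 50, 20, 5
G = rng.normal(size=(3 * m, n))            # columns are geometry vectors g_i

# Center the data and take the left singular vectors as the PCA basis.
mu_g = G.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(G - mu_g, full_matrices=False)
V_g = U[:, :k]                             # first k components, by singular value

# Represent a face by its coefficients and reconstruct it.
alpha = V_g.T @ (G[:, [0]] - mu_g)         # alpha_{g_0} = V_g^T (g_0 - mu_g)
g_rec = mu_g + V_g @ alpha                 # g_0 ~ mu_g + V_g alpha_{g_0}

# Synthesize a new face: draw coefficients from the (diagonal) Gaussian prior
# estimated from the training coefficients, then plug into the linear model.
A = V_g.T @ (G - mu_g)
alpha_new = rng.normal(size=k) * A.std(axis=1)
g_new = mu_g[:, 0] + V_g @ alpha_new
```

The same construction applies verbatim to the texture matrix $T$.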
\subsection{Synthesis model}
The 3D morphable model is useful not only in the context of representation and reconstruction, but, as noted in the previous section, it also allows for the generation of new faces which cannot be found in the training set.
The synthesis is achieved by randomizing linear combinations of the basis vectors.
The random coefficients are drawn according to the model prior from the distribution described in \autoref{eq:3dmm_dist}.
As is common practice when dealing with principal components, only the first $k \ll n$
vectors are taken into account as part of the model.
The number $k$ can be chosen by analyzing the decay of the singular values, each of which is proportional to the error incurred by discarding the associated basis vector.
By excluding the vectors whose singular values are sufficiently small, we can guarantee minimal loss of data.
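As a concrete illustration, the cutoff can be chosen as the smallest $k$ whose squared singular values retain a desired fraction of the total variance; the spectrum and threshold below are made-up toy values.

```python
import numpy as np

def choose_k(singular_values, energy=0.95):
    """Smallest k whose retained squared singular values capture the given
    fraction of the total variance (L2 reconstruction energy)."""
    s2 = np.asarray(singular_values, dtype=float) ** 2
    cum = np.cumsum(s2) / s2.sum()
    return int(np.searchsorted(cum, energy) + 1)

# Example: a rapidly decaying spectrum needs only a few components.
s = 2.0 ** -np.arange(20)      # singular values, in descending order
k = choose_k(s, energy=0.99)   # k = 4 for this spectrum
```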
Even though two decades have passed since the inception of the 3DMM, it is still widely used in cutting edge applications.
By harnessing the generative powers of this model, it has been used as a tool for data augmentation and data creation for training of convolutional networks \cite{sela2017unrestricted,richardson20163d,richardson2017learning,Gecer_2018_ECCV}.
Furthermore, the model has been integrated into deep learning pipelines in order to provide structure and regularization to the learning process \cite{Tewari_2018_CVPR}.
In spite of the wide use and apparent success of the model, it is clear that the faces obtained from it tend to be over-smoothed and, in some cases, unrealistic.
Furthermore, the multivariate normal distribution from which the coefficients are drawn is oversimplified and does not represent the true distribution of faces.
In particular, the texture and geometry are treated as two uncorrelated variables, in contradiction to empirical evidence. \autoref{fig:3DMM_faces} shows a few samples of synthesized 3DMM faces and depicts the difference between the distributions of 3DMM generated faces and real ones.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{images/3DMM/3DMM_figure.jpg}
\caption{Top: Multi Dimensional Scaling \cite{shamai2018efficient} used to depict the first $k=200$ 3DMM texture and geometry coefficients for real faces versus 3DMM generated faces. Bottom: Examples of 3DMM generated faces. The 3DMM was constructed by our own training set. }
\label{fig:3DMM_faces}
\end{figure}
\section{Progressive growing GAN}
\label{sec:GAN}
Generation of novel plausible data samples requires learning the underlying distribution of the data. Given a perfect discriminator which can differentiate between real and fake data samples it is possible to construct a training loss for a generator model which tries to maximally confuse the discriminator.
For complex realistic data, finding such a discriminator is a difficult problem on its own and requires learning from realistic and fake examples.
The fundamental idea of the GAN framework is to train both of these networks simultaneously.
Essentially, this means that we use a trainable loss function for the generator which constantly evolves as the generator improves.
This process can be formulated as \autoref{eq:minimaxgame-definition}
\begin{equation}
\footnotesize
\label{eq:minimaxgame-definition}
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_{z}(z)}[\log (1 - D(G(z)))],
\end{equation}
where $D$ and $G$ are the discriminator and generator parametric functions, and $x$ and $z$ are the real data samples and the latent representation vectors, respectively.
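To make the objective concrete, the following toy numpy sketch evaluates an empirical estimate of $V(D,G)$ for a fixed, hand-picked discriminator (all functions here are hypothetical illustrations, and no training is performed): a generator that matches the data distribution attains a lower value than one that is easily told apart from the data.

```python
import numpy as np

rng = np.random.default_rng(2)

def value(D, G, x, z):
    # Empirical estimate of V(D, G) from the minimax objective above.
    return np.mean(np.log(D(x))) + np.mean(np.log(1.0 - D(G(z))))

# Toy 1-D setup: real data x ~ N(0, 1) and latent noise z ~ N(0, 1).
x = rng.normal(size=10_000)
z = rng.normal(size=10_000)

# A fixed, hand-picked discriminator that prefers samples near the origin.
D = lambda s: 1.0 / (1.0 + np.exp(np.abs(s) - 1.0))

G_good = lambda z: z          # matches p_data exactly
G_bad = lambda z: 5.0 * z     # easily separated from the data

v_good = value(D, G_good, x, z)
v_bad = value(D, G_bad, x, z)  # the matching generator attains a lower value
```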
Since we wish to produce high resolution textures for facial geometries, we propose to use a recent successful GAN, namely \cite{karras2017progressive}.
The progressive growing GAN is built in levels which gradually increase the resolution of the output image.
During the training process each level is added consecutively while smoothly blending the new levels into the output as they are added.
This, and several other techniques were shown to increase the training stability as well as the variation of the produced data.
The difficulty concerning geometric data is that it lacks the regular intrinsic ordering which exists in 2D images, which are essentially large matrices.
For this reason, it is unclear how to apply spatial filtering, which is the core building block of neural network layers, to arbitrary geometric structures.
Significant progress has been made in this direction by several recent papers. A comprehensive survey is presented in \cite{bronstein2017geometric}. These methods, however, are not yet widely used and supported within standard deep learning coding libraries.
In order to harness the full power of recent state of the art developments in the field, it is sometimes preferable to work in the domain of images. For this reason, we built a data processing pipeline which maps the geometric scanned data into a flat canonical image which allows the utilization of the progressively growing GAN without major modifications.
\section{Training data construction}
\label{sec:training_data_construction}
In this section we describe the process by which we produce our training data.
We start with digital geometric scans of human faces.
By making use of a surface to surface alignment process \cite{weise2009face}, we are able to bring all the scans into correspondence with each other.
Next, applying a universal mapping from the mesh to the 2D plane, we can transfer the facial texture into a canonically parametrized image.
These aligned texture images are used to train our texture generation model.
We provide several alternatives for constructing the facial geometry which accompanies each texture. One solution is to learn the relation between 3DMM texture and geometry coefficients which is prevalent in the training data.
In addition, we can similarly process the geometric data of the faces as well. By applying the same canonical transformation and encoding the $\left(x,y,z\right)$ coordinates of the model vertices as RGB channels of an image, we can learn to generate geometries as well as textures using the same methodology.
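A minimal numpy sketch of this encoding, assuming per-vertex UV coordinates from the universal mapping are already available; the random mesh below is a stand-in, and a real pipeline would rasterize the triangles and interpolate rather than splat individual vertices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical aligned mesh: per-vertex (x, y, z) positions and the universal
# UV coordinates (in the unit square) produced by the mapping stage.
n_verts = 2000
xyz = rng.normal(size=(n_verts, 3))
uv = rng.uniform(size=(n_verts, 2))

H = W = 64
# Normalize each coordinate channel to [0, 1] so it can be stored as RGB.
lo, hi = xyz.min(axis=0), xyz.max(axis=0)
rgb = (xyz - lo) / (hi - lo)

# Nearest-pixel splat of each vertex into the geometry image.
img = np.zeros((H, W, 3))
rows = np.minimum((uv[:, 1] * H).astype(int), H - 1)
cols = np.minimum((uv[:, 0] * W).astype(int), W - 1)
img[rows, cols] = rgb
```

The same rasterization with RGB texture values in place of `rgb` yields the texture image, so both modalities share one representation.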
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth, trim={2cm 5cm 2cm 5cm}, clip]{images/pipeline/data_acq.jpg}
\caption{Left to right: Raw scan with landmark points, The template is deformed to fit the scan and texture is transferred onto the template, canonical UV mapping between the deformed template and the 2D image, final mapped texture as image.}
\label{fig:my_label}
\end{figure}
\subsection{Face scanning and marking}
Our training data formulation process starts by acquiring digital high resolution facial scans.
Using a 3DMD scanner, roughly $1000$ different subjects were scanned, each making five distinct facial expressions including a neutral expression.
The subjects were selected to form a balanced mixture of genders and ethnic backgrounds.
Each scan is comprised of the facial geometry, represented by a triangulated mesh, as well as two high resolution photographs, which capture a 180 degree view of the subject's face.
Each mesh triangle is automatically mapped to one of the photos, allowing the facial texture to be transferred onto the mesh.
Due to the variety of facial geometries, as well as limitations of the scanning process, the meshes may contain imperfections such as holes and areas of missing texture.
These data corruptions may affect the training data samples that are given to the network and care must be taken not to hinder the training process.
The straightforward path is to filter out the erroneous data samples completely.
This leads to a significant reduction of roughly 20\% in the overall size of the training set. Instead, we propose a new approach which incorporates corrupted scans without compromising the integrity of the training data.
We describe our approach to learning from corrupted data in \autoref{sec:corrupted_data}.
In order to facilitate the alignment process described in \autoref{sec:Non-Rigid_Alignment}, we annotate each face with 43 landmark locations.
These locations are determined automatically by projecting the facial surface into an image and applying one of many 2D facial landmark detectors such as Dlib \cite{dlib09}.
The landmarks are then back-projected onto the surface to determine their location.
Finally, the locations of the automatically generated landmarks are manually refined in order to prevent displacements that could lead to large errors during the alignment process.
\subsection{Non-Rigid Alignment}
\label{sec:Non-Rigid_Alignment}
The goal of the alignment process is to find a dense correspondence between all the facial geometries.
It is performed by aligning all scans to a single facial template.
This correspondence is achieved by deforming the template into the scanned surface, a process guided by the pre-computed landmarks.
Initially, a rigid alignment between the scanned surface and the template is performed as a preprocessing step.
This is done by solving for the rotation, translation, and uniform scaling between the scan and template landmarks.
The deformation process is performed by defining a fitting energy which takes into account both surfaces and known landmarks and measures how closely they fit each other.
The energy also includes a regularization term which penalizes non-smooth deformations.
The template mesh is deformed by moving each vertex according to the energy gradient in an iterative manner.
The loss function which is minimized during the alignment process was first described by \cite{blanz1999morphable} and is comprised of $3$ terms which contribute to the final alignment.
The first term accumulates the distances between the facial landmark points on the scanned facial surface and their corresponding points on the template mesh.
The second term accumulates the distances between all the template mesh points to the scanned surface.
The third term serves as a regularization, and penalizes non-smooth deformations.
The loss term is minimized by taking the derivative of the loss with respect to the template vertex coordinates, then deforming the template in the gradient direction.
This process naturally provides a dense point to point correspondence between each and every scanned surface.
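The iteration described above can be sketched as follows. The point clouds, weights, and the smoothness term (deviation of each vertex's displacement from the mean displacement, standing in for a mesh-based regularizer) are all illustrative simplifications, not the exact energy of \cite{blanz1999morphable}.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-ins: template vertices, a shifted noisy "scan", and 5 landmark pairs.
template = rng.normal(size=(100, 3))
scan = template + 0.3 + 0.05 * rng.normal(size=(100, 3))
lm_idx = np.arange(5)                       # landmark vertex indices

def energy_grad(V, w_lm=1.0, w_fit=1.0, w_reg=0.1):
    g = np.zeros_like(V)
    # Term 1: distances between corresponding landmark points.
    g[lm_idx] += 2 * w_lm * (V[lm_idx] - scan[lm_idx])
    # Term 2: each template vertex to its closest scan point.
    d2 = ((V[:, None, :] - scan[None, :, :]) ** 2).sum(-1)
    g += 2 * w_fit * (V - scan[d2.argmin(axis=1)])
    # Term 3: regularization penalizing non-smooth (non-uniform) deformations.
    disp = V - template
    g += 2 * w_reg * (disp - disp.mean(axis=0))
    return g

V = template.copy()
for _ in range(200):                        # iterative gradient deformation
    V -= 0.05 * energy_grad(V)
```

After the descent, each template vertex lies close to the scanned surface, which is what yields the dense correspondence.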
\subsection{Universal mapping}
\label{sec:mapping}
Given a facial scanned surface with unknown parametrization, our goal in this section is to discover a 2D parameterization of the surface which maps it to a unit rectangle, such that this mapping is consistent for all scans.
In \autoref{sec:Non-Rigid_Alignment}, we described the process of aligning the facial surface template to a scanned facial surface, and by that, bring them into correspondence.
The obtained correspondence allows us to transfer the parametrization from the template to all scans, thus establishing a universal parametrization. In the following section, we define the unique parametrization between the template face and the unit rectangle.
The authors of \cite{slossberg2018high} defined the mapping between the scan and the plane by using a ray-casting technique built into the animation rendering toolbox of Blender \cite{blender}.
\autoref{fig:uv_result} depicts several examples of the resulting mapped facial photometry.
Although it would be possible to make use of the same parametrization, an alternative definition may suit us better.
The Blender mapping, for example, does not exploit the entire squared image for the mapping.
Moreover, it does not take the facial structure into account.
The eyes, nose, and mouth, for instance, clearly contain more details than smoother parts of the face such as the cheeks and forehead.
It is reasonable to assume that it would be easier to learn and reconstruct the main features if they take up a larger portion of the input images, perhaps at the expense of other parts. To that end, we propose to construct a weighted parametrization that will allow us to control the relative area in the plane taken up by each facial feature.
In \cite{floater1997parametrization}, the authors presented a parametrization technique that allows one to choose, for each vertex, its barycentric coordinates with respect to its neighbors.
The authors demonstrate that any set of barycentric coordinates has a unique planar graph with a valid triangulation that fulfills it.
As an extension, they also provided a method for a weighted least-squares parametrization that allows some control over the edge lengths in the resulting parametrization.
The method is briefly described below.
Given any triangulated mesh, the objective is to map it into a valid planar graph with the same connectivity.
Assuming a mesh with $N$ vertices, choose a set of $K$ boundary vertices from the mesh and fix their 2D mapping values to some desired convex boundary, $u_1,\ldots,u_K$.
For any other vertex $i > K$ in the mesh, choose a set of non-negative barycentric coordinates $\lambda_{i,j}$, such that
$\sum_{j=1}^N \lambda_{i,j} = 1$, and $\lambda_{i,j} = 0$ if and only if $i$ and $j$ are not connected.
Then, for $i=K+1,...,N$, solve the linear system of equations
\begin{equation}
\label{eq:baricentric}
u_i = \sum_{j=1}^N \lambda_{i,j}u_j.
\end{equation}
The authors of \cite{floater1997parametrization} prove that \autoref{eq:baricentric} has a unique solution that coincides with the chosen barycentric coordinates.
According to \cite{floater1997parametrization}, this technique can be extended to a weighted least-squares parametrization.
For any desired set of weights $w_{i,j}$, it was shown that the choice of
\begin{equation}
\lambda_{i,j} = \frac{w_{i,j}}{\sum_{j:(i,j)\in E} w_{i,j}},
\end{equation}
minimizes the functional $\sum_{j:(i,j)\in E} w_{i,j} \|u_i - u_j\|^2$, where $E$ represents the set of edges.
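Under this formulation, the interior coordinates follow from one linear solve. The sketch below uses a hypothetical five-vertex mesh and dense matrices for clarity (a real implementation would use sparse matrices); it fixes the boundary and solves \autoref{eq:baricentric} for the interior vertices.

```python
import numpy as np

def floater_param(n, boundary_uv, edges, weights):
    """Solve u_i = sum_j lambda_{ij} u_j for the interior vertices.

    boundary_uv: dict {vertex: (u, v)} fixing the K boundary vertices.
    edges: undirected (i, j) pairs; weights: {(i, j): w_ij}, default 1.
    """
    W = np.zeros((n, n))
    for (i, j) in edges:
        w = weights.get((i, j), weights.get((j, i), 1.0))
        W[i, j] = W[j, i] = w
    lam = W / W.sum(axis=1, keepdims=True)   # barycentric coords lambda_{ij}

    boundary = list(boundary_uv)
    interior = [i for i in range(n) if i not in boundary_uv]
    u = np.zeros((n, 2))
    for i, uv in boundary_uv.items():
        u[i] = uv
    # (I - lambda_II) u_I = lambda_IB u_B, the linear system from the equation above.
    A = np.eye(len(interior)) - lam[np.ix_(interior, interior)]
    b = lam[np.ix_(interior, boundary)] @ u[boundary]
    u[interior] = np.linalg.solve(A, b)
    return u

# Tiny example: a square boundary with one interior vertex (index 4) connected
# to all four corners; uniform weights place it at the centroid (0.5, 0.5).
boundary = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
u = floater_param(5, boundary, edges, weights={})
```

Increasing the weight of the edges around a feature enlarges its share of the planar domain, which is exactly how the facial weights are used.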
Following this technique, we designed the weights $w_{i,j}$ such that the eyes, nose, and mouth receive a larger area in the parametrization plane. We defined a weight for each vertex in the template face, and then gave each edge the average weight of its two adjacent vertices.
Note that the resulting edge lengths also depend on the density of vertices in the mesh.
In other words, when choosing a constant weight for all edges, the edge lengths of the resulting parametrization, termed the uniform barycentric parametrization, are not constant.
To design the edge weights more intuitively, we normalize the edge weights by the ones resulting from the uniform baricentric parametrization.
A visualization of the edge weights is shown in \autoref{fig:uv_result}.
To choose the boundary vertices $u_1,\ldots,u_K$, we follow the outer boundary of the facial mesh, starting from the center bottom (a point on the chin), while measuring the lengths of the edges we pass through, $L_1,\ldots,L_K$.
Assume the image boundary is parametrized by $C(t) = \{x(t), y(t)\}$ for $0 \leq t \leq 1$, such that $C(0)=C(1)$ is the bottom center of the image.
Then, we set $u_i = C(t_i)$, where
\begin{equation}
t_i = \frac{\sum_{j=1}^i L_j}{\sum_{j=1}^K L_j}.
\end{equation}
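A small numpy sketch of this boundary placement, taking the convex boundary to be the unit square; the edge lengths $L_j$ below are made-up toy values.

```python
import numpy as np

def square_boundary(t):
    """C(t): traverse the unit-square perimeter, with C(0) = C(1) = (0.5, 0),
    the bottom center, moving counterclockwise."""
    s = (np.asarray(t) % 1.0) * 4.0   # arc-length position along the perimeter
    x = np.select([s < 0.5, s < 1.5, s < 2.5, s < 3.5, s >= 3.5],
                  [0.5 + s, 1.0, 1.0 - (s - 1.5), 0.0, s - 3.5])
    y = np.select([s < 0.5, s < 1.5, s < 2.5, s < 3.5, s >= 3.5],
                  [0.0, s - 0.5, 1.0, 1.0 - (s - 2.5), 0.0])
    return np.stack([x, y], axis=-1)

# Boundary edge lengths L_1..L_K measured along the mesh (toy values); each
# boundary vertex is placed at its normalized cumulative arc length t_i.
L = np.array([0.8, 1.2, 1.0, 0.5, 1.5])
t = np.cumsum(L) / L.sum()
u_boundary = square_boundary(t)
```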
Lastly, unlike \cite{slossberg2018high}, we propose to construct a symmetric mapping, in order to augment the data by mirroring the training samples.
This can be done by ensuring that the template is intrinsically symmetric, and by making a symmetric choice of boundary vertices and edge weights.
The resulting mapping and a visualization of the edge weights are shown in \autoref{fig:uv_result}.
The rightmost part in \autoref{fig:uv_result} shows that when mapping back the unwrapped texture to the facial geometry, a better resolution is obtained when using the proposed method.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{images/uv_mapping/uv_mapping.jpeg}
\caption{Left: A visualization of the proposed edge weights. Each vertex is colored by the average weights of its edges. Bright colors represent larger weights.
Next: The deformed template, after its alignment process to some arbitrary scan. Center: the resulting 2D image, using the texture mapping suggested in \cite{slossberg2018high} and the proposed one.
Right: A comparison between the texture mapping methods. An unwrapped texture is mapped back to the template, showing a slightly better resolution in the proposed method.}
\label{fig:uv_result}
\end{figure}
\section{Learning from corrupted data}
\label{sec:corrupted_data}
The semi-automatic data acquisition pipeline described in \autoref{sec:training_data_construction} is used to construct a dataset of 2D images that will be used to train the GAN.
Naturally, some of the generated data samples contain corrupted parts due to errors in one or more of the pipeline stages.
In the 3D scanning process, for example, facial textures that contain hair are often not captured well.
Other reasons for incomplete textures are occlusions and the limited field of view of the camera.
The geometry of the eyes is occasionally distorted due to their high specular reflection properties.
In the landmark annotation stage, some landmarks can be inaccurate or even wrong, resulting in various distortions in the final output. \autoref{fig:PDG_corruptions} provides several examples of such data corruptions.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{images/partial-data_GAN/corruptions_examples.jpeg}
\caption{Examples of different distortions resulting from the noisy scanning and inexact alignment.}
\label{fig:PDG_corruptions}
\end{figure}
One way to handle data corruption is to ignore imperfect images and keep only the valid ones.
In our case, manual screening of the data reduced the number of samples from $4,963$ to only $3,679$ valid ones, thus eliminating roughly $25\%$ of the data.
Here, we propose a novel technique for training GANs using partially incomplete data that is able to exploit undamaged parts and robustly deal with corrupted data.
To that end, we propose to pair a binary \textit{valid mask} to each training data image, that represents areas in the image that should be ignored.
Without loss of generality, black areas in the masks (zero values) correspond to corrupted regions in the image we would like to ignore, and white regions (values of one) correspond to valid parts we would like to exploit for training the network.
We propose to multiply these valid masks by their corresponding images, as well as concatenate them as a fourth channel (R-G-B-mask).
Recall that the discriminator receives as an input a batch of real images and a batch of fake images.
To prevent the discriminator from discriminating real and fake images by the valid masks, the same masks are multiplied and concatenated to both real and fake batches.
The generator, which does not get the masks as an input, must produce complete images in-painting the masked regions.
Otherwise, the discriminator would be able to easily identify masked parts that do not match the valid masks and conclude that the image is fake.
The valid masks could be constructed either manually or using automatic image processing techniques for detecting the unwanted parts.
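A minimal NumPy sketch of this masking step (names are illustrative; in practice the arrays would be framework tensors inside the GAN training loop):

```python
import numpy as np

def apply_valid_masks(real_batch, fake_batch, masks):
    """Mask both real and fake RGB batches with the same binary valid masks
    and append each mask as a fourth channel, so the discriminator cannot
    tell real from fake by the mask pattern alone.

    real_batch, fake_batch : (N, H, W, 3) float arrays
    masks                  : (N, H, W) arrays, 1 = valid, 0 = corrupted
    """
    m = masks[..., None].astype(real_batch.dtype)          # (N, H, W, 1)
    real4 = np.concatenate([real_batch * m, m], axis=-1)   # (N, H, W, 4)
    fake4 = np.concatenate([fake_batch * m, m], axis=-1)
    return real4, fake4
```

The generator itself never sees the masks; only the discriminator inputs are masked and augmented.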
The discriminator and generator of the proposed GAN model are demonstrated in \autoref{fig:PDG_model}.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/partial-data_GAN/GAN_model.jpeg}
\caption{The proposed GAN model of learning from incomplete data.
A valid mask is fitted for each training sample according to the corrupted regions.
The real and fake samples are concatenated and masked by the same valid masks and then passed to the discriminator unit.
The discriminator cannot distinguish between the real and fake images by their masks.
Since the generator receives no information about the masks, it must in-paint complete images; otherwise, the generated images would not match the masked regions and could easily be identified as fake.}
\label{fig:PDG_model}
\end{figure}
To demonstrate the performance of the proposed GAN we constructed a synthetic dataset of different colored shapes randomly located in $10,000$ images of size $256 \times 256$.
In this simple experiment, we treat the red circles as corruptions that we would like our model to ignore. \autoref{fig:PDG} shows the data images, the valid masks, and the resulting GAN output.
It is clearly seen that the proposed GAN model generated new data images without the unwanted red circles.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/partial-data_GAN/Synthesized.jpeg}
\caption{Left three: examples of the constructed synthetic data, that consists of images with different colored shapes.
In this construction, red circles are unwanted in the output. Middle three: valid masks that correspond to the unwanted elements.
Right three: examples of the output data, generated by the proposed GAN.}
\label{fig:PDG}
\end{figure}
\section{Facial Surface Generator}
We propose to train a model which is able to generate realistic geometries and photometries (textures or colors) of human faces.
The training data for our model is constructed according to \autoref{sec:training_data_construction}, and used to train a NN according to a GAN loss.
At inference, the trained model is used to produce random plausible facial textures which are mapped by our predefined parametrization described in \autoref{sec:mapping}.
In order to also generate corresponding facial geometries for each new texture, we propose two novel approaches.
The first approach is based on training a similar model for geometries. This is done by mapping the training set geometry coordinates into the unit rectangle using the canonical parametrization. By treating each coordinate as a color channel, we form geometry images which we use to train our geometry generator model.
The second approach relies on the classical 3DMM model.
For both approaches we suggest a method to generate a geometry which is a plausible fit for a given texture.
In the following sections we describe the two proposed approaches in detail.
\subsection{Generating textures using GAN}
Our texture generation model is based on a Convolutional Neural Network which is trained using a GAN loss.
Due to this loss, we are able to train a model that satisfies the distribution of real data samples, by drawing new samples out of this distribution.
By training our model on the dataset constructed according to \autoref{sec:training_data_construction}, we are able to generate new plausible textures, all mapped to the unit rectangle according to the predefined parametrization described in \autoref{sec:mapping}.
As we will show in the following sections, the generated textures by the proposed model present novel yet realistic human faces.
Since texture and geometry are both inseparable attributes of the same geometric entity, it is necessary to take the relationship between them into account when generating the corresponding geometries.
In \autoref{sec:geom_gen} and \autoref{sec:geom_fit} we describe in detail the process of the proposed geometry generation pipeline which takes as input a generated texture and produces a corresponding plausible geometry.
Several outputs from the suggested texture generation model are depicted in \autoref{fig:fake_real_textures}.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/unwrapped_textures/unwrapped.jpg}
\caption{Left: Facial textures generated by the suggested pipeline. Right: Real textures from training set.}
\label{fig:fake_real_textures}
\end{figure}
\subsection{Assigning geometries to textures}
\label{sec:geom_fit}
Once novel textures have been generated, we would like to assign them plausible synthetic geometries in order to obtain realistic face models.
One way to generate geometries is by exploiting the 3DMM model by which geometries can be recovered through proper selection of the coefficients.
In what follows, we discuss and compare several methods for obtaining the 3DMM geometry coefficients.
\subsubsection{Random}
The simplest way of synthesizing a geometry to a given texture is by picking random 3DMM geometry coefficients.
We follow the formulation in \autoref{eq:3dmm_dist}.
The probability density of a coefficient $\alpha_i$ is given by
\begin{equation}
P(\alpha_i) \sim \exp\left \{-\frac{\alpha_i^2}{2\sigma_i^2}\right \},
\end{equation}
where, $\sigma_i^2$ is the $i$-th eigenvalue of the covariance of $\Delta G$.
$\sigma_i^2$ can be computed more efficiently as $\sigma_i^2 = \frac{1}{n}\delta_i^2$, where $\delta_i$ is the $i$-th singular value of $\Delta G$.
To fit a geometry to a given texture, we randomize a vector of coefficients from the above probability distribution and reconstruct the geometry using the 3DMM formulation.
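A minimal sketch of this random sampling, assuming access to the 3DMM geometry basis, the mean geometry, and the singular values of $\Delta G$ (names are illustrative):

```python
import numpy as np

def random_geometry(V_g, mu_g, singular_values, n_train, rng=None):
    """Draw 3DMM geometry coefficients alpha_i ~ N(0, sigma_i^2), with
    sigma_i = delta_i / sqrt(n) as in the text (n_train = number of
    training samples n), and reconstruct g = V_g alpha + mu_g."""
    rng = rng or np.random.default_rng(0)
    sigma = np.asarray(singular_values, dtype=float) / np.sqrt(n_train)
    alpha = rng.normal(0.0, sigma)        # one coefficient per basis vector
    return V_g @ alpha + mu_g
```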
Random geometries are simple to generate.
Yet, not every geometry can actually fit any texture.
As a convincing visualization, we computed the \textit{canonical correlation} \cite{hotelling1936relations} between the 3DMM texture and geometry coefficients, $\{\alpha_{ti}\}_{i=1}^n$ and $\{\alpha_{gi}\}_{i=1}^n$, of the facial scans. \autoref{fig:fit_corr} shows $u$ w.r.t. $v$, the first two canonical variables of the correlation.
In what follows, we attempt to generate geometries which are suited for their textures.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{images/fitting_geom/corr_visualize_3.jpg}
\caption{
Visualization of the correlation between textures and geometries of the facial scans, where the axes $u$ and $v$ are the first two variables of the canonical correlation.
}
\label{fig:fit_corr}
\end{figure}
\subsubsection{Nearest neighbour}
Given a new texture, a simple way to fit a geometry that is likely to match it, is by finding the data sample with the nearest texture, and projecting its geometry onto the 3DMM subspace.
For that task, we define a distance between two textures as the $L_2$ norm between their 3DMM texture coefficients.
Only the 3DMM texture and geometry coefficients of the data need to be stored.
Nearest neighbor geometries are simple to obtain; however they are restricted to the training data geometries alone.
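A minimal sketch of this lookup in 3DMM coefficient space (names are illustrative):

```python
import numpy as np

def nearest_neighbour_geometry(alpha_t_query, A_t, A_g):
    """Return the geometry coefficients of the training sample whose 3DMM
    texture coefficients are closest (in L2) to the query's.
    A_t (k_t x n) and A_g (k_g x n) hold per-sample coefficient columns."""
    i = np.argmin(np.linalg.norm(A_t - alpha_t_query[:, None], axis=0))
    return A_g[:, i]
```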
\subsubsection{Maximum likelihood}
The maximum likelihood estimator (ML) is typically used when one can formulate assumptions about the data distribution.
In our case, given input facial textures, ML could be used to obtain the most likely geometries under a set of assumptions.
We first construct a mutual 3DMM basis by concatenating textures and geometries.
Define a vertical concatenation of geometries $G$ and textures $T$ as the $6m \times n$ matrix $M=\begin{pmatrix}G\\T\end{pmatrix}$,
such that $G,T$ are defined in \autoref{sec:3dmm}.
Define $\Delta M = M - \mu_M\mathbbm{1}^T$, where $\mu_M$ holds the average of the rows of $M$.
Denote by $U$ the $6m \times k$ matrix that contains the first $k$ basis vectors of $\Delta M$, i.e., corresponding to the largest magnitude eigenvalues.
These vectors can be computed either as eigenvectors of $\Delta M \Delta M^T$ or, more efficiently, as the left singular vectors of $\Delta M$.
Denote by $U_g$ and $\mu_{M_g}$ the upper halves of $U$ and $\mu_{M}$, and denote by $U_t$ and $\mu_{M_t}$ the lower halves $U$ and $\mu_{M}$, respectively, such that
\begin{equation}
U = \begin{pmatrix}
U_g\\U_t
\end{pmatrix},\,\,\,\,\,\,\,
\mu_{M} =
\begin{pmatrix}
\mu_{M_g} \\ \mu_{M_t}
\end{pmatrix}.
\end{equation}
Note that $U_g$ and $U_t$, unlike $V_g$ and $V_t$ that were defined in \autoref{sec:3dmm}, are not orthogonal.
Nevertheless, any geometry $g$ and texture $t$ of a given face in $M$ can be represented as a linear combination
\begin{equation}
\begin{pmatrix} g \\ t \end{pmatrix} =
\begin{pmatrix} \mu_{M_g} \\ \mu_{M_t} \end{pmatrix} +
\begin{pmatrix} U_g \\ U_t \end{pmatrix}\beta,
\end{equation}
where the coefficient vector $\beta$ is mutual to the geometry and texture.
Using the notations and definitions above, any new facial texture and its corresponding geometry can be modeled through a shared coefficient vector $\beta$ as
\begin{eqnarray}
t &=& U_t\beta + \mu_{M_t} + noise_t, \cr
g &=& U_g\beta + \mu_{M_g} + noise_g.
\label{eq:texture_noise_model}
\end{eqnarray}
The maximum likelihood assumption is that $\beta$, $noise_t$, and $noise_g$ follow multivariate normal distributions with zero mean.
Given a facial texture $t$, our goal is to compute the most likely coefficient vector $\beta^*$ under this assumption, and then obtain the most likely geometry as
\begin{equation}
g = U_g\beta^* + \mu_{M_g}.
\end{equation}
Following Bayes' rule, one could formulate the most likely coefficient vector as
\begin{eqnarray}
\beta^* &=& \argmax_{\beta} P(\beta|t)\cr
& =& \argmax_{\beta} \frac{P(t|\beta)P(\beta)}{P(t)} \cr
& =& \argmax_{\beta} P(t|\beta)P(\beta).
\end{eqnarray}
Since $P(t|\beta)$ and $P(\beta)$ follow multivariate normal distributions, denote their covariance matrices by $\Sigma_{t|\beta}$ and $\Sigma_{\beta}$, and their mean vectors by $\mu_{t|\beta} = U_t\beta$ and $\mu_\beta = \vec{0}$, respectively.
Thus,
\begin{eqnarray}
\beta^* &=& \argmax_{\beta} P(t|\beta)P(\beta) \cr
&=& \argmax_\beta \exp\left \{-\frac{1}{2}(t-\mu_{t|\beta})^T\Sigma_{t|\beta}^{-1}(t-\mu_{t|\beta})\right \}\cdot
\exp\left \{-\frac{1}{2}\beta^T\Sigma_{\beta}^{-1}\beta\right \} \cr
&=& \argmin_{\beta} (t-U_t\beta)^T\Sigma_{t|\beta}^{-1}(t-U_t\beta) + \beta^T\Sigma_{\beta}^{-1}\beta \,\,\,.
\end{eqnarray}
One could obtain a closed form solution for $\beta^*$ by vanishing its gradient, which yields
\begin{equation}
\beta^* = \left(U_t^T\Sigma_{t|\beta}^{-1}U_t+\Sigma_{\beta}^{-1}\right)^{-1}U_t^T\Sigma_{t|\beta}^{-1}t.
\end{equation}
We estimate the covariance matrices $\Sigma_{\beta}$ and $\Sigma_{t|\beta}$ empirically from the data. Since the mean of each coefficient in $\beta$ with respect to all samples is zero, the covariance $\Sigma_{\beta}$ can be estimated by
\begin{equation}
\Sigma_{\beta} = \frac{1}{n-1}\sum_{i=1}^{n}\beta_i\beta_i^T,
\end{equation}
where, $\beta_i$ is the coefficient vector for face sample $i$.
The $3m \times 3m$ covariance matrix $\Sigma_{t|\beta}$ is very large, making it impractical to estimate from a few thousand samples or to invert once estimated.
Hence, for simplicity, we approximate it as a diagonal matrix that does not depend on $\beta$.
One can verify that the mean of each element in $noise_t = t - U_t\beta - \mu_{M_t}$ with respect to all samples is zero.
Hence, we estimate its $j$-th diagonal value as
\begin{equation}
\Sigma_{t|\beta,jj} = \frac{1}{n-1}\sum_{i=1}^{n}noise_{t,ij}^2.
\end{equation}
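Putting the pieces together, a minimal NumPy sketch of the maximum likelihood fit might look as follows; note that we centre the texture by the mean, consistent with the noise model above (function and variable names are illustrative):

```python
import numpy as np

def ml_geometry(t, U_t, U_g, mu_t, mu_g, Sigma_beta, sigma_noise_diag):
    """Most likely geometry for a texture t under the Gaussian model:
    beta* = (U_t^T S^-1 U_t + Sigma_beta^-1)^-1 U_t^T S^-1 (t - mu_t),
    g = U_g beta* + mu_g, where S is the diagonal noise covariance."""
    Sinv = 1.0 / sigma_noise_diag                       # inverse of diagonal S
    A = U_t.T @ (Sinv[:, None] * U_t) + np.linalg.inv(Sigma_beta)
    b = U_t.T @ (Sinv * (t - mu_t))
    beta = np.linalg.solve(A, b)
    return U_g @ beta + mu_g
```

With negligible noise and a broad prior, the fit reduces to projecting the centred texture onto the mutual basis, as expected.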
\subsubsection{Least squares}
Least squares (LS) minimization is a simple and very useful approach, typically preferred when the number of data samples is large enough.
It can be thought of as training a multivariate linear regression with an $L_2$ loss.
Assume a facial sample is represented by a texture vector $t$ and a geometry vector $g$.
Denote by $\alpha_t$ and $\alpha_g$ the column vectors with the first $k_t$ and $k_g$ texture and geometry 3DMM coefficients of the face.
These coefficients can be obtained by projecting $t$ and $g$ onto the 3DMM basis $V_t$ and $V_g$.
Let the $k_t \times n$ matrix $A_t$ hold the texture coefficient vectors of all samples in its columns, and let the $k_g \times n$ matrix $A_g$ hold the geometry coefficient vectors of all samples in its columns in the same order as $A_t$.
The correlation between $A_t$ and $A_g$ could be linearly approximated by
\begin{equation}
A_g \approx W^TA_t.
\label{eq:LS}
\end{equation}
Note that we would not benefit from generalizing from a linear to an affine correlation.
This is because the mean of each row in $A_t$ and $A_g$ is zero, as they hold the coefficients of a centered set of samples.
Following \autoref{eq:LS}, we would like to find a matrix $W$ that minimizes
\begin{equation}
\mbox{loss}(W) = \|W^TA_t - A_g\|_{F}.
\end{equation}
A closed form solution is easily obtained to be
\begin{equation}
W^* = (A_tA_t^T)^{-1}A_tA_g^T = (A_t^T)^{+}A_g^T.
\end{equation}
Define $\tilde V_t$ and $\tilde V_g$ as holding the first $k_t$ and $k_g$ texture and geometry 3DMM basis vectors.
$W^*$ can be estimated using a set of training samples.
Then, given a new texture $t$, one could fit a geometry $g$ by computing the texture coefficients as
\begin{equation}
\alpha_t = \tilde V_t^T(t - \mu_t),
\end{equation}
computing the geometry coefficients as
\begin{equation}
\alpha_g={W^*}^T\alpha_t,
\end{equation}
and finally, computing the geometry as
\begin{equation}
g = \tilde V_g \alpha_g + \mu_g.
\end{equation}
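A minimal NumPy sketch of the LS fitting and its application to a new texture (names are illustrative):

```python
import numpy as np

def fit_ls_regressor(A_t, A_g):
    """W* = (A_t A_t^T)^{-1} A_t A_g^T, the least-squares linear map such
    that A_g ~ W*^T A_t (columns of A_t, A_g are per-sample coefficients)."""
    return np.linalg.solve(A_t @ A_t.T, A_t @ A_g.T)

def geometry_from_texture(t, W, V_t, V_g, mu_t, mu_g):
    """Fit a geometry to a new texture t via the learned map."""
    alpha_t = V_t.T @ (t - mu_t)   # texture 3DMM coefficients
    alpha_g = W.T @ alpha_t        # predicted geometry coefficients
    return V_g @ alpha_g + mu_g
```

When the training coefficients are related by an exact linear map, the regressor recovers it, as the test below checks on synthetic data.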
\subsubsection{Geometry reconstruction method comparison}
To evaluate how well the assigned geometries fit each texture, we use a test set of textures and their corresponding geometries obtained from $297$ scans that participated in neither the GAN training nor the geometry fitting procedures.
Given these unseen textures as input, we estimate their geometries using the approaches presented above, and compare them to their ground truth. For this comparison, we computed the average $L_2$ norm between the vertices of the reconstructed and true geometry for each of the methods.
In this experiment, we chose $k = k_t = k_g = 200$.
\autoref{fig:geom_fit_1} shows examples of test textures mapped onto their assigned geometries that were obtained using each of the above methods.
It is clear that the LS approach obtains the best results on the test set.
Since it is also simple and efficient, we choose to use the LS approach to approximate the geometries in the following sections.
Note, however, that the other methods could be beneficial for different applications, depending on the use case. \autoref{fig:geom_fit_2} visually compares the reconstructed geometries to the true ones for different textures from the test scans, using the LS approach.
The $L_2$ norm between the reconstructed and true geometries is given below each example.
The geometries, predicted solely from textures of identities that were never seen before, are surprisingly similar to the true ones.
This validates the strong correlation assumption between textures and geometries.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/fitting_geom/face_comparison_1.jpg}
\caption{Left: Mapped textures from the test set. Right: Corresponding geometries reconstructed using each of the proposed approaches.
The average $L_2$ reconstruction error appears above each method's illustration.}
\label{fig:geom_fit_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/fitting_geom/face_comparison_2_prev.jpg}
\caption{Top: examples of unwrapped textures of test scans.
Below each texture we show the corresponding scanned geometry, and compare it to the one estimated using the least squares approach.
The $L_2$ norm between the reconstruction and true geometries is indicated at the bottom.}
\label{fig:geom_fit_2}
\end{figure}
\section{Generating geometries using GAN}
\label{sec:geom_gen}
In \autoref{sec:geom_fit}, we reconstructed geometries using the 3DMM model.
Indeed, projecting geometries onto the subspace of 3DMM has almost no visual effect on the appearance of the faces.
The 3DMM, however, is constrained to the subspace of training set geometries and cannot generalize to unseen examples.
In \autoref{sec:mapping}, we mapped facial textures into 2D images, with the goal of producing new textures.
The same methodology can be used for producing new geometries as well.
To that end, we propose to construct a dataset of aligned facial geometries and train a GAN to generate new ones by repeating the texture mapping process while replacing the RGB texture values by the XYZ geometry values.
As for data augmentation, while the amount of texture samples can be doubled by horizontally mirroring each of the images, we found that mirroring each one of the $X$, $Y$, and $Z$ values independently results in a valid facial geometry. Thus, the amount of geometry samples can be augmented by a factor of $8$.
Note, however, that when mirroring $X$ values, one should perform $C - X$, where $C$ is a constant that could be set, for example, to the maximal $X$ value in all training samples.
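A minimal sketch of the horizontal mirror with the $C - X$ correction (here $C$ defaults to the per-image maximum of $X$ for self-containment; the text suggests e.g. the dataset-wide maximum):

```python
import numpy as np

def mirror_x(geom_img, C=None):
    """Horizontally flip a geometry image (H, W, 3 = X, Y, Z channels) and
    mirror the X channel as C - X, so the flipped face remains a valid
    geometry rather than a reflected coordinate frame."""
    out = geom_img[:, ::-1, :].copy()     # horizontal image flip
    if C is None:
        C = geom_img[..., 0].max()        # per-image fallback for C
    out[..., 0] = C - out[..., 0]         # mirror the X values
    return out
```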
The training data and resulting generated geometries are shown in \autoref{fig:generating_geom}.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/generating_geometries/GAN_geom.jpg}
\caption{Left: the template aligned to $4$ real 3D facial scans, colored by their $X$, $Y$, and $Z$ values. Center-left: geometry values mapped to an image using the proposed universal mapping. These images are used to train a GAN.
Center-right: fake geometry image generated by the GAN.
Right: the synthesized geometries mapped back to 3D.
}
\label{fig:generating_geom}
\end{figure}
\section{Adding expressions}
\label{sec:expressions}
In previous sections we proposed a method for generating new facial textures and corresponding geometries.
In order to complete the model we must also take expressions into consideration.
We follow \cite{chu20143d}, who define a linear basis for expressions by taking the difference $g_{diff} = g_{expr}-g_{neutral}$ for every face in the set. We then remove the mean difference vector to obtain $\Delta G_{diff} = G_{diff}-\mu_{diff}\mathbbm{1}^T$, and compute the principal components of $\Delta G_{diff}$ to obtain our geometric expression model. The expression model can be applied by randomizing expression coefficients and adding the resulting linear combination of difference vectors to a generated neutral face, as $g_{exp} = g_{neutral} + \mu_{diff} + V_{exp} \alpha_{exp}$. Several expression vectors applied to a single face are shown in \autoref{fig:blend_expression}.
Since the expression model must be applied to neutral faces, we should define a model for neutral faces.
In order to span only the space of neutral expressionless faces, we suggest replacing all the geometries in our training set with their neutral counterparts.
By following this course, the texture model still benefits from all the textures available to us while our geometry model learns to predict only neutral models for any texture with or without expression.
This method can be applied to either the 3DMM based geometry model from \autoref{sec:geom_fit} or the GAN based geometry model described in \autoref{sec:geom_gen}.
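A minimal NumPy sketch of building and sampling this expression model (names are illustrative; the coefficient variances are estimated from the singular values, analogously to the random geometry sampling):

```python
import numpy as np

def build_expression_model(G_expr, G_neutral, k):
    """Expression basis from per-subject differences g_expr - g_neutral
    (columns are subjects): mean difference mu_diff and the top-k
    principal directions V_exp with their singular values."""
    D = G_expr - G_neutral
    mu = D.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(D - mu, full_matrices=False)
    return mu[:, 0], U[:, :k], s[:k]

def add_random_expression(g_neutral, mu_diff, V_exp, s, n_train, rng=None):
    """g_exp = g_neutral + mu_diff + V_exp alpha_exp with random coefficients."""
    rng = rng or np.random.default_rng(0)
    alpha = rng.normal(0.0, s / np.sqrt(n_train))
    return g_neutral + mu_diff + V_exp @ alpha
```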
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/blendshapes/blendshapes.jpg}
\caption{Expression vectors applied to a single face.}
\label{fig:blend_expression}
\end{figure}
\section{Experimental results}
\label{sec:expermintal}
In order to demonstrate the ability of our model to generate new realistic identities, we perform several quantitative as well as qualitative experiments. As a first result, we generate several random textures and obtain their corresponding geometries according to \autoref{sec:geom_fit}. We are able to vary the expression by applying a linear expression model as described in \autoref{sec:expressions}. According to this model, each expression can be represented as a variation from the mean face that leads to a specific facial movement. By combining various movements of the face, one can generate any desired expression. The faces are then rendered under various poses and lighting conditions, as depicted in \autoref{fig:render_experiment}.
Our next qualitative experiment demonstrates the ability of our model to generate completely new textures by combining facial features from different training examples. To this end, we search for the nearest neighbor of each generated texture within the training data. It can be seen in \autoref{fig:ID_var} that the demonstrated examples have nearest neighbors that are significantly different from them and cannot be considered the same identity. In the following section we analyze both the generative ability of our model to produce new faces as well as its realism and similarity to realistic examples. In addition, we also search for generated texture samples that are nearest to several validation set examples. By finding nearby textures, we demonstrate the generalization capability of our model to unseen textures. This is demonstrated in \autoref{fig:NN-exp}.
The previous qualitative assessment is complemented by a more in-depth examination of the nearest neighbors across $10K$ generated faces. For the following experiments we are free to choose the distance metric. We aim for a natural metric that coincides with human perception of faces. We therefore render each generated and real face from a frontal perspective and process the rendered images with a pre-trained facial recognition network. Using a model based on \cite{amos2016openface}, we extract the last feature layer of the network as our facial descriptor. The distance is calculated as $dist=\|D_1 - D_2\|_{2}$, where $D_1,D_2$ are the descriptors corresponding to the first and second face, respectively. By analyzing the distribution of such distances, we can assess the spread of identities within each dataset as well as the relation between different datasets.
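A minimal sketch of the descriptor distance and the nearest-neighbour distances used in this analysis (names are illustrative; the descriptors themselves come from the recognition network):

```python
import numpy as np

def id_distance(D1, D2):
    """L2 distance between two face descriptors."""
    return np.linalg.norm(D1 - D2)

def nn_distances(gen_desc, real_desc):
    """For each generated descriptor (rows of gen_desc), the distance to
    its nearest real descriptor (rows of real_desc)."""
    d = np.linalg.norm(gen_desc[:, None, :] - real_desc[None, :, :], axis=-1)
    return d.min(axis=1)
```

The histogram of `nn_distances` over all generated faces is the distribution analyzed next.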
We use the distribution of distances between generated faces and the training and validation sets of real faces in order to assess the quality of our generative model. In \autoref{fig:ID_var} we plot the distribution of distances between generated samples and their nearest real training samples. The distances appear to be distributed as a shifted Gaussian, implying that, on average, new identities are located in between the real samples and not directly near them. Our analysis of the distances to the neighbors of the validation set, also depicted in \autoref{fig:ID_var}, shows that our model is able to produce identities similar to ones found in the validation set, which were not used to train the model. This validates our claim that our model produces new identities by combining features from the training set faces, and that these identities are not too similar to the training set yet still generalize to the unseen validation set.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/generated_faces/generated_faces.jpg}
\caption{Several generated faces rendered with varying expression and pose under varying lighting conditions.}
\label{fig:render_experiment}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{images/ID_variation/ID_variation.jpg}
\caption{Left: Distribution of distances between training and generated IDs. Right: Cumulative sum of distances between training and test in dark blue, and generated to test in light blue.}
\label{fig:ID_var}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/NN/NN.jpg}
\caption{Left: Generated textures coupled with their nearest neighbors within the training set. Right: Validation textures coupled with the nearest of $10$k generated textures.}
\label{fig:NN-exp}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/tsne/tsne.jpg}
\caption{t-SNE embedding of face IDs. Left: real versus generated IDs. Center: real versus 3DMM IDs. Right: generated IDs labeled according to real data clusters.}
\label{fig:T-SNE-exp}
\end{figure}
Following \cite{karras2017progressive}, we perform an analysis of the sliced Wasserstein distance (SWD) \cite{rabin2011wasserstein} on our generated textures and geometries. By measuring the distance between the distribution of patches taken from textures generated by our model and patches taken from the training data, and comparing it to the same measure for faces generated by 3DMM, we can analyze the multi-resolution similarity between the generated and real examples. \autoref{tbl:SWD_texture} and \autoref{tbl:SWD_geom} show the SWD at each resolution relative to the training data. In both experiments, the SWD of our model is lower than that of the PCA based 3DMM at every resolution, indicating that at every level of detail the patches produced by our model are more similar to patches in the training data.
In addition to assessing the patch-level feature resemblance, we wish to uncover the distances between the distributions of identities. To this end we conduct two more experiments which gauge the similarity between the distribution of generated identities and that of the real ones. In order to qualitatively assess these distributions, we depict our identities using the common dimensionality reduction scheme t-SNE \cite{maaten2008visualizing}. \autoref{fig:T-SNE-exp} depicts the low dimensional representation of the embedding proposed by our model and the 3DMM, overlaid on top of the real data embedding. In addition, \autoref{fig:T-SNE-exp} also depicts the clustering of different ethnic groups as well as gender, shown as data points of different colors. By assigning each generated sample to its nearest cluster, we obtain an automatic annotation of our generated data. Finally, we perform a quantitative analysis of the difference between identity distributions using SWD. The results of this experiment are depicted in \autoref{tbl:SWD_ids}.
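For reference, a minimal Monte-Carlo sketch of the sliced Wasserstein distance between two point sets of equal size (e.g., flattened patches or identity descriptors; the projection count is an arbitrary choice):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=64, rng=None):
    """Monte-Carlo sliced Wasserstein-1 distance between two point sets
    X, Y (n_points x dim): project onto random unit directions, sort the
    projections, and average the L1 distance between the sorted values."""
    rng = rng or np.random.default_rng(0)
    dirs = rng.normal(size=(n_proj, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
    px = np.sort(X @ dirs.T, axis=0)   # sorted 1D projections of X
    py = np.sort(Y @ dirs.T, axis=0)   # sorted 1D projections of Y
    return np.abs(px - py).mean()
```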
\begin{table}[]
\centering
{\small
\begin{tabularx}{\linewidth}{lXXXXXXXX}
Resolution & 1024 & 512 & 256& 128& 64& 32& 16& avg\\ \hline
Real & 3.33 & 3.33 & 3.35 & 2.93 & 2.53 & 2.47 & 4.16 & 3.16\\
\textbf{{Proposed}} & 35.32 & 18.13 & 10.76 & 6.41 & 7.42 & 10.76 & 34.86 & 17.67\\
PCA &92.7 & 224.01 & 156.87 & 66.20 & 15.71 & 33.97 & 104.08 & 99.08\\
\end{tabularx}
}
\caption{Sliced Wasserstein distance between generated and real texture images. }
\label{tbl:SWD_texture}
{\small
\begin{tabularx}{\linewidth}{lXXXXXXXX}
Resolution & 1024 & 512 & 256& 128& 64& 32& 16& avg\\ \hline
Real & 6.08 & 2.41 & 3.40 & 2.45 & 3.10 & 2.75 & 1.86 & 3.15\\
\textbf{{Proposed}} & 11.8 & 9.58 & 27.13 & 44.5 & 38.05 & 11.03 & 2.16 & 20.61\\
PCA & 272.7 & 43.94 & 29.6 & 50.72 & 44.43 & 13.21 & 4.67 & 65.61 \\
\end{tabularx}
}
\caption{Sliced Wasserstein distance between generated and real geometry images. }
\label{tbl:SWD_geom}
{\small
\begin{tabularx}{\linewidth}{lXXXX}
Method & 3DMM & \textbf{{Proposed}} & train\\ \hline
train & 59.88 & 35.82 & - \\
test & 75.3 & 62.09 & 42.4 \\
\end{tabularx}
}
\caption{Sliced Wasserstein distance between distributions of identities from different sets. }
\label{tbl:SWD_ids}
\end{table}
\section{Discussion}
In this paper we present a new model for generating high detail textures and corresponding geometries of human faces. Our claim is that an effective method for processing geometric surfaces via CNNs is to first align the geometric dataset to a template model, and then map each geometry to a 2D image using a predefined mapping. Once in image form, a GAN loss can be employed to train a generator model which aims to imitate the distribution of the training data images. We further show that by training a generator for both textures and geometries, it is possible to synthesize high-detail textures and geometries which are linked to each other by a single canonical mapping.
In addition, we describe in \autoref{sec:geom_fit} several methods for fitting 3DMM geometries by learning the underlying relation between texture and geometry, a relation that has been largely neglected in previous work. In \autoref{sec:geom_fit} we also provide a quantitative and qualitative evaluation of each geometry reconstruction method. Our proposed face generation pipeline therefore consists of a high resolution texture generator combined with a geometry that was either produced by a similar geometric generation model or by employing a learning scheme which produces the most likely corresponding 3DMM coefficients.
Besides the main pipeline, we propose two extra data processing steps which improve sample quality. In \autoref{sec:mapping} we describe the design and construction of our canonical mapping. Our mapping by design is intended to reduce distortion in important high detail areas while spreading the flattening distortion to non essential areas. Our mapping was also designed in order to take maximal advantage of the available area in each image. In \autoref{sec:mapping} we also show that our improved mapping compared to \cite{slossberg2018high} indeed preserves delicate texture details in our predefined high importance regions. In \autoref{sec:corrupted_data} we also present a new technique for dealing with partially corrupted data. This is especially important when the data acquisition is expensive and prone to errors. By adding a corruption mask to the data at train time the network is able to ignore the affected areas while still learning from the mostly unaffected ones. In the case of our dataset this increases the amount of usable data by roughly $20\%$.
In order to evaluate our proposed model we performed a quantitative as well as qualitative analysis of several of its aspects. Our main objective was to create a realistic model, a requirement which we break down into several factors. Our model should produce high quality, plausible facial textures which look as much like the training data as possible, but should also compose new faces not seen during training rather than repeat previously seen faces. To that end, we use an efficient approximation of the Wasserstein distance between distributions in order to evaluate the local and global scale features of the produced textures and geometries, as well as the distance between the distributions of real and generated identities. Our results show that in both identity distribution and image feature resemblance we outperform the 3DMM model, which is the most widely used model to date.
\begin{acks}
This research was partially supported by the Israel Ministry of Science, grant number 3-14719 and the Technion Hiroshi Fujiwara Cyber Security Research Center and the Israel Cyber Bureau.
We would also like to thank Intel RealSense group for sharing their data and computational resources with us.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The generation of realistic examples of everyday objects is a challenging and interesting problem which relates to several research fields such as geometry, computer graphics, and computer vision. The ability to capture the essence of a class of objects is key to the task of generating diverse datasets, which may in turn be used for training many machine learning based algorithms. The main challenge posed by the task of data generation is to construct a model that is able to generalize to many variations while still maintaining high detail and quality. Furthermore, the challenge of generating geometric data is even greater, since both the geometry and the texture of an object must be synthesized while taking into account the underlying relations between them.
In this work, we propose to learn the latent space of 3D textured objects. We focus our efforts on human faces, and show that by using a canonical transformation that maps geometric data to images, we are able to learn the distribution of such images via the GAN framework. By representing both texture and geometry of the face as transformed geometric images, we can learn the underlying distribution of faces, and later generate new faces at will. The generation of realistic human faces is a useful tool with applications in face recognition, puppetry, reconstruction and rendering. Our main contributions are the proposition of a new model for 3D human faces which is composed in the 2D image domain, as well as the modeling of the relation between texture and geometry, further improving realism. By generating geometries and textures using state of the art GANs, it is possible to create highly detailed data samples while maintaining the ability to generalize to unseen data, two desirable properties that are often at odds.
While deep learning and convolutional networks have revolutionized many fields in recent years, they have been mostly employed on structured data which is intrinsically ordered. Arranged data such as audio, video, images, and text can be processed according to the order of samples, frames, pixels or words. This inherent ordering permits the application of convolution operations which are the main building block of convolutional networks, a powerful and popular variant of deep networks. Contrary to typical parameterized data, geometric data represented by two dimensional manifolds lacks an intrinsic parameterization and is therefore more difficult to process via convolutional networks. This important class of data is crucial to the task of modeling our world as most solid objects can be represented by a closed manifold accompanied by a texture overlay.
Recently, geometric data has grown dramatically in availability as more accurate and affordable acquisition devices have come into use. This abundance of data has attracted the attention of the computer vision and machine learning communities, leading to many new approaches for modeling and processing of geometries. One family of techniques for geometric data processing aims to define new operators which can be applied directly to the manifold and are able to replace to some extent the convolution operation within the processing pipeline. Other methods attempt to process geometries in the spectral domain or represent them in voxel space. These families of methods each have their merits but suffer from issues such as loss of generality and memory inefficiency. In contrast, we propose to transform our geometric data via a canonical mapping into two dimensional gridded data. This allows us to process the geometric data as images. While this approach on its own is not new, we show that by careful construction of the transformed dataset we are able to harness the power of convolutional networks with little loss of data fidelity. Furthermore, we are able to design our transformation process in order to control the distortion, reducing it in important areas while spreading it to the non-essential areas of the data. Finally, we propose to encode both the geometry and texture as mapped images, which means the processing pipeline remains identical for both cases.
\section{Related work}
Data augmentation is a common practice within the machine learning community. By applying various transformations to existing data samples, it is possible to simulate a much larger dataset than is available and to introduce robustness to transformations. A more advanced method for data augmentation takes into account the geometry of the scene. This technique, which we term geometric data augmentation, consists of a geometry recovery stage, a transformation applied to the recovered geometry, and finally the creation of a new image by projecting the geometry. In \cite{masi2016we}, the authors show that by performing geometric data augmentation on a dataset of facial images they are able to reach state of the art results on difficult facial recognition benchmarks. Despite its proven usefulness, geometric augmentation still lacks the ability to create completely new data samples outside the scope of the dataset.
A complementary method to data augmentation is data generation. By constructing a high quality model for data generation, it is possible to produce an infinitely large dataset. In addition, some models may permit control over the characteristics of each data sample. Within the domain of faces this would mean control over parameters such as age, gender, expression, pose and lighting conditions. When dealing with image data, a recent popular approach is to use a GAN \cite{goodfellow2014generative}, which is in essence a neural network with a trainable loss function. While this class of methods is well suited for images, its reformulation in the context of geometry is more challenging, and several competing approaches exist in this field. \cite{Gecer_2018_ECCV} and \cite{shrivastava2017learning} propose to construct samples from a low quality linear model, and then use a GAN in order to enforce the realism of the data. \cite{Litany_2018_CVPR} and \cite{ranjan2018generating} both propose the use of convolutional autoencoders which are trained on pre-aligned geometric data. These methods, however, do not take into account the model texture. In addition, \cite{wu2016learning} have used the popular voxel grid representation for geometries, and are able to generate 3D objects using this notion. This method, however, is memory inefficient and in practice can produce only coarse geometries.
In addition to data augmentation and generation, pose normalization aims to decouple the subject's identity from other factors, such as expression and pose, which may confuse a classifier. This can be done either by reconstructing and manipulating the facial geometry or by performing the normalization directly in the image domain. While \cite{chu20143d} and \cite{bas20173d} leverage a geometric representation in order to transform the data, \cite{tran2017disentangled} and \cite{huang2017beyond} are able to frontalize faces directly in the image domain as part of their pipeline. Although these methods help the training process by limiting data variation, they still do not explicitly model new data samples, which is our ultimate goal.
An additional method for geometrically manipulating facial data which has gained success is geometric reconstruction from a single image. One popular family of methods aims to fit a parametric model to an image. This idea was first introduced by \cite{blanz1999morphable} and has since been extended by works such as \cite{booth20173d}. An approach which involves regressing the coefficients of a given model via a deep network was popularized by \cite{richardson20163d} and extended by \cite{richardson2017learning} and \cite{tran2017regressing}. More recently, methods which are not restricted to a specific model, or which learn the model at training time, such as \cite{sela2017unrestricted}, \cite{Tewari_2018_CVPR} and \cite{tran2018nonlinear}, have been able to move beyond the restrictive assumptions of linear models such as the 3DMM. Complementary efforts such as
\cite{Deng_2018_CVPR} propose to reconstruct occluded texture regions in order to obtain a full textured reconstruction from challenging poses as well. Another recent work by \cite{saito2017photorealistic} focuses on improving the quality of facial texture used in reconstructed faces in order to improve realism. An additional complementary approach, proposed by \cite{Guler2016DenseReg}, is to learn a direct mapping from an image to a template model. All of the above approaches, while useful, are based on fitting a geometry to a given image by relying on some underlying geometric model. This model, however, is not explicitly used to generate novel faces, but rather to reconstruct existing ones.
Our most direct competition comes from several works in the field of facial generative modeling. The seminal work by \cite{blanz1999morphable}, which pioneered the field almost two decades ago, is still widely used within many methods, some of which were mentioned above. The proposed linear 3D Morphable Model is extremely flexible; however, it has the drawback of using a small number of PCA vectors, which limits its ability to represent highly detailed models. A recent large scale effort by \cite{booth20163d} and \cite{booth2018large} has produced the largest publicly known 3DMM by scanning $10k$ subjects and using their scans to construct the model. In contrast to linear models, much more complex relations can be captured by training deep networks to take the part of data generators. To this end, \cite{Tewari_2018_CVPR} and \cite{tran2018nonlinear} were able to jointly learn a reconstruction encoder while also learning the facial model itself. Given the trained model, one could plausibly generate faces; however, the authors have not shown any experiments to this effect. \cite{ranjan2018generating}, on the other hand, employed mesh autoencoders to construct new facial geometries; however, this method does not produce texture and was trained on a limited dataset of very few subjects. In this work we propose a new GAN based facial geometric generative model, and analyze the ability of our model to extend to new identities. We also relate the geometric and texture models, which are intrinsically correlated, and discuss different ways of exploiting this correlation for our cause.
\section{3D Morphable Model}
\label{sec:3dmm}
One of the early attempts to capture facial geometry and photometry (texture) by a linear low dimensional space is the Blanz and Vetter \cite{blanz1999morphable} {\em 3D Morphable Model} (3DMM).
Using the 3DMM, textures and geometries of faces can be synthesized as a linear combination of the elements of an orthogonal basis.
The basis is constructed from a collection of facial scans and by applying the principal component analysis after alignment of the faces.
That is, the basis construction process relies on a vertex to vertex alignment of the facial scans, which is achieved by computationally finding a dense correspondence between each scan to a template model.
The aligned vertices provide a set of spatial and texture coordinates which are then decomposed into the principal components of the set.
Once the basis is constructed, it is possible to represent each face by projecting it onto the first $k$ components of both the geometry and the texture bases.
This linear model was used to reconstruct 3D faces from 2D images;
Blanz and Vetter \cite{blanz1999morphable} took an analysis-by-synthesis approach, which attempts to fit a projected surface model embedded in $\mathbb{R}^3$ into a given 2D image.
This was established by constructing a fully differentiable parametric image formation pipeline, and performing a gradient descent procedure optimizing for an image to image loss on the model parameters.
The parameters consist of the geometry and texture models coefficients of the face, as well as the lighting and pose parameters.
This process results in a set of coefficients which encode the geometry and texture of any given face up to their projections on the principal components basis, effectively reconstructing the curved surface structure and the photometry of the given image of a face.
\subsection{Model Construction}
According to the 3DMM model, each face is represented as an ordered set of $m$ geometric coordinates \mbox{$g = (\hat x^1, \hat y^1, \hat z^1, \hat x^2,\ldots,\hat y^m, \hat z^m) \in \mathbb{R}^{3m}$} and texture coordinates in RGB space\\ \mbox{$t = (\hat r^1, \hat g^1, \hat b^1, \hat r^2,\ldots,\hat g^m, \hat b^m) \in \mathbb{R}^{3m}$}.
Given a set of $n$ faces, each represented by geometry $g_i$ and texture $t_i$ vectors, construct the $3m \times n$ matrices $G$ and $T$ by column wise concatenation of all geometric coordinates and all corresponding texture coordinates.
Since the alignment process ensures an ordered universal representation of all faces, Principal Component Analysis (PCA) \cite{jolliffe1986principal} can be applied to extract the optimal first $k$ orthogonal basis components in terms of $L_2$ reconstruction error.
To that end, denote by $V_g$ and $V_t$ the $3m \times n$ matrices that contain the left singular vectors of $\Delta G = G - \mu_g\mathbbm{1}^T$ and $\Delta T = T - \mu_t\mathbbm{1}^T$, respectively, where $\mu_g$ and $\mu_t$ are the average geometry and texture of the faces and $\mathbbm{1}$ is a vector of ones.
By ordering $V_g$ and $V_t$ according to the magnitude of the singular values in a descending order,
the texture and the geometric coordinates of each given face can be approximated by the linear combination
\begin{equation}
g_i = \mu_g + V_g\alpha_{g_i},\ \ \ t_i = \mu_t + V_t\alpha_{t_i},
\label{eq:3dmm_model}
\end{equation}
where $\alpha_{g_i}$ and $\alpha_{t_i}$ are the coefficients vectors, obtained by $\alpha_{g_i} = V_g^T(g_i - \mu_g)$ and $\alpha_{t_i} = V_t^T(t_i - \mu_t)$.
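As a concrete illustration, the basis construction and coefficient projection above can be sketched in a few lines of NumPy; the data here is random, and the sizes $m$ and $n$ are illustrative stand-ins for real aligned scans.

```python
import numpy as np

# Toy 3DMM construction on random data; sizes and names are illustrative.
rng = np.random.default_rng(0)
m, n = 50, 20                      # vertices per face, number of scans
G = rng.normal(size=(3 * m, n))    # column i holds the geometry vector g_i

mu_g = G.mean(axis=1, keepdims=True)           # average geometry mu_g
dG = G - mu_g                                  # Delta G = G - mu_g 1^T

# The left singular vectors of the centered data give the PCA basis V_g.
V_g, s, _ = np.linalg.svd(dG, full_matrices=False)

# Project a face onto the basis and reconstruct it via g = mu_g + V_g alpha.
alpha = V_g.T @ (G[:, [0]] - mu_g)             # coefficients alpha_{g_0}
g_rec = mu_g + V_g @ alpha                     # reconstruction of g_0
```

With the full (untruncated) basis, the reconstruction is exact up to floating-point error, since every centered face lies in the span of the left singular vectors.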
Following this formulation, it is possible to use such a model to generate new faces by randomly selecting the geometry and texture coefficients and plugging them into \autoref{eq:3dmm_model}.
According to \cite{blanz1999morphable}, the distribution of the coefficients can be approximated as a multivariate normal distribution, such that the probability for a coefficient vector $\alpha$ is given by
\begin{equation}
P(\alpha) \sim \exp\left \{-\frac{1}{2}\alpha^T\Sigma^{-1}\alpha\right \},
\label{eq:3dmm_dist}
\end{equation}
where $\Sigma$ is a covariance matrix that can be empirically estimated from the data, and is generally assumed to be diagonal.
\subsection{Synthesis model}
The 3D morphable model is useful not only in the context of representation and reconstruction, but, as noted in the previous section, it also allows for the generation of new faces which can not be found in the training set.
The synthesis is achieved by randomizing linear combinations of the basis vectors.
The random coefficients are drawn according to the model prior from the distribution described in \autoref{eq:3dmm_dist}.
As is common practice when dealing with principal components, only the first $k \ll n$
vectors are taken into account as part of the model.
The number $k$ can be obtained by analyzing the decay of the singular values, which is proportional to the error produced by ignoring the associated basis vectors.
By excluding the vectors whose singular values are sufficiently small, we can guarantee a minimal loss of data.
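The truncation and sampling steps can be sketched as follows; the mock singular values, the $95\%$ energy threshold, and the per-component variances are illustrative assumptions rather than the paper's actual choices.

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.sort(rng.uniform(0.01, 10.0, size=40))[::-1]   # mock singular values

# Keep the smallest k whose components explain 95% of the variance.
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.95) + 1)

# Draw coefficients from a diagonal Gaussian prior, with per-component
# standard deviations proportional to the singular values (illustrative).
sigma = s[:k] / np.sqrt(40)
alpha = rng.normal(loc=0.0, scale=sigma)              # one random coefficient vector
```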
Even though two decades have passed since the inception of the 3DMM, it is still widely used in cutting edge applications.
By harnessing the generative powers of this model, it has been used as a tool for data augmentation and data creation for training of convolutional networks \cite{sela2017unrestricted,richardson20163d,richardson2017learning,Gecer_2018_ECCV}.
Furthermore, the model has been integrated into deep learning pipelines in order to provide structure and regularization to the learning process \cite{Tewari_2018_CVPR}.
In spite of the wide use and apparent success of the model, it is clear that the faces obtained from it tend to be over-smoothed and, in some cases, unrealistic.
Furthermore, the multivariate normal distribution from which the coefficients are drawn is oversimplified and does not represent the true distribution of faces.
In particular, the texture and geometry are treated as two uncorrelated variables, in contradiction to empirical evidence. \autoref{fig:3DMM_faces} shows a few samples of synthesized 3DMM faces and depicts the difference between the distributions of 3DMM generated faces and real ones.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{images/3DMM/3DMM_figure.jpg}
\caption{Top: multi-dimensional scaling \cite{shamai2018efficient} used to depict the first $k=200$ 3DMM texture and geometry coefficients of real faces versus 3DMM generated faces. Bottom: examples of 3DMM generated faces. The 3DMM was constructed from our own training set.}
\label{fig:3DMM_faces}
\end{figure}
\section{Progressive growing GAN}
\label{sec:GAN}
Generation of novel plausible data samples requires learning the underlying distribution of the data. Given a perfect discriminator which can differentiate between real and fake data samples it is possible to construct a training loss for a generator model which tries to maximally confuse the discriminator.
For complex realistic data, finding such a discriminator is a difficult problem on its own and requires learning from realistic and fake examples.
The fundamental idea of the GAN framework is to train both of these networks simultaneously.
Essentially, this means that we use a trainable loss function for the generator which constantly evolves as the generator improves.
This process can be formulated as \autoref{eq:minimaxgame-definition}
\begin{equation}
\footnotesize
\label{eq:minimaxgame-definition}
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_{z}(z)}[\log (1 - D(G(z)))],
\end{equation}
where $D$ and $G$ are the discriminator and generator parametric functions, and $x$ and $z$ are the real data samples and the latent representation vectors, respectively.
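As a minimal numeric illustration of \autoref{eq:minimaxgame-definition}, the following Monte-Carlo sketch evaluates $V(D,G)$ on one-dimensional data for a fixed toy discriminator; the distributions, the discriminator, and the shift generator are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy 1-D setup: real data ~ N(3, 1); the generator shifts latent noise.
x = rng.normal(3.0, 1.0, size=10_000)      # samples from p_data
z = rng.normal(0.0, 1.0, size=10_000)      # latent samples from p_z

def D(t):                                   # a fixed logistic discriminator
    return sigmoid(2.0 * (t - 1.5))         # "real" roughly means t > 1.5

def V(theta):                               # Monte-Carlo estimate of V(D, G)
    g = z + theta                           # generator G(z) = z + theta
    return np.mean(np.log(D(x))) + np.mean(np.log(1.0 - D(g)))
```

A generator matching the data distribution (here $\theta = 3$) drives the second term down, so the minimizing generator fools this fixed discriminator more than $\theta = 0$ does.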
Since we wish to produce high resolution textures for facial geometries, we propose to use a recent successful GAN, namely \cite{karras2017progressive}.
The progressive growing GAN is built in levels which gradually increase the resolution of the output image.
During the training process each level is added consecutively while smoothly blending the new levels into the output as they are added.
This, and several other techniques were shown to increase the training stability as well as the variation of the produced data.
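The fade-in used while a new resolution level is being blended in can be sketched as a simple convex combination; the tensor shapes and the nearest-neighbor upsampling are illustrative simplifications of the actual architecture.

```python
import numpy as np

rng = np.random.default_rng(6)

# When a new level is added, its output is smoothly blended with the
# upsampled output of the previous level:
#   out = (1 - alpha) * upsample(prev) + alpha * new,  alpha: 0 -> 1.
prev = rng.uniform(size=(1, 3, 8, 8))              # previous level's output
new = rng.uniform(size=(1, 3, 16, 16))             # new level's output

up = prev.repeat(2, axis=2).repeat(2, axis=3)      # nearest-neighbor upsample

def blend(alpha):
    return (1.0 - alpha) * up + alpha * new
```

At $\alpha = 0$ the new level is invisible; at $\alpha = 1$ the network behaves as if the level had always been present.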
The difficulty concerning geometric data is that it lacks the regular intrinsic ordering which exists in 2D images, which are essentially large matrices.
For this reason, it is unclear how to apply spatial filtering, which is the core building block of neural network layers, to arbitrary geometric structures.
Significant progress has been made in this direction by several recent papers. A comprehensive survey is presented in \cite{bronstein2017geometric}. These methods, however, are not yet widely used and supported within standard deep learning coding libraries.
In order to harness the full power of recent state of the art developments in the field, it is sometimes preferable to work in the domain of images. For this reason, we built a data processing pipeline which maps the scanned geometric data into a flat canonical image, allowing the utilization of the progressively growing GAN without major modifications.
\section{Training data construction}
\label{sec:training_data_construction}
In this section we describe the process by which we produce our training data.
We start with digital geometric scans of human faces.
By making use of a surface to surface alignment process \cite{weise2009face}, we are able to bring all the scans into correspondence with each other.
Next, applying a universal mapping from the mesh to the 2D plane, we can transfer the facial texture into a canonically parametrized image.
These aligned texture images are used to train our texture generation model.
We provide several alternatives for constructing the facial geometry which accompanies each texture. One solution is to learn the relation between 3DMM texture and geometry coefficients which is prevalent in the training data.
In addition, we can similarly process the geometric data of the faces as well. By applying the same canonical transformation and encoding the $\left(x,y,z\right)$ coordinates of the model vertices as RGB channels of an image, we can learn to generate geometries as well as textures using the same methodology.
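The geometry-as-image encoding can be sketched as follows; the vertex count, image resolution, and per-vertex pixel assignments are illustrative assumptions (a real pipeline would rasterize and interpolate over the UV triangulation rather than scatter single pixels).

```python
import numpy as np

rng = np.random.default_rng(3)

# Encode (x, y, z) vertex coordinates as the RGB channels of a geometry
# image, at pixel locations given by the canonical mapping.
m, res = 500, 64
xyz = rng.uniform(-1.0, 1.0, size=(m, 3))           # aligned vertex positions
pix = rng.choice(res * res, size=m, replace=False)  # unique pixel per vertex
rows, cols = pix // res, pix % res

# Normalize (x, y, z) into [0, 1] so they can be stored as color channels.
lo, hi = xyz.min(axis=0), xyz.max(axis=0)
geom_img = np.zeros((res, res, 3))
geom_img[rows, cols] = (xyz - lo) / (hi - lo)

# Decoding a pixel recovers the vertex position (up to float roundoff).
xyz_back = geom_img[rows, cols] * (hi - lo) + lo
```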
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth, trim={2cm 5cm 2cm 5cm}, clip]{images/pipeline/data_acq.jpg}
\caption{Left to right: raw scan with landmark points; the template deformed to fit the scan, with the texture transferred onto it; the canonical UV mapping between the deformed template and the 2D image; the final mapped texture as an image.}
\label{fig:my_label}
\end{figure}
\subsection{Face scanning and marking}
Our training data formulation process starts by acquiring digital high resolution facial scans.
Using a 3DMD scanner, roughly $1000$ different subjects were scanned, each making five distinct facial expressions including a neutral expression.
The subjects were selected to form a balanced mixture of genders and ethnic backgrounds.
Each scan is comprised of the facial geometry, represented by a triangulated mesh, as well as two high resolution photographs, which capture a 180 degree view of the subject's face.
Each mesh triangle is automatically mapped to one of the photos, allowing the facial texture to be transferred onto the mesh.
Due to the variety of facial geometries, as well as limitations of the scanning process, the meshes may contain imperfections such as holes and areas of missing texture.
These data corruptions may affect the training data samples that are given to the network and care must be taken not to hinder the training process.
The straightforward path is to filter out the erroneous data samples completely.
This leads to a significant reduction of roughly 20\% in the size of the training set. Instead, we propose a new approach which incorporates corrupted scans without compromising the integrity of the training data.
We describe our approach to learning from corrupted data in \autoref{sec:corrupted_data}.
In order to facilitate the alignment process described in \autoref{sec:Non-Rigid_Alignment}, we annotate each face by 43 landmark locations.
These locations are determined automatically by projecting the facial surface into an image and applying one of many 2D facial landmark detectors such as Dlib \cite{dlib09}.
The landmarks are then back-projected onto the surface to determine their location.
Finally, the locations of the automatically generated landmarks are manually refined in order to prevent displacements that could lead to large errors during the alignment process.
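A hedged sketch of the project-detect-back-project loop described above: here the renderer is replaced by a simple orthographic projection, and the 2D detector output is mocked, so all names and the camera model are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

verts = rng.normal(size=(1000, 3))            # facial surface vertices

def project(v):                               # orthographic camera along z
    return v[:, :2]

pix = project(verts)

# Suppose a 2D landmark detector fired at the projection of vertex 42.
landmark_2d = pix[42]

# Back-project: take the vertex whose projection is closest to the detection.
i = int(np.argmin(np.linalg.norm(pix - landmark_2d, axis=1)))
landmark_3d = verts[i]
```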
\subsection{Non-Rigid Alignment}
\label{sec:Non-Rigid_Alignment}
The goal of the alignment process is to find a dense correspondence between all the facial geometries.
It is performed by aligning all scans to a single facial template.
This correspondence is achieved by deforming the template into the scanned surface, a process guided by the pre-computed landmarks.
Initially, a rigid alignment between the scanned surface and the template is performed as a preprocessing step.
This is done by solving for the rotation, translation, and uniform scaling between the scan and template landmarks.
The deformation process is performed by defining a fitting energy which takes into account both surfaces and known landmarks and measures how closely they fit each other.
The energy also includes a regularization term which penalizes non-smooth deformations.
The template mesh is deformed by moving each vertex according to the energy gradient in an iterative manner.
The loss function which is minimized during the alignment process was first described by \cite{blanz1999morphable} and is comprised of $3$ terms which contribute to the final alignment.
The first term accumulates the distances between the facial landmark points on the scanned facial surface and their corresponding points on the template mesh.
The second term accumulates the distances between all the template mesh points to the scanned surface.
The third term serves as a regularization, and penalizes non-smooth deformations.
The loss term is minimized by taking the derivative of the loss with respect to the template vertex coordinates, then deforming the template in the gradient direction.
This process naturally provides a dense point to point correspondence between each and every scanned surface.
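The iterative minimization above can be sketched as plain gradient descent on a simplified three-term energy; the term weights, the step size, and the point-to-surface proxy (a fixed nearest scan point per vertex) are illustrative assumptions rather than the exact formulation of \cite{blanz1999morphable}.

```python
import numpy as np

rng = np.random.default_rng(5)

T = rng.normal(size=(200, 3))            # template vertices
S = T + 0.3 * rng.normal(size=(200, 3))  # "scan" points, one per vertex
lm = np.arange(0, 200, 10)               # indices with landmark correspondences

def energy(P):
    e_lm = np.sum((P[lm] - S[lm]) ** 2)  # landmark-distance term
    e_surf = np.sum((P - S) ** 2)        # point-to-surface proxy term
    e_reg = np.sum((P - T) ** 2)         # deformation-regularity proxy term
    return e_lm + e_surf + 0.1 * e_reg

def grad(P):                             # gradient of the energy above
    g = 2.0 * (P - S) + 0.2 * (P - T)
    g[lm] += 2.0 * (P[lm] - S[lm])
    return g

P = T.copy()
for _ in range(20):                      # iterative deformation
    P -= 0.05 * grad(P)                  # move vertices along -gradient
```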
\subsection{Universal mapping}
\label{sec:mapping}
Given a facial scanned surface with unknown parametrization, our goal in this section is to discover a 2D parameterization of the surface which maps it to a unit rectangle, such that this mapping is consistent for all scans.
In \autoref{sec:Non-Rigid_Alignment}, we described the process of aligning the facial surface template to a scanned facial surface, and by that, bring them into correspondence.
The obtained correspondence allows us to transfer the parametrization from the template to all scans, thus establishing a universal parametrization. In the following, we define the unique parameterization between the template face and the unit rectangle.
The authors of \cite{slossberg2018high} defined the mapping between the scan and the plane by using a ray casting technique built into the animation rendering toolbox of Blender \cite{blender}.
\autoref{fig:uv_result} depicts several examples of the resulting mapped facial photometry.
Although it would be possible to make use of the same parametrization, an alternative definition may suit us better.
The Blender mapping, for example, does not exploit the entire squared image for the mapping.
Moreover, it does not take the facial structure into account.
The eyes, nose, and mouth, for instance, clearly contain more details than smoother parts of the face such as the cheeks and forehead.
It is reasonable to assume that it would be easier to learn and reconstruct the main features, perhaps at the expense of other parts if they take up a larger portion of the input images. To that end, we propose to construct a weighted parametrization that will allow us to control the relative area in the plane taken up by each facial feature.
In \cite{floater1997parametrization}, the authors presented a parametrization technique that allows choosing, for each vertex, its barycentric coordinates with respect to its neighbors.
The authors demonstrate that any set of barycentric coordinates has a unique planar graph with a valid triangulation that fulfills it.
As an extension, they also provided a method for a weighted least squares parametrization that allows some control over the edge lengths in the resulting parametrization.
The method is briefly described below.
Given any triangulated mesh, the objective is to map it into a valid planar graph with the same connectivity.
Assuming a mesh with $N$ vertices, choose a set of $K$ boundary vertices from the mesh and fix their 2D mapping values to some desired convex boundary, $u_1,...,u_K$.
For any other vertex $i > K$ in the mesh, choose a set of non-negative barycentric coordinates $\lambda_{i,j}$, such that
$\sum_{j=1}^N \lambda_{i,j} = 1$, and $\lambda_{i,j} = 0$ if and only if $i$ and $j$ are not connected.
Then, for $i=K+1,...,N$, solve the linear system of equations
\begin{equation}
\label{eq:baricentric}
u_i = \sum_{j=1}^N \lambda_{i,j}u_j.
\end{equation}
The authors of \cite{floater1997parametrization} prove that \autoref{eq:baricentric} has a unique closed-form solution that coincides with the chosen barycentric coordinates.
According to \cite{floater1997parametrization}, this technique could be extended to a weighted least square parametrization.
For any desired set of weights $w_{i,j}$, it was shown that the choice of
\begin{equation}
\lambda_{i,j} = \frac{w_{i,j}}{\sum_{j:(i,j)\in E} w_{i,j}},
\end{equation}
minimizes the functional $\sum_{j:(i,j)\in E} w_{i,j} \|u_i - u_j\|^2$, where $E$ represents the set of edges.
Following this technique, we designed the weights $w_{i,j}$ such that the eyes, nose and mouth receive a larger area in the parametrization plane. We defined a weight for each vertex in the template face, and then gave each edge the average weight of its two adjacent vertices.
Note that the resulting edge lengths also depend on the density of vertices in the mesh.
In other words, when choosing a constant weight for all edges, the edge lengths of the resulting parametrization, termed the uniform barycentric parametrization, are not constant.
To design the edge weights more intuitively, we normalize them by the ones resulting from the uniform barycentric parametrization.
A visualization of the edge weights is shown in \autoref{fig:uv_result}.
To choose the boundary vertices $u_1,\ldots,u_K$, we follow the outer boundary of the facial mesh, starting from the center bottom (a point on the chin), while measuring the lengths of the edges we pass through, $L_1,\ldots,L_K$.
Assume the image boundary is parametrized by $C(t) = \{x(t), y(t)\}$ for $0 \leq t \leq 1$, such that $C(0)=C(1)$ is the bottom center of the image.
Then, we set $u_i = C(t_i)$, where
\begin{equation}
t_i = \frac{\sum_{j=1}^i L_j}{\sum_{j=1}^K L_j}.
\end{equation}
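The linear system of \autoref{eq:baricentric} can be illustrated on the smallest interesting instance: four boundary vertices fixed to the unit square, and a single interior vertex with uniform barycentric coordinates; this toy mesh and the uniform weights are assumptions for illustration only.

```python
import numpy as np

# Floater-style barycentric mapping on a 5-vertex mesh: K = 4 boundary
# vertices fixed to the unit square, one interior vertex (index 4).
K, N = 4, 5
u = np.zeros((N, 2))
u[:K] = [[0, 0], [1, 0], [1, 1], [0, 1]]          # fixed boundary u_1..u_K

# Uniform barycentric coordinates: the interior vertex is connected to all
# four boundary vertices with equal weight 1/4 (rows sum to one).
lam = np.zeros((N, N))
lam[4, :4] = 0.25

# Solve u_i = sum_j lam_ij u_j for the interior vertices only:
# (I - Lam_II) u_I = Lam_IB u_B.
interior = np.arange(K, N)
A = np.eye(len(interior)) - lam[np.ix_(interior, interior)]
b = lam[np.ix_(interior, np.arange(K))] @ u[:K]
u[interior] = np.linalg.solve(A, b)
```

As expected, the interior vertex lands at the average of its neighbors, the center of the square.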
Lastly, unlike \cite{slossberg2018high}, we propose to construct a symmetric mapping, in order to augment the data by mirroring the training samples.
This could be done by ensuring that the template is intrinsically symmetric, as well as the choice of boundary vertices and edge weights.
The resulting mapping and a visualization of the edge weights are shown in \autoref{fig:uv_result}.
The rightmost part in \autoref{fig:uv_result} shows that when mapping back the unwrapped texture to the facial geometry, a better resolution is obtained when using the proposed method.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{images/uv_mapping/uv_mapping.jpeg}
\caption{Left: A visualization of the proposed edge weights. Each vertex is colored by the average weights of its edges. Bright colors represent larger weights.
Next: The deformed template, after its alignment process to some arbitrary scan. Center: the resulting 2D image, using the texture mapping suggested in \cite{slossberg2018high} and the proposed one.
Right: A comparison between the texture mapping methods. An unwrapped texture is mapped back to the template, showing a slightly better resolution in the proposed method.}
\label{fig:uv_result}
\end{figure}
\section{Learning from corrupted data}
\label{sec:corrupted_data}
The semi-automatic data acquisition pipeline described in \autoref{sec:training_data_construction} is used to construct a dataset of 2D images that will be used to train the GAN.
Naturally, some of the generated data samples contain corrupted parts due to errors in one or more of the pipeline stages.
In the 3D scanning process, for example, facial textures that contain hair are often not captured well.
Other reasons for incomplete textures are occlusions and a limited camera field of view.
The geometry of the eyes is occasionally distorted due to their high specular reflection properties.
In the landmark annotation stage, some landmarks can be inaccurate or even wrong, resulting in various distortions in the final output. \autoref{fig:PDG_corruptions} provides several examples of such data corruptions.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{images/partial-data_GAN/corruptions_examples.jpeg}
\caption{Examples of different distortions resulting from the noisy scanning and inexact alignment.}
\label{fig:PDG_corruptions}
\end{figure}
One way to handle data corruption is to ignore imperfect images and keep only the valid ones.
In our case, manual screening of the data reduced the number of samples from $4,963$ to only $3,679$ valid ones, thus, eliminating $25\%$ of the data.
Here, we propose a novel technique for training GANs using partially incomplete data that is able to exploit undamaged parts and robustly deal with corrupted data.
To that end, we propose to pair a binary \textit{valid mask} with each training image, representing areas in the image that should be ignored.
Without loss of generality, black areas in the masks (zero values) correspond to corrupted regions in the image we would like to ignore, and white regions (values of one) correspond to valid parts we would like to exploit for training the network.
We propose to multiply these valid masks by their corresponding images, as well as concatenate them as a fourth channel (R-G-B-mask).
Recall that the discriminator receives as an input a batch of real images and a batch of fake images.
To prevent the discriminator from discriminating real and fake images by the valid masks, the same masks are multiplied and concatenated to both real and fake batches.
The generator, which does not get the masks as an input, must produce complete images in-painting the masked regions.
Otherwise, the discriminator would be able to easily identify masked parts that do not match the valid masks and conclude that the image is fake.
The valid masks can be constructed either manually or using automatic image processing techniques that detect the unwanted parts.
The discriminator and generator of the proposed GAN model are demonstrated in \autoref{fig:PDG_model}.
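The masking scheme can be summarized with a short NumPy sketch (ours, for illustration; shapes and names are assumptions): the same valid masks multiply both the real and fake batches, and are concatenated to each as a fourth channel before they reach the discriminator.

```python
import numpy as np

def mask_and_concat(images, masks):
    """Zero-out corrupted regions and append the mask as a fourth channel.
    images: (B, H, W, 3) RGB batch; masks: (B, H, W, 1) binary valid masks."""
    return np.concatenate([images * masks, masks], axis=-1)   # (B, H, W, 4)

def discriminator_inputs(real_batch, fake_batch, masks):
    """The same masks are applied to real and fake batches, so the
    discriminator cannot tell them apart by the mask channel alone."""
    return mask_and_concat(real_batch, masks), mask_and_concat(fake_batch, masks)
```

In an actual training loop these arrays would be framework tensors; the masking logic is identical.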
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/partial-data_GAN/GAN_model.jpeg}
\caption{The proposed GAN model of learning from incomplete data.
A valid mask is fitted for each training sample according to the corrupted regions.
The real and fake samples are concatenated and masked by the same valid masks and then passed to the discriminator unit.
The discriminator cannot distinguish between real and fake images by their masks.
The generator receives no information about the masks, and therefore cannot generate corrupted images, since such corruptions would not match the masked regions.}
\label{fig:PDG_model}
\end{figure}
To demonstrate the performance of the proposed GAN we constructed a synthetic dataset of different colored shapes randomly located in $10,000$ images of size $256 \times 256$.
In this simple experiment, we treat the red circles as corruptions that we would like our model to ignore. \autoref{fig:PDG} shows the data images, the valid masks, and the resulting GAN output.
It is clearly seen that the proposed GAN model generated new data images without the unwanted red circles.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/partial-data_GAN/Synthesized.jpeg}
\caption{Left three: examples of the constructed synthetic data, that consists of images with different colored shapes.
In this construction, red circles are unwanted in the output. Middle three: valid masks that correspond to the unwanted elements.
Right three: examples of the output data, generated by the proposed GAN.}
\label{fig:PDG}
\end{figure}
\section{Facial Surface Generator}
We propose to train a model which is able to generate realistic geometries and photometries (textures or colors) of human faces.
The training data for our model is constructed according to \autoref{sec:training_data_construction}, and used to train a NN according to a GAN loss.
At inference, the trained model is used to produce random plausible facial textures which are mapped by our predefined parametrization described in \autoref{sec:mapping}.
In order to also generate corresponding facial geometries for each new texture, we propose two novel approaches.
The first approach is based on training a similar model for geometries. This is done by mapping the training set geometry coordinates into the unit rectangle using the canonical parametrization. By treating each coordinate as a color channel, we form geometry images which we use to train our geometry generator model.
The second approach relies on the classical 3DMM model.
For both approaches we suggest a method to generate a geometry which is a plausible fit for a given texture.
In the following sections we describe the two proposed approaches in detail.
\subsection{Generating textures using GAN}
Our texture generation model is based on a Convolutional Neural Network which is trained using a GAN loss.
Due to this loss, we are able to train a model that captures the distribution of the real data samples and draws new samples from that distribution.
By training the model on the dataset constructed according to \autoref{sec:training_data_construction}, we are able to generate new plausible textures, all of which are mapped to the unit rectangle according to the predefined parametrization described in \autoref{sec:mapping}.
As we show in the following sections, the textures generated by the proposed model depict novel yet realistic human faces.
Since texture and geometry are both inseparable attributes of the same geometric entity, it is necessary to take the relationship between them into account when generating the corresponding geometries.
In \autoref{sec:geom_gen} and \autoref{sec:geom_fit} we describe in detail the process of the proposed geometry generation pipeline which takes as input a generated texture and produces a corresponding plausible geometry.
Several outputs from the suggested texture generation model are depicted in \autoref{fig:fake_real_textures}.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/unwrapped_textures/unwrapped.jpg}
\caption{Left: Facial textures generated by the suggested pipeline. Right: Real textures from training set.}
\label{fig:fake_real_textures}
\end{figure}
\subsection{Assigning geometries to textures}
\label{sec:geom_fit}
Once novel textures have been generated, we would like to assign them plausible synthetic geometries in order to obtain realistic face models.
One way to generate geometries is by exploiting the 3DMM model by which geometries can be recovered through proper selection of the coefficients.
In what follows, we discuss and compare several methods for obtaining the 3DMM geometry coefficients.
\subsubsection{Random}
The simplest way of synthesizing a geometry to a given texture is by picking random 3DMM geometry coefficients.
We follow the formulation in \autoref{eq:3dmm_dist}.
The probability of a coefficient $\alpha_i$ is given by
\begin{equation}
P(\alpha_i) \propto \exp\left \{-\frac{\alpha_i^2}{2\sigma_i^2}\right \},
\end{equation}
where $\sigma_i^2$ is the $i$-th eigenvalue of the covariance of $\Delta G$.
$\sigma_i^2$ can be computed more efficiently as $\sigma_i^2 = \frac{1}{n}\delta_i^2$, where $\delta_i$ is the $i$-th singular value of $\Delta G$.
To fit a geometry to a given texture, we randomize a vector of coefficients from the above probability distribution and reconstruct the geometry using the 3DMM formulation.
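This sampling step can be sketched as follows (our illustration; $V_g$ and $\mu_g$ denote the 3DMM geometry basis and mean, as in the 3DMM formulation, and all function names are ours):

```python
import numpy as np

def random_geometry(V_g, mu_g, singular_values, n, rng):
    """Draw alpha_i ~ N(0, sigma_i^2) with sigma_i^2 = delta_i^2 / n
    (delta_i: singular values of Delta G), then reconstruct the geometry
    as g = V_g @ alpha + mu_g."""
    sigma = np.asarray(singular_values, dtype=float) / np.sqrt(n)
    alpha = rng.normal(0.0, sigma)
    return V_g @ alpha + mu_g
```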
Random geometries are simple to generate.
Yet, not every geometry can actually fit any texture.
As a convincing visualization, we computed the \textit{canonical correlation} \cite{hotelling1936relations} between the 3DMM texture and geometry coefficients, $\{\alpha_{ti}\}_{i=1}^n$ and $\{\alpha_{gi}\}_{i=1}^n$, of the facial scans. \autoref{fig:fit_corr} shows $u$ w.r.t. $v$, the first two canonical variables of the correlation.
In what follows, we attempt to generate geometries which are suited for their textures.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{images/fitting_geom/corr_visualize_3.jpg}
\caption{
Visualization of the correlation between textures and geometries of the facial scans, where the axes $u$ and $v$ are the first two variables of the canonical correlation.
}
\label{fig:fit_corr}
\end{figure}
\subsubsection{Nearest neighbour}
Given a new texture, a simple way to fit a geometry that is likely to match it, is by finding the data sample with the nearest texture, and projecting its geometry onto the 3DMM subspace.
For that task, we define a distance between two textures as the $L_2$ norm between their 3DMM texture coefficients.
Only the 3DMM texture and geometry coefficients of the data need to be stored.
Nearest neighbor geometries are simple to obtain; however, they are restricted to the training data geometries alone.
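The lookup can be sketched in a few lines (our code; $A_t$ and $A_g$ hold the stored 3DMM texture and geometry coefficients of the training samples in their columns):

```python
import numpy as np

def nearest_neighbour_geometry(alpha_t, A_t, A_g):
    """Return the geometry coefficients of the training sample whose 3DMM
    texture coefficients are closest in L2 to the query's.
    A_t: (k_t, n) texture coefficients; A_g: (k_g, n) geometry coefficients."""
    distances = np.linalg.norm(A_t - alpha_t[:, None], axis=0)
    return A_g[:, np.argmin(distances)]
```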
\subsubsection{Maximum likelihood}
The maximum likelihood estimator (ML) is typically used when one can formulate assumptions about the data distribution.
In our case, given input facial textures, ML could be used to obtain the most likely geometries under a set of assumptions.
We first construct a mutual 3DMM basis by concatenating textures and geometries.
Define a vertical concatenation of geometries $G$ and textures $T$ as the $6m \times n$ matrix $M=\begin{pmatrix}G\\T\end{pmatrix}$,
such that $G,T$ are defined in \autoref{sec:3dmm}.
Define $\Delta M = M - \mu_M\mathbbm{1}^T$, where $\mu_M$ holds the average of the rows of $M$.
Denote by $U$ the $6m \times k$ matrix that contains the first $k$ basis vectors of $\Delta M$, i.e., corresponding to the largest magnitude eigenvalues.
These vectors can be computed either as eigenvectors of $\Delta M \Delta M^T$ or, more efficiently, as the left singular vectors of $\Delta M$.
Denote by $U_g$ and $\mu_{M_g}$ the upper halves of $U$ and $\mu_{M}$, and denote by $U_t$ and $\mu_{M_t}$ the lower halves $U$ and $\mu_{M}$, respectively, such that
\begin{equation}
U = \begin{pmatrix}
U_g\\U_t
\end{pmatrix},\,\,\,\,\,\,\,
\mu_{M} =
\begin{pmatrix}
\mu_{M_g} \\ \mu_{M_t}
\end{pmatrix}.
\end{equation}
Note that $U_g$ and $U_t$, unlike $V_g$ and $V_t$ that were defined in \autoref{sec:3dmm}, are not orthogonal.
Nevertheless, any geometry $g$ and texture $t$ of a given face in $M$ can be represented as a linear combination
\begin{equation}
\begin{pmatrix} g \\ t \end{pmatrix} =
\begin{pmatrix} \mu_{M_g} \\ \mu_{M_t} \end{pmatrix} +
\begin{pmatrix} U_g \\ U_t \end{pmatrix}\beta,
\end{equation}
where the coefficient vector $\beta$ is mutual to the geometry and texture.
Using the notations and definitions above, any new facial texture and its geometry can be approximated through a mutual coefficient vector $\beta$ as
\begin{eqnarray}
t &=& U_t\beta + \mu_{M_t} + noise_t, \cr
g &=& U_g\beta + \mu_{M_g} + noise_g.
\label{eq:texture_noise_model}
\end{eqnarray}
The maximum likelihood assumption is that $\beta$, $noise_t$, and $noise_g$ follow multivariate normal distributions with zero mean.
Given a facial texture $t$, our goal is to compute the most likely coefficient vector $\beta^*$ under this assumption, and then obtain the most likely geometry as
\begin{equation}
g = U_g\beta^* + \mu_{M_g}.
\end{equation}
Following Bayes' rule, one could formulate the most likely coefficient vector as
\begin{eqnarray}
\beta^* &=& \argmax_{\beta} P(\beta|t)\cr
& =& \argmax_{\beta} \frac{P(t|\beta)P(\beta)}{P(t)} \cr
& =& \argmax_{\beta} P(t|\beta)P(\beta).
\end{eqnarray}
Since $P(t|\beta)$ and $P(\beta)$ follow multivariate normal distributions, denote their covariance matrices by $\Sigma_{t|\beta}$ and $\Sigma_{\beta}$, and their mean vectors by $\mu_{t|\beta} = U_t\beta$ and $\mu_\beta = \vec{0}$, respectively.
Thus,
\begin{eqnarray}
\beta^* &=& \argmax_{\beta} P(t|\beta)P(\beta) \cr
&=& \argmax_\beta \exp\left \{-\frac{1}{2}(t-\mu_{t|\beta})^T\Sigma_{t|\beta}^{-1}(t-\mu_{t|\beta})\right \}\cdot
\exp\left \{-\frac{1}{2}\beta^T\Sigma_{\beta}^{-1}\beta\right \} \cr
&=& \argmin_{\beta} (t-U_t\beta)^T\Sigma_{t|\beta}^{-1}(t-U_t\beta) + \beta^T\Sigma_{\beta}^{-1}\beta \,\,\,.
\end{eqnarray}
One could obtain a closed form solution for $\beta^*$ by vanishing its gradient, which yields
\begin{equation}
\beta^* = \left(U_t^T\Sigma_{t|\beta}^{-1}U_t+\Sigma_{\beta}^{-1}\right)^{-1}U_t^T\Sigma_{t|\beta}^{-1}t.
\end{equation}
We estimate the covariance matrices $\Sigma_{\beta}$ and $\Sigma_{t|\beta}$ empirically from the data. Since the mean of each coefficient in $\beta$ with respect to all samples is zero, the covariance $\Sigma_{\beta}$ can be estimated by
\begin{equation}
\Sigma_{\beta} = \frac{1}{n-1}\sum_{i=1}^{n}\beta_i\beta_i^T,
\end{equation}
where $\beta_i$ is the coefficient vector of face sample $i$.
The $3m \times 3m$ covariance matrix $\Sigma_{t|\beta}$ is very large, and impractical to estimate from a few thousand samples or to invert once estimated.
Hence, for simplicity, we approximate it as a diagonal matrix that does not depend on $\beta$.
One can verify that the mean of each element in $noise_t = t - U_t\beta - \mu_{M_t}$ with respect to all samples is zero.
Hence, we estimate its $j$-th diagonal value as
\begin{equation}
\Sigma_{t|\beta,jj} = \frac{1}{n-1}\sum_{i=1}^{n}noise_{t,ij}^2.
\end{equation}
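The closed-form estimator can be sketched as follows (our illustration, not the authors' code; here the input texture is centered by the texture mean before projection, and the noise covariance is the diagonal matrix estimated above):

```python
import numpy as np

def ml_geometry(t, U_t, U_g, mu_t, mu_g, Sigma_beta, noise_var):
    """beta* = (U_t^T S^-1 U_t + Sigma_beta^-1)^-1 U_t^T S^-1 (t - mu_t),
    with S = diag(noise_var); then the most likely geometry is
    g = U_g @ beta* + mu_g."""
    S_inv = np.diag(1.0 / noise_var)
    A = U_t.T @ S_inv @ U_t + np.linalg.inv(Sigma_beta)
    beta = np.linalg.solve(A, U_t.T @ S_inv @ (t - mu_t))
    return U_g @ beta + mu_g
```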
\subsubsection{Least squares}
Least squares (LS) minimization is a simple and very useful approach that should be typically used when the amount of data samples is large enough.
It can be thought of as training a multivariate linear regression with an $L_2$ loss.
Assume a facial sample is represented by a texture vector $t$ and a geometry vector $g$.
Denote by $\alpha_t$ and $\alpha_g$ the column vectors with the first $k_t$ and $k_g$ texture and geometry 3DMM coefficients of the face.
These coefficients can be obtained by projecting $t$ and $g$ onto the 3DMM basis $V_t$ and $V_g$.
Let the $k_t \times n$ matrix $A_t$ hold the texture coefficient vectors of all samples in its columns, and let the $k_g \times n$ matrix $A_g$ hold the geometry coefficient vectors of all samples in its columns in the same order as $A_t$.
The correlation between $A_t$ and $A_g$ could be linearly approximated by
\begin{equation}
A_g \approx W^TA_t.
\label{eq:LS}
\end{equation}
Note that we would not benefit from generalizing the linear correlation to an affine one.
This is because the mean of each row of $A_t$ and $A_g$ is zero, as they hold the coefficients of a centered set of samples.
Following \autoref{eq:LS}, we would like to find a matrix $W$ that minimizes
\begin{equation}
\mbox{loss}(W) = \|W^TA_t - A_g\|_{F}.
\end{equation}
A closed form solution is easily obtained to be
\begin{equation}
W^* = (A_tA_t^T)^{-1}A_tA_g^T = (A_t^T)^{+}A_g^T.
\end{equation}
Define $\tilde V_t$ and $\tilde V_g$ as holding the first $k_t$ and $k_g$ texture and geometry 3DMM basis vectors.
$W^*$ can be estimated using a set of training samples.
Then, given a new texture $t$, one could fit a geometry $g$ by computing the texture coefficients as
\begin{equation}
\alpha_t = \tilde V_t^T(t - \mu_t),
\end{equation}
computing the geometry coefficients as
\begin{equation}
\alpha_g={W^*}^T\alpha_t,
\end{equation}
and finally, computing the geometry as
\begin{equation}
g = \tilde V_g \alpha_g + \mu_g.
\end{equation}
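The full least-squares pipeline amounts to a few matrix operations; a NumPy sketch (ours; $\tilde V_t$, $\tilde V_g$, $\mu_t$, $\mu_g$ are the truncated 3DMM bases and means defined above):

```python
import numpy as np

def fit_ls_regressor(A_t, A_g):
    """W* = (A_t A_t^T)^-1 A_t A_g^T: least-squares map from texture to
    geometry 3DMM coefficients (paired samples in the columns)."""
    return np.linalg.solve(A_t @ A_t.T, A_t @ A_g.T)

def ls_geometry(t, V_t, V_g, mu_t, mu_g, W):
    """alpha_t = V_t^T (t - mu_t); alpha_g = W^T alpha_t; g = V_g alpha_g + mu_g."""
    alpha_t = V_t.T @ (t - mu_t)
    return V_g @ (W.T @ alpha_t) + mu_g
```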
\subsubsection{Geometry reconstruction method comparison}
To evaluate how well the assigned geometries fit each texture, we use a test set of textures and their corresponding geometries, obtained from $297$ scans that participated in neither the GAN training nor the geometry fitting procedures.
Given these unseen textures as input, we estimate their geometries using the approaches presented above, and compare them to their ground truth. For this comparison, we computed the average $L_2$ norm between the vertices of the reconstructed and true geometry for each of the methods.
In this experiment, we chose $k = k_t = k_g = 200$.
\autoref{fig:geom_fit_1} shows examples of test textures mapped onto their assigned geometries that were obtained using each of the above methods.
It is clear that the LS approach obtains the best results on the test set.
Since it is also simple and efficient, we choose to use the LS approach to approximate the geometries in the following sections.
Note, however, that the other methods could be beneficial for other applications. \autoref{fig:geom_fit_2} visually compares the reconstructed geometries to the true ones for different textures from the test scans, using the LS approach.
The $L_2$ norm between the reconstruction and true geometries are given below each example.
The geometries, predicted solely from textures of identities that were never seen before, are surprisingly very similar to the true ones.
This validates the strong correlation assumption between textures and geometries.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/fitting_geom/face_comparison_1.jpg}
\caption{Left: Mapped textures from the test set. Right: Corresponding geometries reconstructed using each of the proposed approaches.
The average $L_2$ reconstruction error appears above each method's illustration.}
\label{fig:geom_fit_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/fitting_geom/face_comparison_2_prev.jpg}
\caption{Top: examples of unwrapped textures of test scans.
Below each texture we show the corresponding scanned geometry, and compare it to the one estimated using the least squares approach.
The $L_2$ norm between the reconstruction and true geometries is indicated at the bottom.}
\label{fig:geom_fit_2}
\end{figure}
\section{Generating geometries using GAN}
\label{sec:geom_gen}
In \autoref{sec:geom_fit}, we reconstructed geometries using the 3DMM model.
Indeed, projecting geometries onto the subspace of 3DMM has almost no visual effect on the appearance of the faces.
The 3DMM, however, is constrained to the subspace of training set geometries and cannot generalize to unseen examples.
In \autoref{sec:mapping}, we mapped facial textures into 2D images, with the goal of producing new textures.
The same methodology can be used for producing new geometries as well.
To that end, we propose to construct a dataset of aligned facial geometries and train a GAN to generate new ones, by repeating the texture mapping process while replacing the RGB texture values by the XYZ geometry values.
As for data augmentation, while the number of texture samples can be doubled by horizontally mirroring each of the images, we found that mirroring each of the $X$, $Y$, and $Z$ values independently results in a valid facial geometry. Thus, the number of geometry samples can be augmented by a factor of $8$.
Note, however, that when mirroring $X$ values, one should perform $C - X$, where $C$ is a constant that could be set, for example, to the maximal $X$ value in all training samples.
The training data and resulting generated geometries are shown in \autoref{fig:generating_geom}.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/generating_geometries/GAN_geom.jpg}
\caption{Left: the template aligned to $4$ real 3D facial scans, colored by their $X$, $Y$, and $Z$ values. Center-left: geometry values mapped to an image using the proposed universal mapping. These images are used to train a GAN.
Center-right: fake geometry image generated by the GAN.
Right: the synthesized geometries mapped back to 3D.
}
\label{fig:generating_geom}
\end{figure}
\section{Adding expressions}
\label{sec:expressions}
In previous sections we proposed a method for generating new facial textures and corresponding geometries.
In order to complete the model we must also take expressions into consideration.
We follow \cite{chu20143d}, who define a linear basis for expressions by taking the difference $g_{diff} = g_{expr}-g_{neutral}$ for every face in the set. We then remove the mean difference vector to obtain $\Delta G_{diff} = G_{diff}-\mu_{diff}\mathbbm{1}^T$, and compute the principal components of $\Delta G_{diff}$ to obtain our geometric expression model. The expression model can be used by randomizing the expression coefficients $\alpha_{exp}$ and adding the resulting linear combination of the difference vectors to a generated neutral face, as $g_{exp} = g_{neutral} + \mu_{diff} + V_{exp} \alpha_{exp}$.
Since the expression model must be applied to neutral faces, we should define a model for neutral faces.
In order to span only the space of neutral expressionless faces we suggest to replace all the geometries in our training set with their neutral counterpart.
By following this course, the texture model still benefits from all the textures available to us while our geometry model learns to predict only neutral models for any texture with or without expression.
This method can be applied to either the 3DMM based geometry model from \autoref{sec:geom_fit} or the GAN based geometry model described in \autoref{sec:geom_gen}.
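The construction of the expression basis and its application can be sketched as (our illustration; $G_{expr}$ and $G_{neutral}$ hold the stacked expression and neutral geometries of the same subjects in their columns):

```python
import numpy as np

def build_expression_model(G_expr, G_neutral, k):
    """PCA on expression differences: center Delta G_diff = G_diff - mu 1^T
    and keep the first k principal directions (columns of V_exp)."""
    diff = G_expr - G_neutral                             # (3m, n)
    mu = diff.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(diff - mu, full_matrices=False)
    return mu[:, 0], U[:, :k]                             # mu_diff, V_exp

def add_expression(g_neutral, mu_diff, V_exp, alpha_exp):
    """g_exp = g_neutral + mu_diff + V_exp @ alpha_exp."""
    return g_neutral + mu_diff + V_exp @ alpha_exp
```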
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/blendshapes/blendshapes.jpg}
\caption{Expression vectors applied to a single face.}
\label{fig:blend_expression}
\end{figure}
\section{Experimental results}
\label{sec:expermintal}
In order to demonstrate the ability of our model to generate new realistic identities, we perform several quantitative as well as qualitative experiments. As our first result, we generate several random textures and obtain their corresponding geometries according to \autoref{sec:geom_fit}. We vary the expression by applying the linear expression model described in \autoref{sec:expressions}. According to this model, each expression is represented by a variation from the mean face that leads to a specific facial movement; by combining various movements of the face, one can generate any desired expression. The faces are then rendered under various poses and lighting conditions. The rendered faces are depicted in \autoref{fig:render_experiment}.
Our next qualitative experiment demonstrates the ability of our model to generate completely new textures by combining facial features from different training examples. To this end, we search for the nearest neighbor of each generated texture within the training data. It can be seen in \autoref{fig:NN-exp} that the demonstrated examples have nearest neighbors that are significantly different from them and cannot be considered the same identity. In the following section we analyze both the generative ability of our model to produce new faces, as well as its realism and similarity to realistic examples. In addition, we also search for the generated texture samples nearest to several validation set examples. By finding close-by textures we demonstrate the generalization capability of our model to unseen textures. This is demonstrated in \autoref{fig:NN-exp}.
The previous qualitative assessment is complemented by a more in-depth examination of the nearest neighbors across $10K$ generated faces. For the following experiments we have freedom to choose our distance metric. We aim to find a natural metric which coincides with human perception of faces. We therefore choose to render each generated and real face from a frontal perspective and process the rendered images via a pre-trained facial recognition network. Using a model based on \cite{amos2016openface} we extract the last feature from within the network as our facial descriptor. The distance is calculated as $dist=\|D_1 - D_2\|_{2}$, where $D_1,D_2$ are the descriptors corresponding to the first and second face, respectively. By analyzing the distribution of such distances we can assess the spread of identities which exists within each dataset, as well as the relation between different datasets.
We use the distribution of distances between generated faces and the training and validation sets of real faces to assess the quality of our generative model. In \autoref{fig:ID_var} we plot the distribution of distances between generated samples and their nearest real training samples. These distances appear to be distributed as a shifted Gaussian, which implies that, on average, new identities are located in between the real samples and not directly near them. Our analysis of the distances to the nearest neighbors of the validation set, also depicted in \autoref{fig:ID_var}, shows that our model is able to produce identities similar to ones found in the validation set, which were not used to train the model. This validates our claim that our model produces new identities by combining features from the training set faces, and that these identities are not too similar to the training set, yet can still generalize to the unseen validation set.
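The distance computation described above amounts to pairwise L2 distances between descriptors; a sketch (our code; the descriptors would come from the pre-trained recognition network):

```python
import numpy as np

def descriptor_distances(D_gen, D_real):
    """Pairwise dist = ||D1 - D2||_2 between generated and real face
    descriptors, plus each generated face's distance to its nearest real one.
    D_gen: (n_gen, d); D_real: (n_real, d)."""
    dist = np.linalg.norm(D_gen[:, None, :] - D_real[None, :, :], axis=-1)
    return dist, dist.min(axis=1)
```

A histogram of the returned nearest-neighbor distances gives the distributions discussed above.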
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/generated_faces/generated_faces.jpg}
\caption{Several generated faces rendered with varying expression and pose under varying lighting conditions.}
\label{fig:render_experiment}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{images/ID_variation/ID_variation.jpg}
\caption{Left: Distribution of distances between training and generated IDs. Right: Cumulative sum of distances between training and test in dark blue, and generated to test in light blue.}
\label{fig:ID_var}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/NN/NN.jpg}
\caption{Left: Generated textures coupled with their nearest neighbor within the training set. Right: Validation textures coupled with the nearest out of 10k generated texture. }
\label{fig:NN-exp}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/tsne/tsne.jpg}
\caption{T-SNE embedding of face IDs. Left: Real versus generated IDs. Center: Real versus 3DMM IDs. Right: Generated IDs labeled according to real data clusters.}
\label{fig:T-SNE-exp}
\end{figure}
Following \cite{karras2017progressive} we perform an analysis of sliced Wasserstein distance \cite{rabin2011wasserstein} on our generated textures and geometries. By assessing the distance between the distribution of patches taken from textures generated by our model relative to patches taken from faces generated by 3DMM we can analyze the multi-resolution similarity between the generated and real examples. \autoref{tbl:SWD_texture} and \autoref{tbl:SWD_geom} show the SWD for each resolution relative to the training data. In both experiments it is clear that the SWD is lower for our model at every resolution indicating that at every level of detail the patches produced by our model are more similar to patches in the training data.
In addition to assessing the patch level feature resemblance, we wish to uncover the distances between the distributions of identities. To this end we conduct two more experiments which gauge the similarity between the distribution of generated identities and that of the real ones. In order to qualitatively assess these distributions we depict our identities using the common dimensionality reduction scheme T-SNE \cite{maaten2008visualizing}. \autoref{fig:T-SNE-exp} depicts the low dimensional representation of the embedding proposed by our model and by the 3DMM, overlaid on top of the real data embedding. In addition, \autoref{fig:T-SNE-exp} depicts the clustering of different ethnic groups as well as genders as data points of different colors. By assigning each generated sample to its nearest cluster, we obtain an automatic annotation of the generated data. Finally, we perform a quantitative analysis of the difference between the identity distributions using the SWD. The results of this experiment are depicted in \autoref{tbl:SWD_ids}.
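The sliced Wasserstein distance can be approximated by Monte-Carlo random projections; the following sketch for equally sized point sets is our simplification (in \cite{karras2017progressive} it is applied to multi-resolution patch descriptors, which we omit here):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=128, rng=None):
    """Monte-Carlo sliced Wasserstein-1 distance between point sets X and Y
    of equal size (n, d): project onto random unit directions, sort the
    1D projections, and average the absolute differences."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_proj):
        v = rng.normal(size=X.shape[1])
        v /= np.linalg.norm(v)                 # random unit direction
        total += np.abs(np.sort(X @ v) - np.sort(Y @ v)).mean()
    return total / n_proj
```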
\begin{table}[]
\centering
{\small
\begin{tabularx}{\linewidth}{lXXXXXXXX}
Resolution & 1024 & 512 & 256& 128& 64& 32& 16& avg\\ \hline
Real & 3.33 & 3.33 & 3.35 & 2.93 & 2.53 & 2.47 & 4.16 & 3.16\\
\textbf{{Proposed}} & 35.32 & 18.13 & 10.76 & 6.41 & 7.42 & 10.76 & 34.86 & 17.67\\
PCA &92.7 & 224.01 & 156.87 & 66.20 & 15.71 & 33.97 & 104.08 & 99.08\\
\end{tabularx}
}
\caption{Sliced Wasserstein distance between generated and real texture images. }
\label{tbl:SWD_texture}
{\small
\begin{tabularx}{\linewidth}{lXXXXXXXX}
Resolution & 1024 & 512 & 256& 128& 64& 32& 16& avg\\ \hline
Real & 6.08 & 2.41 & 3.40 & 2.45 & 3.10 & 2.75 & 1.86 & 3.15\\
\textbf{{Proposed}} & 11.8 & 9.58 & 27.13 & 44.5 & 38.05 & 11.03 & 2.16 & 20.61\\
PCA & 272.7 & 43.94 & 29.6 & 50.72 & 44.43 & 13.21 & 4.67 & 65.61 \\
\end{tabularx}
}
\caption{Sliced Wasserstein distance between generated and real geometry images. }
\label{tbl:SWD_geom}
{\small
\begin{tabularx}{\linewidth}{lXXXX}
Method & 3DMM & \textbf{{Proposed}} & train\\ \hline
train & 59.88 & 35.82 & - \\
test & 75.3 & 62.09 & 42.4 \\
\end{tabularx}
}
\caption{Sliced Wasserstein distance between distributions of identities from different sets. }
\label{tbl:SWD_ids}
\end{table}
\section{Discussion}
In this paper we present a new model for generating high-detail textures and corresponding geometries of human faces. Our claim is that an effective way to process geometric surfaces via CNNs is to first align the geometric dataset to a template model, and then map each geometry to a 2D image using a predefined mapping. Once in image form, a GAN loss can be employed to train a generator model which aims to imitate the distribution of the training data images. We further show that by training a generator for both textures and geometries it is possible to synthesize high-detail textures and geometries which are linked to each other by a single canonical mapping.
In addition, we describe in \autoref{sec:geom_fit} several methods for fitting 3DMM geometries by learning the underlying relation between texture and geometry, a relation that has been largely neglected in previous work. In \autoref{sec:geom_fit} we also provide a quantitative and qualitative evaluation of each geometry reconstruction method. Our proposed face generation pipeline therefore consists of a high resolution texture generator combined with a geometry that is either produced by a similar geometric generation model or by employing a learning scheme which produces the most likely corresponding 3DMM coefficients.
Besides the main pipeline, we propose two extra data processing steps which improve sample quality. In \autoref{sec:mapping} we describe the design and construction of our canonical mapping. The mapping is designed to reduce distortion in important high-detail areas while spreading the flattening distortion to non-essential areas, and to take maximal advantage of the available area in each image. In \autoref{sec:mapping} we also show that, compared to \cite{slossberg2018high}, our improved mapping indeed preserves delicate texture details in the predefined high-importance regions. In \autoref{sec:corrupted_data} we present a new technique for dealing with partially corrupted data. This is especially important when data acquisition is expensive and prone to errors. By adding a corruption mask to the data at train time, the network is able to ignore the affected areas while still learning from the mostly unaffected ones. In the case of our dataset this increases the amount of usable data by roughly $20\%$.
In order to evaluate our proposed model we performed a quantitative as well as qualitative analysis of several of its aspects. Our main objective was to create a realistic model, a requirement which we break down into several factors. Our model should produce high-quality, plausible facial textures which resemble the training data as closely as possible, but should also compose new faces not seen during training rather than repeat previously seen ones. To that end we use an efficient approximation of the Wasserstein distance between distributions in order to evaluate the local- and global-scale features of the produced textures and geometries, as well as the distance between the distributions of real and generated identities. Our results show that in both identity distribution and image feature resemblance we outperform the 3DMM model, which is the most widely used model to date.
\begin{acks}
This research was partially supported by the Israel Ministry of Science, grant number 3-14719 and the Technion Hiroshi Fujiwara Cyber Security Research Center and the Israel Cyber Bureau.
We would also like to thank Intel RealSense group for sharing their data and computational resources with us.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{sec:intro}
Wavelet techniques have a long-standing history in the field
of data science. Applications comprise
signal processing, image analysis and machine learning,
see for instance \cite{Chui,Dahmen,Daubechies,Mallat,Mallat2016}
and the references therein. Assuming a signal generated
by some function, the pivotal idea of wavelet
techniques is the splitting of this function into its
respective contributions with respect to a hierarchy of scales.
Such a multiscale ansatz starts from an approximation
on a relatively coarse scale and successively resolves details
at the finer scales. Hence, compression and adaptive
representation are inherently built into this ansatz.
The transformation of a given signal into its wavelet
representation and the inverse transformation can be performed
with linear cost in terms of the degrees of freedom.
Classically, wavelets are constructed by refinement relations
and therefore require a sequence of nested approximation
spaces which are copies of each other, except for a different scaling.
This restricts the concept of wavelets to structured data.
Some adaptation of the general principle is possible in
order to treat intervals, bounded domains and surfaces,
compare \cite{DKU,Quak,HS,STE} for example. The seminal
work \cite{TW03} by Tausch and White overcomes this obstruction
by constructing wavelets as suitable linear combinations
of functions at a given fine scale. In particular,
the stability of the resulting basis,
which is essential for numerical algorithms,
is guaranteed by orthonormality.
In this article, we take the concept of wavelets to the next
level and consider discrete data. To this end, we modify the
construction of Tausch and White accordingly and construct
a multiscale basis which consists of localized and discrete
signed measures. Inspired by the term wavelet, we call such
signed measures \emph{samplets}.
Samplets can be constructed
such that their associated measure integrals vanish
for polynomial integrands.
If this is the case for all polynomials of total degree
at most \(q\), we say that the samplets
have \emph{vanishing moments} of order $q+1$.
We remark that lowest order samplets, i.e.\ \(q=0\), have
been considered earlier for data compression in \cite{RE11}.
Another concept for constructing multiscale
bases on data sets is that of \emph{diffusion wavelets}, which employ
a diffusion operator to construct the multiscale hierarchy,
see \cite{CM06}.
In contrast to diffusion wavelets, however,
the construction of samplets is solely based on discrete concepts
and can always be performed with linear cost
for a balanced cluster tree, even for non-uniformly distributed
data.
When discrete data are represented in the samplet basis,
the vanishing moments lead to a fast decay of
the corresponding samplet coefficients with respect to
the support size, provided that the data are smooth. This straightforwardly
enables data compression. In contrast, non-smooth regions
in the data are indicated by large samplet coefficients. This,
in turn, enables singularity detection and extraction. However,
we emphasize that our construction is not limited to the use
of polynomials. Indeed, it would easily be possible to adapt
the construction of samplets to other primitives with other
desired properties.
As a further application of samplets, we consider the
compression of kernel matrices as they arise in kernel
based machine learning and scattered data approximation,
compare \cite{Fasshauer2007,HSS08,Rasmussen2006,
Schaback2006,Wendland2004,Williams1998}. Kernel
matrices are typically densely populated since the underlying
kernels are nonlocal. Nonetheless, these kernels
are usually \emph{asymptotically smooth}, meaning
that they behave like smooth functions apart
from the diagonal. A discretization of an asymptotically
smooth kernel with respect to a samplet basis with
vanishing moments results in quasi-sparse kernel
matrices, which means that they can be compressed such
that only a sparse matrix remains, compare \cite{BCR,DHS,
DPS,PS,SCHN}. Especially, it has been demonstrated in
\cite{HM} that nested dissection, see \cite{Geo73,LRT79},
is applicable in order to obtain a fill-in reducing reordering
of the matrix. This reordering in turn allows for the
rapid factorization of the system matrix by the Cholesky
factorization without introducing additional errors.
The asymptotic smoothness of the kernels
is also exploited by cluster methods like the fast multipole
method, see \cite{GR,RO,YBZ04} and particularly \cite{MXTY+2015}
for high-dimensional data. Nonetheless, these methods do
not allow for a direct inversion, which is advantageous, for example,
when simulating Gaussian random fields. Another approach, which
is more in line with the work presented here, is the gamblet
based approach for the compression of the kernel matrix
in \cite{SSO21}. Compared to samplets with
vanishing moments, however, the construction of \emph{gamblets}
is more involved, as they need to be adapted to the specific
operator at hand, compare \cite{Owh17}.
The rest of this article is organized as follows.
In Section~\ref{section:Samplets}, the novel concept of
samplets is introduced. The subsequent Section~\ref{sct:construction}
is devoted to the actual construction of samplets and to their
properties. The change of basis by means of the discrete samplet
transform is the topic of Section~\ref{sec:FST}. In Section
\ref{sec:Num1}, we demonstrate the capabilities of samplets
for data compression and smoothing for data in one, two and
three dimensions. Section~\ref{sec:kernelCompression} deals
with the samplet compression of kernel matrices.
Especially, we also develop an interpolation based \(\Hcal^2\)-matrix
approach in order to efficiently assemble the compressed
kernel matrix. Corresponding numerical results are then
presented in Section \ref{sec:Num2}. Finally, in
Section~\ref{sec:Conclusion}, we state
concluding remarks.
\section{Samplets}
\label{section:Samplets}
Let \(X\mathrel{\mathrel{\mathop:}=}\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}\subset\Omega\)
denote a set of points within some region \(\Omega\subset\Rbb^d\).
Associated to each point \({\boldsymbol x}_i\), we introduce
the Dirac measure
\[
\delta_{{\boldsymbol x}_i}({\boldsymbol x})\mathrel{\mathrel{\mathop:}=}
\begin{cases}
1,&\text{if }{\boldsymbol x}={\boldsymbol x}_i\\
0,&\text{otherwise}.
\end{cases}
\]
With a slight abuse of notation, we also introduce the
point evaluation functional
\[
(f,\delta_{{\boldsymbol x}_i})_\Omega=\int_\Omega
f({\boldsymbol x})\delta_{{\boldsymbol x}_i}({\boldsymbol x})\d{\boldsymbol x}\mathrel{\mathrel{\mathop:}=}
\int_{\Omega}f({\boldsymbol x})\delta_{{\boldsymbol x}_i}(\d{\boldsymbol x})
=f({\boldsymbol x}_i),
\]
where $f\in C(\Omega)$ is a continuous function.
Next, we define the space
\(V\mathrel{\mathrel{\mathop:}=}\spn\{\delta_{{\boldsymbol x}_1},\ldots,\delta_{{\boldsymbol x}_N}\}\)
as the \(N\)-dimensional vector space
of all discrete and finite signed measures supported at the
points in \(X\).
An inner product on \(V\) is defined by
\[
\langle u,v\rangle_V\mathrel{\mathrel{\mathop:}=}\sum_{i=1}^N u_iv_i,\quad\text{where }
u=\sum_{i=1}^Nu_i\delta_{{\boldsymbol x}_i},\ v=\sum_{i=1}^Nv_i\delta_{{\boldsymbol x}_i}.
\]
Indeed, the space \(V\) is isometrically isomorphic to \(\Rbb^N\)
endowed with the canonical inner product.
Similar to the
idea of a multiresolution analysis in the construction of
wavelets, we introduce the spaces \(V_j\mathrel{\mathrel{\mathop:}=}\spn{\boldsymbol \Phi_j}\), where
\[
{\boldsymbol\Phi_j}\mathrel{\mathrel{\mathop:}=}\{\varphi_{j,k}:k\in\Delta_j\}.
\]
Here, $\Delta_j$ denotes a suitable
index set with cardinality $|\Delta_j|=\dim V_j$ and
\(j\in\Nbb\) is referred to as \emph{level}.
Moreover, each basis function \(\varphi_{j,k}\) is a linear
combination of Dirac measures
such that
\[
\langle \varphi_{j,k},\varphi_{j,k'}\rangle_V=0\quad\text{for }k\neq k'
\]
and
\[
\diam(\supp\varphi_{j,k})\mathrel{\mathrel{\mathop:}=}
\diam(\{{\boldsymbol x}_{i_1},\ldots,{\boldsymbol x}_{i_p}\})\sim 2^{-j/d}.
\]
For the sake of notational convenience, we shall identify bases
by row vectors,
such that, for ${\boldsymbol v}_j
= [v_{j,k}]_{k\in\Delta_j}$, the corresponding measure
can simply be written as a dot product according to
\[
v_j = \mathbf\Phi_j{\boldsymbol v}_j=\sum_{k\in\Delta_j} v_{j,k}\varphi_{j,k}.
\]
Rather than using the multiresolution
analysis corresponding to the hierarchy
\[
V_0\subset V_1\subset\cdots\subset V,
\]
the idea of samplets is
to keep track of the increment of information
between two consecutive levels $j$ and $j+1$. Since we have
$V_{j}\subset V_{j+1}$, we may decompose
\begin{equation}\label{eq:decomposition}
V_{j+1} = V_j\overset{\perp}{\oplus} S_j
\end{equation}
by using the \emph{detail space} $S_j$. Of practical interest
is the particular choice of the basis of the detail space $S_j$ in $V_{j+1}$.
This basis is assumed to be orthonormal as well and will be denoted by
\[
{\boldsymbol\Sigma}_j = \{\sigma_{j,k}:k\in\nabla_j\mathrel{\mathrel{\mathop:}=}\Delta_{j+1}
\setminus \Delta_j\}.
\]
Recursively applying the decomposition \eqref{eq:decomposition},
we see that the set
\[
\mathbf\Sigma_J = {\boldsymbol\Phi}_0 \bigcup_{j=0}^{J-1}{\boldsymbol\Sigma}_j
\]
forms a basis of \(V_J\mathrel{\mathrel{\mathop:}=} V\), which we call a \emph{samplet basis}.
In order to employ samplets for the compression of data and
kernel matrices, it is favourable, although not a requirement
of our construction, that the measures $\sigma_{j,k}$
are localized with respect to the corresponding level $j$, i.e.\
\begin{equation}\label{eq:localized}
\diam(\supp\sigma_{j,k})\sim 2^{-j/d},
\end{equation}
and that they are stable, i.e.\
\[
\langle \sigma_{j,k},\sigma_{j,k'}\rangle_V=0\quad\text{for }k\neq k'.
\]
Moreover, an essential ingredient is the vanishing moment
condition, meaning that
\begin{equation}\label{eq:vanishingMoments}
(p,\sigma_{j,k})_\Omega
= 0\quad \text{for all}\ p\in\Pcal_q(\Omega),
\end{equation}
where \(\Pcal_q(\Omega)\) denotes the space of all polynomials
with total degree at most \(q\).
We say then that the samplets have $q+1$ \emph{vanishing
moments}.
\begin{remark}
The concept of samplets has a very natural interpretation
in the context of reproducing kernel Hilbert spaces, compare
\cite{Aronszajn50}. If \((\Hcal,\langle\cdot,\cdot\rangle_{\Hcal})\)
is a reproducing kernel Hilbert space with reproducing kernel
\(\mathcal{K}\), then there holds
\((f,\delta_{{\boldsymbol x}_i})_\Omega
=\langle \mathcal{K}({\boldsymbol x}_i,\cdot),f\rangle_{\Hcal}\). Hence,
the samplet
\(\sigma_{j,k}=\sum_{\ell=1}^p\beta_\ell\delta_{{\boldsymbol x}_{i_\ell}}\)
can directly be identified with the function
\[
\hat{\sigma}_{j,k}\mathrel{\mathrel{\mathop:}=}
\sum_{\ell=1}^p\beta_\ell \mathcal{K}({\boldsymbol x}_{i_\ell},\cdot)\in\mathcal{H}.
\]
In particular, it holds
\[
\langle\hat{\sigma}_{j,k},h\rangle_\Hcal=0
\]
for any \(h\in\Hcal\) which satisfies
\(h|_{\supp\sigma_{j,k}}\in\Pcal_q(\supp\sigma_{j,k})\).
\end{remark}
\section{Construction of samplets}\label{sct:construction}
\subsection{Cluster tree}
In order to construct samplets with the desired properties,
especially vanishing moments, cf.\ \eqref{eq:vanishingMoments},
we shall transfer the wavelet construction of Tausch and
White from \cite{TW03} into our setting. The first step is to
construct subspaces of signed measures with localized
supports. To this end, we perform a hierarchical
clustering on the set \(X\).
\begin{definition}\label{def:cluster-tree}
Let $\mathcal{T}=(P,E)$ be a tree with vertices $P$ and edges $E$.
We define its set of leaves as
\[
\mathcal{L}(\mathcal{T})\mathrel{\mathrel{\mathop:}=}\{\nu\in P\colon\nu~\text{has no sons}\}.
\]
The tree $\mathcal{T}$ is a \emph{cluster tree} for
the set $X=\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}$, iff
the set $X$ is the \emph{root} of $\mathcal{T}$ and
all $\nu\in P\setminus\mathcal{L}(\mathcal{T})$
are disjoint unions of their sons.
The \emph{level} \(j_\nu\) of $\nu\in\mathcal{T}$ is its distance from the root,
i.e.\ the number of son relations that are required for traveling from
$X$ to $\nu$. The \emph{depth} \(J\) of \(\Tcal\) is the maximum level
of all clusters. We define the set of clusters
on level $j$ as
\[
\mathcal{T}_j\mathrel{\mathrel{\mathop:}=}\{\nu\in\mathcal{T}\colon \nu~\text{has level}~j\}.
\]
Finally, the \emph{bounding box} $B_{\nu}$ of \(\nu\)
is defined as the smallest axis-parallel cuboid that
contains all its points.
\end{definition}
There exist several possibilities for the choice of a
cluster tree for the set \(X\). However, within this article,
we will exclusively consider binary trees and remark that it is of course
possible to consider other options, such as
\(2^d\)-trees, with the obvious modifications.
Definition~\ref{def:cluster-tree} provides a hierarchical cluster
structure on the set \(X\). Even so, it does not provide guarantees
for the sizes and cardinalities of the clusters.
Therefore, we introduce the concept
of a balanced binary tree.
\begin{definition}
Let $\Tcal$ be a cluster tree on $X$ with depth $J$.
$\Tcal$ is called a \emph{balanced binary tree}, if all
clusters $\nu$ satisfy the following conditions:
\begin{enumerate}
\item
The cluster $\nu$ has exactly two sons
if $j_{\nu} < J$. It has no sons if $j_{\nu} = J$.
\item
It holds $|\nu|\sim 2^{J-j_{\nu}}$.
\end{enumerate}
\end{definition}
A balanced binary tree can be constructed by \emph{cardinality
balanced clustering}. This means that the root cluster
is split into two son clusters of identical (or similar)
cardinality. This process is repeated recursively for the
resulting son clusters until their cardinality falls below a
certain threshold.
For the subdivision, the cluster's bounding box
is split along its longest edge such that the
resulting two boxes both contain an equal number of points.
Thus, as the cluster cardinality halves with each level,
we obtain $\mathcal{O}(\log N)$ levels in total.
The total cost for constructing the cluster tree
is therefore $\mathcal{O}(N \log N)$. Finally, we remark that a
balanced tree is only required to guarantee the cost bounds
for the presented algorithms. The error and compression estimates
we shall present later on are robust in the sense that they
are formulated directly in terms of the actual cluster sizes
rather than the introduced cluster level.
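As an illustration of the cardinality balanced clustering described above, the following Python sketch builds a balanced binary cluster tree by recursively splitting each cluster at the median along the longest edge of its bounding box. This is our own illustrative code, not the reference implementation; the dictionary-based tree representation, the function name, and the leaf size are our choices.

```python
import numpy as np

def build_cluster_tree(points, indices=None, leaf_size=8):
    """Cardinality balanced clustering: split each cluster into two sons
    of (nearly) equal cardinality along the longest bounding-box edge."""
    if indices is None:
        indices = np.arange(len(points))
    node = {"indices": indices, "sons": []}
    if len(indices) > leaf_size:
        box = points[indices]
        # longest edge of the cluster's bounding box
        dim = np.argmax(box.max(axis=0) - box.min(axis=0))
        order = indices[np.argsort(points[indices, dim])]
        half = len(order) // 2  # equal cardinality yields a balanced tree
        node["sons"] = [build_cluster_tree(points, order[:half], leaf_size),
                        build_cluster_tree(points, order[half:], leaf_size)]
    return node

rng = np.random.default_rng(0)
X = rng.random((1000, 2))        # N = 1000 scattered points in the unit square
tree = build_cluster_tree(X, leaf_size=8)
```

Since the cluster cardinality halves with each split, the resulting tree has $\mathcal{O}(\log N)$ levels, in line with the cost bound stated above.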
\subsection{Multiscale hierarchy}
Having a cluster tree at hand, we
shall now construct a samplet basis on the resulting
hierarchical structure. We begin by introducing a \emph{two-scale}
transform between basis elements on a cluster $\nu$ of level $j$.
To this end, we create \emph{scaling functions} $\mathbf{\Phi}_{j}^{\nu}
= \{ \varphi_{j,k}^{\nu} \}$ and \emph{samplets} $\mathbf{\Sigma}_{j}^{\nu}
= \{ \sigma_{j,k}^{\nu} \}$ as linear combinations of the scaling
functions $\mathbf{\Phi}_{j+1}^{\nu}$ of $\nu$'s son clusters.
This results in the \emph{refinement relation}
\begin{equation}\label{eq:refinementRelation}
[ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ]
\mathrel{\mathrel{\mathop:}=}
\mathbf{\Phi}_{j+1}^{\nu}
{\boldsymbol Q}_j^{\nu}=
\mathbf{\Phi}_{j+1}^{\nu}
\big[ {\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}\big].
\end{equation}
In order to provide both vanishing moments and orthonormality,
the transformation \({\boldsymbol Q}_{j}^{\nu}\) has to be
appropriately constructed. For this purpose, we consider an orthogonal
decomposition of the \emph{moment matrix}
\[
{\boldsymbol M}_{j+1}^{\nu}\mathrel{\mathrel{\mathop:}=}
\begin{bmatrix}({\boldsymbol x}^{\boldsymbol 0},\varphi_{j+1,1})_\Omega&\cdots&
({\boldsymbol x}^{\boldsymbol 0},\varphi_{j+1,|\nu|})_\Omega\\
\vdots & & \vdots\\
({\boldsymbol x}^{\boldsymbol\alpha},\varphi_{j+1,1})_\Omega&\cdots&
({\boldsymbol x}^{\boldsymbol\alpha},\varphi_{j+1,|\nu|})_\Omega
\end{bmatrix}=
[({\boldsymbol x}^{\boldsymbol\alpha},\mathbf{\Phi}_{j+1}^{\nu})_\Omega]_{|\boldsymbol\alpha|\le q}
\in\Rbb^{m_q\times|\nu|},
\]
where
\begin{equation}\label{eq:mq}
m_q\mathrel{\mathrel{\mathop:}=}\sum_{\ell=0}^q{\ell+d-1\choose d-1}={q+d\choose d}\leq(q+1)^d
\end{equation}
denotes the dimension of \(\Pcal_q(\Omega)\).
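The identity \eqref{eq:mq} for the dimension of \(\Pcal_q(\Omega)\) is easy to verify numerically. The following Python snippet (our illustration) checks both the closed form and the stated upper bound:

```python
from math import comb

def m_q(q, d):
    """Dimension of the space of d-variate polynomials of total degree <= q,
    computed as the sum over the dimensions of the homogeneous components."""
    return sum(comb(ell + d - 1, d - 1) for ell in range(q + 1))

# the sum, the closed form, and the upper bound of eq. (mq) agree
for q in range(6):
    for d in range(1, 5):
        assert m_q(q, d) == comb(q + d, d) <= (q + 1) ** d
```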
In the original construction by
Tausch and White, the matrix \({\boldsymbol Q}_{j}^{\nu}\) is obtained
by a singular value decomposition of \({\boldsymbol M}_{j+1}^{\nu}\).
For the construction of samplets, we follow the idea
from \cite{AHK14} and rather
employ the QR decomposition, which has the advantage of generating
samplets with an increasing number of vanishing moments.
It holds
\begin{equation}\label{eq:QR}
({\boldsymbol M}_{j+1}^{\nu})^\intercal = {\boldsymbol Q}_j^\nu{\boldsymbol R}
\mathrel{=\mathrel{\mathop:}}\big[{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu}\big]{\boldsymbol R}.
\end{equation}
Consequently, the moment matrix
for the cluster's own scaling functions and samplets is then
given by
\begin{equation}\label{eq:vanishingMomentsQR}
\begin{aligned}
\big[{\boldsymbol M}_{j,\Phi}^{\nu}, {\boldsymbol M}_{j,\Sigma}^{\nu}\big]
&= \left[({\boldsymbol x}^{\boldsymbol\alpha},[\mathbf{\Phi}_{j}^{\nu},
\mathbf{\Sigma}_{j}^{\nu}])_\Omega\right]_{|\boldsymbol\alpha|\le q}
= \left[({\boldsymbol x}^{\boldsymbol\alpha},\mathbf{\Phi}_{j+1}^{\nu}[{\boldsymbol Q}_{j,\Phi}^{\nu}
, {\boldsymbol Q}_{j,\Sigma}^{\nu}])_\Omega
\right]_{|\boldsymbol\alpha|\le q} \\
&= {\boldsymbol M}_{j+1}^{\nu} [{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ]
= {\boldsymbol R}^\intercal.
\end{aligned}
\end{equation}
As ${\boldsymbol R}^\intercal$ is a lower triangular matrix, the first $k-1$
entries in its $k$-th column are zero. This corresponds to
$k-1$ vanishing moments for the $k$-th function generated
by the transformation
${\boldsymbol Q}_{j}^{\nu}=[{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ]$.
By defining the first $m_{q}$ functions as scaling functions and
the remaining as samplets, we obtain samplets with vanishing
moments at least up to order $q+1$. By increasing
the polynomial degree to \(\hat{q}\geq q\) at the leaf clusters
such that \(m_{\hat{q}}\geq 2m_q\), we can even construct
samplets with an increasing number of vanishing moments up to order \(\hat{q}+1\)
without any additional cost.
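To make the QR-based construction concrete, the following numpy sketch computes the transformation for a single cluster of 1D points with \(q=1\). It is an illustrative simplification of \eqref{eq:QR} (the function name and the restriction to \(d=1\) are our choices): the columns of \texttt{Q\_sigma} are orthonormal and annihilate all polynomials of degree at most \(q\).

```python
import numpy as np

def samplet_transform_for_cluster(points, q=1):
    """Compute the orthogonal transform Q = [Q_Phi, Q_Sigma] for one
    cluster of 1D points via a QR decomposition of the transposed
    moment matrix, cf. eq. (QR). Illustrative sketch for d = 1."""
    # moment matrix M[alpha, i] = x_i^alpha for alpha = 0, ..., q
    M = np.vstack([points**a for a in range(q + 1)])      # (q+1) x |nu|
    Q, R = np.linalg.qr(M.T, mode="complete")             # M^T = Q R
    m_q = q + 1                                           # dim P_q for d = 1
    return Q[:, :m_q], Q[:, m_q:]                         # Q_Phi, Q_Sigma

rng = np.random.default_rng(1)
x = np.sort(rng.random(16))                  # one cluster with 16 points
Q_phi, Q_sigma = samplet_transform_for_cluster(x, q=1)

# each samplet (column of Q_sigma) annihilates 1 and x, i.e. it has
# vanishing moments up to degree q = 1
moments = np.vstack([x**a for a in range(2)]) @ Q_sigma
assert np.allclose(moments, 0.0)
```

The vanishing moments hold up to floating point accuracy because, as in \eqref{eq:vanishingMomentsQR}, the product of the moment matrix with \({\boldsymbol Q}_{j,\Sigma}^{\nu}\) consists of the trailing columns of the lower triangular matrix \({\boldsymbol R}^\intercal\).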
\begin{remark}
We remark that the samplet construction using vanishing moments
is inspired by the classical wavelet theory. However, it is easily
possible to adapt the construction to other primitives of interest.
\end{remark}
\begin{remark}
\label{remark:introCQ}
Each cluster has at most a constant number of scaling
functions and samplets: For a particular cluster $\nu$, their number
is identical to the cardinality of $\mathbf{\Phi}_{j+1}^{\nu}$. For leaf
clusters, this number is bounded by the leaf size.
For non-leaf clusters, it is bounded by the number of scaling functions
provided from all its son clusters. As there are at most two
son clusters with a maximum of $m_q$ scaling functions each,
we obtain the bound $2 m_q$ for non-leaf clusters. Note that,
if $\mathbf{\Phi}_{j+1}^{\nu}$ has at most $m_q$ elements, a
cluster will not provide any samplets at all and all functions
will be considered as scaling functions.
\end{remark}
For leaf clusters, we define the scaling functions by
the Dirac measures supported at the points \({\boldsymbol x}_i\), i.e.\
$\mathbf{\Phi}_J^{\nu}\mathrel{\mathrel{\mathop:}=}\{ \delta_{{\boldsymbol x}_i} : {\boldsymbol x}_i\in\nu \}$,
to account for the lack of son clusters that could provide scaling functions.
The scaling functions of all clusters on a specific level $j$
then generate the spaces
\begin{equation}\label{eq:Vspaces}
V_{j}\mathrel{\mathrel{\mathop:}=} \spn\{ \varphi_{j,k}^{\nu} : k\in \Delta_j^\nu,\ \nu \in\Tcal_{j} \},
\end{equation}
while the samplets span the detail spaces
\begin{equation}\label{eq:Wspaces}
S_{j}\mathrel{\mathrel{\mathop:}=}
\spn\{ \sigma_{j,k}^{\nu} : k\in \nabla_j^\nu,\ \nu \in \Tcal_{j} \} =
V_{j+1}\overset{\perp}{\ominus} V_j.
\end{equation}
Combining the scaling functions of the root cluster with all
clusters' samplets gives rise to the samplet basis
\begin{equation}\label{eq:Wbasis}
\mathbf{\Sigma}_{N}\mathrel{\mathrel{\mathop:}=}\mathbf{\Phi}_{0}^{X}
\cup \bigcup_{\nu \in T} \mathbf{\Sigma}_{j}^{\nu}.
\end{equation}
Writing $\mathbf{\Sigma}_{N}
= \{ \sigma_{k} : 1 \leq k \leq N \}$, where
$\sigma_{k}$ is either a samplet or a scaling function
at the root cluster, we can establish a unique indexing of
all the signed measures comprising the samplet
basis. The indexing induces an order on the
basis set $\mathbf{\Sigma}_{N}$, which we choose
to be level-dependent: Samplets belonging to a particular
cluster are grouped together, with those on finer levels
having larger indices.
\begin{remark}\label{remark:waveletLeafSize}
We remark that the samplet basis on a balanced
cluster tree can be computed with cost $\mathcal{O}(N)$;
we refer to \cite{AHK14} for a proof.
\end{remark}
\subsection{Properties of the samplets}
By construction, the samplets satisfy the following
properties, which can directly be inferred from
the corresponding results in \cite{HKS05,TW03}.
\begin{theorem}\label{theo:waveletProperties}
The spaces $V_{j}$ defined in equation \eqref{eq:Vspaces}
exhibit the desired multiscale hierarchy
\[
V_0\subset V_1\subset\cdots\subset V_J = V,
\]
where the corresponding complement spaces $S_{j}$ from \eqref{eq:Wspaces}
satisfy $V_{j+1}=V_j\overset{\perp}{\oplus} S_{j}$ for all $j=0,1,\ldots,
J-1$. The associated samplet basis $\mathbf{\Sigma}_{N}$ defined in
\eqref{eq:Wbasis} constitutes an orthonormal basis of $V$.
In particular:
\begin{enumerate}
\item The number of all samplets on level $j$ behaves like $2^j$.
\item The samplets have $q+1$ vanishing moments.
\item
Each samplet is supported in a specific cluster $\nu$.
If the points in $X$ are uniformly distributed, then the
diameter of the cluster satisfies $\diam(\nu)\sim
2^{-j_\nu/d}$ and it holds \eqref{eq:localized}.
\end{enumerate}
\end{theorem}
\begin{remark}
Due to $S_j\subset V$ and $V_0\subset V$,
we conclude that each samplet is a linear combination of the
Dirac measures supported at the points in $X$. Especially, the
related coefficient vectors ${\boldsymbol\omega}_{j,k}$ in
\begin{equation}\label{eq:coefficientVectorsOfWavelets}
\sigma_{j,k} = \sum_{i=1}^{N}
\omega_{j,k,i} \delta_{{\boldsymbol x}_i} \quad
\text{and} \quad \varphi_{0,k} = \sum_{i=1}^{N} \omega_{0,k,i} \delta_{{\boldsymbol x}_i}
\end{equation}
are pairwise orthonormal with respect to the inner
product on \(\Rbb^N\).
\end{remark}
Later on, the following bound on the $\|\cdot\|_1$-norm of the
samplets' coefficient vectors will
be essential:
\begin{lemma}\label{lemma:waveletL1Norm}
The coefficient vector ${\boldsymbol\omega}_{j,k}=\big[\omega_{j,k,i}\big]_i$ of
the samplet $\sigma_{j,k}$ on the cluster $\nu$ fulfills
\begin{equation}\label{eq:ell1-norm}
\|{\boldsymbol\omega}_{j,k}\|_{1}\le\sqrt{|\nu|}.
\end{equation}
The same holds for the scaling functions $\varphi_{j,k}$.
\end{lemma}
\begin{proof}
It holds $\|{\boldsymbol\omega}_{j,k}\|_{2}=1$. Hence,
the assertion follows immediately from the Cauchy-Schwarz
inequality
\[
\|{\boldsymbol\omega}_{j,k}\|_{1}\le\sqrt{|\nu|}\|{\boldsymbol\omega}_{j,k}\|_{2}
=\sqrt{|\nu|}.
\]
\end{proof}
The key for data compression and singularity detection
is the following estimate which shows that the samplet
coefficients decay with respect to the samplet's level
provided that the data result from the evaluation of a smooth function.
Therefore, in case of smooth data, the samplet
coefficients are small and can be set to zero without
compromising the accuracy. Vice versa, a large samplet
coefficient indicates that the data are singular in the
region of the samplet's support.
\begin{lemma}\label{lemma:decay}
Let $f\in C^{q+1}(\Omega)$. Then, it holds for
a samplet $\sigma_{j,k}$ supported
on the cluster $\nu$ that
\begin{equation}\label{eq:decay}
|(f,\sigma_{j,k})_\Omega|\le
\diam(\nu)^{q+1}\|f\|_{C^{q+1}(\Omega)}\|{\boldsymbol\omega}_{j,k}\|_{1}.
\end{equation}
\end{lemma}
\begin{proof}
For ${\boldsymbol x}_0\in\nu$, the Taylor expansion of $f$ yields
\[
f({\boldsymbol x}) = \sum_{|\boldsymbol\alpha|\le q}
\frac{\partial^{|\boldsymbol\alpha|}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}f({\boldsymbol x}_0)
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
+ R_{{\boldsymbol x}_0}({\boldsymbol x}).
\]
Herein, the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x})$ reads
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
f\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0)\big)(1-s)^q\d s.
\end{align*}
In view of the vanishing moments, we conclude
\begin{align*}
|(f,\sigma_{j,k})_\Omega|
&= |(R_{{\boldsymbol x}_0},\sigma_{j,k})_\Omega|
\le\sum_{|\boldsymbol\alpha|=q+1}\frac{\|{\boldsymbol x}-{\boldsymbol x}_0\|_2^{|\boldsymbol\alpha|}}{\boldsymbol\alpha!}
\max_{{\boldsymbol x}\in\nu}\bigg|\frac{\partial^{q+1}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}}f(\boldsymbol x)\bigg|\|{\boldsymbol\omega}_{j,k}\|_{1}\\
&\le\diam(\nu)^{q+1}\|f\|_{C^{q+1}(\Omega)}\|{\boldsymbol\omega}_{j,k}\|_{1}.
\end{align*}
Here, we used the estimate
\[
\sum_{|\boldsymbol\alpha|=q+1}\frac{2^{-(q+1)}}{\boldsymbol\alpha!}\le 1,
\]
which is obtained by choosing \({\boldsymbol x}_0\) as the
cluster's midpoint.
\end{proof}
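The decay predicted by Lemma~\ref{lemma:decay} can be observed numerically. Under the assumption of smooth data \(f(x)=\exp(x)\) and samplets with two vanishing moments (\(q=1\), \(d=1\)), halving the cluster diameter should reduce the samplet coefficients by roughly \(2^{q+1}=4\). The following sketch is our own illustration (names and the single-cluster setup are our choices, not the paper's reference code):

```python
import numpy as np

def cluster_samplets(x, q):
    """Orthonormal samplet coefficient vectors on one cluster of 1D
    points, annihilating all polynomials of degree <= q (QR of the
    transposed moment matrix; illustrative sketch for d = 1)."""
    M = np.vstack([x**a for a in range(q + 1)])   # moment matrix
    Q, _ = np.linalg.qr(M.T, mode="complete")
    return Q[:, q + 1:]                           # drop the scaling functions

q = 1
coeffs = []
for diam in (0.5, 0.25, 0.125, 0.0625):
    x = np.linspace(0.0, diam, 32)                # cluster of this diameter
    W = cluster_samplets(x, q)
    # size of the samplet coefficients of the smooth data f(x) = exp(x)
    coeffs.append(np.linalg.norm(W.T @ np.exp(x)))

# halving diam reduces the coefficients by a factor of about 2^(q+1) = 4
ratios = [a / b for a, b in zip(coeffs, coeffs[1:])]
```

The observed ratios approach \(4\) as the diameter shrinks, which matches the factor \(\diam(\nu)^{q+1}\) in \eqref{eq:decay}.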
\section{Discrete samplet transform}\label{sec:FST}
In order to transform between the samplet basis
and the basis of Dirac measures, we introduce
the \emph{discrete samplet transform} and its inverse.
To this end, we assume that the data
\(({\boldsymbol x}_1,y_1),\ldots,({\boldsymbol x}_N,y_N)\)
result from the evaluation of some (unknown) function
\(f\colon\Omega\to\Rbb\),
i.e.\
\[y_i=f_i^{\Delta}=(f,\delta_{{\boldsymbol x}_i})_\Omega.
\]
Hence, we may represent the function \(f\) on \(X\)
according to
\[f = \sum_{i = 1}^{N} f_i^{\Delta} \delta_{{\boldsymbol x}_i}.
\]
Our goal is now to compute the representation
\[f =
\sum_{k = 1}^{N} f_{k}^{\Sigma} \sigma_{k}
\]
with respect to the samplet basis.
For the sake of simpler notation, let
${\boldsymbol f}^{\Delta}\mathrel{\mathrel{\mathop:}=} [f_i^{\Delta}]_{i=1}^N$
and ${\boldsymbol f}^{\Sigma}\mathrel{\mathrel{\mathop:}=} [f_i^\Sigma]_{i=1}^N$ denote
the associated coefficient vectors.
\begin{figure}[htb]
\begin{center}
\scalebox{0.75}{
\begin{tikzpicture}[x=0.4cm,y=0.4cm]
\tikzstyle{every node}=[circle,draw=black,fill=shadecolor, minimum size=1.2cm]%
\tikzstyle{ptr}=[draw=none,fill=none,above]%
\node at (0,5) (1) {${\boldsymbol f}^{\Delta}$};
\node at (8,5) (2) {${\boldsymbol f}_{J-1}^{\Phi}$};
\node at (8,1) (3) {${\boldsymbol f}_{J-1}^{\Sigma}$};
\node at (16,5) (4) {${\boldsymbol f}_{J-2}^{\Phi}$};
\node at (16,1) (5) {${\boldsymbol f}_{J-2}^{\Sigma}$};
\node at (24,5) (6) {${\boldsymbol f}_{J-3}^{\Phi}$};
\node at (24,1) (7) {${\boldsymbol f}_{J-3}^{\Sigma}$};
\node at (30,5) (8) {${\boldsymbol f}_{1}^{\Phi}$};
\node at (38,5) (9) {${\boldsymbol f}_{0}^{\Phi}$};
\node at (38,1) (10) {${\boldsymbol f}_{0}^{\Sigma}$};
\tikzstyle{forward}=[draw,-stealth]%
\tikzstyle{every node}=[style=ptr]
\draw
(1) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-1,\Phi}^\intercal$} (2)
(1) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-1,\Sigma}^\intercal$} (3)
(2) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-2,\Phi}^\intercal$} (4)
(2) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-2,\Sigma}^\intercal$} (5)
(4) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-3,\Phi}^\intercal$} (6)
(4) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-3,\Sigma}^\intercal$} (7)
(8) edge[forward] node[above,sloped]{${\boldsymbol Q}_{0,\Phi}^\intercal$} (9)
(8) edge[forward] node[above,sloped]{${\boldsymbol Q}_{0,\Sigma}^\intercal$} (10);
\tikzstyle{every node}=[style=ptr]%
\tikzstyle{ptr}=[draw=none,fill=none]%
\node at (27,5) (16) {$\hdots$};
\end{tikzpicture}}
\caption{\label{fig:haar}Visualization of the discrete samplet transform.}
\end{center}
\end{figure}
The discrete samplet transform is based on
recursively applying the refinement relation
\eqref{eq:refinementRelation} to the point evaluations
\begin{equation}\label{eq:refinementRelationInnerProducts}
(f, [ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ])_\Omega
=(f, \mathbf{\Phi}_{j+1}^{\nu} [{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ])_\Omega \\
=(f, \mathbf{\Phi}_{j+1}^{\nu})_\Omega [{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ].
\end{equation}
On the finest level, the entries of the vector
$(f, \mathbf{\Phi}_{J}^{\nu})_\Omega$
are exactly those of ${\boldsymbol f}^{\Delta}$. Recursively
applying equation \eqref{eq:refinementRelationInnerProducts} therefore
yields all the coefficients $(f, \mathbf{\Sigma}_{j}^{\nu})_\Omega$,
including $(f, \mathbf{\Phi}_{0}^{X})_\Omega$,
required for the representation of $f$ in the samplet basis,
see Figure~\ref{fig:haar} for a visualization. The
complete procedure is
formulated in Algorithm~\ref{algo:DWT}.\bigskip
\begin{algorithm}[H]
\caption{Discrete samplet transform}
\label{algo:DWT}
\KwData{Data ${\boldsymbol f}^\Delta$,
cluster tree $\Tcal$ and transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Coefficients ${\boldsymbol f}^{\Sigma}$
stored as
$[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal$ and
$[(f,\mathbf{\Sigma}_{j}^{\nu})_\Omega]^\intercal$.}
\Begin{
store $[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal\mathrel{\mathrel{\mathop:}=}$
\FuncSty{transformForCluster}($X$)
}
\end{algorithm}
\begin{function}[H]
\caption{transformForCluster($\nu$)}
\Begin{
\uIf{$\nu=\{{\boldsymbol x}_{i_{1}}, \dots,{\boldsymbol x}_{i_{|\nu|}}\}$
is a leaf of \(\Tcal\)}{
set ${\boldsymbol f}_{j+1}^{\nu}\mathrel{\mathrel{\mathop:}=}
\big[f_{i_{k}}^\Delta\big]_{k=1}^{|\nu|}$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
execute $\transformForCluster(\nu')$\\
append the result to ${\boldsymbol f}_{j+1}^{\nu}$
}
}
set $[(f,\mathbf{\Sigma}_{j}^{\nu})_\Omega]^\intercal
\mathrel{\mathrel{\mathop:}=}({\boldsymbol Q}_{j,\Sigma}^{\nu})^\intercal {\boldsymbol f}_{j+1}^{\nu}$
\Return{$({\boldsymbol Q}_{j,\Phi}^{\nu})^\intercal{\boldsymbol f}_{j+1}^{\nu}$}
}
\end{function}\bigskip
\begin{remark}
Algorithm \ref{algo:DWT} employs the transposed version of
\eqref{eq:refinementRelationInnerProducts} to preserve
the column vector structure of ${\boldsymbol f}^\Delta$ and ${\boldsymbol f}^{\Sigma}$.
\end{remark}
The inverse transformation is obtained by reversing
the steps of the discrete samplet transform:
For each cluster, we compute
\[
(f, \mathbf{\Phi}_{j+1}^{\nu})_\Omega
= (f, [ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ]
)_\Omega[{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ]^\intercal
\]
to either obtain the
coefficients of the
son clusters' scaling functions
or, for leaf clusters, the coefficients ${\boldsymbol f}^{\Delta}$.
The procedure is summarized in Algorithm~\ref{algo:iDWT}.\bigskip
\begin{algorithm}[H]
\caption{Inverse samplet transform}
\label{algo:iDWT}
\KwData{Coefficients ${\boldsymbol f}^\Sigma$,
cluster tree $\Tcal$ and transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Coefficients ${\boldsymbol f}^{\Delta}$
stored as
$[(f,\mathbf{\Phi}_{j}^{\nu})_\Omega]^\intercal$.}
\Begin{
\FuncSty{inverseTransformForCluster}($X$,
$[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal$)
}
\end{algorithm}
\begin{function}[H]
\caption{inverseTransformForCluster($\nu$,
\unexpanded{$[(f,{\boldsymbol\Phi}_{j}^\nu)_\Omega]^\intercal$})}
\Begin{
$[(f,{\boldsymbol\Phi}_{j+1}^\nu)_\Omega]^\intercal
\mathrel{\mathrel{\mathop:}=} [{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ]
\begin{bmatrix}
[(f,{\boldsymbol\Phi}_{j}^\nu)_\Omega]^\intercal\\
[(f,{\boldsymbol\Sigma}_{j}^\nu)_\Omega]^\intercal
\end{bmatrix}$
\uIf{$\nu=\{{\boldsymbol x}_{i_{1}}, \dots,{\boldsymbol x}_{i_{|\nu|}}\}$
is a leaf of \(\Tcal\)}{set $\big[f_{i_{k}}^\Delta\big]_{k=1}^{|\nu|}
\mathrel{\mathrel{\mathop:}=}[(f,{\boldsymbol\Phi}_{j+1}^\nu)_\Omega]^\intercal$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
assign the part of $[(f,{\boldsymbol\Phi}_{j+1}^\nu)_\Omega]^\intercal$
belonging to \(\nu'\) to $[(f,{\boldsymbol\Phi}_{j'}^{\nu'})_\Omega]^\intercal$\\
execute \FuncSty{inverseTransformForCluster}($\nu'$,
$[(f,{\boldsymbol\Phi}_{j'}^{\nu'})_\Omega]^\intercal$) }
}
}
\end{function}\bigskip
The discrete samplet transform and its inverse
can be performed with linear cost. This
result is well known for wavelets and was
crucial for their rapid development.
\begin{theorem}
The runtimes of the discrete samplet transform and of the inverse
samplet transform are \(\mathcal{O}(N)\) each.
\end{theorem}
\begin{proof}
As the samplet construction follows the construction
of Tausch and White, we refer to \cite{TW03} for the
details of the proof.
\end{proof}
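To make the recursive structure of the two transforms concrete, the following Python sketch mimics them in the simplest Haar-type setting: one vanishing moment, a balanced binary cluster tree, and a single scaling coefficient per cluster. This is an illustration under these simplifying assumptions, not the samplet construction of this paper; every cluster is visited once with $\mathcal{O}(1)$ work, which makes the linear cost visible directly in the recursion.

```python
import math

def samplet_transform(data, details=None):
    """Haar-type model (q = 0) of the discrete samplet transform on a
    balanced binary cluster tree: every cluster keeps one scaling
    coefficient and emits one detail coefficient, so the total work
    is O(N)."""
    if details is None:
        details = []
    if len(data) == 1:                        # leaf cluster
        return data[0], details
    mid = len(data) // 2
    left, _ = samplet_transform(data[:mid], details)
    right, _ = samplet_transform(data[mid:], details)
    details.append((left - right) / math.sqrt(2.0))   # Q_Sigma part
    return (left + right) / math.sqrt(2.0), details   # Q_Phi part

def inverse_samplet_transform(s, details, n):
    """Reverses the recursion by applying the transposed 2x2 rotations;
    the detail list is consumed from the back, i.e., in the opposite
    order of the forward transform."""
    stack = details[:]

    def rec(s, n):
        if n == 1:
            return [s]
        d = stack.pop()
        s_left = (s + d) / math.sqrt(2.0)
        s_right = (s - d) / math.sqrt(2.0)
        right = rec(s_right, n - n // 2)      # appended last, pop first
        left = rec(s_left, n // 2)
        return left + right

    return rec(s, n)
```

Since all $2\times 2$ blocks are rotations, the transform is orthogonal: the energy of the data is preserved, and constant data produce vanishing detail coefficients.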
\section{Numerical results I}\label{sec:Num1}
To demonstrate the efficacy of the samplet analysis,
we compress different sample data in one, two and three
spatial dimensions. For each example, we use samplets
with \(q+1=3\) vanishing moments.
\subsection*{One dimension}
We start with two one-dimensional
examples. On the one hand, we consider the test function
\[
f(x)=\frac 3 2 e^{-40|x-\frac 1 4|}
+ 2e^{-40|x|}-e^{-40|x+\frac 1 2|},
\]
sampled at $8192$ uniformly distributed points on \([-1,1]\).
On the other hand, we consider a path of a Brownian motion
sampled at the same points. The coefficients of the samplet
transformed data are thresholded with \(10^{-i}\|{\boldsymbol f}^{\Sigma}\|_\infty\),
\(i=1,2,3\), respectively.
The resulting compression ratios and the reconstructions
can be found in Figure~\ref{fig:Expcomp} and Figure~\ref{fig:BMcomp},
respectively. One readily infers that in both cases high compression
rates are achieved at high accuracy. In case of the Brownian motion,
the sample data can be smoothed by increasing the
compression rate, which corresponds to discarding more and
more detail information. Indeed, due to the orthonormality of the samplet
basis, this procedure amounts to a least-squares fit of the data.
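The least-squares property of thresholding is a direct consequence of orthonormality and can be made explicit with a small toy basis. In the Python sketch below, the $4\times 4$ Haar matrix serves as a stand-in for a samplet basis (it is not the basis constructed in this paper); zeroing coefficients below a threshold yields a reconstruction whose squared error equals exactly the energy of the discarded coefficients.

```python
import math

# Orthonormal 4x4 Haar matrix as a stand-in for a samplet basis.
s2 = 1.0 / math.sqrt(2.0)
H = [
    [0.5, 0.5, 0.5, 0.5],       # scaling function
    [0.5, 0.5, -0.5, -0.5],     # coarse detail
    [s2, -s2, 0.0, 0.0],        # fine details
    [0.0, 0.0, s2, -s2],
]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def compress(f, threshold):
    """Zero all transformed coefficients below `threshold` relative to
    the largest one and transform back (H is orthonormal, so the
    inverse transform is the transpose)."""
    c = matvec(H, f)
    cut = threshold * max(abs(v) for v in c)
    c_eps = [v if abs(v) >= cut else 0.0 for v in c]
    f_eps = matvec(transpose(H), c_eps)
    dropped_energy = sum(v * v for v in c if abs(v) < cut)
    return f_eps, dropped_energy
```

By Parseval's identity, `dropped_energy` equals $\|{\boldsymbol f}-{\boldsymbol f}_\varepsilon\|_2^2$, i.e., the thresholded reconstruction is the best approximation in the span of the retained basis vectors.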
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{axis}[width=\textwidth, height=0.4\textwidth, xmin = -1, xmax=1, ymin=-1.1,
ymax=2.1, ylabel={$y$}, xlabel ={$x$},legend style={mark options={scale=2}},
legend pos = north east]
\addplot[line width=0.7pt,color=black]
table[each nth point=3,x index={0},y index = {1}]{./Results/ExpCompress1D.txt};
\addlegendentry{data};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {5}]{./Results/ExpCompress1D.txt};
\addlegendentry{$98.55\%$ compr.};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {4}]{./Results/ExpCompress1D.txt};
\addlegendentry{$99.17\%$ compr.};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {3}]{./Results/ExpCompress1D.txt};
\addlegendentry{$99.63\%$ compr.};
\end{axis}
\end{tikzpicture}
\end{center}
\caption{\label{fig:Expcomp}Sampled test function approximated with
different compression ratios.}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[ ausschnitt/.style={black!80}]
\begin{axis}[width=\textwidth, height=0.4\textwidth, xmin = -1, xmax=1, ymin=-1,
ymax=2.4,
ylabel={$y$}, xlabel ={$x$},legend style={mark options={scale=2}},
legend pos = south east]
\draw[ausschnitt]
(axis cs:-0.5,-0.5)coordinate(ul)--
(axis cs:0.005,-0.5)coordinate(ur)--
(axis cs:0.005,0.4)coordinate(or)--
(axis cs:-0.5,0.4) -- cycle;
\addplot[line width=0.7pt,color=black]
table[each nth point=4,x index={0},y index = {1}]{./Results/BMCompress1D.txt};
\addlegendentry{data};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {5}]{./Results/BMCompress1D.txt};
\addlegendentry{$92.69\%$ compr.};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {4}]{./Results/BMCompress1D.txt};
\addlegendentry{$99.24\%$ compr.};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {3}]{./Results/BMCompress1D.txt};
\addlegendentry{$99.88\%$ compr.};
\end{axis}
\begin{axis}[yshift=-.37\textwidth,xshift=0.25\textwidth,
width=0.5\textwidth, height=0.4\textwidth, xmin = -0.5,
xmax=0.005, ymin=-0.5,
ymax=0.4,axis line style=ausschnitt]
\addplot[line width=0.7pt,color=black]
table[each nth point=2,x index={0},y index = {1}]{./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {5}]{./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {4}]{./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {3}]{./Results/BMCompress1D.txt};
\end{axis}
\draw[ausschnitt]
(current axis.north west)--(ul)
(current axis.north east)--(ur);
\end{tikzpicture}
\caption{\label{fig:BMcomp}Sampled Brownian motion approximated with
different compression ratios.}
\end{center}
\end{figure}
\subsection*{Two dimensions}
As a second application for samplets, we consider image compression.
To this end, we use a \(2000\times 2000\) pixel grayscale landscape
image. The coefficients of the samplet transformed image are thresholded
with \(10^{-i}\|{\boldsymbol f}^{\Sigma}\|_\infty\), \(i=2,3,4\), respectively.
The corresponding
results and compression rates can be found in Figure~\ref{fig:compImage}.
A visualization of the samplet coefficients for the lowest
compression rate can be found in Figure~\ref{fig:coeffImage}.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/OriginalLugano.png}};
\draw(0,2.4)node{Original image};
\draw(5,2.4)node{\(95.23\%\) compression};
\draw(5,0)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedLowLugano.png}};
\draw(0,-2.6)node{\(99.89\%\) compression};
\draw(0,-5)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedIntermedLugano.png}};
\draw(5,-2.6)node{\(99.99\%\) compression};
\draw(5,-5)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedHighLugano.png}};
\end{tikzpicture}
\caption{\label{fig:compImage}Different compression rates of the test image.}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[scale = 0.12,trim=1000 195 1000 195,clip]{%
./Results/LuganoCoeffs.png}};
\draw(3.3,0)node{\includegraphics[scale = 0.14,trim=2100 400 460 400,clip]{%
./Results/LuganoCoeffs.png}};
\end{tikzpicture}
\caption{\label{fig:coeffImage}Visualization of the samplet coefficients for the
test image.}
\end{center}
\end{figure}
\subsection*{Three dimensions}
Finally, we show a result in three dimensions.
Here, the points are given by a uniform subsample of
a triangulation of the Stanford bunny. We consider data on the
Stanford bunny generated by the function
\[
f({\boldsymbol x})=e^{-20\|{\boldsymbol x}-{\boldsymbol p}_0\|_2}+e^{-20\|{\boldsymbol x}-{\boldsymbol p}_1\|_2},
\]
where the points \({\boldsymbol p}_0\) and \({\boldsymbol p}_1\) are located at the tips
of the bunny's ears. Moreover, the geometry has been rescaled to a
diameter of 2. The plot on the left-hand side of Figure~\ref{fig:coeffStanford}
visualizes the sample data, while the plot on the right-hand side
shows the dominant coefficients in case of a threshold parameter
of \(10^{-2}\|{\boldsymbol f}^{\Sigma}\|_\infty\).
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[scale = 0.12,trim=1040 230 1050 280,clip]{%
./Results/StanfordBunnySignal.png}};
\draw(5,0)node{\includegraphics[scale = 0.12,trim=1000 200 1000 200,clip]{%
./Results/StanfordBunny1e-2Coeff.png}};
\draw(8,0)node{\includegraphics[scale = 0.14,trim=2130 400 600 400,clip]{%
./Results/StanfordBunny1e-2Coeff.png}};
\end{tikzpicture}
\caption{\label{fig:coeffStanford}Data on the Stanford bunny (left) and
dominant samplet coefficients (right).}
\end{center}
\end{figure}
\section{Compression of kernel matrices}\label{sec:kernelCompression}
\subsection{Kernel matrices}
The second application of samplets we consider
is the compression of matrices arising from positive
(semi-) definite kernels, as they emerge in kernel
methods, such as scattered data analysis, kernel
based learning or Gaussian process regression,
see for example \cite{HSS08,Schaback2006,
Wendland2004,Williams1998} and the references
therein.
We start by recalling the concept of a positive (semi-)definite kernel.
\begin{definition}\label{def:poskernel}
A symmetric kernel
$\mathcal{K}\colon\Omega\times\Omega\rightarrow\Rbb$ is
called \textit{positive (semi-)definite} on $\Omega\subset\mathbb{R}^d$,
iff \([\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j)]_{i,j=1}^N\)
is a symmetric and positive (semi-)definite matrix
for all
$\{{\boldsymbol x}_1, \ldots,{\boldsymbol x}_N\}\subset\Omega$
and all $N\in\mathbb{N}$.
\end{definition}
As a particular class of positive definite
kernels, we consider the \emph{Mat\'ern kernels} given by
\begin{equation}\label{eq:matkern}
k_\nu(r)\mathrel{\mathrel{\mathop:}=}\frac{2^{1-\nu}}{\Gamma(\nu)}\bigg(\frac {\sqrt{2\nu}r}{\ell}\bigg)^\nu
K_\nu\bigg(\frac {\sqrt{2\nu}r}{\ell}\bigg),\quad r\geq 0,\ \ell >0 .
\end{equation}
Herein, $K_{\nu}$ is the modified Bessel function of the second
kind of order $\nu$ and $\Gamma$ is the gamma function.
The parameter $\nu$ steers the smoothness of the
kernel function. In particular,
the analytic squared-exponential kernel is
retrieved in the limit $\nu\to\infty$. For example, we have
\begin{equation}
\begin{aligned}
k_{1/2}(r)=\exp\bigg(-\frac{r}{\ell}\bigg),
\quad k_{\infty}(r)=\exp\bigg(-\frac{r^2}{2\ell^2}\bigg).
\end{aligned}
\end{equation}
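The closed forms above are easily checked numerically. The following Python sketch evaluates them and, in addition, verifies that inserting the standard closed form $K_{1/2}(z)=\sqrt{\pi/(2z)}\,e^{-z}$ together with $\Gamma(1/2)=\sqrt{\pi}$ into the general Mat\'ern formula \eqref{eq:matkern} indeed reproduces the exponential kernel; the $\nu=3/2$ closed form included below is standard but not stated in the text.

```python
import math

def k_half(r, ell=1.0):
    # Matern kernel with nu = 1/2 (exponential kernel)
    return math.exp(-r / ell)

def k_three_half(r, ell=1.0):
    # standard closed form for nu = 3/2 (not stated in the text)
    s = math.sqrt(3.0) * r / ell
    return (1.0 + s) * math.exp(-s)

def k_inf(r, ell=1.0):
    # limiting squared-exponential (Gaussian) kernel
    return math.exp(-r * r / (2.0 * ell * ell))

def k_matern_half_via_bessel(r, ell=1.0):
    """nu = 1/2 in the general Matern formula, using the closed forms
    K_{1/2}(z) = sqrt(pi / (2 z)) * exp(-z) and Gamma(1/2) = sqrt(pi);
    here sqrt(2 * nu) = 1, so z = r / ell."""
    z = r / ell
    bessel = math.sqrt(math.pi / (2.0 * z)) * math.exp(-z)
    return (2.0 ** 0.5 / math.sqrt(math.pi)) * math.sqrt(z) * bessel
```

All three kernels equal one at $r=0$ and decay monotonically in $r$.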
A positive definite kernel in the sense of Definition~\ref{def:poskernel}
is obtained by considering
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol x}^\prime)\mathrel{\mathrel{\mathop:}=} k_\nu(\|{\boldsymbol x}-{\boldsymbol x}^\prime\|_2).
\]
Given the set of points \(X=\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}\), many
applications require the assembly and the inversion of the
\emph{kernel matrix}
\[
{\boldsymbol K}\mathrel{\mathrel{\mathop:}=}[\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j)]_{i,j=1}^N\in\Rbb^{N\times N}
\]
or an appropriately regularized version
\[
{\boldsymbol K}+\rho{\boldsymbol I},\quad \rho>0,
\]
thereof. If
\(N\) is large, already the assembly and storage of
\({\boldsymbol K}\)
can easily become prohibitive. For the solution of an associated
linear system, the situation is even worse.
Fortunately, the kernel matrix can be compressed
by employing samplets. To this end, the evaluation of
the kernel function at the points ${\boldsymbol x}_i$ and ${\boldsymbol x}_j$
will be denoted by
\[
(\mathcal{K},\delta_{{\boldsymbol x}_i}\otimes\delta_{{\boldsymbol x}_j}
)_{\Omega\times\Omega}\mathrel{\mathrel{\mathop:}=}\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j).
\]
Hence, in view of $V = \{\delta_{{\boldsymbol x}_1},\ldots,\delta_{{\boldsymbol x}_N}\}$,
we may write the kernel matrix as
\[
{\boldsymbol K} = \big[(\mathcal{K},\delta_{{\boldsymbol x}_i}
\otimes\delta_{{\boldsymbol x}_j})_{\Omega\times\Omega}\big]_{i,j=1}^N.
\]
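A minimal Python sketch of the kernel matrix assembly, using the Mat\'ern kernel with $\nu=1/2$ in one spatial dimension. Since the exponential kernel is positive definite on distinct points, the matrix ${\boldsymbol K}+\rho{\boldsymbol I}$ admits a Cholesky factorization, which the sketch verifies directly; this is an illustration of Definition~\ref{def:poskernel}, not the compressed assembly discussed below.

```python
import math

def kernel(x, y, ell=1.0):
    # exponential (Matern nu = 1/2) kernel; any positive definite
    # kernel could be substituted here
    return math.exp(-abs(x - y) / ell)

def kernel_matrix(points, rho=0.0):
    """Assemble K + rho * I for 1D points."""
    n = len(points)
    return [[kernel(points[i], points[j]) + (rho if i == j else 0.0)
             for j in range(n)] for i in range(n)]

def cholesky(A):
    """Plain Cholesky factorization A = L L^T; succeeds exactly when
    A is symmetric positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L
```

Both the assembly and the factorization scale poorly with $N$ ($\mathcal{O}(N^2)$ storage, $\mathcal{O}(N^3)$ factorization cost), which is precisely the motivation for the samplet compression developed in the remainder of this section.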
\subsection{Asymptotically smooth kernels}
The essential ingredient for the samplet compression of
kernel matrices is the \emph{asymptotic smoothness}
property of the kernel
\begin{equation}\label{eq:kernel_estimate}
\bigg|\frac{\partial^{|\boldsymbol\alpha|+|\boldsymbol\beta|}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}
\partial{\boldsymbol y}^{\boldsymbol\beta}} \mathcal{K}({\boldsymbol x},{\boldsymbol y})\bigg|
\le c_\mathcal{K} \frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}
{r^{|\boldsymbol\alpha|+|\boldsymbol\beta|}
\|{\boldsymbol x}-{\boldsymbol y}\|_2^{|\boldsymbol\alpha|+|\boldsymbol\beta|}},\quad
c_\mathcal{K},r>0,
\end{equation}
which is for example satisfied by the Mat\'ern kernels.
Using this estimate, we obtain the following result,
which is the basis for the matrix compression introduced
thereafter.
\begin{lemma}\label{lem:kernel_decay}
Consider two samplets $\sigma_{j,k}$ and $\sigma_{j',k'}$,
exhibiting $q+1$ vanishing moments with supporting
clusters \(\nu\) and \(\nu'\), respectively.
Assume that $\dist(\nu,\nu') > 0$. Then, for kernels
satisfying \eqref{eq:kernel_estimate}, it holds that
\begin{equation}\label{eq:kernel_decay}
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\le
c_\mathcal{K} \frac{\diam(\nu)^{q+1}\diam(\nu')^{q+1}}
{(dr\dist(\nu,\nu'))^{2(q+1)}}
\|{\boldsymbol\omega}_{j,k}\|_{1}\|{\boldsymbol\omega}_{j',k'}\|_{1}.
\end{equation}
\end{lemma}
\begin{proof}
Let ${\boldsymbol x}_0\in\nu$ and ${\boldsymbol y}_0\in\nu'$.
A Taylor expansion of the kernel with respect to
${\boldsymbol x}$ yields
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol y}) = \sum_{|\boldsymbol\alpha|\le q}
\frac{\partial^{|\boldsymbol\alpha|}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}}\mathcal{K}({\boldsymbol x}_0,{\boldsymbol y})
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
+ R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}),
\]
where the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y})$ is given by
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}\big)(1-s)^q\d s.
\end{align*}
Next, we expand the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y})$ with
respect to ${\boldsymbol y}$ and derive
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\sum_{|\boldsymbol\beta|\le q}\frac{({\boldsymbol y}-{\boldsymbol y}_0)^{\boldsymbol\beta}}{\boldsymbol\beta!}\\
&\times\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
\frac{\partial^{|\boldsymbol\beta|}}{\partial{\boldsymbol y}^{\boldsymbol\beta}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}_0\big)(1-s)^q\d s
+ R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}).
\end{align*}
Here, the remainder $R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y})$
is given by
\begin{align*}
&R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}) = (q+1)^2
\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\frac{({\boldsymbol y}-{\boldsymbol y}_0)^{\boldsymbol\beta}}{\boldsymbol\beta!}\\
&\qquad\times\int_0^1\int_0^1\frac{\partial^{2(q+1)}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}\partial{\boldsymbol y}^{\boldsymbol\beta}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}_0+t({\boldsymbol y}-{\boldsymbol y}_0)\big)(1-s)^q(1-t)^q\d t\d s.
\end{align*}
We thus arrive at the decomposition
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol y}) = p_{{\boldsymbol y}}({\boldsymbol x}) + p_{{\boldsymbol x}}({\boldsymbol y})
+ R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}),
\]
where $p_{{\boldsymbol y}}({\boldsymbol x})$ is a polynomial of degree $q$ in ${\boldsymbol x}$,
with coefficients depending on ${\boldsymbol y}$, while $p_{{\boldsymbol x}}({\boldsymbol y})$
is a polynomial of degree $q$ in ${\boldsymbol y}$, with coefficients depending
on ${\boldsymbol x}$. Due to the vanishing moments, we obtain
\[
(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}
=(R_{{\boldsymbol x}_0,{\boldsymbol y}_0},
\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}.
\]
In view of \eqref{eq:kernel_estimate}, we thus find
\begin{align*}
&|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|
= |(R_{{\boldsymbol x}_0,{\boldsymbol y}_0},
\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\\
&\qquad\le c_\mathcal{K} \Bigg(\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}{\boldsymbol\alpha!\boldsymbol\beta!}\Bigg)
\frac{(\|\cdot-{\boldsymbol x}_0\|^{q+1}_2,|\sigma_{j,k}|)_\Omega
(\|\cdot-{\boldsymbol y}_0\|^{q+1}_2,|\sigma_{j',k'}|)_\Omega}{r^{2(q+1)}\dist(\nu,\nu')^{2(q+1)}}.
\end{align*}
Next, we have by means of multinomial coefficients that
\[
(|\boldsymbol\alpha|+|\boldsymbol\beta|)!
={|\boldsymbol\alpha|+|\boldsymbol\beta|\choose |\boldsymbol\beta|}
{|\boldsymbol\alpha|\choose\boldsymbol\alpha}
{|\boldsymbol\beta|\choose\boldsymbol\beta}
\boldsymbol\alpha!\boldsymbol\beta!,
\]
which in turn implies that
\begin{align*}
\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}{\boldsymbol\alpha!\boldsymbol\beta!}
&= {2(q+1)\choose q+1} \sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
{|\boldsymbol\alpha|\choose\boldsymbol\alpha}
{|\boldsymbol\beta|\choose\boldsymbol\beta}\\
&= {2(q+1)\choose q+1} d^{2(q+1)}
\le d^{2(q+1)} 2^{2(q+1)}.
\end{align*}
Moreover, we use
\[
(\|\cdot-{\boldsymbol x}_0\|_2^{q+1},|\sigma_{j,k}|)_\Omega
\le\bigg(\frac{\diam(\nu)}{2}\bigg)^{q+1}\|{\boldsymbol\omega}_{j,k}\|_{1},
\]
and likewise
\[
(\|\cdot-{\boldsymbol y}_0\|_2^{q+1},|\sigma_{j',k'}|)_\Omega
\le\bigg(\frac{\diam(\nu')}{2}\bigg)^{q+1}\|{\boldsymbol\omega}_{j',k'}\|_{1}.
\]
Combining all the estimates, we arrive at the desired
result \eqref{eq:kernel_decay}.
\end{proof}
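The decay predicted by the lemma can be observed numerically. In the Python sketch below, discrete weights with $q+1=3$ vanishing moments on two well-separated 1D clusters yield a far smaller kernel coupling than plain averaging weights; the weights are constructed by Gram--Schmidt orthogonalization against the moment vectors, which serves as an illustration and is not the samplet construction itself.

```python
import math

def orthogonalize(w, moments):
    """Remove the components of w along the span of the given moment
    vectors (classical Gram-Schmidt), so that w gains vanishing
    moments; the result is normalized to unit Euclidean norm."""
    basis = []
    for m in moments:
        v = m[:]
        for b in basis:
            c = sum(x * y for x, y in zip(v, b))
            v = [x - c * y for x, y in zip(v, b)]
        nrm = math.sqrt(sum(x * x for x in v))
        basis.append([x / nrm for x in v])
    for b in basis:
        c = sum(x * y for x, y in zip(w, b))
        w = [x - c * y for x, y in zip(w, b)]
    nrm = math.sqrt(sum(x * x for x in w))
    return [x / nrm for x in w]

def samplet_weights(pts, q):
    """Unit-norm weights orthogonal to all polynomials up to degree q,
    i.e., with q + 1 vanishing moments."""
    w = [(-1.0) ** k for k in range(len(pts))]  # oscillatory seed
    moments = [[x ** a for x in pts] for a in range(q + 1)]
    return orthogonalize(w, moments)

def coupling(pts_a, w_a, pts_b, w_b, kernel):
    """Bilinear form sum_{i,j} w_i w'_j K(x_i, y_j)."""
    return sum(wa * wb * kernel(xa, xb)
               for xa, wa in zip(pts_a, w_a)
               for xb, wb in zip(pts_b, w_b))
```

For two clusters of diameter $0.1$ at distance roughly $2$ and the exponential kernel, the coupling of the vanishing-moment weights is smaller than that of the averaging weights by many orders of magnitude, in line with the $(\diam/\dist)^{q+1}$ factors of the estimate.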
\subsection{Matrix compression}
Lemma~\ref{lem:kernel_decay} immediately suggests
a compression strategy for kernel matrices in
samplet representation. We mention that this compression
differs from the wavelet matrix compression introduced
in \cite{DHS}, since we do not exploit the decay of the
samplet coefficients with respect to the level in case of
smooth data. This enables us to also consider a non-uniform
distribution of the points in $X$. Consequently, we use
the same accuracy on all levels, which is more similar
to the setting in \cite{BCR}.
\begin{theorem}
In the kernel matrix
\[
{\boldsymbol K}^\Sigma\mathrel{\mathrel{\mathop:}=}\big[(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}
\big]_{j,j',k,k'},
\]
set to zero all coefficients which satisfy
\begin{equation}\label{eq:cutoff}
\dist(\nu,\nu')\ge\eta\max\{\diam(\nu),\diam(\nu')\},\quad\eta>0,
\end{equation}
where \(\nu\) is the cluster supporting \(\sigma_{j,k}\) and
\(\nu'\) is the cluster supporting \(\sigma_{j',k'}\), respectively,
and denote the resulting matrix by \({\boldsymbol K}^\Sigma_\varepsilon\).
Then, it holds
\[
\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_\varepsilon\big\|_F
\le c_\mathcal{K} \sqrt{c_{\operatorname{sum}}} {(\eta dr)^{-2(q+1)}}
m_q N\sqrt{\log N}
\]
for some constant \(c_{\operatorname{sum}}>0\),
where \(m_q\) is given by \eqref{eq:mq}.
\end{theorem}
\begin{proof}
We first fix the levels $j$ and $j'$. In view of
\eqref{eq:kernel_decay}, we can estimate any coefficient
which satisfies \eqref{eq:cutoff} by
\begin{align*}
&|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\\
&\qquad\le
c_\mathcal{K} \bigg(\frac{\min\{\diam(\nu),\diam(\nu')\}}
{\max\{\diam(\nu),\diam(\nu')\}}\bigg)^{q+1}
{(\eta dr)^{-2(q+1)}}\|{\boldsymbol\omega}_{j,k}\|_{1}\|{\boldsymbol\omega}_{j',k'}\|_{1}.
\end{align*}
If we next set
\[
\theta_{j,j'}\mathrel{\mathrel{\mathop:}=} \max_{\nu\in\Tcal_j,\nu'\in\Tcal_{j'}}\bigg\{\frac{\min\{\diam(\nu),\diam(\nu')\}}
{\max\{\diam(\nu),\diam(\nu')\}}\bigg\},
\]
then we obtain
\[
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|
\le c_\mathcal{K}\theta_{j,j'}^{q+1}{(\eta dr)^{-2(q+1)}}
\|{\boldsymbol\omega}_{j,k}\|_{1}\|{\boldsymbol\omega}_{j',k'}\|_{1}
\]
for all coefficients such that \eqref{eq:cutoff} holds.
In view of \eqref{eq:ell1-norm} and the fact that there are
at most $m_q$ samplets
per cluster, we arrive at
\[
\sum_{k,k'} \|{\boldsymbol\omega}_{j,k}\|_{1}^2\|{\boldsymbol\omega}_{j',k'}\|_{1}^2
\leq\sum_{k,k'}|\nu|\cdot|\nu'| = m_q^2 N^2.
\]
Thus, for a fixed level-level block, we arrive at the estimate
\begin{align*}
\big\|{\boldsymbol K}^\Sigma_{j,j'}-{\boldsymbol K}^\Sigma_{\varepsilon,j,j'}\big\|_F^2
&\le\sum_{\begin{smallmatrix}k,k':\ \dist(\nu,\nu')\\
\ge\eta\max\{\diam(\nu),\diam(\nu')\}\end{smallmatrix}}
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|^2\\
&\le c_\mathcal{K}^2 \theta_{j,j'}^{2(q+1)} {(\eta dr)^{-4(q+1)}} m_q^2 N^2.
\end{align*}
Finally, summation over all levels yields
\begin{align*}
\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_{\varepsilon}\big\|_F^2
&= \sum_{j,j'}\big\|{\boldsymbol K}^\Sigma_{j,j'}-{\boldsymbol K}^\Sigma_{\varepsilon,j,j'}\big\|_F^2\\
&\le c_\mathcal{K}^2 {(\eta dr)^{-4(q+1)}}m_q^2 N^2\sum_{j,j'} \theta_{j,j'}^{2(q+1)}\\
&\le c_\mathcal{K}^2 c_{\operatorname{sum}} {(\eta dr)^{-4(q+1)}} m_q^2 N^2\log N,
\end{align*}
which is the desired claim.
\end{proof}
\begin{remark}
In case of uniformly distributed points ${\boldsymbol x}_i\in X$,
we have $\big\|{\boldsymbol K}^\Sigma\big\|_F\sim N$. Thus,
in this case we immediately obtain
\[
\frac{\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_\varepsilon\big\|_F}
{\big\|{\boldsymbol K}^\Sigma\big\|_F} \le c_\mathcal{K}
\sqrt{c_{\operatorname{sum}}} {(\eta dr)^{-2(q+1)}} m_q \log N.
\]
\end{remark}
\begin{theorem}
The compressed kernel matrix ${\boldsymbol K}^\Sigma_\varepsilon$
consists of only $\mathcal{O}(m_q^2
N\log N)$ relevant matrix coefficients, provided
that the points in $X$ are uniformly
distributed in $\Omega$.
\end{theorem}
\begin{proof}
We fix $j,j'$ and assume $j\ge j'$. In case of uniformly
distributed points, it holds $\diam(\nu)\sim 2^{-j/d}$.
Hence, for the cluster $\nu_{j',k'}$, there exist only
$\mathcal{O}([2^{j-j'}]^d)$ clusters $\nu_{j,k}$ from
level $j$, which do not satisfy the cut-off criterion
\eqref{eq:cutoff}. Since each cluster contains at most
$m_q$ samplets, we hence arrive at
\[
\sum_{j=0}^J \sum_{j'\le j}m_q^2\big( 2^{j'} 2^{(j-j')}\big)^d
\sim m_q^2 \sum_{j=0}^J j 2^{jd} \sim m_q^2 N\log N,
\]
which implies the assertion.
\end{proof}
\begin{remark}
The chosen cut-off criterion \eqref{eq:cutoff} coincides
with the so-called \emph{admissibility condition} used
by hierarchical matrices. We particularly refer here to
\cite{Boe10}, as we will later on rely on the \(\mathcal{H}^2\)-matrix
method presented there for the fast assembly of the
compressed kernel matrix.
\end{remark}
\subsection{Compressed matrix assembly}
For a given pair of clusters, we can now determine whether the
corresponding entries need to be computed. However, since there are
$\mathcal{O}(N)$ clusters, naively checking the cut-off criterion for
all pairs would still take $\mathcal{O}(N^{2})$ operations. Hence, we
require smarter means to determine the non-negligible cluster pairs.
For this purpose, we first state the transferability of the cut-off criterion
to son clusters, compare \cite{DHS} for a proof.
\begin{lemma}
Let $\nu$ and $\nu'$ be clusters satisfying the cut-off criterion
\eqref{eq:cutoff}. Then, for the son clusters $\nu_{\mathrm{son}}$
of $\nu$ and $\nu_{\mathrm{son}}'$ of $\nu'$, we have
\begin{align*}
\dist(\nu,\nu_{\mathrm{son}}')
&\ge\eta\max\{\diam(\nu),\diam(\nu_{\mathrm{son}}')\},\\
\dist(\nu_{\mathrm{son}},\nu')
&\ge\eta\max\{\diam(\nu_{\mathrm{son}}),\diam(\nu')\},\\
\dist(\nu_{\mathrm{son}},\nu_{\mathrm{son}}')
&\ge\eta\max\{\diam(\nu_{\mathrm{son}}),\diam(\nu_{\mathrm{son}}')\}.
\end{align*}
\end{lemma}
The lemma tells us that we may omit cluster pairs whose father
clusters already satisfy the cut-off criterion. This will be essential for
the assembly of the compressed matrix.
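The resulting recursive block determination can be sketched as follows for 1D dyadic interval clusters with $\eta=1$; admissible pairs are recorded and never refined further, while inadmissible pairs are subdivided until the leaf level is reached. This is a simplified model of the traversal, not the implementation used for the experiments.

```python
def diam(c):
    # a cluster is modeled as an interval (lo, hi)
    return c[1] - c[0]

def dist(c1, c2):
    return max(c1[0] - c2[1], c2[0] - c1[1], 0.0)

def admissible(c1, c2, eta=1.0):
    # cut-off / admissibility criterion
    return dist(c1, c2) >= eta * max(diam(c1), diam(c2))

def split(c):
    mid = (c[0] + c[1]) / 2.0
    return (c[0], mid), (mid, c[1])

def determine_blocks(c1, c2, depth, eta=1.0, blocks=None):
    """Recursive block partition: stop at admissible pairs (their
    entries are discarded in the compression); only inadmissible
    pairs are subdivided, leaf-leaf pairs remain dense."""
    if blocks is None:
        blocks = {"admissible": [], "dense": []}
    if admissible(c1, c2, eta):
        blocks["admissible"].append((c1, c2))
    elif depth == 0:
        blocks["dense"].append((c1, c2))
    else:
        for s1 in split(c1):
            for s2 in split(c2):
                determine_blocks(s1, s2, depth - 1, eta, blocks)
    return blocks
```

By the lemma, no admissible pair is ever subdivided, so the recursion only descends along the near field; the terminal pairs tile the product domain exactly.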
The computation of the compressed kernel matrix
can be sped up further by using
\(\Hcal^2\)-matrix techniques, see
\cite{HB02,Gie01}. Similarly to \cite{AHK14,HKS05}, we
rely on the interpolation based \(\Hcal^2\)-matrix approach for
this purpose. The idea is to approximate the kernel interaction
for sufficiently distant clusters \(\nu\) and \(\nu'\), in the sense
of the admissibility condition \eqref{eq:cutoff}, by polynomial
interpolation.
More precisely, given a suitable set of interpolation
points \(\{{\boldsymbol\xi}_t^\nu\}_t\) for each cluster \(\nu\) with
associated Lagrange polynomials \(\{\mathcal{L}_{t}^{\nu}
({\boldsymbol x})\}_t\), we introduce the interpolation operator
\[
\mathcal{I}^{\nu,\nu'}[\mathcal{K}]({\boldsymbol x}, {\boldsymbol y})
= \sum_{s,t} \mathcal{K}({\boldsymbol\xi}_{s}^{\nu}, {\boldsymbol\xi}_{t}^{\nu'})
\mathcal{L}_{s}^{\nu}({\boldsymbol x}) \mathcal{L}_{t}^{\nu'}({\boldsymbol y})
\]
and approximate an admissible matrix block via
\begin{align*}
{\boldsymbol K}^\Delta_{\nu,\nu'}
&=[(\mathcal{K},\delta_{\boldsymbol x}\otimes\delta_{\boldsymbol y})_{\Omega\times\Omega}]_{{\boldsymbol x}\in\nu,{\boldsymbol y}\in\nu'}\\
&\approx\sum_{s,t} \mathcal{K}({\boldsymbol\xi}_{s}^{\nu}, {\boldsymbol\xi}_{t}^{\nu'})
[(\mathcal{L}_{s}^{\nu},\delta_{\boldsymbol x})_\Omega]_{{\boldsymbol x}\in\nu}
[(\mathcal{L}_{t}^{\nu'},\delta_{\boldsymbol y})_\Omega]_{{\boldsymbol y}\in\nu'}
\mathrel{=\mathrel{\mathop:}}{\boldsymbol V}^{\nu}_\Delta{\boldsymbol S}^{\nu,\nu'}({\boldsymbol V}^{\nu'}_\Delta)^\intercal.
\end{align*}
Herein, the \emph{cluster bases} are given according to
\begin{equation}\label{eq:cluster bases}
{\boldsymbol V}^{\nu}_\Delta\mathrel{\mathrel{\mathop:}=} [(\mathcal{L}_{s}^{\nu},\delta_{\boldsymbol x})_\Omega]_{{\boldsymbol x}\in\nu},\quad
{\boldsymbol V}^{\nu'}_\Delta\mathrel{\mathrel{\mathop:}=}[(\mathcal{L}_{t}^{\nu'},\delta_{\boldsymbol y})_\Omega]_{{\boldsymbol y}\in\nu'},
\end{equation}
while the \emph{coupling matrix} is given by
\(
{\boldsymbol S}^{\nu,\nu'}\mathrel{\mathrel{\mathop:}=}[\mathcal{K}({\boldsymbol\xi}_{s}^{\nu}, {\boldsymbol\xi}_{t}^{\nu'})]_{s,t}.
\)
Directly transforming the cluster bases into their corresponding
samplet representation results in log-linear cost. This can be
avoided by the use of nested cluster bases, as they have been
introduced for \(\Hcal^2\)-matrices. For the sake of simplicity, we
assume from now on that tensor product polynomials of degree
\(p\) are used for the kernel interpolation for all cluster
combinations. As a consequence, the Lagrange polynomials
of a father cluster can be represented exactly by those of its
son clusters. Introducing the \emph{transfer matrices}
\(
{\boldsymbol T}^{\nu_{\mathrm{son}}}
\mathrel{\mathrel{\mathop:}=}[\mathcal{L}_s^\nu({\boldsymbol\xi}_t^{\nu_{\mathrm{son}}})]_{s,t},
\)
there holds
\[
\mathcal{L}_s^\nu({\boldsymbol x})=\sum_t{\boldsymbol T}^{\nu_{\mathrm{son}}}_{s,t}
\mathcal{L}_t^{\nu_{\mathrm{son}}}({\boldsymbol x}),\quad{\boldsymbol x}\in B_{\nu_{\mathrm{son}}}.
\]
Exploiting this relation in the construction of the cluster bases
\eqref{eq:cluster bases} finally leads to
\[
{\boldsymbol V}^{\nu}_\Delta=\begin{bmatrix}
{\boldsymbol V}^{\nu_{\mathrm{son}_1}}_\Delta{\boldsymbol T}^{\nu_{\mathrm{son}_1}}\\
{\boldsymbol V}^{\nu_{\mathrm{son}_2}}_\Delta{\boldsymbol T}^{\nu_{\mathrm{son}_2}}
\end{bmatrix}.
\]
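The transfer relation can be verified numerically in one spatial dimension: since a son cluster's Lagrange basis reproduces every polynomial of degree $p$, the relation holds exactly, not just approximately. The Python sketch below uses Chebyshev interpolation points, a common but here freely chosen node set.

```python
import math

def cheb_points(a, b, p):
    """p + 1 Chebyshev points mapped to the interval [a, b]."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * k / p)
            for k in range(p + 1)]

def lagrange(xi, s, x):
    """Evaluate the s-th Lagrange polynomial for the nodes xi at x."""
    val = 1.0
    for t, xt in enumerate(xi):
        if t != s:
            val *= (x - xt) / (xi[s] - xt)
    return val

p = 3
parent = cheb_points(0.0, 1.0, p)   # father cluster nodes
son = cheb_points(0.0, 0.5, p)      # left son cluster nodes
# transfer matrix T[s][t] = L_s^parent(xi_t^son)
T = [[lagrange(parent, s, xt) for xt in son] for s in range(p + 1)]
```

Evaluating $\mathcal{L}_s^\nu(x)=\sum_t {\boldsymbol T}_{s,t}^{\nu_{\mathrm{son}}}\mathcal{L}_t^{\nu_{\mathrm{son}}}(x)$ at any $x$ in the son interval reproduces the father's Lagrange polynomial up to round-off; moreover, the columns of the transfer matrix sum to one, reflecting the partition of unity of the Lagrange basis.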
Combining this refinement relation with the recursive nature of the
samplet basis results
in the variant of the discrete samplet transform summarized in
Algorithm~\ref{algo:multiscaleClusterBasis}.\bigskip
\begin{algorithm}[H]
\caption{Recursive computation of the multiscale cluster basis}
\label{algo:multiscaleClusterBasis}
\KwData{Cluster tree $\Tcal$, transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu}$, ${\boldsymbol Q}_{j,\Sigma}^{\nu}]$,
nested cluster bases ${\boldsymbol V}_{\Delta}^{\nu}$ for leaf clusters and
transformation matrices ${\boldsymbol T}^{\nu_{\mathrm{son}_1}}$,
${\boldsymbol T}^{\nu_{\mathrm{son}_2}}$ for non-leaf clusters.
}
\KwResult{Multiscale cluster basis matrices ${\boldsymbol V}_{\Phi}^{\nu}$,
${\boldsymbol V}_{\Sigma}^{\nu}$ for all clusters $\nu \in\Tcal$.}
\Begin{
\FuncSty{computeMultiscaleClusterBasis}($X$)\;
}
\end{algorithm}
\begin{function}[H]
\caption{computeMultiscaleClusterBasis($\nu$)}
\Begin{
\uIf{$\nu$ is a leaf cluster}{
store $\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal} {\boldsymbol V}_{\Delta}^{\nu}$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
$\computeMultiscaleClusterBasis(\nu')$
}
store $\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal} \begin{bmatrix}
{\boldsymbol V}^{\nu_{\mathrm{son}_1}}_\Phi{\boldsymbol T}^{\nu_{\mathrm{son}_1}}\\
{\boldsymbol V}^{\nu_{\mathrm{son}_2}}_\Phi{\boldsymbol T}^{\nu_{\mathrm{son}_2}}
\end{bmatrix}$
}
}
\end{function}\medskip
Having the multiscale cluster bases at our disposal, the next step is
the assembly of the compressed kernel matrix. The computation of the
required matrix blocks is exclusively
based on the two refinement relations
\begin{align*}
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}
&=
\begin{bmatrix}
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega} \\
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\end{bmatrix}\\
&=\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]
\end{align*}
and
\begin{align*}
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}
&=\begin{bmatrix}
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega} \\
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\end{bmatrix}\\
&=
\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^\intercal
\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}.
\end{align*}
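Both relations express the same fact: the blocks of a father cluster arise from applying the orthogonal transforms $\big[{\boldsymbol Q}_{j,\Phi},{\boldsymbol Q}_{j,\Sigma}\big]$ to the stacked scaling-function blocks of the sons. The following pure-Python sanity check, on a hypothetical four-point example with the Haar transform standing in for the orthogonal transforms, illustrates that combining the sons' scaling--scaling blocks recursively reproduces the directly transformed coarse block:

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

# Four points, two son clusters {x0, x1} and {x2, x3} of a single father cluster.
pts = [0.0, 0.3, 0.7, 1.0]
K = [[math.exp(-abs(x - y)) for y in pts] for x in pts]

s = 1.0 / math.sqrt(2.0)
Q = [[s, s], [s, -s]]   # 2x2 transform: first row scaling function, second row samplet

# Direct computation: full multiscale transform T (Haar basis on four points).
T = [[0.5, 0.5, 0.5, 0.5],    # father scaling function
     [0.5, 0.5, -0.5, -0.5],  # father samplet
     [s, -s, 0.0, 0.0],       # samplet of son 1
     [0.0, 0.0, s, -s]]       # samplet of son 2
M = matmul(matmul(T, K), transpose(T))

# Recursive computation following the refinement relations: transform the four
# 2x2 leaf blocks, keep only their scaling-scaling entries, and combine them
# with the father's transform Q from both sides.
def leaf_block(i, j):
    B = [row[2 * j: 2 * j + 2] for row in K[2 * i: 2 * i + 2]]
    return matmul(matmul(Q, B), transpose(Q))

G = [[leaf_block(i, j)[0][0] for j in range(2)] for i in range(2)]
C = matmul(matmul(Q, G), transpose(Q))

# C agrees with the upper-left 2x2 block of the directly transformed matrix.
err = max(abs(C[i][j] - M[i][j]) for i in range(2) for j in range(2))
```

The check succeeds up to round-off, mirroring the fact that the recursive assembly is exact on inadmissible blocks rather than an approximation.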
We obtain the following function, which is the key ingredient for the computation
of the compressed kernel matrix.\bigskip
\begin{function}[H]
\caption{recursivelyDetermineBlock($\nu$, $\nu'$)}
\KwResult{Approximation of the block \scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}.}
\Begin{
\uIf{$(\nu, \nu')$ is admissible}{
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}
{\boldsymbol S}^{\nu,\nu'} \big[
({\boldsymbol V}_{\Phi}^{\nu'})^\intercal,
({\boldsymbol V}_{\Sigma}^{\nu'})^\intercal
\big]$}}
}
\uElseIf{$\nu$ and $\nu'$ are leaf clusters}{
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal}{\boldsymbol K}_{\nu,\nu'}^{\Delta}
\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]$}}
}
\uElseIf{$\nu'$ is not a leaf cluster and $\nu$ is a leaf cluster}{
\For{all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$} $
\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu, \nu_{\mathrm{son}}')$
}
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]$}}
}
\uElseIf{$\nu$ is not a leaf cluster and $\nu'$ is a leaf cluster}{
\For{all sons \(\nu_{\mathrm{son}}\) of \(\nu\)}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$} $\mathrel{\mathrel{\mathop:}=} \recursivelyDetermineBlock(\nu_{\mathrm{son}}, \nu')$
}
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}$}.
}
}
\Else{
\For{all sons $\nu_{\mathrm{son}}$ of $\nu$ {\bf and}
all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu_{\mathrm{son}},
\nu_{\mathrm{son}}')$
}
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{\Phi}^{\nu},
{\boldsymbol Q}_{\Sigma}^{\nu}\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}
\end{bmatrix} \big[{\boldsymbol Q}_{\Phi}^{\nu'},
{\boldsymbol Q}_{\Sigma}^{\nu'}\big]$}}
}
}
\end{function}\bigskip
Now, in order to assemble the compressed kernel matrix, we require
two nested recursive traversals of the cluster tree, both performed
in a depth-first manner. Algorithm~\ref{algo:h2Wavelet}
first computes the lower right matrix block and advances from bottom
to top and from right to left. To this end, the two recursive
functions \texttt{setupColumn} and \texttt{setupRow} are introduced.\bigskip
\begin{algorithm}[H]
\caption{Computation of the compressed kernel matrix}
\label{algo:h2Wavelet}
\KwData{Cluster tree $\Tcal$, multiscale cluster bases ${\boldsymbol V}_{\Phi}^{\nu}$, ${\boldsymbol V}_{\Sigma}^{\nu}$
and transformations $[{\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Sparse matrix ${\boldsymbol K}^\Sigma_\varepsilon$}
\Begin{
\FuncSty{setupColumn}($X$)\;
store the remaining blocks
${\boldsymbol K}^\Sigma_{\varepsilon,\nu,X}$ for \(\nu\in\Tcal\setminus\{X\}\)
in ${\boldsymbol K}^\Sigma_\varepsilon$ (they have already been computed by
earlier calls to \FuncSty{recursivelyDetermineBlock})
}
\end{algorithm}\bigskip
The purpose of the function \texttt{setupColumn} is to
recursively traverse the column cluster tree, i.e.\ the
cluster tree associated to the columns of the matrix.
Before returning, each instance of \texttt{setupColumn}
calls the function \texttt{setupRow}, which performs the
actual assembly of the compressed matrix.\bigskip
\begin{function}[H]
\caption{setupColumn($\nu'$)}
\Begin{
\For{all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
$\setupColumn(\nu_{\mathrm{son}}')$
}
store ${\boldsymbol K}^\Sigma_{\varepsilon,X,\nu'}\mathrel{\mathrel{\mathop:}=}
\FuncSty{setupRow}(X, \nu')$
in ${\boldsymbol K}^\Sigma_{\varepsilon}$
}
\end{function}\bigskip
For a given column cluster \(\nu'\), the function \texttt{setupRow}
recursively traverses the row cluster tree, i.e.\
the cluster tree associated to the rows of the matrix, and
assembles the corresponding column of the compressed matrix.
The function reuses the already computed blocks to the right of the column
under consideration and blocks at the bottom of the very same
column.\bigskip
\begin{function}[H]
\caption{setupRow($\nu$, $\nu'$)}
\Begin{
\uIf{$\nu$ is not a leaf}{
\For{all sons \(\nu_{\mathrm{son}}\) of \(\nu\)}{
\uIf{\(\nu_{\mathrm{son}}\) and \(\nu'\) are not admissible}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=} \setupRow(\nu_{\mathrm{son}}, \nu')$
}
\Else{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu_{\mathrm{son}}, \nu')$}
}
\scalebox{1}{$
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{\Phi}^{\nu},
{\boldsymbol Q}_{\Sigma}^{\nu}\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}$
}
}
\Else{
\uIf{$\nu'$ is a leaf cluster}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu, \nu')$
}
\Else{
\For{all sons \(\nu_{\mathrm{son}}'\) of \(\nu'\)}{
\uIf{\(\nu\) and \(\nu_{\mathrm{son}}'\) are not admissible}{
load already computed block \scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}
}
\Else{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=} \recursivelyDetermineBlock(\nu, \nu_{\mathrm{son}}')$
}
}
\scalebox{1}{
$\begin{bmatrix}{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{\Phi}^{\nu'},
{\boldsymbol Q}_{\Sigma}^{\nu'}\big]$}
}
}
store ${\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}$ as part of ${\boldsymbol K}^\Sigma_\varepsilon$
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}}
}
\end{function}
\begin{remark}
Algorithm~\ref{algo:h2Wavelet} has a cost of \(\mathcal{O}(N\log N)\)
and requires an additional storage of \(\mathcal{O}(N\log N)\)
if all stored blocks are directly released when they are not required
anymore. We refer to \cite{AHK14} for all the details.
\end{remark}
\section{Numerical results I\!I}\label{sec:Num2}
All computations in this section have been performed on a single node
with two Intel Xeon E5-2650 v3 @2.30GHz CPUs and up to 512GB
of main memory\footnote{The full specifications can be found at
\texttt{https://www.euler.usi.ch/en/research/resources}.}. In order to obtain
consistent timings, only a single core was used for all computations.
\subsection*{Benchmark problem}
To benchmark the compression of kernel matrices, we consider
the exponential kernel
\[
k({\boldsymbol x},{\boldsymbol y})=e^{-10\frac{\|{\boldsymbol x}-{\boldsymbol y}\|_2}{\sqrt{d}}}
\]
evaluated at an increasing number of uniformly distributed
random sample points
in the hypercube \([-1,1]^d\) for \(d=1,2,3\). As a measure of sparsity,
we introduce the \emph{average number of nonzeros per row}
\[
\operatorname{anz}({\boldsymbol A})\mathrel{\mathrel{\mathop:}=}\frac{\operatorname{nnz}({\boldsymbol A})}{N},\quad
{\boldsymbol A}\in\Rbb^{N\times N},
\]
where \(\operatorname{nnz}({\boldsymbol A})\) is the number of nonzero entries of
\({\boldsymbol A}\). Besides the compression, we also report the fill-in generated
by the Cholesky factorization in combination with the nested dissection
reordering from \cite{KK98}. For the reordering and the Cholesky
factorization, we rely on \textsc{Matlab} R2020a\footnote{Version 9.8.0.1396136,
The MathWorks Inc., Natick, Massachusetts, 2020.} while the
samplet compression is implemented in \texttt{C++11} using the
\texttt{Eigen} template library\footnote{\texttt{https://eigen.tuxfamily.org/}}
for linear algebra operations. For the computations, we consider
a polynomial degree of 3 for the \(\Hcal^2\)-matrix representation
and \(q+1=3\) vanishing moments for the samplets. In addition,
we have performed a thresholding of the computed matrix
coefficients that were smaller than \(\varepsilon=10^{-3}\).
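For illustration, the following pure-Python sketch mimics this setup at a hypothetical small sample size: it evaluates the exponential kernel at uniformly distributed random points in \([-1,1]^d\) and computes the average number of entries per row exceeding a threshold. (Here the threshold is applied directly to the raw kernel matrix for simplicity; in the experiments, the thresholding acts on the samplet-compressed matrix instead.)

```python
import math, random

def exp_kernel(x, y, d):
    r = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-10.0 * r / math.sqrt(d))

def anz(A, eps):
    # average number of entries per row with magnitude above eps
    return sum(1 for row in A for a in row if abs(a) > eps) / len(A)

random.seed(0)
d, N, eps = 2, 200, 1e-3
pts = [[random.uniform(-1.0, 1.0) for _ in range(d)] for _ in range(N)]
K = [[exp_kernel(x, y, d) for y in pts] for x in pts]
print(anz(K, eps))  # strictly less than N: distant point pairs fall below eps
```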
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{loglogaxis}[width=0.42\textwidth,grid=both, ymin= 0, ymax = 1e5, xmin = 256, xmax =1.2e6,
legend style={legend pos=south east,font=\small}, ylabel={\small wall time}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle]
table[x=npts,y=ctim]{./Results/matlabLogger1.txt};
\label{pgfplots:plot1D}
\addplot[line width=0.7pt,color=darkgreen,mark=square]
table[x=npts,y=ctim]{./Results/matlabLogger2.txt};
\label{pgfplots:plot2D}
\addplot[line width=0.7pt,color=red,mark=o]
table[x=npts,y=ctim]{./Results/matlabLogger3.txt};
\label{pgfplots:plot3D}
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)^2}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)^3}]{./Results/matlabLogger1.txt};
\label{pgfplots:asymps}
\end{loglogaxis}
\begin{loglogaxis}[xshift=0.405\textwidth,width=0.42\textwidth,grid=both, ymin= 0, ymax = 2e3, xmin = 256, xmax =1.2e6, ytick={1e1, 1e2, 1e3, 1e4},
legend style={legend pos=south east,font=\small}, ylabel={\small $\operatorname{anz}({\boldsymbol K}^\Sigma_\varepsilon)$}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle] table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=darkgreen,mark=square] table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger2.txt};
\addplot[line width=0.7pt,color=red,mark=o] table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger3.txt};
\end{loglogaxis}
\matrix[
matrix of nodes,
anchor=north west,
draw,
inner sep=0.1em,
column 1/.style={nodes={anchor=center}},
column 2/.style={nodes={anchor=west},font=\strut},
]
at([xshift=0.02\textwidth]current axis.north east){
\ref{pgfplots:plot1D}& \(d=1\)\\
\ref{pgfplots:plot2D}& \(d=2\)\\
\ref{pgfplots:plot3D}& \(d=3\)\\
\ref{pgfplots:asymps}& \(N\!\log^\alpha\!\! N\)\\};
\end{tikzpicture}
\caption{\label{fig:compTimesNNZ}Assembly times (left) and
average numbers of nonzeros per row (right) versus the number
of sample points $N$ in case of the exponential kernel matrix.}
\end{center}
\end{figure}
The left-hand side of Figure~\ref{fig:compTimesNNZ} shows
the wall time for the assembly of the compressed kernel matrices.
The different dashed lines indicate the asymptotics \(N\log^\alpha N\)
for \(\alpha=0,1,2,3\). It can be seen that, for an increasing number
\(N\) of points and the dimensions \(d=1,2,3\) under
consideration, all computation times approach the expected rate
of \(N\log N\). The right-hand side of Figure~\ref{fig:compTimesNNZ}
shows the average number of nonzeros per row for an increasing number
\(N\) of points. Except for the case of \(d=1\), where this number even
decreases, it becomes constant as expected.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{loglogaxis}[width=0.42\textwidth,grid=both, ymin= 0, ymax = 3e4, xmin = 500, xmax =1.2e6,
legend style={legend pos=south east,font=\small}, ylabel={\small wall time}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle]
table[x=npts,y=Ltim]{./Results/matlabLogger1.txt};
\label{pgfplots:plot1D1}
\addplot[line width=0.7pt,color=darkgreen,mark=square]
table[x=npts,y=Ltim]{./Results/matlabLogger2.txt};
\label{pgfplots:plot2D1}
\addplot[line width=0.7pt,color=red,mark=o]
table[x=npts,y=Ltim]{./Results/matlabLogger3.txt};
\label{pgfplots:plot3D1}
\addplot[line width=0.7pt,color=black,dashed]
table[x=npts,y expr={0.7e-6 * x^1.5}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black,dashed]
table[x=npts,y expr={0.1e-6 * x^2}]{./Results/matlabLogger1.txt};
\label{pgfplots:asymps1}
\end{loglogaxis}
\begin{loglogaxis}[xshift=0.405\textwidth,width=0.42\textwidth,grid=both, ymin= 0, ymax = 4e4, xmin = 256, xmax =1.2e6,ytick={1e1, 1e2, 1e3, 1e4},
legend style={legend pos=south east,font=\small}, ylabel={\small $\operatorname{anz}({\boldsymbol L})$}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle] table[x=npts,
y = nzL]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=darkgreen,mark=square] table[x=npts,
y = nzL]{./Results/matlabLogger2.txt};
\addplot[line width=0.7pt,color=red,mark=o] table[x=npts,
y = nzL]{./Results/matlabLogger3.txt};
\end{loglogaxis}
\matrix[
matrix of nodes,
anchor=north west,
draw,
inner sep=0.1em,
column 1/.style={nodes={anchor=center}},
column 2/.style={nodes={anchor=west},font=\strut},
]
at([xshift=0.02\textwidth]current axis.north east){
\ref{pgfplots:plot1D1}& \(d=1\)\\
\ref{pgfplots:plot2D1}& \(d=2\)\\
\ref{pgfplots:plot3D1}& \(d=3\)\\
\ref{pgfplots:asymps1}& \(N^{\frac{3}{2}}\), \(N^2\)\\};
\end{tikzpicture}
\caption{\label{fig:cholTimesNNZ}Computation times for the
Cholesky factorization (left) and average numbers of nonzeros
per row for the Cholesky factor (right) versus the number
of sample points $N$ in case of the exponential kernel matrix.}
\end{center}
\end{figure}
Next, we examine the Cholesky factorization of the compressed
kernel matrix. As the largest eigenvalue of the kernel matrix
grows proportionally to the number \(N\) of points,
while the smallest eigenvalue is
given by the ridge parameter, the condition number grows with \(N\) as well.
Hence, to obtain a constant condition number for increasing \(N\),
the ridge parameter needs to be adjusted accordingly.
However, as we are only interested in the generated fill-in and
the computation times,
we neglect this fact and just fix the ridge parameter to
\(\rho=1\) for all considered \(N\) and \(d=1,2,3\).
The obtained results are found in
Figure~\ref{fig:cholTimesNNZ}. Herein, on the left-hand
side, the wall times for the Cholesky factorization of
the reordered matrix are found. For \(d=1\) the behavior is a bit peculiar as
the average number of nonzeros per row decreases when
the number \(N\) of points increases. This indicates that the kernel
function is already fully resolved up to the threshold parameter on
the coarser levels. For \(d=2\), the observed rate is slightly better than
the expected one of \(N^{\frac{3}{2}}\) for the Cholesky factorization, while the scaling
is approximately like \(N^2\) for \(d=3\). On the right-hand side of the
same figure, it can be seen that the fill-in remains rather moderate.
A visualization of the matrix patterns for the matrix
\({\boldsymbol K}^\Sigma_\varepsilon+\rho{\boldsymbol I}\),
the reordered matrix and the Cholesky factor for \(N=131\,072\) points is
shown in Figure~\ref{fig:patterns}. Each dot corresponds to a block of
\(256\times 256\) matrix entries and its intensity indicates the number
of nonzero entries, where darker blocks contain more entries than lighter blocks.
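The stated growth of the largest eigenvalue can be observed directly with a few lines of pure Python (power iteration on hypothetical small sample sizes; the one-dimensional kernel and scaling are chosen for illustration only):

```python
import math, random

def lambda_max(K, iters=100):
    # power iteration for the largest eigenvalue of a symmetric positive matrix
    n = len(K)
    v = [1.0 / math.sqrt(n)] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

def kernel_matrix(N):
    pts = [random.uniform(-1.0, 1.0) for _ in range(N)]
    return [[math.exp(-10.0 * abs(x - y)) for y in pts] for x in pts]

random.seed(1)
l1, l2 = lambda_max(kernel_matrix(50)), lambda_max(kernel_matrix(200))
# quadrupling the number of points roughly quadruples the largest eigenvalue
print(l2 / l1)
```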
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Kmat_1D.eps}};
\draw(4,4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/KmatND_1D.eps}};
\draw(8,4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Lmat_1D.eps}};
\draw(4,6.6) node {$d=1$};
\draw(0,0) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Kmat_2D.eps}};
\draw(4,0) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/KmatND_2D.eps}};
\draw(8,0) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Lmat_2D.eps}};
\draw(4,2.1) node {$d=2$};
\draw(0,-4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Kmat_3D.eps}};
\draw(4,-4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/KmatND_3D.eps}};
\draw(8,-4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Lmat_3D.eps}};
\draw(4,-2.4) node {$d=3$};
\end{tikzpicture}
\caption{\label{fig:patterns}Sparsity pattern of \({\boldsymbol K}^\Sigma_\varepsilon+\rho{\boldsymbol I}\) (left),
the reordered matrices (middle) and the Cholesky factors \({\boldsymbol L}\) (right)
for \(d=1,2,3\) and \(N=131\,072\).}
\end{center}
\end{figure}
\subsection*{Simulation of a Gaussian random field}
As our last example, we consider a Gaussian random field evaluated at
100\,000 randomly chosen points at the surface of the Stanford bunny.
As before, the Stanford bunny has been rescaled to have a diameter of 2.
In order to demonstrate that our approach works also for higher dimensions,
the Stanford bunny has been embedded into \(\mathbb{R}^4\) and randomly
rotated to prevent axis-aligned bounding boxes. The polynomial degree for
the \(\Hcal^2\)-matrix representation is set to 3 as before and likewise
we consider \(q+1=3\) vanishing moments. The covariance
function is given by the exponential kernel
\[
k({\boldsymbol x},{\boldsymbol y})=e^{-25\|{\boldsymbol x}-{\boldsymbol y}\|_2}.
\]
Moreover, we discard all computed matrix entries which are
below the threshold of \(\varepsilon=10^{-6}\).
The ridge
parameter is set to \(\rho=10^{-2}\).
The compressed covariance matrix exhibits
\(\operatorname{anz}({\boldsymbol K}^\Sigma_\varepsilon)=5985\)
nonzero
matrix entries per row on average, while the corresponding Cholesky
factor exhibits \(\operatorname{anz}({\boldsymbol L})=12\,010\) nonzero
matrix entries per row on average. This is comparable to the
benchmark case on the hypercube for \(d=3\).
Having the Cholesky factor \({\boldsymbol L}\) at hand,
the computation of a realization of the
Gaussian random field is extremely fast, as it only requires
a simple sparse matrix-vector multiplication of \({\boldsymbol L}\)
by a Gaussian random vector and an inverse samplet transform.
Four different realizations of the random field projected
to \(\mathbb{R}^3\) are shown in Figure~\ref{fig:GRF}.
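A minimal sketch of this sampling step in pure Python, with a hypothetical small one-dimensional point set and a dense Cholesky factorization standing in for the sparse factorization of the compressed matrix:

```python
import math, random

def cholesky(A):
    # lower-triangular L with A = L L^T (A symmetric positive definite)
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

random.seed(2)
N, rho = 50, 1e-2
pts = [i / (N - 1.0) for i in range(N)]
K = [[math.exp(-25.0 * abs(x - y)) for y in pts] for x in pts]
for i in range(N):
    K[i][i] += rho                      # ridge regularization
L = cholesky(K)
z = [random.gauss(0.0, 1.0) for _ in range(N)]
# one realization of the field at pts: multiply the factor with a normal vector
sample = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(N)]
```

The factorization is computed once; each further realization costs only one matrix-vector product.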
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw (0,5) node {\includegraphics[scale=0.12,clip, trim= 1000 250 1000 300]{./Results/bunnyField1.png}};
\draw (5,5) node {\includegraphics[scale=0.12,clip, trim= 1000 250 1000 300]{./Results/bunnyField2.png}};
\draw (0,0) node {\includegraphics[scale=0.12,clip, trim= 1000 250 1000 300]{./Results/bunnyField3.png}};
\draw (5,0) node {\includegraphics[scale=0.12,clip, trim= 1000 250 1000 300]{./Results/bunnyField4.png}};
\draw (8.5,2.5) node {\includegraphics[scale=0.2,clip, trim= 2430 400 300 500]{./Results/bunnyField4.png}};
\end{tikzpicture}
\caption{\label{fig:GRF}Four different realizations of a Gaussian
random field based on an exponential covariance kernel.}
\end{center}
\end{figure}
\section{Conclusion}\label{sec:Conclusion}
Samplets provide a new methodology for the analysis
of large data sets. They are easy to construct and discrete
data can be transformed into the samplet basis in linear cost.
In our construction, we deliberately left out the discussion
of a level-dependent compression of the given data, as it
is known from wavelet analysis, in favor of a robust error
analysis. We emphasize, however, that, under the assumption
of uniformly distributed points, different norms can be
incorporated, allowing for the construction of band-pass
filters and level dependent thresholding. In this situation,
also an improved samplet matrix compression is possible
such that a fixed number of vanishing moments is sufficient
to achieve a precision proportional to the fill distance with
log-linear cost.
Besides data compression, detection of singularities
and adaptivity, we have demonstrated how
samplets can be employed for the compression of kernel
matrices to obtain an essentially sparse matrix.
Having a sparse representation of the kernel matrix,
algebraic operations, such as matrix vector multiplications
can considerably be sped up. Moreover, in combination
with a fill-in reducing reordering, the factorization of
the compressed kernel matrices becomes computationally
feasible, which allows for the fast application of the
inverse kernel matrix on the one hand and the efficient
solution of linear systems involving the kernel matrix
on the other hand. The numerical results, featuring about
\(10^6\) data points in up to four dimensions, impressively
demonstrate the capabilities of samplets.
A straightforward future research direction is
the incorporation of different clustering strategies, such as
manifold aware clustering, to optimally resolve lower dimensional
manifolds in high dimensional data.
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
Wavelet techniques have a long standing history in the field of data
science. Applications comprise signal processing, image analysis and
machine learning, see for instance
\cite{Chui,Dahmen,Daubechies,Mallat,Mallat2016}
and the references therein. Assuming a signal generated by some
function, the pivotal idea of wavelet techniques is the splitting of
this function into its respective contributions with respect to a
hierarchy of scales. Such a multiscale ansatz starts from an
approximation on a relatively coarse scale and successively resolves
details at the finer scales. Hence, compression and adaptive
representation are inherently built into this ansatz. The transformation
of a given signal into its wavelet representation and the inverse
transformation can be performed with linear cost in terms of the degrees
of freedom.
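For the classical, structured setting, this linear cost is easy to see in code: the orthonormal Haar transform below spends \(N + N/2 + N/4 + \dots < 2N\) operations in total (a minimal sketch for signal lengths that are powers of two, not the samplet transform introduced later):

```python
import math

def haar_forward(signal):
    # orthonormal Haar transform; total work N + N/2 + ... < 2N operations
    s = 1.0 / math.sqrt(2.0)
    data, out = list(signal), []
    while len(data) > 1:
        avg = [s * (a + b) for a, b in zip(data[0::2], data[1::2])]
        det = [s * (a - b) for a, b in zip(data[0::2], data[1::2])]
        data, out = avg, det + out      # coarse averages carry on, details are kept
    return data + out                   # [coarsest average, coarse ... fine details]

def haar_inverse(coeffs):
    s = 1.0 / math.sqrt(2.0)
    data, rest = coeffs[:1], coeffs[1:]
    while rest:
        det, rest = rest[:len(data)], rest[len(data):]
        data = [y for a, d in zip(data, det) for y in (s * (a + d), s * (a - d))]
    return data

sig = [1.0, 3.0, -2.0, 0.5, 4.0, 4.0, 0.0, 7.0]
assert max(abs(a - b) for a, b in zip(sig, haar_inverse(haar_forward(sig)))) < 1e-12
```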
Classically, wavelets are constructed by refinement relations and
therefore require a sequence of nested approximation spaces which are
copies of each other, except for a different scaling. This restricts the
concept of wavelets to structured data. Some adaptation of the general
principle is possible in order to treat intervals, bounded domains and
surfaces, compare \cite{DKU,Quak,HS,STE} for example. The seminal work
\cite{TW03} by Tausch and White overcomes this obstruction by
constructing wavelets as suitable linear combinations of functions at a
given fine scale. In particular, the stability of the resulting basis,
which is essential for numerical algorithms, is guaranteed by
orthonormality.
In this article, we take the concept of wavelets to the next level and
consider discrete data. To this end, we modify the construction of
Tausch and White accordingly and construct a multiscale basis which
consists of localized and discrete signed measures. Inspired by the term
wavelet, we call such signed measures \emph{samplets}. Samplets can be
constructed such that their associated measure integrals vanish for
polynomial integrands. If this is the case for all polynomials of total
degree less or equal than \(q\), we say that the samplets have
\emph{vanishing moments} of order $q+1$. We remark that lowest order
samplets, i.e.\ \(q=0\), have been considered earlier for data
compression in \cite{RE11}. Another concept for constructing multiscale
bases on data sets are \emph{diffusion wavelets}, which employ a
diffusion operator to construct the multiscale hierarchy, see
\cite{CM06}. In contrast to diffusion wavelets, however, the
construction of samplets is solely based on discrete structures and can
always be performed with linear cost for a balanced cluster tree, even
for non-uniformly distributed data.
When representing discrete data by samplets, the vanishing
moments entail a fast decay of the corresponding samplet coefficients
with respect to the support size if the data are smooth. This
straightforwardly enables data compression. In contrast, non-smooth
regions in the data are indicated by large samplet coefficients. This,
in turn, enables singularity detection and extraction. Furthermore, the
construction of samplets is not limited to the use of polynomials.
Indeed, it is easily possible to adapt the construction to other
primitives with different desired properties.
The second application of samplets we consider is compression of kernel
matrices, as they arise in kernel based machine learning and scattered
data approximation, compare \cite{Fasshauer2007,HSS08,Rasmussen2006,%
Schaback2006,Wendland2004,Williams1998}. Kernel matrices are typically
densely populated, since the underlying kernels are nonlocal.
Nonetheless, these kernels are usually \emph{asymptotically smooth},
meaning that they behave like smooth functions apart from the diagonal.
A discretization of an asymptotically smooth kernel with respect to a
samplet basis with vanishing moments results in quasi-sparse kernel
matrices, which means that they can be compressed such that only a
sparse matrix remains, compare \cite{BCR,DHS,DPS,PS,SCHN}. In particular,
it has been demonstrated in \cite{HM} that nested dissection, see
\cite{Geo73,LRT79}, is applicable in order to obtain a fill-in reducing
reordering of the matrix. This reordering in turn allows for the rapid
factorization of the system matrix by the Cholesky factorization without
introducing additional errors.
The asymptotic smoothness of the kernels is also exploited by cluster
methods, like the fast multipole method, see \cite{GR,RO,YBZ04} and
particularly \cite{MXTY+2015} for high-dimensional data. However,
these methods do not allow for a direct and exact factorization, which
is, for example, advantageous for the simulation of Gaussian
random fields. A further approach, which is more in line with the present
work, is the use of \emph{gamblets}, see \cite{Owh17}, for the
compression of the kernel matrix, cp.\ \cite{SSO21}. Different from the
discrete construction of samplets with vanishing
moments, the construction of gamblets is adapted to some underlying
pseudo-differential operator in order to obtain basis functions with
localized supports, while localized supports are automatically obtained
by the samplet construction.
As samplets are directly constructed with respect to a discrete data
set, their applications are manifold. Within this article, we
particularly consider time-series data, image data, kernel matrix
representation and the simulation of Gaussian random fields as examples.
We remark, however, that we do not claim to have invented a new method for
high-dimensional data approximation. The current construction is based
on total degree polynomials and is hence not dimension robust, thus
limited to data of moderate dimension. Even so, we believe that samplets
provide most of the advantages of other approaches for scattered data,
while being easy to implement. In particular, most of the algorithms
available for wavelets with vanishing moments are
transferable.
The rest of this article is organized as follows. In
Section~\ref{section:Samplets}, the concept of samplets is introduced.
The subsequent Section~\ref{sct:construction} is devoted to the actual
construction of samplets and to their properties. The change of basis
by means of the discrete samplet transform is the topic of
Section~\ref{sec:FST}. In Section \ref{sec:Num1}, we demonstrate the
capabilities of samplets for data compression and smoothing for data in
one, two and three dimensions. Section~\ref{sec:kernelCompression} deals
with the samplet compression of kernel matrices. In particular, we also
develop an interpolation based \(\Hcal^2\)-matrix approach in order to
efficiently assemble the compressed kernel matrix. Corresponding
numerical results are then presented in Section \ref{sec:Num2} for up
to four dimensions. Finally, in Section~\ref{sec:Conclusion}, we state
concluding remarks.
\section{Samplets}
\label{section:Samplets}
Let \(X\mathrel{\mathrel{\mathop:}=}\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}\subset\Omega\)
denote a set of points within some region \(\Omega\subset\Rbb^d\).
Associated to each point \({\boldsymbol x}_i\), we introduce
the Dirac measure
\[
\delta_{{\boldsymbol x}_i}({\boldsymbol x})\mathrel{\mathrel{\mathop:}=}
\begin{cases}
1,&\text{if }{\boldsymbol x}={\boldsymbol x}_i\\
0,&\text{otherwise}.
\end{cases}
\]
With a slight abuse of notation, we also introduce the
point evaluation functional
\[
(f,\delta_{{\boldsymbol x}_i})_\Omega=\int_\Omega
f({\boldsymbol x})\delta_{{\boldsymbol x}_i}({\boldsymbol x})\d{\boldsymbol x}\mathrel{\mathrel{\mathop:}=}
\int_{\Omega}f({\boldsymbol x})\delta_{{\boldsymbol x}_i}(\d{\boldsymbol x})
=f({\boldsymbol x}_i),
\]
where $f\in C(\Omega)$ is a continuous function.
Next, we define the space
\(V\mathrel{\mathrel{\mathop:}=}\spn\{\delta_{{\boldsymbol x}_1},\ldots,\delta_{{\boldsymbol x}_N}\}\)
as the \(N\)-dimensional vector space
of all discrete and finite signed measures supported at the
points in \(X\).
An inner product on \(V\) is defined by
\[
\langle u,v\rangle_V\mathrel{\mathrel{\mathop:}=}\sum_{i=1}^N u_iv_i,\quad\text{where }
u=\sum_{i=1}^Nu_i\delta_{{\boldsymbol x}_i},\ v=\sum_{i=1}^Nv_i\delta_{{\boldsymbol x}_i}.
\]
Indeed, the space \(V\) is isometrically isomorphic to \(\Rbb^N\)
endowed with the canonical inner product.
Similar to the
idea of a multiresolution analysis in the construction of
wavelets, we introduce the spaces \(V_j\mathrel{\mathrel{\mathop:}=}\spn{\boldsymbol \Phi_j}\), where
\[
{\boldsymbol\Phi_j}\mathrel{\mathrel{\mathop:}=}\{\varphi_{j,k}:k\in\Delta_j\}.
\]
Here, $\Delta_j$ denotes a suitable
index set with cardinality $|\Delta_j|=\dim V_j$ and
\(j\in\Nbb\) is referred to as \emph{level}.
Moreover, each basis function \(\varphi_{j,k}\) is a linear
combination of Dirac measures
\(\delta_{{\boldsymbol x}_{i_1}},\ldots,\delta_{{\boldsymbol x}_{i_p}}\)
such that
\[
\langle \varphi_{j,k},\varphi_{j,k'}\rangle_V=0\quad\text{for }k\neq k'
\]
and, in case of uniformly distributed points,
it holds
\[
\diam(\supp\varphi_{j,k})\mathrel{\mathrel{\mathop:}=}
\diam(\{{\boldsymbol x}_{i_1},\ldots,{\boldsymbol x}_{i_p}\})\sim 2^{-j/d}.
\]
For the sake of notational convenience, we shall identify bases
by row vectors,
such that, for ${\boldsymbol v}_j
= [v_{j,k}]_{k\in\Delta_j}$, the corresponding measure
can simply be written as a dot product according to
\[
v_j = \mathbf\Phi_j{\boldsymbol v}_j=\sum_{k\in\Delta_j} v_{j,k}\varphi_{j,k}.
\]
Rather than using the multiresolution
analysis corresponding to the hierarchy
\[
V_0\subset V_1\subset\cdots\subset V,
\]
the idea of samplets is
to keep track of the increment of information
between two consecutive levels $j$ and $j+1$. Since we have
$V_{j}\subset V_{j+1}$, we may decompose
\begin{equation}\label{eq:decomposition}
V_{j+1} = V_j\overset{\perp}{\oplus} S_j
\end{equation}
by using the \emph{detail space} $S_j$. Of practical interest
is the particular choice of the basis of the detail space $S_j$ in $V_{j+1}$.
This basis is assumed to be orthonormal as well and will be denoted by
\[
{\boldsymbol\Sigma}_j = \{\sigma_{j,k}:k\in\nabla_j\mathrel{\mathrel{\mathop:}=}\Delta_{j+1}
\setminus \Delta_j\}.
\]
Recursively applying the decomposition \eqref{eq:decomposition},
we see that the set
\[
\mathbf\Sigma_J = {\boldsymbol\Phi}_0\cup\bigcup_{j=0}^{J-1}{\boldsymbol\Sigma}_j
\]
forms a basis of \(V_J\mathrel{\mathrel{\mathop:}=} V\), which we call a \emph{samplet basis}.
In order to employ samplets for the compression of data and
kernel matrices, it is favorable
that the measures $\sigma_{j,k}$
are localized with respect to the corresponding level $j$, i.e.\
\begin{equation}\label{eq:localized}
\diam(\supp\sigma_{j,k})\sim 2^{-j/d}
\end{equation}
(although this is not a requirement of our construction),
and that they are stable, i.e.\
\[
\langle \sigma_{j,k},\sigma_{j,k'}\rangle_V=0\quad\text{for }k\neq k'.
\]
Moreover, an essential ingredient is the vanishing moment
condition, meaning that
\begin{equation}\label{eq:vanishingMoments}
(p,\sigma_{j,k})_\Omega
= 0\quad \text{for all}\ p\in\Pcal_q(\Omega),
\end{equation}
where \(\Pcal_q(\Omega)\) denotes the space of all polynomials
with total degree at most \(q\).
We then say that the samplets have $q+1$ \emph{vanishing
moments}.
\begin{remark}
The concept of samplets has a very natural interpretation
in the context of reproducing kernel Hilbert spaces, compare
\cite{Aronszajn50}. If \((\Hcal,\langle\cdot,\cdot\rangle_{\Hcal})\)
is a reproducing kernel Hilbert space with reproducing kernel
\(\mathcal{K}\), then there holds
\((f,\delta_{{\boldsymbol x}_i})_\Omega
=\langle \mathcal{K}({\boldsymbol x}_i,\cdot),f\rangle_{\Hcal}\). Hence,
the samplet
\(\sigma_{j,k}=\sum_{\ell=1}^p\beta_\ell\delta_{{\boldsymbol x}_{i_\ell}}\)
can directly be identified with the function
\[
\hat{\sigma}_{j,k}\mathrel{\mathrel{\mathop:}=}
\sum_{\ell=1}^p\beta_\ell \mathcal{K}({\boldsymbol x}_{i_\ell},\cdot)\in\mathcal{H}.
\]
In particular, it holds
\[
\langle\hat{\sigma}_{j,k},h\rangle_\Hcal=0
\]
for any \(h\in\Hcal\) which satisfies
\(h|_{\supp\sigma_{j,k}}\in\Pcal_q(\supp\sigma_{j,k})\).
\end{remark}
\section{Construction of samplets}\label{sct:construction}
\subsection{Cluster tree}
In order to construct samplets with the desired properties,
especially vanishing moments, cf.\ \eqref{eq:vanishingMoments},
we shall transfer the wavelet construction of Tausch and
White from \cite{TW03} into our setting. The first step is to
construct subspaces of signed measures with localized
supports. To this end, we perform a hierarchical
clustering on the set \(X\).
\begin{definition}\label{def:cluster-tree}
Let $\mathcal{T}=(P,E)$ be a tree with vertices $P$ and edges $E$.
We define its set of leaves as
\[
\mathcal{L}(\mathcal{T})\mathrel{\mathrel{\mathop:}=}\{\nu\in P\colon\nu~\text{has no sons}\}.
\]
The tree $\mathcal{T}$ is a \emph{cluster tree} for
the set $X=\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}$, iff
the set $X$ is the \emph{root} of $\mathcal{T}$ and
all $\nu\in P\setminus\mathcal{L}(\mathcal{T})$
are disjoint unions of their sons.
The \emph{level} \(j_\nu\) of $\nu\in\mathcal{T}$ is its distance from the root,
i.e.\ the number of son relations that are required for traveling from
$X$ to $\nu$. The \emph{depth} \(J\) of \(\Tcal\) is the maximum level
of all clusters. We define the set of clusters
on level $j$ as
\[
\mathcal{T}_j\mathrel{\mathrel{\mathop:}=}\{\nu\in\mathcal{T}\colon \nu~\text{has level}~j\}.
\]
Finally, the \emph{bounding box} $B_{\nu}$ of \(\nu\)
is defined as the smallest axis-parallel cuboid that
contains all its points.
\end{definition}
There exist several possibilities for the choice of a
cluster tree for the set \(X\). However, within this article,
we will exclusively consider binary trees and remark that it is of course
possible to consider other options, such as
\(2^d\)-trees, with the obvious modifications.
Definition~\ref{def:cluster-tree} provides a hierarchical cluster
structure on the set \(X\). Even so, it does not provide guarantees
for the sizes and cardinalities of the clusters.
Therefore, we introduce the concept
of a balanced binary tree.
\begin{definition}
Let $\Tcal$ be a cluster tree on $X$ with depth $J$.
$\Tcal$ is called a \emph{balanced binary tree}, if all
clusters $\nu$ satisfy the following conditions:
\begin{enumerate}
\item
The cluster $\nu$ has exactly two sons
if $j_{\nu} < J$. It has no sons if $j_{\nu} = J$.
\item
It holds $|\nu|\sim 2^{J-j_{\nu}}$.
\end{enumerate}
\end{definition}
A balanced binary tree can be constructed by \emph{cardinality
balanced clustering}. This means that the root cluster
is split into two son clusters of identical (or similar)
cardinality. This process is repeated recursively for the
resulting son clusters until their cardinality falls below a
certain threshold.
For the subdivision, the cluster's bounding box
is split along its longest edge such that the
resulting two boxes both contain an equal number of points.
Thus, as the cluster cardinality halves with each level,
we obtain $\mathcal{O}(\log N)$ levels in total.
The total cost for constructing the cluster tree
is therefore $\mathcal{O}(N \log N)$. Finally, we remark that a
balanced tree is only required to guarantee the cost bounds
for the presented algorithms. The error and compression estimates
we shall present later on are robust in the sense that they
are formulated directly in terms of the actual cluster sizes
rather than the introduced cluster level.
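The cardinality balanced clustering described above can be sketched in a few lines of Python. The following is our own minimal illustration under simplifying assumptions (dictionary-based tree nodes and a hypothetical \texttt{leaf\_size} threshold), not the implementation used for the numerical experiments:

```python
import numpy as np

def build_cluster_tree(points, indices=None, leaf_size=4):
    """Cardinality balanced clustering: split the cluster's bounding
    box along its longest edge such that both sons hold (almost) half
    of the points, recursing until the leaf size is reached."""
    if indices is None:
        indices = np.arange(len(points))
    node = {"indices": indices, "sons": []}
    if len(indices) <= leaf_size:
        return node
    box = points[indices]
    d = np.argmax(box.max(axis=0) - box.min(axis=0))   # longest edge
    order = indices[np.argsort(points[indices, d])]    # median split
    half = len(order) // 2
    node["sons"] = [build_cluster_tree(points, order[:half], leaf_size),
                    build_cluster_tree(points, order[half:], leaf_size)]
    return node

def depth(node):
    return 1 + max(map(depth, node["sons"])) if node["sons"] else 0

pts = np.random.default_rng(0).random((1024, 2))
tree = build_cluster_tree(pts)
assert depth(tree) == 8          # 1024 points, leaf size 4 -> log2(256) levels
```

Since the cardinality halves with each split, the resulting tree has the $\mathcal{O}(\log N)$ levels stated above, and the sorting-based splits account for the $\mathcal{O}(N\log N)$ construction cost.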
\subsection{Multiscale hierarchy}
Having a cluster tree at hand, we
shall now construct a samplet basis on the resulting
hierarchical structure. We begin by introducing a \emph{two-scale}
transform between basis elements on a cluster $\nu$ of level $j$.
To this end, we create \emph{scaling functions} $\mathbf{\Phi}_{j}^{\nu}
= \{ \varphi_{j,k}^{\nu} \}$ and \emph{samplets} $\mathbf{\Sigma}_{j}^{\nu}
= \{ \sigma_{j,k}^{\nu} \}$ as linear combinations of the scaling
functions $\mathbf{\Phi}_{j+1}^{\nu}$ of $\nu$'s son clusters.
This results in the \emph{refinement relation}
\begin{equation}\label{eq:refinementRelation}
[ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ]
\mathrel{\mathrel{\mathop:}=}
\mathbf{\Phi}_{j+1}^{\nu}
{\boldsymbol Q}_j^{\nu}=
\mathbf{\Phi}_{j+1}^{\nu}
\big[ {\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}\big].
\end{equation}
In order to provide both vanishing moments and orthonormality,
the transformation \({\boldsymbol Q}_{j}^{\nu}\) has to be
appropriately constructed. For this purpose, we consider an orthogonal
decomposition of the \emph{moment matrix}
\[
{\boldsymbol M}_{j+1}^{\nu}\mathrel{\mathrel{\mathop:}=}
\begin{bmatrix}({\boldsymbol x}^{\boldsymbol 0},\varphi_{j+1,1})_\Omega&\cdots&
({\boldsymbol x}^{\boldsymbol 0},\varphi_{j+1,|\nu|})_\Omega\\
\vdots & & \vdots\\
({\boldsymbol x}^{\boldsymbol\alpha},\varphi_{j+1,1})_\Omega&\cdots&
({\boldsymbol x}^{\boldsymbol\alpha},\varphi_{j+1,|\nu|})_\Omega
\end{bmatrix}=
[({\boldsymbol x}^{\boldsymbol\alpha},\mathbf{\Phi}_{j+1}^{\nu})_\Omega]_{|\boldsymbol\alpha|\le q}
\in\Rbb^{m_q\times|\nu|},
\]
where
\begin{equation}\label{eq:mq}
m_q\mathrel{\mathrel{\mathop:}=}\sum_{\ell=0}^q{\ell+d-1\choose d-1}={q+d\choose d}\leq(q+1)^d
\end{equation}
denotes the dimension of \(\Pcal_q(\Omega)\).
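The identity and the bound in \eqref{eq:mq} are straightforward to verify numerically; the following short sanity check is ours and not part of the paper:

```python
from math import comb

def m_q(q, d):
    """Dimension of the polynomials of total degree at most q
    in d variables, computed as the sum over the degrees."""
    return sum(comb(l + d - 1, d - 1) for l in range(q + 1))

# check the closed form and the upper bound for a range of q and d
for q in range(6):
    for d in range(1, 5):
        assert m_q(q, d) == comb(q + d, d) <= (q + 1) ** d
```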
In the original construction by
Tausch and White, the matrix \({\boldsymbol Q}_{j}^{\nu}\) is obtained
by a singular value decomposition of \({\boldsymbol M}_{j+1}^{\nu}\).
For the construction of samplets, we follow the idea
from \cite{AHK14} and rather
employ the QR decomposition, which has the advantage of generating
samplets with an increasing number of vanishing moments.
It holds
\begin{equation}\label{eq:QR}
({\boldsymbol M}_{j+1}^{\nu})^\intercal = {\boldsymbol Q}_j^\nu{\boldsymbol R}
\mathrel{=\mathrel{\mathop:}}\big[{\boldsymbol Q}_{j,\Phi}^{\nu} ,
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]{\boldsymbol R}.
\end{equation}
Consequently, the moment matrix
for the cluster's own scaling functions and samplets is then
given by
\begin{equation}\label{eq:vanishingMomentsQR}
\begin{aligned}
\big[{\boldsymbol M}_{j,\Phi}^{\nu}, {\boldsymbol M}_{j,\Sigma}^{\nu}\big]
&= \left[({\boldsymbol x}^{\boldsymbol\alpha},[\mathbf{\Phi}_{j}^{\nu},
\mathbf{\Sigma}_{j}^{\nu}])_\Omega\right]_{|\boldsymbol\alpha|\le q}
= \left[({\boldsymbol x}^{\boldsymbol\alpha},\mathbf{\Phi}_{j+1}^{\nu}[{\boldsymbol Q}_{j,\Phi}^{\nu}
, {\boldsymbol Q}_{j,\Sigma}^{\nu}])_\Omega
\right]_{|\boldsymbol\alpha|\le q} \\
&= {\boldsymbol M}_{j+1}^{\nu} [{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ]
= {\boldsymbol R}^\intercal.
\end{aligned}
\end{equation}
As ${\boldsymbol R}^\intercal$ is a lower triangular matrix, the first $k-1$
entries in its $k$-th column are zero. This corresponds to
$k-1$ vanishing moments for the $k$-th function generated
by the transformation
${\boldsymbol Q}_{j}^{\nu}=[{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ]$.
By defining the first $m_{q}$ functions as scaling functions and
the remaining as samplets, we obtain samplets with vanishing
moments at least up to order $q+1$. By increasing
the polynomial degree to \(\hat{q}\geq q\) at the leaf clusters
such that \(m_{\hat{q}}\geq 2m_q\), we can even construct
samplets with an increasing number of vanishing moments up to order \(\hat{q}+1\)
without any additional cost.
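As an illustration, the two-scale transform for a single leaf cluster, whose scaling functions are the Dirac measures at its points, can be computed with a few lines of NumPy. This is a sketch under our own conventions (monomial moment matrix, full QR), not the paper's implementation:

```python
import numpy as np
from itertools import product

def moment_matrix(points, q):
    """Rows: monomials x^alpha with |alpha| <= q; columns: the Dirac
    measures at the cluster's points."""
    d = points.shape[1]
    alphas = [a for a in product(range(q + 1), repeat=d) if sum(a) <= q]
    return np.array([[np.prod(x ** np.array(a)) for x in points]
                     for a in alphas])

def two_scale_transform(points, q):
    """QR decomposition M^T = QR; the first m_q columns of Q define
    the scaling functions, the remaining columns the samplets."""
    M = moment_matrix(points, q)                  # m_q x |nu|
    Q, _ = np.linalg.qr(M.T, mode="complete")     # Q is |nu| x |nu|
    return Q[:, :M.shape[0]], Q[:, M.shape[0]:]   # Q_Phi, Q_Sigma

rng = np.random.default_rng(0)
pts = rng.random((20, 2))                         # a leaf cluster, |nu| = 20
Q_phi, Q_sigma = two_scale_transform(pts, q=2)
M = moment_matrix(pts, 2)
# every samplet annihilates all polynomials of total degree <= q ...
assert np.allclose(M @ Q_sigma, 0.0)
# ... and the transform is orthonormal
Q = np.hstack([Q_phi, Q_sigma])
assert np.allclose(Q.T @ Q, np.eye(len(pts)))
```

Since $\boldsymbol R$ is upper triangular with only $m_q$ nonzero rows, the trailing columns of $\boldsymbol Q$ automatically have all $q+1$ vanishing moments, which the first assertion confirms.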
\begin{remark}
We remark that the samplet construction using vanishing moments
is inspired by the classical wavelet theory. However, it is easily
possible to adapt the construction to other primitives of interest.
\end{remark}
\begin{remark}
\label{remark:introCQ}
Each cluster has at most a constant number of scaling
functions and samplets: For a particular cluster $\nu$, their number
is identical to the cardinality of $\mathbf{\Phi}_{j+1}^{\nu}$. For leaf
clusters, this number is bounded by the leaf size.
For non-leaf clusters, it is bounded by the number of scaling functions
provided by all its son clusters. As there are at most two
son clusters with a maximum of $m_q$ scaling functions each,
we obtain the bound $2 m_q$ for non-leaf clusters. Note that,
if $\mathbf{\Phi}_{j+1}^{\nu}$ has at most $m_q$ elements, a
cluster will not provide any samplets at all and all functions
will be considered as scaling functions.
\end{remark}
For leaf clusters, we define the scaling functions by
the Dirac measures supported at the points \({\boldsymbol x}_i\), i.e.\
$\mathbf{\Phi}_J^{\nu}\mathrel{\mathrel{\mathop:}=}\{ \delta_{{\boldsymbol x}_i} : {\boldsymbol x}_i\in\nu \}$,
to account for the lack of son clusters that could provide scaling
functions.
The scaling functions of all clusters on a specific level $j$
then generate the spaces
\begin{equation}\label{eq:Vspaces}
V_{j}\mathrel{\mathrel{\mathop:}=} \spn\{ \varphi_{j,k}^{\nu} : k\in \Delta_j^\nu,\ \nu \in\Tcal_{j} \},
\end{equation}
while the samplets span the detail spaces
\begin{equation}\label{eq:Wspaces}
S_{j}\mathrel{\mathrel{\mathop:}=}
\spn\{ \sigma_{j,k}^{\nu} : k\in \nabla_j^\nu,\
\nu \in \Tcal_{j} \} =
V_{j+1}\overset{\perp}{\ominus} V_j.
\end{equation}
Combining the scaling functions of the root cluster with all
clusters' samplets gives rise to the samplet basis
\begin{equation}\label{eq:Wbasis}
\mathbf{\Sigma}_{N}\mathrel{\mathrel{\mathop:}=}\mathbf{\Phi}_{0}^{X}
\cup \bigcup_{\nu \in T} \mathbf{\Sigma}_{j}^{\nu}.
\end{equation}
Writing $\mathbf{\Sigma}_{N}
= \{ \sigma_{k} : 1 \leq k \leq N \}$, where
$\sigma_{k}$ is either a samplet or a scaling function
at the root cluster, we can establish a unique indexing of
all the signed measures comprising the samplet
basis. The indexing induces an order on the
basis set $\mathbf{\Sigma}_{N}$, which we choose
to be level-dependent: Samplets belonging to a particular
cluster are grouped together, with those on finer levels
having larger indices.
\begin{remark}\label{remark:waveletLeafSize}
We remark that the samplet basis on a balanced
cluster tree can be computed at cost $\mathcal{O}(N)$;
we refer to \cite{AHK14} for a proof.
\end{remark}
\subsection{Properties of the samplets}
By construction, the samplets satisfy the following
properties, which can directly be inferred from
the corresponding results in \cite{HKS05,TW03}.
\begin{theorem}\label{theo:waveletProperties}
The spaces $V_{j}$ defined in equation \eqref{eq:Vspaces}
exhibit the desired multiscale hierarchy
\[
V_0\subset V_1\subset\cdots\subset V_J = V,
\]
where the corresponding complement spaces $S_{j}$ from \eqref{eq:Wspaces}
satisfy $V_{j+1}=V_j\overset{\perp}{\oplus} S_{j}$ for all $j=0,1,\ldots,
J-1$. The associated samplet basis $\mathbf{\Sigma}_{N}$ defined in
\eqref{eq:Wbasis} constitutes an orthonormal basis of $V$.
In particular:
\begin{enumerate}
\item The number of all samplets on level $j$ behaves like $2^j$.
\item The samplets have $q+1$ vanishing moments.
\item
Each samplet is supported in a specific cluster $\nu$.
If the points in $X$ are uniformly distributed, then the
diameter of the cluster satisfies $\diam(\nu)\sim
2^{-j_\nu/d}$ and it holds \eqref{eq:localized}.
\end{enumerate}
\end{theorem}
\begin{remark}
Due to $S_j\subset V$ and $V_0\subset V$,
we conclude that each samplet is a linear combination of the
Dirac measures supported at the points in $X$. Especially, the
related coefficient vectors ${\boldsymbol\omega}_{j,k}$ in
\begin{equation}\label{eq:coefficientVectorsOfWavelets}
\sigma_{j,k} = \sum_{i=1}^{N}
\omega_{j,k,i} \delta_{{\boldsymbol x}_i} \quad
\text{and} \quad \varphi_{0,k} = \sum_{i=1}^{N}
\omega_{0,k,i} \delta_{{\boldsymbol x}_i}
\end{equation}
are pairwise orthonormal with respect to the inner
product on \(\Rbb^N\).
\end{remark}
Later on, the following bound on the $\ell^1$-norm of the samplets'
coefficient vectors will be essential:
\begin{lemma}\label{lemma:waveletL1Norm}
The coefficient vector ${\boldsymbol\omega}_{j,k}=\big[\omega_{j,k,i}\big]_i$ of
the samplet $\sigma_{j,k}$ on the cluster $\nu$ fulfills
\begin{equation}\label{eq:ell1-norm}
\|{\boldsymbol\omega}_{j,k}\|_{1}\le\sqrt{|\nu|}.
\end{equation}
The same holds for the scaling functions $\varphi_{j,k}$.
\end{lemma}
\begin{proof}
It holds $\|{\boldsymbol\omega}_{j,k}\|_{2}=1$. Hence,
the assertion follows immediately from the Cauchy-Schwarz
inequality
\[
\|{\boldsymbol\omega}_{j,k}\|_{1}\le\sqrt{|\nu|}\|{\boldsymbol\omega}_{j,k}\|_{2}
=\sqrt{|\nu|}.
\]
\end{proof}
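Numerically, the bound \eqref{eq:ell1-norm} is just the Cauchy-Schwarz inequality applied to normalized coefficient vectors; a quick sanity check with generic unit vectors (not actual samplet coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (4, 16, 256):
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)          # ||w||_2 = 1, as for samplets
    # the l1-norm is bounded by sqrt(|nu|) times the l2-norm
    assert np.linalg.norm(w, 1) <= np.sqrt(n) * (1 + 1e-12)
```

Equality is attained for vectors with entries of constant modulus, so the bound is sharp.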
The key for data compression and singularity detection
is the following estimate which shows that the samplet
coefficients decay with respect to the samplet's level
provided that the data result from the evaluation of a smooth function.
Therefore, in case of smooth data, the samplet
coefficients are small and can be set to zero without
compromising the accuracy. Vice versa, a large samplet
coefficient indicates that the data are singular in the
region of the samplet's support.
\begin{lemma}\label{lemma:decay}
Let $f\in C^{q+1}(\Omega)$. Then, it holds for
a samplet $\sigma_{j,k}$ supported
on the cluster $\nu$ that
\begin{equation}\label{eq:decay}
|(f,\sigma_{j,k})_\Omega|\le
\diam(\nu)^{q+1}\|f\|_{C^{q+1}(\Omega)}\|{\boldsymbol\omega}_{j,k}\|_{1}.
\end{equation}
\end{lemma}
\begin{proof}
For ${\boldsymbol x}_0\in\nu$, the Taylor expansion of $f$ yields
\[
f({\boldsymbol x}) = \sum_{|\boldsymbol\alpha|\le q}
\frac{\partial^{|\boldsymbol\alpha|}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}f({\boldsymbol x}_0)
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
+ R_{{\boldsymbol x}_0}({\boldsymbol x}).
\]
Herein, the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x})$ reads
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
f\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0)\big)(1-s)^q\d s.
\end{align*}
In view of the vanishing moments, we conclude
\begin{align*}
|(f,\sigma_{j,k})_\Omega|
&= |(R_{{\boldsymbol x}_0},\sigma_{j,k})_\Omega|
\le\sum_{|\boldsymbol\alpha|=q+1}
\frac{\|{\boldsymbol x}-{\boldsymbol x}_0\|_2^{|\boldsymbol\alpha|}}{\boldsymbol\alpha!}
\max_{{\boldsymbol x}\in\nu}\bigg|\frac{\partial^{q+1}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}}f(\boldsymbol x)\bigg|
\|{\boldsymbol\omega}_{j,k}\|_{1}\\
&\le\diam(\nu)^{q+1}\|f\|_{C^{q+1}(\Omega)}\|{\boldsymbol\omega}_{j,k}\|_{1}.
\end{align*}
Here, we used the estimate
\[
\sum_{|\boldsymbol\alpha|=q+1}\frac{2^{-(q+1)}}{\boldsymbol\alpha!}\le 1,
\]
which is obtained by choosing \({\boldsymbol x}_0\) as the
cluster's midpoint.
\end{proof}
\section{Discrete samplet transform}\label{sec:FST}
In order to transform between the samplet basis
and the basis of Dirac measures, we introduce
the \emph{discrete samplet transform} and its inverse.
To this end, we assume that the data
\(({\boldsymbol x}_1,y_1),\ldots,({\boldsymbol x}_N,y_N)\)
result from the evaluation of some (unknown) function
\(f\colon\Omega\to\Rbb\),
i.e.\
\[y_i=f_i^{\Delta}=(f,\delta_{{\boldsymbol x}_i})_\Omega.
\]
Hence, we may represent the function \(f\) on \(X\)
according to
\[f = \sum_{i = 1}^{N} f_i^{\Delta} \delta_{{\boldsymbol x}_i}.
\]
Our goal is now to compute the representation
\[f =
\sum_{k = 1}^{N} f_{k}^{\Sigma} \sigma_{k}
\]
with respect to the samplet basis.
For the
sake of simpler notation, let
${\boldsymbol f}^{\Delta}\mathrel{\mathrel{\mathop:}=} [f_i^{\Delta}]_{i=1}^N$
and ${\boldsymbol f}^{\Sigma}\mathrel{\mathrel{\mathop:}=} [f_i^\Sigma]_{i=1}^N$ denote
the associated coefficient vectors.
\begin{figure}[htb]
\begin{center}
\scalebox{0.75}{
\begin{tikzpicture}[x=0.4cm,y=0.4cm]
\tikzstyle{every node}=[circle,draw=black,fill=shadecolor,
minimum size=1.2cm]%
\tikzstyle{ptr}=[draw=none,fill=none,above]%
\node at (0,5) (1) {${\boldsymbol f}^{\Delta}$};
\node at (8,5) (2) {${\boldsymbol f}_{J-1}^{\Phi}$};
\node at (8,1) (3) {${\boldsymbol f}_{J-1}^{\Sigma}$};
\node at (16,5) (4) {${\boldsymbol f}_{J-2}^{\Phi}$};
\node at (16,1) (5) {${\boldsymbol f}_{J-2}^{\Sigma}$};
\node at (24,5) (6) {${\boldsymbol f}_{J-3}^{\Phi}$};
\node at (24,1) (7) {${\boldsymbol f}_{J-3}^{\Sigma}$};
\node at (30,5) (8) {${\boldsymbol f}_{1}^{\Phi}$};
\node at (38,5) (9) {${\boldsymbol f}_{0}^{\Phi}$};
\node at (38,1) (10) {${\boldsymbol f}_{0}^{\Sigma}$};
\tikzstyle{forward}=[draw,-stealth]%
\tikzstyle{every node}=[style=ptr]
\draw
(1) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-1,\Phi}^\intercal$} (2)
(1) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-1,\Sigma}^\intercal$}%
(3)
(2) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-2,\Phi}^\intercal$} (4)
(2) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-2,\Sigma}^\intercal$}%
(5)
(4) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-3,\Phi}^\intercal$} (6)
(4) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-3,\Sigma}^\intercal$}%
(7)
(8) edge[forward] node[above,sloped]{${\boldsymbol Q}_{0,\Phi}^\intercal$} (9)
(8) edge[forward] node[above,sloped]{${\boldsymbol Q}_{0,\Sigma}^\intercal$}%
(10);
\tikzstyle{every node}=[style=ptr]%
\tikzstyle{ptr}=[draw=none,fill=none]%
\node at (27,5) (16) {$\hdots$};
\end{tikzpicture}}
\caption{\label{fig:haar}Visualization of the discrete samplet transform.}
\end{center}
\end{figure}
The discrete samplet transform is based on
recursively applying the refinement relation
\eqref{eq:refinementRelation} to the point evaluations
\begin{equation}\label{eq:refinementRelationInnerProducts}
(f, [ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ])_\Omega
=(f, \mathbf{\Phi}_{j+1}^{\nu} [{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ])_\Omega \\
=(f, \mathbf{\Phi}_{j+1}^{\nu})_\Omega [{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ].
\end{equation}
On the finest level, the entries of the vector
$(f, \mathbf{\Phi}_{J}^{\nu})_\Omega$
are exactly those of ${\boldsymbol f}^{\Delta}$. Recursively
applying equation \eqref{eq:refinementRelationInnerProducts} therefore
yields all the coefficients $(f, \mathbf{\Sigma}_{j}^{\nu})_\Omega$,
including $(f, \mathbf{\Phi}_{0}^{X})_\Omega$,
required for the representation of $f$ in the samplet basis,
see Figure~\ref{fig:haar} for a visualization. The
complete procedure is
formulated in Algorithm~\ref{algo:DWT}.\bigskip
\begin{algorithm}[H]
\caption{Discrete samplet transform}
\label{algo:DWT}
\KwData{Data ${\boldsymbol f}^\Delta$,
cluster tree $\Tcal$ and transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Coefficients ${\boldsymbol f}^{\Sigma}$
stored as
$[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal$ and
$[(f,\mathbf{\Sigma}_{j}^{\nu})_\Omega]^\intercal$.}
\Begin{
store $[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal\mathrel{\mathrel{\mathop:}=}$
\FuncSty{transformForCluster}($X$)
}
\end{algorithm}
\begin{function}[H]
\caption{transformForCluster($\nu$)}
\Begin{
\uIf{$\nu=\{{\boldsymbol x}_{i_{1}}, \dots,{\boldsymbol x}_{i_{|\nu|}}\}$
is a leaf of \(\Tcal\)}{
set ${\boldsymbol f}_{j+1}^{\nu}\mathrel{\mathrel{\mathop:}=}
\big[f_{i_{k}}^\Delta\big]_{k=1}^{|\nu|}$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
execute $\transformForCluster(\nu')$\\
append the result to ${\boldsymbol f}_{j+1}^{\nu}$
}
}
set $[(f,\mathbf{\Sigma}_{j}^{\nu})_\Omega]^\intercal
\mathrel{\mathrel{\mathop:}=}({\boldsymbol Q}_{j,\Sigma}^{\nu})^\intercal {\boldsymbol f}_{j+1}^{\nu}$
\Return{$({\boldsymbol Q}_{j,\Phi}^{\nu})^\intercal{\boldsymbol f}_{j+1}^{\nu}$}
}
\end{function}\bigskip
\begin{remark}
Algorithm \ref{algo:DWT} employs the transposed version of
\eqref{eq:refinementRelationInnerProducts} to preserve
the column vector structure of ${\boldsymbol f}^\Delta$ and ${\boldsymbol f}^{\Sigma}$.
\end{remark}
The inverse transformation is obtained by reversing
the steps of the discrete samplet transform:
For each cluster, we compute
\[
(f, \mathbf{\Phi}_{j+1}^{\nu})_\Omega
= (f, [ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ]
)_\Omega[{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ]^\intercal
\]
to either obtain the
coefficients of the
son clusters' scaling functions
or, for leaf clusters, the coefficients ${\boldsymbol f}^{\Delta}$.
The procedure is summarized in Algorithm~\ref{algo:iDWT}.\bigskip
\begin{algorithm}[H]
\caption{Inverse samplet transform}
\label{algo:iDWT}
\KwData{Coefficients ${\boldsymbol f}^\Sigma$,
cluster tree $\Tcal$ and transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Coefficients ${\boldsymbol f}^{\Delta}$
stored as
$[(f,\mathbf{\Phi}_{j}^{\nu})_\Omega]^\intercal$.}
\Begin{
\FuncSty{inverseTransformForCluster}($X$,
$[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal$)
}
\end{algorithm}
\begin{function}[H]
\caption{inverseTransformForCluster($\nu$,
\unexpanded{$[(f,{\boldsymbol\Phi}_{j}^\nu)_\Omega]^\intercal$})}
\Begin{
$[(f,{\boldsymbol\Phi}_{j+1}^\nu)_\Omega]^\intercal
\mathrel{\mathrel{\mathop:}=} [{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ]
\begin{bmatrix}
[(f,{\boldsymbol\Phi}_{j}^\nu)_\Omega]^\intercal\\
[(f,{\boldsymbol\Sigma}_{j}^\nu)_\Omega]^\intercal
\end{bmatrix}$
\uIf{$\nu=\{{\boldsymbol x}_{i_{1}}, \dots,{\boldsymbol x}_{i_{|\nu|}}\}$
is a leaf of \(\Tcal\)}{set $\big[f_{i_{k}}^\Delta\big]_{k=1}^{|\nu|}
\mathrel{\mathrel{\mathop:}=}[(f,{\boldsymbol\Phi}_{j+1}^\nu)_\Omega]^\intercal$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
assign the part of $[(f,{\boldsymbol\Phi}_{j+1}^\nu)_\Omega]^\intercal$
belonging to \(\nu'\) to $[(f,{\boldsymbol\Phi}_{j'}^{\nu'})_\Omega]^\intercal$\\
execute \FuncSty{inverseTransformForCluster}($\nu'$,
$[(f,{\boldsymbol\Phi}_{j'}^{\nu'})_\Omega]^\intercal$) }
}
}
\end{function}\bigskip
The discrete samplet transform and its inverse
can be performed at linear cost. This
result is well known in the case of wavelets and was
crucial for their rapid development.
\begin{theorem}
The runtimes of the discrete samplet transform and the inverse
samplet transform are each \(\mathcal{O}(N)\).
\end{theorem}
\begin{proof}
As the samplet construction follows the construction
of Tausch and White, we refer to \cite{TW03} for the
details of the proof.
\end{proof}
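For the special case \(q=0\) and \(N=2^J\) uniformly spaced points with pairwise leaf clusters, the construction essentially reduces to the classical Haar basis (one vanishing moment), for which both linear-cost transforms can be sketched in a few lines. This is our own simplified illustration, not the code used in the experiments:

```python
import numpy as np

def dst(f):
    """Forward transform; len(f) must be a power of two.
    Returns coefficients ordered from coarse to fine."""
    f = np.asarray(f, dtype=float)
    details = []
    while len(f) > 1:
        even, odd = f[0::2], f[1::2]
        details.append((even - odd) / np.sqrt(2.0))  # samplet coefficients
        f = (even + odd) / np.sqrt(2.0)              # scaling coefficients
    return np.concatenate([f] + details[::-1])

def idst(c):
    """Inverse transform, reversing the steps of dst."""
    c = np.asarray(c, dtype=float)
    f, pos = c[:1], 1
    while pos < len(c):
        d = c[pos:pos + len(f)]
        out = np.empty(2 * len(f))
        out[0::2] = (f + d) / np.sqrt(2.0)
        out[1::2] = (f - d) / np.sqrt(2.0)
        pos += len(d)
        f = out
    return f

f = np.random.default_rng(0).standard_normal(64)
assert np.allclose(idst(dst(f)), f)                           # round trip
assert np.isclose(np.linalg.norm(dst(f)), np.linalg.norm(f))  # orthonormality
assert np.allclose(dst(np.ones(64))[1:], 0.0)                 # vanishing moment
```

Each sweep halves the vector length, so the total work is a geometric series bounded by \(\mathcal{O}(N)\), in accordance with the theorem above.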
\section{Numerical results I}\label{sec:Num1}
To demonstrate the efficacy of the samplet analysis,
we compress different sample data in one, two and three
spatial dimensions. For each example, we use samplets
with \(q+1=3\) vanishing moments.
\subsection*{One dimension}
We start with two one-dimensional
examples. On the one hand, we consider the test function
\[
f(x)=\frac 3 2 e^{-40|x-\frac 1 4|}
+ 2e^{-40|x|}-e^{-40|x+\frac 1 2|},
\]
sampled at $8192$ uniformly distributed points on \([-1,1]\).
On the other hand, we consider a path of a Brownian motion
sampled at the same points. The coefficients of the samplet
transformed data are thresholded with \(10^{-i}\|{\boldsymbol f}^{\Sigma}\|_\infty\),
\(i=1,2,3\), respectively.
The resulting compression ratios and the reconstructions
can be found in Figure~\ref{fig:Expcomp} and Figure~\ref{fig:BMcomp},
respectively. One readily infers that in both cases high compression
rates are achieved at high accuracy. In case of the Brownian motion,
the smoothing of the sample data can be realized by increasing the
compression rate, corresponding to throwing away more and
more detail information. Indeed, due to the orthonormality of the samplet
basis, this procedure amounts to a least squares fit of the data.
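The thresholding rule and the least squares property can be illustrated with synthetic coefficients and a generic orthonormal basis; this is a toy setup of ours, while the figures below use the actual samplet basis and the data described above:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
# hypothetical "samplet" coefficients with multiscale decay
c = rng.standard_normal(N) / np.arange(1, N + 1)
# generic orthonormal basis as a stand-in for the samplet basis
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
f = Q @ c

for i in (1, 2, 3):
    tau = 10.0 ** (-i) * np.abs(c).max()          # threshold 1e-i * ||c||_inf
    c_thres = np.where(np.abs(c) >= tau, c, 0.0)
    compression = 1.0 - np.count_nonzero(c_thres) / N
    err = np.linalg.norm(f - Q @ c_thres)
    # by orthonormality, the l2 error equals the norm of the dropped
    # coefficients, i.e., thresholding is a least squares fit
    assert np.isclose(err, np.linalg.norm(c - c_thres))
    print(f"i={i}: compression {100 * compression:.2f}%, error {err:.3e}")
```

Lowering the threshold (larger \(i\)) keeps more coefficients, trading compression rate for accuracy, exactly as observed in the figures.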
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{axis}[width=\textwidth, height=0.4\textwidth, xmin = -1, xmax=1,
ymin=-1.1,
ymax=2.1, ylabel={$y$}, xlabel ={$x$},legend style={mark
options={scale=2}},
legend pos = north east]
\addplot[line width=0.7pt,color=black]
table[each nth point=3,x index={0},y index = {1}]{%
./Results/ExpCompress1D.txt};
\addlegendentry{data};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {5}]{%
./Results/ExpCompress1D.txt};
\addlegendentry{$98.55\%$ compr.};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {4}]{%
./Results/ExpCompress1D.txt};
\addlegendentry{$99.17\%$ compr.};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {3}]{%
./Results/ExpCompress1D.txt};
\addlegendentry{$99.63\%$ compr.};
\end{axis}
\end{tikzpicture}
\end{center}
\caption{\label{fig:Expcomp}Sampled test function approximated with
different compression ratios.}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[ ausschnitt/.style={black!80}]
\begin{axis}[width=\textwidth, height=0.4\textwidth, xmin = -1, xmax=1,
ymin=-1,
ymax=2.4,
ylabel={$y$}, xlabel ={$x$},legend style={mark options={scale=2}},
legend pos = south east]
\draw[ausschnitt]
(axis cs:-0.5,-0.5)coordinate(ul)--
(axis cs:0.005,-0.5)coordinate(ur)--
(axis cs:0.005,0.4)coordinate(or)--
(axis cs:-0.5,0.4) -- cycle;
\addplot[line width=0.7pt,color=black]
table[each nth point=4,x index={0},y index = {1}]{%
./Results/BMCompress1D.txt};
\addlegendentry{data};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {5}]{%
./Results/BMCompress1D.txt};
\addlegendentry{$92.69\%$ compr.};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {4}]{%
./Results/BMCompress1D.txt};
\addlegendentry{$99.24\%$ compr.};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {3}]{%
./Results/BMCompress1D.txt};
\addlegendentry{$99.88\%$ compr.};
\end{axis}
\begin{axis}[yshift=-.37\textwidth,xshift=0.25\textwidth,
width=0.5\textwidth, height=0.4\textwidth, xmin = -0.5,
xmax=0.005, ymin=-0.5,
ymax=0.4,axis line style=ausschnitt]
\addplot[line width=0.7pt,color=black]
table[each nth point=2,x index={0},y index = {1}]{%
./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {5}]{%
./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {4}]{%
./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {3}]{%
./Results/BMCompress1D.txt};
\end{axis}
\draw[ausschnitt]
(current axis.north west)--(ul)
(current axis.north east)--(ur);
\end{tikzpicture}
\caption{\label{fig:BMcomp}Sampled Brownian motion approximated with
different compression ratios.}
\end{center}
\end{figure}
\subsection*{Two dimensions}
As a second application for samplets, we consider image compression.
To this end, we use a \(2000\times 2000\) pixel grayscale landscape
image. The coefficients of the samplet transformed image are thresholded
with \(10^{-i}\|{\boldsymbol f}^{\Sigma}\|_\infty\), \(i=2,3,4\), respectively.
The corresponding
results and compression rates can be found in Figure~\ref{fig:compImage}.
A visualization of the samplet coefficients for the lowest
compression rate can be found in Figure~\ref{fig:coeffImage}.
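The thresholding rule used here is easy to state in code. The following Python sketch is our illustration, not the code used for the experiments; the samplet transform is assumed to have been applied already, so the input is simply a list of transformed coefficients. All coefficients of magnitude below \(10^{-i}\) times the largest one are zeroed, and the resulting compression rate is reported:

```python
def threshold(coeffs, i):
    """Zero all coefficients below 1e-i * max|coeff|; return (compressed, rate in %)."""
    tau = 10.0 ** (-i) * max(abs(c) for c in coeffs)
    compressed = [c if abs(c) >= tau else 0.0 for c in coeffs]
    dropped = sum(1 for c in compressed if c == 0.0)
    return compressed, 100.0 * dropped / len(coeffs)

# toy coefficient vector, i = 2 keeps everything >= 1e-2 * max|coeff|
coeffs = [1.0, 0.5, 0.009, -0.0005, 0.02, -0.3]
comp, rate = threshold(coeffs, 2)
```

Only the surviving coefficients (together with their indices) need to be stored, which is what the compression rates in Figure~\ref{fig:compImage} measure.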
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/OriginalLugano.png}};
\draw(0,2.4)node{Original image};
\draw(5,2.4)node{\(95.23\%\) compression};
\draw(5,0)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedLowLugano.png}};
\draw(0,-2.6)node{\(99.89\%\) compression};
\draw(0,-5)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedIntermedLugano.png}};
\draw(5,-2.6)node{\(99.99\%\) compression};
\draw(5,-5)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedHighLugano.png}};
\end{tikzpicture}
\caption{\label{fig:compImage}Different compression rates of the
test image.}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[scale = 0.12,trim=1000 195 1000 195,clip]{%
./Results/LuganoCoeffs.png}};
\draw(3.3,0)node{\includegraphics[scale = 0.14,trim=2100 400 460 400,clip]{%
./Results/LuganoCoeffs.png}};
\end{tikzpicture}
\caption{\label{fig:coeffImage}Visualization of the samplet coefficients
for the test image.}
\end{center}
\end{figure}
\subsection*{Three dimensions}
Finally, we show a result in three dimensions.
Here, the points are given by a uniform subsample of
a triangulation of the Stanford bunny. We consider data on the
Stanford bunny generated by the function
\[
f({\boldsymbol x})=e^{-20\|{\boldsymbol x}-{\boldsymbol p}_0\|_2}
+e^{-20\|{\boldsymbol x}-{\boldsymbol p}_1\|_2},
\]
where the points \({\boldsymbol p}_0\) and \({\boldsymbol p}_1\) are located at the tips
of the bunny's ears. Moreover, the geometry has been rescaled to a
diameter of 2. The plot on the left-hand side of
Figure~\ref{fig:coeffStanford}
visualizes the sample data, while the plot on the right-hand side
shows the dominant coefficients in case of a threshold parameter
of \(10^{-2}\|{\boldsymbol f}^{\Sigma}\|_\infty\).
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[%
scale = 0.12,trim=1040 230 1050 280,clip]{%
./Results/StanfordBunnySignal.png}};
\draw(5,0)node{\includegraphics[%
scale = 0.12,trim=1000 200 1000 200,clip]{%
./Results/StanfordBunny1e-2Coeff.png}};
\draw(8,0)node{\includegraphics[%
scale = 0.14,trim=2130 400 600 400,clip]{%
./Results/StanfordBunny1e-2Coeff.png}};
\end{tikzpicture}
\caption{\label{fig:coeffStanford}Data on the Stanford bunny (left) and
dominant samplet coefficients (right).}
\end{center}
\end{figure}
\section{Compression of kernel matrices}\label{sec:kernelCompression}
\subsection{Kernel matrices}
The second application of samplets we consider
is the compression of matrices arising from positive
(semi-)definite kernels, as they emerge in kernel
methods, such as scattered data analysis, kernel
based learning or Gaussian process regression,
see for example \cite{HSS08,Schaback2006,
Wendland2004,Williams1998} and the references
therein.
We start by recalling the concept of a positive (semi-)definite kernel.
\begin{definition}\label{def:poskernel}
A symmetric kernel
$\mathcal{K}\colon\Omega\times\Omega\rightarrow\Rbb$ is
called \textit{positive (semi-)definite} on $\Omega\subset\mathbb{R}^d$,
iff \([\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j)]_{i,j=1}^N\)
is a symmetric and positive (semi-)definite matrix
for all
$\{{\boldsymbol x}_1, \ldots,{\boldsymbol x}_N\}\subset\Omega$
and all $N\in\mathbb{N}$.
\end{definition}
As a particular class of positive definite
kernels, we consider the \emph{Mat\'ern kernels} given by
\begin{equation}\label{eq:matkern}
k_\nu(r)\mathrel{\mathrel{\mathop:}=}\frac{2^{1-\nu}}{\Gamma(\nu)}\bigg(\frac {\sqrt{2\nu}r}
{\ell}\bigg)^\nu
K_\nu\bigg(\frac {\sqrt{2\nu}r}{\ell}\bigg),\quad r\geq 0,\ \ell >0 .
\end{equation}
Herein, $K_{\nu}$ is the modified Bessel function of the second
kind of order $\nu$ and $\Gamma$ is the gamma function.
The parameter $\nu$ controls the smoothness of the
kernel function. In particular,
the analytic squared-exponential kernel is
retrieved for $\nu\to\infty$. Specifically, we have
\begin{equation}
\begin{aligned}
k_{1/2}(r)=\exp\bigg(-\frac{r}{\ell}\bigg),
\quad k_{\infty}(r)=\exp\bigg(-\frac{r^2}{2\ell^2}\bigg).
\end{aligned}
\end{equation}
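For reference, the Matérn family can be evaluated numerically without any special-function library by means of the integral representation \(K_\nu(x)=\int_0^\infty e^{-x\cosh t}\cosh(\nu t)\,\mathrm{d}t\) of the modified Bessel function. The Python sketch below is an illustration of ours with a plain trapezoidal rule, not production code; it also allows checking the closed form \(k_{1/2}(r)=e^{-r/\ell}\):

```python
import math

def bessel_k(nu, x, n=4000, t_max=20.0):
    # K_nu(x) = \int_0^infty exp(-x cosh t) cosh(nu t) dt, trapezoidal rule.
    h = t_max / n
    total = 0.5 * (math.exp(-x)
                   + math.exp(-x * math.cosh(t_max)) * math.cosh(nu * t_max))
    for i in range(1, n):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * h

def matern(nu, r, ell=1.0):
    # Matern kernel k_nu(r), normalized such that k_nu(0) = 1.
    if r == 0.0:
        return 1.0
    s = math.sqrt(2.0 * nu) * r / ell
    return 2.0 ** (1.0 - nu) / math.gamma(nu) * s ** nu * bessel_k(nu, s)
```

For \(\nu=1/2\) and \(\ell=1\), `matern(0.5, r)` agrees with \(e^{-r}\) up to the quadrature error.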
A positive definite kernel in the sense of
Definition~\ref{def:poskernel}
is obtained by considering
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol x}^\prime)\mathrel{\mathrel{\mathop:}=}
k_\nu(\|{\boldsymbol x}-{\boldsymbol x}^\prime\|_2).
\]
Given the set of points \(X=\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}\), many
applications require the assembly and the inversion of the
\emph{kernel matrix}
\[
{\boldsymbol K}\mathrel{\mathrel{\mathop:}=}[\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j)]_{i,j=1}^N\in\Rbb^{N\times N}
\]
or an appropriately regularized version
\[
{\boldsymbol K}+\rho{\boldsymbol I},\quad \rho>0,
\]
thereof. If \(N\) is large, already the assembly and storage of
\({\boldsymbol K}\)
can easily become prohibitive. For the solution of an associated
linear system, the situation is even worse.
Fortunately, the kernel matrix can be compressed
by employing samplets. To this end, the evaluation of
the kernel function at the points ${\boldsymbol x}_i$ and ${\boldsymbol x}_j$
will be denoted by
\[
(\mathcal{K},\delta_{{\boldsymbol x}_i}\otimes\delta_{{\boldsymbol x}_j}
)_{\Omega\times\Omega}\mathrel{\mathrel{\mathop:}=}\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j).
\]
Hence, in view of $V = \{\delta_{{\boldsymbol x}_1},\ldots,\delta_{{\boldsymbol x}_N}\}$,
we may write the kernel matrix as
\[
{\boldsymbol K} = \big[(\mathcal{K},\delta_{{\boldsymbol x}_i}
\otimes\delta_{{\boldsymbol x}_j})_{\Omega\times\Omega}\big]_{i,j=1}^N.
\]
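Definition~\ref{def:poskernel} can be probed numerically on a concrete point set: a matrix built from a positive definite kernel at distinct points admits a Cholesky factorization. The following self-contained Python sketch is illustrative only (the point set, the length scale, and the ridge \(\rho\) are arbitrary choices of ours); it assembles the regularized exponential-kernel matrix \({\boldsymbol K}+\rho{\boldsymbol I}\) and factorizes it:

```python
import math, random

def kernel_matrix(pts, ell=1.0, rho=0.0):
    # K[i][j] = k_{1/2}(|x_i - x_j|) = exp(-|x_i - x_j| / ell), ridge rho on the diagonal.
    n = len(pts)
    return [[math.exp(-abs(pts[i] - pts[j]) / ell) + (rho if i == j else 0.0)
             for j in range(n)] for i in range(n)]

def cholesky(A):
    # Lower-triangular L with L L^T = A; raises ValueError if A is not positive definite.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0.0:
                    raise ValueError("matrix is not positive definite")
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

random.seed(0)
pts = [random.random() for _ in range(20)]
L = cholesky(kernel_matrix(pts, ell=0.1, rho=1e-10))   # succeeds: K + rho*I is SPD
```

The factorization succeeding is exactly the positive definiteness required by the definition; for a symmetric matrix that is not positive definite, the same routine raises an error.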
\subsection{Asymptotically smooth kernels}
The essential ingredient for the samplet compression of
kernel matrices is the \emph{asymptotical smoothness}
property of the kernel
\begin{equation}\label{eq:kernel_estimate}
\bigg|\frac{\partial^{|\boldsymbol\alpha|+|\boldsymbol\beta|}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}
\partial{\boldsymbol y}^{\boldsymbol\beta}} \mathcal{K}({\boldsymbol x},{\boldsymbol y})\bigg|
\le c_\mathcal{K} \frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}
{r^{|\boldsymbol\alpha|+|\boldsymbol\beta|}
\|{\boldsymbol x}-{\boldsymbol y}\|_2^{|\boldsymbol\alpha|+|\boldsymbol\beta|}},\quad
c_\mathcal{K},r>0,
\end{equation}
which is for example satisfied by the Mat\'ern kernels.
Using this estimate, we obtain the following result,
which is the basis for the matrix compression introduced
thereafter.
\begin{lemma}\label{lem:kernel_decay}
Consider two samplets $\sigma_{j,k}$ and $\sigma_{j',k'}$,
exhibiting $q+1$ vanishing moments with supporting
clusters \(\nu\) and \(\nu'\), respectively.
Assume that $\dist(\nu,\nu') > 0$. Then, for kernels
satisfying \eqref{eq:kernel_estimate}, it holds that
\begin{equation}\label{eq:kernel_decay}
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\le
c_\mathcal{K} \frac{\diam(\nu)^{q+1}\diam(\nu')^{q+1}}
{(dr\dist(\nu,\nu'))^{2(q+1)}}
\|{\boldsymbol\omega}_{j,k}\|_{1}\|{\boldsymbol\omega}_{j',k'}\|_{1}.
\end{equation}
\end{lemma}
\begin{proof}
Let ${\boldsymbol x}_0\in\nu$ and ${\boldsymbol y}_0\in\nu'$.
A Taylor expansion of the kernel with respect to
${\boldsymbol x}$ yields
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol y}) = \sum_{|\boldsymbol\alpha|\le q}
\frac{\partial^{|\boldsymbol\alpha|}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}}\mathcal{K}({\boldsymbol x}_0,{\boldsymbol y})
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
+ R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}),
\]
where the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y})$ is given by
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}\big)(1-s)^q\d s.
\end{align*}
Next, we expand the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y})$ with
respect to ${\boldsymbol y}$ and derive
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\sum_{|\boldsymbol\beta|\le q
}\frac{({\boldsymbol y}-{\boldsymbol y}_0)^{\boldsymbol\beta}}{\boldsymbol\beta!}\\
&\times\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
\frac{\partial^{|\boldsymbol\beta|}}{\partial{\boldsymbol y}^{\boldsymbol\beta}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}_0\big)(1-s)^q\d s
+ R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}).
\end{align*}
Here, the remainder $R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y})$
is given by
\begin{align*}
&R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}) = (q+1)^2
\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\frac{({\boldsymbol y}-{\boldsymbol y}_0)^{\boldsymbol\beta}}{\boldsymbol\beta!}\\
&\qquad\times\int_0^1\int_0^1\frac{\partial^{2(q+1)}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}\partial{\boldsymbol y}^{\boldsymbol\beta}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}_0
+t({\boldsymbol y}-{\boldsymbol y}_0)\big)(1-s)^q(1-t)^q\d t\d s.
\end{align*}
We thus arrive at the decomposition
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol y}) = p_{{\boldsymbol y}}({\boldsymbol x}) + p_{{\boldsymbol x}}({\boldsymbol y})
+ R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}),
\]
where $p_{{\boldsymbol y}}({\boldsymbol x})$ is a polynomial of degree $q$ in ${\boldsymbol x}$,
with coefficients depending on ${\boldsymbol y}$, while $p_{{\boldsymbol x}}({\boldsymbol y})$
is a polynomial of degree $q$ in ${\boldsymbol y}$, with coefficients depending
on ${\boldsymbol x}$. Due to the vanishing moments, we obtain
\[
(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}
=(R_{{\boldsymbol x}_0,{\boldsymbol y}_0},
\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}.
\]
In view of \eqref{eq:kernel_estimate}, we thus find
\begin{align*}
&|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|
= |(R_{{\boldsymbol x}_0,{\boldsymbol y}_0},
\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\\
&\qquad\le c_\mathcal{K} \Bigg(\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}{\boldsymbol\alpha!\boldsymbol\beta!}\Bigg)
\frac{(\|\cdot-{\boldsymbol x}_0\|^{q+1}_2,|\sigma_{j,k}|)_\Omega
(\|\cdot-{\boldsymbol y}_0\|^{q+1}_2,|\sigma_{j',k'}|)_\Omega}{
r^{2(q+1)}\dist(\nu,\nu')^{2(q+1)}}.
\end{align*}
Next, we have by means of multinomial coefficients that
\[
(|\boldsymbol\alpha|+|\boldsymbol\beta|)!
={|\boldsymbol\alpha|+|\boldsymbol\beta|\choose |\boldsymbol\beta|}
{|\boldsymbol\alpha|\choose\boldsymbol\alpha}
{|\boldsymbol\beta|\choose\boldsymbol\beta}
\boldsymbol\alpha!\boldsymbol\beta!,
\]
which in turn implies that
\begin{align*}
\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}{\boldsymbol\alpha!\boldsymbol\beta!}
&= {2(q+1)\choose q+1} \sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
{|\boldsymbol\alpha|\choose\boldsymbol\alpha}
{|\boldsymbol\beta|\choose\boldsymbol\beta}\\
&= {2(q+1)\choose q+1} d^{2(q+1)}
\le d^{2(q+1)} 2^{2(q+1)}.
\end{align*}
Moreover, we use
\[
(\|\cdot-{\boldsymbol x}_0\|_2^{q+1},|\sigma_{j,k}|)_\Omega
\le\bigg(\frac{\diam(\nu)}{2}\bigg)^{q+1}\|{\boldsymbol\omega}_{j,k}\|_{1},
\]
and likewise
\[
(\|\cdot-{\boldsymbol y}_0\|_2^{q+1},|\sigma_{j',k'}|)_\Omega
\le\bigg(\frac{\diam(\nu')}{2}\bigg)^{q+1}\|{\boldsymbol\omega}_{j',k'}\|_{1}.
\]
Combining all the estimates, we arrive at the desired
result \eqref{eq:kernel_decay}.
\end{proof}
\subsection{Matrix compression}
Lemma~\ref{lem:kernel_decay} immediately suggests
a compression strategy for kernel matrices in
samplet representation. We mention that this compression
differs from the wavelet matrix compression introduced
in \cite{DHS}, since we do not exploit the decay of the
samplet coefficients with respect to the level in case of
smooth data. This enables us to also consider a non-uniform
distribution of the points in $V$. Consequently, we use
the same accuracy on all levels, which is more similar
to the setting in \cite{BCR}.
\begin{theorem}
Let ${\boldsymbol K}^\Sigma_\varepsilon$ be the matrix obtained from the
kernel matrix
\[
{\boldsymbol K}^\Sigma\mathrel{\mathrel{\mathop:}=}\big[(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}
\big]_{j,j',k,k'}
\]
by setting to zero all coefficients which satisfy
\begin{equation}\label{eq:cutoff}
\dist(\nu,\nu')\ge\eta\max\{\diam(\nu),\diam(\nu')\},\quad\eta>0,
\end{equation}
where \(\nu\) is the cluster supporting \(\sigma_{j,k}\) and
\(\nu'\) is the cluster supporting \(\sigma_{j',k'}\), respectively.
Then, it holds
\[
\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_\varepsilon\big\|_F
\le c_\mathcal{K} \sqrt{c_{\operatorname{sum}}}\,{(\eta dr)^{-2(q+1)}}
m_q N\sqrt{\log(N)}
\]
for some constant \(c_{\operatorname{sum}}>0\),
where \(m_q\) is given by \eqref{eq:mq}.
\end{theorem}
\begin{proof}
We first fix the levels $j$ and $j'$. In view of
\eqref{eq:kernel_decay}, we can estimate any coefficient
which satisfies \eqref{eq:cutoff} by
\begin{align*}
&|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\\
&\qquad\le
c_\mathcal{K} \bigg(\frac{\min\{\diam(\nu),\diam(\nu')\}}
{\max\{\diam(\nu),\diam(\nu')\}}\bigg)^{q+1}
{(\eta dr)^{-2(q+1)}}\|{\boldsymbol\omega}_{j,k}\|_{1}\|
{\boldsymbol\omega}_{j',k'}\|_{1}.
\end{align*}
If we next set
\[
\theta_{j,j'}\mathrel{\mathrel{\mathop:}=} \max_{\nu\in\Tcal_j,\nu'\in\Tcal_{j'}}
\bigg\{\frac{\min\{\diam(\nu),\diam(\nu')\}}
{\max\{\diam(\nu),\diam(\nu')\}}\bigg\},
\]
then we obtain
\[
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|
\le c_\mathcal{K}\theta_{j,j'}^{q+1}{(\eta dr)^{-2(q+1)}}
\|{\boldsymbol\omega}_{j,k}\|_{1}\|{\boldsymbol\omega}_{j',k'}\|_{1}
\]
for all coefficients such that \eqref{eq:cutoff} holds.
In view of \eqref{eq:ell1-norm} and the fact that there are
at most $m_q$ samplets
per cluster, we arrive at
\[
\sum_{k,k'} \|{\boldsymbol\omega}_{j,k}\|_{1}^2\|{\boldsymbol\omega}_{j',k'}\|_{1}^2
\leq\sum_{k,k'}|\nu|\cdot|\nu'| \le m_q^2 N^2.
\]
Thus, for a fixed level-level block, we arrive at the estimate
\begin{align*}
\big\|{\boldsymbol K}^\Sigma_{j,j'}-{\boldsymbol K}^\Sigma_{\varepsilon,j,j'}\big\|_F^2
&\le\sum_{\begin{smallmatrix}k,k':\ \dist(\nu,\nu')\\
\ge\eta\max\{\diam(\nu),\diam(\nu')\}\end{smallmatrix}}
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|^2\\
&\le c_\mathcal{K}^2 \theta_{j,j'}^{2(q+1)} {(\eta dr)^{-4(q+1)}}
m_q^2 N^2.
\end{align*}
Finally, summation over all levels yields
\begin{align*}
\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_{\varepsilon}\big\|_F^2
&= \sum_{j,j'}\big\|{\boldsymbol K}^\Sigma_{j,j'}
-{\boldsymbol K}^\Sigma_{\varepsilon,j,j'}\big\|_F^2\\
&\le c_\mathcal{K}^2 {(\eta dr)^{-4(q+1)}}m_q^2 N^2
\sum_{j,j'} \theta_{j,j'}^{2(q+1)}\\
&\le c_\mathcal{K}^2 c_{\operatorname{sum}} {(\eta dr)^{-4(q+1)}}
m_q^2 N^2\log N,
\end{align*}
which is the desired claim.
\end{proof}
\begin{remark}
In case of uniformly distributed points ${\boldsymbol x}_i\in X$,
we have $\big\|{\boldsymbol K}^\Sigma\big\|_F\sim N$. Thus,
in this case we immediately obtain
\[
\frac{\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_\varepsilon\big\|_F}
{\big\|{\boldsymbol K}^\Sigma\big\|_F} \le c_\mathcal{K}
\sqrt{c_{\operatorname{sum}}} {(\eta dr)^{-2(q+1)}} m_q \log N.
\]
\end{remark}
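The effect described here can be reproduced in a few lines on a uniform 1D grid, where the orthonormal Haar basis serves as a simple stand-in for samplets (it has one vanishing moment; the actual experiments use samplets on scattered data, so this is an illustration of ours, not the paper's method). The sketch transforms the exponential-kernel matrix, thresholds the coefficients, and measures the relative Frobenius error together with the fraction of discarded entries:

```python
import math

def haar_matrix(n):
    # Orthonormal Haar transform matrix for n a power of two,
    # built recursively from averages and finest-level differences.
    if n == 1:
        return [[1.0]]
    Hh = haar_matrix(n // 2)
    s = 1.0 / math.sqrt(2.0)
    H = [[s * v for v in row for _ in (0, 1)] for row in Hh]   # coarse rows
    for k in range(n // 2):                                    # finest wavelet rows
        row = [0.0] * n
        row[2 * k], row[2 * k + 1] = s, -s
        H.append(row)
    return H

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

N = 64
pts = [i / (N - 1) for i in range(N)]
K = [[math.exp(-abs(x - y)) for y in pts] for x in pts]
H = haar_matrix(N)
Ht = [list(row) for row in zip(*H)]
C = matmul(matmul(H, K), Ht)                       # kernel matrix in the multiscale basis
tau = 1e-3 * max(abs(c) for row in C for c in row)
Ce = [[c if abs(c) >= tau else 0.0 for c in row] for row in C]
dropped = sum(1 for row in Ce for c in row if c == 0.0)
num = math.sqrt(sum((C[i][j] - Ce[i][j]) ** 2 for i in range(N) for j in range(N)))
den = math.sqrt(sum(c * c for row in C for c in row))
rel_err = num / den
```

Most coefficients fall below the threshold, while the relative Frobenius error stays small, in line with the error estimate above.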
\begin{theorem}
The compressed matrix ${\boldsymbol K}^\Sigma_\varepsilon$ consists of only $\mathcal{O}(m_q^2
N\log N)$ relevant matrix coefficients, provided
that the points in $V$ are uniformly
distributed in $\Omega$.
\end{theorem}
\begin{proof}
We fix $j,j'$ and assume $j\ge j'$. In case of uniformly
distributed points, it holds $\diam(\nu)\sim 2^{-j_\nu/d}$.
Hence, for the cluster $\nu_{j',k'}$, there exist only
$\mathcal{O}(2^{j-j'})$ clusters $\nu_{j,k}$ from
level $j$, which do not satisfy the cut-off criterion
\eqref{eq:cutoff}. Since each cluster contains at most
$m_q$ samplets, we hence arrive at
\[
\sum_{j=0}^J \sum_{j'\le j}m_q^2\, 2^{j'} 2^{j-j'}
= m_q^2 \sum_{j=0}^J (j+1) 2^{j} \sim m_q^2 N\log N,
\]
which implies the assertion.
\end{proof}
\begin{remark}
The chosen cut-off criterion \eqref{eq:cutoff} coincides
with the so-called \emph{admissibility condition} used
by hierarchical matrices. We particularly refer here to
\cite{Boe10}, as we will later on rely on the \(\mathcal{H}^2\)-matrix
method presented there for the fast assembly of the
compressed kernel matrix.
\end{remark}
\subsection{Compressed matrix assembly}
For a given pair of clusters, we can now determine whether the
corresponding entries need to be calculated. As there are
$\mathcal{O}(N)$ clusters, naively checking the cut-off criterion for
all pairs would still take $\mathcal{O}(N^{2})$ operations, however.
Hence, we require smarter means to determine the non-negligible cluster
pairs. For this purpose, we first state the transferability of the
cut-off criterion to son clusters, compare \cite{DHS} for a proof.
\begin{lemma}
Let $\nu$ and $\nu'$ be clusters satisfying the cut-off criterion
\eqref{eq:cutoff}. Then, for the son clusters $\nu_{\mathrm{son}}$
of $\nu$ and $\nu_{\mathrm{son}}'$ of $\nu'$, we have
\begin{align*}
\dist(\nu,\nu_{\mathrm{son}}')
&\ge\eta\max\{\diam(\nu),\diam(\nu_{\mathrm{son}}')\},\\
\dist(\nu_{\mathrm{son}},\nu')
&\ge\eta\max\{\diam(\nu_{\mathrm{son}}),\diam(\nu')\},\\
\dist(\nu_{\mathrm{son}},\nu_{\mathrm{son}}')
&\ge\eta\max\{\diam(\nu_{\mathrm{son}}),\diam(\nu_{\mathrm{son}}')\}.
\end{align*}
\end{lemma}
The lemma tells us that we may omit cluster pairs whose father
clusters already satisfy the cut-off criterion. This will be essential
for the assembly of the compressed matrix.
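The resulting recursion can be sketched for 1D index-interval clusters (the identifiers and the leaf size are our choices for illustration, not part of the reference implementation). The product of two cluster trees is partitioned into admissible far-field blocks and inadmissible near-field leaf blocks, and, by the lemma, the recursion never descends below a father pair that is already admissible:

```python
def build_tree(lo, hi, leaf_size=4):
    # Balanced binary cluster tree over the index range [lo, hi); a node is (lo, hi, sons).
    if hi - lo <= leaf_size:
        return (lo, hi, [])
    mid = (lo + hi) // 2
    return (lo, hi, [build_tree(lo, mid, leaf_size), build_tree(mid, hi, leaf_size)])

def diam(nu, pts):
    return pts[nu[1] - 1] - pts[nu[0]]

def dist(nu, nup, pts):
    # Distance of the bounding intervals of two clusters (points are sorted).
    return max(pts[nup[0]] - pts[nu[1] - 1], pts[nu[0]] - pts[nup[1] - 1], 0.0)

def blocks(nu, nup, pts, eta=1.0):
    # Partition nu x nup into "far" (admissible) and "near" (inadmissible leaf) blocks.
    d = dist(nu, nup, pts)
    if d > 0.0 and d >= eta * max(diam(nu, pts), diam(nup, pts)):
        yield ("far", nu, nup)                     # cut-off criterion holds: stop here
    elif not nu[2] and not nup[2]:
        yield ("near", nu, nup)                    # both leaves: dense block
    elif nu[2] and (not nup[2] or diam(nu, pts) >= diam(nup, pts)):
        for son in nu[2]:                          # split the larger cluster
            yield from blocks(son, nup, pts, eta)
    else:
        for son in nup[2]:
            yield from blocks(nu, son, pts, eta)

pts = [i / 63.0 for i in range(64)]
root = build_tree(0, 64)
B = list(blocks(root, root, pts))
```

Every index pair is covered by exactly one block, so the partition is a valid block decomposition of the full matrix.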
The computation of the compressed kernel matrix
can be sped up further by using
\(\Hcal^2\)-matrix techniques, see
\cite{HB02,Gie01}. Similarly to \cite{AHK14,HKS05}, we shall
rely here on \(\Hcal^2\)-matrices for this purpose.
The idea of \(\Hcal^2\)-matrices is to approximate the kernel
interaction
for sufficiently distant clusters \(\nu\) and \(\nu'\) in the sense
of the admissibility condition \eqref{eq:cutoff} by means
of the interpolation based \(\Hcal^2\)-matrix approach.
More precisely, given a suitable set of interpolation
points \(\{{\boldsymbol\xi}_t^\nu\}_t\) for each cluster \(\nu\) with
associated Lagrange polynomials \(\{\mathcal{L}_{t}^{\nu}
({\boldsymbol x})\}_t\), we introduce the interpolation operator
\[
\mathcal{I}^{\nu,\nu'}[\mathcal{K}]({\boldsymbol x}, {\boldsymbol y})
= \sum_{s,t} \mathcal{K}({\boldsymbol\xi}_{s}^{\nu}, {\boldsymbol\xi}_{t}^{\nu'})
\mathcal{L}_{s}^{\nu}({\boldsymbol x}) \mathcal{L}_{t}^{\nu'}({\boldsymbol y})
\]
and approximate an admissible matrix block via
\begin{align*}
{\boldsymbol K}^\Delta_{\nu,\nu'}
&=[(\mathcal{K},\delta_{\boldsymbol x}\otimes
\delta_{\boldsymbol y})_{\Omega\times\Omega}]_{{\boldsymbol x}\in\nu,{\boldsymbol y}\in\nu'}\\
&\approx\sum_{s,t} \mathcal{K}({\boldsymbol\xi}_{s}^{\nu}, {\boldsymbol\xi}_{t}^{\nu'})
[(\mathcal{L}_{s}^{\nu},\delta_{\boldsymbol x})_\Omega]_{{\boldsymbol x}\in\nu}
[(\mathcal{L}_{t}^{\nu'},\delta_{\boldsymbol y})_\Omega]_{{\boldsymbol y}\in\nu'}
\mathrel{=\mathrel{\mathop:}}{\boldsymbol V}^{\nu}_\Delta{\boldsymbol S}^{\nu,\nu'}
({\boldsymbol V}^{\nu'}_\Delta)^\intercal.
\end{align*}
Herein, the \emph{cluster bases} are given according to
\begin{equation}\label{eq:cluster bases}
{\boldsymbol V}^{\nu}_\Delta\mathrel{\mathrel{\mathop:}=} [(\mathcal{L}_{s}^{\nu},
\delta_{\boldsymbol x})_\Omega]_{{\boldsymbol x}\in\nu},\quad
{\boldsymbol V}^{\nu'}_\Delta\mathrel{\mathrel{\mathop:}=}[(\mathcal{L}_{t}^{\nu'},
\delta_{\boldsymbol y})_\Omega]_{{\boldsymbol y}\in\nu'},
\end{equation}
while the \emph{coupling matrix} is given by
\(
{\boldsymbol S}^{\nu,\nu'}\mathrel{\mathrel{\mathop:}=}[\mathcal{K}({\boldsymbol\xi}_{s}^{\nu},
{\boldsymbol\xi}_{t}^{\nu'})]_{s,t}.
\)
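For an admissible pair of clusters, the factorization \({\boldsymbol V}^{\nu}_\Delta{\boldsymbol S}^{\nu,\nu'}({\boldsymbol V}^{\nu'}_\Delta)^\intercal\) can be checked directly in 1D with Chebyshev interpolation points. The following is a toy example of ours, with an arbitrary kernel, cluster geometry, and polynomial degree:

```python
import math

def cheb_nodes(a, b, p):
    # p + 1 Chebyshev points on [a, b]
    return [0.5 * (a + b) + 0.5 * (b - a)
            * math.cos((2 * t + 1) * math.pi / (2 * p + 2)) for t in range(p + 1)]

def lagrange(nodes, s, x):
    # s-th Lagrange basis polynomial for the given nodes, evaluated at x
    val = 1.0
    for t, xt in enumerate(nodes):
        if t != s:
            val *= (x - xt) / (nodes[s] - xt)
    return val

kernel = lambda x, y: math.exp(-abs(x - y))
X = [i / 9.0 for i in range(10)]           # points of cluster nu,  inside [0, 1]
Y = [3.0 + i / 9.0 for i in range(10)]     # points of cluster nu', inside [3, 4]
p = 6
xi = cheb_nodes(0.0, 1.0, p)
yj = cheb_nodes(3.0, 4.0, p)
V  = [[lagrange(xi, s, x) for s in range(p + 1)] for x in X]    # cluster basis V^nu
Vp = [[lagrange(yj, t, y) for t in range(p + 1)] for y in Y]    # cluster basis V^nu'
S  = [[kernel(a, b) for b in yj] for a in xi]                   # coupling matrix

def approx(i, j):
    return sum(V[i][s] * S[s][t] * Vp[j][t]
               for s in range(p + 1) for t in range(p + 1))

err = max(abs(kernel(x, y) - approx(i, j))
          for i, x in enumerate(X) for j, y in enumerate(Y))
```

Since the two clusters are well separated, the kernel is analytic on their product, and the rank-\((p+1)^2\) approximation is accurate to many digits.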
Directly transforming the cluster bases into their corresponding
samplet representation results in a log-linear cost. This can be
avoided by the use of nested cluster bases, as they have been
introduced for \(\Hcal^2\)-matrices. For the sake of simplicity, we
assume from now on that tensor product polynomials of degree
\(p\) are used for the kernel interpolation at all different cluster
combinations. As a consequence, the Lagrange polynomials
of a father cluster can exactly be represented by those of the
son clusters. Introducing the \emph{transfer matrices}
\(
{\boldsymbol T}^{\nu_{\mathrm{son}}}
\mathrel{\mathrel{\mathop:}=}[\mathcal{L}_s^\nu({\boldsymbol\xi}_t^{\nu_{\mathrm{son}}})]_{s,t},
\)
there holds
\[
\mathcal{L}_s^\nu({\boldsymbol x})=\sum_t{\boldsymbol T}^{\nu_{\mathrm{son}}}_{s,t}
\mathcal{L}_t^{\nu_{\mathrm{son}}}({\boldsymbol x}),\quad{\boldsymbol x}
\in B_{\nu_{\mathrm{son}}}.
\]
Exploiting this relation in the construction of the cluster bases
\eqref{eq:cluster bases} finally leads to
\[
{\boldsymbol V}^{\nu}_\Delta=\begin{bmatrix}
{\boldsymbol V}^{\nu_{\mathrm{son}_1}}_\Delta{\boldsymbol T}^{\nu_{\mathrm{son}_1}}\\
{\boldsymbol V}^{\nu_{\mathrm{son}_2}}_\Delta{\boldsymbol T}^{\nu_{\mathrm{son}_2}}
\end{bmatrix}.
\]
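The exactness of this refinement relation is easy to verify numerically: a father Lagrange polynomial of degree \(p\), re-expanded in the son's degree-\(p\) Lagrange basis via the transfer matrix, is reproduced up to rounding. A 1D Python sketch of ours (Chebyshev points are one possible choice of interpolation points):

```python
import math, random

def cheb_nodes(a, b, p):
    # p + 1 Chebyshev points on [a, b]
    return [0.5 * (a + b) + 0.5 * (b - a)
            * math.cos((2 * t + 1) * math.pi / (2 * p + 2)) for t in range(p + 1)]

def lagrange(nodes, s, x):
    # s-th Lagrange basis polynomial for the given nodes, evaluated at x
    val = 1.0
    for t, xt in enumerate(nodes):
        if t != s:
            val *= (x - xt) / (nodes[s] - xt)
    return val

p = 5
father = cheb_nodes(0.0, 1.0, p)    # interpolation points of the father box
son = cheb_nodes(0.0, 0.5, p)       # interpolation points of one son box
# transfer matrix T[s][t] = L_s^father(xi_t^son)
T = [[lagrange(father, s, son[t]) for t in range(p + 1)] for s in range(p + 1)]

random.seed(1)
err = 0.0
for _ in range(20):
    x = 0.5 * random.random()       # sample point inside the son box
    for s in range(p + 1):
        lhs = lagrange(father, s, x)
        rhs = sum(T[s][t] * lagrange(son, t, x) for t in range(p + 1))
        err = max(err, abs(lhs - rhs))
```

Since both bases span the same polynomial space of degree \(p\), the identity is exact, and `err` is on the order of machine precision.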
Combining this refinement relation with the recursive nature of the
samplet basis results
in the variant of the discrete samplet transform summarized in
Algorithm~\ref{algo:multiscaleClusterBasis}.\bigskip
\begin{algorithm}[H]
\caption{Recursive computation of the multiscale cluster basis}
\label{algo:multiscaleClusterBasis}
\KwData{Cluster tree $\Tcal$, transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu}$, ${\boldsymbol Q}_{j,\Sigma}^{\nu}]$,
nested cluster bases ${\boldsymbol V}_{\Delta}^{\nu}$ for leaf clusters and
transformation matrices ${\boldsymbol T}^{\nu_{\mathrm{son}_1}}$,
${\boldsymbol T}^{\nu_{\mathrm{son}_2}}$ for non-leaf clusters.
}
\KwResult{Multiscale cluster basis matrices ${\boldsymbol V}_{\Phi}^{\nu}$,
${\boldsymbol V}_{\Sigma}^{\nu}$ for all clusters $\nu \in\Tcal$.}
\Begin{
\FuncSty{computeMultiscaleClusterBasis}($X$)\;
}
\end{algorithm}
\begin{function}[H]
\caption{computeMultiscaleClusterBasis($\nu$)}
\Begin{
\uIf{$\nu$ is a leaf cluster}{
store $\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal} {\boldsymbol V}_{\Delta}^{\nu}$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
$\computeMultiscaleClusterBasis(\nu')$
}
store $\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal} \begin{bmatrix}
{\boldsymbol V}^{\nu_{\mathrm{son}_1}}_\Phi{\boldsymbol T}^{\nu_{\mathrm{son}_1}}\\
{\boldsymbol V}^{\nu_{\mathrm{son}_2}}_\Phi{\boldsymbol T}^{\nu_{\mathrm{son}_2}}
\end{bmatrix}$
}
}
\end{function}\medskip
Having the multiscale cluster bases at our disposal, the next step is
the assembly of the compressed kernel matrix. The computation of the
required matrix blocks is exclusively
based on the two refinement relations
\begin{align*}
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}
&=
\begin{bmatrix}
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\\
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega}
&
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\end{bmatrix}\\
&=\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]
\end{align*}
and
\begin{align*}
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}
&=\begin{bmatrix}
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\\
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega}
&
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\end{bmatrix}\\
&=
\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^\intercal
\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}.
\end{align*}
We obtain the following function, which is the key ingredient for the
computation of the compressed kernel matrix.\bigskip
\begin{function}[H]
\caption{recursivelyDetermineBlock($\nu$, $\nu'$)}
\KwResult{Approximation of the block \scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}.}
\Begin{
\uIf{$(\nu, \nu')$ is admissible}{
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}
{\boldsymbol S}^{\nu,\nu'} \big[
({\boldsymbol V}_{\Phi}^{\nu'})^\intercal,
({\boldsymbol V}_{\Sigma}^{\nu'})^\intercal
\big]$}}
}
\uElseIf{$\nu$ and $\nu'$ are leaf clusters}{
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal}{\boldsymbol K}_{\nu,\nu'}^{\Delta}
\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]$}}
}
\uElseIf{$\nu'$ is not a leaf cluster and $\nu$ is a leaf cluster}{
\For{all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$} $
\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu, \nu_{\mathrm{son}}')$
}
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son},1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son},2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son},1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son},2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]$}}
}
\uElseIf{$\nu$ is not a leaf cluster and $\nu'$ is a leaf cluster}{
\For{all sons \(\nu_{\mathrm{son}}\) of \(\nu\)}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$} $\mathrel{\mathrel{\mathop:}=}
\recursivelyDetermineBlock(\nu_{\mathrm{son}}, \nu')$
}
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}$}.
}
}
\Else(){
\For{all sons $\nu_{\mathrm{son}}$ of $\nu$ {\bf and}
all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu_{\mathrm{son}},
\nu_{\mathrm{son}}')$
}
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{\Phi}^{\nu},
{\boldsymbol Q}_{\Sigma}^{\nu}\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}
\end{bmatrix} \big[{\boldsymbol Q}_{\Phi}^{\nu'},
{\boldsymbol Q}_{\Sigma}^{\nu'}\big]$}}
}
}
\end{function}\bigskip
Now, in order to assemble the compressed kernel matrix, we require
two nested recursive traversals of the cluster tree, which is
processed in a depth-first manner. Algorithm~\ref{algo:h2Wavelet}
first computes the lower right matrix block and advances from bottom
to top and from right to left. To this end, the two recursive
functions \texttt{setupColumn} and \texttt{setupRow} are introduced.%
\bigskip
\begin{algorithm}[H]
\caption{Computation of the compressed kernel matrix}
\label{algo:h2Wavelet}
\KwData{Cluster tree $\Tcal$, multiscale cluster bases
${\boldsymbol V}_{\Phi}^{\nu}$, ${\boldsymbol V}_{\Sigma}^{\nu}$
and transformations $[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Sparse matrix ${\boldsymbol K}^\Sigma_\varepsilon$}
\Begin{
\FuncSty{setupColumn}($X$)\;
store the remaining blocks
${\boldsymbol K}^\Sigma_{\varepsilon,\nu,X}$ for \(%
\nu\in\Tcal\setminus\{X\}\)
in ${\boldsymbol K}^\Sigma_\varepsilon$ (they have already
been computed by earlier calls to %
\FuncSty{recursivelyDetermineBlock})
}
\end{algorithm}\bigskip
The purpose of the function \texttt{setupColumn} is to
recursively traverse the column cluster tree, i.e.\ the
cluster tree associated to the columns of the matrix.
Before returning, each instance of \texttt{setupColumn}
calls the function \texttt{setupRow}, which performs
the actual assembly of the compressed matrix.\bigskip
\begin{function}[H]
\caption{setupColumn($\nu'$)}
\Begin{
\For{all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
$\setupColumn(\nu_{\mathrm{son}}')$
}
store ${\boldsymbol K}^\Sigma_{\varepsilon,X,\nu'}\mathrel{\mathrel{\mathop:}=}
\FuncSty{setupRow}(X, \nu')$
in ${\boldsymbol K}^\Sigma_{\varepsilon}$
}
\end{function}\bigskip
For a given column cluster \(\nu'\), the function
\texttt{setupRow}
recursively traverses the row cluster tree, i.e.\
the cluster tree associated to the rows of the matrix,
and
assembles the corresponding column of the compressed
matrix.
The function reuses the already computed blocks to the
right of the column under consideration, as well as the
blocks at the bottom of that very column.\bigskip
\begin{function}[H]
\caption{setupRow($\nu$, $\nu'$)}
\Begin{
\uIf{$\nu$ is not a leaf}{
\For{all sons \(\nu_{\mathrm{son}}\) of \(\nu\)}{
\uIf{\(\nu_{\mathrm{son}}\) and \(\nu'\) are not %
admissible}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=} \setupRow(\nu_{\mathrm{son}}, \nu')$
}
\Else{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu_{\mathrm{son}},%
\nu')$}
}
\scalebox{1}{$
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{\Phi}^{\nu},
{\boldsymbol Q}_{\Sigma}^{\nu}
\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}$
}
}
\Else{
\uIf{$\nu'$ is a leaf cluster}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(%
\nu, \nu')$
}
\Else{
\For{all sons \(\nu_{\mathrm{son}}'\) of \(\nu'\)}{
\uIf{\(\nu\) and \(\nu_{\mathrm{son}}'\) %
are not admissible}{
load already computed block \scalebox{1}{%
$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}
}
\Else
{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=} \recursivelyDetermineBlock(%
\nu, \nu_{\mathrm{son}}')$
}
}
}
\scalebox{1}{
$\begin{bmatrix}{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{\Phi}^{\nu'},
{\boldsymbol Q}_{\Sigma}^{\nu'}\big]$}
}
store ${\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}$ %
as part of ${\boldsymbol K}^\Sigma_\varepsilon$
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}}
}
\end{function}
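To make the recursion pattern explicit, the following Python sketch mimics the interplay of \texttt{setupColumn} and \texttt{setupRow} on a toy cluster tree. It merely records which matrix blocks would be assembled and in which order; the class and function names are illustrative and not part of the actual implementation, which in addition distinguishes admissible from inadmissible blocks.

```python
# Toy model of the setupColumn / setupRow recursion: instead of matrix
# blocks we only record which (row cluster, column cluster) pairs get
# assembled, and in which order.

class Cluster:
    def __init__(self, name, sons=()):
        self.name = name
        self.sons = list(sons)

def setup_column(col, root, assembled):
    # post-order traversal of the column cluster tree ...
    for son in col.sons:
        setup_column(son, root, assembled)
    # ... before assembling the corresponding matrix column
    setup_row(root, col, assembled)

def setup_row(row, col, assembled):
    # recursive traversal of the row cluster tree
    for son in row.sons:
        setup_row(son, col, assembled)
    assembled.append((row.name, col.name))

# binary cluster tree with root "0" and two leaves
tree = Cluster("0", [Cluster("00"), Cluster("01")])
blocks = []
setup_column(tree, tree, blocks)
```

Each of the three row clusters is paired exactly once with each of the three column clusters, and the root block is assembled last.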
\begin{remark}
Algorithm~\ref{algo:h2Wavelet} has a cost of
\(\mathcal{O}(N\log N)\) and requires an additional
storage of \(\mathcal{O}(N\log N)\) if all stored
blocks are directly released when they are not required
anymore. We refer to \cite{AHK14} for all the details.
\end{remark}
\section{Numerical results I\!I}\label{sec:Num2}
All computations in this section have been performed on
a single node with two Intel Xeon E5-2650 v3 @\,2.30\,GHz
CPUs and up to 512\,GB of main memory\footnote{The full
specifications can be found at
\texttt{https://www.euler.usi.ch/en/research/resources}.}.
In order to obtain consistent timings, only a single
core was used for all computations.
\subsection*{Benchmark problem}
To benchmark the compression of kernel matrices,
we consider the exponential kernel
\[
k({\boldsymbol x},{\boldsymbol y})=e^{-10
\frac{\|{\boldsymbol x}-{\boldsymbol y}\|_2}{\sqrt{d}}}
\]
evaluated at an increasing number of uniformly
distributed random sample points
in the hypercube \([-1,1]^d\) for \(d=1,2,3\). As a
measure of sparsity, we introduce the
\emph{average number of nonzeros per row}
\[
\operatorname{anz}({\boldsymbol A})
\mathrel{\mathrel{\mathop:}=}\frac{\operatorname{nnz}({\boldsymbol A})}{N},\quad
{\boldsymbol A}\in\Rbb^{N\times N},
\]
where \(\operatorname{nnz}({\boldsymbol A})\) is the number of
nonzero entries of
\({\boldsymbol A}\). Besides the compression, we also report
the fill-in generated
by the Cholesky factorization in combination with the
nested dissection reordering from \cite{KK98}.
For the reordering and the Cholesky
factorization, we rely on \textsc{Matlab}%
R2020a\footnote{Version 9.8.0.1396136,
The MathWorks Inc., Natick, Massachusetts, 2020.},
while the
samplet compression is implemented in \texttt{C++11}
using the
\texttt{Eigen} template
library\footnote{\texttt{https://eigen.tuxfamily.org/}}
for linear algebra operations. For the computations,
we consider
a polynomial degree of 3 for the \(\Hcal^2\)-matrix
representation
and \(q+1=3\) vanishing moments for the samplets.
In addition, we have performed a thresholding of the
computed matrix coefficients that were smaller than
\(\varepsilon=10^{-3}\).
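The sparsity measure and the thresholding step are straightforward to reproduce for any sparse matrix. The following Python sketch (using \texttt{scipy} for illustration only; the benchmark itself is implemented in \texttt{C++}) evaluates $\operatorname{anz}({\boldsymbol A})$ and discards entries below the threshold:

```python
import numpy as np
from scipy.sparse import csr_matrix

def anz(A):
    """Average number of nonzeros per row: nnz(A) / N."""
    return A.nnz / A.shape[0]

def threshold(A, eps=1e-3):
    """Discard all stored entries with modulus below eps."""
    B = A.copy()
    B.data[np.abs(B.data) < eps] = 0.0
    B.eliminate_zeros()
    return B

A = csr_matrix(np.array([[1.0, 1e-4, 0.0],
                         [0.0, 2.0, 1e-5],
                         [0.5, 0.0, 3.0]]))
B = threshold(A)   # drops the two entries below 1e-3
```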
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{loglogaxis}[width=0.42\textwidth,grid=both,
ymin= 0, ymax = 1e5, xmin = 256, xmax =1.2e6,
legend style={legend pos=south east,font=\small},
ylabel={\small wall time}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle]
table[x=npts,y=ctim]{./Results/matlabLogger1.txt};
\label{pgfplots:plot1D}
\addplot[line width=0.7pt,color=darkgreen,mark=square]
table[x=npts,y=ctim]{./Results/matlabLogger2.txt};
\label{pgfplots:plot2D}
\addplot[line width=0.7pt,color=red,mark=o]
table[x=npts,y=ctim]{./Results/matlabLogger3.txt};
\label{pgfplots:plot3D}
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x}]{%
./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)}]{%
./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)^2}]{%
./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)^3}]{%
./Results/matlabLogger1.txt};
\label{pgfplots:asymps}
\end{loglogaxis}
\begin{loglogaxis}[%
xshift=0.405\textwidth,width=0.42\textwidth,grid=both,
ymin= 0, ymax = 2e3, xmin = 256, xmax =1.2e6,
ytick={1e1, 1e2, 1e3, 1e4},
legend style={legend pos=south east,font=\small},
ylabel={\small $\operatorname{anz}({\boldsymbol K}%
^\Sigma_\varepsilon)$}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,%
mark=triangle] table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=darkgreen,mark=square]%
table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger2.txt};
\addplot[line width=0.7pt,color=red,mark=o]%
table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger3.txt};
\end{loglogaxis}
\matrix[
matrix of nodes,
anchor=north west,
draw,
inner sep=0.1em,
column 1/.style={nodes={anchor=center}},
column 2/.style={nodes={anchor=west},font=\strut},
]
at([xshift=0.02\textwidth]current axis.north east){
\ref{pgfplots:plot1D}& \(d=1\)\\
\ref{pgfplots:plot2D}& \(d=2\)\\
\ref{pgfplots:plot3D}& \(d=3\)\\
\ref{pgfplots:asymps}& \(N\!\log^\alpha\!\! N\)\\};
\end{tikzpicture}
\caption{\label{fig:compTimesNNZ}Assembly times
(left) and average numbers of nonzeros per row (right)
versus the number of sample points $N$ for the
exponential kernel matrix.}
\end{center}
\end{figure}
The left-hand side of Figure~\ref{fig:compTimesNNZ}
shows the wall time for the assembly of the compressed
kernel matrices. The different dashed lines indicate
the asymptotics \(N\log^\alpha N\)
for \(\alpha=0,1,2,3\). It can be seen that, for
increasing number
\(N\) of points and the dimensions \(d=1,2,3\) under
consideration, all computation times approach the
expected rate
of \(N\log N\). The right-hand side of
Figure~\ref{fig:compTimesNNZ} shows the average number
of nonzeros per row for an increasing number
\(N\) of points. This number becomes constant or even
decreases, as expected.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{loglogaxis}[width=0.42\textwidth,grid=both,
ymin= 0, ymax = 6e4, xmin = 500, xmax =1.2e6,
legend style={legend pos=south east,font=\small},
ylabel={\small wall time}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle]
table[x=npts,y=Ltim]{./Results/matlabLogger1.txt};
\label{pgfplots:plot1D1}
\addplot[line width=0.7pt,color=darkgreen,mark=square]
table[x=npts,y=Ltim]{./Results/matlabLogger2.txt};
\label{pgfplots:plot2D1}
\addplot[line width=0.7pt,color=red,mark=o]
table[x=npts,y=Ltim]{./Results/matlabLogger3.txt};
\label{pgfplots:plot3D1}
\addplot[line width=0.7pt,color=black,dashed]
table[x=npts,y expr={0.7e-6 * x^1.5}]{%
./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black,dashed]
table[x=npts,y expr={0.1e-6 * x^2}]{%
./Results/matlabLogger1.txt};
\label{pgfplots:asymps1}
\end{loglogaxis}
\begin{loglogaxis}[%
xshift=0.405\textwidth,width=0.42\textwidth,grid=both,
ymin= 0, ymax = 4e4, xmin = 256,
xmax =1.2e6,ytick={1e1, 1e2, 1e3, 1e4},
legend style={legend pos=south east,font=\small},
ylabel={\small $\operatorname{anz}({\boldsymbol L})$},
xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle]%
table[x=npts,
y = nzL]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=darkgreen,mark=square]%
table[x=npts,
y = nzL]{./Results/matlabLogger2.txt};
\addplot[line width=0.7pt,color=red,mark=o]%
table[x=npts,
y = nzL]{./Results/matlabLogger3.txt};
\end{loglogaxis}
\matrix[
matrix of nodes,
anchor=north west,
draw,
inner sep=0.1em,
column 1/.style={nodes={anchor=center}},
column 2/.style={nodes={anchor=west},font=\strut},
]
at([xshift=0.02\textwidth]current axis.north east){
\ref{pgfplots:plot1D1}& \(d=1\)\\
\ref{pgfplots:plot2D1}& \(d=2\)\\
\ref{pgfplots:plot3D1}& \(d=3\)\\
\ref{pgfplots:asymps1}& \(N^{\frac{3}{2}}\),
\(N^2\)\\};
\end{tikzpicture}
\caption{\label{fig:cholTimesNNZ}Computation times
for the Cholesky factorization (left) and average
numbers of nonzeros per row of the Cholesky factor
(right) versus the number of sample points $N$ for
the exponential kernel matrix.}
\end{center}
\end{figure}
Next, we examine the Cholesky factorization of
the compressed
kernel matrix. As the largest eigenvalue of the
kernel matrix
grows proportionally to the number \(N\) of points,
while the smallest eigenvalue is
given by the ridge parameter, the condition number
grows with \(N\) as well.
Hence, to obtain a constant condition number for
increasing \(N\), the ridge parameter needs to be
adjusted accordingly.
However, as we are only interested in the generated
fill-in and the computation times,
we neglect this fact and just fix the ridge parameter
to \(\rho=1\) for all considered \(N\) and \(d=1,2,3\).
The obtained results are found in
Figure~\ref{fig:cholTimesNNZ}. Herein, on the left-hand
side, the wall times for the Cholesky factorization of
the reordered matrix are found. For \(d=1\)
the behavior is a bit peculiar as
the average number of nonzeros per row decreases when
the number \(N\) of points increases. This indicates
that the kernel function is already fully resolved up
to the threshold parameter on the coarser levels.
For \(d=2\), the observed rate is slightly better than
the expected one of \(N^{\frac{3}{2}}\) for the
Cholesky factorization, while the scaling
is approximately like \(N^2\) for \(d=3\). On the
right-hand side of the
same figure, it can be seen that the fill-in
remains rather moderate.
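The benefit of a fill-in reducing reordering can be reproduced on a toy example. The Python sketch below uses the reverse Cuthill--McKee ordering from \texttt{scipy} as a readily available stand-in for the nested dissection reordering of \cite{KK98} and compares the number of nonzeros of the Cholesky factor with and without reordering on an arrow matrix:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def chol_nnz(A):
    """Number of (numerically) nonzero entries of the Cholesky factor."""
    L = np.linalg.cholesky(A)
    return int(np.count_nonzero(np.abs(L) > 1e-12))

# arrow matrix: a dense first row/column causes complete fill-in
# when the matrix is factorized in the natural ordering
n = 8
A = 4.0 * np.eye(n)
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = 4.0

perm = reverse_cuthill_mckee(csr_matrix(A), symmetric_mode=True)
Ap = A[np.ix_(perm, perm)]   # symmetrically reordered matrix

nnz_plain = chol_nnz(A)      # fully dense factor: n(n+1)/2 entries
nnz_reord = chol_nnz(Ap)     # reordering avoids almost all fill-in
```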
A visualization of the matrix patterns for the matrix
\({\boldsymbol K}^\Sigma_\varepsilon+\rho{\boldsymbol I}\),
the reordered matrix and the Cholesky factor for
\(N=131\,072\) points is
shown in Figure~\ref{fig:patterns}. Each dot
corresponds to a block of
\(256\times 256\) matrix entries and its intensity
indicates the number
of nonzero entries, where darker blocks contain more
entries than lighter blocks.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,4.5) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{%
./Results/Kmat_1D.eps}};
\draw(4,4.5) node {\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/KmatND_1D.eps}};
\draw(8,4.5) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/Lmat_1D.eps}};
\draw(4,6.6) node {$d=1$};
\draw(0,0) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/Kmat_2D.eps}};
\draw(4,0) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/KmatND_2D.eps}};
\draw(8,0) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/Lmat_2D.eps}};
\draw(4,2.1) node {$d=2$};
\draw(0,-4.5) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/Kmat_3D.eps}};
\draw(4,-4.5) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/KmatND_3D.eps}};
\draw(8,-4.5) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/Lmat_3D.eps}};
\draw(4,-2.4) node {$d=3$};
\end{tikzpicture}
\caption{\label{fig:patterns}Sparsity pattern of
\({\boldsymbol K}^\Sigma_\varepsilon+\rho{\boldsymbol I}\) (left),
the reordered matrices (middle) and the Cholesky
factors \({\boldsymbol L}\) (right)
for \(d=1,2,3\) and \(N=131\,072\).}
\end{center}
\end{figure}
\subsection*{Simulation of a Gaussian random field}
As our last example, we consider a Gaussian random
field evaluated at 100\,000 randomly chosen points on
the surface of the Stanford bunny. As before, the
Stanford bunny has been rescaled to have a diameter
of 2. In order to demonstrate that our approach works
also for larger dimensions, the Stanford bunny has been
embedded into \(\mathbb{R}^4\) and randomly rotated to
prevent axis-aligned bounding boxes. The polynomial
degree for the \(\Hcal^2\)-matrix representation is
set to 3 as before and likewise we consider \(q+1=3\)
vanishing moments. The covariance function is given by
the exponential kernel
\[
k({\boldsymbol x},{\boldsymbol y})=e^{-25\|{\boldsymbol x}-{\boldsymbol y}\|_2}.
\]
Moreover, we discard all computed matrix entries which
are below the threshold of \(\varepsilon=10^{-6}\).
The ridge parameter is set to \(\rho=10^{-2}\).
The compressed covariance matrix exhibits
\(\operatorname{anz}({\boldsymbol K}^\Sigma_\varepsilon)=6457\)
nonzero matrix entries per row on average, while the
corresponding Cholesky factor exhibits
\(\operatorname{anz}({\boldsymbol L})=14\,898\) nonzero
matrix entries per row on average.
Having the Cholesky factor \({\boldsymbol L}\) at hand,
the computation of a realization of the
Gaussian random field is extremely fast, as it only
requires a simple sparse matrix-vector multiplication
of \({\boldsymbol L}\) by a Gaussian random vector and an
inverse samplet transform. Four different realizations
of the random field projected
to \(\mathbb{R}^3\) are shown in Figure~\ref{fig:GRF}.
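The sampling procedure itself is elementary once a factorization ${\boldsymbol L}{\boldsymbol L}^\intercal={\boldsymbol K}+\rho{\boldsymbol I}$ is available. The following Python sketch illustrates it with a dense Cholesky factor and uniformly random points in a cube, which here stand in for the compressed sparse factor and the bunny surface:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pts = rng.uniform(-1.0, 1.0, size=(n, 3))   # stand-in for the bunny surface

# exponential covariance kernel k(x, y) = exp(-25 ||x - y||_2)
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
K = np.exp(-25.0 * D)

rho = 1e-2                                   # ridge parameter
L = np.linalg.cholesky(K + rho * np.eye(n))  # dense stand-in for the sparse factor

z = rng.standard_normal(n)                   # Gaussian random vector
sample = L @ z                               # one realization of the field
```

By construction, the realizations have covariance ${\boldsymbol L}{\boldsymbol L}^\intercal = {\boldsymbol K}+\rho{\boldsymbol I}$.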
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw (0,5) node {\includegraphics[scale=0.1,clip,%
trim= 840 330 850 400]{./Results/bunnyField1.png}};
\draw (5,5) node {\includegraphics[scale=0.1,clip,%
trim= 840 330 850 400]{./Results/bunnyField2.png}};
\draw (0,0) node {\includegraphics[scale=0.1,clip,%
trim= 840 330 850 400]{./Results/bunnyField3.png}};
\draw (5,0) node {\includegraphics[scale=0.1,clip,%
trim= 840 330 850 400]{./Results/bunnyField4.png}};
\draw (9,2.5) node {\includegraphics[scale=0.2,clip,%
trim= 2260 450 450 750]{./Results/bunnyField4.png}};
\end{tikzpicture}
\caption{\label{fig:GRF}Four different realizations of
a Gaussian random field based on an exponential covariance kernel.}
\end{center}
\end{figure}
\section{Conclusion}\label{sec:Conclusion}
Samplets provide a new methodology for the analysis
of large data sets. They are easy to construct and
discrete data can be transformed into the samplet
basis in linear cost. In our construction, we
deliberately left out the discussion of a level-dependent
compression of the given data, as it
is known from wavelet analysis, in favor of a robust
error analysis. We emphasize however that, under the
assumption of uniformly distributed points, different
norms can be incorporated, allowing for the
construction of band-pass filters and level dependent
thresholding. In this situation, also an improved
samplet matrix compression is possible
such that a fixed number of vanishing moments is
sufficient to achieve a precision proportional to the
fill distance with log-linear cost.
Besides data compression, detection of singularities
and adaptivity, we have demonstrated how samplets can
be employed for the compression of kernel matrices to
obtain an essentially sparse matrix. Given a sparse
representation of the kernel matrix, algebraic
operations, such as matrix-vector multiplications,
can be sped up considerably. Moreover, in combination
with a fill-in reducing reordering, the factorization
of the compressed kernel matrices becomes
computationally feasible, which allows for the fast
application of the inverse kernel matrix on the one
hand and the efficient solution of linear systems
involving the kernel matrix on the other hand. The
numerical results, featuring about \(10^6\) data points
in up to four dimensions, demonstrate the capabilities
of samplets.
Future research will be directed to
the extension of samplets towards high-dimensional
data.
This extension requires the incorporation of different
clustering strategies, such as locality sensitive
hashing,
to obtain a manifold-aware cluster tree, as well as a
careful construction of the vanishing moments, for
example by means of anisotropic polynomials.
\bibliographystyle{plain}
\section{Introduction}
The human microbiome plays a critical role in human health and disease \citep{nih2019review}.
Modern technology enables cost-effective acquisition of microbiome data.
Large collaborative efforts such as the Human Microbiome Project \citep{turnbaugh2007human} have generated rich and valuable databases.
The ever-increasing microbiome data allow us to better decipher the relationship between the microbiome and host health.
A thorough understanding of the link between the microbiome and health outcomes promises to revolutionize diagnosis and prognosis and may lead to therapeutic breakthroughs.
However, microbiome data are complex in nature, involving high dimensionality, compositionality, zero inflation, and taxonomic hierarchy.
Microbiome data are usually measured through high-throughput sequencing (e.g., 16S rRNA sequencing).
High dimensional read counts are then clustered into hundreds of ``operational taxonomic units'' (OTUs) for subsequent analyses.
The corresponding OTU table provides the abundance of each taxon (i.e., feature) in each community (i.e., sample).
Due to the large variation in library size (i.e., the total number of reads in each sample), read counts in the OTU table are usually normalized as compositions (i.e., relative abundances) \citep{gloor2016s,tsilimigras2016compositional}.
Compositionality is a prominent feature of microbiome data that complicates statistical analyses.
As compositions reside in a simplex that does not admit the standard Euclidean geometry, many standard notions and methods do not directly apply.
In addition, microbiome data are usually highly sparse with few dominant compositions and excessive zeros \citep{xia2018modeling,xu2020zero}.
There also exists extrinsic information such as the taxonomic hierarchy among taxa.
The taxonomic tree captures the classification of microbes at different ranks.
OTU data at lower taxonomic ranks have higher resolution (i.e., more features) but are more prone to measurement errors, while data at higher taxonomic ranks have lower resolution with higher accuracy.
Namely, there is a trade-off between data resolution and accuracy along the taxonomic hierarchy, which provides useful guidance for microbiome analysis.
The unique features of microbiome data pose new challenges for statistical analysis \citep{li2015microbiome,zhou2019review}. Here we focus on regression analysis with microbiome compositional data, which is critical for studying the association between bacterial taxa and clinical outcomes.
Due to the compositional nature, microbiome data are typically transformed (e.g., log-ratio transformation) first to admit the Euclidean geometry \citep{aitchison2005compositional}.
Linear regression models are subsequently built upon transformed data.
For example, one of the most commonly used models is the log-contrast model \citep{aitchison1984log} where log-ratio-transformed compositions are used as predictors in a linear regression.
An equivalent symmetric form of the model is as follows
\[
y=\beta_0+\beta_1\log{x_1}+\cdots+\beta_p\log{x_p}+\varepsilon,
\]
where the compositional vector $\boldsymbol{x}=(x_1, \ldots, x_p)^T$ is in the $(p-1)$-simplex $\simp^{p-1}=\{\boldsymbol{x}\in\real^p: \sum_{j=1}^p x_j=1, x_j\geq0, j=1,\ldots,p\}$ and the coefficients satisfy the linear constraint $\sum_{j=1}^{p}\beta_j=0$.
The model enjoys the subcompositional coherence and scale and permutation invariance properties \citep{aitchison1982statistical}.
\cite{lin2014variable} and \cite{shi2016regression} proposed variable selection methods for such models.
\cite{lu2019generalized} further generalized it to non-Gaussian responses.
Centered log-ratio transformation is also frequently used in the literature and it has been shown to be equivalent to the log-contrast model if the zero-sum constraint is imposed on the regression coefficients \citep{randolph2018kernel,wang2017structured}.
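As an illustration of the log-contrast model and its zero-sum constraint, the following Python sketch (synthetic data, not taken from any of the cited works) fits the model by substituting $\beta_p=-\sum_{j<p}\beta_j$, which turns it into an ordinary regression on additive log-ratios:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 4
X = rng.dirichlet(np.ones(p), size=n)      # compositional predictors
beta = np.array([1.0, -2.0, 0.5, 0.5])     # true coefficients, sum(beta) = 0
y = 3.0 + np.log(X) @ beta + 0.01 * rng.standard_normal(n)

# impose the zero-sum constraint by substituting
# beta_p = -(beta_1 + ... + beta_{p-1}); the model then becomes an
# ordinary regression on the additive log-ratios log(x_j / x_p)
Z = np.log(X[:, :-1]) - np.log(X[:, [-1]])
A = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
beta_hat = np.append(coef[1:], -coef[1:].sum())   # recover all p coefficients
```

The recovered coefficients satisfy the zero-sum constraint exactly and approximate the truth up to noise.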
Although almost all existing compositional regression methods are based on transformed data, the transformation procedure itself has several major drawbacks.
Most prominently, the commonly used logarithmic transformation cannot handle zero values.
A common practice is to artificially replace zero with some preset small value to avoid singularity \citep{aitchison1984log,Palarea2013, lin2014variable}.
However, since microbiome data are typically inflated with zeros, such manipulation may introduce unwanted bias and result in misleading results \citep{mcmurdie2014waste}.
Another drawback is the lack of straightforward biological interpretation.
The log transformation removes compositions from the simplex, but does not eliminate the interrelated structure of the data.
The change in one predictor value is linked with the change in at least one other predictor value.
As a result, one cannot simply interpret the coefficient $\beta_j$ as the effect size corresponding to one unit increase in $\log x_j$ with others held fixed.
In addition, the transformation hinders the incorporation of extrinsic information such as the taxonomic tree structure.
Several attempts have been made in the literature, but there is no consensus about how to properly regularize the regression coefficients to reflect extrinsic information.
This is largely due to the lack of clear interpretation of the coefficients for transformed data.
For example, \cite{shi2016regression} proposed a subcompositional model that partially accounts for the taxonomic structure. However, it can only handle two taxonomic ranks.
\cite{garcia2013identification} and \cite{wang2017structured} developed group-lasso-type regularization methods to achieve subcomposition selection.
\cite{randolph2018kernel} proposed to translate extrinsic information into kernels and incorporate them into a penalized regression framework.
However, kernelizing taxonomy or phylogeny may oversimplify the data structure since the original tree structure cannot be fully characterized by a similarity matrix.
Besides, to the best of our knowledge, no existing method ensures compatible results across taxonomic ranks.
That is, analyses conducted on the same OTU data at different taxonomic ranks may have drastically different results.
For example, a species may be deemed important from the species-level analysis, but the genus it belongs to may have negligible effect from the genus-level analysis.
Such discrepancy may call microbiome regression analysis into question.
In this paper, we break new ground to develop a new regression paradigm for microbiome compositional data.
The new framework, called {\em Relative-Shift}, directly models compositions as predictors without transformation.
It is fundamentally different from the log-contrast models and provides an alternative approach to microbiome regression.
The basic model is based on a simple yet intriguing finding, that is, the regression on compositions is completely identifiable if we just eliminate the intercept term.
Namely, an intercept-free linear regression model with compositional predictors is the basic form of our proposed relative-shift model.
Although seemingly simple, the model carries important biological interpretations and enjoys many desirable properties such as scale and shift invariance.
In particular, the contrast in regression coefficients can be interpreted as the effect size of shifting concentrations between taxa (i.e., the origination of the name, {\em relative-shift}).
The relative-shift model also serves as a flexible basis for accommodating unique features of microbiome data.
For example, zero values are directly handled without substitution; high-dimensional compositional features can be reduced through aggregation or amalgamation.
More importantly, the taxonomic tree structure among taxa can be tactfully incorporated as well.
In particular, we develop new taxonomy-guided regularization methods for parameter estimation.
The proposed methods can adaptively determine what taxa at which taxonomic levels are
most relevant to the response.
They are robust against the change in the taxonomic levels of the study and offer superior biological interpretability.
The methods utilize data across all taxonomic ranks and strike a good balance between data resolution and accuracy.
The rest of the paper is organized as follows. In Section {\ref{sec:RS}}, we introduce the relative-shift model. In Section {\ref{sec:reg}}, we develop novel regularization methods for feature aggregation in high dimension with and without taxonomic guidance. In Section {\ref{sec:comp}}, we devise model fitting algorithms for different settings.
In Section {\ref{sec:theory}}, we derive a unified finite-sample prediction error bound for the proposed estimators.
Comprehensive simulation studies are contained in Section {\ref{sec:sim}} and a real data application to a microbiome study of neurodevelopment in preterm infants is in Section {\ref{sec:real}}.
We conclude and discuss open questions in Section {\ref{sec:dis}}.
\section{Relative-Shift Regression Paradigm}\label{sec:RS}
Let $\boldsymbol{y} = (y_1,\ldots,y_n)^T \in\real^{n}$ denote the continuous response vector of $n$ samples.
Let $\boldsymbol{x}_i=(x_{i1},\ldots,x_{ip})^T\in\simp^{p-1}$ represent the microbial compositional vector of $p$ taxa and $\boldsymbol{c}_i=(c_{i1},\ldots, c_{iq})^T\in\real^q$ be a length-$q$ auxiliary non-compositional covariate vector for the $i$th subject ($i=1,\ldots,n)$.
We propose the following relative-shift model for compositional regression with covariate adjustment
\begin{equation}\label{RS}
y_i=\boldsymbol{c}_i^T\boldsymbol{\beta}_c + \beta_1 x_{i1}+\cdots+\beta_p x_{ip}+\varepsilon_i,
\end{equation}
where $\varepsilon_i$ is the random noise with mean zero and variance $\sigma^2$, and $\boldsymbol{\beta}_c\in\real^q$ and $\boldsymbol{\beta}=(\beta_1,\ldots,\beta_p)^T\in\real^p$ are coefficient vectors for covariates and compositions, respectively.
The relative-shift model is identical to a linear regression model less the intercept term, yet the difference ensures the identifiability of the model.
In contrast to the models based on transformed data, the relative-shift model uses proportions as predictors, and thus directly characterizes how composition changes affect the response.
The coefficients for compositions provide important biological interpretation.
We stress that they shall not be interpreted separately since compositions are interrelated.
Instead, differences between coefficients are readily interpretable.
For example, for any pair of taxa $(j,k)$, we can write $\beta_jx_{\cdot j} + \beta_kx_{\cdot k} = \beta_k(x_{\cdot j} + x_{\cdot k}) + (\beta_{j} - \beta_{k})x_{\cdot j}$. Therefore, $(\beta_{j} - \beta_{k})$ can be interpreted as the effect size of $x_{\cdot j}$ on the outcome when holding $x_{\cdot j} + x_{\cdot k}$ fixed, that is, $\Delta(\beta_j-\beta_k)$ can be viewed as the effect on the response with a shift of $\Delta$ concentration from the $k$th taxon to the $j$th taxon
while holding the other taxa fixed. As another example, consider the triplet of taxa $(j,k,r)$. Write $\beta_jx_{\cdot j} + \beta_rx_{\cdot r} + \beta_kx_{\cdot k} = \beta_r(x_{\cdot j} + x_{\cdot r}+x_{\cdot k}) + (\beta_j -\beta_r)x_{\cdot j} + (\beta_k-\beta_r)x_{\cdot k}$. Then a contrast of the form $\Delta(a\beta_j + (1-a)\beta_k - \beta_r)$ can be interpreted as the effect on the response of shifting $\Delta$ concentration from the $r$th taxon to the $j$th and $k$th taxa in the amounts $a\Delta$ and $(1-a)\Delta$ $(0\leq a\leq 1)$, respectively, while holding the other taxa fixed.
Intriguingly, under the proposed model, all the contrasts of the regression coefficients can be interpreted as the effect of certain shifts of abundances of a group of taxa while holding their sum fixed. This is the origination of the name \textit{relative-shift regression}.
Although simple, the relative-shift model well characterizes the fundamental relations between compositional predictors and the response. Surprisingly, this simple alternative has been overlooked in the microbiome literature.
The relative-shift model enjoys several desirable properties. First, it is scale and shift invariant. In particular, if the response is multiplied by a constant, the model will remain the same if all coefficients are multiplied by the same constant. If the response
shifts by a constant, the effect can be offset by adding the same constant to all coefficients for the compositions. This invariance also indicates that the magnitude or the absolute value of the coefficients is not important. Instead, the relative relationships between different parameters are crucial.
Second, the model is immune to zero inflation. It can directly handle zeros in the design matrix without taking additional steps as in a log-contrast model.
Third, equal coefficients directly translate into feature aggregation. Namely, if the coefficients for two (or more) taxa are the same, they can be directly combined as a new entity with the new composition being the sum of the original compositions and the coefficient being the shared coefficient. This serves as the foundation for the regularization methods in Section \ref{sec:reg}.
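The identifiability and the shift invariance of the relative-shift model are easy to verify numerically. The following Python sketch (synthetic data, illustrative only) fits the intercept-free least squares model and checks that shifting the response by a constant simply shifts all compositional coefficients by the same constant, leaving every contrast unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 60, 3
X = rng.dirichlet(np.ones(p), size=n)   # rows lie in the simplex, sum to one

def fit_relative_shift(X, y):
    # intercept-free least squares: identifiable because the all-ones
    # vector lies in the column space of X (rows sum to one)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

beta = np.array([2.0, -1.0, 0.0])
y = X @ beta                          # noiseless response for illustration
b1 = fit_relative_shift(X, y)
b2 = fit_relative_shift(X, y + 5.0)   # shift the response by a constant
```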
\section{Regularization Methods for Feature Aggregation}\label{sec:reg}
\subsection{Equi-Sparsity Regularization}\label{subsec:hd}
To estimate model parameters, one can directly resort to the ordinary least squares approach by minimizing the following convex objective function
\[
g(\boldsymbol{\beta}_c,\boldsymbol{\beta})={1\over 2n}\|\boldsymbol{y}-\boldsymbol{C}\boldsymbol{\beta}_c-\boldsymbol{X}\boldsymbol{\beta}\|^2,
\]
where $\boldsymbol{C}=(\boldsymbol{c}_1,\ldots,\boldsymbol{c}_n)^T\in\real^{n\times q}$ is a covariate matrix, $\boldsymbol{X}=(\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n)^T{\color{black}\in\real^{n\times p}}$ is a compositional matrix, and $\|\cdot\|$ represents the Frobenius norm.
The optimization has a unique closed-form solution if the combined design matrix $(\boldsymbol{C},\boldsymbol{X})$ has full column rank.
However, microbiome data are often high dimensional, with the number of taxa $p$ greater than the sample size $n$.
As a result, the design matrix becomes singular and there is no unique solution to the problem.
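The rank deficiency in the high-dimensional regime can be checked numerically. The following Python sketch (with arbitrary toy dimensions, not taken from the paper) contrasts the low- and high-dimensional cases for a stacked design $(\boldsymbol{C},\boldsymbol{X})$.

```python
import numpy as np

# Illustration (toy dimensions): the OLS objective has a unique minimizer
# when the stacked design [C, X] has full column rank, but the design
# necessarily becomes rank deficient once q + p > n.
rng = np.random.default_rng(0)
n = 50
C = rng.normal(size=(n, 2))                     # q = 2 covariates

X_low = rng.dirichlet(np.ones(10), size=n)      # p = 10 < n: compositions
M_low = np.column_stack([C, X_low])
assert np.linalg.matrix_rank(M_low) == 12       # full column rank: unique OLS

X_high = rng.dirichlet(np.ones(100), size=n)    # p = 100 > n
M_high = np.column_stack([C, X_high])
assert np.linalg.matrix_rank(M_high) < 102      # singular: no unique solution
```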
To address the problem, we introduce an {\em equi-sparsity} regularization for compositional coefficients.
Equi-sparsity is a generalization of the widely studied zero-sparsity \citep{she2010sparse}.
It encourages coefficients to be equal to each other (i.e., clustering of coefficients) rather than close to zero.
In the relative-shift Model \eqref{RS}, equi-sparsity for composition coefficients is especially relevant because we only care about the relative relations between coefficients rather than their absolute numerical values.
If two coefficients are equal, shifting concentrations between the corresponding pair of taxa does not change the model.
Correspondingly, the two taxa can be combined to form a new entity without losing any information.
For example, if $\beta_j=\beta_k$, the corresponding taxa $j$ and $k$ can be directly combined since $\beta_jx_{ij}+\beta_kx_{ik}=\beta_j(x_{ij}+x_{ik})$.
As a result, equi-sparsity achieves dimension reduction of compositional data by feature aggregation.
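The aggregation property is easy to verify numerically. The sketch below (toy data only) checks that merging two taxa with a shared coefficient leaves the linear predictor unchanged.

```python
import numpy as np

# Toy check: two taxa with equal coefficients can be merged into one
# aggregated feature (summed compositions) without changing predictions,
# since beta_j * x_ij + beta_k * x_ik = beta_j * (x_ij + x_ik).
rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(5), size=10)          # 10 samples, 5-taxon compositions

beta = np.array([2.0, 2.0, -1.0, 0.5, 0.0])     # beta_1 == beta_2
eta_full = X @ beta                             # original linear predictor

# Aggregate taxa 0 and 1 (shared coefficient 2.0) into a single feature
X_agg = np.column_stack([X[:, 0] + X[:, 1], X[:, 2:]])
beta_agg = np.array([2.0, -1.0, 0.5, 0.0])
eta_agg = X_agg @ beta_agg

assert np.allclose(eta_full, eta_agg)           # identical fits after aggregation
```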
More specifically, we consider the following equi-sparsity regularization for the composition coefficients $\boldsymbol{\beta}$ in Model \eqref{RS}
\begin{equation}
\pen_E(\boldsymbol{\beta})=\sum_{j<k}\omega_{jk}|\beta_j-\beta_k|,
\end{equation}
where $\omega_{jk}$ is some predefined positive weight between taxa $j$ and $k$.
The weights can be determined based on extrinsic information (e.g., phylogenetic information), where a larger value induces more penalty on the pairwise difference and vice versa.
By default, we set all weights to be equal to 1, and the penalty term reduces to the clustered lasso penalty studied in \cite{she2010sparse}.
It also coincides with the graph-guided-fused-lasso penalty in \cite{kim2009multivariate} with a complete graph.
Applying the equi-sparsity regularization to Model \eqref{RS}, we obtain the following convex optimization problem for parameter estimation in high dimension
\begin{equation}\label{opt1}
(\widehat{\boldsymbol{\beta}_c},\widehat{\boldsymbol{\beta}})=\argmin_{\boldsymbol{\beta}_c,\,\boldsymbol{\beta}}\quad g(\boldsymbol{\beta}_c,\boldsymbol{\beta})+\lambda\pen_E(\boldsymbol{\beta}),
\end{equation}
where $\lambda>0$ is a tuning parameter.
We devise an efficient algorithm for solving \eqref{opt1} in Section \ref{sec:comp}.
\subsection{Taxonomic-Tree-Guided Regularization}\label{subsec:tax}
Assume the taxonomic tree is available as prior knowledge.
Now we elaborate how to incorporate the taxonomic tree structure into the relative-shift regression paradigm via novel regularization methods.
The basic idea of the taxonomic-tree-guided regularization is to encourage equi-sparse coefficients for taxa sharing similar taxonomic paths.
In other words, the more common ancestors two taxa share, the more likely they are to be aggregated.
Let $T$ represent a $p$-leafed taxonomic tree, $I(T)$ represent the set of internal nodes, $L(T)$ represent the set of leaf nodes, and $|T|$ represent the total number of nodes in a tree.
We follow the commonly used notions of child, parent, sibling, descendant and ancestor to describe the relations between nodes.
Each leaf node of the tree corresponds to a taxon and each internal node corresponds to a group of taxa (i.e., the descendant leaf nodes of the internal node).
\cite{yan2018rare} recently proposed a tree-guided regularization method for rare feature aggregation.
A significant innovation is the adoption of a tree-based parameterization where original regression coefficients are broken down into intermediate coefficients assigned to each node of a tree.
For example, Figure \ref{fig:tree} provides an illustration of a tree $T$ with seven leaf nodes (one for each regression coefficient).
An intermediate coefficient $\gamma_u$ is assigned to each node $u\in T$.
Then each original coefficient $\beta_j$ (at the leaf node $j$) is expressed as
\[
\beta_j=\sum_{u\in \mbox{\footnotesize Ancestor}(j)\cup \{j\}} \gamma_u,
\]
where $\mbox{Ancestor}(j)$ denotes the set of ancestors of node $j$.
For example, $\beta_1=\gamma_1+\gamma_8+\gamma_{10}+\gamma_{12}$.
Correspondingly, the coefficient vector $\boldsymbol{\beta}$ can be written as a linear transformation of the intermediate coefficient vector $\boldsymbol{\gamma}=(\gamma_u)_{u\in T}$
\begin{equation}\label{reparam}
\boldsymbol{\beta}=\boldsymbol{A}\boldsymbol{\gamma},
\end{equation}
where $\boldsymbol{A}\in\{0,1\}^{p\times |T|}$ is a tree-induced indicator matrix with entries $A_{jk}=1_{k\in \mbox{\footnotesize Ancestor}(j)\cup \{j\}}$ (equivalently, $A_{jk}=1_{j\in \mbox{\footnotesize Descendant}(k)\cup \{k\}}$, where $\mbox{Descendant}(k)$ is the set of descendants of node $k$).
\begin{figure}[hbpt]
\centering
\includegraphics[width=2.5in]{iTree3.pdf}
\caption{Illustration of tree-guided reparameterization.}
\label{fig:tree}
\end{figure}
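To make the reparameterization concrete, the sketch below builds the indicator matrix $\boldsymbol{A}$ for a small tree. The parent map is a plausible topology consistent with the examples above (e.g., $\beta_1=\gamma_1+\gamma_8+\gamma_{10}+\gamma_{12}$ with root node 12); it need not be the exact tree in Figure \ref{fig:tree}.

```python
import numpy as np

# Build the tree-induced indicator matrix A with A[j, k] = 1 iff node k
# is an ancestor of leaf j (or k == j).  The parent map is a hypothetical
# topology consistent with the text's examples (root = node 12).
parent = {1: 8, 2: 8, 3: 9, 4: 9, 5: 11, 6: 11, 7: 11,
          8: 10, 9: 10, 10: 12, 11: 12}

nodes = sorted(set(parent) | set(parent.values()))       # 1..12
leaves = [u for u in nodes if u not in parent.values()]  # 1..7

def path_to_root(j):
    """Return {j} together with all ancestors of j."""
    out = [j]
    while j in parent:
        j = parent[j]
        out.append(j)
    return out

A = np.zeros((len(leaves), len(nodes)), dtype=int)
for i, j in enumerate(leaves):
    for u in path_to_root(j):
        A[i, nodes.index(u)] = 1

gamma = np.arange(1, len(nodes) + 1, dtype=float)  # toy intermediate coefs
beta = A @ gamma                                   # beta = A gamma
# Leaf 1 sums the coefficients along its root path: nodes 1, 8, 10, 12
assert beta[0] == gamma[0] + gamma[7] + gamma[9] + gamma[11]
```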
The reparameterization provides a means to couple the tree structure with the coefficients for compositions.
Since the root node value appears in every coefficient in $\boldsymbol{\beta}$ by construction, we recommend fixing it to a constant to avoid unnecessary redundancy.
In practice, we set it to be $n^{-1}\mathbf{1}_n^T\boldsymbol{y}$ to capture the mean of the response, where $\mathbf{1}_n$ is a length-$n$ vector of ones.
Hereafter, we always assume the intermediate coefficient for the root node is fixed in advance.
Next, we will introduce regularization on the intermediate coefficients to induce tree-guided equi-sparsity in $\boldsymbol{\beta}$.
With the new parameterization, it becomes immediately clear that aggregating features that share the same ancestor node $u$ is equivalent to zeroing out all the intermediate coefficients for nodes in Descendant$(u)$.
For example, in Figure \ref{fig:tree}, if $\gamma_1=\gamma_2=0$, the leaf nodes 1 and 2 sharing the same parent node 8 will be combined and their common coefficient is $\beta_1=\beta_2=\gamma_{10}+\gamma_8+\gamma_{12}$.
As a result, the desired tree-guided equi-sparsity regularization on $\boldsymbol{\beta}$ can be equivalently expressed as structured zero-sparsity regularization on $\boldsymbol{\gamma}$.
In particular, we propose three sparsity-inducing penalty terms on the intermediate coefficients:
\begin{itemize}
\item[(1)] {\it Node $\ell_1$} (L1):
\begin{equation}\label{pen1}
\mathcal{P}_{T}(\boldsymbol{\gamma})=\sum_{u\in T_{-r}} w_u|\gamma_u|,
\end{equation}
where $T_{-r}$ denotes the set of nodes in $T$ without the root node;
\item[(2)] {\it Child $\ell_2$} (CL2):
\begin{equation}\label{pen2}
\mathcal{P}_{T}(\boldsymbol{\gamma})=\sum_{u\in I(T)} w_u\|(\gamma_v)_{v\in \mbox{\footnotesize Child}(u)}\|,
\end{equation}
where $\mbox{Child}(u)$ denotes the set of children nodes of node $u$;
\item[(3)] {\it Descendant $\ell_2$} (DL2):
\begin{equation}\label{pen3}
\mathcal{P}_{T}(\boldsymbol{\gamma})=\sum_{u\in I(T)} w_u\|(\gamma_v)_{v\in \mbox{\footnotesize Descendant}(u)}\|.
\end{equation}
\end{itemize}
All three penalties induce sparsity in $\boldsymbol{\gamma}$ and thus potentially result in equi-sparsity in $\boldsymbol{\beta}$.
The {\NL} penalty is closely related to the one in \cite{yan2018rare}, except that we do not penalize the original coefficients in $\boldsymbol{\beta}$.
This is because the zero-sparsity in $\boldsymbol{\beta}$ does not carry any special interpretation in our proposed relative-shift model.
The {\CL} and {\DL} penalties resort to a group-lasso-type regularization which may further encourage the groups of nodes towards the leaves of a tree to take zero values.
In particular, {\CL} does not contain any overlapping groups while {\DL} does.
{Later we show that all three penalty terms can be implemented by the same algorithm and their theoretical properties can be understood through a unified finite-sample prediction error bound.}
The weights in each penalty may be used to adjust for different node heights or heterogeneous group sizes and/or avoid over-penalization if desired.
By default, we set the weights to be 1 throughout the paper.
Data-adaptive selection of weights is a future research direction.
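For illustration, the three penalties (with unit weights) can be evaluated as follows on a small hypothetical tree; the children map and the coefficient values are arbitrary.

```python
import numpy as np

# Sketch of the three tree-guided penalties (unit weights) on a small
# hypothetical tree; gamma holds intermediate coefficients indexed by node.
children = {10: [8, 9], 8: [1, 2], 9: [3, 4]}          # root = node 10
internal = list(children)                               # I(T) = {10, 8, 9}
non_root = [u for c in children.values() for u in c]    # T minus the root

rng = np.random.default_rng(1)
gamma = {u: rng.normal() for u in non_root + [10]}

def descendants(u):
    out = []
    for v in children.get(u, []):
        out.append(v)
        out.extend(descendants(v))
    return out

l2 = lambda xs: np.sqrt(sum(gamma[v] ** 2 for v in xs))

pen_L1 = sum(abs(gamma[u]) for u in non_root)           # Node l1 (L1)
pen_CL2 = sum(l2(children[u]) for u in internal)        # Child l2 (CL2)
pen_DL2 = sum(l2(descendants(u)) for u in internal)     # Descendant l2 (DL2)

# Descendant(u) contains Child(u), so DL2 dominates CL2 term by term
assert pen_DL2 >= pen_CL2
```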
Combining with Model \eqref{RS}, we have the following optimization problem
\begin{equation}\label{opt2}
(\widehat{\boldsymbol{\beta}_c},\widehat{\boldsymbol{\beta}})=\argmin_{\boldsymbol{\beta}_c,\,\boldsymbol{\beta}=\boldsymbol{A}\boldsymbol{\gamma}}\quad g(\boldsymbol{\beta}_c,\boldsymbol{\beta})+\lambda\pen_T(\boldsymbol{\gamma}).
\end{equation}
The method utilizes data across all taxonomic ranks and naturally strikes a good balance between data resolution and accuracy.
Moreover, it also achieves taxonomic rank selection by adaptively identifying the most relevant taxa at the most relevant taxonomic ranks.
For example, suppose we start with species-level OTU data and fit a relative-shift model with the taxonomic-tree-guided regularization.
If all the species within a genus are regularized to have the same coefficient, a new taxon is formed at the genus level with its composition being the sum of all the child species compositions, and the genus-level taxon is deemed relevant in the regression analysis rather than its descendant species.
Similarly, if all the species (in different genera) within a family share the same coefficient, the newly formed family-level taxon is selected.
\section{Model Fitting Algorithm}\label{sec:comp}
Both optimization problems in \eqref{opt1} and \eqref{opt2} are convex. In principle, generic convex optimization solvers can be used.
Nonetheless, given the high dimensional nature of the problem, such generic methods are usually computationally prohibitive.
Instead, we resort to a more efficient smoothing proximal gradient (SPG) method \citep{chen2012smoothing} to solve the optimization problems in \eqref{opt1} and \eqref{opt2}.
We remark that the details of the {\SPG} algorithm are well documented in \cite{chen2012smoothing}, so we only outline the general idea of the algorithm below.
The optimization problems in \eqref{opt1} and \eqref{opt2} can be uniformly expressed as
\begin{equation}\label{opt_uni}
\min_{\widetilde{\boldsymbol{\beta}}}\ {1\over 2n}\|\boldsymbol{y}-\widetilde{\boldsymbol{X}}\widetilde{\boldsymbol{\beta}}\|^2+\lambda\Omega(\widetilde{\boldsymbol{\beta}}),
\end{equation}
where $\widetilde{\boldsymbol{X}}=(\boldsymbol{C},\boldsymbol{X})$, $\widetilde{\boldsymbol{\beta}}=(\boldsymbol{\beta}_c^T,\boldsymbol{\beta}^T)^T$, and $\Omega(\widetilde{\boldsymbol{\beta}})=\pen_E(\boldsymbol{\beta})$ in \eqref{opt1}, and $\widetilde{\boldsymbol{X}}=(\boldsymbol{C},\boldsymbol{X}\boldsymbol{A})$, $\widetilde{\boldsymbol{\beta}}=(\boldsymbol{\beta}_c^T,\boldsymbol{\gamma}^T)^T$, and $\Omega(\widetilde{\boldsymbol{\beta}})=\pen_T(\boldsymbol{\gamma})$ in \eqref{opt2}.
In particular, the penalty term $\Omega(\widetilde{\boldsymbol{\beta}})$ is a nonsmooth function of $\widetilde{\boldsymbol{\beta}}$, and it is nonseparable in the elements of $\widetilde{\boldsymbol{\beta}}$.
The fundamental idea of {\SPG} is to 1) decouple the nonseparable elements via the dual norm; 2) apply a Nesterov smoothing technique \citep{nesterov2005smooth} to obtain the gradient of $\Omega(\widetilde{\boldsymbol{\beta}})$; and 3) apply an optimal gradient method \citep{beck2009fast}.
More specifically, the term $\Omega(\widetilde{\boldsymbol{\beta}})$ in \eqref{opt_uni} can be expressed by the dual norm as
\[
\Omega(\widetilde{\boldsymbol{\beta}})=\max_{\boldsymbol{\alpha}\in\mathcal{Q}} \boldsymbol{\alpha}^T\boldsymbol{D}\widetilde{\boldsymbol{\beta}},
\]
where $\mathcal{Q}$ is some convex, closed unit ball and $\boldsymbol{D}$ is a constant matrix defined by respective problems (see \cite{chen2012smoothing} for details).
Subsequently, it is approximated by a surrogate function
\begin{equation}\label{alpha}
f_\mu(\widetilde{\boldsymbol{\beta}})=\max_{\boldsymbol{\alpha}\in\mathcal{Q}} {\boldsymbol{\alpha}^T\boldsymbol{D}\widetilde{\boldsymbol{\beta}}-{\mu\over 2}\|\boldsymbol{\alpha}\|^2},
\end{equation}
which can be shown to be smooth with respect to $\widetilde{\boldsymbol{\beta}}$ (as long as $\mu>0$) and bounded by a tight interval around $\Omega(\widetilde{\boldsymbol{\beta}})$ \citep{nesterov2005smooth}.
\cite{nesterov2005smooth} further showed that the gradient of $f_\mu(\widetilde{\boldsymbol{\beta}})$ is $\boldsymbol{D}^T\boldsymbol{\alpha}^*$ with $\boldsymbol{\alpha}^*$ being the optimal solution to \eqref{alpha} and the gradient is Lipschitz continuous.
In particular, in both problems \eqref{opt1} and \eqref{opt2}, $\boldsymbol{\alpha}^*$ has a closed-form expression and the Lipschitz constant is explicit \citep{chen2012smoothing}.
Let $h(\widetilde{\boldsymbol{\beta}})={1\over 2n}\|\boldsymbol{y}-\widetilde{\boldsymbol{X}}\widetilde{\boldsymbol{\beta}}\|^2+\lambda f_\mu(\widetilde{\boldsymbol{\beta}})$ be the new objective function.
The gradient of $h(\widetilde{\boldsymbol{\beta}})$, i.e., $\triangledown h(\widetilde{\boldsymbol{\beta}})$, has an explicit form and is Lipschitz continuous with an explicit Lipschitz constant $L$.
To minimize $h(\widetilde{\boldsymbol{\beta}})$, one may resort to the classical gradient algorithm by iteratively updating the estimate of $\widetilde{\boldsymbol{\beta}}$:
\[
{\widetilde{\boldsymbol{\beta}}}^{(t+1)}={\widetilde{\boldsymbol{\beta}}}^{(t)}-{1\over L} \triangledown h({\widetilde{\boldsymbol{\beta}}}^{(t)}),
\]
until convergence.
However, the convergence may be slow \citep{nesterov1983method}.
Instead, {\SPG} applies the fast iterative shrinkage-thresholding algorithm (FISTA) \citep{beck2009fast}, an optimal gradient method in terms of convergence rate.
FISTA updates the estimate ${\widetilde{\boldsymbol{\beta}}}^{(t+1)}$ using not just the previous estimate ${\widetilde{\boldsymbol{\beta}}}^{(t)}$, but a specific combination of the previous two estimates ${\widetilde{\boldsymbol{\beta}}}^{(t)}$ and ${\widetilde{\boldsymbol{\beta}}}^{(t-1)}$.
As a result, the convergence has been proved to be much faster than the standard gradient method \citep{chen2012smoothing,beck2009fast}.
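A minimal sketch of the resulting SPG scheme for the equi-sparsity problem \eqref{opt1} (without covariates, unit weights) is given below. It is an illustrative implementation of the smoothing-plus-FISTA recipe described above, not the authors' code; the smoothing parameter and iteration count are arbitrary defaults.

```python
import numpy as np

def spg_equi_sparsity(X, y, lam, mu=1e-3, n_iter=500):
    """Minimize (1/2n)||y - Xb||^2 + lam * sum_{j<k} |b_j - b_k| via SPG."""
    n, p = X.shape
    pairs = [(j, k) for j in range(p) for k in range(j + 1, p)]
    D = np.zeros((len(pairs), p))
    for r, (j, k) in enumerate(pairs):
        D[r, j], D[r, k] = 1.0, -1.0            # row encodes b_j - b_k

    # Lipschitz constant of the gradient of the smoothed objective
    L = np.linalg.norm(X, 2) ** 2 / n + lam * np.linalg.norm(D, 2) ** 2 / mu

    b = np.zeros(p)
    w = b.copy()
    t = 1.0
    for _ in range(n_iter):
        alpha = np.clip(D @ w / mu, -1.0, 1.0)      # closed-form smoothed dual
        grad = X.T @ (X @ w - y) / n + lam * D.T @ alpha
        b_new = w - grad / L                        # gradient step
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        w = b_new + (t - 1) / t_new * (b_new - b)   # FISTA momentum
        b, t = b_new, t_new
    return b
```

On a small equi-sparse example, the returned coefficients attain a lower penalized objective value than the zero vector, as expected of a converged solver.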
The tuning parameter $\lambda$ in \eqref{opt_uni} balances the quadratic loss function and the penalty term. In practice, it typically has to be determined from data. A standard approach is to use cross validation to adaptively select the optimal tuning parameter. Since the {\SPG} algorithm for model fitting is very efficient, the cross validation scheme is computationally feasible. We provide more details in the numerical studies in Section \ref{sec:sim}.
\section{Theory}\label{sec:theory}
Let $T$ represent a $p$-leafed taxonomic tree with root node $r$.
Both $L(T)$ and $I(T)$ have been defined previously as the sets of leaf nodes and internal nodes, respectively.
Let $T_u$ be a subtree of $T$ rooted at the node $u$ for $u \in T$.
To focus on the main idea, we consider the relative-shift model without additional covariates,
\begin{equation}\label{RS1}
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}^* + \boldsymbol{\varepsilon},
\end{equation}
where $\boldsymbol{\beta}^*\in\real^p$ is the true coefficient vector.
With the tree-based reparameterization \eqref{reparam}, we have
\[
\boldsymbol{\beta}^*=\boldsymbol{A}\boldsymbol{\gamma}^*,
\]
where $\boldsymbol{\gamma}^*=(\gamma_u^*)_{u\in T}$ is the vector of intermediate coefficients and $\boldsymbol{A}$
is a tree-induced indicator matrix.
Without loss of generality, we assume the response $\boldsymbol{y}$ is centered and the root node takes value 0 (i.e., $\gamma_r=0$).
Correspondingly, $\boldsymbol{\gamma}^*=(\gamma_u^*)_{u\in T\backslash \{r\}}\in\mathbb R^{|T|-1}$ and $\boldsymbol{A}\in\{0,1\}^{p\times (|T|-1)}$.
We study finite-sample properties of the regularized estimator
\begin{eqnarray} \label{eq:tree-pen}
\widehat{\boldsymbol{\beta}} = \argmin_{\boldsymbol{\beta}=\boldsymbol{A} \boldsymbol{\gamma}}\left\{ \frac{1}{2n} \|\boldsymbol{y} - \boldsymbol{X} \boldsymbol{\beta}\|^2 + \lambda \mathcal{P}_{T}(\boldsymbol{\gamma})\right\},
\end{eqnarray}
where $\mathcal{P}_{T}(\boldsymbol{\gamma})$ is any one of the three penalties (i.e., {\NL}, {\CL}, and {\DL}) introduced in Section \ref{subsec:tax}.
We formally state the assumptions on the design matrix and the random error vector and present our main results in Theorem \ref{eq:th1}. The detailed proof is in Section~\ref{app:proofs} of the Supplementary Materials.
\begin{assumption}\label{assumption1}
The entries of $\boldsymbol{\varepsilon}$ are independently and identically distributed Gaussian random variables with mean zero and variance $\sigma^2$.
\end{assumption}
\begin{assumption}\label{assumption2}
The design matrix $\boldsymbol{X}\in \mathbb{R}^{n\times p}$ is compositional, i.e., each entry of $\boldsymbol{X}$ is in the interval $[0,1]$ and each of its rows sums to 1.
\end{assumption}
\begin{theorem}\label{eq:th1}
Suppose Assumptions \ref{assumption1}--\ref{assumption2} hold. Consider the regularized estimator $\widehat{\boldsymbol{\beta}}$ of $\boldsymbol{\beta}$ from solving \eqref{eq:tree-pen} with any penalty forms in \eqref{pen1}--\eqref{pen3}. Denote $|I(T)|$ as the number of internal nodes of the tree. Choose $\lambda \ge 2 \sqrt{2}\sigma\sqrt{\log(|I(T)|) /(\delta n)}$. Then with probability at least $1-\delta$, it holds that
\begin{equation}\label{eq:bound}
\frac{1}{n}\|\boldsymbol{X} \widehat{\boldsymbol{\beta}} - \boldsymbol{X} \boldsymbol{\beta}^*\|^2 \preceq
\lambda \{\min_{\boldsymbol{\gamma}; \boldsymbol{A}\boldsymbol{\gamma} = \boldsymbol{\beta}^*} \mathcal{P}_T(\boldsymbol{\gamma})\},
\end{equation}
where $\preceq$ means the inequality holds up to a multiplicative constant.
\end{theorem}
In the above results, the order of $\lambda$ is $O(\sqrt{\log(|I(T)|)/n})$, depending on the tree structure through the total number of internal nodes $|I(T)|$ that represents the dimension of the model. The term $\{\min_{\boldsymbol{\gamma}; \boldsymbol{A}\boldsymbol{\gamma} = \boldsymbol{\beta}^*} \mathcal{P}_T(\boldsymbol{\gamma})\}$ captures the complexity of the true model by measuring the minimal penalty function evaluated at the truth.
With the above unified prediction error bound, we now analyze the model size and complexity further to obtain specific error rates. Following \citet{yan2018rare}, we first introduce the concepts of an aggregating set and the coarsest aggregating set, which correspond to the equi-sparsity pattern of the coefficients in the proposed relative-shift model.
\begin{definition}\label{def1}
We say that $B \subseteq T$ is an aggregating set with respect to
$T$ if $\{L(T_u): u \in B\}$ forms a partition of $L(T)$.
\end{definition}
\begin{definition}\label{def2}
For any $\boldsymbol{\beta}^{*} \in \mathbb{R}^p$, there exists a unique
coarsest aggregating set $B^* := B(\boldsymbol{\beta}^{*}, T) \subseteq T$
(``the aggregating set'') with respect to the tree $T$ such
that (a) $\beta^{*}_j = \beta^{*}_k$ for $j, k \in L(T_u)$
$\forall u \in B^*$, (b) $|\beta^{*}_j - \beta^{*}_k| > 0$ for
$j \in L(T_u)$ and $k \in L(T_v)$ for siblings $u, v \in B^*$.
\end{definition}
As an example, consider the tree structure in Figure~\ref{fig:tree}.
Assume $\beta^*_1=\beta^*_2$ and $\beta_5^*=\beta^*_6=\beta_7^*$, and all other coefficients are distinct.
The node set $\{3,4,8,11\}$ forms an aggregating set
since the leaves of the four subtrees rooted by the four nodes form a partition of the leaf nodes $\{1, \ldots, 7\}$. Moreover, this node set is also the coarsest aggregating set corresponding to the equi-sparsity pattern of $\boldsymbol{\beta}^*$.
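Definitions \ref{def1} and \ref{def2} can also be checked programmatically. The sketch below verifies the partition property for the example above, again using a plausible parent map consistent with the examples for the tree in Figure~\ref{fig:tree} (root node 12, leaves 1-7).

```python
# Check that a candidate node set is an aggregating set: the leaf sets of
# the subtrees rooted at its nodes must partition the leaves.  The parent
# map is a hypothetical topology consistent with the text's examples.
parent = {1: 8, 2: 8, 3: 9, 4: 9, 5: 11, 6: 11, 7: 11,
          8: 10, 9: 10, 10: 12, 11: 12}
children = {}
for c, par in parent.items():
    children.setdefault(par, []).append(c)

def subtree_leaves(u):
    if u not in children:               # u is itself a leaf
        return {u}
    return set().union(*(subtree_leaves(v) for v in children[u]))

def is_aggregating_set(B, all_leaves):
    blocks = [subtree_leaves(u) for u in B]
    union = set().union(*blocks)
    disjoint = sum(len(b) for b in blocks) == len(union)
    return disjoint and union == all_leaves

leaves = {1, 2, 3, 4, 5, 6, 7}
assert is_aggregating_set({3, 4, 8, 11}, leaves)   # example from the text
assert not is_aggregating_set({8, 11}, leaves)     # misses leaves 3 and 4
```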
It is then clear that the size of the coarsest aggregating set,
$|B^*|$, is a natural complexity measure of the equi-sparsity pattern of $\boldsymbol{\beta}^*$ guided by the tree. Therefore, it is desired to bound $\{\min_{\boldsymbol{\gamma}; \boldsymbol{A}\boldsymbol{\gamma} = \boldsymbol{\beta}^*} \mathcal{P}_T(\boldsymbol{\gamma})\}$
in Theorem \ref{eq:th1} in terms of $|B^*|$ and the magnitude of $\boldsymbol{\beta}^*$.
In particular, for {\NL} in \eqref{pen1} and {\CL} in \eqref{pen2}, we have the following lemma (see Section~\ref{app:proofs} of Supplementary Materials for a detailed proof).
\begin{assumption}\label{assumption3}
$T$ is a $p$-leafed full tree such that each node is either a leaf or possesses at least two child nodes.
\end{assumption}
\begin{assumption}\label{assumption4}
The true coefficient vector $\boldsymbol{\beta}^*$ is bounded, i.e., $\|\boldsymbol{\beta}^*\|_{\infty} \leq M$, where $M>0$ is a constant.
\end{assumption}
\begin{lemma}\label{lemma:penalty}
Suppose Assumptions \ref{assumption3}--\ref{assumption4} hold.
For {\NL} in \eqref{pen1} and {\CL} in \eqref{pen2}, respectively, it holds that
$$
\min_{\boldsymbol{\gamma}; \boldsymbol{A}\boldsymbol{\gamma} = \boldsymbol{\beta}^*} \mathcal{P}_T(\boldsymbol{\gamma}) \leq M|B^*|.
$$
\end{lemma}
Together with Theorem \ref{eq:th1}, we have the following results.
\begin{corollary}\label{corollary:bound}
Suppose Assumptions \ref{assumption1}--\ref{assumption4} hold. Consider the regularized estimator $\widehat{\boldsymbol{\beta}}$ of $\boldsymbol{\beta}$ from solving \eqref{eq:tree-pen} with either penalty form in \eqref{pen1} or \eqref{pen2}. Let $B^*$ be the coarsest aggregating set according to $(T,\boldsymbol{\beta}^*)$. Choose $\lambda \ge 2 \sqrt{2}\sigma\sqrt{\log(p) /(\delta n)}$. Then with probability at least $1-\delta$, it holds that
\begin{equation}\label{eq:bound2}
\frac{1}{n}\|\boldsymbol{X} \widehat{\boldsymbol{\beta}} - \boldsymbol{X} \boldsymbol{\beta}^*\|^2 \preceq
\sigma\sqrt{\log(p) /(\delta n)}|B^*|.
\end{equation}
\end{corollary}
{We remark that the bound takes a form familiar from many well-studied high-dimensional models. Due to Assumption \ref{assumption3}, the measure of the model dimension, i.e., the number of internal nodes $|I(T)|$, is of order $p$. The $\log(p)$ term then represents the price we pay for high dimensionality. Both {\NL} and {\CL} can predict well as long as $\log(p)/n = o(1)$, and their performance is tied to $|B^*|$, which represents the complexity of the equi-sparsity pattern on the tree.}
\section{Simulation}\label{sec:sim}
\subsection{Preliminaries}
We compare the proposed relative-shift regression framework with the transformation-based regression models using comprehensive simulations.
Specifically, we consider the relative-shift model with the equi-sparsity regularization (i.e., ``RS-ES") and with the three tree-guided regularization methods {\it Node $\ell_1$}, {\it Child $\ell_2$}, and {\it Descendant $\ell_2$} (denoted as ``RS-L1", ``RS-CL2", and ``RS-DL2", respectively), when applicable.
For competing methods, we consider the log-contrast model with lasso penalty (``LC-Lasso") \citep{lin2014variable} and the kernel penalized regression (KPR) model \citep{randolph2018kernel} with ridge kernel (``KPR-Ridge") and taxonomic kernel (``KPR-Tree").
Since the proposed relative-shift paradigm is fundamentally different from the log-contrast models, a direct comparison of parameter estimation accuracy is not meaningful.
Instead, we focus on the comparison of prediction accuracy and computing times under various generative models.
Each simulation study is repeated 100 times.
All tuning parameters are selected using cross validation.
\subsection{Study I: Equi-Sparsity Setting}
In this study, we first simulate relative abundance data $\boldsymbol{X}$ for $p=100$ taxa and $n=500$ samples (i.e., $100$ for training and $400$ for testing).
In particular, the compositions are generated from a logistic Gaussian distribution where we first simulate a Gaussian data matrix $\boldsymbol{Z}=(z_{ij})$ of size $n\times (p-1)$ and then obtain the compositional vector
$\boldsymbol{x}_{i}=\left(e^{z_{i1}} / \{1+\sum_{j=1}^{p-1}e^{z_{ij}}\},\cdots,\right.$ $\left.e^{z_{i,p-1}}/ \{1+\sum_{j=1}^{p-1}e^{z_{ij}}\},1 / \{1+\sum_{j=1}^{p-1}e^{z_{ij}}\}\right)^T$
for the $i$th subject.
The resulting relative abundance matrix $\boldsymbol{X}$ does not have zero values.
To further mimic reality, we truncate the data at 0.005 (i.e., set entries below 0.005 to zero), which leads to about 40\% zero entries.
Then we renormalize the data to be compositions and denote the zero-inflated matrix as $\boldsymbol{X}_0$.
In what follows, we assume the true model is generated from $\boldsymbol{X}$ but we only use $\boldsymbol{X}_0$ in model fitting and testing.
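The data-generating steps above can be sketched as follows. We assume the Gaussian entries are i.i.d. standard normal (the covariance is not fully specified in the text) and interpret the truncation as zeroing entries below 0.005.

```python
import numpy as np

# Sketch of the simulation design: logistic-Gaussian compositions,
# truncated at 0.005 and renormalized to induce zero inflation.
rng = np.random.default_rng(2024)
n, p = 500, 100

Z = rng.normal(size=(n, p - 1))                 # assumed i.i.d. standard normal
E = np.exp(Z)
denom = 1.0 + E.sum(axis=1, keepdims=True)
X = np.column_stack([E / denom, 1.0 / denom])   # rows sum to 1, no zeros

X0 = np.where(X < 0.005, 0.0, X)                # truncate small entries to zero
X0 = X0 / X0.sum(axis=1, keepdims=True)         # renormalize to compositions
zero_frac = (X0 == 0).mean()                    # roughly 40% in this setup
```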
Then we simulate the response from the relative-shift Model \eqref{RS}, where the coefficients for compositions are equi-sparse, with the first twenty entries being $-1$, the next ten entries being 2, and the remaining entries being 0.
Namely, from the prediction perspective, Taxa 1-20, Taxa 21-30, and Taxa 31-100 can be aggregated, respectively, without losing any information.
We further simulate random errors with variance such that the signal-to-noise ratio (SNR) is 1.
Since there is no extrinsic taxonomic tree information in this setting, we just compare RS-ES, LC-Lasso, and KPR-Ridge.
In particular, since both LC-Lasso and KPR-Ridge rely on logarithmic transformations which cannot handle zero, we follow the convention and substitute zero values with a preset small value (i.e., $0.001$) and renormalize data.
The out-of-sample mean squared prediction error (MSPE)
\begin{eqnarray*}
MSPE={1\over 400} {\sum_{i=1}^{400} (y_i-\widehat{y}_i)^2}
\end{eqnarray*}
and the computing time (including cross validation for tuning parameter selection) are assessed for each method.
The comparison can be found in Figure \ref{fig:sim1}.
RS-ES significantly outperforms the two transformation-based methods in prediction accuracy.
All three methods are computationally efficient, with model fitting times within a couple of seconds on a standard desktop computer (16GB RAM, Intel Core i7 CPU, 2.20GHz).
We remark that the MSPE of the competing methods may change as we vary the preset value of the zero surrogate.
This indicates that the transformation-based methods may be unstable in handling data with excessive zeros.
We also compare different methods when data are generated from a log-contrast model.
The proposed method is comparable to the log-contrast counterparts in this misspecified setting, especially when the SNR is low.
Detailed results are contained in {Section~\ref{app:add_sim} of Supplementary Materials}.
\begin{figure}[hbpt]
\centering
\includegraphics[width=2.5in]{Sim1_paper1}
\includegraphics[width=2.5in]{Sim1_paper2}
\caption{MSPE and computing time comparison in simulation Study I.}
\label{fig:sim1}
\end{figure}
\subsection{Study II: Tree-Guided Equi-Sparsity Setting}
In Study II, we include extrinsic information of a taxonomic tree.
The compositional data are generated in the same way as in Study I.
In particular, we assume there is a tree structure among the $p=100$ variables as shown in the left panel of Figure \ref{fig:simtree}, where every 10 consecutive leaf nodes share a common parent node and so on.
Guided by the tree structure, the true coefficient vector for the generative relative-shift model is set to be
\[
\boldsymbol{\beta}=\left(\mathbf{1}_{20}^T,\ -2\cdot\mathbf{1}_{10}^T,\ 0.5\cdot\mathbf{1}_{10}^T,\ 2\cdot\mathbf{1}_{40}^T,\ \boldsymbol{\xi}_{20}^T\right)^T,
\]
where $\mathbf{1}_q$ is a length-$q$ vector of ones and $\boldsymbol{\xi}_{20}$ is a length-20 vector filled with standard Gaussian random numbers.
Let us define the level of a node to be the number of connections between the node and the root plus 1.
The coefficients indicate that the first 20 variables on level 5 are aggregated to the internal node on level 3; the next 10 variables on level 5 are aggregated to the internal node on level 4, and so on.
The last 20 variables have distinct values, meaning that they cannot be further aggregated.
The feature aggregation is shown in the right panel of Figure \ref{fig:simtree}.
\begin{figure}[hbpt]
\centering
\subfloat{\includegraphics[width=.5\textwidth]{Sim3new_tree_left}}
\subfloat{\includegraphics[width=.5\textwidth]{Sim3new_tree_right}}
\caption{Left: The taxonomic tree structure among variables in Study II. The leaf node indices are in ascending order from left to right. Right: The equi-sparsity structure of the regression coefficients. Features with the same coefficient are aggregated to the common ancestor (i.e., the closest solid node).
}
\label{fig:simtree}
\end{figure}
We apply the three tree-guided regularization methods RS-L1, RS-CL2, and RS-DL2, as well as RS-ES, LC-Lasso, KPR-Ridge, and KPR-Tree to the data.
We remark that other than the three proposed methods, only KPR-Tree can take advantage of the tree structure.
In particular, KPR-Tree converts the tree structure to a patristic distance kernel.
In addition, we substitute zero values by a small preset value for LC-Lasso, KPR-Ridge, and KPR-Tree just as in Study I.
The comparison is shown in Figure \ref{fig:sim3}.
In the left panel, the three tree-guided relative-shift methods perform similarly and are significantly better than the other methods in terms of the prediction accuracy.
The next best method is KPR-Tree, which also takes into account the guidance from the extrinsic tree structure.
RS-ES is slightly worse than KPR-Tree, but significantly better than LC-Lasso and KPR-Ridge.
On the other hand, the superior prediction performance of the tree-guided relative-shift methods does come at a price, that is, a slightly higher computational cost (especially for RS-L1).
However, considering the scale of the problem and the cross validation scheme for tuning parameter selection, the computing time is quite acceptable.
\begin{figure}[hbpt]
\centering
\includegraphics[width=2.5in]{Sim3new_paper1}
\includegraphics[width=2.5in]{Sim3new_paper2}
\caption{MSPE and computing time comparison in simulation Study II.}
\label{fig:sim3}
\end{figure}
We also conduct additional simulations with various zero proportions and SNRs. The results are largely consistent with what has been reported here.
One thing we find intriguing is that the three proposed tree-guided regularization methods almost always have similar prediction performance.
We further investigate the three methods in a separate study in {Section~\ref{app:add_sim} of Supplementary Materials}.
In general, the three methods only differ slightly in the estimation of the intermediate coefficients.
They almost always have similar estimation (of the original coefficients) and prediction performance, and thus may be used exchangeably in practice.
A more comprehensive comparison between different regularization methods is left for future research.
\section{Application to Preterm Infant Gut Microbiome Study}\label{sec:real}
We apply the proposed relative-shift model with taxonomic-tree-guided regularization to a preterm infant gut microbiome study.
The study aims to investigate how gut microbiome is related to the neurodevelopment of preterm infants.
Data were collected at a Neonatal Intensive Care Unit (NICU) in the northeast US. Fecal samples of preterm infants were collected daily when available during the infant’s first month of postnatal age.
Bacterial DNA was isolated and extracted from each sample \citep{Bomar2011,cong2017influence}; V4 regions of the 16S rRNA gene were sequenced using the Illumina platform. Gender, birth weight, delivery type, and complications were recorded at birth, and medical procedures and feeding types were recorded throughout the infant’s stay. Infant neurobehavioral outcomes were measured when the infant reached 36-38 weeks of postmenstrual age, using the NICU Network Neurobehavioral Scale (NNNS) \citep{cong2017influence}.
To obtain the OTU table for analysis, we cluster and analyze the raw data using Quantitative Insights Into Microbial Ecology (QIIME) \citep{QIIME2010}.
The data are classified up to the genus level.
After proper processing, we obtain $p=62$ taxa, most at the genus level, on $n=34$ individuals.
The longitudinal data are averaged across the postnatal period for each infant, resulting in a single $34\times 62$ compositional matrix with 39.2\% zero entries.
Moreover, the taxonomic tree of the 62 taxa is also available (see Figure \ref{fig:realtree}).
Each taxon in the OTU table corresponds to a leaf node.
Most taxa are at the genus level while some are at higher levels.
The primary outcome is the continuous NNNS score, and we include 6 additional covariates (i.e., gender, delivery type, premature rupture of membranes, score for Neonatal Acute Physiology–Perinatal Extension-II (SNAPPE-II), birth weight, and percentage of feeding with mother’s breast milk) in our analysis.
\begin{figure}[hbpt]
\centering
\includegraphics[width=5in]{taxonomy}
\caption{Taxonomic tree of the 62 taxa in the NICU data.}
\label{fig:realtree}
\end{figure}
We apply RS-DL2 to the NICU data with covariate adjustment (both RS-CL2 and RS-L1 lead to very similar results and thus are omitted).
The tuning parameter is chosen by 5-fold cross validation.
The estimated coefficients for compositions are approximately equi-sparse, but not exactly so.
This is a common issue with the group-lasso-type penalty \citep{chen2012smoothing}.
To facilitate interpretation, we set a small threshold (i.e., $10^{-4}$) and truncate the groups of intermediate coefficients whose Frobenius norms are below the threshold.
As a result, we obtain highly interpretable equi-sparse coefficients for the 62 taxa; an illustration of the estimated coefficients for compositions on the taxonomic tree is provided in Figure \ref{fig:coef}.
Blank nodes have zero intermediate coefficients.
Taxa with the same coefficient are aggregated to the common ancestor (i.e., the closest solid node).
For instance, Taxa 2-6 have the same coefficient, indicating that the total composition of these five taxa (at the class level) is what matters to the prediction of the outcome.
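The truncation step described above is simple to implement; the following sketch (not the authors' code; the group names and shapes are hypothetical) zeroes out every group of intermediate coefficients whose Frobenius norm falls below the $10^{-4}$ threshold:

```python
import numpy as np

def truncate_groups(gamma_groups, threshold=1e-4):
    """Set to zero every group of intermediate coefficients whose
    Frobenius norm is below the threshold; the remaining groups
    determine the equi-sparse pattern of the original coefficients."""
    return {node: (np.zeros_like(g) if np.linalg.norm(g) < threshold else g)
            for node, g in gamma_groups.items()}

# Toy input: two near-zero groups and one active group.
groups = {"node_a": np.array([2e-5, -1e-5]),
          "node_b": np.array([0.31, 0.12]),
          "node_c": np.array([5e-6])}
clean = truncate_groups(groups)
```

In the actual analysis the groups correspond to nodes of the taxonomic tree, and the surviving groups determine which taxa share a coefficient.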
\begin{figure}[hbpt]
\centering
\includegraphics[width=5in]{tree_structure_1}
\caption{Estimated RS-DL2 coefficients for compositions in the NICU data. Taxa with the same color have the same coefficient and are aggregated to the common ancestor (i.e., the closest solid node).}
\label{fig:coef}
\end{figure}
The findings are largely consistent with our previous result at the order level \citep{sun2018log}, where Lactobacillales (Taxa 17-23), Clostridiales (Taxa 24-38), Enterobacteriales (Taxa 48-56), and other unclassified bacteria (Taxon 1) are identified to be significantly associated with the stress score in infants.
The proposed method provides more insightful results with better biological interpretation.
In particular, Figure \ref{fig:coef} shows that composition of the order Clostridiales (Taxa 24-38) as a whole matters \citep{sordillo2019association} rather than its child taxonomies at the family or genus level. In contrast, the order Enterobacteriales (Taxa 48-56) is important because the compositions of all genera in this order are relevant.
This is very intriguing because higher abundance of gut Enterobacteria is a characteristic pattern of dysbiosis \citep{zeng2017mechanisms,degruttola2016current}, and our results thus warrant further analysis at the genus level on how such dysbiosis contributes to neurodevelopmental deficits. In addition, our method also identifies other relevant taxa at different taxonomic levels, which serve as a basis for further clinical research.
For example, the class Actinobacteria (Taxa 2-6) is selected, which is involved in lipid metabolism \citep{painold2019step} and has been shown to be associated with mood disorders \citep{huang2019current}.
We further apply RS-ES and the competing methods (LC-Lasso, KPR-Ridge, and KPR-Tree) to the NICU data.
We conduct a leave-one-out cross validation (LOOCV) to compare the prediction accuracy of different methods. The tuning parameters for different methods are selected by CV on the training samples in each run.
The prediction squared errors (PSE) are summarized in Table \ref{tab:real}.
We remark that due to the small sample size ($n=34$), all methods have similar performance except for the LC-Lasso method, which is clearly inferior to the others.
This is mainly because LC-Lasso cannot properly adjust for covariates, and all coefficients (for covariates and for log-transformed compositions) are equally penalized.
Although the PSE of our methods is slightly higher than that of KPR, the superior interpretability of the estimated coefficients further warrants the use of the new regression framework in this application.
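The LOOCV procedure can be sketched as follows. This is a minimal illustration, not the authors' implementation; ridge regression stands in for the RS and KPR fits, and the data are synthetic:

```python
import numpy as np

def loocv_pse(X, y, fit, predict):
    """Leave-one-out CV: fit on n - 1 samples, record the squared
    error on the held-out sample, summarize by median and MAD."""
    n = len(y)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model = fit(X[mask], y[mask])
        errs[i] = (y[i] - predict(model, X[i])) ** 2
    med = np.median(errs)
    return med, np.median(np.abs(errs - med))

# Stand-in estimator (ridge regression), not the RS-DL2 fit itself.
def fit_ridge(X, y, lam=1.0):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def predict_ridge(beta, x):
    return x @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 5))                      # n = 34 as in the study
y = X @ np.array([1.0, -0.5, 0.0, 0.0, 0.2]) + 0.1 * rng.normal(size=34)
med, mad = loocv_pse(X, y, fit_ridge, predict_ridge)
```

With the actual estimators plugged in, the returned median and MAD correspond to the entries reported in Table \ref{tab:real}.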
\begin{table}[htbp]
\centering
\caption{The median (median absolute deviation) of PSE (multiplied by $10^3$) of different methods based on LOOCV of the NICU data.}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
Method & RS-DL2 & RS-ES & LC-Lasso & KPR-Ridge & KPR-Tree \\
\hline
PSE$*10^3$ & 4.47 (3.49) & 4.44 (3.85) & 6.81 (5.52) & 4.58 (3.95) & 4.18 (3.81) \\
\hline
\end{tabular}
\label{tab:real}
\end{table}
\section{Discussion}\label{sec:dis}
In this paper, we develop a novel relative-shift regression paradigm for microbiome compositional data.
The new framework regresses the response on compositions directly without transformation.
As a result, it adequately handles the unique features of microbiome data such as compositionality and zero inflation.
Moreover, the regression coefficients carry straightforward biological interpretation, that is, the contrast of the regression coefficients can be interpreted as the effect of certain shifts of abundances of a group of taxa while holding their sum fixed.
The relative-shift framework provides a flexible basis for supervised dimension reduction.
We develop different regularization methods, i.e., the equi-sparsity regularization and the taxonomic-tree-guided regularization, for feature aggregation.
In particular, the tree-guided regularization takes advantage of the extrinsic taxonomic structure among taxa and adaptively identifies relevant taxa at different taxonomic levels.
An efficient {\SPG} algorithm is devised to fit models with different regularization terms.
Numerical studies demonstrate that the proposed methods provide an effective and highly interpretable alternative for microbiome regression, especially in low-signal scenarios.
There are a few directions for future research.
First, same as the log-contrast models, the relative-shift model is a linear regression model. In practice, the effect of microbial concentration shifts on the response may be nonlinear. It is of particular interest to generalize the current framework to accommodate such nonlinear relationships.
Second, in the theoretical analysis, we have thus far not been able to show that the prediction error of the {\DL} estimator achieves the same order as those of the other two estimators. We will thoroughly investigate this issue in view of their similar numerical performance. Third, microbiome data are typically measured with errors. In particular, data at lower taxonomic ranks are more granular but less accurate. Although the proposed taxonomy-guided regularization method can strike a balance between data accuracy and resolution by using data across the tree, tailored error-in-variable methods are warranted to better account for measurement errors. This is especially relevant when the taxonomy is not readily available. Last but not least, longitudinal microbiome studies are burgeoning, as exemplified by the NICU study. There have been some recent developments on longitudinal regression methods for microbiome data \citep{sun2018log}.
The generalization of the relative-shift framework to the longitudinal setting calls for more investigation.
\section*{Acknowledgement}
The authors thank Dr.\ Xiaomei Cong for providing data from the NICU study (supported by U.S. National Institutes of Health Grant K23NR014674) and offering helpful discussions.
Gen Li's research was partially supported by the National Institutes of Health grant R03DE027773.
Kun Chen's research was partially supported by the National Science Foundation grant IIS-1718798.
\newpage
quant-ph/0508073
\section*{Acknowledgments}
Two of us, BB and RR, gratefully acknowledge the support of the National Fund for
Scientific Research (FNRS), Belgium, and the warm hospitality at PNTPM, Universit\'e Libre
de Bruxelles, where this work was carried out. CQ is a Research Director of the National
Fund for Scientific Research (FNRS), Belgium.\par
\newpage
quant-ph/0508200
\section{Introduction}
Many quantum algorithms (for example, Grover's
algorithm \cite{Grover} and quantum counting \cite{Counting})
can be analyzed in the query model where the input is
accessed via a black box that answers queries
about the values of input bits.
There are two main methods for proving lower bounds
on query algorithms: the adversary method \cite{Ambainis00} and
the polynomial method \cite{Beals+}, and both of them have been
studied in detail. The limits of the adversary method
are particularly well understood. The original adversary
method \cite{Ambainis00} has been generalized in several
different ways \cite{Ambainis03,LM,BSS}. \v Spalek and Szegedy
\cite{SS} then showed that all the generalizations are
equivalent and, for certain problems, cannot improve
the best known lower bounds. For example \cite{SS,Zhang},
the adversary methods of \cite{Ambainis03,LM,BSS} cannot
prove a lower bound on a total Boolean function that exceeds
$O(\sqrt{C_0(f) C_1(f)})$, where $C_0(f)$ and $C_1(f)$ are
the certificate complexities of $f$ on 0-inputs and 1-inputs.
This implies that the adversary methods
of \cite{Ambainis03,LM,BSS} cannot prove a tight lower
bound for element distinctness or improve the best known
lower bound for triangle finding.
(The complexity of element distinctness is $\Theta(N^{2/3})$
\cite{AS,Ambainis04} but the adversary method cannot prove
a bound better than $\Omega(\sqrt{N})$.
For triangle finding \cite{MSS}, the best known lower bound is
$\Omega(N)$ and it is known that it cannot be improved
using the adversary method.
It is, however, possible that the bound is not tight,
because the best algorithm uses $O(N^{1.3})$
queries.)
In this paper, we describe a new version of the quantum adversary
method which may not be subject to those limitations.
We then use the new method to prove a strong direct product
theorem for the {\em $K$-fold search} problem.
In the $K$-fold search problem, a black box
contains $x_1, \ldots, x_N$ such that $|\{i:x_i=1\}|=K$
and we have to find all $K$ values $i:x_i=1$.
This problem can be solved with $O(\sqrt{NK})$ queries.
It can be easily shown, using any of the previously known methods,
that $\Omega(\sqrt{NK})$ queries are required.
A more difficult problem is to show that $\Omega(\sqrt{NK})$
queries are required, even if the algorithm only has to
be correct with an exponentially small probability $c^{-K}$, $c>1$.
This result is known as the {\em strong direct product theorem}
for $K$-fold search. Besides being interesting on its own,
the strong direct product theorem is useful for proving
time-space tradeoffs for quantum sorting \cite{KSW} and lower
bounds on quantum computers that use advice \cite{Aaronson}.
The strong direct product theorem for quantum search
was first shown by Klauck et al. \cite{KSW}, using the polynomial
method. No proof using the adversary method
has been known and, as we show in section \ref{sec:previous},
the previously known adversary methods
are insufficient to prove a strong direct product theorem
for $K$-fold search.
\section{Preliminaries}
We consider the following problem.
{\bf Search for $K$ marked elements,} $SEARCH_K(N)$.
Given a black box containing $x_1, \ldots, x_N\in\{0, 1\}$ such that
$x_i=1$ for exactly $K$ values $i\in\{1, 2, \ldots, N\}$, find
all $K$ values $i_1, \ldots, i_K$ satisfying $x_{i_j}=1$.
This problem can be viewed as computing an ${N \choose K}$-valued
function $f(x_1, \ldots, x_N)$ of variables
$x_1, \ldots, x_N\in\{0, 1\}$, with values of the function
being indices for ${N \choose K}$ sets $S\subseteq [N]$ of
size $K$, in some canonical ordering of those sets.
We study this problem in the quantum query model
(for a survey on query model, see \cite{BWSurvey}).
In this model, the input bits can be accessed by queries to an oracle $X$
and the complexity of $f$ is the number of queries needed to compute $f$.
A quantum computation with $T$ queries
is just a sequence of unitary transformations
\[ U_0\rightarrow O\rightarrow U_1\rightarrow O\rightarrow\ldots
\rightarrow U_{T-1}\rightarrow O\rightarrow U_T.\]
The $U_j$'s can be arbitrary unitary transformations that do not depend
on the input bits $x_1, \ldots, x_N$. The $O$'s are query (oracle) transformations
which depend on $x_1, \ldots, x_N$.
To define $O$, we represent basis states as $|i, z\rangle$ where
$i$ consists of $\lceil \log (N+1)\rceil$ bits and
$z$ consists of all other bits. Then, $O_x$ maps
$\ket{0, z}$ to itself and
$\ket{i, z}$ to $(-1)^{x_i}\ket{i, z}$ for $i\in\{1, ..., N\}$
(i.e., we change phase depending on $x_i$, unless $i=0$ in which case we do
nothing).
The computation starts with a state $|0\rangle$.
Then, we apply $U_0$, $O_x$, $\ldots$, $O_x$,
$U_T$ and measure the final state.
The result of the computation is given by the
$\lceil \log_2 {N \choose K} \rceil$ rightmost
bits of the state obtained by the measurement,
which are interpreted as a description of
one of the ${N \choose K}$ subsets
$S\subseteq \{1, \ldots, N\}$, $|S|=K$.
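Since $O_x$ acts diagonally on the query register, it is easy to simulate on a small instance. The following sketch (query register only, ignoring the workspace bits $z$) illustrates the phase flip:

```python
import numpy as np

def phase_oracle(x):
    """O_x on the (N+1)-dimensional query register:
    |0> is left alone, |i> picks up the phase (-1)^{x_i}."""
    phases = np.ones(len(x) + 1)
    for i, bit in enumerate(x, start=1):
        if bit:
            phases[i] = -1.0
    return np.diag(phases)

x = [0, 1, 0, 1]                      # N = 4, marked set {2, 4}
O = phase_oracle(x)
state = np.full(5, 1 / np.sqrt(5))    # uniform over |0>, ..., |4>
after = O @ state                     # signs flipped at the marked indices
```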
\section{Overview of adversary method}
\label{sec:previous}
We describe the adversary method of \cite{Ambainis00}.
Let $S$ be a subset of the set of possible inputs $\{0, 1\}^N$.
We run the algorithm on a superposition of inputs in $S$.
More formally, let ${\cal H}_A$ be the workspace of the algorithm.
We consider a bipartite system ${\cal H}={\cal H}_A\otimes {\cal H}_I$
where ${\cal H}_I$ is an ``input subspace" spanned by
basis vectors $\ket{x}$ corresponding to inputs $x\in S$.
Let $U_T O U_{T-1} \ldots U_0$ be the sequence of unitary transformations
on ${\cal H}_A$ performed by the algorithm $A$
(with $U_0, \ldots, U_T$ being the transformations that
do not depend on the input and $O$ being the query transformations).
We transform it into a sequence of unitary transformations on ${\cal H}$.
A unitary transformation $U_i$ on ${\cal H}_A$ corresponds to
the transformation $U'_i=U_i\otimes I$ on the whole ${\cal H}$.
The query transformation $O$ corresponds to a transformation $O'$
that is equal to $O_x$ on subspace $H_A\otimes \ket{x}$.
We perform the sequence of transformations
$U'_T O' U'_{T-1}\ldots U'_0$ on the starting state
\[ \ket{\psi_{start}}=\ket{0}\otimes \sum_{x\in S} \alpha_x \ket{x} .\]
Then, the final state is
\[ \ket{\psi_{end}}= \sum_{x\in S}\alpha_x \ket{\psi_x}\otimes\ket{x} \]
where $\ket{\psi_x}$ is the final state of $A=U_T O U_{T-1} \ldots U_0$
on the input $x$. This follows from the fact that the restrictions
of $U'_T, O', U'_{T-1}, \ldots, U'_0$ to ${\cal H}_A\otimes\ket{x}$ are
$U_T$, $O_x$, $U_{T-1}$, $\ldots$, $U_0$ and these are exactly the
transformations of the algorithm $A$ on the input $x$.
Let $\rho_{end}$ be the reduced density matrix of the ${\cal H}_I$
register of the state $\ket{\psi_{end}}$.
The adversary method of \cite{Ambainis00,Ambainis03}
works by showing the following two statements
\begin{itemize}
\item
Let $x\in S$ and $y\in S$ be such that $f(x)\neq f(y)$
(where $f$ is the function that is being computed).
If the algorithm outputs the correct answer with
probability $1-\epsilon$ on both $x$ and $y$, then
$|(\rho_{end})_{x, y}| \leq
2\sqrt{\epsilon (1-\epsilon)} |\alpha_x| |\alpha_y|$.
\item
For any algorithm that uses $T$ queries, there are
inputs $x, y\in S$ such that $|(\rho_{end})_{x, y}| >
2\sqrt{\epsilon (1-\epsilon)} |\alpha_x| |\alpha_y|$
and $f(x)\neq f(y)$.
\end{itemize}
These two statements together imply that any algorithm
computing $f$ must use more than $T$ queries.
An equivalent approach \cite{HNS,Ambainis03} is to consider
the inner products $\langle \psi_x\ket{\psi_y}$ between
the final states $\ket{\psi_x}$ and $\ket{\psi_y}$
of the algorithm on inputs $x$ and $y$.
Then, $|(\rho_{end})_{x, y}| \leq
2\sqrt{\epsilon (1-\epsilon)} |\alpha_x| |\alpha_y|$
is equivalent to $|\langle \psi_x\ket{\psi_y}|\leq
2\sqrt{\epsilon (1-\epsilon)}$.
As a result, both of the above statements can be described
in terms of inner products $\langle \psi_x\ket{\psi_y}$,
without explicitly introducing the register ${\cal H}_I$.
The first statement says that, for the algorithm
to succeed on inputs $x, y$ such that $f(x) \neq f(y)$,
the states $\ket{\psi_x}$ and $\ket{\psi_y}$ must be
sufficiently far apart one from another
(so that the inner product $|\langle \psi_x\ket{\psi_y}|$
is at most $2\sqrt{\epsilon(1-\epsilon)}$).
The second statement says that this is impossible
if the algorithm only uses $T$ queries.
This approach breaks down if we consider computing a
function $f$ with success probability $p < 1/2$.
($f$ has to have more than 2 values for this task
to be nontrivial.)
Then, $\ket{\psi_x}$ and $\ket{\psi_y}$ could be the same
and the algorithm may still succeed on both inputs,
if it outputs $x$ with probability 1/2 and $y$ with
probability 1/2.
In the case of strong direct product theorems,
the situation is even more difficult.
Since the algorithm only has to be correct with
a probability $c^{-K}$, the algorithm could have
almost the same final state on $c^{K}$ different
inputs and still succeed on every one of them.
In this paper, we present a new method that does not
suffer from this problem. Our method, described
in the next section, uses the idea
of augmenting the algorithm with an input register
${\cal H}_I$, together with two new ingredients:
\begin{enumerate}
\item
{\bf Symmetrization.}
We symmetrize the algorithm by applying a random
permutation $\pi\in S_N$ to the input $x_1, \ldots, x_N$.
\item
{\bf Eigenspace analysis.}
We study the eigenspaces of $\rho_{start}$, $\rho_{end}$
and density matrices describing the state of ${\cal H}_I$ at
intermediate steps and use them to bound the progress
of the algorithm.
\end{enumerate}
The eigenspace analysis is the main new technique.
Symmetrization is necessary to simplify the structure
of the eigenspaces, to make the eigenspace analysis possible.
\section{Our result}
\begin{theorem}
There exist $\epsilon$ and $c$ satisfying $\epsilon>0$, $0<c<1$
such that, for any $K\leq N/2$, solving $SEARCH_K(N)$ with probability
at least $c^K$ requires $(\epsilon-o(1)) \sqrt{NK}$ queries.
\end{theorem}
\noindent {\bf Proof: }
Let ${\cal A}$ be an algorithm for $SEARCH_K(N)$ that uses $T\leq \epsilon\sqrt{NK}$
queries.
We first ``symmetrize" ${\cal A}$ by adding an extra register ${\cal H}_P$
holding a permutation $\pi\in S_N$. Initially, ${\cal H}_P$ holds a uniform superposition
of all permutations $\pi$: $\frac{1}{\sqrt{N!}}\sum_{\pi\in S_N} \ket{\pi}$.
Before each query $O$, we insert a transformation
$\ket{i}\ket{\pi}\rightarrow\ket{\pi^{-1}(i)}\ket{\pi}$
on the part of the state containing the index $i$ to be queried
and ${\cal H}_P$. After the query, we insert a transformation
$\ket{i}\ket{\pi}\rightarrow\ket{\pi(i)}\ket{\pi}$.
At the end of the algorithm, we apply the transformation
$\ket{i_1} \ldots \ket{i_K}\ket{\pi}\rightarrow \ket{\pi^{-1}(i_1)}\ldots
\ket{\pi^{-1}(i_K)}\ket{\pi}$. The effect of the symmetrization
is that, on the subspace $\ket{s}\otimes \ket{\pi}$,
the algorithm is effectively running on the input $x_1$, $\ldots$, $x_N$
with $x_{\pi(i_1)}=\ldots=x_{\pi(i_K)}=1$.
If the original algorithm ${\cal A}$ succeeds on every input $(x_1, \ldots, x_N)$
with probability at least $\epsilon$, the symmetrized algorithm
also succeeds with probability at least $\epsilon$, since its
success probability is just the average of the success probabilities of ${\cal A}$
over all $(x_1, \ldots, x_N)$ with exactly $K$ values $x_i=1$.
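The net effect of the relabeling transformations around a query can be checked numerically: conjugating $O_x$ by $\ket{i}\rightarrow\ket{\pi^{-1}(i)}$ yields the oracle for the relabeled input, whose marked set is the image of the original marked set under $\pi$. A small sketch (query register only, with a concrete $\pi$):

```python
import numpy as np

def phase_oracle(x):
    # Diagonal oracle on indices 1..N; |0> untouched.
    return np.diag([1.0] + [(-1.0) ** b for b in x])

def relabel(perm):
    """Unitary sending |i> -> |perm^{-1}(i)> for i >= 1, fixing |0>.
    perm is a dict on {1, ..., N}."""
    n = len(perm) + 1
    T = np.zeros((n, n))
    T[0, 0] = 1.0
    inv = {v: k for k, v in perm.items()}
    for i in range(1, n):
        T[inv[i], i] = 1.0      # column |i> maps to row |perm^{-1}(i)>
    return T

x = [1, 0, 0, 1]                   # marked set {1, 4}
perm = {1: 2, 2: 3, 3: 4, 4: 1}    # the permutation pi
T = relabel(perm)
# Relabel before the query, query, relabel back:
net = T.T @ phase_oracle(x) @ T
# This equals the oracle for x' with x'_{pi(i)} = x_i,
# i.e. marked set {pi(1), pi(4)} = {2, 1}.
x_prime = [0] * len(x)
for i, b in enumerate(x, start=1):
    x_prime[perm[i] - 1] = b
```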
Next, we recast ${\cal A}$ into a different form, using a register that stores
the input $x_1, \ldots, x_N$, as in section \ref{sec:previous}.
Let ${\cal H}_A$ be the Hilbert space on which the symmetrized version of
${\cal A}$ operates. Let ${\cal H}_I$ be an ${N \choose K}$-dimensional Hilbert space
whose basis states correspond to inputs $(x_1, \ldots, x_N)$
with exactly $K$ values $i:x_i=1$. We transform ${\cal A}$ into a sequence of
transformations on a Hilbert space ${\cal H}={\cal H}_A\otimes{\cal H}_I$.
A non-query transformation $U$ on ${\cal H}_A$ is replaced with $U\otimes I$ on ${\cal H}$.
A query is replaced by a transformation $O$ that is equal
to $O_{x_1, \ldots, x_N}\otimes I$
on the subspace consisting of states of the form
$\ket{s}_A\otimes\ket{x_1 \ldots x_N}_I$.
The starting state of the algorithm on Hilbert space ${\cal H}$ is
$\ket{\varphi_0}=\ket{\psi_{start}}_A\otimes\ket{\psi_0}_I$
where $\ket{\psi_{start}}$ is
the starting state of ${\cal A}$ as an algorithm acting
on ${\cal H}_A$ and $\ket{\psi_0}$ is
the uniform superposition of all basis states of ${\cal H}_I$:
\[ \ket{\psi_0}=\frac{1}{\sqrt{N\choose K}}
\sum_{x_1, \ldots, x_N:x_1+\ldots+x_N=K}
\ket{x_1\ldots x_N} .\]
Let $\ket{\psi_t}$ be the state of the algorithm ${\cal A}$, as a sequence of transformations
on ${\cal H}$, after the $t^{\rm th}$ query. Let $\rho_t$ be the mixed state obtained from
$\ket{\psi_t}$ by tracing out the ${\cal H}_A$ register.
We claim that the states $\rho_t$ have a special form,
due to our symmetrization step.
\begin{lemma}
\label{lem:sym}
The entries $(\rho_t)_{x, y}$ are the same for all $x=(x_1, \ldots, x_N)$,
$y=(y_1, \ldots, y_N)$ with the same cardinality of the set $\{l:x_l=y_l=1\}$.
\end{lemma}
\noindent {\bf Proof: }
Since $\rho_t$ is independent of how the ${\cal H}_A\otimes {\cal H}_P$ register is traced
out, we first measure ${\cal H}_P$ (in the $\ket{\pi}$ basis) and then measure ${\cal H}_A$
(arbitrarily). When measuring ${\cal H}_P$, every $\pi$ is obtained with equal
probability. Let $\rho_{t, \pi}$ be the reduced density matrix of ${\cal H}_I$,
conditioned on the measurement of ${\cal H}_P$ giving $\pi$. Then,
\[ \rho_t =\sum_{\pi} \frac{1}{N!} \rho_{t, \pi} .\]
The entry $(\rho_{t, \pi})_{x, y}$ is the same
as the entry $(\rho_{t, id})_{\pi^{-1}(x), \pi^{-1}(y)}$ because
the symmetrization by $\pi$ maps $\pi^{-1}(x), \pi^{-1}(y)$ to $x, y$.
For every $x, y$, $x', y'$ with $|\{i:x_i=y_i=1\}|=|\{i:x'_i=y'_i=1\}|$,
there is an equal number of permutations $\pi$ mapping $\pi(x)=x'$,
$\pi(y)=y'$. Therefore, $(\rho_t)_{x, y}$ is the average of
$(\rho_{t, id})_{x', y'}$
over all $x', y'$ with $|\{l:x_l=y_l=1\}|=|\{l:x'_l=y'_l=1\}|$.
This means that $(\rho_t)_{x, y}$ only depends on $|\{l:x_l=y_l=1\}|$.
\hfill{\rule{2mm}{2mm}}
All ${N \choose K}\times {N\choose K}$ matrices with
this property share the same eigenspaces.
Namely \cite{Knuth}, its eigenspaces are $S_0$, $S_1$, $\ldots$, $S_K$ where
$T_0=S_0$ consists of multiples of $\ket{\psi_0}$ and, for $j>0$,
$S_j=T_j-T_{j-1}$, with $T_j$ being the space spanned by all states
\[ \ket{\psi_{i_1, \ldots, i_j}}=\frac{1}{\sqrt{N \choose K-j}}
\mathop{\mathop{\sum_{x_1, \ldots, x_N:}}_{x_1+\ldots+x_N=K,}}_{x_{i_1}=\ldots=x_{i_j}=1}
\ket{x_1\ldots x_N} .\]
Let $\tau_j$ be the completely mixed state over the subspace $S_j$.
\begin{lemma}
\label{lem:eigen}
There exist $p_{t,0}\geq 0$, $\ldots$, $p_{t, K}\geq 0$ such that
$\rho_t=\sum_{j=0}^K p_{t, j}\tau_j$.
\end{lemma}
\noindent {\bf Proof: }
According to \cite{Knuth}, $S_0$, $\ldots$, $S_K$ are the eigenspaces of
$\rho_t$. Therefore, $\rho_t$ is a linear combination of
the projectors to $S_0$, $\ldots$, $S_K$.
Since $\tau_j$ is a multiple of the projector to $S_j$, we have
\[ \rho_t=\sum_{j=0}^K p_{t, j}\tau_j .\]
Since $\rho_t$ is a density matrix, it must be positive semidefinite.
This means that $p_{t,0}\geq 0$, $\ldots$, $p_{t, K}\geq 0$.
\hfill{\rule{2mm}{2mm}}
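The eigenspace structure used in Lemmas \ref{lem:sym} and \ref{lem:eigen} can be verified numerically on a small instance: any matrix whose $(x, y)$ entry depends only on $|\{l: x_l=y_l=1\}|$ has at most $K+1$ distinct eigenvalues, with multiplicities $\dim S_j = {N \choose j} - {N \choose j-1}$. A sketch for $N=5$, $K=2$ (the overlap function $g$ is an arbitrary choice):

```python
import numpy as np
from itertools import combinations
from math import comb

N, K = 5, 2
subsets = list(combinations(range(N), K))        # basis of H_I
g = {0: 0.1, 1: 0.5, 2: 2.0}                     # arbitrary value per overlap size
M = np.array([[g[len(set(a) & set(b))] for b in subsets]
              for a in subsets])

eigvals = np.round(np.linalg.eigvalsh(M), 6)
_, counts = np.unique(eigvals, return_counts=True)
# Predicted eigenspace dimensions: dim S_j = C(N, j) - C(N, j-1).
dims = sorted(comb(N, j) - (comb(N, j - 1) if j > 0 else 0)
              for j in range(K + 1))
```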
Let $q_{t, j}=p_{t, j}+p_{t, j+1}+\ldots+p_{t, K}$.
The theorem now follows from the following lemmas.
\begin{lemma}
$p_{0, 0}=1$, $p_{0, j}=0$ for $j>0$.
\end{lemma}
\noindent {\bf Proof: }
The state $\ket{\varphi_0}$ is just $\ket{\psi_{start}}\otimes\ket{\psi_0}$.
Tracing out $\ket{\psi_{start}}$ leaves the state
$\rho_0=\ket{\psi_0}\bra{\psi_0}$.
\hfill{\rule{2mm}{2mm}}
\begin{lemma}
\label{lem:main1}
For all $j\in \{1, \ldots, K\}$ and all $t$,
$q_{t+1, j+1}\leq q_{t, j+1}+
\frac{4\sqrt{K}}{\sqrt{N}} q_{t, j}$
\end{lemma}
\noindent {\bf Proof: }
In section \ref{sec:proof1}.
\hfill{\rule{2mm}{2mm}}
\begin{lemma}
$q_{t, j}\leq {t \choose j} \left(\frac{4\sqrt{K}}{\sqrt{N}}\right)^j$.
\end{lemma}
\noindent {\bf Proof: }
By induction on $t$. The base case, $t=0$ follows immediately
from $p_{0, 0}=1$ and $p_{0, 1}=\ldots=p_{0, K}=0$.
For the inductive case, we have
\[ q_{t+1, j}\leq q_{t, j}+\frac{4\sqrt{K}}{\sqrt{N}} q_{t, j-1}
\leq {t \choose j} \left(\frac{4\sqrt{K}}{\sqrt{N}}\right)^j +
\frac{4\sqrt{K}}{\sqrt{N}} {t \choose j-1}
\left(\frac{4\sqrt{K}}{\sqrt{N}}\right)^{j-1} \]
\[ \leq \left( {t \choose j}+{t \choose j-1} \right)
\left (\frac{4\sqrt{K}}{\sqrt{N}}\right)^j =
{t+1\choose j} \left(\frac{4\sqrt{K}}{\sqrt{N}}\right)^j ,\]
with the first inequality following from Lemma \ref{lem:main1}
and the second inequality following from the inductive assumption.
\hfill{\rule{2mm}{2mm}}
\begin{lemma}
\label{lem:main1c}
If $t\leq 0.03 \sqrt{NK}$, then $p_{t, j}<0.65^j$ for all $j>K/2$.
\end{lemma}
\noindent {\bf Proof: }
We have
\[ q_{t, j}\leq {t \choose j} \left(\frac{4\sqrt{K}}{\sqrt{N}}\right)^j <
\frac{t^j}{j!} \left(\frac{4\sqrt{K}}{\sqrt{N}}\right)^j \]
\[\leq \frac{t^j e^j}{j^j} \left(\frac{4\sqrt{K}}{\sqrt{N}}\right)^j =
\left(\frac{4\sqrt{K} e t}{\sqrt{N} j} \right)^j ,\]
where the third inequality follows from $j!\geq (\frac{j}{e})^j$,
which is a consequence of Stirling's formula.
Let $j>K/2$ and $t\leq 0.03\sqrt{NK}$.
Then,
\[ \frac{4\sqrt{K} e t}{\sqrt{N} j}
\leq \frac{0.12 e \sqrt{K} \sqrt{NK}}{\sqrt{N} K/2} < 0.65,\]
implying the lemma.
\hfill{\rule{2mm}{2mm}}
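The two elementary inequalities used in this proof, ${t \choose j} \leq t^j/j!$ and $t^j/j! \leq (te/j)^j$, can be spot-checked numerically:

```python
from math import comb, e, factorial

# First step checked exactly in integer arithmetic, second in floats;
# the second has a Stirling margin of at least sqrt(2*pi*j).
for t in range(1, 60):
    for j in range(1, t + 1):
        assert comb(t, j) * factorial(j) <= t ** j
        assert t ** j / factorial(j) <= (t * e / j) ** j
```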
\begin{lemma}
\label{lem:main2}
The success probability of ${\cal A}$ is at most
\[ \frac{{N \choose K/2}}{{N\choose K}}+
4 \sqrt{\sum_{j=K/2+1}^K p_{T, j}} .\]
\end{lemma}
\noindent {\bf Proof: }
In section \ref{sec:proof2}.
\hfill{\rule{2mm}{2mm}}
To complete the proof, given these two lemmas,
we choose a constant $c>\sqrt[4]{0.65}=0.8979\ldots$ and set $\epsilon=0.04$.
Then, by Lemma \ref{lem:main2}, the success probability
of ${\cal A}$ is at most
\[ \frac{{N\choose K/2}}{{N \choose K}}
+ 4 \sqrt{\frac{K}{2} 0.65^{K/2}} .\]
The first term is equal to
\[ \frac{{N\choose K/2}}{{N \choose K}} =
\frac{K! (N-K)!}{(K/2)! (N-K/2)!} \leq \frac{K!}{(K/2)! (N-K)^{K/2}}
\]
\[ = O\left(\frac{(K/e)^K}{(K/2e)^{K/2} (N-K)^{K/2}}\right) =
O\left( \left(\frac{2K}{e(N-K)}\right)^{K/2} \right) \]
\[ = O\left(\left(\frac{2}{e}\right)^{K/2}\right) = O(0.857...^K) ,\]
with the third step following from Stirling's approximation
and the fifth step following from $K<N/2$.
The second term, $4\sqrt{\frac{K}{2} 0.65^{K/2}}$, is less than $c^K/2$ if
$K$ is sufficiently large.
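The bound on the first term can also be checked exactly: the same chain of estimates gives the slightly weaker but non-asymptotic inequality ${N \choose K/2}/{N \choose K} \leq (K/(N-K))^{K/2}$, via $K!/(K/2)! \leq K^{K/2}$:

```python
from math import comb

# Exact version of the chain, using
# (N - K/2)! / (N - K)! >= (N - K)^{K/2} and K! / (K/2)! <= K^{K/2}.
for N in range(8, 64, 4):
    for K in range(2, N // 2 + 1, 2):          # even K <= N/2
        ratio = comb(N, K // 2) / comb(N, K)
        assert ratio <= (K / (N - K)) ** (K // 2)
```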
It remains to prove the two lemmas.
\section{Proof of Lemma \ref{lem:main1}}
\label{sec:proof1}
We decompose the state $\ket{\psi_t}$ as
$\sum_{i=0}^N a_i \ket{\psi_{t, i}}$, with
$\ket{\psi_{t, i}}$ being the part in which the query register contains $\ket{i}$.
Because of symmetrization, we must have $|a_1|=|a_2|=\ldots=|a_N|$.
Let $\rho_{t, i}=\ket{\psi_{t, i}}\bra{\psi_{t, i}}$.
Then,
\begin{equation}
\label{eq-2507a}
\rho_t=\sum_{i=0}^N a^2_i \rho_{t, i}.
\end{equation}
For $i>0$, we have
\begin{Claim}
Let $i\in\{1, \ldots, N\}$.
The entry $(\rho_{t, i})_{x, y}$ only depends on $x_i, y_i$ and the cardinality
of $\{l:l\neq i, x_l=y_l=1\}$.
\end{Claim}
\noindent {\bf Proof: }
Similar to lemma \ref{lem:sym}.
\hfill{\rule{2mm}{2mm}}
We now describe the eigenspaces of matrices $\rho_{t, i}$.
The proofs of some claims are postponed to section \ref{sec:eigen}.
We define the following subspaces of states.
Let $T^{i, 0}_{j}$ be the subspace spanned by all states
\[ \ket{\psi^{i,0}_{i_1, \ldots, i_j}}=\frac{1}{\sqrt{N-j-1 \choose K-j}}
\mathop{ \sum_{x:|x|=K}}_{x_{i_1}=\ldots=x_{i_j}=1, x_i=0}
\ket{x_1\ldots x_N} \]
and $T^{i, 1}_{j}$ be the subspace spanned by all states
\[ \ket{\psi^{i,1}_{i_1, \ldots, i_j}}=\frac{1}{\sqrt{N-j-1 \choose K-j-1}}
\mathop{ \sum_{x:|x|=K}}_{x_{i_1}=\ldots=x_{i_j}=1, x_i=1}
\ket{x_1\ldots x_N} .\]
Let $S^{i, 0}_{j}=T^{i, 0}_{j}\cap (T^{i, 0}_{j-1})^{\perp}$ and
$S^{i, 1}_{j}=T^{i, 1}_{j}\cap (T^{i, 1}_{j-1})^{\perp}$.
Equivalently, we can define $S^{i, 0}_j$ and $S^{i, 1}_j$ as the
subspaces spanned by the states $\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}$
and $\ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}$, respectively, with
\[ \ket{\tilde{\psi}^{i,l}_{i_1, \ldots, i_j}} = P_{(T^{i, l}_{j-1})^{\perp}}
\ket{\psi^{i,l}_{i_1, \ldots, i_j}} .\]
Let $S^i_{\alpha, \beta, j}$ be the subspace spanned by all states
\begin{equation}
\label{eq-2507}
\alpha\frac{\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}}{
\|\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}\|}+
\beta \frac{\ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}}{
\|\ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}\|} .
\end{equation}
\begin{Claim}
\label{claim:eigen1}
Every eigenspace of $\rho_{t,i}$ is
a direct sum of subspaces $S^i_{\alpha, \beta, j}$
for some $\alpha$, $\beta$, $j$.
\end{Claim}
\noindent {\bf Proof: }
In section \ref{sec:eigen}.
\hfill{\rule{2mm}{2mm}}
Let $\tau^i_{\alpha, \beta, j}$ be the completely mixed state
over $S^i_{\alpha, \beta, j}$.
Similarly to lemma \ref{lem:eigen}, we can write $\rho_{t, i}$ as
\begin{equation}
\label{eq-2507b}
\rho_{t,i}=\sum_{(\alpha, \beta, j)\in A_{t, i}}
p^i_{\alpha, \beta, j} \tau^{i}_{\alpha, \beta, j} ,
\end{equation}
where $(\alpha, \beta, j)$ range over some finite set $A_{t, i}$.
(This set is finite because the ${\cal H}_I$ register holding $\ket{x_1\ldots x_N}$
is finite dimensional and, therefore, decomposes
into a direct sum of finitely many eigenspaces.)
For every pair $(\alpha, \beta, j)\in A_{t, i}$, we
normalize $\alpha, \beta$ by multiplying them by the same constant
so that $\alpha^2+\beta^2=1$.
Querying $x_i$ transforms this state to
\[ \rho'_{t,i}=\sum_{(\alpha, \beta, j)\in A_{t,i}}
p^i_{\alpha, \beta, j} \tau^{i}_{\alpha, -\beta, j} ,\]
because $\ket{\tilde{\psi}^{i, l}_{i_1, \ldots, i_j}}$ is a superposition
of $\ket{x}$ with $x_i=l$ and, therefore, a query leaves
$\ket{\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}}$ unchanged and flips a phase
on $\ket{\tilde{\psi}^{i, 1}_{i_1, \ldots, i_j}}$.
If $i=0$, we have $\rho'_{t, 0}=\rho_{t, 0}$, because, if
the query register contains $\ket{0}$, the query maps any state to itself,
thus leaving $\rho_{t, 0}$ unchanged.
\begin{Claim}
\label{claim:relate}
Let $\alpha_0=\sqrt{\frac{N-K}{N-j}}
\|\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}\|$ and
$\beta_0=\sqrt{\frac{K-j}{N-j}} \|\tilde{\psi}^{i, 1}_{i_1, \ldots, i_j}\|$.
\begin{enumerate}
\item[(i)]
$S^i_{\alpha_0, \beta_0, j}\subseteq S_j$;
\item[(ii)]
$S^i_{\beta_0, -\alpha_0, j}\subseteq S_{j+1}$.
\end{enumerate}
\end{Claim}
\noindent {\bf Proof: }
In section \ref{sec:eigen}.
\hfill{\rule{2mm}{2mm}}
\begin{Corollary}
\label{cor:contain}
For any $\alpha$, $\beta$, $S^i_{\alpha, \beta, j}\subseteq S_j\cup S_{j+1}$.
\end{Corollary}
\noindent {\bf Proof: }
We have $S^i_{\alpha, \beta, j}\subseteq S^{i, 0}_j \cup S^{i, 1}_j$,
since $S^i_{\alpha, \beta, j}$ is spanned by linear combinations of
states $\ket{\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}}$ (which belong to $S^{i, 0}_j$) and
states $\ket{\tilde{\psi}^{i, 1}_{i_1, \ldots, i_j}}$ (which belong to $S^{i, 1}_j$).
As shown in the proof of claim \ref{claim:relate} above,
\[ S^{i, 0}_j\cup S^{i, 1}_j\subseteq S^i_{\alpha_0, \beta_0, j}\cup
S^i_{-\beta_0, \alpha_0, j} \subseteq S_j \cup S_{j+1} .\]
\hfill{\rule{2mm}{2mm}}
The next claim quantifies the overlap between $S^i_{\alpha, \beta, j}$
and $S_{j+1}$.
\begin{Claim}
\label{claim:2dim}
\[ Tr P_{S_{j+1}} \tau^{i}_{\alpha, \beta, j} =
\frac{|\alpha\beta_0-\beta\alpha_0|^2}{\alpha_0^2 +\beta_0^2} \]
\end{Claim}
\noindent {\bf Proof: }
In section \ref{sec:eigen}.
\hfill{\rule{2mm}{2mm}}
To be able to use this bound, we also need to bound $\alpha_0$ and $\beta_0$.
\begin{Claim}
\label{claim:relatebound}
$\frac{\beta_0}{\sqrt{\alpha_0^2+\beta_0^2}}\leq
\sqrt{\frac{4(K-j)}{N+3K-4j}}$.
\end{Claim}
\noindent {\bf Proof: }
In section \ref{sec:eigen}.
\hfill{\rule{2mm}{2mm}}
We can now complete the proof of lemma \ref{lem:main1}.
By projecting both sides of $\rho_t=\sum_{j'} p_{t, j'}\tau_{j'}$
to $(T_{j})^{\perp}=S_{j+1}\cup \ldots \cup S_K$
and taking the trace, we get
\begin{equation}
\label{eq-project}
Tr P_{(T_{j})^{\perp}} \rho_t =
\sum_{j'=0}^K p_{t, j'} Tr P_{(T_{j})^{\perp}} \tau_{j'}=
\sum_{j'=j+1}^K p_{t, j'} =q_{t, j+1},
\end{equation}
with the second equality following because
the states $\tau_{j'}$ are uniform mixtures over subspaces $S_{j'}$
and $S_0, \ldots, S_{j}$ are contained in $T_{j}$ while
$S_{j+1}, \ldots, S_K$ are contained in $(T_{j})^{\perp}$.
Because of equations (\ref{eq-2507a}), (\ref{eq-2507aa}) and (\ref{eq-2507b}),
this means that
\begin{equation}
\label{eq-2607b}
q_{t, j+1} = a^2_0 Tr P_{(T_{j})^{\perp}} \rho_{t, 0} +
\sum_{i=1}^N a^2_i \sum_{(\alpha, \beta, j') \in A_{t, i}}
p^{i}_{\alpha, \beta, j'}
Tr P_{(T_{j})^{\perp}} \tau^{i}_{\alpha, \beta, j'} .
\end{equation}
Decomposing the state after the query in a similar way, we get
\[ q_{t+1, j+1} = a^2_0
Tr P_{(T_{j})^{\perp}} \rho'_{t, 0} +
\sum_{i=1}^N a^2_i \sum_{(\alpha, \beta, j')\in A_{t, i}}
p^{i}_{\alpha, \beta, j'} Tr P_{(T_{j})^{\perp}}
\tau^{i}_{\alpha, -\beta, j'} .\]
By subtracting the two sums and using $\rho'_{t, 0}=\rho_{t, 0}$, we get
\begin{equation}
\label{eq-2607}
q_{t+1, j+1}-q_{t, j+1}= \sum_{i=1}^N a^2_i
\sum_{(\alpha, \beta, j')\in A_{t, i}}
p^{i}_{\alpha, \beta, j'} Tr P_{(T_{j})^{\perp}}
(\tau^{i}_{\alpha, -\beta, j'} - \tau^{i}_{\alpha, \beta, j'}) .
\end{equation}
We now claim that all the terms in this sum with $j'\neq j$ are 0.
For $j'<j$, $S^i_{\alpha, \beta, j'}\subseteq T_{j'+1}\subseteq T_{j}$,
implying that $Tr P_{(T_{j})^{\perp}} \tau^{i}_{\alpha, \beta, j'} = 0$
and, similarly, $Tr P_{(T_{j})^{\perp}} \tau^{i}_{\alpha, -\beta, j'} = 0$.
For $j'>j$, $S^i_{\alpha, \beta, j'}\subseteq S_{j'}\cup S_{j'+1}
\subseteq (T_{j})^{\perp}$,
implying that
\[ Tr P_{(T_{j})^{\perp}} \tau^{i}_{\alpha, \beta, j'} = 1, \mbox{~~}
Tr P_{(T_{j})^{\perp}} \tau^{i}_{\alpha, -\beta, j'} = 1 \]
and the difference of the two is 0.
By removing those terms from (\ref{eq-2607}), we get
\begin{equation}
\label{eq-2607new}
q_{t+1, j+1}-q_{t, j+1}= \sum_{i=1}^N a^2_i
\sum_{(\alpha, \beta, j)\in A_{t, i}}
p^{i}_{\alpha, \beta, j} Tr P_{(T_{j})^{\perp}}
(\tau^{i}_{\alpha, -\beta, j} - \tau^{i}_{\alpha, \beta, j}) .
\end{equation}
We have
\[ Tr P_{(T_{j})^{\perp}} (\tau^{i}_{\alpha, -\beta, j} -
\tau^{i}_{\alpha, \beta, j} ) =
Tr P_{S_{j+1}} (\tau^{i}_{\alpha, -\beta, j} -
\tau^{i}_{\alpha, \beta, j} ) \]
\[ =
\frac{|\alpha\beta_0+\beta\alpha_0|^2}{\alpha_0^2 +\beta_0^2} -
\frac{|\alpha\beta_0-\beta\alpha_0|^2}{\alpha_0^2 +\beta_0^2} ,\]
with the first equality following from Corollary \ref{cor:contain},
$S_{j}\subseteq T_{j}$ and $S_{j+1}\subseteq (T_{j})^{\perp}$
and the second equality following from Claim \ref{claim:2dim}.
This is at most
\[ 4 \frac{|\alpha \beta \alpha_0 \beta_0|}{\alpha_0^2 +\beta_0^2}
\leq 2 \frac{\alpha_0 \beta_0}{\alpha_0^2 +\beta_0^2} \]
\[ =2 \frac{\alpha_0}{\sqrt{\alpha_0^2+\beta_0^2}}
\frac{\beta_0}{\sqrt{\alpha_0^2+\beta_0^2}}
\leq 2 \sqrt{\frac{4(K-j)}{N+3K-4j}} \leq 2 \sqrt{\frac{4K}{N}} ,\]
with the first inequality following from
$|\alpha\beta|\leq \frac{|\alpha|^2+|\beta|^2}{2}=\frac{1}{2}$
and the second inequality following from Claim \ref{claim:relatebound}
and $\frac{\alpha_0}{\sqrt{\alpha_0^2+\beta_0^2}}\leq 1$.
Together with equation (\ref{eq-2607new}), this means
\begin{equation}
\label{eq-almost}
q_{t+1, j+1}-q_{t, j+1} \leq \frac{4\sqrt{K}}{\sqrt{N}}
\sum_{i=1}^N a_i^2 \sum_{(\alpha, \beta, j)\in A_{t, i}}
p^{i}_{\alpha, \beta, j} .
\end{equation}
Similarly to equation (\ref{eq-project}), we have
\[ p_{t, j+1}+p_{t, j} = Tr P_{(S_j \cup S_{j+1})} \rho_t .\]
We can then express the right hand side similarly to
equation (\ref{eq-2607b}), as a sum of terms
$p^0_{j'} Tr P_{(S_j \cup S_{j+1})} \tau_{j'}$ and
$p^i_{\alpha, \beta, j'} Tr P_{(S_j \cup S_{j+1})} \tau^i_{\alpha, \beta, j'}$.
Since $S^i_{\alpha, \beta, j}\subseteq S_j \cup S_{j+1}$
(by corollary \ref{cor:contain}), we have
$Tr P_{(S_{j}\cup S_{j+1})} \tau^{i}_{\alpha, \beta, j}=1$.
This means that
\[ p_{t, j+1}+p_{t, j} \geq \sum_{i=1}^N a_i^2
\sum_{(\alpha, \beta, j)\in A_{t, i}}
p^{i}_{\alpha, \beta, j} .\]
Together with equation (\ref{eq-almost}), this implies
\[ q_{t+1, j+1}-q_{t, j+1} \leq
\frac{4\sqrt{K}}{\sqrt{N}} (p_{t, j}+p_{t, j+1}) \leq
\frac{4\sqrt{K}}{\sqrt{N}} \sum_{j'=j}^K p_{t, j'} =
\frac{4\sqrt{K}}{\sqrt{N}} q_{t, j} .\]
\hfill{\rule{2mm}{2mm}}
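The recurrence just established, $q_{t+1, j+1}\leq q_{t, j+1}+\frac{4\sqrt{K}}{\sqrt{N}}\,q_{t, j}$, is easy to explore numerically. The sketch below (not part of the proof; the function name is mine) iterates the recurrence from $q_{0, j}=0$ for $j\geq 1$, keeps the trivial bound $q_{t, 0}\leq 1$, and checks that the iterates stay below the closed form ${T\choose j}\left(4\sqrt{K/N}\right)^j$:

```python
from math import comb, sqrt

def iterate_bound(N, K, T):
    """Iterate q_{t+1, j+1} = q_{t, j+1} + (4*sqrt(K/N)) * q_{t, j},
    starting from q_{0, j} = 0 for j >= 1 and keeping q_{t, 0} = 1
    (the trivial bound)."""
    c = 4 * sqrt(K) / sqrt(N)
    q = [1.0] + [0.0] * K            # q[j] at the current step t
    for _ in range(T):
        new = q[:]                   # q[0] stays at the trivial bound 1
        for j in range(K):
            new[j + 1] = q[j + 1] + c * q[j]
        q = new
    return q

N, K, T = 10**6, 100, 50
c = 4 * sqrt(K) / sqrt(N)
q = iterate_bound(N, K, T)
# The iterates coincide with the closed form C(T, j) * c**j (up to rounding),
# so q_{T, K/2} is negligible unless T is of order sqrt(N*K).
assert all(q[j] <= comb(T, j) * c**j + 1e-9 for j in range(K + 1))
```

In this regime the bound decays rapidly with $j$, which is exactly the behaviour the lemma is used for.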
\section{Proof of Lemma \ref{lem:main2}}
\label{sec:proof2}
We start with the case when $p_{T, K/2+1}=\ldots=p_{T, K}=0$.
\begin{lemma}
\label{lem:modifiedprob}
If $p_{T, K/2+1}=\ldots=p_{T, K}=0$, the success probability of ${\cal A}$ is
at most $\frac{{N\choose K/2}}{{N \choose K}}$.
\end{lemma}
\noindent {\bf Proof: }
Let $\ket{\psi}$ be the final state.
The state of the ${\cal H}_I$ register lies in $T_{K/2}$, which is
an ${N \choose K/2}$-dimensional space.
Therefore, there is a Schmidt decomposition for $\ket{\psi}$
with at most ${N \choose K/2}$ terms.
This means that the state of ${\cal H}_A\otimes {\cal H}_S$ lies in an
${N \choose K/2}$-dimensional subspace of ${\cal H}_A\otimes {\cal H}_S$.
We express the final state as
\[ \ket{\psi}= \sum_{x:|x|=K} \frac{1}{\sqrt{{N \choose K}}}
\ket{\psi_x} \ket{x} .\]
We can think of $\ket{\psi_x}$ as a quantum encoding for $x$
and the final measurement as a decoding procedure that
takes $\ket{\psi_x}$ and produces a guess for $x$.
The probability that algorithm ${\cal A}$ succeeds is then
equal to the average success probability of the encoding.
We now use
\begin{theorem}
\label{thm:nayak}
\cite{Nayak}
For any encoding $\ket{\psi_x}$ of $M$ classical values
by quantum states in $d$ dimensions, the probability of
success is at most $\frac{d}{M}$.
\end{theorem}
In our case, $M={N \choose K}$ and $d={N\choose K/2}$ because the
states $\ket{\psi_x}$ all lie in an ${N \choose K/2}$-dimensional
subspace of ${\cal H}_A\otimes {\cal H}_S$. Therefore, Theorem \ref{thm:nayak}
implies Lemma \ref{lem:modifiedprob}.
\hfill{\rule{2mm}{2mm}}
We now consider the general case. We decompose the state $\ket{\psi_T}$ as
$\sqrt{1-\delta}\ket{\psi'_T}+\sqrt{\delta}\ket{\psi''_T}$
where $\ket{\psi'_T}$ is in the subspace
${\cal H}_A \otimes \cup_{j=0}^{K/2} S_j$ and
$\ket{\psi''_T}$ is in ${\cal H}_A \otimes \cup_{j=K/2+1}^{K} S_j$.
We have
\[ \delta=\sum_{j=K/2+1}^K p_{T, j} .\]
The success probability of ${\cal A}$ is the probability that, if we measure both
the register of ${\cal H}_A$ containing the result of the computation and ${\cal H}_I$,
then, we get $i_1, \ldots, i_K$ and $x_1, \ldots, x_N$ such that
$x_{i_1}=\ldots=x_{i_K}=1$.
Consider the probability of getting $i_1, \ldots, i_K$
and $x_1, \ldots, x_N$ such that
$x_{i_1}=\ldots=x_{i_K}=1$, when measuring $\ket{\psi'_T}$ (instead of
$\ket{\psi_T}$). By Lemma \ref{lem:modifiedprob}, this probability is
at most $\frac{{N\choose K/2}}{{N \choose K}}$.
We have
\[ \|\psi_T -\psi'_T\| \leq (1-\sqrt{1-\delta})
\|\psi'_T\| + \sqrt{\delta} \|\psi''_T\| =
(1-\sqrt{1-\delta}) + \sqrt{\delta} \leq 2 \sqrt{\delta} .\]
We now apply
\begin{lemma}
\label{lem:bv}
\cite{BV}
For any states $\ket{\psi}$ and $\ket{\psi'}$ and any measurement $M$,
the variational distance between the
probability distributions obtained by applying $M$ to $\ket{\psi}$
and $\ket{\psi'}$ is at most $2\|\psi-\psi'\|$.
\end{lemma}
By Lemma \ref{lem:bv}, the probabilities of getting $i_1, \ldots, i_K$
and $x_1, \ldots, x_N$ such that $x_{i_1}=\ldots=x_{i_K}=1$,
when measuring $\ket{\psi_T}$ and $\ket{\psi'_T}$
differ by at most $4\sqrt{\delta}=4\sqrt{\sum_{j=K/2+1}^K p_{T, j}}$.
Therefore, the success probability of ${\cal A}$ is at most
\[ \frac{{N\choose K/2}}{{N \choose K}} + 4\sqrt{\sum_{j=K/2+1}^K p_{T, j}} .\]
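For intuition about the resulting bound, here is a small numerical sanity check (mine, not from the paper) of the quantity $\frac{{N\choose K/2}}{{N\choose K}}+4\sqrt{\delta}$; for $K\ll N$ the binomial ratio is negligible and the $4\sqrt{\delta}$ term dominates:

```python
from math import comb, sqrt

def success_bound(N, K, delta):
    """The bound C(N, K/2)/C(N, K) + 4*sqrt(delta) on the success
    probability, where delta is the sum of p_{T, j} over j > K/2."""
    return comb(N, K // 2) / comb(N, K) + 4 * sqrt(delta)

# The binomial ratio alone, roughly of order (K/N)^(K/2), is already tiny:
ratio = comb(1000, 10) / comb(1000, 20)
```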
\section{Structure of the eigenspaces of $\rho_{t, i}$}
\label{sec:eigen}
In this section, we prove claims \ref{claim:eigen1}, \ref{claim:relate},
\ref{claim:2dim} and \ref{claim:relatebound} describing
the structure of the eigenspaces of $\rho_{t, i}$.
\noindent {\bf Proof: } [of Claim \ref{claim:eigen1}]
We rearrange the rows and the columns of $\rho_{t, i}$ so that all rows
and columns corresponding to $\ket{x_1\ldots x_N}$ with $x_i=0$ are before
the rows and the columns corresponding to $\ket{x_1\ldots x_N}$ with $x_i=1$.
We then express $\rho_{t, i}$ as
\[ \rho_{t, i}= \left(
\begin{array}{cc} A & B \\
C & D \end{array} \right) ,\]
with $A$ being a ${N-1 \choose K}\times {N-1\choose K}$ square matrix
indexed by $\ket{x_1\ldots x_N}$ with $x_i=0$,
$D$ being a ${N-1 \choose K-1}\times {N-1\choose K-1}$ square matrix
indexed by $\ket{x_1\ldots x_N}$ with $x_i=1$ and $B$ and $C$
being rectangular matrices with rows (columns) indexed by
$\ket{x_1\ldots x_N}$ with $x_i=0$ and columns (rows) indexed by
$\ket{x_1\ldots x_N}$ with $x_i=1$.
We claim that
\[ \rho_{t, i} \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}
= a_{11} \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}+
a_{12} \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}} ,\]
\begin{equation}
\label{eq-rem0}
\rho_{t, i} \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}
= a_{21} \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}+
a_{22} \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}} ,
\end{equation}
where $a_{11}$, $a_{12}$, $a_{21}$, $a_{22}$ are independent of $i_1, \ldots, i_j$.
To prove that, we first note that $A$ and $D$ are matrices where $A_{xy}$ and $D_{xy}$ depend
only on $|\{ t:x_t=y_t\}|$. Therefore, the results of Knuth \cite{Knuth} about
eigenspaces of such matrices apply. This means that $S^{i, 0}_j$ and $S^{i, 1}_j$
are eigenspaces for $A$ and $D$, respectively, and
\[ A \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}} = a_{11} \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}},\]
\[ D \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}} = a_{22} \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}},\]
where $a_{11}$ and $a_{22}$ are the eigenvalues of $A$ and $D$ for
the eigenspaces $S^{i, 0}_j$ and $S^{i, 1}_j$.
It remains to prove that
\begin{equation}
\label{eq-rem1}
B \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}} = a_{12} \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}},
\end{equation}
\begin{equation}
\label{eq-rem2}
C \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}} = a_{21} \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}.
\end{equation}
Let $M$ be a rectangular matrix, with entries indexed by $x, y$,
with $|x|=|y|=K$ and $x_i=1$ and $y_i=0$.
The entries of $M$ are $M_{xy}=1$ if $x$ and $y$ differ in two places, with $x_i=1$, $y_i=0$
and $x_l=0$, $y_l=1$ for some $l\neq i$ and $M_{xy}=0$ otherwise.
We claim
\begin{equation}
\label{eq-rem3}
M\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}=c \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}
\end{equation}
for some $c$ that may depend on $N, K$ and $j$ but not on $i_1, \ldots, i_j$.
To prove that, we need to prove two things.
First,
\begin{equation}
\label{eq-rem4}
M\ket{\psi^{i,0}_{i_1, \ldots, i_j}}=c \ket{\psi^{i,1}_{i_1, \ldots, i_j}}.
\end{equation}
This follows by
\[ M\ket{\psi^{i,0}_{i_1, \ldots, i_j}} = \frac{1}{\sqrt{N-j-1 \choose K-j}}
\mathop{\sum_{x:x_{i_1}=\ldots=x_{i_j}=1,}}_{x_i=0} M \ket{x} \]
\[=\frac{1}{\sqrt{N-j-1 \choose K-j}}
\mathop{\sum_{x:x_{i_1}=\ldots=x_{i_j}=1}}_{x_i=0} \sum_{l:x_l=1}
\ket{x_1 \ldots x_{l-1} 0 x_{l+1} \ldots x_{i-1} 1 x_{i+1} \ldots x_N} \]
\[ =\frac{1}{\sqrt{N-j-1 \choose K-j}} (N-K)
\mathop{\sum_{y:y_{i_1}=\ldots=y_{i_j}=1}}_{y_i=1} \ket{y}=
\sqrt{(K-j)(N-K)} \ket{\psi^{i,1}_{i_1, \ldots, i_j}} .\]
Second, $M(T^{i,0}_j)\subseteq T^{i,1}_j$ and
$M (T^{i, 0}_j)^{\perp} \subseteq (T^{i,1}_j)^{\perp}$.
The first statement follows immediately from equation (\ref{eq-rem4}),
because the subspaces $T^{i,0}_j$, $T^{i, 1}_j$ are spanned by the states
$\ket{\psi^{i,0}_{i_1, \ldots, i_j}}$ and $\ket{\psi^{i,1}_{i_1, \ldots, i_j}}$,
respectively.
To prove the second statement, let $\ket{\psi}\in (T^{i, 0}_j)^{\perp}$,
$\ket{\psi}=\sum_{x} a_{x}\ket{x}$.
We would like to prove $M\ket{\psi}\in (T^{i,1}_j)^{\perp}$.
This is equivalent to $\bra{\psi^{i,1}_{i_1, \ldots, i_j}}M\ket{\psi}=0$
for all $i_1, \ldots, i_j$.
We have
\[ \bra{\psi^{i,1}_{i_1, \ldots, i_j}}M\ket{\psi}= \frac{1}{\sqrt{N-j-1 \choose K-j-1}}
\sum_{y:y_{i_1}=\ldots=y_{i_j}=1} \bra{y}M\ket{\psi}\]
\[ =\frac{1}{\sqrt{N-j-1 \choose K-j-1}}
\mathop{\sum_{x:x_{i_1}=\ldots=x_{i_j}=1,}}_{x_i=0}
\mathop{\sum_{l:x_l=1,}}_{l\notin \{i_1, \ldots, i_j\}} a_x \]
\[= \frac{1}{\sqrt{N-j-1 \choose K-j-1}}
(K-j) \sum_{x:x_{i_1}=\ldots=x_{i_j}=1} a_x = 0.\]
The first equality follows by writing out $\bra{\psi^{i,1}_{i_1, \ldots, i_j}}$,
the second equality follows by writing out $M$. The third equality follows because,
for every $x$ with $|x|=K$ and $x_{i_1}=\ldots=x_{i_j}=1$,
there are exactly $K-j$ indices $l\notin\{i_1, \ldots, i_j\}$ satisfying $x_l=1$.
The fourth equality follows because $\sum_{x:x_{i_1}=\ldots=x_{i_j}=1} a_x$ is a constant
times $\bra{\psi^{i,0}_{i_1, \ldots, i_j}}\psi\rangle$ and
$\bra{\psi^{i,0}_{i_1, \ldots, i_j}}\psi\rangle=0$, because $\ket{\psi}\in (T^{i, 0}_j)^{\perp}$.
Furthermore, $BM$ is an ${N-1 \choose K}\times {N-1\choose K}$ matrix, with $(BM)_{x, y}$
depending only on $|\{l:x_l = y_l =1\}|$. Therefore, $S^{i,1}_j$ is an eigenspace of $BM$
and, since $\ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}\in S^{i,1}_j$, we have
\[ BM \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}} =
\lambda \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}} \]
for an eigenvalue $\lambda$ independent of $i_1, \ldots, i_j$.
Together with equation (\ref{eq-rem3}), this implies equation (\ref{eq-rem1}) with
$a_{12}=\lambda/c$.
Equation (\ref{eq-rem2}) follows by proving
\[ M^T \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}=c \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}} \]
and
\[ CM^T \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}} =
\lambda \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}} ,\]
in a similar way.
We now diagonalize the matrix
\[ M'=\left(\begin{array}{cc} a_{11} & a_{12} \\a_{21} & a_{22} \end{array} \right) .\]
It has two eigenvectors: $\left(\begin{array}{c} \alpha_1 \\ \beta_1 \end{array}\right)$
and $\left(\begin{array}{c} \alpha_2 \\ \beta_2 \end{array}\right)$.
Equation (\ref{eq-rem0}) implies that, for any $i_1, \ldots, i_j$,
\[ \alpha_1 \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}} +
\beta_1 \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}} \]
is an eigenvector of $\rho_{t, i}$, with the same eigenvalue for all $i_1, \ldots, i_j$.
Therefore, $S^i_{\alpha_1, \beta_1, j}$ is contained in an eigenspace of $\rho_{t, i}$.
Similarly, $S^i_{\alpha_2, \beta_2, j}$ is contained in an eigenspace of $\rho_{t, i}$.
Vectors $\alpha_1\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}} +
\beta_1 \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}$ and
$\alpha_2\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}} +
\beta_2 \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}$ together
span the same space as vectors $\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}$
and $\ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}$.
Since vectors $\ket{\tilde{\psi}^{i,l}_{i_1, \ldots, i_j}}$
span $S^{i, l}_j$, this means that
\[ S^{i,0}_j\cup S^{i,1}_j \subseteq S^i_{\alpha_1, \beta_1, j} \cup
S^i_{\alpha_2, \beta_2, j}. \]
Therefore, repeating this argument for every $j$ gives
a collection of subspaces that together span the entire state space of ${\cal H}_I$.
This means that any eigenspace of $\rho_{t, i}$ is a direct sum of some of the subspaces
$S^i_{\alpha, \beta, j}$.
\hfill{\rule{2mm}{2mm}}
\noindent {\bf Proof: } [of Claim \ref{claim:relate}]
For part (i), consider the states $\ket{\psi_{i_1, \ldots, i_j}}$
spanning $T_j$.
We have
\begin{equation}
\label{eq-new}
\ket{\psi_{i_1, \ldots, i_j}}=\sqrt{\frac{N-K}{N-j}} \ket{\psi^{i,0}_{i_1, \ldots, i_j}}+
\sqrt{\frac{K-j}{N-j}} \ket{\psi^{i,1}_{i_1, \ldots, i_j}}
\end{equation}
because a $\frac{N-K}{N-j}$ fraction of the states $\ket{x_1\ldots x_N}$ with
$|x|=K$ and $x_{i_1}=\ldots=x_{i_j}=1$ have $x_i=0$ and the rest have $x_i=1$.
The projections of these states to $(T^{i,0}_{j-1}\cup T^{i, 1}_{j-1})^{\perp}$
are
\[
\sqrt{\frac{N-K}{N-j}} \ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}+
\sqrt{\frac{K-j}{N-j}} \ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}
\]
which, by equation (\ref{eq-2507}) are exactly the states spanning
$S^i_{\alpha_0, \beta_0, j}$.
Furthermore, we claim that
\begin{equation}
\label{eq-2707} T_{j-1}\subseteq T^{i, 0}_{j-1}\cup T^{i, 1}_{j-1} \subseteq T_j.
\end{equation}
The first containment is true because $T_{j-1}$ is spanned by
the states $\ket{\psi_{i_1, \ldots, i_{j-1}}}$
which either belong to $T^{i,1}_{j-2}\subseteq T^{i,1}_{j-1}$
(if one of $i_1, \ldots, i_{j-1}$ is equal to $i$)
or are a linear combination of states $\ket{\psi^{i,0}_{i_1, \ldots, i_{j-1}}}$
and $\ket{\psi^{i,1}_{i_1, \ldots, i_{j-1}}}$ which belong to
$T^{i,0}_{j-1}$ and $T^{i,1}_{j-1}$.
The second containment follows because the
states $\ket{\psi^{i, 1}_{i_1, \ldots, i_{j-1}}}$ spanning
$T^{i,1}_{j-1}$ are the same as the states
$\ket{\psi_{i, i_1, \ldots, i_{j-1}}}$ which belong to $T_j$ and
the states $\ket{\psi^{i, 0}_{i_1, \ldots, i_{j-1}}}$ spanning
$T^{i,0}_{j-1}$ can be expressed as linear combinations
of $\ket{\psi_{i_1, \ldots, i_{j-1}}}$ and
$\ket{\psi_{i, i_1, \ldots, i_{j-1}}}$ which both belong to $T_j$.
The first part of (\ref{eq-2707}) now implies
\[ S^i_{\alpha_0, \beta_0, j}\subseteq (T^{i, 0}_{j-1} \cup T^{i, 1}_{j-1})^{\perp} \subseteq
(T_{j-1})^{\perp} .\]
We also have
$S^i_{\alpha_0, \beta_0, j}\subseteq T_j$,
because, $S^i_{\alpha_0, \beta_0, j}$
is spanned by the states
\[ P_{(T^{i, 0}_{j-1} \cup T^{i, 1}_{j-1})^{\perp}} \ket{\psi_{i_1, \ldots, i_j}} =
\ket{\psi_{i_1, \ldots, i_j}} - P_{T^{i, 0}_{j-1} \cup T^{i, 1}_{j-1}}
\ket{\psi_{i_1, \ldots, i_j}} \]
and $\ket{\psi_{i_1, \ldots, i_j}}$ belongs to $T_j$ by the definition of $T_j$ and
$P_{T^{i, 0}_{j-1} \cup T^{i, 1}_{j-1}} \ket{\psi_{i_1, \ldots, i_j}}$ belongs to
$T_j$ because of the second part of (\ref{eq-2707}).
Therefore,
$S^i_{\alpha_0, \beta_0, j} \subseteq T_j \cap (T_{j-1})^{\perp}=S_j$.
For the part (ii),
we have
\[ S^i_{\alpha_0, \beta_0, j}\subseteq S^{i,0}_{j}\cup S^{i, 1}_j
\subseteq T^{i, 0}_j \cup T^{i, 1}_j \subseteq T_{j+1} ,\]
where the first containment is true because $S^i_{\alpha_0, \beta_0, j}$
is spanned by linear combinations of vectors
$\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}$ (which belong to $S^{i, 0}_j$)
and vectors $\ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}$
(which belong to $S^{i, 1}_j$) and the last containment is
true because of the second part of equation (\ref{eq-2707}).
Let
\begin{equation}
\label{eq-2707a}
\ket{\psi}=\beta_0 \frac{\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}}{
\|\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}\| }
-\alpha_0 \frac{\ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}}{
\|\ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}\|}
\end{equation}
be one of the vectors spanning $S^i_{\beta_0, -\alpha_0, j}$.
To prove that $\ket{\psi}$ is in $S_{j+1}=T_{j+1}\cap (T_j)^{\perp}$, it remains to
prove that $\ket{\psi}$ is orthogonal to $T_{j}$.
This is equivalent to proving that $\ket{\psi}$ is orthogonal to
each of the vectors $\ket{\psi_{i'_1, \ldots, i'_j}}$ spanning $T_j$.
\noindent
{\bf Case 1.}
$\{i'_1, \ldots, i'_j\} = \{i_1, \ldots, i_j \}$.
Since $\ket{\psi}$ belongs to $(T^{i,0}_{j-1}\cup T^{i, 1}_{j-1})^{\perp}$,
it suffices to prove that $\ket{\psi}$ is orthogonal to the projection
of $\ket{\psi_{i_1, \ldots, i_j}}$ to $(T^{i,0}_{j-1}\cup T^{i, 1}_{j-1})^{\perp}$
which, by the discussion after equation (\ref{eq-new}),
is equal to
\begin{equation}
\label{eq-2707b}
\alpha_0 \frac{\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}}{
\|\ket{\tilde{\psi}^{i,0}_{i_1, \ldots, i_j}}\| }
+\beta_0 \frac{\ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}}{
\|\ket{\tilde{\psi}^{i,1}_{i_1, \ldots, i_j}}\|} .
\end{equation}
From equations (\ref{eq-2707a}) and (\ref{eq-2707b}), we see that the
inner product of the two states is $\alpha_0 \beta_0 -\beta_0 \alpha_0=0$.
\noindent
{\bf Case 2.}
$\{i'_1, \ldots, i'_j\} \neq \{i_1, \ldots, i_j \}$
but one of $i'_1, \ldots, i'_j$ is equal to $i$.
For simplicity, assume $i=i'_j$. Then,
$\ket{\psi_{i'_1, \ldots, i'_j}}$ is the same
as $\ket{\psi^{i, 1}_{i'_1, \ldots, i'_{j-1}}}$
which belongs to $T^{i,1}_{j-1}$. By definition of $S^i_{\alpha, \beta, j}$,
the vector $\ket{\psi}$ belongs to $(T^{i,0}_{j-1}\cup T^{i,1}_{j-1})^{\perp}$
and is therefore orthogonal to $\ket{\psi^{i, 1}_{i'_1, \ldots, i'_{j-1}}}$.
\noindent
{\bf Case 3.}
$\{i'_1, \ldots, i'_j\} \neq \{i_1, \ldots, i_j \}$
and none of $i'_1, \ldots, i'_j$ is equal to $i$.
One of $i'_1, \ldots, i'_j$ must not be in $\{i_1, \ldots, i_j \}$.
For simplicity, assume it is $i'_j$. We have
\[ \ket{\psi_{i'_1, \ldots, i'_{j-1}}}= c
\sum_{i'\notin\{i'_1, \ldots, i'_{j-1}\}}
\ket{\psi_{i'_1, \ldots, i'_{j-1}, i'}} \]
for a constant $c>0$, because each basis state on the left-hand side appears in exactly
$K-j+1$ of the (normalized) states on the right-hand side.
Also, $\langle \psi_{i'_1, \ldots, i'_{j-1}} \ket{\psi}=0$,
because $\ket{\psi_{i'_1, \ldots, i'_{j-1}}}$ is in
$T^{i,0}_{j-1}\cup T^{i,1}_{j-1}$.
As proven in the previous case,
$\langle \psi_{i'_1, \ldots, i'_{j-1},i} \ket{\psi}=0$.
We therefore have
\begin{equation}
\label{eq-new1}
\sum_{i'\notin\{i'_1, \ldots, i'_{j-1}, i\}}
\bra{\psi_{i'_1, \ldots, i'_{j-1}, i'}} \psi\rangle
=0 .
\end{equation}
By symmetry, the inner product $\bra{\psi_{i'_1, \ldots, i'_{j-1}, i'}} \psi\rangle$
is the same for every $i'\notin\{i'_1, \ldots, i'_{j-1}, i\}$.
Therefore, equation (\ref{eq-new1}) means
\[ \bra{\psi_{i'_1, \ldots, i'_{j-1}, i'}} \psi\rangle
=0 \]
for every $i'\notin\{i'_1, \ldots, i'_{j-1}, i\}$.
\hfill{\rule{2mm}{2mm}}
\noindent {\bf Proof: } [of Claim \ref{claim:2dim}]
$\tau^i_{\alpha, \beta, j}$ is a mixture of states $\ket{\psi}$
from the subspace $S^i_{\alpha, \beta, j}$.
We prove the claim by showing that,
for any of those states $\ket{\psi}$, the squared norm
of its projection to $S_{j+1}$ is equal to the
right hand side of claim \ref{claim:2dim}.
Since $\ket{\psi}\in S^i_{\alpha, \beta, j}$ we can write it
as
\[ \ket{\psi} = \sum_{i_1, \ldots, i_{j}} a_{i_1, \ldots, i_{j}}
(\alpha \ket{\tilde{\psi}^{i, 0}_{i_1, \ldots, i_{j}}} +
\beta \ket{\tilde{\psi}^{i, 1}_{i_1, \ldots, i_{j}}} ) \]
for some $a_{i_1, \ldots, i_{j}}$. Let
\[ \ket{\psi^+}= \sum_{i_1, \ldots, i_{j}} a_{i_1, \ldots, i_{j}}
(\beta_0 \ket{\tilde{\psi}^{i, 0}_{i_1, \ldots, i_{j}}} -
\alpha_0 \ket{\tilde{\psi}^{i, 1}_{i_1, \ldots, i_{j}}} ) ,\]
\[ \ket{\psi^-}= \sum_{i_1, \ldots, i_{j}} a_{i_1, \ldots, i_{j}}
(\alpha_0 \ket{\tilde{\psi}^{i, 0}_{i_1, \ldots, i_{j}}} +
\beta_0 \ket{\tilde{\psi}^{i, 1}_{i_1, \ldots, i_{j}}} ) .\]
Then, $\ket{\psi}$ is a linear combination of $\ket{\psi^+}$
which belongs to $S^i_{\beta_0, -\alpha_0, j}\subset S_{j+1}$
(by Claim \ref{claim:relate}) and $\ket{\psi^-}$
which belongs to $S^i_{\alpha_0, \beta_0, j}\subseteq S_j$.
Moreover, all three states are linear combinations of
$\ket{\psi^0}$, $\ket{\psi^1}$ defined by
\[ \ket{\psi^l}=\sum_{i_1, \ldots, i_{j}} a_{i_1, \ldots, i_{j}}
\ket{\tilde{\psi}^{i, l}_{i_1, \ldots, i_{j}}}.\]
We have
\[ \ket{\psi}=\alpha\ket{\psi^0}+\beta\ket{\psi^1}, \]
\[ \ket{\psi^+}=\beta_0\ket{\psi^0}-\alpha_0\ket{\psi^1}, \]
\[ \ket{\psi^-}=\alpha_0\ket{\psi^0}+\beta_0\ket{\psi^1} .\]
Since $\ket{\psi^+}$ and $\ket{\psi^-}$ belong to subspaces
$S_{j+1}$ and $S_j$ which are orthogonal, we must have
$\langle \psi^+ \ket{\psi^-}=0$. This means
\[ \alpha_0\beta_0 \|\psi^0\|^2 -\beta_0 \alpha_0 \|\psi^1\|^2 = 0 .\]
By dividing the equation by $\alpha_0\beta_0$, we get
$\|\psi^0\|^2=\|\psi^1\|^2$ and $\|\psi^0\|=\|\psi^1\|$.
Since $\|\psi\|=1$, this means that $\|\psi^0\|=\|\psi^1\|
=\frac{1}{\sqrt{\alpha^2+\beta^2}}=1$.
Since $\ket{\psi}$ lies in the subspace spanned by
$\ket{\psi^+}$ which belongs to $S_{j+1}$
and $\ket{\psi^-}$ which belongs to $S_j$,
the norm of the projection of $\ket{\psi}$ to $S_{j+1}$
is equal to $\frac{|\langle\psi|\psi^+\rangle|}{\|\psi^+\|}$.
By expressing $\ket{\psi}$, $\ket{\psi^+}$ in terms of
$\ket{\psi^0}$, $\ket{\psi^1}$, we get
\[ \frac{|\langle\psi|\psi^+\rangle|}{\|\psi^+\|} =
\frac{|\alpha\beta_0 \|\psi^0\|^2 -\alpha_0 \beta \|\psi^1\|^2|}{
\sqrt{\beta_0^2 \|\psi^0\|^2+ \alpha_0^2 \|\psi^1\|^2 }} =
\frac{|\alpha\beta_0 -\alpha_0 \beta|}{\sqrt{\alpha_0^2+\beta_0^2}} ,\]
proving the claim.
\hfill{\rule{2mm}{2mm}}
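The two-dimensional computation behind this proof can be checked directly. In the sketch below (illustrative only; names are mine), $\ket{\psi^0}$ and $\ket{\psi^1}$ are modelled as orthonormal axes (their norms are equal, as shown above), $\ket{\psi}=(\alpha, \beta)$, and the squared norm of the projection onto the direction $(\beta_0, -\alpha_0)$ of $\ket{\psi^+}$ is computed by explicit vector arithmetic:

```python
from math import isclose

def proj_sq_norm(alpha, beta, a0, b0):
    """Squared norm of the projection of psi = (alpha, beta) onto the
    line spanned by psi_plus = (b0, -a0)."""
    n2 = b0**2 + a0**2                       # ||psi_plus||^2
    inner = alpha * b0 - beta * a0           # <psi|psi_plus>
    proj = (inner / n2 * b0, inner / n2 * (-a0))
    return proj[0]**2 + proj[1]**2

# Agrees with the closed form |alpha*b0 - beta*a0|^2 / (a0^2 + b0^2):
alpha, beta, a0, b0 = 0.6, 0.8, 0.8, 0.6
closed = abs(alpha * b0 - beta * a0)**2 / (a0**2 + b0**2)
assert isclose(proj_sq_norm(alpha, beta, a0, b0), closed)
```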
\noindent {\bf Proof: } [of Claim \ref{claim:relatebound}]
We will prove
$\|\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}\|
\geq \frac{1}{2} \|\tilde{\psi}^{i, 1}_{i_1, \ldots, i_j}\|$,
because that means
\[ \alpha_0 = \frac{\sqrt{N-K}}{\sqrt{N-j}}
\|\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}\|
\geq \frac{1}{2}
\frac{\sqrt{N-K}}{\sqrt{K-j}} \frac{\sqrt{K-j}}{\sqrt{N-j}}
\|\tilde{\psi}^{i, 1}_{i_1, \ldots, i_j}\|
= \frac{\sqrt{N-K}}{2\sqrt{K-j}} \beta_0 \]
and
\[ \frac{\beta_0}{\sqrt{\alpha_0^2+\beta_0^2}} \leq
\frac{\beta_0}{\sqrt{\frac{N-K}{4(K-j)}\beta_0^2 +\beta_0^2}} =
\frac{1}{\sqrt{1+\frac{N-K}{4(K-j)}}} =
\frac{\sqrt{4(K-j)}}{\sqrt{N+3K-4j}} .\]
To prove $\|\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}\|
\geq \frac{1}{2} \|\tilde{\psi}^{i, 1}_{i_1, \ldots, i_j}\|$,
we calculate the vector
\[ \ket{\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}}=P_{(T^{i, 0}_{j-1})^{\perp}}
\ket{\psi^{i, 0}_{i_1, \ldots, i_j}} .\]
Both vector $\ket{\psi^{i, 0}_{i_1, \ldots, i_j}}$ and
subspace $T^{i, 0}_{j-1}$ are fixed by
\[ U_\pi \ket{x}=\ket{x_{\pi(1)}\ldots x_{\pi(N)}} \]
for any permutation $\pi$ that fixes $i$ and
maps $\{i_1, \ldots, i_j\}$ to itself.
This means that $\ket{\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}}$
is fixed by any such $U_{\pi}$ as well.
Therefore, the amplitude of $\ket{x}$, $|x|=K$, $x_i=0$ in
$\ket{\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}}$ only depends on
$|\{i_1, \ldots, i_j\}\cap \{t:x_t=1\}|$.
This means $\ket{\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}}$ is
of the form
\[ \ket{\psi_0}=\sum_{m=0}^j \alpha_m \mathop{\sum_{x:|x|=K, x_i=0}}_{
|\{i_1, \ldots, i_j\}\cap \{t:x_t=1\}| = m} \ket{x} .\]
To simplify the following calculations, we multiply
$\alpha_0$, $\ldots$, $\alpha_j$ by the same constant
so that $\alpha_j=1/\sqrt{{N-j-1 \choose K-j}}$.
Then,
$\ket{\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}}$
remains a multiple of $\ket{\psi_0}$ but may no longer be
equal to $\ket{\psi_0}$.
$\alpha_0$, $\ldots$, $\alpha_{j-1}$
should be such that the state
is orthogonal to $T^{i, 0}_{j-1}$ and, in particular, orthogonal
to states $\ket{\psi^{i, 0}_{i_1, \ldots, i_l}}$ for
$l\in\{0, \ldots, j-1\}$. By writing out
$\langle \psi_0 \ket{\psi^{i, 0}_{i_1, \ldots, i_l}}=0$, we get
\begin{equation}
\label{eq-system}
\sum_{m=l}^j \alpha_m {N-j-1 \choose K-m} {j-l \choose m-l} =0 .
\end{equation}
To show that, we first note that
$\ket{\psi^{i, 0}_{i_1, \ldots, i_l}}$ is a uniform superposition
of all $\ket{x}$, $|x|=K$, $x_i=0$, $x_{i_1}=\ldots=x_{i_l}=1$.
If we want to choose $x$ subject to those constraints
and also satisfying $|\{i_1, \ldots, i_j\}\cap \{t:x_t=1\}|=m$,
we have to set $x_t=1$ for $m-l$ different $t\in\{i_{l+1}, \ldots, i_j\}$
and for $K-m$ different $t\notin\{i, i_1, \ldots, i_j\}$.
This can be done in ${j-l \choose m-l}$
and ${N-j-1 \choose K-m}$ different ways, respectively.
By solving the system of equations (\ref{eq-system}),
we get
that the only solution is
\begin{equation}
\label{eq-solution}
\alpha_m=(-1)^{j-m}
\frac{{N-j-1 \choose K-j}}{{N-j-1 \choose K-m}} \alpha_j
.\end{equation}
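The solution (\ref{eq-solution}) can be verified against the system (\ref{eq-system}) numerically. In the sketch below (helper names are mine) the overall constant is fixed by setting $\alpha_j=1$, which is harmless because an overall constant cancels from the system:

```python
from math import comb

def alpha(m, N, K, j):
    """alpha_m from (eq-solution), with the overall constant alpha_j set to 1."""
    return (-1)**(j - m) * comb(N - j - 1, K - j) / comb(N - j - 1, K - m)

def lhs(l, N, K, j):
    """Left-hand side of (eq-system) for a given l."""
    return sum(alpha(m, N, K, j) * comb(N - j - 1, K - m) * comb(j - l, m - l)
               for m in range(l, j + 1))

N, K, j = 30, 10, 4
# The orthogonality conditions hold for every l < j (up to rounding):
assert all(abs(lhs(l, N, K, j)) < 1e-6 for l in range(j))
```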
Let $\ket{\psi'_0}=\frac{\ket{\psi_0}}{\|\psi_0\|}$ be the normalized
version of $\ket{\psi_0}$. Then,
\[ \ket{\tilde{\psi}^{i, 0}_{i_1, \ldots, i_j}} =
\langle \psi'_0\ket{\psi^{i, 0}_{i_1, \ldots, i_j}} \ket{\psi'_0} ,\]
\begin{equation}
\label{eq-2807}
\| \tilde{\psi}^{i, 0}_{i_1, \ldots, i_j} \| =
\langle \psi'_0\ket{\psi^{i, 0}_{i_1, \ldots, i_j}} =
\frac{\langle \psi_0\ket{\psi^{i, 0}_{i_1, \ldots, i_j}}}{\|\psi_0\|}
\end{equation}
First, we have
\[ \langle \psi_0 \ket{\psi^{i, 0}_{i_1, \ldots, i_j}} = 1 ,\]
because $\ket{\psi^{i, 0}_{i_1, \ldots, i_j}}$ consists of
${N-j-1 \choose K-j}$
basis states $\ket{x}$, $x_i=0$, $x_{i_1}=\ldots=x_{i_j}=1$, each of which
has amplitude $1/\sqrt{{N-j-1 \choose K-j}}$
in both $\ket{\psi_0}$ and $\ket{\psi^{i, 0}_{i_1, \ldots, i_j}}$.
Second,
\[
\|\psi_0 \|^2 = \sum_{m=0}^j {j \choose m} {N-j-1 \choose K-m} \alpha_m^2 =
\sum_{m=0}^j {j \choose m} \frac{{N-j-1 \choose K-j}^2}{
{N-j-1 \choose K-m}} \alpha_j^2 \]
\[
=
\sum_{m=0}^j {j \choose m} \frac{{N-j-1 \choose K-j}}{{N-j-1 \choose K-m}} =
\sum_{m=0}^j {j \choose m} \frac{(K-m)!(N-K+m-j-1)!}{(K-j)!(N-K-1)!} \]
\begin{equation}
\label{eq-2807a}
= \sum_{m=0}^j {j \choose m}
\frac{(K-m) \ldots (K-j+1)}{(N-K-1) \ldots (N-K+m-j)}
\end{equation}
with the first equality following because there are
${j \choose m}{N-j-1\choose K-m}$ vectors $x$ such that
$|x|=K$, $x_i=0$, $x_t=1$ for $m$ different $t\in\{i_1, \ldots, i_j\}$
and $K-m$ different $t\notin\{i, i_1, \ldots, i_j\}$,
the second equality following from equation
(\ref{eq-solution}) and the third equality following from our choice
$\alpha_j=1/\sqrt{{N-j-1 \choose K-j}}$.
We can similarly calculate $\| \tilde{\psi}^{i, 1}_{i_1, \ldots, i_j} \|$.
We omit the details and just state the result. The counterpart of
equation (\ref{eq-2807}) is
\[ \| \tilde{\psi}^{i, 1}_{i_1, \ldots, i_j} \| =
\frac{\langle \psi_1\ket{\psi^{i, 1}_{i_1, \ldots, i_j}}}{\|\psi_1\|} ,\]
with $\ket{\psi_1}$ being the counterpart of $\ket{\psi_0}$:
\[ \ket{\psi_1}=\sum_{m=0}^j \alpha_m \mathop{\sum_{x:|x|=K, x_i=1}}_{
|\{i_1, \ldots, i_j\}\cap \{l:x_l=1\}| = m} \ket{x} ,\]
with $\alpha_j=1/\sqrt{{N-j-1 \choose K-j-1}}$.
Similarly as before, we get
$\langle \psi_1\ket{\psi^{i, 1}_{i_1, \ldots, i_j}}=1$
and
\[ \|\psi_1 \|^2 =
\sum_{m=0}^j {j \choose m}
\frac{{N-j-1 \choose K-j-1}}{{N-j-1 \choose K-m-1}}
\]
\begin{equation}
\label{eq-2807b}
=\sum_{m=0}^j {j \choose m}
\frac{(K-m-1) \ldots (K-j)}{(N-K) \ldots (N-K+m-j+1)}
\end{equation}
Each term in (\ref{eq-2807a}) is
$\frac{(K-m)(N-K)}{(K-j)(N-K+m-j)}$
times the corresponding term in equation (\ref{eq-2807b}).
We have
\[ \frac{K-m}{K-j} \cdot \frac{N-K}{N-K+m-j} \leq \frac{K}{K/2} \cdot 2 = 4, \]
because $j\leq K/2$ implies $\frac{K-m}{K-j}\leq 2$, and $m\geq 0$ together with
$j\leq K/2\leq \frac{N-K}{2}$ implies $N-K+m-j\geq \frac{N-K}{2}$.
Therefore, $\|\psi_0\|^2 \leq 4 \|\psi_1\|^2$
which implies
\[ \| \tilde{\psi}^{i, 0}_{i_1, \ldots, i_j} \| =
\frac{1}{\|\psi_0\|}\geq \frac{1}{\sqrt{4} \|\psi_1\|} =
\frac{1}{2} \| \tilde{\psi}^{i, 1}_{i_1, \ldots, i_j} \| .\]
\hfill{\rule{2mm}{2mm}}
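The inequality $\|\psi_0\|^2\leq 4\|\psi_1\|^2$ used above can also be confirmed numerically from the sums (\ref{eq-2807a}) and (\ref{eq-2807b}). The following sketch (function names are mine) checks it over a range of parameters with $j\leq K/2$ and $K\leq N/2$:

```python
from math import comb

def norm0_sq(N, K, j):
    """||psi_0||^2 via (eq-2807a)."""
    return sum(comb(j, m) * comb(N - j - 1, K - j) / comb(N - j - 1, K - m)
               for m in range(j + 1))

def norm1_sq(N, K, j):
    """||psi_1||^2 via (eq-2807b)."""
    return sum(comb(j, m) * comb(N - j - 1, K - j - 1) / comb(N - j - 1, K - m - 1)
               for m in range(j + 1))

# Check ||psi_0||^2 <= 4 ||psi_1||^2 in the regime j <= K/2, K <= N/2.
for N in (40, 100, 240):
    for K in range(2, N // 2 + 1, 4):
        for j in range(K // 2 + 1):
            assert norm0_sq(N, K, j) <= 4 * norm1_sq(N, K, j)
```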
{\bf Acknowledgment.}
I would like to thank Robert \v Spalek and Ronald de Wolf for very helpful comments
on a draft of this paper.
% arXiv:nlin/0508016
\section*{List of Figure Captions}
Fig. 1. In a) and b) the scheme of a 2D waveguide array with the defect on the central site, and the concept for incoming, reflected, and transmitted waves in the scattering problem are shown; c) Density plot of the plane-wave group velocity (absolute value), $V=2$; d) Resonance diagram: Thick solid and dashed lines represent the resonance condition curve given by (\ref{res_cond}) for $E=0.1$ and $E=1$, respectively. Thin solid and dashed lines represent the $(k_x,k_y)$ parameter submanifold corresponding to the \emph{uni-directional} motion of the wavepacket with $v_y=v_x$ and $v_y=v_x/2$, respectively.
Fig. 2. Power density distribution $p_{m,n}$ (\ref{power}) at $z=10$ (after the scattering process). Initial wavepacket parameters are: $k_x=k_y=1.55$ (resonant scattering), $A=0.1$. Coupling constants are: $V=2,\;\epsilon=1$.
Fig. 3. Power density distribution $p_{m,n}$ (\ref{power}) at $z=12.5$ (after the scattering process). Initial wavepacket parameters are: $k_x=k_y=2.2$ (non-resonant scattering), $A=0.1$. Coupling constants are: $V=2,\;\epsilon=1$.
Fig. 4. The total scattered power for the diagonal propagating wavepacket ($k_x=k_y$) calculated from numerical simulations (points) and by the approximation (\ref{1d-est}) (continuous line). All the parameter values are the same as in Figs.~\ref{fig2},\ref{fig3}.
\newpage
\begin{figure}[t]
\includegraphics{fig1.eps}
\caption{In a) and b) the scheme of a 2D waveguide array with the defect on the central site, and the concept for incoming, reflected, and transmitted waves in the scattering problem are shown; c) Density plot of the plane waves group velocity (absolute value), $V=2$; d) Resonance diagram: Thick solid and dashed lines represent the resonance condition curve given by (\ref{res_cond}) for $E=0.1$ and $E=1$, respectively. Thin solid and dashed lines represent the $(k_x,k_y)$ parameter submanifold corresponding to the \emph{uni-directional} motion of the wavepacket with $v_y=v_x$ and $v_y=v_x/2$, respectively.}
\label{fig1}
\end{figure}
\newpage
\begin{figure}[t]
\includegraphics{fig2.eps}
\caption{Power density distribution $p_{m,n}$ (\ref{power}) at $z=10$ (after the scattering process). Initial wavepacket parameters are: $k_x=k_y=1.55$ (resonant scattering), $A=0.1$. Coupling constants are: $V=2,\;\epsilon=1$.}
\label{fig2}
\end{figure}
\newpage
\begin{figure}[t]
\includegraphics{fig3.eps}
\caption{Power density distribution $p_{m,n}$ (\ref{power}) at $z=12.5$ (after the scattering process). Initial wavepacket parameters are:
$k_x=k_y=2.2$ (non-resonant scattering), $A=0.1$. Coupling constants are: $V=2,\;\epsilon=1$.}
\label{fig3}
\end{figure}
\newpage
\begin{figure}[t]
\includegraphics{fig4.eps}
\caption{The total scattered power for the diagonal propagating wavepacket ($k_x=k_y$) calculated from numerical simulations (points) and by the approximation (\ref{1d-est}) (continuous line). All the parameter values are the same as in Figs.~\ref{fig2},\ref{fig3}.}
\label{fig4}
\end{figure}
\end{document}
1606.03225
\section{Introduction}
Random sequential adsorption (RSA) is a conceptually easy procedure to randomly deposit objects on a hypersurface which, in the simplest cases, is flat and homogeneous. Historically, a one-dimensional version of the RSA problem occurred first in 1939 \cite{Flory1939}, where interactions between attachments to a discrete polymer line were studied. Then, in 1959, A. R\'enyi found analytically the saturated random packing density for one-dimensional continuum RSA by solving the so-called car parking problem \cite{Renyi1958}. Later two-dimensional (2D) RSA became a very successful approach for modeling monolayers in the process of irreversible adsorption \cite{Feder1980,Evans1993,Adamczyk2012}. A sketch of the 2D RSA algorithm for anisotropic shapes is as follows: (i) the position and orientation of a virtual particle are drawn according to the probability distribution that reflects the properties of the underlying substrate --- when the surface is homogeneous, this probability distribution is uniform; (ii) next it is tested whether the virtual particle overlaps or intersects with any of the particles already added to the packing; (iii) if not, the particle is added to the packing; otherwise it is abandoned. Such attempts should proceed until the packing is saturated, that is, until there is no space left for any further particle on the substrate. In practice, because of substantial slowing down, the simulation is terminated when the probability of successfully adding a virtual particle is sufficiently small, with further extrapolation to the saturated packing. Details on the extrapolation method and on when to stop the algorithm at a given accuracy are provided in the following sections of this article.
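The steps (i)--(iii) above can be sketched in a few lines of code. The snippet below is a minimal illustration for the simplest case of equal disks on a periodic square; the parameter values are arbitrary choices for the example, not values used in this paper:

```python
import math
import random

def rsa_disks(L=20.0, radius=0.5, attempts=20000, seed=0):
    """Minimal 2D RSA of equal disks on an L x L periodic square."""
    rng = random.Random(seed)
    placed = []
    min_d2 = (2.0 * radius) ** 2          # overlap iff center distance < 2r
    for _ in range(attempts):
        x, y = rng.uniform(0.0, L), rng.uniform(0.0, L)   # (i) draw a position
        ok = True
        for px, py in placed:                             # (ii) overlap test
            dx = abs(x - px); dx = min(dx, L - dx)        # minimum-image distance
            dy = abs(y - py); dy = min(dy, L - dy)
            if dx * dx + dy * dy < min_d2:
                ok = False
                break
        if ok:
            placed.append((x, y))                         # (iii) accept
    return placed

disks = rsa_disks()
theta = len(disks) * math.pi * 0.5 ** 2 / 20.0 ** 2       # packing fraction
```

A production code would additionally track the remaining free regions (as described later in the text) and extrapolate to saturation; here the loop simply stops after a fixed number of attempts, so the resulting $\theta$ stays below the disk jamming limit of about $0.547$.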
In this study we search for the shape that gives the densest saturated random packing. The relation between particle shape and packing density has been intensively studied in the context of the related problem of random close packings (RCP), where particles are tightly packed in a jammed configuration and touch many of their closest neighbors \cite{Baule2014}. Interestingly, in the case of RCP, convex anisotropic shapes typically give denser packings than disks or spheres \cite{Donev2004, Man2005}, and the densest packings were found for particles (ellipsoids, spherocylinders or dimers) of long-to-short axis ratio around $1.5$ \cite{Donev2004, Faure2009, Zhao2012, Baule2013}. Recently, it has been analytically proven that sufficiently sphere-like, but anisotropic, shapes pack more densely than spheres, in accordance with Ulam's conjecture, which posits that spheres have the lowest optimal packing density among all convex shapes \cite{Kallus2016}. Although in the case of RSA this does not hold for elongated rectangles \cite{Vigil1990} or squares \cite{Viot1990}, which give a lower saturated packing fraction than disks, it is still possible that the disk is a local minimum of the packing density in some space of shapes. However, such behavior is not supported by the results obtained for regular polygons \cite{Ciesla2014star}, where the packing fractions seem to be slightly below the one for disks.
In our recent report \cite{Ciesla2015} we studied several shapes and showed that the highest packing fraction of $0.5833$ was obtained for a smoothed dimer --- a concave shape derived from a dimer of two overlapping disks. This value is comparable to the maximum packing fractions reported earlier for ellipsoids and spherocylinders \cite{Sherwood1999,Viot1992}, but due to the numerical accuracy of those results it was impossible to determine which of these three shapes gives the highest maximal random coverage. The main aim of this study is to settle this issue. Moreover, we also find the packing fraction for shapes derived from linear polymers that are in between smoothed dimers and spherocylinders. Additionally, we have incorporated concepts introduced in Ref.\ \cite{Zhang2013}, which speed up the RSA algorithm and even make it possible to obtain saturated packings in a finite time.
In this paper we first introduce the RSA model in detail, then we gather results for the kinetics and saturated random packings. We also examine spatial and angular correlations in the jammed state for ellipses, followed by a discussion of the measurement errors. The article is closed by a brief summary. We provide an Appendix with explicit formulas for areas of investigated geometries.
\section{Model}
Examples of the shapes we consider are shown in Fig.~\ref{fig:shapes}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.9\columnwidth]{shapes}}
\caption{(Color online) The types of shapes for which saturated random packings are studied. From the left: linear polymers built of two, three and four overlapping disks of the same radius, spherocylinder and ellipse. All presented shapes have the same long-to-short axis ratio $w/h = 2$. Below it is shown how all linear polymers are smoothed.}
\label{fig:shapes}
\end{figure}
Linear polymers are built of identical disks. In this study we restricted ourselves to dimers, trimers, tetramers, pentamers and decamers. All these shapes were smoothed as shown in Fig.\ \ref{fig:shapes} for the case of a dimer.
Here, we consider configurations built of smoothed shapes that correspond one-to-one to configurations built of the unsmoothed particles, but have a higher density. In general, the disks forming smoothed particles do not have to overlap or even touch each other. However, in order to preserve the mentioned correspondence between sets of disks and smoothed particles, which is useful for the packing generation procedure, the distance between the two closest disk centers should not be larger than $h\sqrt{2}$, where $h$ is the disk diameter. Thus the long-to-short axis ratio for smoothed particles should not exceed $(k-1)\sqrt{2} + 1$, where $k$ is the number of disks in a particle. Besides linear polymers, we studied spherocylinders and ellipses. The anisotropy of a shape is defined as the long-to-short axis ratio $x = w/h$. To find the anisotropy that gives the highest packing fraction we varied the parameter $x$ between $1.1$ and $2.5$. This particular interval was chosen according to the results of previous studies, \emph{e.g.} \cite{Ciesla2015,Vigil1989}. Formulas for the areas covered by these shapes are collected in the Appendix. To make the comparison of packing fractions between different shapes as clear as possible, all shapes were rescaled so that their areas are equal to $1$. For example, an ellipse of anisotropy $x$ has a short semi-axis of length $\sqrt{1/ (x\pi)}$ and a long semi-axis of length $\sqrt{x/\pi}$.
\par
These shapes were thrown onto a square surface with side length $1000$, area $S=10^6$, and periodic boundary conditions. Checking whether shapes overlap is straightforward for the linear polymer and spherocylinder cases, as it is based on simple disk-disk, disk-interval, and interval-interval intersections. In the case of ellipses the exact Vieillard-Baron criterion was used \cite{Vieillard-Baron1972}.
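For spherocylinders, for instance, the overlap test reduces to the distance between the two axis segments: particles of width $h$ overlap exactly when their axes are closer than $h$. A self-contained sketch of such predicates (our illustration, not the authors' code):

```python
import math

def dist_point_seg(p, a, b):
    """Distance from point p to the segment a-b."""
    vx, vy = b[0] - a[0], b[1] - a[1]
    L2 = vx * vx + vy * vy
    if L2 == 0.0:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    # clamp the projection parameter so the closest point stays on the segment
    t = max(0.0, min(1.0, ((p[0] - a[0]) * vx + (p[1] - a[1]) * vy) / L2))
    return math.hypot(p[0] - (a[0] + t * vx), p[1] - (a[1] + t * vy))

def _orient(p, q, r):
    # sign of the cross product (q - p) x (r - p)
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def dist_seg_seg(a, b, c, d):
    """Distance between segments a-b and c-d (0 if they cross)."""
    if _orient(a, b, c) * _orient(a, b, d) < 0 and \
       _orient(c, d, a) * _orient(c, d, b) < 0:
        return 0.0
    return min(dist_point_seg(a, c, d), dist_point_seg(b, c, d),
               dist_point_seg(c, a, b), dist_point_seg(d, a, b))

def spherocylinders_overlap(seg1, seg2, h):
    """Spherocylinders of width h overlap iff their axes are closer than h."""
    return dist_seg_seg(seg1[0], seg1[1], seg2[0], seg2[1]) < h
```

The disk-disk and disk-interval cases follow from `dist_point_seg` alone, so all polymer and spherocylinder overlap checks are built from these two routines.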
As the particle surface area is unity, the packing density is equal to the packing fraction:
\begin{equation}
\label{eq:nq}
\theta(t) = \frac{N(t)}{S},
\end{equation}
where $N(t)$ is the number of particles in the packing after a number of RSA iterations corresponding to time $t$, measured in the dimensionless time units
\begin{equation}
t = \frac{n}{S},
\end{equation}
where $n$ is the number of RSA algorithm steps.
\par
The simulation was stopped at $t=10^6$. To improve statistics, up to $100$ independent simulations were performed for each shape. These specific values of the packing size and the simulation time were chosen to ensure a desired level of numerical error of the average saturated random packing \cite{Ciesla2016}, which for our purposes should be below $0.001$. To generate random packings, the modified version of the RSA algorithm introduced in Ref.\ \cite{Zhang2013} was used. The modification is that the algorithm traces the unoccupied places where subsequent particles can possibly be placed; therefore, the random position of a consecutive shape can be restricted to these places. For example, the center of a disk of radius $r$ cannot be closer than $r$ to the boundary of any previously placed disk. This modification allows one to obtain strictly saturated packings, as such unoccupied regions must vanish or be filled in by a disk; the simulation stops when there are no such regions left. For anisotropic particles, the shape of such unoccupied regions depends on the orientation of the particle that tries to fit there. To work around this problem, we decided to exclude only the areas where it is not possible to place the center of a particle in any orientation. For example, the center of a subsequent spherocylinder of height $h$ cannot be closer than $h/2$ to the boundary of another spherocylinder, and the center of an ellipse cannot be closer to another one than its shorter semi-axis length (see Fig.\ \ref{fig:regions}).
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.35\columnwidth]{regions}
}
\caption{(Color online) Example of a random packing of ellipses. The elliptic region surrounding each ellipse shows the area where it is not possible to place the center of the next ellipse. The only places where the center of the next ellipse can potentially be placed are the black regions. Here, they fill approximately 7\% of the whole packing; thus, selecting a random point only from these regions speeds up the simulation about 14 times.}
\label{fig:regions}
\end{figure}
The cost of such a solution is that there remain regions in which an anisotropic particle will never fit. The center of a disk of radius $h/2$ could certainly be placed there, but we cannot determine whether an anisotropic particle will fit. Therefore, the generated packings are not strictly saturated; however, drawing a place for the next particle only from these black regions can significantly speed up the simulation. This makes it possible to study substantially larger packings and to increase the effective number of RSA iterations compared to our previous study \cite{Ciesla2015}. Note that one simulation step, in which the particle position is selected only from regions of total size $S_{reg}$, corresponds to $S/S_{reg}$ iterations of the original RSA procedure. Thus, one such step increases the dimensionless time by $1/S_{reg}$. During the simulations the number of particles in a packing as a function of time, $N(t)$, was recorded.
\section{Results}
Fragments of sample packings are presented in Fig.~\ref{fig:examples}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.35\columnwidth]{2mer}
\hspace{0.05\columnwidth}
\includegraphics[width=0.35\columnwidth]{4mer}
}
\vspace{0.05\columnwidth}
\centerline{%
\includegraphics[width=0.35\columnwidth]{spherocylinder}
\hspace{0.05\columnwidth}
\includegraphics[width=0.35\columnwidth]{ellipse}
}\caption{(Color online) Examples of obtained random packings of dimers, tetramers, spherocylinders and ellipses after $t=10^6$ iterations of RSA algorithm. The presented packings' sizes are $10 \times 10$ with periodic boundary conditions. The boundaries of the systems are indicated by black lines. The parameter $x$ equals $2.0$ for all four shapes.}
\label{fig:examples}
\end{figure}
Because the simulation is stopped at $t=10^6$, the resulting packings are most likely not saturated. To estimate the number of particles in a saturated packing, the kinetics of RSA is analyzed in the following subsection.
\subsection{RSA kinetics}
Asymptotically, for large enough time $t$, the kinetics of RSA is governed by the power law \cite{Pomeau1980,Swendsen1981}
\begin{equation}
\label{eq:fl}
\theta(t) = \theta - A t^{-1/d}
\end{equation}
where $\theta \equiv \theta(t\to\infty)$, $A$ is a positive constant, and $d$ depends on the particle shape and the properties of the surface on which particles are packed. For flat and homogeneous surfaces the parameter $d$ can be interpreted as the number of degrees of freedom of a particle \cite{Ciesla2013pol, Hinrichsen1986}. Thus, for RSA of disks $d=2$, but for anisotropic particles $d=3$, because the orientation of a particle gives an additional degree of freedom, even when the anisotropy is quite small \cite{Viot1992, Ciesla2014ring, Ciesla2014dim}.
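This power law can be verified on synthetic data: for $\theta(t)$ obeying Eq.~(\ref{eq:fl}), the slope of $\log({\rm d}\theta/{\rm d}t)$ versus $\log t$ equals $-(1+1/d)$, so a least-squares fit of that slope recovers $d$. A minimal sketch with fabricated values $\theta = 0.584$, $A = 0.1$, $d = 3$ (not the fitting code used for the insets):

```python
import math

# Fabricated saturation curve obeying theta(t) = theta_inf - A * t^(-1/d)
theta_inf, A, d = 0.584, 0.1, 3.0
ts = [10.0 ** (k / 10.0) for k in range(10, 61)]      # t from 10 to 10^6
thetas = [theta_inf - A * t ** (-1.0 / d) for t in ts]

# Slope of log(dtheta/dt) vs log(t) is -(1 + 1/d); estimate the derivative
# by central differences and fit the slope by least squares.
xs, ys = [], []
for i in range(1, len(ts) - 1):
    dth = (thetas[i + 1] - thetas[i - 1]) / (ts[i + 1] - ts[i - 1])
    xs.append(math.log(ts[i]))
    ys.append(math.log(dth))
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) \
        / sum((u - mx) ** 2 for u in xs)
d_fit = -1.0 / (slope + 1.0)      # invert slope = -(1 + 1/d)
```

On a geometric time grid the central-difference estimate of a pure power law is exact up to a constant factor, so the fitted slope, and hence $d_{\rm fit}$, reproduces the input $d=3$.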
The examples of RSA kinetics of studied shapes are presented in Fig.~\ref{fig:kinetics}, where we plot $d N(t)/dt$ vs.\ $t$ on a log-log scale.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.45\columnwidth]{kinetics_3}
\hspace{0.05\columnwidth}
\includegraphics[width=0.45\columnwidth]{kinetics_10}
}
\vspace{0.05\columnwidth}
\centerline{%
\includegraphics[width=0.45\columnwidth]{kinetics_sph}
\hspace{0.05\columnwidth}
\includegraphics[width=0.45\columnwidth]{kinetics_el}
}
\caption{(Color online) RSA kinetics for trimers, decamers, spherocylinders and ellipses. Main panels show the number of particles added per time unit (the time derivative of $N$) versus time. Insets show the dependence on $x$ of the fitted exponent in Eq.~(\ref{eq:fl}). Error bars are smaller than symbol sizes. Dashed lines correspond to $d=3$ degrees of freedom, which is characteristic of anisotropic molecules.}
\label{fig:kinetics}
\end{figure}
The data for dimers have been presented in Ref.\ \cite{Ciesla2014dim}. First, for all studied shapes, the numerical data in the main panels of Fig.\ \ref{fig:kinetics} lie along straight lines, confirming that Eq.~(\ref{eq:fl}) is fulfilled. As expected, the parameter $d$, obtained by fitting the numerical data to the relation (\ref{eq:fl}) and shown in the insets of Fig.\ \ref{fig:kinetics}, is around $3$; it becomes slightly lower for small $x$, which agrees with previous observations \cite{Viot1990, Ciesla2014dim}.
\subsection{Saturated random packing fractions}
\label{sec:packing}
The estimation of $\theta$ from finite-time simulations can be performed as follows. Having the parameter $d$ and using a new variable $y = t^{-1/d}$, Eq.~(\ref{eq:fl}) can be converted to $\theta(y) = \theta + A' y$. Thus, the points $(\theta(y), y)$ measured during a simulation should lie along a straight line which crosses the axis $y=0$ at $\theta$.
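A worked sketch of this extrapolation on fabricated data (with $\theta = 0.584$ and $A' = -0.1$ chosen for the example): since the fabricated points are exactly linear in $y$, the least-squares intercept recovers the saturated packing fraction.

```python
d = 3.0                        # exponent taken from the kinetics fit
theta_true, Ap = 0.584, -0.1   # fabricated intercept theta and slope A'
ts = [10.0 ** k for k in range(2, 7)]            # t = 10^2 ... 10^6
ys = [t ** (-1.0 / d) for t in ts]               # new variable y = t^(-1/d)
ths = [theta_true + Ap * y for y in ys]          # theta(y) = theta + A' * y

# Least-squares line through the points (y, theta); the intercept at
# y = 0 is the saturated packing fraction.
n = len(ys)
my, mt = sum(ys) / n, sum(ths) / n
slope = sum((y - my) * (th - mt) for y, th in zip(ys, ths)) \
        / sum((y - my) ** 2 for y in ys)
theta_est = mt - slope * my
```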
The error of such a $\theta$ estimation originates in the error of the exponent $-1/d$, which in our simulations is of the order of $0.001$. The corresponding error of $\theta$ is in our case smaller than the statistical error.
Another problem originates from the finite size of the system. According to Ref.\ \cite{Ciesla2016}, for our setup this error should be comparable with the statistical error. Moreover, as we are mainly interested in comparing the packing fractions of different shapes, the results of such comparisons should not depend on the system size, provided that the system is big enough.
The sources of errors and their influence on the obtained results are discussed in detail in Sec.~\ref{sec:errors}.
The obtained packing fractions are presented in Fig.~\ref{fig:q}.%
\begin{figure}[htb]
\vspace{0.2in}
\centerline{%
\includegraphics[width=0.7\columnwidth]{theta}
}
\caption{(Color online) Dependence of the saturated packing fraction on the parameter $x$ for all studied shapes. Dots represent data from numerical simulations. Bars corresponding to statistical errors are smaller than symbol sizes. Solid lines are 4th-order polynomial fits: $\theta(x) = 0.29531 + 0.47397 \, x - 0.28573 \, x^2 + 0.075803 \, x^3 - 0.0076961 \, x^4$ for ellipses, $\theta(x) = 0.17371 + 0.75347 \, x - 0.5237 \, x^2 + 0.16448 \, x^3 - 0.019968 \, x^4$ for spherocylinders, $\theta(x) = 0.17846 + 0.73906 \, x - 0.50782 \, x^2 + 0.15703 \, x^3 - 0.018723 \, x^4$ for decamers, $\theta(x) = 0.18918 + 0.71263 \, x - 0.48458 \, x^2 + 0.14852 \, x^3 - 0.017676 \, x^4$ for pentamers, $\theta(x) = 0.14672 + 0.82962 \, x - 0.60441 \, x^2 + 0.20274 \, x^3 - 0.026843 \, x^4$ for tetramers, $\theta(x) = 0.2038 + 0.66753 \, x - 0.4367 \, x^2 + 0.12827 \, x^3 - 0.014935 \, x^4$ for trimers, and $\theta(x) = 0.19671 + 0.66073 \, x - 0.41212 \, x^2 + 0.11609 \, x^3 - 0.014196 \, x^4$ for dimers.}
\label{fig:q}
\end{figure}
For all shapes the maximal packing fraction is reached for $x \in [1.5; 2]$. This is similar to previous results for other shapes \cite{Ciesla2015} and confirms the reasoning presented in \cite{Vigil1989}: for large $t$, anisotropy causes particles to align in parallel, which increases the packing fraction, but, on the other hand, at the beginning of RSA an anisotropic particle blocks significantly more space than a disk of the same area, which lowers the packing fraction. Thus, the optimum is reached for a small anisotropy. The data are fitted to 4th-order polynomials, which allows us to accurately estimate the optimal anisotropy and the value of the highest possible packing fraction. For convenience these data are collected in Table~\ref{tab:results}. Besides the long-to-short axis ratio $x$ we used another measure of anisotropy, namely the shape factor, defined as \cite{Richard2001, Moucka2005}
\begin{equation}
\zeta = \frac{C^2}{4\pi},
\end{equation}
where $C$ is the circumference of an object of unit surface area. Interestingly, the maximal packing fraction is reached at $\zeta = 1.136 \pm 0.011$ for all studied shapes, while the long-to-short axis ratio $x$ varies over a much wider relative range.
The statistical errors are of the order of $2.4 \cdot 10^{-5}$. Fluctuations of the numerical values near the maxima (see Fig.~\ref{fig:q}) suggest that the accuracy of the maximum coverage is a bit lower.
\begin{table}[htb]
\caption{\ Maximal possible saturated packing fractions and the corresponding values of the parameter $x$ at which they are reached. The error of $\theta$ does not exceed $10^{-4}$ (see Sec.~\ref{sec:errors}). The error of $x$ corresponds to the width of the maximum of the fitted function (see Fig.~\ref{fig:q}) and is equal to $0.07$.}
\label{tab:results}
\begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}cccc}
\hline
shape & $x$ & $\zeta$ & $\theta$ \\
\hline
dimer & 1.61 & 1.127 & 0.58132 \\
trimer & 1.72 & 1.125 & 0.58200 \\
tetramer & 1.77 & 1.130 & 0.58237 \\
pentamer & 1.79 & 1.131 & 0.58249 \\
decamer & 1.81 & 1.132 & 0.58269 \\
spherocylinder & 1.82 & 1.133 & 0.58281 \\
ellipse & 1.85 & 1.147 & 0.58405 \\
\hline
\end{tabular*}
\end{table}
It is worth commenting on the difference between the packing fraction of smoothed dimers obtained here and in Ref.\ \cite{Ciesla2015}, which is slightly larger than the error margin. The most probable cause of this discrepancy is the different boundary conditions used in the previous study (open boundaries), which most likely introduced a systematic error. A reliable comparison of the packing fractions given by different shapes requires using the same boundary conditions for all of them.
\subsection{Structure of densest packing}
\label{sec:correlations}
The packing fraction carries information only about the mean density of shapes. More details of the packing structure can be obtained by studying correlation functions. Here we limit ourselves to the densest packing configuration of ellipses and to two types of correlations. The first is the density correlation function, which is proportional to the probability density function $p(r)$ of finding two particles whose centers are separated by a distance $r$:
\begin{equation}
G(r) = \frac{p(r)}{\theta 2 \pi r}.
\end{equation}
The denominator is a normalization factor ensuring that $G(r\to \infty) \to 1$.
The density correlation function for ellipses of different anisotropy is shown in Fig.\ \ref{fig:correlations}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.7\columnwidth]{correlations}
}
\caption{(Color online) The density correlation function for ellipses of three different anisotropies. The black line is for a saturated random packing of disks (an ellipse with $x=1$) and serves as a reference. The distance is measured between particle centers. All shapes have unit surface area.}
\label{fig:correlations}
\end{figure}
The presented correlation functions show behavior typical for packings built of anisotropic objects. They become non-zero at the closest possible distance between particle centers. Note that, under the constant-surface-area condition, a larger anisotropy implies a shorter possible distance between the centers of neighboring objects, as they become thinner. That is why for larger $x$ the function $G(r)$ starts rising at smaller $r$. The growth is not as fast as for disks because, due to different relative orientations, the closest objects are at different distances from each other. This effect is stronger for larger anisotropies. The maximum (near $r=1.2$) shifts to larger distances as the anisotropy grows. The minimum is observed at $r \approx 2$. In general, the larger the anisotropy, the smoother the density correlation function.
The second studied property of a packing structure is the local orientational ordering. Here we used the following definition of this parameter \cite{Ciesla2013pol}:
\begin{equation}
q(|\vec{r}|) = \left< 2\left[ \left< \left[ \hat{u}(\vec{x}) \cdot \hat{u}(\vec{x}+\vec{r}) \right]^2 \right>_r -\frac{1}{2} \right] \right>_x,
\end{equation}
where $\hat{u}(\vec{x})$ is a unit vector along the long axis of a particle placed at point $\vec{x}$. Here $\langle \cdot \rangle_r$ is an average over particles at a distance $r$, while $\langle \cdot \rangle_x$ is an average over different particle positions. The parameter $q$ is equal to $1$ when particles are parallel, and equal to $0$ when their orientations are random. The minimum value of $q=-1$ is attained if objects at a distance $r$ are perpendicular.
The local orientational ordering in packing is shown in Fig.\ \ref{fig:order}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.7\columnwidth]{order}
}
\caption{(Color online) The propagation of the local orientational order inside a jammed sample of ellipses of three different anisotropies. The distance is measured between particle centers. All shapes have unit surface area.}
\label{fig:order}
\end{figure}
As expected, the closest possible placement of particles requires parallel alignment. More interesting is the drop of $q(r)$ below zero at $r \approx 1.3$. To explain this, note that the minimum is near $(w+h)/2$, which corresponds to a T-like configuration of neighboring shapes. Such configurations also prevent other particles from aligning in parallel; therefore the mean order is perpendicular rather than random. Particles at a distance $r>2$ are oriented randomly.
\section{Estimation of measurement error}
\label{sec:errors}
As noted before, the setup of the RSA algorithm was chosen according to Ref.\ \cite{Ciesla2016} to ensure a level of numerical error of the average saturated random packing below $0.001$. To be sure that the obtained results are precise enough, we study in detail the case of RSA of ellipses of anisotropy $1.85$.
In general, besides the statistical error of the average packing fraction $\theta$, which was used in the previous section, there are two sources of systematic error. They originate in the finite number of RSA iterations and the finite system size. The purpose of this section is to find out how these two sources affect the total error.
The statistical error depends on system size and number of independent packings. For $100$ independent square boxes of $S=10^6$ the standard deviation of average packing fraction $\theta$ is $2.4 \cdot 10^{-5}$. The influence of finite simulation time was estimated by generating independent packings up to dimensionless time $t=10^6$ and $10^7$. The results are presented in Fig.\ \ref{fig:long}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.7\columnwidth]{long}
}
\caption{(Color online) The dependence of the packing fraction of ellipses ($x=1.85$) on $y=t^{-1/d}$. The fitted value of the exponent $1/d$ is $0.3314$ and $0.3352$ for simulations stopped at $t=10^6$ and $t=10^7$, respectively. The packing size $S=10^6$ was the same in both cases. Black dots and red squares are data obtained from simulations. Black solid and blue dashed lines are linear fits to these data. The obtained values of the saturated packing fraction are $\theta = 0.58402 \pm 0.00002$ and $0.58395 \pm 0.00002$ for $t=10^6$ and $t=10^7$, respectively. The inset shows the same data, but with the theoretical exponent $1/d = 1/3$ used instead of the fitted one in both cases. Then the obtained values of the saturated packing fraction are $\theta = 0.58396 \pm 0.00002$ and $0.58399 \pm 0.00002$ for $t=10^6$ and $t=10^7$, respectively.}
\label{fig:long}
\end{figure}
The data were analyzed as described in Sec.\ \ref{sec:packing}. The slopes of the two lines are slightly different because the measured value of $d$ is not the same in both cases. However, both slopes agree within the error limits. The difference between the obtained values of $\theta$ is $7 \cdot 10^{-5}$, approximately three times larger than the statistical error.
The dependence of average packing fraction on system size is shown in Fig.\ \ref{fig:size}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.7\columnwidth]{size}
}
\caption{(Color online) Dependence of the packing fraction of ellipses ($x=1.85$) on the size of the surface $S = L^2$. The inset zooms in on the data for large $S$. Simulations were stopped at $t=10^6$. The red dashed line corresponds to $\theta = 0.58402$.}
\label{fig:size}
\end{figure}
The results indicate that for $S=10^6$ finite-size effects are negligible in comparison with statistical errors (see the inset of Fig.\ \ref{fig:size}). It should be noted that in all studied cases the system size was quite large and periodic boundary conditions were implemented; thus significant finite-size effects were not expected, which was confirmed by the data. Additional simulations performed for smaller packings suggest that finite-size effects can affect the estimation of the packing fraction for $S<10^4$. This is a little surprising, as the density correlations in similar systems are superexponentially damped and typically are not noticeable at distances $L \sim 10\ $\cite{Zhang2013, Bonnier1994}.
In summary, the main contribution to the errors of the presented results comes from the finite simulation time and statistics. The total error of the average packing fraction of ellipses of anisotropy $1.85$ is approximately $10^{-4}$, which is below the needed level of accuracy.
The error of the anisotropy $x$ at which the highest packing fraction occurs can be estimated as half of the interval on which the fit satisfies $f(x)> \theta - \Delta\theta$ (see Fig.~\ref{fig:q}). For the studied shapes this condition gives $\Delta x \le 0.07$.
\section{Summary}
Several anisotropic and concave particle shapes were analyzed in terms of the maximal possible random packing fraction. It was found that the highest packing fraction is obtained for ellipses of long-to-short semi-axis ratio $1.85 \pm 0.07$. The saturated random packing fraction for such a shape is $0.58405 \pm 0.0001$, which is higher than for smoothed $n$-mers ($0.58269 \pm 0.0001$, reached for anisotropy $1.81 \pm 0.07$ at $n = 10$) and spherocylinders ($0.58281 \pm 0.0001$, reached for anisotropy $1.82 \pm 0.07$). Interestingly, the $n$-mers give smaller saturated packings than the spherocylinder, and one can see that with increasing $n$ the concave particles approach the behavior of the convex spherocylinder, which is the limiting case. This outcome is somewhat expected, as the spherocylinder is the Minkowski sum of an interval and a sphere (or a disk in $2D$), i.e., the union of an infinite number of disks along a finite line; see for instance \cite{Mulder2005, Rosso2006} and references therein. Additionally, Table~\ref{tab:results} shows that the saturated packing fractions of these shapes differ by less than a fraction of one percent, while at the same time the corresponding anisotropies vary on the order of a dozen percent. In conclusion, the presented results for the shapes shown in Fig.~\ref{fig:shapes} indicate that particles giving similar maximal random coverage may significantly differ in their long-to-short axis ratio, while at the same time their shape factors $\zeta$ are nearly identical.
\section*{Acknowledgments}
This research was carried out with the support of the Interdisciplinary Centre for Mathematical and Computational Modeling (ICM) at University of Warsaw under grant no.\ G-27-8. G.~Paj\c{a}k acknowledges support of Cracowian Consortium `$,\!,$Materia-Energia-Przysz\l{}o\'s\'c'' im.\ Mariana Smoluchowskiego' within the KNOW grant.
\section*{Appendix}
\label{sec:appendix}
The area of a linear polymer built of $k$ disks of unit radius, with overall long-to-short axis ratio $x$, is
\begin{equation}
S_\mathrm{pol} = \pi +
(k-1)\left( \frac{r_\mathrm{c}}{2} \sqrt{4-r_\mathrm{c}^2} + 2 \arcsin \frac{r_\mathrm{c}}{2} \right),
\end{equation}
where
\begin{equation}
r_\mathrm{c} = \frac{2(x-1)}{k-1}
\end{equation}
is the distance between neighboring disks' centers.
The area of a smoothed linear polymer additionally contains $2(k-1)$ fragments of area:
\begin{equation}
S_\mathrm{ad} = \left\{
\begin{array}{c c}
\frac{r_\mathrm{c}}{4} \left(\sqrt{16 - r_\mathrm{c}^2} -\sqrt{4 - r_\mathrm{c}^2} \right) -
\arcsin \frac{r_\mathrm{c}}{2} & 0 \leq r_\mathrm{c} < 2 \\
\frac{r_\mathrm{c}}{4} \sqrt{16 - r_\mathrm{c}^2} -\frac{\pi}{2} & 2 \leq r_\mathrm{c} \le 2\sqrt{3}
\end{array}
\right.
\end{equation}
The circumference of such smoothed $k$-mer is:
\begin{equation}
C_\mathrm{pol} = 2 [ 2\alpha + (2k-1)(\pi - 2 \alpha)],
\end{equation}
where $\alpha = \arccos r_c$ and $\alpha \in (0, \pi/2)$.
The area of a spherocylinder of height $h=2$ and long-to-short axis ratio $x$ is
\begin{equation}
S_\mathrm{sph} = \pi + 2(2x-2) .
\end{equation}
The circumference of such spherocylinder is:
\begin{equation}
C_\mathrm{sph} = 2[\pi + (2x-2)].
\end{equation}
The area of an ellipse of short semi-axis equal to $1$ and long-to-short axis ratio $x$ is
\begin{equation}
S_\mathrm{ell} = x\pi .
\end{equation}
The circumference of an ellipse is known exactly in terms of elliptic integrals, but it can be accurately estimated using the following relation due to Ramanujan \cite{Ramanujan1914}:
\begin{equation}
C_\mathrm{ell} \approx \pi(x+1) \left[ \frac{3 (x-1)^2}{(x+1)^2\left(\sqrt{4-3\frac{(x-1)^2}{(x+1)^2}} + 10 \right)} +1 \right]\,.
\end{equation}
To make the area equal to 1, we rescale the two dimensions by $1/\sqrt{S}$.
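For convenience, the formulas above translate directly into code. The sketch below (the aspect-ratio values are illustrative, not the optima reported in the text) evaluates the areas and circumferences and the circumference after rescaling each shape to unit area:

```python
import math

def polymer_area(k, x):
    """Area of a linear polymer of k unit disks with overall aspect ratio x,
    following the appendix formula (not the smoothed variant)."""
    rc = (x - 1) / (2 * (k - 1))          # center-to-center distance
    return math.pi + (k - 1) * (rc / 2 * math.sqrt(4 - rc**2)
                                + 2 * math.asin(rc / 2))

def spherocylinder(x):
    """Area and circumference of a 2D spherocylinder of disc radius 1."""
    return math.pi + 2 * (2 * x - 2), 2 * (math.pi + (2 * x - 2))

def ellipse(x):
    """Area and Ramanujan-approximate circumference, short semi-axis 1."""
    h = (x - 1) ** 2 / (x + 1) ** 2
    return math.pi * x, math.pi * (x + 1) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

# Unit-area normalization: rescaling lengths by 1/sqrt(S) maps C to C/sqrt(S).
for name, (S, C) in [("spherocylinder", spherocylinder(1.82)),
                     ("ellipse", ellipse(1.85))]:
    print(name, round(C / math.sqrt(S), 4))
```

A quick sanity check is that each shape reduces to the unit disk ($S=\pi$, $C=2\pi$) at aspect ratio $x=1$.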
\section{Introduction}
Dark energy, the source of the late-time accelerating expansion, has been studied extensively since the observations of Type Ia supernovae. For example, in Ref.\cite{Cheng:2018nhz} the authors constructed the Type Ia supernova spectrum by training an artificial neural network; in Ref.\cite{Feng:2009jr} the bulk viscosity of dark energy is taken into account to alleviate the age problem of the universe; and in Ref.\cite{Feng:2009hr} dark energy is investigated in the braneworld scenario to avoid a big-rip ending of the universe. For reviews on dark energy see Refs.\cite{Li:2012dt} and \cite{Bahamonde:2017ize}. It has been shown that more than two thirds of the energy density of the universe is of completely unknown origin, hence the name dark energy. What we do know is that the equation of state of dark energy is close to $w\sim-1$ at present \cite{Aghanim:2018eyx}. The gravitational effect of dark energy is repulsive; however, no such anti-gravity force has ever been observed in laboratory experiments on Earth.
The vacuum energy from quantum field theory, or the cosmological constant of general relativity, can be considered as such a kind of dark energy, since its equation of state is $w=-1$. However, observations have not confirmed $w=-1$; rather, $w$ deviates slightly from $-1$ \cite{Aghanim:2018eyx}, which suggests that a dynamical dark energy with varying $w$ may be more consistent with observations. Quintessence is such a dynamical dark energy model, in which a scalar field minimally coupled to gravity drives the accelerated expansion. Recently, a scalar field with a pole in its kinetic term, which was previously used to drive inflation in the early universe \cite{Kallosh:2013hoa}\cite{Kallosh:2013yoa}\cite{Galante:2014ifa}\cite{Carrasco:2015uma}, was proposed as a new kind of dark energy model \cite{Linder:2019caj}, called pole dark energy. In this model, the original field, which is non-minimally coupled to gravity, does not need a very unnatural flat potential: the transformation that brings the non-canonical kinetic term into canonical form can make the potential of the new field much flatter, and the universe is then accelerated by this flat potential.
We generalize the pole dark energy model and propose a multi-pole one, in which the kinetic term may have multiple poles. Such poles can arise in supergravity from the nonminimal coupling to the gravitational field or from the geometric properties of the K\"ahler manifold\cite{Broy:2015qna}\cite{Terada:2016nqg}. Besides, the k-essence model \cite{ArmendarizPicon:2000dh} is a dark energy model with non-canonical kinetic terms. Here we treat the poles phenomenologically, as was done in Ref.\cite{Linder:2019caj}. We find that the poles place restrictions on the values of the original scalar field, so that the original field need not change much even when the corresponding transformed field with canonical kinetic term changes considerably. The late-time evolution of the universe is obtained explicitly for the two-pole model, while a dynamical analysis is performed for the general multi-pole model. We find that the model does have a stable solution, which corresponds to a universe dominated by the potential energy of the scalar field.
In Sec.\ref{sec:mul}, we introduce the multi-pole dark energy model. The relation between the original scalar field that has two poles in its kinetic term and the transformed canonical one will be shown, and the properties of the transformed potential will also be presented. The cosmological evolution driven by the two pole model will be given in Sec.\ref{sec:cos}. For a general multi-pole dark energy, we will perform the dynamical analysis in Sec.\ref{sec:dyn}. In Sec.\ref{sec:dis} discussions and conclusions will be presented.
\section{The multi-pole dark energy }\label{sec:mul}
In general, the Lagrangian for a scalar field with poles in the kinetic term could be written as
\begin{eqnarray}
\mathcal{L} = -\frac{1}{2}\frac{k^2}{f^2(\sigma)}(\partial\sigma)^2 - V(\sigma)\,,
\end{eqnarray}
where $V(\sigma)$ is the potential and $f(\sigma)$ is some function of the scalar field, which by construction may have multiple zeros. The parameter $k$ can be positive or negative; the case $k<0$ is equivalent to changing the overall sign of $f$ while keeping $k$ positive. Without loss of generality, $k$ will be taken as $\pm1$ in the numerical calculations. Poles can come from supergravity, due to the nonminimal coupling to the gravitational field or the geometric properties of the K\"ahler manifold. In the pole dark energy model of Ref.\cite{Linder:2019caj}, the function $f$ is taken as a power law, $f(\sigma)=\sigma^{p/2}$, so the kinetic term has a single pole at $\sigma=0$ with residue $k^2$ and order $p$.
\subsection{Two poles}
After performing the transformation $d\phi = kd\sigma/f(\sigma)$, the non-canonical kinetic term of $\sigma$ is transformed to the canonical form for the scalar field $\phi$:
\begin{equation}\label{eqi:can}
\mathcal{L} = -\frac{1}{2}(\partial \phi)^2 - V(\sigma(\phi))\,.
\end{equation}
If the function $f$ is phenomenologically taken to be of the form:
\begin{eqnarray}\label{equ:mp}
f(\sigma) = \sigma(1 -\beta\sigma^q) \,,
\end{eqnarray}
which has zeros at $\sigma=0$ and $\sigma=\beta^{-1/q}$ (where the kinetic term has poles), with parameters $q>1$, $\beta>0$ in units of $8\pi G = 1$, then we can obtain an explicit relation between $\phi$ and $\sigma$:
\begin{eqnarray}
\phi &=&
\frac{k}{q}\ln\bigg(\frac{\sigma^q}{1-\beta\sigma^q} \bigg)\,, \\
\sigma &=& \left(\frac{1}{e^{-q\phi/k}+\beta } \right)^{1/q} \,.\label{equ:sigp}
\end{eqnarray}
When $k>0$ and $\beta=0$, Eq.(\ref{equ:mp}) gives $f=\sigma$, which is just the pole dark energy model of Ref.\cite{Linder:2019caj} with $p=2$; this model is often used for inflation. When $k<0$, $f\sim \beta \sigma^{q+1}$ for large $\sigma$, which coincides with the pole dark energy model with $p=2(q+1)$. The function $f$ can also be written in terms of $\phi$:
\begin{equation}
f = e^{\phi/k}\bigg( 1+\beta e^{q\phi/k}\bigg)^{-1-\frac{1}{q}} \,.
\end{equation}
We will take the branch $0<\sigma<\beta^{-1/q}$, which corresponds to $\phi\in(-\infty, \infty)$; by contrast, in the pole dark energy model $\sigma$ is taken in the branch $\sigma>0$. This shows that the second pole places a constraint on the $\sigma$ field. When the parameter is chosen as $q<-1$, the branch $\sigma>\beta^{-1/q}$ is taken instead, see Eq.(\ref{equ:sigp}). Therefore, when the two poles are very close to each other, e.g. for a very large $\beta$ in the two-pole model, one can take the other branch by a suitable choice of parameters, such as $q<-1$ here, and the results are unchanged.
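The closed-form relation Eq.(\ref{equ:sigp}) can be checked against a direct numerical integration of $d\phi = k\,d\sigma/f(\sigma)$; the following sketch uses an illustrative parameter choice and a simple midpoint rule:

```python
import math

k, q, beta = 1.0, 2.0, 0.5        # illustrative values; sigma in (0, beta**(-1/q))

f = lambda s: s * (1 - beta * s**q)
phi = lambda s: (k / q) * math.log(s**q / (1 - beta * s**q))

# Midpoint-rule integration of d(phi) = k d(sigma)/f(sigma) over [s0, s1]
s0, s1, N = 0.3, 1.0, 200_000
h = (s1 - s0) / N
numeric = sum(k / f(s0 + (i + 0.5) * h) for i in range(N)) * h

print(abs(numeric - (phi(s1) - phi(s0))) < 1e-8)  # -> True
```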
In the case of power law potential, we have
\begin{eqnarray}
V \sim \sigma^n \quad\rightarrow\quad V \sim (\beta + e^{-q\phi/k})^{-n/q} \,.
\end{eqnarray}
For $k>0$, when $\phi$ goes to infinity, the potential becomes
\begin{eqnarray}
V|_{\phi\rightarrow\infty} \sim \beta^{-\frac{n}{q}}\left(1-\frac{n}{q\beta}e^{-q\phi/k}\right)\,,
\end{eqnarray}
which is essentially an uplifted exponential potential. Conversely, when $\phi$ goes to minus infinity, the potential becomes $V|_{\phi\rightarrow-\infty}\sim e^{n\phi/k}$. For $k<0$, the two limits are exchanged. Note that after transforming to the canonical form, we obtain a flat potential for the new scalar field $\phi$ even if the original field $\sigma$ has a steep one. The first derivative of the potential with respect to $\phi$ is given by:
\begin{eqnarray}
\frac{V_\phi}{V} \equiv \frac{dV/d\phi}{V} =\frac{n}{k} \frac{ e^{-q\phi/k}}{\beta + e^{-q\phi/k}} \,.
\end{eqnarray}
When $\beta = 0$, $V_\phi/V$ is a constant, whereas for $\beta \neq 0$, $V_\phi/V \sim e^{-q\phi/k}$. For $k>0$, as $\phi$ grows large, $V_\phi/V\ll 1$, which indicates that the potential has a flat plateau in that regime.
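The slope formula above, and the plateau it implies, can be verified with a finite-difference check (the parameter values are illustrative only):

```python
import math

n, q, k, beta = 2.0, 2.0, 1.0, 0.5    # illustrative parameter choice

# Power-law case: V ~ sigma^n expressed through phi
U = lambda p: (beta + math.exp(-q * p / k)) ** (-n / q)
slope = lambda p: (n / k) * math.exp(-q * p / k) / (beta + math.exp(-q * p / k))

eps = 1e-6
for p in (-3.0, 0.0, 3.0):
    fd = (math.log(U(p + eps)) - math.log(U(p - eps))) / (2 * eps)
    assert abs(fd - slope(p)) < 1e-6   # V_phi/V matches d(ln V)/d(phi)

# The plateau: the slope is exponentially suppressed at large phi
print(slope(10.0) < 1e-7)  # -> True
```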
In the case of a dilaton potential, we have
\begin{eqnarray}
V\sim e^{-\alpha\sigma} \quad\rightarrow\quad V\sim e^{-\alpha (\beta + e^{-q\phi/k})^{-1/q} }\,,
\end{eqnarray}
which gives a super-exponential behavior, as in Ref.\cite{Linder:2019caj}. The ratio $V_\phi/V$ is
\begin{eqnarray}
\frac{V_\phi}{V} = \frac{\alpha}{k} \frac{ e^{-q\phi/k}}{\beta + e^{-q\phi/k}} \frac{1}{(\beta + e^{-q\phi/k})^{1/q}}\,.
\end{eqnarray}
When $\beta=0$, $V_\phi/V \sim e^{\phi/k}$, while for $\beta\neq 0$ and $k>0$, $V_\phi/V \sim e^{-q\phi/k}$, which again gives a flat plateau-like potential.
In fact, for a general potential $V(\sigma)$, we have
\begin{eqnarray}
\frac{V_\phi}{V} = \frac{V_\sigma}{V}\frac{f}{k} = \frac{V_\sigma}{V} \frac{e^{-q\phi/k}}{k(\beta + e^{-q\phi/k})^{1/q+1}}\,. \label{equ:vvpoles}
\end{eqnarray}
When $\phi\rightarrow \infty$, $\sigma$ approaches the second zero of $f$, $\sigma\rightarrow \beta^{-1/q}$, from Eq.(\ref{equ:sigp}). Then $V_\sigma/V$ becomes a constant, and $V_\phi/V\sim e^{-q\phi/k}$. The second pole in the kinetic term of $\sigma$ thus yields a flat plateau-like potential without fine-tuning any parameters.
\subsection{Multiple poles}
When the kinetic term of $\sigma$ has multiple poles, we cannot obtain an analytical formula for $V(\phi)$; therefore, we will perform a dynamical analysis of this general case in Sec.\ref{sec:dyn}. Note that we always take a branch of $\sigma$ that does not cross the pole points, e.g. $\sigma\in(0,\beta^{-1/q})$ in the last section. This means that the zeros of $f(\sigma)$ place restrictions on the values of $\sigma$: the $\sigma$ field changes little during the evolution, even when the $\phi$ field changes substantially. With the help of poles in the kinetic term of $\sigma$, i.e. zeros of the function $f(\sigma)$, we can easily obtain a flat plateau-like potential, since $V_\phi/V\rightarrow 0$ when $f$ approaches any one of its zeros, see Eq.(\ref{equ:vvpoles}).
\section{Cosmological evolution}\label{sec:cos}
The late-time evolution of a flat universe is determined by the Friedmann equation:
\begin{equation}\label{equ:frid}
H^2 = \frac{1}{3M_p^2}\left(\rho_m + \frac{1}{2}\dot\phi^2 + V(\phi)\right) \,,
\end{equation}
which includes the dark matter and dark energy components. Here $M_p^2=1/8\pi G$ is the reduced Planck mass, with $M_p=1$ in units of $8\pi G =1$. The dot over $\phi$ denotes the derivative with respect to time, and $\rho_m$ is the energy density of dark matter. The equation of motion for the $\phi$ field is given by
\begin{equation}\label{equ:eom}
\ddot \phi + 3H\dot \phi +\frac{dV}{d\phi} = 0\,.
\end{equation}
We also have the dynamic equation:
\begin{equation}\label{equ:deom}
\dot H = -\frac{1}{2} (\rho_m + \dot\phi^2)\,.
\end{equation}
Let $x=\ln a$ and introduce the following field and potential:
\begin{equation}\label{equ:rede}
\psi = \frac{\phi}{M_p}\,, \quad U = \frac{V}{3H_0^2M_p^2}\,,
\end{equation}
where we have restored the units to emphasize that both $\psi$ and $U$ are dimensionless, and $H_0$ is the present value of the Hubble parameter. The Friedmann equation then becomes
\begin{equation}
E^2 \left( 1-\frac{1}{6}\psi'^2\right)= \Omega_{m0} e^{-3x} + U \,,
\end{equation}
where the prime denotes the derivatives with respect to $x$ and $\Omega_{m0}\equiv \frac{\rho_{m0}}{3H_0^2}$, $E\equiv H/H_0$. Eq.(\ref{equ:deom}) becomes
\begin{eqnarray}\label{equ:deom2}
EE' = -\frac{3}{2}\Omega_{m0} e^{-3x} - \frac{1}{2} E^2\psi'^2\,.
\end{eqnarray}
After a straightforward calculation, the equation of motion for $\psi$ can be written as
\begin{equation}\label{equ:eom2}
\bigg(\Omega_{m0} e^{-3x} + U\bigg)\left( \psi'' - \frac{1}{2} \psi'^3 +3\psi'\right) +3\left( 1-\frac{1}{6}\psi'^2\right)\left(\frac{dU}{d\psi}-\frac{1}{2}\Omega_{m0} e^{-3x}\psi'\right) = 0\,.
\end{equation}
The equation of state is given by
\begin{eqnarray}\label{equ:eos}
w &=& \frac{\dot\phi^2/2-V}{\dot\phi^2/2+V} = -1+2\left[1+\frac{U\left( 6-\psi'^2\right)}{(\Omega_{m0} e^{-3x} + U)\psi'^2}\right]^{-1} \,.
\end{eqnarray}
It is clear that when the kinetic energy of the $\psi$ field is much smaller than its potential energy, $w\sim -1$. The evolution of the field $\psi$ can be obtained by numerically solving Eq.(\ref{equ:eom2}).
For $V=m^2\sigma^2/2$ with
\begin{equation}\label{equ:mod1}
U = U_0(\beta + e^{-q\psi/k})^{-2/q}\,,\quad U_0 = \frac{m^2}{6H_0^2}\,,
\end{equation}
and $q=2$, we solve Eq.(\ref{equ:eom2}) and plot the evolution of $\psi$ as a function of the redshift $z=1/a-1$ in Fig.\ref{fig:phiz}. One can always set $k^2=1$ by redefining $\sigma$; therefore, without loss of generality, we take $k=1$ in the numerical calculations. Note that $k=-1$ is equivalent to changing the overall sign of the function $f$ while keeping $k=1$, see Eq.(\ref{equ:mp}). In other words, for $k=-1$ we can redefine $\psi\rightarrow -\psi$; then $U$ is unchanged, $dU/d\psi \rightarrow -dU/d\psi$, and Eq.(\ref{equ:eom}) is unchanged up to an overall minus sign.
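As an illustration of how such a run can be reproduced, the sketch below integrates the scalar-field equation of motion in $x=\ln a$, written in the equivalent form $\psi'' + (3+E'/E)\psi' + 3\,(dU/d\psi)/E^2 = 0$, with a hand-written RK4 stepper; the parameter values and initial conditions are illustrative and are not those used for the figures:

```python
import math

# Illustrative parameters, not those behind the paper's figures.
Om0, q, k, beta, U0 = 0.3, 2.0, 1.0, 10.0, 1.0

U = lambda p: U0 * (beta + math.exp(-q * p / k)) ** (-2.0 / q)
Up = lambda p: (U0 * (2.0 / k) * math.exp(-q * p / k)
                * (beta + math.exp(-q * p / k)) ** (-2.0 / q - 1.0))

def E2(x, psi, dpsi):
    # Friedmann constraint: E^2 (1 - psi'^2/6) = Om0 e^{-3x} + U
    return (Om0 * math.exp(-3 * x) + U(psi)) / (1 - dpsi**2 / 6)

def deriv(x, y):
    psi, dpsi = y
    e2 = E2(x, psi, dpsi)
    dlnE = -1.5 * Om0 * math.exp(-3 * x) / e2 - 0.5 * dpsi**2   # E'/E
    return dpsi, -(3 + dlnE) * dpsi - 3 * Up(psi) / e2

x, y, h = -5.0, (1.0, 0.0), 1e-3          # start deep in matter domination
while x < 0:
    k1 = deriv(x, y)
    k2 = deriv(x + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
    k3 = deriv(x + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
    k4 = deriv(x + h,   (y[0] + h*k3[0],  y[1] + h*k3[1]))
    y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
         y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    x += h

kin = E2(x, *y) * y[1]**2 / 6             # kinetic energy in units of 3 H_0^2
w = (kin - U(y[0])) / (kin + U(y[0]))
print(round(w, 2))  # -> -1.0: the field is frozen on the plateau
```

With a large $\beta$ the transformed potential is nearly flat, so the field thaws only slightly and $w$ stays very close to $-1$ today, consistent with the behavior described in the text.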
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth,angle=0]{phiz1.eps}
\includegraphics[width=0.45\textwidth,angle=0]{phizz1.eps}
\caption{\label{fig:phiz}The evolution of $\psi$ and $\psi'$ with power-law potential as functions of redshift $z$, with the same initial conditions, $\Omega_{m0}=0.3$, $q=2$, $n=2$, and different $\beta$ values.}
\end{center}
\end{figure}
From Fig.\ref{fig:phiz}, $\psi$ increases at early times and decreases toward the present, $z\rightarrow 0$. The figure shows that a large value of $\beta$ slows down the decrease of $\psi$ and suppresses the growth of the kinetic energy $\sim \psi'^2$. In other words, with the help of $\beta$, the potential energy becomes the dominant part of the energy of $\psi$, so that its equation of state approaches $w\rightarrow-1$, see Fig.\ref{fig:w}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.44\textwidth,angle=0]{w1.eps}\,
\includegraphics[width=0.44\textwidth,angle=0]{wz1.eps}
\caption{\label{fig:w}For power law potential, the evolution of the equation of state $w$ and its running $w'$ as the function of redshift $z$ with $\Omega_{m0}=0.3,q=2,n=2$ and different $\beta$ values. }
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth,angle=0]{wwz1.eps}
\caption{\label{fig:wwz} For power law potential, the dynamics of $w-w'$ phase space with $\Omega_{m0}=0.3,q=2,n=2$ and different $\beta$ values. }
\end{center}
\end{figure}
The evolution of the equation of state $w$ and its running $w'$ are plotted in Fig.\ref{fig:w}, and their phase space in Fig.\ref{fig:wwz}. It is clear that large values of $\beta$ indeed make the model more suitable for describing the presently accelerating universe, and the running of $w$ almost vanishes ($w'\sim 0$) at present when $\beta$ is large.
Now we take the potential as $V= V_0e^{-\alpha \sigma}$, or
\begin{equation}
U = U_0e^{-\alpha (\beta + e^{-q\psi/k})^{-1/q} } \,,\quad U_0 = \frac{V_0}{6H_0^2}
\end{equation}
to solve Eq.(\ref{equ:eom2}) numerically. The evolution of $\psi$ and $w$ and their derivatives are plotted in Figs.\ref{fig:phiz3}-\ref{fig:wwz3}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth,angle=0]{phiz3.eps}
\includegraphics[width=0.45\textwidth,angle=0]{phizz3.eps}
\caption{\label{fig:phiz3} The evolution of $\psi$ and $\psi'$ with dilaton potential as the function of redshift $z$ with the same initial conditions and with $\Omega_{m0}=0.3,q=2,\alpha=1$ and different $\beta$ values. }
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth,angle=0]{w3.eps}
\includegraphics[width=0.45\textwidth,angle=0]{wz3.eps}
\caption{\label{fig:w3}For dilaton potential, the evolution of the equation of state $w$ and its running $w'$ as the function of redshift $z$ with $\Omega_{m0}=0.3,q=2,\alpha=1$ and different $\beta$ values. }
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth,angle=0]{wwz3.eps}
\caption{\label{fig:wwz3} For dilaton potential, the dynamics of $w-w'$ phase space with $\Omega_{m0}=0.3,q=2,\alpha=1$ and different $\beta$ values. }
\end{center}
\end{figure}
\section{Dynamical analysis }\label{sec:dyn}
Dynamical analysis is an effective method to reveal the qualitative behavior of nonlinear equations without solving them. It can produce good estimates of quantities connected with general features such as stability. This method has been widely used to analyze the evolution of the universe, see Refs.\cite{Feng:2012wx}\cite{Feng:2014fsa}.
In this section, we will treat the $f(\sigma)$ as a general function, which may have multiple zero points, and perform the dynamical analysis on the whole system of equations.
\subsection{Dynamical equations}
From Eq.(\ref{equ:eom}) and using $d\phi = kd\sigma/f$, we have
\begin{equation}\label{equ:sig}
\ddot \sigma-\frac{1}{f}\frac{df}{d\sigma}\dot \sigma^2+3H\dot \sigma + \frac{dV}{d\sigma}\frac{f^2}{k^2} = 0\,.
\end{equation}
After defining the dimensionless variables in the units of $8\pi G= 1$:
\begin{eqnarray}\label{equ:def}
X=\frac{k\dot\sigma}{\sqrt{6}fH}\,,\quad Y = \frac{\sqrt{V}}{\sqrt{3} H} \,,\quad \lambda = \frac{f}{k}\frac{V_\sigma}{V}\,,
\end{eqnarray}
where $V_\sigma=dV/d\sigma$, we have the constraint arising from the Friedmann equation
\begin{equation}
1 = \Omega_m + X^2+Y^2 \,, \quad \Omega_m = \frac{\rho_m}{3H^2}\,,
\end{equation}
therefore, the whole dynamical system is given by
\begin{eqnarray}
\frac{dX}{dx}&=& -3X- \sqrt{\frac{3}{2}} \lambda Y^2+ \frac{3}{2}X(1+X^2-Y^2) \,,\label{equ:sys1}\\
\frac{dY}{dx}&=&Y\left[ \sqrt{\frac{3}{2}} \lambda X +\frac{3}{2}(1+X^2-Y^2)\right]\,, \label{equ:sys2}\\
\frac{d\lambda}{dx} &=& \sqrt{6}X\lambda \left( \Gamma -\lambda \right)\,,\label{equ:sys3}
\end{eqnarray}
where
\begin{equation}
\Gamma \equiv \frac{f_\sigma}{k} + \lambda \frac{VV_{\sigma\sigma}}{ V_\sigma^2}\,,
\end{equation}
with the convention $f_\sigma=df/d\sigma, V_{\sigma\sigma}=d^2V/d\sigma^2$.
The equation of state can also be written in terms of $X,Y$:
\begin{equation}
w = \frac{X^2-Y^2}{X^2+Y^2}\,.
\end{equation}
Generally, the system (\ref{equ:sys1})-(\ref{equ:sys3}) is not strictly an autonomous system, but in some cases it is, for example when $\Gamma = \lambda$, which implies
\begin{equation}
\frac{df/d\sigma}{f} = \frac{dV/d\sigma}{V}-\frac{d^2V/d\sigma^2}{dV/d\sigma}\,.
\end{equation}
After integrating the above equation, we have $f\frac{dV}{d\sigma}\sim V$ up to some integration constant. By using the transformation
\begin{equation}
\phi = \int \frac{d\sigma}{f} \sim \int \frac{dV}{Vd\sigma}d\sigma = \ln V\,,
\end{equation}
we then get an exponential potential for $\phi$. Taking $V=m^2\sigma^2/2$, we get $f\sim \sigma$, which corresponds to the $\beta=0$ case of Eq.(\ref{equ:mod1}). When $\Gamma=\lambda$, Eq.(\ref{equ:sys3}) shows that $\lambda=\lambda_c$ is a constant, and the dynamical system reduces to a two-dimensional one.
When $\lambda_c\neq 0$, there are five critical points ($X_c,Y_c$)
\begin{eqnarray}\label{equ:5cr}
(0,0)\,, (1,0)\,,(-1,0) \,,\left(-\frac{\lambda_c}{\sqrt{6}},\sqrt{1-\frac{\lambda_c^2}{6}}\right)\,,\left(-\sqrt{\frac{3}{2}}\frac{1}{\lambda_c},\sqrt{\frac{3}{2}}\frac{1}{\lambda_c}\right)\,.
\end{eqnarray}
These critical points are the same as those for a quintessence model, and their stabilities have already been investigated in the literature, see Ref.\cite{Bahamonde:2017ize} and references therein.
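The approach to the scalar-field dominated point can be confirmed by integrating Eqs.(\ref{equ:sys1})-(\ref{equ:sys2}) directly for a constant $\lambda_c$; the sketch below uses an illustrative $\lambda_c$ and initial condition:

```python
import math

lam = 1.0                                   # constant lambda_c with lam**2 < 3

def deriv(X, Y):
    common = 1.5 * (1 + X**2 - Y**2)
    dX = -3 * X - math.sqrt(1.5) * lam * Y**2 + X * common
    dY = Y * (math.sqrt(1.5) * lam * X + common)
    return dX, dY

X, Y, h = 0.1, 0.1, 0.01                    # start close to matter domination
for _ in range(3000):                       # 30 e-folds of RK4 steps
    k1 = deriv(X, Y)
    k2 = deriv(X + h/2*k1[0], Y + h/2*k1[1])
    k3 = deriv(X + h/2*k2[0], Y + h/2*k2[1])
    k4 = deriv(X + h*k3[0],  Y + h*k3[1])
    X += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    Y += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])

# The trajectory lands on (-lam/sqrt(6), sqrt(1 - lam**2/6))
print(round(X, 4), round(Y, 4))  # -> -0.4082 0.9129
```

For $\lambda_c^2<3$ the trajectory passes near the matter saddle at the origin and then settles on the scalar-field dominated attractor, matching the standard quintessence behavior cited above.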
When $\Gamma\neq \lambda$, Eqs.(\ref{equ:sys1})-(\ref{equ:sys3}) do not form an autonomous system. By construction, the function $f$ has multiple zeros. When the system approaches one of the zeros, $\lambda$ becomes nearly vanishing, since $\lambda\sim f$ by the definition in Eq.(\ref{equ:def}). The derivative of $\Gamma$ with respect to $x$ is given by
\begin{equation}
\frac{d\Gamma}{dx} =\frac{f_{\sigma\sigma}f}{k^2}\sqrt{6}X + \frac{d\lambda}{dx} \frac{VV_{\sigma\sigma}}{ V_\sigma^2} + \lambda \frac{d}{dx}\left( \frac{VV_{\sigma\sigma}}{ V_\sigma^2}\right)\,.
\end{equation}
When $f\rightarrow0$, we have $\lambda\sim 0$, $d\lambda/dx\sim 0$ due to Eq.(\ref{equ:sys3}) and correspondingly $d\Gamma/dx\sim 0$ due to the above equation.
By introducing the following variables:
\begin{eqnarray}
\Gamma_{A(1)} &=& \frac{f_\sigma}{k}\\
\Gamma_{A(n)} &=& \frac{f^{(n)} f^{n-1}}{f_\sigma^{n}} \,, \quad
\Gamma_{B(n)} = \frac{V^{(n)} V^{n-1}}{V_\sigma^{n}} \,, n\geq 2\,,
\end{eqnarray}
where $f^{(n)} \equiv d^nf/d\sigma^n$ and $V^{(n)} \equiv d^nV/d\sigma^n$, we can rewrite $\Gamma$ as the sum of two parts:
\begin{equation}
\Gamma = \Gamma_{A(1)} + \lambda\Gamma_{B(2)} \,;
\end{equation}
therefore, the dynamical equations for these variables are given by
\begin{eqnarray}
\frac{d\Gamma_{A(1)}}{dx} &=& \frac{f^{(2)}}{k}\frac{\dot\sigma}{H} = \sqrt{6}X\frac{f^{(2)}f}{k^2} = \sqrt{6}X\Gamma_{A(1)}^2\frac{f^{(2)}f}{f_\sigma^2} = \sqrt{6}X\Gamma_{A(1)}^2\Gamma_{A(2)} \,,\label{equ:sys4}\\
\frac{d\Gamma_{A(2)}}{dx} &=& \left( \frac{f^{(3)}f}{f_\sigma^2} + \frac{f^{(2)}f_\sigma}{f_\sigma^2}-2\frac{(f^{(2)})^2f}{f_\sigma^3}\right) \frac{\dot\sigma}{H} =\sqrt{6}X\Gamma_{A(1)} \bigg(\Gamma_{A(3)}+\Gamma_{A(2)}-2\Gamma_{A(2)}^2\bigg)\,,
\end{eqnarray}
and
\begin{eqnarray}
\nonumber
\frac{d\Gamma_{A(n)}}{dx} &=& \left(\frac{f^{(n+1)} f^{n-1}}{f_\sigma^{n}}+(n-1)\frac{f^{(n)} f^{n-2}}{f_\sigma^{n-1}}-n\frac{f^{(n)} f^{n-1} f^{(2)}}{f_\sigma^{(n+1)}}\right)\frac{\dot\sigma}{H} \\
&=&\sqrt{6}X\Gamma_{A(1)}\bigg(\Gamma_{A(n+1)}+(n-1)\Gamma_{A(n)}-n\Gamma_{A(n)}\Gamma_{A(2)}\bigg)\,,\label{equ:sys5}\\
\nonumber
\frac{d\Gamma_{B(n)}}{dx} &=& \left( \frac{V^{(n+1)} V^{n-1}}{V_\sigma^{n}} + (n-1)\frac{V^{(n)} V^{n-2}}{V_\sigma^{(n-1)}} - n \frac{V^{(n)} V^{n-1}V^{(2)}}{V_\sigma^{(n+1)}} \right) \frac{\dot\sigma}{H} \\
&=& \sqrt{6}X\lambda\bigg(\Gamma_{B(n+1)}+ (n-1)\Gamma_{B(n)} - n \Gamma_{B(n)}\Gamma_{B(2)}\bigg)\,,\label{equ:sys6}
\end{eqnarray}
for $n=2,3,\cdots, N$.
Note that $f=0$ leads to $\lambda=0$; we then get $\Gamma_{A(n)}=0$ and $d\Gamma_{A(n)}/dx=d\Gamma_{B(n)}/dx=0$ for $n=2,3,\cdots,N$, which is indeed a critical point of the whole $2(N+1)$-dimensional dynamical system Eqs.(\ref{equ:sys1})-(\ref{equ:sys3}), Eq.(\ref{equ:sys4}) and Eqs.(\ref{equ:sys5})-(\ref{equ:sys6}). The critical points projected onto the subspace ($X_c,Y_c, \lambda_c=0 $) are
\begin{eqnarray}
(0,0,0)\,, (0,1,0)\,, (1,0,0)\,, (-1,0,0)\,,
\end{eqnarray}
with constants $\Gamma_{A(1)c}, \Gamma_{A(n)c}$, and $\Gamma_{B(n)c}$. When $\lambda\neq 0$, there are three critical points,
\begin{eqnarray}
(0,0,\lambda_c)\,,\left(-\frac{\lambda_c}{\sqrt{6}},\sqrt{1-\frac{\lambda_c^2}{6}},\lambda_c\right)\,,\left(-\sqrt{\frac{3}{2}}\frac{1}{\lambda_c},\sqrt{\frac{3}{2}}\frac{1}{\lambda_c},\lambda_c\right)\,,
\end{eqnarray}
where the second point requires $\lambda_c^2\leq 6$. Both of the last two points require
\begin{eqnarray}
\Gamma_{A(1)} &=& 0\,,\\
\Gamma_{B(2)} &=& 1\,,\\
\Gamma_{B(n+1)}&=&\Gamma_{B(n)}\bigg[ n \Gamma_{B(2)}- (n-1)\bigg] = \Gamma_{B(n)} = 1\,,\quad n\geq 2\,.\label{equ:bb}
\end{eqnarray}
In other words, these two points require an exponential potential that we have discussed before.
\subsection{Perturbations around the critical points}
When the critical points have $\lambda_c=0$, the linear perturbations of $\Gamma_{A(n)}$ are governed by
\begin{eqnarray}
\frac{d\delta\Gamma_{A(1)}}{dx} &=&\sqrt{6}X\Gamma_{A(1)}^2\delta\Gamma_{A(2)}\,,\\
\frac{d\delta \Gamma_{A(n)}}{dx} &=& \sqrt{6}X\Gamma_{A(1)}\bigg(\delta \Gamma_{A(n+1)}+(n-1)\delta \Gamma_{A(n)}\bigg)\,,\quad n\geq 2 \,,
\end{eqnarray}
and those of $\Gamma_{B(n)}$ are
\begin{equation}
\frac{d\delta \Gamma_{B(n)}}{dx}
= 0\,,\quad n\geq 2\,.
\end{equation}
We also have
\begin{equation}
\frac{d\delta \lambda}{dx}= \sqrt{6}X \Gamma_{A(1)} \delta \lambda\,,\label{equ:per3}
\end{equation}
and
\begin{eqnarray}
\frac{d\delta X}{dx}&=& -3\delta X- \sqrt{\frac{3}{2}} Y^2\delta \lambda + \frac{3}{2}\delta X(1+X^2-Y^2) + 3(X\delta X-Y\delta Y) \,,\label{equ:sys11}\\
\frac{d\delta Y}{dx}&=& \frac{3}{2}(1+X^2-Y^2)\delta Y + Y\left[ \sqrt{\frac{3}{2}} \delta\lambda X +3(X\delta X-Y \delta Y)\right]\,, \label{equ:sys12}
\end{eqnarray}
These perturbations $\delta \Gamma_{A(n)}$, $\delta \Gamma_{B(n)}$ and $\delta\lambda$ are obviously constants near the critical points ($0,0,0$) and ($0,1,0$). Eqs.(\ref{equ:sys11}) and (\ref{equ:sys12}) become
\begin{eqnarray}
\frac{d\delta X}{dx}&=& - \frac{3}{2}\delta X \,,\\
\frac{d\delta Y}{dx}&=& \frac{3}{2}\delta Y
\end{eqnarray}
near the critical point $(0,0,0)$ and
\begin{eqnarray}
\frac{d\delta X}{dx}&=& -3\delta X- \sqrt{\frac{3}{2}} \delta \lambda -3\delta Y \,,\\
\frac{d\delta Y}{dx}&=& - 3\delta Y
\end{eqnarray}
near the critical point $(0,1,0)$.
The critical point $(0,0,0)$ corresponds to the matter-dominated universe with $\Omega_m=1$ and is a saddle point, while the critical point $(0,1,0)$ corresponds to the de Sitter universe, in which the potential of $\phi$ dominates the energy density. From Eq.(\ref{equ:per3}), $d\delta\lambda/dx = 0 $, which leads to a vanishing determinant of the coefficient matrix of the linear perturbation system.
Let $X = r\sin\theta\cos\eta, \sqrt{1-Y^2} = r\sin\theta\sin\eta, \lambda = r\cos\theta$, then we have
$X^2+1-Y^2+\lambda^2 = r^2$. The critical point ($0,1,0$) corresponds to $r=0$.
The dynamical system (\ref{equ:sys1})-(\ref{equ:sys3}) then becomes
\begin{eqnarray}
\frac{dr}{dx} &=&r R(\theta,\eta) + o(r)\,,\label{equ:d1}\\
\frac{d\theta}{dx} &=& R(\theta,\eta) \cot\theta + o(r)\,,\label{equ:d2}\\
\frac{d\eta}{dx} &=& \Xi(\theta,\eta) + o(r)\label{equ:d3}
\end{eqnarray}
with
\begin{eqnarray}
R(\theta,\eta) &\equiv& -\frac{1}{2}\bigg[ \left(\sqrt{6}\sin\theta\cos\eta+\cos\theta\right)^2 +4\sin^2\theta - 1 \bigg] \,,\\
\Xi(\theta,\eta) &\equiv& -\frac{1}{2}\cos (2\eta) \csc \eta \left(\sqrt{6} \cot \theta+3 \cos \eta\right)\,.
\end{eqnarray}
Then we have
\begin{eqnarray}
\frac{dr}{rd\theta} &=& \tan\theta \,, \label{equ:dr} \\
\frac{dr}{rd\eta} &=& \frac{R(\theta,\eta)}{\Xi(\theta,\eta)}
= \frac{\sin \eta \left(\sin ^2\theta (6 \sec (2 \eta)+3)+\sqrt{6} \sin (2 \theta ) \cos \eta \sec (2 \eta)\right)}{\sqrt{6} \cot \theta +3 \cos\eta}\,.\label{equ:dr2}
\end{eqnarray}
Eq.(\ref{equ:dr}) indicates that $r$ increases as $\theta$ becomes large; therefore, if $\theta$ decreases with time, i.e. $d\theta/dx<0$, then $r$ decreases, and the system is stable, with an attractor at the point $(0,1,0)$. More directly, from Eq.(\ref{equ:d1}), $r$ decreases with time whenever $R(\theta,\eta)<0$. The range of ($\theta,\eta$) for which $R(\theta,\eta)<0$ is plotted in Fig.\ref{fig:rfun}; it is the region without gridding.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.4\textwidth,angle=0]{rfun.eps}
\caption{\label{fig:rfun} The space of ($\theta,\eta$). The red gridding range corresponds to $R(\theta,\eta)>0$, while others correspond to $R(\theta,\eta)<0$. }
\end{center}
\end{figure}
The critical point ($\pm 1, 0, 0$) corresponds to the universe dominated by the kinetic energy of $\phi$, and the perturbations of $X,Y,\lambda $ around this point are governed by
\begin{eqnarray}
\frac{d\delta X}{dx}&=& 3X\delta X \,,\\
\frac{d\delta Y}{dx}&=& 3\delta Y \,,\\
\frac{d\delta \lambda}{dx} &=& \sqrt{6} X \Gamma_{A(1)} \delta \lambda \,.
\end{eqnarray}
These two points are both unstable critical points. In the case of $\lambda_c\neq 0$, the critical point ($0, 0, \lambda_c $) is not interesting, since it corresponds to a universe without $\Omega_\phi$, and it is a saddle point just like the ($0, 0, 0$) point. The other two points have already been investigated in the literature. In summary, the multi-pole dark energy model has stable attractor solutions, just like the quintessence model.
\section{Discussions and Conclusions}\label{sec:dis}
In the multi-pole dark energy model, a flat potential for the field $\sigma$ is no longer needed. After transforming to the canonical kinetic form, we obtain a stable solution, which corresponds to the dark energy dominated universe. A scaling solution can also be obtained. For example, if $V(\sigma)=m^2\sigma^2/2$ and the potential of $\psi$ required to produce a constant equation of state $w=w_c$ is $V_s(\psi)$, then the function $f$ should be chosen as
\begin{equation}
f(\sigma) = \sqrt{\frac{1}{2m^2V_s}} \frac{dV_s}{d\phi}\bigg(\phi = V_s^{-1}(m^2\sigma^2/2) \bigg)\,,
\end{equation}
where $V_s^{-1}$ is the inverse function of $V_s$.
The whole dynamical system Eqs.(\ref{equ:sys1})-(\ref{equ:sys3}), Eq.(\ref{equ:sys4}) and Eqs.(\ref{equ:sys5})-(\ref{equ:sys6}) appears to have infinitely many dimensions, since a new variable $\Gamma_{A(n+1)}$ or $\Gamma_{B(n+1)}$ always appears in the equation for $d\Gamma_{A(n)}/dx$ or $d\Gamma_{B(n)}/dx$. However, if the function $f$ or $V$ is a polynomial of maximal order $\sigma^{N-1}$, then $\Gamma_{A(N)}=0$ or $\Gamma_{B(N)}=0$. As a result, the whole system closes to form an autonomous system of $2(N+1)$ dimensions.
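The truncation can be made concrete with exact polynomial differentiation: for the two-pole choice $f(\sigma)=\sigma(1-\beta\sigma^{q})$ with integer $q$, the hierarchy terminates because a polynomial of finite degree has identically vanishing higher derivatives. A minimal sketch:

```python
def dpoly(c):
    """Exact derivative of a polynomial given as coefficients c[i] of sigma**i."""
    return [i * c[i] for i in range(1, len(c))]

beta, q = 0.5, 2                 # f(sigma) = sigma - beta * sigma**(q+1), degree N-1 = 3
f = [0.0, 1.0, 0.0, -beta]

d, order = f, 0
while any(d):                    # differentiate until identically zero
    d, order = dpoly(d), order + 1

print(order)  # -> 4: f^{(4)} = 0, so Gamma_A(4) vanishes and the system closes
```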
In conclusion, we have proposed a multi-pole dark energy model. The cosmological evolution is obtained explicitly for the two-pole model, while a dynamical analysis of the whole system is performed for the multi-pole model. We find that this kind of dark energy model can have a stable solution, corresponding to a universe dominated by the potential energy of the scalar field. The multi-pole dark energy model thus appears worthy of further investigation.
\acknowledgments
This work is supported by National Science Foundation of China grant Nos.~11105091 and~11047138, ``Chen Guang" project supported by Shanghai Municipal Education Commission and Shanghai Education Development Foundation Grant No. 12CG51, and Shanghai Natural Science Foundation, China grant No.~10ZR1422000. CJF would like to thank Prof. Eric V. Linder for very helpful comments.
\section{Introduction}
\begin{figure}[t]
\centerline{\includegraphics[width=3cm]{assets/logo.eps}}
\caption{
The logo of \texttt{Yao}. It contains a Chinese character pronounced as \texttt{y\=ao}, which stands for being unitary.
}
\label{fig:logo}
\end{figure}
\texttt{Yao} is a software framework for solving practical problems in quantum computation research. Given the limitations of near-term noisy intermediate-scale quantum circuits~\cite{preskill2018quantum}, it is advantageous to treat quantum devices as co-processors and complement their abilities with classical computing resources.
In particular, variational quantum algorithms have emerged as a promising research direction.
These algorithms typically involve a quantum circuit with adjustable gate parameters and a classical optimizer.
Many of these quantum algorithms, including the variational quantum eigensolver for ground states~\cite{peruzzo2014variational, wecker2015progress, mcclean2016theory}, quantum approximate optimization
algorithm for combinatorial problems~\cite{farhi2014quantum}, quantum circuit learning for classification and regression~\cite{2018arXiv180206002F, mitarai2018quantum},
and quantum circuit Born machine for generative modeling~\cite{Benedetti2019,liu2018differentiable} have had small scale demonstrations in experiments~\cite{o2016scalable, Kandala2017, havlivcek2019supervised, zhu2019training, qaoa-exp19, leyton2019robust}.
There are still fundamental issues in this field that call for better quantum software alongside hardware advances.
For example, variational optimization of random circuits may encounter exponentially vanishing gradients~\cite{McClean2018} as the qubit number increases. Efficient quantum software is crucial for designing and verifying quantum algorithms in these challenging regimes.
Other research demands also call for quantum software that features a small overhead for repeated feedback control, convenient circuit structure manipulations, and efficient gradient calculation besides simply pushing up the number of qubits in experiments.
\begin{figure*}[t]
\centering
\centerline{\includegraphics[width=\textwidth, trim={0 5cm 2cm 1cm}, clip]{assets/YaoFramework.pdf}}
\caption{
Quantum block intermediate representation plays a central role in \texttt{Yao}. The images of GPU and quantum circuits are taken from JuliaGPU~\cite{besard2018effective} and IBM q-experience~\cite{Garcia2019}.
}\label{fig:arch}
\end{figure*}
On the other hand, deep learning and its extension \textit{differentiable programming} offer great inspiration and techniques for programming quantum computers.
Differentiable programming~\cite{diffprogramming} composes differentiable components into a learnable architecture and then learns the whole program by optimizing an objective function. The components are typically, but not limited to, neural networks. The word ``differentiable'' originates from the usual requirement of a gradient-based optimization scheme, which is crucial for scaling up to high dimensional parameter spaces. Differentiable programming removes laborious human effort and sometimes produces even better programs than humans can produce themselves~\cite{karpathy}.
Differentiable programming is a natural paradigm for variational quantum algorithms, where the parameters of a quantum circuit are adjusted to optimize a loss function.
In this regard, programming quantum circuits in the differentiable paradigm addresses a much longer-term issue than the short-term consideration of compensating for low-depth noisy quantum circuits with hybrid quantum-classical algorithms. Designing innovative and profitable quantum algorithms is, in general, nontrivial due to the lack of quantum intuitions. Fortunately, differentiable programming offers a new paradigm for devising novel quantum algorithms, much like what has already happened in the classical software landscape~\cite{karpathy}.
The algorithmic advances in differentiable programming hugely benefit from the rapid development of software frameworks~\cite{chen2015mxnet, abadi2016tensorflow, NEURIPS2019_9015, maclaurin2015autograd, Flux.jl-2018,innes2019zygote}, among which the automatic differentiation (AD) of the computational graph is the key technique behind the scenes. A computational graph is a directed acyclic graph that models the computational process from input to output of a program. In order to evaluate gradients via automatic differentiation, machine learning packages~\cite{chen2015mxnet, abadi2016tensorflow, NEURIPS2019_9015, maclaurin2015autograd,Flux.jl-2018,innes2019zygote} construct computational graphs in various ways.
It is instructive to view quantum circuits from the perspective of computational graphs with additional properties such as reversibility. In this regard, contextual analysis of quantum computational graphs can be even more profitable than for neural networks.
For example, uncomputing (also known as adjoint or dagger) a sub-program plays a central role in reversible computing~\cite{Bennett1973} since it returns qubit resources to the pool. In differentiable programming of quantum circuits, exploiting the reversibility of the computational graph allows differentiating through the quantum circuit with a memory cost that is constant, independent of its depth.
Inspired by differentiable programming software, we design \texttt{Yao} around a domain-specific computational graph, the quantum block intermediate representation (QBIR). A block refers to a tensor representation of quantum operations, which can be quantum circuits and quantum operators of various granularities (quantum gates, Hamiltonians, or the whole program). As shown in \Fig{fig:arch}, QBIR offers a hardware-agnostic abstraction of quantum circuits. It is called an intermediate representation due to its stage in the quantum compilation pipeline, which bridges high-level quantum algorithms and low-level device-specific instructions. \texttt{Yao} provides rich functionalities to construct, inspect, manipulate, and differentiate quantum circuits in terms of QBIR.
\begin{mdframed}[
frametitle={What can \texttt{Yao} do?},
outerlinewidth=0.6pt,
innertopmargin=6pt,
innerbottommargin=6pt,
roundcorner=4pt]
\begin{itemize}
\item Optimize a variational circuit with $10,000$ layers using reverse-mode AD on a laptop, see \Lst{lst:ad}.
\item Construct a sparse matrix representation of a 20-site Heisenberg Hamiltonian in approximately 5 seconds, see \Lst{lst:benchmark-matrix-ad}.
\item Simulate Shor's 9-qubit error correction code symbolically, see \App{app:symbolic}.
\item Send your circuit or Hamiltonian to a remote host in the form of \texttt{YaoScript}, see \App{app:yaoscript}.
\item Compile an arbitrary two-qubit unitary to a target circuit structure via gradient optimization, see \App{app:gatelearning} and \texttt{Yao}'s \href{https://github.com/QuantumBFS/QuAlgorithmZoo.jl/blob/v0.1.0/examples/PortZygote/gate\_learning.jl}{\texttt{QuAlgorithmZoo}}.
\item Solve the ground state of a $6\times 6$ lattice spin model with a tensor-network-inspired quantum circuit on GPU~\cite{liu2019variational}.
\end{mdframed}\label{md:yaocando}
A distinct feature of \texttt{Yao} is its builtin automatic differentiation engine. Instead of building upon existing machine learning frameworks~\cite{chen2015mxnet, abadi2016tensorflow, NEURIPS2019_9015, maclaurin2015autograd, Flux.jl-2018,innes2019zygote}, we design \texttt{Yao}'s automatic differentiation engine to exploit reversibility in quantum computing, where the QBIR serves as a reversible computational graph. This implementation features high speed and a constant memory cost with respect to the circuit depth.
\texttt{Yao} dispatches QBIR to low-level instructions of quantum registers of various types (CPU, GPU, and, in the future, QPU). \texttt{Yao} can be extended straightforwardly by defining new QBIR nodes or quantum register types. As a bonus of the generic design, symbolic manipulation of quantum circuits in \texttt{Yao} follows almost for free. \texttt{Yao} achieves all this flexibility and extensibility without sacrificing performance: it achieves some of the best performance for simulating quantum circuits of small to intermediate sizes (Sec.~\ref{sec:performance}), which are arguably the most relevant to quantum algorithm design for near-term devices.
\texttt{Yao} adds a unique solution to the landscape of open source quantum computing software, which includes \texttt{Quipper}~\cite{green2013quipper},
\texttt{ProjectQ}~\cite{steiger2016projectq},
\texttt{Q\#}~\cite{Svore2018},
\texttt{Cirq}~\cite{cirq},
\texttt{qulacs}~\cite{qulacs2019variational},
\texttt{PennyLane}~\cite{bergholm2018pennylane},
\texttt{qiskit}~\cite{Qiskit}, and \texttt{QuEST}~\cite{Jones2019}. References~\cite{fingerhuth2018open, LaRose2019overviewcomparison, Benedetti_2019} contain more complete surveys of quantum software.
Most software represents quantum circuits as a sequence of
instructions, so users need to define their own abstractions for circuits with rich structures. \texttt{Yao} offers QBIR and related utilities to compose and manipulate complex quantum circuits.
\texttt{Yao}'s QBIR is nothing but an abstract syntax tree, which is a commonly used data structure in modern programming languages thanks to its strong expressibility for control flows
and hierarchical structures. \texttt{Quipper}~\cite{green2013quipper} has adopted a similar strategy for the functional programming of quantum computing. \texttt{Yao} additionally introduces \texttt{Subroutine} to manage the scope of active and ancilla qubits. Besides these basic features, \texttt{Yao} puts a strong focus on differentiable programming of quantum circuits.
In this regard, \texttt{Yao}'s batched quantum register with GPU acceleration and built-in AD engine offers
significant speedup and convenience compared to \texttt{PennyLane}~\cite{bergholm2018pennylane} and \texttt{qulacs}~\cite{qulacs2019variational}.
An overview of \texttt{Yao}'s ecosystem, ranging from low-level customized bit operations and linear algebra to high-level quantum algorithms, is provided in Figure~\ref{fig:packages}. In Sec.~\ref{sec:qbir} we introduce the quantum block intermediate representation. In Sec.~\ref{sec:ad} we explain the mechanisms of reversible computing and automatic differentiation in \texttt{Yao}. The quantum registers, which store hardware-specific information about the quantum states in \texttt{Yao}, are explained in Sec.~\ref{sec:qregisters}. In Sec.~\ref{sec:performance}, we compare the performance of \texttt{Yao} against other frameworks to illustrate its efficiency. In Sec.~\ref{sec:extending} we emphasize the flexibility and extensibility of \texttt{Yao}, perhaps its most important features for integrating with existing tools. The applications of \texttt{Yao} and future directions for its development are discussed in Sec.~\ref{sec:application} and Sec.~\ref{sec:roadmap}, respectively. Finally, we summarize in Sec.~\ref{sec:sum}. The Appendices (\ref{app:external}-\ref{app:reading}) show various aspects and versatile applications of \texttt{Yao}.
\begin{mdframed}[
frametitle={Why Julia?},
outerlinewidth=0.6pt,
innertopmargin=6pt,
innerbottommargin=6pt,
roundcorner=4pt]
Julia is fast! The language design avoids the typical compilation and execution uncertainties associated with dynamic languages~\cite{jeff2015juliacon}. Generic programming in Julia~\cite{bezanson2012julia}
helps \texttt{Yao} reach optimized performance while still keeping the code base general and concise.
Benchmarks in Sec.~\ref{sec:performance} show that \texttt{Yao} reaches one of the best performances with generic codes written purely in Julia.
Julia code can be highly extensible thanks to its type system and multiple dispatch mechanism.
\texttt{Yao} builds a customized type system and dispatches on quantum registers and circuits through a general interface.
Moreover, Julia's meta-programming ability makes developing customized syntax and device-specific programs simple.
Julia's dynamic and generic approach for GPU programming~\cite{besard2018effective} powers \texttt{Yao}'s CUDA extension.
Julia integrates well with other programming languages.
It is generally straightforward to use external libraries written in other languages in \texttt{Yao}.
For example, the symbolic backend of \texttt{Yao} builds on \texttt{SymEngine} written in C++.
In \App{app:external}, we show an example of using
the Python package \texttt{OpenFermion}~\cite{Mcclean2017} within \texttt{Yao}.
\end{mdframed}
\begin{figure}
\centerline{\includegraphics[width=0.4\textwidth]{assets/stack.pdf}}
\captionsetup{singlelinecheck=off}
\caption[]{The packages in \texttt{Yao}'s ecosystem.
\begin{itemize}
\item \textbf{BitBasis} provides bitwise operations.
\item \textbf{LuxurySparse} is an extension of Julia's builtin \textbf{SparseArrays}. It defines customized sparse matrix types and implements efficient operations relevant to quantum computation.
\item \textbf{YaoBase} is an abstract package that defines basic abstract types for quantum registers and operations.
\item \textbf{YaoArrayRegister} defines a register type as well as instruction sets.
\item \textbf{YaoBlocks} defines utilities to construct and manipulate quantum circuits. It contains a builtin AD engine \textbf{YaoBlocks.AD} (note: it is reexported in \textbf{Yao}, hence referred to as \textbf{Yao.AD} in the main text).
\item \textbf{YaoSym} provides symbolic computation support.
\item \textbf{Yao} is a meta-package that re-exports \textbf{YaoBase}, \textbf{YaoArrayRegister}, \textbf{YaoBlocks}, and \textbf{YaoSym}.
\item \textbf{CuYao} is a meta-package that contains \textbf{Yao} and provides specializations for CUDA devices.
\item \textbf{YaoExtensions} provides utilities for constructing circuits and Hamiltonians, a faithful gradient estimator for quantum circuits, and some experimental features.
\item \textbf{QuAlgorithmZoo} contains examples of quantum algorithms and applications.
\end{itemize}}
\label{fig:packages}
\end{figure}
\begin{figure}
\centering
\input{assets/qft.tex}
\caption{Quantum Fourier transformation circuit. The red and blue dashed blocks are built by the \textbf{hcphases} and \textbf{cphase} functions in the Listing~\ref{lst:qft}.}
\label{fig:qft}
\end{figure}
\begin{figure*}[t]
\centering
\input{assets/qft-diagram}
\caption{Quantum Fourier transformation circuit as a QBIR.
The red nodes are roots of the composite \textbf{ChainBlock}. The blue nodes indicate the composite \textbf{ControlBlock} and \textbf{PutBlock}.
Green nodes are primitive blocks.}
\label{fig:qft-tree}
\end{figure*}
\section{Quantum Block Intermediate Representation}\label{sec:qbir}
The QBIR is a domain-specific abstract syntax tree for quantum operators, including circuits and observables. In this section, we introduce QBIR and its central role in \texttt{Yao} via concrete examples.
\subsection{Representing Quantum Circuits}\label{sec:representing-quantum-circuits}
Figure~\ref{fig:qft} shows the quantum Fourier transformation circuit~\cite{coppersmith1994approximate,ekert1996quantum, jozsa1998quantum} which contains the \texttt{hcphases} blocks (marked in red) of different sizes. Each block itself is also a composition of Hadamard gates and \texttt{cphase} blocks (marked in blue) on various locations. In \texttt{Yao}, it takes three lines of code to construct the QBIR of the QFT circuit.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:qft, caption=quantum Fourier transform]
julia> using Yao
julia> cphase(i, j) = control(i, j=> shift(
2π/(2^(i-j+1))));
julia> hcphases(n, i) = chain(n, i==j ?
put(i=>H) : cphase(j, i) for j in i:n);
julia> qft(n) = chain(hcphases(n, i)
for i in 1:n)
julia> qft(3)
nqubits: 3
chain
├─ chain
│ ├─ put on (1)
│ │ └─ H gate
│ ├─ control(2)
│ │ └─ (1,) shift(1.5707963267948966)
│ └─ control(3)
│ └─ (1,) shift(0.7853981633974483)
├─ chain
│ ├─ put on (2)
│ │ └─ H gate
│ └─ control(3)
│ └─ (2,) shift(1.5707963267948966)
└─ chain
└─ put on (3)
└─ H gate
\end{lstlisting}
\end{minipage}
The function \texttt{cphase} defines a control phase shift gate
with the \texttt{control} and \texttt{shift} functions.
The function \texttt{hcphases} defines the recursive pattern in the QFT circuit, which puts a Hadamard gate on the first qubit of the sub-block and then chains it with several control shift gates. The \texttt{chain} block is a composition of blocks acting on the same number of qubits; mathematically, it is equivalent to matrix multiplication in reverse order.
Finally, one composes the QFT circuit of a given size by chaining the \texttt{hcphases} blocks.
Overall, these codes construct a tree representation of the circuit shown in \Fig{fig:qft-tree}.
The subtrees are composite blocks (\texttt{ChainBlock}, \texttt{ControlBlock}, and \texttt{PutBlock}) with different
composition relations indicated in their roots. The leaves of the tree are primitive blocks. \App{app:block} shows the builtin block types of \texttt{Yao}, which are open to extension as shown in \App{app-qftblock}.
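To make the chain ordering convention concrete, here is a minimal NumPy sketch (deliberately independent of \texttt{Yao}; the gates are written out by hand) checking that chaining gate $A$ and then gate $B$ corresponds to the matrix product $BA$:

```python
import numpy as np

# Single-qubit gates, written out by hand for this illustration
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])  # a phase-shift gate

# A chain that applies H first and then S corresponds, as a matrix,
# to S @ H -- i.e. matrix multiplication in *reverse* order of the chain.
chain_matrix = S @ H

psi = np.array([1.0, 0.0], dtype=complex)  # |0>
step_by_step = S @ (H @ psi)               # apply the gates one by one
assert np.allclose(chain_matrix @ psi, step_by_step)
```

The same reversed-order convention underlies \texttt{Yao}'s \texttt{chain} constructor.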
In \texttt{Yao}, to execute a quantum circuit, one can simply feed a quantum state into the QBIR.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:apply, caption=apply! and pipe]
julia> rand_state(3) |> qft(3);
# same as apply!(rand_state(3), qft(3))
\end{lstlisting}
\end{minipage}
Here, we define a random state on 3 qubits and pass it through the QFT circuit.
The pipe operator \texttt{|>} is overloaded to call the \texttt{apply!} function
which applies the quantum circuit block to the register and modifies the register \textbf{inplace}.
The generic implementation of QBIR in \texttt{Yao} supports both numeric and symbolic data types.
For example, one can inspect the matrix representation of quantum gates defined in \texttt{Yao} with symbolic variables.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:shift, caption=inspecting gates]
julia> using Yao, SymEngine
julia> @vars θ
(θ,)
julia> shift(θ) |> mat
2×2 LinearAlgebra.Diagonal
{Basic,Array{Basic,1}}:
1 ⋅
⋅ exp(im*θ)
julia> control(2,1,2=>shift(θ)) |> mat
4×4 LinearAlgebra.Diagonal{Basic,
Array{Basic,1}}:
1 ⋅ ⋅ ⋅
⋅ 1 ⋅ ⋅
⋅ ⋅ 1 ⋅
⋅ ⋅ ⋅ exp(im*θ)
\end{lstlisting}
\end{minipage}
Here, the \texttt{@vars} macro declares the symbolic variable $\theta$. The \texttt{mat} function constructs the matrix representation of a quantum block. \App{app:symbolic} shows another example, demonstrating Shor's 9-qubit code for quantum error correction with symbolic computation.
\subsection{Manipulating Quantum Circuits}\label{sec:symbolic-manipulation}
In essence, QBIR represents the algebraic operations of a quantum circuit as types. Being an algebraic data type system, QBIR naturally supports pattern matching via Julia's multiple dispatch mechanism. Thus, one can manipulate quantum circuits in a straightforward manner by pattern matching on their QBIR.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:decompose, caption=gate decomposition]
julia> decompose(x::HGate) =
Rz(0.5π)*Rx(0.5π)*Rz(0.5π);
julia> decompose(x::AbstractBlock) =
chsubblocks(x, decompose.(subblocks(x)));
julia> qft(3) |> decompose
nqubits: 3
chain
├─ chain
│ ├─ put on (1)
│ │ └─ chain
│ │ ├─ rot(ZGate, 1.5707963267948966)
│ │ ├─ rot(XGate, 1.5707963267948966)
│ │ └─ rot(ZGate, 1.5707963267948966)
│ ├─ control(2)
│ │ └─ (1,) shift(1.5707963267948966)
│ └─ control(3)
│ └─ (1,) shift(0.7853981633974483)
├─ chain
│ ├─ put on (2)
│ │ └─ chain
│ │ ├─ rot(ZGate, 1.5707963267948966)
│ │ ├─ rot(XGate, 1.5707963267948966)
│ │ └─ rot(ZGate, 1.5707963267948966)
│ └─ control(3)
│ └─ (2,) shift(1.5707963267948966)
└─ chain
└─ put on (3)
└─ chain
├─ rot(ZGate, 1.5707963267948966)
├─ rot(XGate, 1.5707963267948966)
└─ rot(ZGate, 1.5707963267948966)
\end{lstlisting}
\end{minipage}
For example, consider a practical situation where one needs to decompose the Hadamard gate into three rotation
gates~\cite{Karalekas_2020}. The codes in Listing~\ref{lst:decompose} define
compilation passes by dispatching the \texttt{decompose} function on different quantum block types.
For the generic \texttt{AbstractBlock},
we apply \texttt{decompose} recursively to all its sub-blocks and use the function \texttt{chsubblocks} defined in \texttt{Yao}
to substitute the blocks.
The recursion terminates on primitive blocks where \texttt{subblocks} returns an empty set.
Due to the specialization of the \texttt{decompose} method on Hadamard gates, a chain of three rotation gates is returned as a subblock instead.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:iqft, caption=inverse QFT]
julia> iqft(n) = qft(n)';
julia> iqft(3)
nqubits: 3
chain
├─ chain
│ └─ put on (3)
│ └─ H gate
├─ chain
│ ├─ control(3)
│ │ └─ (2,) shift(-1.5707963267948966)
│ └─ put on (2)
│ └─ H gate
└─ chain
├─ control(3)
│ └─ (1,) shift(-0.7853981633974483)
├─ control(2)
│ └─ (1,) shift(-1.5707963267948966)
└─ put on (1)
└─ H gate
\end{lstlisting}
\end{minipage}
Besides replacing gates, one can also modify a block by applying tags to it. For example, the \texttt{Daggered} tag takes the hermitian conjugate of the block. We use the \texttt{\textquotesingle} operator to apply the \texttt{Daggered} tag. Similar to the implementation of \texttt{Transpose} on matrices in Julia, the dagger operator in \texttt{Yao} is ``lazy'' in the sense that one simply marks the block as \texttt{Daggered} unless there are specific dagger rules defined for the block. For example, the hermitian conjugate of a \texttt{ChainBlock} reverses the order of its child nodes and
propagates the \texttt{Daggered} tag to each subblock. Finally, we have the following rules for primitive blocks:
\begin{itemize}
\item Hermitian gates are unchanged under the dagger operation.
\item The hermitian conjugate of a rotation gate is $R_{\sigma}(\theta) \rightarrow R_{\sigma}(-\theta)$.
\item A time evolution block transforms as $e^{-iHt} \rightarrow e^{-iH(-t^*)}$.
\item Some special constant gates are hermitian conjugates of each other, e.g., \texttt{T} and \texttt{Tdag}.
\end{itemize}
With these rules, we can define the inverse QFT circuit directly in Listing~\ref{lst:iqft}.
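The rotation-gate rule and the reverse-and-dagger rule for chains can be checked numerically; the following NumPy sketch (not \texttt{Yao} code, just the underlying linear algebra) verifies $R_{\sigma}(\theta)^\dagger = R_{\sigma}(-\theta)$ and $(BA)^\dagger = A^\dagger B^\dagger$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli X, X^2 = I

def rot(sigma, theta):
    """Rotation gate R_sigma(theta) = exp(-i*theta*sigma/2), valid for sigma^2 = I."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma

theta = 0.7
# Dagger rule for rotation gates: R(theta)^dagger == R(-theta)
assert np.allclose(rot(X, theta).conj().T, rot(X, -theta))

# Dagger rule for chains: (B A)^dagger == A^dagger B^dagger,
# i.e. reverse the chain and dagger each sub-block.
A, B = rot(X, 0.3), rot(X, 1.1)
assert np.allclose((B @ A).conj().T, A.conj().T @ B.conj().T)
```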
\subsection{Matrix Representation} \label{sec:matrep}
Quantum blocks have matrix representations of different types for optimized performance.
By default, the \texttt{apply!} method applies quantum blocks to quantum registers using their matrix representations.
The matrix representation is also useful for determining operator properties such as hermiticity, unitarity, reflexivity, and commutativity. Lastly, one can also use \texttt{Yao}'s sparse matrix representation for quantum many-body computations such as exact diagonalization and (real and imaginary) time evolution.
For example, one can construct the Heisenberg Hamiltonian and obtain its ground state using the Krylov space solver via \texttt{KrylovKit.jl}~\cite{krylovkit} in Listing~\ref{lst:heisenberg}. The arithmetic operations \texttt{*} and \texttt{sum} return \texttt{ChainBlock} and \texttt{Add} blocks, respectively. It is worth noting the differences between the QBIR arithmetic operations of quantum circuits and those of Hamiltonians. Since Hamiltonians are generators of quantum unitaries (i.e., $U = e^{-iHt}$), it is natural to perform additions for Hamiltonians (and other observables) and multiplications for unitaries. \texttt{YaoExtensions} provides some convenience functions for creating Hamiltonians on various lattices and variational quantum circuits.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:heisenberg, caption=Heisenberg Hamiltonian]
julia> using KrylovKit: eigsolve
julia> bond(n, i) = sum([put(n, i=>σ) * put(
n, i+1=>σ) for σ in (X, Y, Z)]);
julia> heisenberg(n) = sum([bond(n, i)
for i in 1:n-1]);
julia> h = heisenberg(16);
julia> w, v = eigsolve(mat(h)
,1, :SR, ishermitian=true)
\end{lstlisting}
\end{minipage}
The \texttt{mat} function creates the sparse matrix representation of the Hamiltonian block.
To achieve an optimized performance, we extend Julia's built-in sparse matrix types for various quantum gates.
\App{app:sparse} lists these customized matrix types and promotion rules among them.
Time evolution under a quantum Hamiltonian invokes the Krylov space method~\cite{DifferentialEquations.jl-2017}, which repeatedly applies the Hamiltonian block to the register. In this case, one can use the \texttt{cache} tag to create a \texttt{CachedBlock} for the Hamiltonian. Then, the \texttt{apply!} method makes use of the sparse matrix representation cached in the memory.
Continuing from Listing~\ref{lst:heisenberg}, the following codes in Listing~\ref{lst:cache-te} show
that constructing and caching the matrix representation boosts the performance of time-evolution.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:cache-te, caption=Hamiltonian evolution is faster with cache]
julia> using BenchmarkTools
julia> te = time_evolve(h, 0.1);
julia> te_cache = time_evolve(cache(h), 0.1);
julia> @btime $(rand_state(16)) |> $te;
1.042 s (10415 allocations: 1.32 GiB)
julia> @btime $(rand_state(16)) |> $te_cache;
71.908 ms (10445 allocations: 61.48 MiB)
\end{lstlisting}
\end{minipage}
On the other hand, in many cases \texttt{Yao} can make use
of efficient specializations of the \texttt{apply!} method for various blocks and apply them
on the fly without generating the matrix representation.
The codes in Listing~\ref{lst:cache-qft} show that this approach can be faster for simulating quantum circuits.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:cache-qft, caption=Circuit simulation is faster without cache]
julia> r = rand_state(10);
julia> @btime r |> $(qft(10));
550.466 μs (3269 allocations: 184.58 KiB)
julia> @btime r |> $(cache(qft(10)));
1.688 ms (234 allocations: 30.02 KiB)
\end{lstlisting}
\end{minipage}
\section{Reversible Computing and Automatic Differentiation}\label{sec:ad}
Automatic differentiation efficiently computes the gradient of a program. It is the engine behind the success of deep learning~\cite{baydin2018automatic}.
The technique is particularly relevant to differentiable programming of quantum circuits.
In general, there are several modes of AD: the reverse mode caches intermediate states and then evaluates all gradients in a single backward run, whereas the forward mode computes the gradients in a single pass together with the objective function, which does not require caching intermediate states but has to evaluate the gradients of all parameters one by one.
\texttt{Yao}'s builtin reverse mode AD engine (Sec.~\ref{sec:reverse-mode})
provides more efficient circuit differentiation for variational quantum algorithms compared to conventional reverse mode differentiation and forward mode differentiation (Sec. \ref{sec:forward-mode}).
By taking advantage of the reversible nature of quantum circuits, the memory complexity is reduced to constant compared to typical reverse mode AD~\cite{baydin2018automatic}. This property allows one to simulate very deep variational quantum circuits. Besides, \texttt{Yao} supports the forward mode AD (Sec.~\ref{sec:forward-mode}), which is a faithful quantum simulation of the experimental situation. In the classical simulation, the complexity of forward mode is unfavorable compared to reverse mode because one needs to run the circuit repeatedly for each component of the gradient.
\subsection{Reverse Mode: Builtin AD Engine with Reversible Computing} \label{sec:reverse-mode}
The submodule \texttt{Yao.AD} is a built-in AD engine. It back-propagates through quantum circuits using the computational graph information recorded in the QBIR.
In general, reverse mode AD needs to cache intermediate states in the forward pass for the backpropagation. Therefore, the memory consumption for backpropagating through a quantum simulator becomes unacceptable as the depth of the quantum circuit increases. Hence, simply delegating AD to existing machine learning packages~\cite{chen2015mxnet, abadi2016tensorflow, NEURIPS2019_9015, maclaurin2015autograd,Flux.jl-2018,innes2019zygote} is not a satisfactory solution.
\texttt{Yao}'s customized AD engine exploits the inherent reversibility of quantum circuits~\cite{griewank2008evaluating,gomez2017reversible}.
By uncomputing the intermediate state in the backward pass, \texttt{Yao.AD} mostly performs in-place operations without allocations.
\texttt{Yao.AD}'s superior performance is in line with the recent efforts of implementing efficient backpropagation through
reversible neural networks~\cite{gomez2017reversible,chen2018neural}.
In the forward pass, we update the wave function $|\psi_k\rangle$ with inplace operations
\begin{align}
\begin{split}
\ldots\\
|\psi_{k+1}\rangle = U_k |\psi_k\rangle ,\\
\ldots
\end{split}
\end{align}
where $U_k$ is a unitary gate parametrized by $\theta_k$. We define the adjoint of a variable as $\overline{x} = \frac{\partial \mathcal{L}}{\partial x^*}$ according to Wirtinger's derivative~\cite{Hirose2003} for complex numbers, where $\mathcal{L}$ is a real-valued objective function that depends on the final state. Starting from $\overline{\mathcal{L}} = 1$, we can obtain the adjoint of the output state.
To pull back the adjoints through the computational graph, we perform the backward calculation~\cite{Giles2008}
\begin{align}
\begin{split}
\ldots\\
|\psi_k \rangle &= U_k^\dagger | \psi_{k+1} \rangle \\
\overline{|\psi_k \rangle} &= U_k^\dagger \overline{| \psi_{k+1} \rangle} \\
\ldots
\label{eq:apply-back}
\end{split}
\end{align}
The two equations above are implemented in \texttt{Yao.AD} with the \texttt{apply\_back!} method. Based on the obtained information, we can compute the adjoint of the gate matrix using~\cite{Giles2008}
\begin{align}
\begin{split}
\overline{U_k} = \overline{ | \psi_{k+1} \rangle } \langle \psi_k|.
\label{eq:outer-product}
\end{split}
\end{align}
This outer product is not explicitly stored as a dense matrix. Instead, it is handled efficiently by customized low rank matrices described in \App{app:sparse}.
Finally, we use the \texttt{mat\_back!} method to compute the adjoint of gate parameters $\overline{\theta_k}$
from the adjoint of the unitary matrix $\overline{U_k}$.
Figure~\ref{fig:yaoad} demonstrates the procedure in a concrete example.
The black arrows show the forward pass without any allocation except for the output state and the objective function $\mathcal{L}$.
In the backward pass, we uncompute the states (blue arrows) and backpropagate the adjoints (red arrows) at the same time.
For the block defined as \texttt{put(nbit, i=>chain(Rz($\alpha$), Rx($\beta$), Rx($\gamma$)))},
we obtain the desired $\overline\alpha$, $\overline\beta$ and $\overline\gamma$
by pushing the adjoints back through the \texttt{mat} functions of \texttt{PutBlock} and \texttt{ChainBlock}.
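The adjoint propagation above can be checked numerically outside of \texttt{Yao}. The following NumPy sketch (an illustrative single-qubit example with a hypothetical fidelity-style loss $\mathcal{L} = |\langle\phi|\psi_{\rm out}\rangle|^2$, not \texttt{Yao}'s actual implementation) forms $\overline{U}$ as in \Eq{eq:outer-product}, extracts $\overline{\theta} = 2\,{\rm Re}\,{\rm tr}\big(\overline{U}^\dagger\,\partial U/\partial\theta\big)$, and compares against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)

def rz(theta):
    """Rz gate, U(theta) = diag(e^{-i theta/2}, e^{i theta/2})."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def drz(theta):
    """dU/dtheta for the Rz gate."""
    return np.diag([-0.5j * np.exp(-1j * theta / 2), 0.5j * np.exp(1j * theta / 2)])

def normalized(v):
    return v / np.linalg.norm(v)

psi_in = normalized(rng.normal(size=2) + 1j * rng.normal(size=2))
phi = normalized(rng.normal(size=2) + 1j * rng.normal(size=2))
theta = 0.4

def loss(t):
    psi_out = rz(t) @ psi_in
    return abs(np.vdot(phi, psi_out)) ** 2  # L = |<phi|psi_out>|^2

# Backward pass: adjoint of the output state, bar{psi} = dL/dpsi^*
psi_out = rz(theta) @ psi_in
psi_out_bar = np.vdot(phi, psi_out) * phi
# Outer-product rule: bar{U} = bar{psi_out} <psi_in|
U_bar = np.outer(psi_out_bar, psi_in.conj())
# Pull the adjoint back to the real parameter theta
theta_bar = 2 * np.real(np.trace(U_bar.conj().T @ drz(theta)))

# Compare with a central finite difference of the loss
eps = 1e-6
fd = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
assert abs(theta_bar - fd) < 1e-5
```

In the sketch the intermediate state is recomputed directly; in \texttt{Yao.AD} it is instead recovered by uncomputing with $U_k^\dagger$, which is what makes the memory cost constant in the circuit depth.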
The implementation of the AD engine is generic so that it works automatically with symbolic computation.
We show an example of calculating the symbolic derivative of gate parameters in \App{app:symad}. One can also integrate
\texttt{Yao.AD} with classical automatic differentiation engines such as \texttt{Zygote} to handle mixed classical and quantum computational graphs, see~\cite{betaVQE}.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:ad, caption=10000-layer VQE]
julia> using Yao, YaoExtensions
julia> n = 10; depth = 10000;
julia> circuit = dispatch!(
variational_circuit(n, depth),
:random);
julia> gatecount(circuit)
Dict{Type{#s54} where #s54 <:
AbstractBlock,Int64} with 3 entries:
RotationGate{1,Float64,ZGate} => 200000
RotationGate{1,Float64,XGate} => 100010
ControlBlock{10,XGate,1,1} => 100000
julia> nparameters(circuit)
300010
julia> h = heisenberg(n);
julia> for i = 1:100
_, grad = expect'(h, zero_state(n)=>
circuit)
dispatch!(-, circuit, 1e-3 * grad)
println("Step $i, energy = $(expect(
h, zero_state(10)=>circuit))")
end
\end{lstlisting}
\end{minipage}
To demonstrate the efficiency of \texttt{Yao}'s AD engine, we use the codes in Listing~\ref{lst:ad} to simulate the variational quantum eigensolver (VQE)~\cite{Peruzzo2014} with depth $10,000$ (with $300,010$ variational parameters) on a laptop. The simulation would be extremely challenging without \texttt{Yao}, either due to overwhelming memory consumption in the reverse mode AD or unfavorable computation cost in the forward mode AD.
Here, \texttt{variational\_circuit} is predefined in \texttt{YaoExtensions} to have a hardware efficient architecture~\cite{kandala2017hardware} shown in \Fig{fig:pcircuit-benchmark}. The \texttt{dispatch!} function with the second parameter specified to \texttt{:random} gives random initial parameters.
The \texttt{expect} function
evaluates expectation values of the observables; the second argument can be a wave function or a pair of the input wave function and circuit ansatz like above.
\texttt{expect\textquotesingle} evaluates the gradient of this observable for the input wave function and circuit parameters. Here, we only make use of its second return value. For batched registers, the gradients of circuit parameters are accumulated rather than returning a batch of gradients.
\texttt{dispatch!(-, circuit, ...)} implements the gradient descent algorithm with energy as the loss function. The first argument is a binary operator that computes a new parameter from the old parameter in \texttt{circuit} and the third argument, the gradients. Parameters in a circuit can be extracted by calling \texttt{parameters(circuit)}, which collects parameters into a vector by visiting the QBIR in depth-first order. The same parameter visiting order is used in \texttt{dispatch!}. In case one would like to share parameters in the variational circuit, one can simply use the same block instance in the QBIR. In the training process, gradients are then accumulated in the same field. After the training, the circuit is fully optimized and, given the zero state as input, returns the ground state of the model Hamiltonian.
\begin{figure}
\centerline{\includegraphics[width=\columnwidth,trim={2cm 1cm 2cm 0cm}, clip]{assets/yaoad.pdf}}
\caption{Built-in automatic differentiation engine \textbf{Yao.AD}.
Black arrows represent the forward pass. The blue arrow represents uncomputing. The red arrows indicate the backpropagation of the adjoints.}
\label{fig:yaoad}
\end{figure}
\subsection{Forward Mode: Faithful Quantum Gradients} \label{sec:forward-mode}
Compared to the reverse mode, forward mode AD is more closely related to how one measures the gradient in the actual experiment.
The implementation of the forward mode AD is particularly simple for the ``rotation gates'' $R_\Sigma(\theta) \equiv e^{-i\Sigma\theta/2}$
with the generator $\Sigma$ being both hermitian and reflexive ($\Sigma^2 = 1$). For example, $\Sigma$ can be the Pauli gates X,
Y and Z, or multi-qubit gates such as CNOT, CZ, and SWAP. Every two-qubit gate can be decomposed into Pauli rotations and CNOTs (or CZs) via gate transformations~\cite{Crooks2019}. Under these conditions, the gradient with respect to a circuit parameter is~\cite{Li2017,mitarai2018quantum, Schuld2019, Nakanishi2019}
\begin{equation}
\frac{\partial \langle O \rangle_\theta}{\partial \theta} = \frac{1}{2}\left(\langle O \rangle_{\theta+\frac{\pi}{2}} - \langle O \rangle_{\theta-\frac{\pi}{2}}\right)
\label{eq:shiftrule}
\end{equation}
where $\langle O \rangle_\theta$ denotes the expectation of the observable $O$ with the given parameter $\theta$. Therefore, one just needs to run the simulator twice to estimate the gradient. \texttt{YaoExtensions} implements \Eq{eq:shiftrule} with Julia's broadcasting semantics and obtains the full gradients with respect to all parameters. Similar features can be found in \texttt{PennyLane}~\cite{bergholm2018pennylane} and \texttt{qulacs}~\cite{qulacs2019variational}. We refer to this approach as the \textit{faithful gradient}, since it mirrors the experimental procedure on a real quantum device. In this way, one can estimate the gradients in the VQE example Listing~\ref{lst:ad} using \Eq{eq:shiftrule}
\begin{minipage}{.44\textwidth}
\begin{lstlisting}
# this will be slow
julia> grad = faithful_grad(h, zero_state(n)
=>circuit; nshots=100);
\end{lstlisting}
\end{minipage}
where one faithfully simulates \texttt{nshots} projective measurements.
In the default setting \texttt{nshots=nothing}, the function evaluates the exact expectation on the quantum state. Note that simulating projective measurement, in general, involves rotating to the eigenbasis of the observed operator. \texttt{Yao} implements an efficient way to break the measurement into expectations of local terms by diagonalizing the observed operator symbolically, as below.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:eigen, caption=The eigendecomposition of a QBIR.]
julia> O = chain(5, put(5,2=>X), put(5,3=>Y))
nqubits: 5
chain
├─ put on (2)
│ └─ X
└─ put on (3)
└─ Y
julia> E, U = YaoBlocks.eigenbasis(O)
(nqubits: 5
chain
├─ put on (2)
│ └─ Z
└─ put on (3)
└─ Z
, nqubits: 5
chain
├─ put on (2)
│ └─ H
└─ put on (3)
└─ chain
├─ H
└─ S
)
\end{lstlisting}
\end{minipage}
The return value of \texttt{eigenbasis} contains two QBIRs \texttt{E} and \texttt{U} such that \texttt{O = U*E*U\textquotesingle}.
\texttt{E} is a diagonal operator that represents the observable in the measurement basis. \texttt{U} is a circuit that rotates the computational basis to the measurement basis.
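Both the parameter-shift rule of \Eq{eq:shiftrule} and the relation \texttt{O = U*E*U\textquotesingle} can be checked numerically without \texttt{Yao}. The following plain-Julia sketch (all matrices defined inline for illustration; this is not \texttt{Yao}'s implementation) verifies the shift rule for $R_X(\theta)$ with observable $Z$, where $\langle Z \rangle_\theta = \cos\theta$ so the rule must reproduce $-\sin\theta$, and verifies the eigenbases of $X$ and $Y$ returned in Listing~\ref{lst:eigen}:

```julia
using LinearAlgebra

# Pauli and Clifford matrices, defined inline for this check
X = ComplexF64[0 1; 1 0]
Y = ComplexF64[0 -im; im 0]
Z = ComplexF64[1 0; 0 -1]
H = ComplexF64[1 1; 1 -1] / sqrt(2)
S = ComplexF64[1 0; 0 im]

# R_X(θ) = exp(-iθX/2); the generator X is hermitian and reflexive (X² = I)
rx(θ) = cos(θ / 2) * I(2) - im * sin(θ / 2) * X

# ⟨Z⟩ for |ψ(θ)⟩ = R_X(θ)|0⟩, which equals cos(θ)
expectZ(θ) = real((rx(θ) * [1.0, 0.0])' * Z * (rx(θ) * [1.0, 0.0]))

θ = 0.3
shift_grad = (expectZ(θ + π/2) - expectZ(θ - π/2)) / 2   # parameter-shift rule
@assert isapprox(shift_grad, -sin(θ); atol=1e-10)

# eigenbasis relations behind Listing: X = H Z H† and Y = (S H) Z (S H)†
@assert H * Z * H' ≈ X
@assert (S * H) * Z * (S * H)' ≈ Y
```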
The above gradient estimator \Eq{eq:shiftrule} can also be generalized to statistic functional loss, which is useful for generative modeling with an implicit probability distribution given by the quantum circuits~\cite{liu2018differentiable}. The symmetric statistic functional of order two reads
\begin{equation}
\mathcal{F}_\theta = \expect{K(x, y)}{x\sim p_\theta, y\sim p_\theta},
\end{equation}
where $K$ is a symmetric function, $p_\theta$ is the output probability distribution of a parametrized quantum circuit measured on the computational basis. If the circuit is parametrized by rotation gates, the gradient of the statistic functional is
\begin{eqnarray}
\frac{\partial \mathcal{F}_\theta}{\partial \theta} =&\expect{K(x,y)}{x\sim p_{\theta + \frac{\pi}{2}}, y\sim p_\theta}\nonumber\\&-\expect{K(x,y)}{x\sim p_{\theta-\frac{\pi}{2}},y\sim p_\theta},
\end{eqnarray}
which is also related to the measure valued gradient estimator for stochastic optimization~\cite{Mohamed2019}.
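For a one-qubit sanity check (a hedged sketch in plain Julia, independent of \texttt{Yao}): the circuit $R_X(\theta)|0\rangle$ yields $p_\theta = [\cos^2(\theta/2), \sin^2(\theta/2)]$, and the shift-rule gradient of $\mathcal{F}_\theta$ for an illustrative kernel $K$ can be compared against a finite difference:

```julia
# output distribution of R_X(θ)|0⟩ measured in the computational basis
p(θ) = [cos(θ / 2)^2, sin(θ / 2)^2]
K(x, y) = exp(-(x - y)^2)          # a symmetric kernel on outcomes {0, 1}

# F_θ evaluated exactly by summing over measurement outcomes
F(θ) = sum(K(x, y) * p(θ)[x + 1] * p(θ)[y + 1] for x in 0:1, y in 0:1)

θ = 0.7
# gradient from the shifted distributions, following the equation above
g = sum(K(x, y) * (p(θ + π / 2)[x + 1] - p(θ - π / 2)[x + 1]) * p(θ)[y + 1]
        for x in 0:1, y in 0:1)
g_fd = (F(θ + 1e-6) - F(θ - 1e-6)) / 2e-6   # central finite difference
@assert isapprox(g, g_fd; atol=1e-6)
```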
Within this formalism, \texttt{Yao} provides the following interfaces to evaluate gradients with respect
to the maximum mean discrepancy loss~\cite{Li2017e, Gretton2012}, which measures the probabilistic distance between two sets of samples.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:mmd, caption=gradient of maximum mean discrepancy]
julia> target_p = normalize!(rand(1<<5));
julia> kf = brbf_kernel(2.0);
julia> circuit = variational_circuit(5);
julia> mmd = MMD(kf, target_p);
julia> g_reg, g_params = expect'(
mmd, zero_state(5)=>circuit);
julia> g_params = faithful_grad(
mmd, zero_state(5)=>circuit);
\end{lstlisting}
\end{minipage}
\section{Quantum Registers}\label{sec:qregisters}
The quantum register stores hardware-specific information about the quantum states.
In classical simulation on a CPU, the quantum register is an array containing the quantum wave function.
For GPU simulations, the quantum register stores the pointer to a GPU array. In an actual experiment, the register should be the quantum device that hosts the quantum state. \texttt{Yao} handles all of these cases with a unified \texttt{apply!} interface, which dispatches the instructions depending on different types of QBIR nodes and registers.
\subsection{Instructions on Quantum Registers} \label{sec:instruct}
Quantum registers store quantum states in contiguous memory, which can either
be the CPU memory or other hardware memory, such as a CUDA device.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:cuyao, caption=CUDA register]
julia> using CuYao
# construct the |1010> state
julia> r = ArrayReg(bit"1010");
# transfer data to CUDA
julia> r = cu(r);
\end{lstlisting}
\end{minipage}
Each register type has its own device-specific instruction set. They are declared in \texttt{Yao} via the ``instruction set'' interface, which includes
\begin{itemize}
\item \textbf{gate instruction}: \texttt{instruct!}
\item \textbf{measure instruction}: \texttt{measure} and \texttt{measure!}
\item \textbf{qubit management instructions}: \texttt{focus!} and \texttt{relax!}
\end{itemize}
The instruction interface provides a clean way to extend support to various backends without the user having to worry about changes to frontend interfaces.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:instruct, caption=instruct! and measure]
julia> r = zero_state(4);
julia> instruct!(r, Val(:X), (2, ))
ArrayReg{1, Complex{Float64}, Array...}
active qubits: 4/4
julia> samples = measure(r; nshots=3)
3-element Array{BitBasis.BitStr{4,Int64},1}:
0010 ₍₂₎
0010 ₍₂₎
0010 ₍₂₎
julia> [samples[1]...]
4-element Array{Int64,1}:
0
1
0
0
\end{lstlisting}
\end{minipage}
For example, the rotation gate shown in \Fig{fig:arch} is interpreted as
\texttt{instruct!(reg, Val(:Rx), (2,), $\theta$)}. The second parameter specifies
the gate, which is a \texttt{Val} type with a gate symbol as a type parameter. The third parameter is
the qubit to apply the gate to, and the fourth parameter is the rotation angle. The CNOT gate is interpreted as \texttt{instruct!(reg, Val(:X), (1,), (2,), (1,))}, where the last three
tuples are the gate location, the control qubits, and the configuration of the control qubits (0 for inverse control, 1 for control), respectively.
The \texttt{measure} function simulates measurement from the quantum register and provides bit strings,
while \texttt{measure!} returns the bit string and also collapses the state.
In the last line of the above example, we convert a bit string $\texttt{0010}_{\texttt{(2)}}$ to a vector \texttt{[0, 1, 0, 0]}. Note that the order is reversed since the readout of a bit string is in the little-endian format.
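The same little-endian convention can be reproduced with Julia's built-in \texttt{digits}, which emits the least significant bit first:

```julia
# little-endian readout: the first entry is the lowest bit, i.e. qubit 1
bits = digits(0b0010, base=2, pad=4)
@assert bits == [0, 1, 0, 0]
```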
\subsection{Active qubits and environment qubits}\label{sec:scoping}
In certain quantum algorithms, one only applies the circuit block to a subset of qubits. For example, see the quantum phase estimation algorithm~\cite{nielsen2010quantum} shown in \Fig{fig:phase-estimation}.\\
\begin{figure}[h]
\centerline{\includegraphics[trim={0 1.2cm 0 0}, clip, width=9cm]{assets/phase_est.pdf}}
\caption{5-qubit quantum phase estimation circuit. This circuit contains three components.
First, apply Hadamard gates to $n$ ancilla qubits. Then apply controlled unitaries to $n+m$ qubits and finally apply the inverse QFT to the $n$ ancilla qubits.}\label{fig:phase-estimation}
\end{figure}
The QFT circuit block defined in Listing~\ref{lst:qft} cannot be used directly in this case since the block size does not match the number of qubits. We introduce the concept of active and environment qubits to address this issue. Only the active qubits are visible to circuit blocks under operation. We manage the qubit resources with the \texttt{focus!} instruction and its inverse \texttt{relax!}.
For example, if we want to apply the QFT algorithm on qubits 3, 6, 1 and 2,
\texttt{focus!} activates these four qubits in the given order and deactivates the rest:
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:focusrelax, caption=focus! and relax!]
julia> reg = rand_state(10)
julia> focus!(reg, (3,6,1,2))
julia> reg |> qft(4)
julia> relax!(reg, (3,6,1,2); to_nactive=10)
\end{lstlisting}
\end{minipage}
Since it is a recurring pattern to first \texttt{focus!}, then \texttt{relax!} on the same qubits in many quantum algorithms, we introduce a \texttt{Subroutine} node to manage the scope automatically.
Hence, the phase estimation circuit in \Fig{fig:phase-estimation} can be defined with the following codes.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:pe, caption=quantum phase estimation]
PE(n, m, U) = chain(
n+m, # total number of qubits
repeat(H, 1:n), # apply H from 1:n
chain(control(
k,
n+1:n+m=>matblock(U^(2^(k-1))))
for k in 1:n
),
# apply inverse QFT on a local scope
subroutine(qft(n)', 1:n)
)
\end{lstlisting}
\end{minipage}
The \texttt{matblock} method in the codes constructs a quantum circuit from a given unitary matrix.
\subsection{Batched Quantum Registers}\label{sec:batched}
The batched register is a collection of quantum wave functions. It can be samples of classical data for quantum machine learning tasks~\cite{huggins2018towards} or an ensemble of pure quantum states for thermal state simulation~\cite{betaVQE}. For both applications, having the batch dimension not only provides convenience but may also significantly speed up the simulations.
We adopt the Single Program Multiple Data (SPMD)~\cite{darema1988single} design in \texttt{Yao}, similar to modern machine learning frameworks, so that it can make use of modern parallel hardware through multi-threading or GPU support
(and potentially multi-processor QPUs).
Applying a quantum circuit to a batched register means to apply the
same quantum circuit to a batch of wave functions in parallel,
which is extremely friendly to modern multi-processors.
The memory layout of the quantum register is a matrix of the size $2^a \times 2^r B$,
where $a$ is the number of active qubits, $r$ is the number of remaining (environment) qubits, and $B$ is the batch size. For gates acting on the active qubits, the remaining qubits and the batch dimension can be treated on an equal footing.
We put the batch dimension last because Julia arrays are column-major; as the last dimension, it favors broadcasting over the batch dimension.
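The benefit of this layout can be illustrated in plain Julia (a toy sketch with hypothetical dimensions, not \texttt{Yao}'s code): with the batch stored as trailing columns, applying a gate on the active qubits to the whole batch is a single matrix multiplication.

```julia
using LinearAlgebra

a, B = 3, 5                               # hypothetical: 3 active qubits, batch size 5
state = randn(ComplexF64, 2^a, B)         # one wave function per column
foreach(normalize!, eachcol(state))

g = ComplexF64[1 1; 1 -1] / sqrt(2)       # Hadamard
U = kron(Matrix{ComplexF64}(I, 2^(a - 1), 2^(a - 1)), g)  # H on the fastest qubit

out = U * state                           # one matmul covers the whole batch
loop = hcat((U * state[:, k] for k in 1:B)...)  # per-batch loop, for comparison
@assert out ≈ loop
@assert all(c -> isapprox(norm(c), 1; atol=1e-10), eachcol(out))
```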
One can construct a batched register in \texttt{Yao} and perform operations on it. These operations are automatically broadcasted over the batch dimension.
\begin{minipage}{.44\textwidth}
\begin{lstlisting}[label=lst:batched-reg, caption=a batch of quantum registers]
julia> reg = rand_state(4; nbatch=5);
julia> reg |> qft(4) |> measure!
5-element Array{BitBasis.BitStr{4,Int64},1}:
1011 ₍₂₎
1011 ₍₂₎
0000 ₍₂₎
1101 ₍₂₎
0111 ₍₂₎
\end{lstlisting}
\end{minipage}
Note that we have used the \texttt{measure!} function to collapse all batches.
The measurement results are represented in the \texttt{BitStr} type, which is a subtype of \texttt{Integer} with a static length. It pretty-prints the measurement results and provides a convenient readout.
\section{Performance}\label{sec:performance}
As introduced above, \texttt{Yao} features a generic and extensible implementation without sacrificing performance. Our performance optimization strategy relies heavily on Julia's multiple dispatch. As a bottom line, \texttt{Yao} implements a general multi-control multi-qubit arbitrary-location gate instruction as the fallback. We then fine-tune various specializations for better performance. Therefore, in many applications, the construction and operation of QBIR do not even invoke matrix allocation. In cases where the gate matrix is small (fewer than 4 qubits), \texttt{Yao} automatically employs the corresponding statically sized types~\cite{staticarrays} for better performance.
The sparse matrices \texttt{IMatrix}, \texttt{Diagonal}, \texttt{PermMatrix} and \texttt{SparseMatrixCSC} introduced in \App{app:sparse} also have their static version defined in \texttt{LuxurySparse.jl}~\cite{luxurysparse}.
We also utilize the special structure of frequently used gates and dispatch to specialized implementations. For example, the Pauli X gate can be executed by directly swapping elements of the register.
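The strategy can be sketched in plain Julia (toy kernels for illustration, not \texttt{Yao}'s actual implementation): a generic fallback builds the full gate matrix, while a specialized X kernel merely swaps amplitude pairs in the little-endian layout and allocates no matrix.

```julia
using LinearAlgebra

# generic fallback: build the full 2^n matrix (expensive, always correct)
function instruct_generic!(ψ::Vector, U::AbstractMatrix, loc::Int)
    n = Int(log2(length(ψ)))
    full = kron(Matrix{ComplexF64}(I, 2^(n - loc), 2^(n - loc)), U,
                Matrix{ComplexF64}(I, 2^(loc - 1), 2^(loc - 1)))
    ψ .= full * ψ
    return ψ
end

# specialized X kernel: swap amplitude pairs in place, no matrix allocation
function instruct_x!(ψ::Vector, loc::Int)
    stride = 1 << (loc - 1)
    for base in 0:(2stride):(length(ψ) - 1), i in (base + 1):(base + stride)
        ψ[i], ψ[i + stride] = ψ[i + stride], ψ[i]
    end
    return ψ
end

X = ComplexF64[0 1; 1 0]
ψ = normalize!(randn(ComplexF64, 2^4))
@assert instruct_x!(copy(ψ), 3) ≈ instruct_generic!(copy(ψ), X, 3)
```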
\begin{figure*}[t]
\centerline{\includegraphics[width=0.95\textwidth]{assets/gates.pdf}}
\caption{Benchmarks of (a) Pauli-X gate; (b) Hadamard gate; (c) CNOT gate; (d) Toffoli gate.}
\label{fig:benchmark}
\end{figure*}
\begin{figure*}[t]
\centerline{\includegraphics[width=1\textwidth]{assets/pcircuit-benchmark.pdf}}
\caption{
(a) A parameterized quantum circuit with single qubit rotation and CNOT gates;
(b) Benchmarks of the parameterized circuit;
(c) Benchmarks of the parametrized circuit, the batched version. The line ``yao'' represents batched registers, ``yao (cuda)'' represents batched registers on GPU, and ``yao $\times$ 1000'' is running a non-batched register repeatedly for $1000$ times.
}
\label{fig:pcircuit-benchmark}
\end{figure*}
We benchmark \texttt{Yao}'s performance against other quantum computing software. Note that the exact classical simulation of generic quantum circuits scales exponentially~\cite{haner20170,markov2008simulating, pednault2017breaking, zhang2019alibaba}. \texttt{Yao}'s design puts a strong emphasis on the performance of small to intermediate-sized quantum circuits since the high-performance simulation of such circuits is crucial for the design of near-term algorithms that run repeatedly or in parallel.
\subsection{Experimental Setup}
\begin{table}[h!]\centering
\begin{minipage}{\columnwidth}
\ra{1.3}
\scalebox{1.0}{
\begin{tabularx}{\textwidth}{X X c}\toprule
\textbf{Package} & \textbf{Language} & \textbf{Version}\\
\hline
\texttt{Cirq~\cite{cirq}} & Python & 0.8.0\\
\texttt{qiskit~\cite{Qiskit}} & C++/Python & 0.19.2\\
\texttt{qulacs~\cite{qulacs2019variational}} & C++/Python & 0.1.9\\
\texttt{PennyLane~\cite{bergholm2018pennylane}} & Python & 0.7.0\\
\texttt{QuEST~\cite{Jones2019}} & C/Python & 3.0.0\\
\texttt{ProjectQ~\cite{steiger2016projectq}} & C++/Python & 0.4.2\\
\texttt{Yao} & Julia & 0.6.2\\
\texttt{CuYao} & Julia & 0.2.2\\
\bottomrule
\end{tabularx}
}
\caption{Packages in the benchmark.}\label{tbl-mattype}
\end{minipage}
\end{table}
Although \texttt{QuEST} is a package originally written in C,
we benchmark it in Python via \texttt{pyquest-cffi}~\cite{pyquest} for convenience.
\texttt{Pennylane} is benchmarked with its default backend~\cite{pennylane2019repo}. Since the package is designed primarily to run on quantum hardware, its benchmarks contain a certain overhead not present in the other frameworks~\cite{pennylane2019issue}. \texttt{qiskit} is benchmarked with \texttt{qiskit-aer} 0.5.1~\cite{qiskit2019aer} and \texttt{qiskit-terra} 0.14.1~\cite{qiskit2019terra} using the statevector method of the qasm simulator.
\begin{table}[h!]\centering
\begin{minipage}{\columnwidth}
\ra{1.3}
\scalebox{1.1}{
\begin{tabularx}{0.9\textwidth}{X c}\toprule
\textbf{Software} & \textbf{Version}\\
\hline
Python & 3.8.3\\
Numpy & 1.18.1\\
MKL & 2019.3\\
Julia & 1.5.2\\
\bottomrule
\end{tabularx}
}
\caption{The environment setup of the machine for benchmark.}\label{tbl-env}
\end{minipage}
\end{table}
Our test machine contains an Intel(R) Xeon(R) Gold 6230 CPU with a Tesla V100 GPU accelerator.
SIMD is enabled with the \textbf{AVX2} instruction set. The benchmark time is measured via \texttt{pytest-benchmark}~\cite{pytest2019benchmark} and
\texttt{BenchmarkTools}~\cite{BenchmarkTools.jl-2016}, taking the minimum running time. We ignore the compilation time
in Julia since one can always eliminate it by compiling the program ahead of time. The benchmark scripts and complete reports are maintained online at the repository~\cite{quantum2019benchmark}. For the most detailed and up-to-date benchmark configuration, one should refer to this repository.
\subsection{Single Gate Performance}\label{sec:single-gate-performance}
We benchmark several frequently used quantum gates, including the Pauli-X gate, the Hadamard gate (H), the controlled-NOT gate (CNOT), and the Toffoli gate. These benchmarks measure the performance of executing a single gate instruction.
Figure~\ref{fig:benchmark} shows the running times, in nanoseconds, of various gates applied to the second qubit of registers of 4 to 25 qubits in each package. One can see that \texttt{Yao}, \texttt{ProjectQ}, and \texttt{qulacs} reach similar performance when the number of qubits $n>20$.
They are at least several times faster than other packages.
The similar performance of these three packages suggests that they have all reached peak performance for this type of full-amplitude classical simulation on CPU.
\subsection{Parametrized Quantum Circuit Performance}\label{sec:circuit-performance}
Next, we benchmark the parameterized circuit of depth $d=10$ shown in \Fig{fig:pcircuit-benchmark}(a). This type of hardware-efficient circuit was employed in the VQE experiment~\cite{kandala2017hardware}. These benchmarks further test the performance of circuit abstraction in practical applications.
The results in \Fig{fig:pcircuit-benchmark}(b) show that \texttt{Yao} reaches the best performance for more than 10 qubits on CPU.
\texttt{qulacs}'s well tuned C++ simulator is faster than \texttt{Yao} for fewer qubits.
On a CUDA device, \texttt{Yao} and \texttt{qulacs} show similar performance, while the \texttt{qiskit} CUDA backend shows better performance for more than 20 qubits. These benchmarks
also show that CUDA parallelization starts to be beneficial for qubit numbers larger than $16$.
Overall, \texttt{Yao} is one of the fastest quantum circuit simulators for this type of application.
Lastly, in \Fig{fig:pcircuit-benchmark}(c) we benchmark the performance of the batched quantum register introduced in Sec.~\ref{sec:batched} with a batch size of $1000$.
We only measure \texttt{Yao}'s performance due to the lack of native support of SPMD
in other quantum simulation frameworks. \texttt{Yao}'s CUDA backend (labeled as \texttt{yao (cuda)}) offers a large speedup (>10x) compared to the CPU backend (labeled as \texttt{yao}). For reference, we also plot the timing of a bare loop over the batch dimension on a CPU (labeled as \texttt{yao $\times$ 1000}). One can see that batching offers substantial speedup for small circuits.
The overhead of simulating small to intermediate-sized circuits is particularly relevant for designing variational quantum algorithms, where the same circuit may be executed millions of times during training. \texttt{Yao} shows the least overhead in these benchmarks.
\texttt{qulacs} also did an excellent job of suppressing these overheads.
\subsection{Matrix Representation and Automatic Differentiation Performance}\label{sec:mat-performance}
As discussed in Sec.~\ref{sec:matrep} and Sec.~\ref{sec:reverse-mode}, \texttt{Yao} features highly optimized matrix representation and reverse mode automatic differentiation for the QBIR. We did not attempt a systematic benchmark due to the lack of similar features in other quantum software frameworks.
Here, we simply show the timings of constructing the sparse matrix representation of the 20-site Heisenberg Hamiltonian and differentiating its energy expectation through a variational quantum circuit of depth 20 (200 parameters) on a laptop. The forward mode AD discussed in Sec.~\ref{sec:forward-mode} is slower by roughly two orders of magnitude in such simulations.
\begin{minipage}{0.44\textwidth}
\begin{lstlisting}[label=lst:benchmark-matrix-ad, caption=benchmark mat and AD performance]
julia> using BenchmarkTools, Yao,
YaoExtensions
julia> @btime mat($(heisenberg(20)));
6.330 s (10806 allocations: 10.34 GiB)
julia> @btime expect'($(heisenberg(20)),
$(zero_state(20))=>
$(variational_circuit(20)));
5.054 s (58273 allocations: 4.97 GiB)
\end{lstlisting}
\end{minipage}
\section{Extensibility}\label{sec:extending}
In the previous section we have demonstrated the excellent efficiency of \texttt{Yao} in comparison to other frameworks. We nevertheless emphasize that the most important feature of \texttt{Yao} is its flexibility and extensibility.
\subsection{Extending QBIR nodes}\label{sec:extend-ir-nodes}
It is easy to extend \texttt{Yao} with new gates and quantum block nodes. One can define constant gates by giving their matrix representation. For example, the \texttt{FSim} gate that appeared in Google's quantum supremacy experiment~\cite{Google2019} is composed of an \texttt{ISWAP} gate and a \texttt{cphase} gate with a fixed angle.
\begin{minipage}{0.44\textwidth}
\begin{lstlisting}[label=lst:fsim, caption=FSim gate]
julia> using Yao, LuxurySparse
julia> @const_gate ISWAP = PermMatrix(
[1,3,2,4], [1,1.0im,1.0im,1])
# FSim is already defined in YaoExtensions
julia> @const_gate MyFSim = mat((ISWAP*
control(2, 2, 1=>shift(-π/6)))')
julia> put(10, (4,2)=>MyFSim)
nqubits: 10
put on (4, 2)
└─ MyFSim gate
\end{lstlisting}
\end{minipage}
The macro \texttt{@const\_gate} defines a primitive gate that subtypes the \texttt{ConstantGate} abstract type. It generates the global gate instances \texttt{ISWAP} and \texttt{MyFSim} as well as the new gate types \texttt{MyFSimGate} and \texttt{ISWAPGate} for dispatch.
This macro also binds operator properties such as \texttt{ishermitian}, \texttt{isreflexive} and \texttt{isunitary} by inspecting the given matrix representation.
In \App{app-qftblock}, we provide a more sophisticated example of extending QBIR.
\subsection{Extending the Quantum Register}
A new register type can be defined by dispatching the ``instruction set'' interfaces introduced in Sec.~\ref{sec:instruct}. For example, in the CUDA backend \texttt{CuYao}~\cite{cuyao} we dispatch \texttt{instruct!} and \texttt{measure!} to the \texttt{ArrayReg\{B,T,<:CuArray\}} type. Here, besides the batch number \texttt{B} and data type \texttt{T}, the third type parameter \texttt{<:CuArray} specifies the storage type. The dispatch directs the instructions to CUDA kernels written in \texttt{CUDAnative}~\cite{besard2018effective}, which significantly boosts performance by parallelizing over both the batch and Hilbert-space dimensions. A comparison between batched and single
registers for the parameterized circuit is shown in Sec.~\ref{sec:performance}. We provide more detailed examples in the developer's guide of \App{app:reading}.
\section{Applications}\label{sec:application}
The \texttt{Yao} framework has been employed in practical research projects during its development and has evolved with the requirements of research.
\texttt{Yao} simplifies the implementation of the quantum circuit Born machine~\cite{liu2018differentiable}, originally written in \texttt{ProjectQ}~\cite{steiger2016projectq}, from
about 200 lines of code to fewer than 50, with about a 1000x performance improvement
as shown in our benchmark \Fig{fig:pcircuit-benchmark}. This simplification enabled further exploration of the algorithm in
Ref.~\cite{zeng2019learning} with a much simpler codebase.
The tensor network inspired quantum circuits described in~\cite{huggins2018towards,liu2019variational} allow the study of large systems with a reduced number of qubits. For example, one can solve the ground state of a $6\times 6$ frustrated Heisenberg lattice model with only 12 qubits. These circuits can also compress a quantum state to hardware with fewer qubits~\cite{wwspaper}. These applications need to measure and reuse qubits in the circuits. Thus, one cannot simply take \texttt{nshots} measurements on the wavefunction. Instead, one constructs a batched quantum register with \texttt{nbatch} states and samples bitstrings in parallel. \texttt{Yao}'s SPMD-friendly design and CUDA backend significantly simplify the implementation and boost performance.
Automatic differentiation can also be used for gate learning. It is well-known that a general two-qubit gate can be compiled to a fixed gate instruction set that includes single-qubit gates and CNOT gates~\cite{Shende2004}.
Given a circuit structure, one can approximate an arbitrary U(4) unitary (up to a global phase) instantly by gradient optimization of the operator fidelity. The code can be found in \App{app:gatelearning}.
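As a toy version of such gate learning (a hedged sketch in plain Julia; \App{app:gatelearning} contains the actual \texttt{Yao} code), one can fit a single-qubit $R_Z(\theta)$ to a target rotation by ascending the operator fidelity $|\mathrm{tr}(U^\dagger V)|/d$ with a finite-difference gradient:

```julia
using LinearAlgebra

# toy gate learning: fit R_Z(θ) to a target by maximizing operator fidelity
rz(θ) = ComplexF64[exp(-im * θ / 2) 0; 0 exp(im * θ / 2)]
target = rz(0.7)
fid(θ) = abs(tr(target' * rz(θ))) / 2      # |tr(U†V)|/d for d = 2

function learn(θ; η=0.5, ϵ=1e-6, steps=200)
    for _ in 1:steps
        g = (fid(θ + ϵ) - fid(θ - ϵ)) / (2ϵ)  # finite-difference gradient
        θ += η * g                            # gradient ascent on the fidelity
    end
    return θ
end

θ_opt = learn(2.0)
@assert fid(θ_opt) > 1 - 1e-6
```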
A recent project extending VQE~\cite{Peruzzo2014} to thermal quantum states~\cite{betaVQE} integrates \texttt{Yao} seamlessly with the differentiable programming package \texttt{Zygote}~\cite{zygote2019mike}. \texttt{Yao}'s efficient AD engine and batched quantum register support allow joint training of quantum circuit and classical neural network effortlessly.
\section{Roadmap} \label{sec:roadmap}
\subsection{Hardware Control}
Hardware control is one of \texttt{Yao}'s next major missions. In principle, \texttt{Yao} already has a lightweight tool \texttt{YaoScript} (\App{app:yaoscript}) to serialize QBIR to the data stream for dump/load and internet communication, which can be used to control devices on the cloud.
For even better integration with existing protocols, we plan to support parsing and code generation for other low-level languages
such as OpenQASM~\cite{cross2017open}, eQASM~\cite{fu2019eqasm} and Quil~\cite{smith2016practical}. As an ongoing project towards this end, together with the author of the general-purpose parser generator \texttt{RBNF}~\cite{rbnf}, we
developed a QASM parser and codegen in \texttt{YaoQASM}~\cite{yao2019qasm}. It allows one to parse the OpenQASM~\cite{cross2017open} to QBIR and dump QBIR to OpenQASM, which can be used for controlling devices in the IBM Q Experience~\cite{steiger2016projectq, Garcia2019}.
\subsection{Circuit Compilation and Optimization}
Circuit compilation and optimization are key topics for bringing quantum computing to practical experiments. A new language interface along with a compiler for \texttt{Yao}~\cite{yao2020ir} is under development. It will be more compilation-friendly, with a design based on Julia's native abstract syntax tree. By integrating with the Julia compiler, it will allow \texttt{Yao} to model quantum channels seamlessly.
On the other hand, quantum circuit simplification and optimization are crucial for reducing the cost of both simulations and experiments. The ongoing circuit simplification project~\cite{yao2020zx} will also support better pattern matching and a term-rewriting system with support for the ZX-calculus~\cite{kissinger2020Pyzx}. This will allow smarter and more systematic circuit simplifications such as the ones in Refs.~\cite{iten2019efficient,maslov2008quantum,kissinger2019tcount}.
\subsection{Noisy Simulation}
Noise is important for the simulation of near-term quantum devices.
Currently, \texttt{Yao} does not support noisy simulations directly. However,
a batched register in \texttt{Yao} can be conveniently converted to a reduced density matrix.
By combining \texttt{Yao} with \texttt{QuantumInformation.jl}~\cite{Gawron2018}, one can carry out noisy simulation with density matrices. A blog post on this topic is referenced in \App{app:reading}.
\subsection{Tensor Networks Backend}
Quantum circuits are special cases of tensor networks with the unitary constraints on the gates.
Thus, tensor network algorithms developed in the quantum many-body community are potentially useful
for simulating quantum circuits, especially the shallow ones with a large number of qubits where the state vector does not fit into memory~\cite{boixo2017simulation,chen2018classical,guo2019general}. In this sense, one can perform more efficient simulations at a larger scale by exploring low-rank structures in the tensor networks with a trade-off of almost negligible errors~\cite{pan2019contracting}.
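The circuit-tensor correspondence is easy to see in plain Julia: reshaping a two-qubit state vector into a $2\times 2$ tensor turns a single-qubit gate into a contraction over one index (a minimal sketch, unrelated to \texttt{YaoTensorNetwork}'s actual format):

```julia
using LinearAlgebra

ψ = normalize!(randn(ComplexF64, 4))   # a two-qubit state
T = reshape(ψ, 2, 2)                   # rank-2 tensor; qubit 1 is the first index
g = ComplexF64[1 1; 1 -1] / sqrt(2)    # Hadamard

T2 = T * transpose(g)                  # gate on qubit 2 as an index contraction
full = kron(g, Matrix{ComplexF64}(I, 2, 2)) * ψ   # the dense-operator equivalent
@assert vec(T2) ≈ full
```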
Besides serving as an efficient simulation backend, tensor networks are also helpful for quantum machine learning, given the fact that they have already found many applications in various machine learning tasks~\cite{Stoudenmire2016, han2018unsupervised, PhysRevB.99.155131, glasser2019expressive, bradley2019modeling}. As an example, one can envision training a unitary tensor network with classical algorithms and then loading it to an actual quantum device for fast sampling. In this regard, quantum devices become specialized inference hardware.
The ongoing project \texttt{YaoTensorNetwork}~\cite{yaotn} provides utilities to dump a quantum circuit into a tensor network format. There are already working examples of generating tensor networks for QFT, variational circuits~\cite{Kandala2017}, and for demonstrating the quantum supremacy on random circuits~\cite{boixo2018characterizing, Google2019}. The dumped tensor networks can be further used for quantum circuit simplification~\cite{Backens2014} and quantum circuit simulation based on exact or approximated tensor network contractions.
\section{Summary}\label{sec:sum}
We have introduced \texttt{Yao}, an open source Julia package for quantum algorithm design. \texttt{Yao} features
\begin{itemize}
\item differentiable programming of quantum circuits with a built-in AD engine leveraging reversible computing,
\item batched quantum registers with CUDA parallelization,
\item symbolic manipulation of quantum circuits,
\item strong extensibility,
\item top performance for relevant applications.
\end{itemize}
The quantum block abstraction of the quantum circuit is central to these features. Generic programming, which in Julia is based on the type system and multiple dispatch, is key to the extensibility and efficiency of \texttt{Yao}. Along with the roadmap Sec.~\ref{sec:roadmap}, \texttt{Yao} is evolving towards an even more versatile framework for quantum computing research and development.
\section{Acknowledgement}
Thanks to Jin Zhu for the Chinese calligraphy of \texttt{Yao}'s logo. The QASM compilation was greatly aided by
Taine Zhao's effort on the Julia parser generator \texttt{RBNF.jl}, we appreciate
his help and insightful discussions about compilation. This work owes much to enlightening conversations
and help from the open-source community, including: Tim Besard and Valentin Churavy for their work on the CUDA
transpiler \texttt{CUDAnative} and suggestions on our CUDA backend implementation, Mike Innes and Harrison Grodin for their
helpful discussion about automatic differentiation and symbolic manipulation, Juan Gomez, Christopher J. Wood, Damian Steiger, Craig Gidney, corryvrequan, Johannes Jakob Meyer and Nathan Killoran for reviewing the performance benchmarks~\cite{quantum2019benchmark}.
We thank Divyanshu Gupta for integrating \texttt{Yao} with \texttt{DifferentialEquations.jl}~\cite{diffeq.jl}, Wei-Shi Wang, Yi-Hong Zhang, Tong Liu, Yu-Kun Zhang, and Si-Rui Lu for being beta users and offering valuable suggestions. We thank Roger Melko for helpful suggestions of this manuscript, Hao Xie and Arthur Pesah for proofreading this manuscript.
The first author would like to thank Roger Melko, Miles Stoudenmire, and Xi Xiong for their
kind help with his Ph.D. visa issue during the development of this work.
\texttt{Yao}'s development is supported by the National Natural Science Foundation of China under
Grant No.~11774398, No.~11747601 and No.~11975294, the Ministry of Science and Technology of China under Grant
No.~2016YFA0300603 and No.~2016YFA0302400, the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No.~XDB28000000, and Huawei Technologies under Grant No.~YBN2018095185.
\input{bib.tex}
\onecolumn\newpage
\title{Strong completeness of a class of $L^2$-type Riesz spaces
\footnote{{\bf Keywords:} Strong completeness; Riesz spaces; conditional expectation operators; Riesz representation; strong dual.\
{\em Mathematics subject classification (2010):} 46B40; 60F15; 60F25.}}
\author{Wen-Chi Kuo\footnote{Supported in part by National Research Foundation of South Africa grant number CSUR160503163733.},
School of Mathematics\\
University of the Witwatersrand\\
Private Bag 3, P O WITS 2050, South Africa\\ \\
Anke Kalauch\\
TU-Dresden\\
\\ Bruce A. Watson \footnote{Supported in part by the Centre for Applicable Analysis and
Number Theory.} \\
School of Mathematics\\
University of the Witwatersrand\\
Private Bag 3, P O WITS 2050, South Africa }
\maketitle
\abstract{\noindent
Strong convergence and convergence in probability were generalized to the setting of a Riesz space with conditional expectation operator, $T$, in [{{\sc Y. Azouzi, W.-C. Kuo, K. Ramdane, B. A. Watson}, {Convergence in Riesz spaces with conditional expectation operators}, {\em Positivity}, {\bf 19} {(2015), 647-657}}] as $T$-strong convergence and convergence in $T$-conditional probability, respectively. Generalized $L^{p}$ spaces for the cases of $p=1,2,\infty$, were discussed in the setting of Riesz spaces as ${L}^{p}(T)$ spaces in [{{\sc C. C. A. Labuschagne, B. A. Watson}, {Discrete stochastic integration in Riesz spaces}, {\em Positivity}, {\bf 14} {(2010), 859-875}}].
An $R(T)$ valued norm, for the cases of $p=1,\infty,$ was introduced on these spaces in
[{{\sc W. Kuo, M. Rogans, B.A. Watson}, {Mixing processes in Riesz spaces}, {\em Journal of Mathematical Analysis and Application}, {\bf 456} {(2017), 992-1004}}]
where it was also shown that $R(T)$ is a universally complete $f$-algebra and that these spaces are $R(T)$-modules.
In [{{\sc Y. Azouzi, M. Trabelsi}, {$L^p$-spaces with respect to conditional expectation on Riesz spaces}, {\em Journal of Mathematical Analysis and Application}, {\bf 447} {(2017), 798-816}}] functional calculus was used to consider ${L}^{p}(T)$ for $p\in (1,\infty)$.
The strong sequential completeness of the space ${L}^{1}(T)$, the natural domain of the conditional expectation operator $T$,
and the strong completeness of ${L}^{\infty}(T)$ were established in
[{{\sc W.-C. Kuo, D. Rodda, B. A. Watson}, {Sequential strong completeness of the natural domain of Riesz space conditional expectation operators}, {\em Proc. AMS}, {\bf 147} {(2019), 1597--1603}}].
In the current work the $T$-strong completeness of ${L}^{2}(T)$ is established along with a Riesz-Fischer type theorem where the duality is with
respect to the $T$-strong dual. It is also shown that the conditional expectation operator $T$ is a weak order unit for the $T$-strong dual.}
\parindent=0in
\parskip=.2in
\section{Introduction}
Strong convergence was generalized to Dedekind complete Riesz spaces with a conditional expectation operator in \cite{AKRW} as $T$-strong convergence.
Generalized $L^{p}$ spaces for $p=1,2,\infty$ were discussed in the setting of Riesz spaces as ${L}^{p}(T)$ spaces in \cite{LW}. An $R(T)$ valued norm, for the cases of $p=1,\infty,$ was introduced on the ${L}^{p}(T)$ spaces in \cite{KRW} where it was also shown that $R(T)$ is a universally complete $f$-algebra and that these spaces are $R(T)$-modules.
More recently, in \cite{AT}, the ${L}^{p}(T)$ spaces, for $p\in (1,\infty)$, were considered. We also refer the reader to \cite{JHVDW} for an interesting study of sequential order convergence in
vector lattices using convergence structures and filters.
In \cite{KRodW} the $T$-strong sequential completeness of the natural domain, ${L}^{1}(T)$, of the Riesz space conditional expectation operator $T$
was established, i.e. that each $T$-strong Cauchy sequence in ${L}^{1}(T)$ converges $T$-strongly in ${L}^{1}(T)$.
\begin{defs}
We say that a net $(f_\alpha)$ in ${\mathcal{L}}^p(T)$, where $p\in [0,\infty]$,
is a strong Cauchy net if
$$v_\alpha:=\sup_{\beta,\gamma\ge \alpha}\|f_\beta-f_\gamma\|_{T,p}$$
is eventually defined in $R(T)$ and has order limit zero.
\end{defs}
The term $T$-strong here means with respect to the $R(T)$ valued norm induced by the conditional expectation operator $T$ in the given space. It was also shown that ${L}^{\infty}(T)$ is $T$-strongly complete i.e. that every $T$-strong Cauchy net in ${L}^{\infty}(T)$ is $T$-strongly convergent.
In the current work the $T$-strong completeness of ${L}^{2}(T)$ is established, i.e. that each net in ${L}^{2}(T)$ which is Cauchy with respect to the $R(T)$-valued norm, $f\mapsto (T|f|^2)^{1/2}:=\|f\|_{T,2}$, is convergent in ${L}^{2}(T)$ with respect to this norm.
This is proved via a Riesz-Fischer type theorem where the duality is with
respect to the $T$-strong dual of ${L}^{2}(T)$. It is also shown that the conditional expectation operator $T$ is a weak order unit for the $T$-strong dual of ${L}^{2}(T)$.
The issue of completeness of ${L}^{2}(T)$ is important in the
theory of stochastic integrals in Riesz spaces, since these integrals are defined
to be limits of Cauchy nets in ${L}^{2}(T)$,
see for example \cite{GL}.
The results also impact the study of martingales in Riesz spaces, see \cite{Stoica, Troitsky}.
\section{Preliminaries}
Throughout this work $E$ will denote a Dedekind complete Riesz space with weak order unit and $T$ will denote a strictly positive conditional expectation operator on $E$.
By $T$ being a conditional expectation
operator on $E$ we mean that $T$ is a linear positive order continuous projection on $E$ which maps weak order units to weak order units and has range $R(T)$ closed with respect to order limits in $E$.
This gives that there is at least one weak order unit, say $e$, with $Te=e$, and that
$R(T)$ is Dedekind complete when considered as a subspace of $E$.
By $T$ being strictly positive we mean that if
$f\in E_+$, the positive cone of $E$, and $f\ne 0$ then $Tf\in E_+$ and $Tf\ne 0$.
A strictly positive conditional expectation operator, $T$, on a Dedekind complete Riesz space with weak order unit, can be extended to a strictly positive conditional expectation operator, also denoted $T$, on its natural domain, denoted $L^1(T):=\mbox{dom}(T)-\mbox{dom}(T)$.
We say that $E$ is $T$-universally complete if $E=L^1(T)$.
From the definition of $\mbox{dom}(T)$, see \cite{KLW-exp}, $E$ is $T$-universally complete if and only
if for each upwards directed net $(f_{\alpha})_{\alpha \in \Lambda}$ in $E^+$ such that $(Tf_{\alpha})_{\alpha \in \Lambda}$ is order bounded in $E_{u}$, we have that $(f_{\alpha})_{\alpha \in \Lambda}$ is order convergent in $E$.
Here $E_u$ denotes the universal completion of $E$, see \cite[page 323]{L-Z}.
$E_u$ has an $f$-algebra structure which can be chosen so that $e$ is the multiplicative identity.
For $T$ acting on $E=L^1(T)$, $R(T)$ is a universally complete $f$-algebra and $L^1(T)$ is an $R(T)$-module.
From \cite[Theorem 5.3]{KLW-exp}, $T$ is an averaging operator, i.e. if $f\in R(T)$
and $g\in E$ then $T(fg)=fT(g)$.
This prompts the definition of an $R(T)$ (vector valued) norm $\|\cdot\|_{T,1}:=T|\cdot|$ on $L^1(T)$.
The homogeneity is with respect to multiplication by elements of $R(T)^+$.
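As a routine check (an illustration not spelled out in the text), the averaging property yields this $R(T)^+$-homogeneity directly: for $\alpha\in R(T)$ and $f\in L^1(T)$,
$$\|\alpha f\|_{T,1}=T|\alpha f|=T(|\alpha|\,|f|)=|\alpha|\,T|f|=|\alpha|\,\|f\|_{T,1},$$
where $|\alpha f|=|\alpha|\,|f|$ holds in the $f$-algebra $E_u$, and the third equality is the averaging property applied to $|\alpha|\in R(T)$.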
The definition of the Riesz space
$L^{2}(T):=\{f\in L^1(T)\,|\, f^2\in L^1(T)\}$ was given in
\cite{LW}.
By the averaging property, $L^{2}(T)$ is an $R(T)$-module and the map
$$f\mapsto\|f\|_{T,2}:=(T(f^2))^{1/2}, \quad f\in L^{2}(T)$$
is an $R(T)$-valued norm on $L^{2}(T)$. Aspects of this development for $L^{p}(T)$ with general $1<p<\infty$ can be found in \cite{AT, grobler-1}. Here the multiplication is as defined in the $f$-algebra $E_u$.
Proofs of various H\"older type inequalities in Riesz spaces with conditional expectation operators can be found in \cite{KRW} and \cite{AT}. In particular,
\begin{equation}\label{holder}
T|fg|\le \|f\|_{T,2}\|g\|_{T,2}, \, \mbox{ for all } f,g\in {{L}}^2(T).
\end{equation}
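For orientation (a classical instance, not part of the Riesz space development): taking $E=L^1(\Omega,\Sigma,\mu)$ with $\mu$ a probability measure and $T=\mathbb{E}[\,\cdot\,|\,\Sigma_0]$ for a sub-$\sigma$-algebra $\Sigma_0\subset\Sigma$, the inequality (\ref{holder}) reduces to the conditional Cauchy--Schwarz inequality
$$\mathbb{E}\bigl[\,|fg|\,\big|\,\Sigma_0\bigr]\le \mathbb{E}\bigl[f^2\,\big|\,\Sigma_0\bigr]^{1/2}\,\mathbb{E}\bigl[g^2\,\big|\,\Sigma_0\bigr]^{1/2},\quad \mu\mbox{-a.e.}$$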
If $F$ is a universally complete Riesz space with weak order unit, say $e$, then $F$ is an $f$-algebra and $e$ can be taken as the algebraic unit, see \cite[Theorem 3.6]{V-E}.
We recall from \cite[Appendix]{KKW-1} the following material on partial inverses in Riesz spaces.
\begin{defs}
Let $F$ be a universally complete Riesz space with weak order unit, say $e$, and take $e$ as the algebraic unit of the associated $f$-algebra structure.
We say that $g\in F$ has a partial inverse if there exists $h\in F$ such that $gh=hg=P_{|g|}e$ where $P_{|g|}$ denotes the band projection onto the band generated by $|g|$.
We refer to $h$ as the canonical partial inverse of $g$ if in addition to being a partial inverse to $g$, we have that $(I-P_{|g|})h=0$, i.e. $h\in {\cal B}_{|g|}$, where ${\cal B}_{|g|}$ is the band generated by $|g|$.
\end{defs}
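A concrete instance (an illustration under the classical identification, not taken from the cited sources): in the universally complete space $F=L^0(\Omega,\Sigma,\mu)$ with $e={\bf 1}$, the canonical partial inverse of $g$ is
$$h(\omega)=\left\{\begin{array}{ll} 1/g(\omega), & g(\omega)\ne 0,\\ 0, & g(\omega)=0,\end{array}\right.
\qquad\mbox{so that}\qquad gh=\chi_{\{g\ne 0\}}=P_{|g|}e.$$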
The following result gives existence, uniqueness and positivity results concerning partial inverses and canonical partial inverses.
We denote by ${\cal B}_f$ the band generated by $f$ and by $P_f$ the band projection onto ${\cal B}_f$.
\begin{thm}
Let $F$ be a universally complete Riesz space with weak order unit, say $e$, which we also take as the algebraic unit of the associated $f$-algebra structure. Each $g\in F$ has a partial inverse $h\in F$.
The canonical partial inverse of $g$ is unique and in this case $g$ is also the canonical partial inverse of $h$. If $g\in F^+$ then so is its canonical partial inverse.
\end{thm}
\section{Dual spaces}
Let $E=L^{2}(T)$. We say that a map
${\frak f}:E\to R(T)$
is a $T$-linear functional on $E$ if it is additive, $R(T)$-homogeneous and order bounded.
We denote the space of $T$-linear functionals on $E$ by $E^*$ and call it the $T$-dual of $E$.
We note that $E^*\subset {\cal L}_b(E, R(T))$, since $R(T)$-homogeneity implies real linearity.
Further as $R(T)$ is Dedekind complete, so is ${\cal L}_b(E, R(T))$, see \cite[page 12]{AB}, and
${\cal L}_b(E, R(T))={\cal L}_r(E, R(T))$, where ${\cal L}_r(E, R(T))$ denotes the regular operators, see \cite[page 10]{AB}.
Following the notation of \cite{AB}, we denote the order continuous elements of ${\cal L}_b(E, R(T))$ by ${\cal L}_n(E, R(T))$ and by \cite[page 44]{AB}, ${\cal L}_n(E, R(T))$ is a band in ${\cal L}_b(E, R(T))$, and since ${\cal L}_b(E, R(T))$ is Dedekind complete, so is ${\cal L}_n(E, R(T))$.
If ${\frak f}\in E^*$ and
there is $k\in R(T)^+$ such that
\begin{equation}
|{\frak f}(g)|\le k\|g\|_{T,2},\quad\mbox{for all } g\in E,\label{domination}
\end{equation}
we say that ${\frak f}$ is $T$-strongly bounded.
We denote the space of $T$-strongly bounded $T$-linear functionals on $E$ by
\begin{eqnarray*}
\hat{E}:=\{{\frak f}\in E^*\,|\, {\frak f} \mbox{ $T$-strongly bounded }\}
\end{eqnarray*}
and refer to it as the $T$-strong dual of $E$.
Further
$$\|{\frak f}\|:=\inf\{k\in R(T)^+\,|\,|{\frak f}(g)|\le k\|g\|_{T,2}\quad\mbox{for all } g\in E\}$$
defines an $R(T)$-valued norm on $\hat{E}$ with
\begin{eqnarray}\label{norm-bound}
|{\frak f}(g)|\le \|{\frak f}\|\,\|g\|_{T,2}
\end{eqnarray}
for all $g\in L^2(T)$.
We note here that, as the map $g\mapsto \|g\|_{T,2}$ is order continuous, the domination in (\ref{domination}) gives that each ${\frak f}\in \hat{E}$ is order continuous. Thus $\hat{E}\subset E^*\cap {\cal L}_n(E, R(T))$.
From \cite{KKW-1} we have the following Riesz-Frechet representation theorem.
\begin{thm}[Riesz-Frechet representation theorem in Riesz space]\label{thm-final}
The map $\Psi$ defined by $\Psi(f)(g):=T_f(g)=T(fg)$ for $f,g\in {{L}}^2(T)$ is a bijection between $E={{L}}^2(T)$ and its $R(T)$-homogeneous strong dual
$\hat{E}$. This map is additive, $R(T)$-homogeneous
and $R(T)$-valued norm preserving in the sense that $\|T_f\|=\|f\|_{T,2}$ for all $f\in {{L}}^2(T)$.
\end{thm}
A direct application of Theorem \ref{thm-final} gives that $T\in \hat{E}$, since $\Psi(e)=T$ and
$L^2(T)$ is an $R(T)$-module in which $ef=f$ for all $f\in L^2(T)$.
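In particular (a worked example not given in the text), the $R(T)$-valued norm of $T$ itself can be computed from the norm preservation in Theorem \ref{thm-final}: since $e^2=e$ in the $f$-algebra structure with algebraic unit $e$, and $Te=e$,
$$\|T\|=\|\Psi(e)\|=\|e\|_{T,2}=(T(e^2))^{1/2}=(Te)^{1/2}=e^{1/2}=e.$$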
\begin{lem}[Riesz-Kantorovich]\label{lem-RK}
The space $E^*$ is a Riesz space with respect to the partial ordering ${\frak f}\le {\frak g}$ if and only if ${\frak f}(x)\le {\frak g}(x)$ for all $x\in E_+$. This partial ordering is equivalent to defining the lattice operations by
$$({\frak f}\vee {\frak g})(x):=\sup\{{\frak f}(y)+{\frak g}(z)\,|\,y,z\in E_+, y+z=x\}$$
and
$$({\frak f}\wedge {\frak g})(x):=\inf\{{\frak f}(y)+{\frak g}(z)\,|\,y,z\in E_+, y+z=x\}$$
for all $x\in E_+$ and extending these operators to $E$ by the Kantorovich Theorem, \cite[page 7]{AB}.
Here ${\frak f}_\alpha\downarrow {\frak 0}$ in $E^*$ if and only if ${\frak f}_\alpha(x)\downarrow 0$ in $E$ for each $x\in E_+$.
$E^*$ is Dedekind complete and an $R(T)$-module.
\end{lem}
\begin{proof}
If ${\frak f},{\frak g}\in E^*$ then
${\frak f},{\frak g}\in {\cal L}_b(E, R(T))$ and hence (as ${\cal L}_b(E, R(T))$ is a Riesz space) ${\frak f}+{\frak g}\in {\cal L}_b(E, R(T))$.
Further, for $\alpha\in R(T)$ and $x\in E$, since ${\frak f}$ and ${\frak g}$ are $R(T)$-homogeneous, we have that
$$({\frak f}+{\frak g})(\alpha x):={\frak f}(\alpha x)+{\frak g}(\alpha x)=\alpha({\frak f}(x)+{\frak g}(x))=:\alpha ({\frak f}+{\frak g})(x).$$
Thus ${\frak f}+{\frak g}\in E^*$.
To show that $E^*$ is a sublattice of ${\cal L}_b(E, R(T))$ it remains to be shown that ${\frak f}\vee {\frak g}\in E^*$ for each ${\frak f},{\frak g}\in E^*$, where ${\frak f}\vee {\frak g}:={\frak f}\vee_{{\cal L}_b(E, R(T))}{\frak g}$, as given by the formula in the lemma statement; this also ensures that the lattice operations of $E^*$ and ${\cal L}_b(E, R(T))$ coincide.
Since ${\frak f}\vee{\frak g}\in {{\cal L}_b(E, R(T))}$, it remains only to show that it is $R(T)$-homogeneous.
For $\alpha\in R(T)_+$ and $x\in E_+$ we have
$$({\frak f}\vee{\frak g})(\alpha x):=\sup\{{\frak f}(y)+{\frak g}(z)\,|\,y,z\in E_+, y+z=\alpha x\}.$$
If $y,z\in E_+$ and $y+z=\alpha x$, then $y,z \in B_\alpha$ where $B_\alpha$ is the band in $E$ generated by $\alpha$.
Now as $R(T)$ is universally complete, $\alpha$ has a partial inverse $\beta\in R(T)_+\cap B_\alpha$ such that $$\alpha\cdot \beta =\beta\cdot \alpha= P_\alpha e$$
where $P_\alpha$ denotes the band projection onto the band $B_\alpha$, generated by $\alpha$.
Thus $\beta y+\beta z=P_\alpha x$ and $\alpha\beta y=y$ and similarly for $z$.
Hence
\begin{eqnarray*}({\frak f}\vee{\frak g})(\alpha x)&=&\sup\{\alpha({\frak f}(\beta y)+{\frak g}(\beta z))\,|\,y,z\in E_+,\beta y+\beta z=P_\alpha x\}\\
&\le&
\sup\{\alpha({\frak f}(y')+{\frak g}(z'))\,|\,y',z'\in E_+,y'+z'=P_\alpha x\}\\
&\le&
\sup\{\alpha({\frak f}(y')+{\frak g}(z'))\,|\,y',z'\in E_+,y'+z'=x\}\\
&=&\alpha ({\frak f}\vee{\frak g})(x)
\end{eqnarray*}
and
\begin{eqnarray*}
\alpha\cdot ({\frak f}\vee{\frak g})(x)&=&\sup\{\alpha\cdot ({\frak f}(y)+{\frak g}(z))\,|\,y,z\in E_+, y+z=x\}\\
&=&\sup\{({\frak f}(\alpha y)+{\frak g}(\alpha z))\,|\,y,z\in E_+, y+z=x\}\\
&\le &\sup\{({\frak f}(\alpha y)+{\frak g}(\alpha z))\,|\,y,z\in E_+, \alpha y+\alpha z=\alpha x\}\\
&\le &\sup\{({\frak f}(y')+{\frak g}(z'))\,|\,y',z'\in E_+, y'+z'=\alpha x\}\\
&=&({\frak f}\vee{\frak g})(\alpha x)
\end{eqnarray*}
Thus $({\frak f}\vee{\frak g})(\alpha x)=\alpha ({\frak f}\vee{\frak g})(x)$ for $\alpha \in R(T)_+$ and $x\in E_+$.
After proving the analogous result for ${\frak f}\wedge{\frak g}$ and as we know $({\frak f}\vee{\frak g})(-x)=-({\frak f}\wedge{\frak g})(x)$ the homogeneity follows for all $x\in E$ and $\alpha \in R(T)$.
Finally we show that $E^*$ is Dedekind complete.
As ${\cal L}_b(E, R(T))$ is Dedekind complete, it suffices to show that $E^*$ is order closed in ${\cal L}_b(E, R(T))$.
If $({\frak f}_\gamma)$ is a net in $E^*$ with order limit ${\frak f}$ in ${\cal L}_b(E, R(T))$ then for each $\alpha \in R(T)$ and $x\in E$ we have that ${\frak f}_\gamma(\alpha x)=\alpha{\frak f}_\gamma(x)$.
Further by order continuity of multiplication by elements of $R(T)$ it follows that the net $(\alpha {\frak f}_\gamma)$ has order limit $\alpha {\frak f}$.
However, for each $x\in E$, $({\frak f}_\gamma(\alpha x))$ has order limit ${\frak f}(\alpha x)$ and $(\alpha{\frak f}_\gamma(x))$ has order limit $\alpha{\frak f}(x)$ in $R(T)\subset E$.
Thus ${\frak f}(\alpha x)=\alpha{\frak f}(x)$ giving ${\frak f}\in E^*$.
\qed
\end{proof}
\begin{lem}
The space $\hat{E}$ is an $R(T)$-module and a Dedekind complete Riesz subspace of $E^*$.
\end{lem}
\begin{proof}
From its definition it is clear that $\hat{E}$ is a vector subspace of $E^*$ and is an $R(T)$-module.
It remains to prove that $\hat{E}$ is a sublattice of $E^*$ and that it is Dedekind complete.
To show that $\hat{E}$ is a sublattice of $E^*$ we observe that if ${\frak f},{\frak g}\in \hat{E}$ then there exist $k_{\frak f},k_{\frak g}\in R(T)_+$ so that
$|{\frak f}(x)|\le k_{\frak f}\|x\|_{T,2}$ and $|{\frak g}(x)|\le k_{\frak g}\|x\|_{T,2}$ for all $x\in E$.
From Lemma \ref{lem-RK},
$$|({\frak f}\vee {\frak g})(x)|\le\sup\{|{\frak f}(y)+{\frak g}(z)|\,|\,y,z\in E_+, y+z=x\}\le (k_{\frak f}+k_{\frak g})\|x\|_{T,2}$$
for all $x\in E$, giving ${\frak f}\vee{\frak g}\in \hat{E}$.
For the Dedekind completeness of $\hat{E}$ we observe that if $C\subset \hat{E}_+$ is a non-empty set bounded above by ${\frak g}\in \hat{E}_+$ then $C\subset {E^*}_+$ and is bounded above by ${\frak g}$ in $E^*$.
Thus ${\frak h}:=\sup C$ exists in $E^*$, as $E^*$ is Dedekind complete. Further ${\frak h}\le {\frak g}$ so there exists $k\in R(T)_+$ so that
$$|{\frak h}(x)|\le |{\frak g}(x)|\le k\|x\|_{T,2}.$$
Thus ${\frak h}\in \hat{E}$. If $\hat{{\frak h}}\in \hat{E}$ is an upper bound of $C$ in $\hat{E}$, then $\hat{{\frak h}}$ is also an upper bound of $C$ in $E^*$, so that ${\frak h}\le \hat{{\frak h}}$; thus ${\frak h}$ is the least upper bound of $C$ in $\hat{E}$.
\qed
\end{proof}
\begin{thm}
$T$ is a weak order unit for $\hat{E}$.
\end{thm}
\begin{proof} Let ${\frak f}\in \hat{E}_+$ and set $y({\frak f}):=\Psi^{-1}({\frak f})$, so that, by Theorem \ref{thm-final}, ${\frak f}(x)=T(y({\frak f})x)$ for all $x\in E$.
For $x\in L^2(T)$ with $x\ge 0$ we have
\begin{eqnarray*}
\sup_{n\in\N} ({\frak f}\wedge nT)(x)&=&\sup_{n\in\N} \left(\inf_{u+v=x,u,v\ge 0}({\frak f}(u)+nT(v))\right)\\
&=&\sup_{n\in\N} \left(\inf_{u+v=x,u,v\ge 0}T(y({\frak f})u+nv)\right)\\
&=&\sup_{n\in\N} \left(\inf_{x\ge v\ge 0}T(y({\frak f})(x-v)+nv)\right)\\
&=&\sup_{n\in\N} \left(T(y({\frak f})x)+\inf_{x\ge v\ge 0}T(v(ne-y({\frak f})))\right)\\
&=&T(y({\frak f})x)+\sup_{n\in\N} \left(\inf_{x\ge v\ge 0}T(v(ne-y({\frak f})))\right).
\end{eqnarray*}
Here
$$-T(x(ne-y({\frak f}))^-)\le\inf_{x\ge v\ge 0}T(v(ne-y({\frak f})))\le 0$$
But $e$ is a weak order unit so $(ne-y({\frak f}))^-\downarrow 0$ in order as $n\to\infty$ giving that
$-T(x(ne-y({\frak f}))^-)\uparrow 0$ in order as $n\to\infty$. Thus
$$\sup_{n\in\N} \left(\inf_{x\ge v\ge 0}T(v(ne-y({\frak f})))\right)=0$$
and
$$\sup_{n\in\N} ({\frak f}\wedge nT)(x)=T(y({\frak f})x)={\frak f}(x),$$
making $T$ a weak order unit for $\hat{E}$.
\qed
\end{proof}
\begin{thm}\label{thm-order-pr}
$\Psi$ is a bijection between $E_+$ and $\hat{E}_+$. Hence $\Psi$ and $\Psi^{-1}$ are order preserving bijections. In particular $\Psi(f)^\pm=\Psi(f^\pm)$, $\Psi(f)\vee\Psi(g)=\Psi(f\vee g)$ and
$\Psi(f)\wedge\Psi(g)=\Psi(f\wedge g)$, for all $f,g\in E$.
\end{thm}
\begin{proof}
From Theorem \ref{thm-final}, the map $\Psi:E\to\hat{E}$ is a bijection.
If ${\frak f}\in \hat{E}_+$ then there exists $f\in E$ such that
${\frak f}=\Psi(f)$ and it follows from Lemma \ref{lem-RK} that
\begin{eqnarray*}
0\le (\Psi(f)\wedge 0)(x)&:=&\inf\{T(fy)\,|\,y,z\in E_+, y+z=x\}\\
&=&
\inf\{T(fy)\,|\,0\le y\le x\}
\end{eqnarray*}
for all $x\in E_+$.
If $f\notin E_+$, then $f^-=0\vee (-f)\ne 0$. Let $P_{f^-}$ be the band projection onto the band generated by $f^-$ (all bands in $E$ are principal bands) and $p_{f^-}=P_{f^-}e\in E_+$. Taking $x=p_{f^-}$ gives
$$0=({\frak f}\wedge 0)(x)=(\Psi(f)\wedge 0)(x)=\inf\{T(fy)\,|\,0\le y\le p_{f^-}\}=-Tf^-,$$
since ${\frak f}\ge 0$ gives ${\frak f}\wedge 0=0$, and $fy=-f^-y$ for $0\le y\le p_{f^-}$. But $T$ is a strictly positive operator, so $Tf^->0$, a contradiction. Hence $f\in E_+$.
If $f\in E_+$ then for $x\in E_+$,
$$(\Psi(f)\wedge 0)(x)=\inf\{T(fy)\,|\,y,z\in E_+, y+z=x\}\ge 0$$
as $y\ge 0$ and $T$ is a positive operator. Thus $\Psi(f)\in \hat{E}_+$ and $\Psi$ is a bijection between $E_+$ and $\hat{E}_+$.
The remaining claims of the theorem follow directly from $\Psi$ being a linear bijection between $E_+$ and $\hat{E}_+$.
\qed
\end{proof}
\begin{thm}
$\Psi$ is a bijection between the components of $e$ in $E$ and the components of $T$ in $\hat{E}$.
\end{thm}
\begin{proof}
We begin by showing that $\Psi(p)$ is a component of $T$ for each $p$ a component of $e$. Using Theorem \ref{thm-order-pr}, we have $0\le \Psi(p)\le \Psi(e)=T$.
Again using Theorem \ref{thm-order-pr}, we have
\begin{eqnarray*}
\Psi(p)\wedge \Psi(e-p)=\Psi(p\wedge (e-p))=\Psi(0)=0.
\end{eqnarray*}
Hence we have shown that $\Psi(p)$ is a component of $T$ for each $p$ a component of $e$.
For the converse we merely work with $\Psi^{-1}$.
\qed
\end{proof}
From the above, for components $p,q$ of $e$ in $E$, we have that
\begin{eqnarray}\label{comp-mult}
\Psi(p)\cdot \Psi(q)=\Psi(p\cdot q)
\end{eqnarray}
where multiplication is with respect to the $f$-algebra structures of $E_e$ and $\hat{E}_T$ with $e$ and $T$ being their respective algebraic units.
\begin{lem}\label{lem-oc}
$\Psi$ and $\Psi^{-1}$ are order continuous and
\begin{eqnarray}\label{mult}
\Psi(p)\cdot \Psi(q)=\Psi(p\cdot q)
\end{eqnarray}
for all $p\in E_e$ and $q\in E$.
\end{lem}
\begin{proof}
Let $(f_\alpha)$ be a net in $E$ with order limit $f$ and let $g\in E$; then
$$\Psi(f_\alpha)(g)=T(f_\alpha g)\to T(fg)$$
in order, as pairwise multiplication is an order continuous map from $L^2(T)\times L^2(T)$ to $L^1(T)$, and $T$ is order continuous on $L^1(T)$. Thus $\Psi$ is order continuous.
Suppose that $({\frak f}_\alpha)\subset \hat{E}_+$ is a downwards directed net with ${\frak f}_\alpha\downarrow 0$. Then $(\Psi^{-1}({\frak f}_\alpha))\subset E_+$ is a downwards directed net with $\Psi^{-1}({\frak f}_\alpha)\downarrow h\ge 0$. So for $g\in E_+$ we have
$$T(\Psi^{-1}({\frak f}_\alpha)g)\downarrow T(hg)$$ from the order continuity of multiplication and of $T$.
However
$$T(\Psi^{-1}({\frak f}_\alpha)g)={\frak f}_\alpha(g)\downarrow 0.$$
Thus $T(hg)=0$ for all $g\in E_+$, in particular for $g=h$. Thus $T(h^2)=0$, which with the strict positivity of $T$ gives $h^2=0$, and thus $h=0$. Hence $\Psi^{-1}$ is order continuous.
As $\Psi$ is linear, (\ref{comp-mult}) extends immediately to $p$ and $q$ being linear combinations of components of $e$. Applying Freudenthal's theorem along with the order continuity of $\Psi$ and multiplication, the result follows.
\qed
\end{proof}
It should be noted that (\ref{mult}) can be extended to $p\in L^\infty(T)$ where
$$L^\infty(T)=\{f\in E_u\,|\, |f|\le \alpha \mbox{ for some } \alpha\in R(T)\},$$
see \cite{KRW} for more details on $L^\infty(T)$ and this multiplication.
We can characterize the band projections on $\hat{E}$.
\begin{thm}\label{thm-band}
The band projections on $\hat{E}$ are the maps $\hat{P}=\Psi P\Psi^{-1}$ where $P$ is a band projection on $E$.
\end{thm}
\begin{proof}
Let $\hat{P}$ be a band projection on $\hat{E}$ with range $\hat{B}$. As all bands in $\hat{E}$ are principal bands and as $T$ is a weak order unit for $\hat{E}$, it follows that $\hat{P}T$ is a generator of $\hat{B}$; further, it is a component of $T$. Thus there is a component $p$ of $e$ in $E$ so that $\Psi(p)= \hat{P}T$.
Let ${\frak g}\in \hat{E}$ then the action of $\hat{P}$ on ${\frak g}$ is given by
$$\hat{P}{\frak g}=(\hat{P}T)\cdot {\frak g}=\Psi(p)\cdot \Psi(\Psi^{-1}({\frak g})).$$
Applying (\ref{comp-mult}) we have
$$\Psi(p)\cdot \Psi(\Psi^{-1}({\frak g}))=\Psi(p\cdot \Psi^{-1}({\frak g}))=\Psi(P \Psi^{-1}({\frak g})),$$
where $P$ is the band projection in $E$ generated by $p$.
Thus $\hat{P}{\frak g}=\Psi(P \Psi^{-1}({\frak g})).$
\qed
\end{proof}
\begin{cor}
For each band projection $\hat{P}$ on $\hat{E}$, the
band projection $P:=\Psi^{-1}\hat{P}\Psi$ on $E$ satisfies
$\hat{P}{\frak f}={\frak f}\circ P$ for all ${\frak f}\in\hat{E}$.
\end{cor}
\begin{proof}
Here $\hat{P}=\Psi P\Psi^{-1}$. Hence, for all ${\frak f}\in\hat{E}$ and $g\in E$,
$$(\hat{P}{\frak f})(g)=(\Psi P\Psi^{-1}{\frak f})(g)=\Psi(P\Psi^{-1}{\frak f})(g)=T(\Psi^{-1}{\frak f}\cdot Pg)={\frak f}(Pg)=({\frak f}\circ P)(g).$$
Here we have used that
$P\Psi^{-1}{\frak f}\cdot g=\Psi^{-1}{\frak f}\cdot Pg$.
\qed
\end{proof}
\begin{note}
Each ${\frak f}\in \hat{E}$ can be considered as an $R(T)$-valued Radon measure $\mu_{\frak f}$ defined on the components of $e$ in $E$ by $\mu_{\frak f}(p)={\frak f}(p)$. In this sense the Riesz representation of the measure is $\mu_{\frak f}(p)=T(\Psi^{-1}({\frak f})\cdot p)$ and $\Psi^{-1}({\frak f})$ is the Radon-Nikod\'ym derivative of $\mu_{\frak f}$ with respect to $\mu_T$.
\end{note}
\section{Completeness}
As in the scalar case, we can define a norm on $\hat{E}$; however, in the case under consideration here this is an $R(T)$-valued norm, or in the
notation of \cite{KRW}, a $T$-norm. For ${\frak f}\in \hat{E}$ we define
\begin{equation}
\|{\frak f}\|=\inf\{k\in R(T)^+\,|\,|{\frak f}(x)|\le k\|x\|_{T,2} \mbox{ for all } x\in E\}.\label{dual-norm}
\end{equation}
As $R(T)$ is Dedekind complete it follows that $\|{\frak f}\|\in R(T)^+$.
For each $x\in E$, $\{k\in R(T)^+\,|\,|{\frak f}(x)|\le k\|x\|_{T,2} \}$ is a closed (with respect to order limits in $E$) convex cone in $R(T)^+$. Thus
$$Y:=\bigcap_{x\in E} \{k\in R(T)^+\,|\,|{\frak f}(x)|\le k\|x\|_{T,2} \}$$
is convex, additive and closed with respect to order limits, suprema and infima in $R(T)^+$.
Thus $Y$ has a unique minimal element
$$\|{\frak f}\|:=\inf \bigcap_{x\in E} \{k\in R(T)^+\,|\,|{\frak f}(x)|\le k\|x\|_{T,2} \}.$$
As $\|{\frak f}\|\in Y$ we have
$|{\frak f}(x)|\le \|{\frak f}\|\,\|x\|_{T,2}$ for each ${\frak f}\in \hat{E}$ and $x\in {L}^{2}(T)$.
\begin{thm}
The dual space $\hat{E}$ is strongly complete.
\end{thm}
\begin{proof}
Let $({\frak f}_\alpha)$ be a strong Cauchy net in $\hat{E}$.
Since
$$|{\frak f}_\alpha(x)-{\frak f}_\beta(x)|\le \|{\frak f}_\alpha-{\frak f}_\beta\| \,\|x\|_{T,2}$$
it follows that for each $x\in {L}^{2}(T)$, $({\frak f}_\alpha(x))$ is a Cauchy net in $R(T)$.
Eventually
\begin{align*}
v_\alpha:=\sup_{\beta,\gamma\ge\alpha} \|{\frak f}_\beta-{\frak f}_\gamma\|
\end{align*}
exists as an element of $R(T)$ and $v_\alpha\downarrow 0$.
Here by eventually we mean that there is $\beta$ in the index set of the net, so that for $\alpha \ge \beta$ the given statement holds.
So eventually
\begin{eqnarray}\label{cauchy-bound}
|{\frak f}_\beta(x)-{\frak f}_\gamma(x)|\le v_\alpha\|x\|_{T,2},\quad\mbox{ for all } \beta,\gamma\ge \alpha,
\end{eqnarray}
and the Cauchy net $({\frak f}_\alpha(x))$ is eventually bounded in $R(T)$.
Here the net $(|{\frak f}_\beta(x)-{\frak f}_\gamma(x)|)$ is considered as being indexed by $(\beta,\gamma)$
with $(\beta,\gamma)\le (\beta_1,\gamma_1)$ meaning $\beta\le \beta_1$ and $\gamma\le \gamma_1$.
We may thus define
$\underline{\frak f}(x):=\liminf_\alpha {\frak f}_\alpha(x)$, $\overline{\frak f}(x):=\limsup_\alpha {\frak f}_\alpha(x)$ in $R(T)$.
Here
$$0\le\overline{\frak f}(x)-\underline{\frak f}(x)
=\lim_\alpha(\sup_{\beta\ge\alpha}{\frak f}_\beta(x)-\inf_{\gamma\ge\alpha}{\frak f}_\gamma(x))
=\lim_\alpha\sup_{\beta,\gamma\ge\alpha}({\frak f}_\beta(x)-{\frak f}_\gamma(x))
\le \lim_\alpha v_\alpha\|x\|_{T,2}=0.$$
So we can set ${\frak f}(x):=\overline{\frak f}(x)=\underline{\frak f}(x)$ with $({\frak f}_\alpha(x))$ converging in order to ${\frak f}(x)$, see \cite{AKRW}, and hence
${\frak f}(x)$ is defined for each $x\in {L}^{2}(T)$.
From the linearity of order limits and the linearity of each ${\frak f}_\alpha$ it follows that ${\frak f}: {L}^{2}(T) \to R(T)$ is a linear map.
Taking the order limit in (\ref{cauchy-bound}) with respect to $\gamma$ gives
\begin{eqnarray}\label{limit-bound}
|{\frak f}_\beta(x)-{\frak f}(x)|\le v_\alpha\|x\|_{T,2},\quad\mbox{ for all } \beta\ge \alpha, x\in L^2(T).
\end{eqnarray}
Thus $$|{\frak f}(x)|\le |{\frak f}_\alpha(x)|+v_\alpha\|x\|_{T,2}\le (\|{\frak f}_\alpha\|+v_\alpha)\|x\|_{T,2}$$
for all $x\in L^2(T)$,
from which we have that ${\frak f}\in \hat{E}$.
Now (\ref{limit-bound}) can be written as
\begin{eqnarray*}
\|{\frak f}_\beta-{\frak f}\|\le v_\alpha,\quad\mbox{ for all } \beta\ge \alpha,
\end{eqnarray*}
giving that
$({\frak f}_\alpha)$ converges strongly to ${\frak f}$.
\qed
\end{proof}
We recall from \cite{KRodW} some of the concepts of strong completeness as they relate to ${{L}}^2(T)$.
\begin{defs}
We say that a net $(f_\alpha)$ in ${{L}}^2(T)$ is a strong Cauchy net if
$$v_\alpha:=\sup_{\beta,\gamma\ge \alpha}\|f_\beta-f_\gamma\|_{T,2}$$
is eventually defined and has order limit zero, i.e. there exists $\delta$ such that
$v_\alpha$ is defined for all $\alpha\ge \delta$ and $\displaystyle{\inf_{\alpha\ge \delta} v_\alpha = 0}$.
\end{defs}
It should be noted that this is equivalent to requiring that $\|f_\beta-f_\gamma\|_{T,2}$ as a net indexed by $(\beta,\gamma)$ with componentwise directedness, converges to $0$ in order.
We are now in a position to give the definition of strong completeness.
\begin{defs}
We say that ${{L}}^2(T)$ is strongly complete if each strong Cauchy net
$(f_\alpha)$ in ${{L}}^2(T)$ is strongly convergent in ${{L}}^2(T)$,
i.e. there is $f\in {{L}}^2(T)$ so that
$$w_\alpha:=\sup_{\beta\ge \alpha}\|f_\beta-f\|_{T,2}$$
is eventually defined and has order limit zero, i.e. there exists $\delta$ such that
$w_\alpha$ is defined for all $\alpha\ge \delta$ and $\displaystyle{\inf_{\alpha\ge \delta} w_\alpha = 0}$.
\end{defs}
It should be noted that for the case of $E=L^2(\Omega,\Sigma,\mu)$, where $\mu$ is a finite measure and $Tf={\bf 1}\frac{1}{\mu(\Omega)}\int_\Omega f\,d\mu$,
with ${\bf 1}$ the constant $1$ function, we have ${{L}}^2(T)=E$, the vector norm $\|f\|_{T,2}=\mu(\Omega)^{-1/2}\|f\|_2{\bf 1}$ is a scalar multiple of the standard $L^2$ norm, and the
concepts of strong Cauchy nets and strong completeness coincide with those of norm/strong Cauchy nets and norm/strong completeness.
\begin{thm}
${{L}}^2(T)$ is strongly complete.
\end{thm}
\begin{proof}
The map $\Psi: y\mapsto T_y$ is an $R(T)$-linear, $R(T)$-valued vector norm preserving bijection between ${{L}}^2(T)$ and $\hat{E}$.
So given a strong Cauchy net $(y_\alpha)$ in ${{L}}^2(T)$, $(\Psi(y_\alpha))$ is a strong Cauchy net in $\hat{E}$ and is thus
strongly convergent to some ${\frak f}\in \hat{E}$. As $\Psi$ is a bijection, there is $y \in {{L}}^2(T)$ so that ${\frak f}=\Psi(y)=T_y$.
Due to the $R(T)$-linearity of $\Psi$ and this map being $R(T)$-valued vector norm preserving, the strong convergence of
$(\Psi(y_\alpha))$ to ${\frak f}$ in $\hat{E}$ yields the strong convergence of $(y_\alpha)$ to $y$ in ${{L}}^2(T)$.
\qed
\end{proof}
\section{Introduction}
We write $Y ( p_1 , p_2 ; G )$ for the optimal constant (the optimal ratio of both sides) of Young's convolution inequality on a locally compact group $G$.
The main result of this paper is that $Y ( p_1 , p_2 ; G ) \leq Y ( p_1 , p_2 ; H )$ holds for any closed subgroup $H \subset G$ (Theorem \ref{thm:Young-order-reversing}).
As a corollary, we generalize the upper bound on $Y ( p_1 , p_2 ; G )$ due to Beckner \cite{MR385456}, Fournier \cite{MR461034}, Klein--Russo \cite{MR499945}, and Nielsen \cite{MR1304346} to any connected Lie group $G$ such that the center of the semisimple part is finite, such as connected linear Lie groups and connected solvable Lie groups.
That is, we have $Y ( p_1 , p_2 ; G ) \leq Y ( p_1 , p_2 ; \mathbb{R} )^{ \dim G - r ( G ) }$, where $r ( G )$ is the dimension of the maximal compact subgroups of $G$ (Corollary \ref{cor:Young-non-compact-dimension}).
We set the H\"{o}lder conjugate $1 \leq p' \leq \infty$ of $1 \leq p \leq \infty$ as
\begin{align}
\frac{1}{p} + \frac{1}{p'} = 1.
\end{align}
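As a quick computational aid (not part of the paper), the H\"{o}lder conjugate with the usual conventions $1' = \infty$ and $\infty' = 1$ can be evaluated as follows; the helper name \texttt{holder\_conjugate} is our own.

```python
import math

def holder_conjugate(p: float) -> float:
    """Return p' with 1/p + 1/p' = 1, using the conventions 1' = inf, inf' = 1."""
    if p == 1:
        return math.inf
    if math.isinf(p):
        return 1.0
    return p / (p - 1)

# sanity check: 1/p + 1/p' = 1 for a few sample exponents
for p in (1, 1.5, 2, 4, math.inf):
    q = holder_conjugate(p)
    assert abs(1 / p + 1 / q - 1) < 1e-12
```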
The $L^p$-norm with respect to the left Haar measure $dg$ on the locally compact group $G$ is written as $\| \cdot \|_p$.
We define the convolution $\phi_1 * \phi_2$ of measurable functions $\phi_1 , \phi_2 \colon G \to \mathbb{C}$ as
\begin{align}
\phi_1 * \phi_2 ( g' )
:= \int_{G}^{} \phi_1 ( g ) \phi_2 ( g^{-1} g' ) dg.
\end{align}
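Although the paper works with general locally compact groups, the definition can be sanity-checked numerically on the discrete group $\mathbb{Z}$ with counting measure, where $\Delta \equiv 1$ and the classical bound $\| \phi_1 * \phi_2 \|_p \leq \| \phi_1 \|_{p_1} \| \phi_2 \|_{p_2}$ must hold. The sketch below is our own and uses NumPy's discrete convolution.

```python
import numpy as np

def lp_norm(x, p):
    # l^p norm with respect to counting measure on Z
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
p1, p2 = 1.5, 1.5
p = 1.0 / (1.0 / p1 + 1.0 / p2 - 1.0)   # from 1/p1 + 1/p2 = 1 + 1/p, so p = 3

for _ in range(100):
    phi1 = rng.random(20)
    phi2 = rng.random(30)
    conv = np.convolve(phi1, phi2)       # convolution on Z (Delta = 1)
    # Young's inequality with constant 1 on the discrete group Z
    assert lp_norm(conv, p) <= lp_norm(phi1, p1) * lp_norm(phi2, p2) + 1e-9
```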
In addition, there exists a unique continuous homomorphism $\Delta \colon G \to \mathbb{R}_{> 0}$ such that
\begin{align}
\int_{G}^{} \phi ( g^{-1} ) dg
= \int_{G}^{} \frac{\phi ( g )}{\Delta ( g )} dg. \label{eq:modular-function}
\end{align}
This $\Delta$ is called the modular function of $G$.
Then the notion of the optimal constant $Y ( p_1 , p_2 ; G )$ is defined as follows.
\begin{definition}[{\cite[Section 2]{MR499945} \cite{MR1304346}}]
\label{def:optimal-constant}
For any locally compact group $G$ and any $1 \leq p_1 , p_2 \leq \infty$ with
\begin{align}
\frac{1}{p_1} + \frac{1}{p_2} \geq 1, \label{eq:optimal-constant-p1-p2-condition}
\end{align}
the optimal constant $Y ( p_1 , p_2 ; G )$ is defined as
\begin{align}
Y ( p_1 , p_2 ; G ) := \sup \{ \| \phi_1 * ( \phi_2 \Delta^{1/p_1'} ) \|_p \mid \phi_1 , \phi_2 \colon G \to \mathbb{C} \text{ measurable} , \; \| \phi_1 \|_{p_1} = \| \phi_2 \|_{p_2} = 1 \}.
\end{align}
Here $\Delta$ is the modular function of $G$, and $p$ is given by
\begin{align}
\frac{1}{p_1} + \frac{1}{p_2}
= 1 + \frac{1}{p}. \label{eq:optimal-constant-p-definition}
\end{align}
\end{definition}
The main result of this paper is the following theorem.
\begin{theorem}
\label{thm:Young-order-reversing}
Suppose that $1 \leq p_1 , p_2 \leq \infty$ satisfy \eqref{eq:optimal-constant-p1-p2-condition}.
Then $Y ( p_1 , p_2 ; G ) \leq Y ( p_1 , p_2 ; H )$ holds for any closed subgroup $H \subset G$ of any locally compact group $G$.
\end{theorem}
When $H$ is a normal subgroup of $G$, Theorem \ref{thm:Young-order-reversing} is essentially known.
That is, Cowling--Martini--M\"{u}ller--Parcet proved
\begin{align}
Y ( p_1 , p_2 ; G )
\leq Y ( p_1 , p_2 ; H ) Y ( p_1 , p_2 ; G / H ) \label{eq:Young-product-normal}
\end{align}
\cite[Proposition 2.2]{MR4000236}.
Since $Y ( p_1 , p_2 ; G / H ) \leq 1$ by the classical Young's inequality (Example \ref{ex:Young-order-reversing-example} \ref{item:Young-order-reversing-example-classic}), Theorem \ref{thm:Young-order-reversing} follows from \eqref{eq:Young-product-normal}.
The inequality \eqref{eq:Young-product-normal} was essentially proved by Beckner when $G$ is abelian \cite[Section IV.5]{MR385456} (see also Fact \ref{fact:Beckner}), and by Klein--Russo when $G$ is a semidirect product of $H$ and $G / H$ \cite[Lemma 2.4]{MR499945}.
Theorem \ref{thm:Young-order-reversing} has some interesting examples.
For instance, we have the classical Young's inequality by applying Theorem \ref{thm:Young-order-reversing} to the case where $H$ is trivial (Example \ref{ex:Young-order-reversing-example} \ref{item:Young-order-reversing-example-classic}).
In addition, if the identity component $G_0 \subset G$ is open (e.g. $G$ is a Lie group), then we have $Y ( p_1 , p_2 ; G ) = Y ( p_1 , p_2 ; G_0 )$ (Example \ref{ex:Young-order-reversing-example} \ref{item:Young-order-reversing-example-component}) by Theorem \ref{thm:Young-order-reversing}.
Thus, it suffices to consider the identity component to determine the value of $Y ( p_1 , p_2 ; G )$ for any Lie group $G$.
In addition, we have $Y ( p_1 , p_2 ; G ) \leq Y ( p_1 , p_2 ; \mathbb{R} )$ for any locally compact group $G$ which has no open compact subgroup by Theorem \ref{thm:Young-order-reversing}.
This claim is a generalization of the previous results of Fournier, Nielsen, and the author (Corollary \ref{cor:Young-R-compare}).
As a corollary of Theorem \ref{thm:Young-order-reversing}, we bound $Y ( p_1 , p_2 ; G )$ from above, in terms of the dimension $r ( G )$ of the maximal compact subgroups, for any connected Lie group $G$ whose semisimple part has finite center, such as connected linear Lie groups and connected solvable Lie groups.
That is, the following corollary holds, where $\# G$ is the cardinality of a group $G$, and $Z ( G )$ is the center of $G$.
\begin{corollary}
\label{cor:Young-non-compact-dimension}
Let $R \lhd G$ be the radical (the largest connected solvable closed normal subgroup) of a connected Lie group $G$, and $r ( G )$ be the dimension of the maximal compact subgroups of $G$.
If $\# Z ( G / R ) < \infty$, then
\begin{align}
Y ( p_1 , p_2 ; G ) \leq Y ( p_1 , p_2 ; \mathbb{R} )^{\dim G - r ( G )} \label{eq:Young-non-compact-dimension-state}
\end{align}
holds for any $1 \leq p_1 , p_2 \leq \infty$ with \eqref{eq:optimal-constant-p1-p2-condition}.
\end{corollary}
Corollary \ref{cor:Young-non-compact-dimension} bounds $Y ( p_1 , p_2 ; G)$ explicitly from above because $Y ( p_1 , p_2 ; \mathbb{R} )$ is determined explicitly by Beckner (Fact \ref{fact:Beckner}).
Although Corollary \ref{cor:Young-non-compact-dimension} is known for some $G$ (Table \ref{tab:Young-non-compact-dimension-compare}), to the best of our knowledge, it contains some new examples such as
\begin{align}
Y ( p_1 , p_2 ; SL_2 (\mathbb{R}) )
\leq Y ( p_1 , p_2 ; \mathbb{R} )^2. \label{eq:SL2-bound}
\end{align}
It is known that the equality of \eqref{eq:Young-non-compact-dimension-state} holds for some connected Lie groups such as connected compact Lie groups (Corollary \ref{cor:Young-R-compare}) and connected nilpotent Lie groups (Fact \ref{fact:Nielsen}).
We prove Corollary \ref{cor:Young-non-compact-dimension} in Section \ref{sec:non-compact-dimension} by using an argument similar to the generalization of the Brunn--Minkowski inequality to any Lie group by Jing--Tran--Zhang \cite[Theorem 1.1]{jing2021nonabelian}.
Here is the organization of this paper.
In Section \ref{sec:Young-known-result}, we compare some known results with Theorem \ref{thm:Young-order-reversing} and Corollary \ref{cor:Young-non-compact-dimension}.
In Section \ref{sec:boundary}, we show Theorem \ref{thm:Young-order-reversing} when $( p_1 , p_2 )$ is on the boundary of the range that satisfies the assumption.
In Section \ref{sec:Young-order-proof}, we show Theorem \ref{thm:Young-order-reversing} when $( p_1 , p_2 )$ is not on the boundary.
In Section \ref{sec:non-compact-dimension}, we show Corollary \ref{cor:Young-non-compact-dimension} by using Theorem \ref{thm:Young-order-reversing} and the argument of Jing--Tran--Zhang.
\section{Comparison of some known results on the optimal constant}
\label{sec:Young-known-result}
In this section, we compare some known results on the optimal constant $Y ( p_1 , p_2 ; G )$ with Theorem \ref{thm:Young-order-reversing} (Subsection \ref{subsec:Young-known-result-order}) and Corollary \ref{cor:Young-non-compact-dimension} (Subsection \ref{subsec:Young-known-result-non-compact-dimension}).
\subsection{Comparison with Theorem \ref{thm:Young-order-reversing}}
\label{subsec:Young-known-result-order}
In this subsection, we see some relations between known results and Theorem \ref{thm:Young-order-reversing}.
Here are some examples of Theorem \ref{thm:Young-order-reversing}.
\begin{example}
\label{ex:Young-order-reversing-example}
\begin{enumerate}
\item \label{item:Young-order-reversing-example-classic}
Since the trivial group $\{ e \} \subset G$ containing only the identity element $e \in G$ is a closed subgroup, we have $Y ( p_1 , p_2 ; G ) \leq Y ( p_1 , p_2 ; \{ e \} )$ by Theorem \ref{thm:Young-order-reversing}.
The equality $Y ( p_1 , p_2 ; \{ e \} ) = 1$ holds by definition and hence we have the classical Young's inequality
\begin{align}
Y ( p_1 , p_2 ; G ) \leq 1. \label{eq:Young-order-reversing-example-classic-state}
\end{align}
There are at least two proofs of \eqref{eq:Young-order-reversing-example-classic-state}.
\begin{enumerate}
\item \label{item:Young-order-reversing-example-classic-Riesz-Thorin}
The method of interpolating between the endpoint cases $p_1 = 1$ and $p_1 = p_2'$ by using the Riesz--Thorin theorem.
\item \label{item:Young-order-reversing-example-classic-Holder}
The direct method to use H\"{o}lder's inequality repeatedly.
\end{enumerate}
These proofs can be found in some literature listed in Table \ref{tab:classical-Young}.
Terp indicated that one can prove \eqref{eq:Young-order-reversing-example-classic-state} by the method \ref{item:Young-order-reversing-example-classic-Holder} even when $G$ is not unimodular, but no explicit proof is given there.
By using \eqref{eq:Young-order-reversing-example-classic-state}, Terp generalized the Hausdorff--Young inequality to any locally compact group \cite[Theorem 5.2]{MR3730047}.
The proof of Theorem \ref{thm:Young-order-reversing} in Section \ref{sec:Young-order-proof} can be regarded as a generalization of the method \ref{item:Young-order-reversing-example-classic-Holder}.
\begin{table}
\centering
\caption{Some literature in which the proof of \eqref{eq:Young-order-reversing-example-classic-state} is mentioned}
\label{tab:classical-Young}
\vspace{8pt}
\begin{tabular}{c||c|c}
& $G$: unimodular & $G$: general \\
\hline \hline
\ref{item:Young-order-reversing-example-classic-Riesz-Thorin} & Weil \cite{MR0005741} & Klein--Russo \cite[Lemma 2.1]{MR499945} \\
\hline
\ref{item:Young-order-reversing-example-classic-Holder} & Hewitt--Ross \cite[Theorem 20.18]{MR551496} & (Terp \cite[Lemma 1.1]{MR3730047})
\end{tabular}
\end{table}
\item \label{item:Young-order-reversing-example-component}
We have $Y ( p_1 , p_2 ; G ) \leq Y ( p_1 , p_2 ; G_0 )$ by Theorem \ref{thm:Young-order-reversing}, where $G_0 \subset G$ is the identity component.
If $G_0$ is open (e.g. $G$ is a Lie group), then the restriction to $G_0$ of a left Haar measure on $G$ is a left Haar measure on $G_0$, and hence $Y ( p_1 , p_2 ; G ) \geq Y ( p_1 , p_2 ; G_0 )$.
Thus, we obtain $Y ( p_1 , p_2 ; G ) = Y ( p_1 , p_2 ; G_0 )$ when $G_0$ is open.
\end{enumerate}
\end{example}
By using Theorem \ref{thm:Young-order-reversing}, we obtain necessary and sufficient conditions for satisfying $Y ( p_1 , p_2 ; G ) \leq Y ( p_1 , p_2 ; \mathbb{R} )$ as follows.
\begin{corollary}
\label{cor:Young-R-compare}
The following conditions \ref{item:Young-R-compare-open-compact}-\ref{item:Young-R-compare-not-1-G0} on the locally compact group $G$ are equivalent for any $1 < p_1 , p_2 < \infty$ with
\begin{align}
\frac{1}{p_1} + \frac{1}{p_2} > 1. \label{eq:Young-R-compare-condition}
\end{align}
\begin{enumerate}
\item \label{item:Young-R-compare-open-compact}
The locally compact group $G$ has no open compact subgroup.
\item \label{item:Young-R-compare-G0-non-compact}
The identity component $G_0 \subset G$ is not compact.
\item \label{item:Young-R-compare-subgroup-G}
The locally compact group $G$ has a closed subgroup which is isomorphic to $\mathbb{R}$ as a topological group.
\item \label{item:Young-R-compare-subgroup-G0}
The identity component $G_0$ has a closed subgroup which is isomorphic to $\mathbb{R}$ as a topological group.
\item \label{item:Young-R-compare-less-G}
One has $Y ( p_1 , p_2 ; G ) \leq Y ( p_1 , p_2 ; \mathbb{R} )$.
\item \label{item:Young-R-compare-less-G0}
One has $Y ( p_1 , p_2 ; G_0 ) \leq Y ( p_1 , p_2 ; \mathbb{R} )$.
\item \label{item:Young-R-compare-not-1-G}
One has $Y ( p_1 , p_2 ; G ) \neq 1$.
\item \label{item:Young-R-compare-not-1-G0}
One has $Y ( p_1 , p_2 ; G_0 ) \neq 1$.
\end{enumerate}
\end{corollary}
When $G$ is unimodular, $\ref{item:Young-R-compare-open-compact} \Longleftrightarrow \ref{item:Young-R-compare-not-1-G}$ was proved by Fournier \cite[Theorems 1 and 3]{MR461034}, and $\ref{item:Young-R-compare-open-compact} \Longleftrightarrow \ref{item:Young-R-compare-G0-non-compact} \Longleftrightarrow \ref{item:Young-R-compare-less-G} \Longleftrightarrow \ref{item:Young-R-compare-less-G0} \Longleftrightarrow \ref{item:Young-R-compare-not-1-G} \Longleftrightarrow \ref{item:Young-R-compare-not-1-G0}$ was essentially proved in the previous paper of the author \cite[Corollary 1.3 and Remark 2.2]{MR4540843}.
When $G$ is not necessarily unimodular, $\ref{item:Young-R-compare-subgroup-G} \Longleftrightarrow \ref{item:Young-R-compare-subgroup-G0}$ follows from the connectedness of $\mathbb{R}$.
The results of Iwasawa \cite[Theorem 13]{MR29911} and the Gleason--Yamabe theorem \cite{MR39730} \cite[Theorem 5']{MR58607} imply $\ref{item:Young-R-compare-G0-non-compact} \Longrightarrow \ref{item:Young-R-compare-subgroup-G0}$.
Nielsen proved $\ref{item:Young-R-compare-open-compact} \Longleftrightarrow \ref{item:Young-R-compare-not-1-G}$ \cite[Theorem 1]{MR1304346}.
The author showed $\ref{item:Young-R-compare-open-compact} \Longleftrightarrow \ref{item:Young-R-compare-G0-non-compact}$ in the previous paper \cite[Remark 2.4 (3)]{satomi2021inequality} by using the result of Hewitt--Ross \cite{MR551496}.
Theorem \ref{thm:Young-order-reversing} implies $\ref{item:Young-R-compare-subgroup-G0} \Longrightarrow \ref{item:Young-R-compare-less-G0} \Longrightarrow \ref{item:Young-R-compare-less-G}$.
Theorem \ref{thm:Young-order-reversing} and Example \ref{ex:Young-order-reversing-example} \ref{item:Young-order-reversing-example-classic} imply $\ref{item:Young-R-compare-not-1-G0} \Longrightarrow \ref{item:Young-R-compare-not-1-G}$.
The implications $\ref{item:Young-R-compare-less-G} \Longrightarrow \ref{item:Young-R-compare-not-1-G}$ and $\ref{item:Young-R-compare-less-G0} \Longrightarrow \ref{item:Young-R-compare-not-1-G0}$ are deduced from $Y ( p_1 , p_2 ; \mathbb{R} ) < 1$.
Beckner determined the value of $Y ( p_1 , p_2 ; \mathbb{R}^n )$ explicitly in the following fact, from which $Y ( p_1 , p_2 ; \mathbb{R} ) < 1$ follows.
\begin{fact}[Beckner {\cite[Theorem 3]{MR385456}}]
\label{fact:Beckner}
Let $1 \leq p \leq \infty$ be as in \eqref{eq:optimal-constant-p-definition} for $1 \leq p_1 , p_2 \leq \infty$ with \eqref{eq:optimal-constant-p1-p2-condition}.
Then
\begin{align}
Y ( p_1 , p_2 ; \mathbb{R}^n )
= \left( \frac{B ( p_1 ) B ( p_2 )}{B ( p )} \right)^{n / 2}, & &
B ( p )
:=
\left\{
\begin{aligned}
& \frac{p^{1 / p}}{p'^{1 / p'}} & & \text{if $1 < p < \infty$} \\
& 1 & & \text{if $p = 1, \infty$}
\end{aligned}
\right.
\end{align}
holds for any $n \in \mathbb{Z}_{\geq 1}$.
\end{fact}
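Fact \ref{fact:Beckner} is explicit enough to evaluate numerically. The following sketch (with our own helper names \texttt{B} and \texttt{Y\_Rn}) computes $Y ( p_1 , p_2 ; \mathbb{R}^n )$ from Beckner's formula and checks, for instance, that $Y ( p_1 , p_2 ; \mathbb{R} ) < 1$ at an interior point of the admissible range while the boundary cases of Lemma \ref{lem:Young-bound} give the classical constant $1$.

```python
import math

def B(p: float) -> float:
    """Beckner's constant B(p) = p^{1/p} / p'^{1/p'}, with B(1) = B(inf) = 1."""
    if p == 1 or math.isinf(p):
        return 1.0
    q = p / (p - 1)                     # Holder conjugate p'
    return p ** (1 / p) / q ** (1 / q)

def Y_Rn(p1: float, p2: float, n: int = 1) -> float:
    """Optimal Young constant Y(p1, p2; R^n) = (B(p1) B(p2) / B(p))^{n/2}."""
    inv_p = 1 / p1 + 1 / p2 - 1         # 1/p from 1/p1 + 1/p2 = 1 + 1/p
    p = math.inf if inv_p == 0 else 1 / inv_p
    return (B(p1) * B(p2) / B(p)) ** (n / 2)

# strict inequality Y < 1 in the interior of the admissible range
assert Y_Rn(4 / 3, 4 / 3) < 1
# boundary cases (p1 = 1, or 1/p1 + 1/p2 = 1) give the classical constant 1
assert abs(Y_Rn(1, 2) - 1) < 1e-12 and abs(Y_Rn(2, 2) - 1) < 1e-12
```

The formula also makes the tensorization $Y ( p_1 , p_2 ; \mathbb{R}^n ) = Y ( p_1 , p_2 ; \mathbb{R} )^n$ visible directly.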
\begin{remark}
There are some proofs of Fact \ref{fact:Beckner} (and its generalization named the Brascamp--Lieb inequality \cite[Theorem 1]{MR412366}).
\begin{enumerate}
\item
Beckner reduced Fact \ref{fact:Beckner} to the problem of finding the integrable solutions of
\begin{align}
\psi_1 (x_1) \psi_2 (x_2) = \nu_1 (x_1 + x_2) \nu_2 (x_1 - x_2) \label{eq:independent}
\end{align}
by using the Minkowski integral inequality.
Beckner proved the existence of rotation invariant extremal functions by using the Riesz--Sobolev rearrangement inequality, and proved Fact \ref{fact:Beckner} by showing that the rotation invariant solutions of \eqref{eq:independent} are only Gaussian functions.
Lieb proved the Brascamp--Lieb inequality by showing implicitly that the solutions of \eqref{eq:independent} are only Gaussian functions (the Darmois--Skitovich theorem \cite{MR61322} \cite{MR0055597}) even when the function is not necessarily rotation invariant \cite[Theorem 6.2]{MR1069246}.
\item
Brascamp--Lieb proved Fact \ref{fact:Beckner} by showing $Y ( p_1 , p_2 ; \mathbb{R}^n ) = Y ( p_1 , p_2 ; \mathbb{R} )^n$ and bounding the behavior of $Y ( p_1 , p_2 ; \mathbb{R}^n )$ as $n \to \infty$ from above.
In addition, Brascamp--Lieb generalized Fact \ref{fact:Beckner} to the Brascamp--Lieb inequality by a similar argument.
\item
Barthe gave a direct proof \cite[Theorem 1]{MR1616143} utilizing the change of variable by Henstock--Macbeath \cite[Section 5]{MR56669} and the weighted AM-GM inequality.
In addition, Barthe proved that a similar argument is valid for the Brascamp--Lieb inequality \cite[Theorem 1]{MR1650312}.
\item
Carlen--Lieb--Loss proved the rank-one Brascamp--Lieb inequality by showing the property that the integral of the product of the exponentiations of the solutions of the heat equations is increasing with time \cite[Theorem 3.1]{MR2077162}.
Cordero-Erausquin--Ledoux proved this property by using the estimate of Shannon's differential entropy \cite[Theorem 6]{MR2644890}.
\end{enumerate}
In addition, there are many works \cite{MR1008726} \cite{MR2377493} \cite{MR2448061} \cite{MR2496567} \cite{MR2661170} \cite{MR2674705} \cite{MR2806562} \cite{MR3364694} \cite{MR3239122} \cite{MR3431655} \cite{MR3723636} \cite{MR3610015} \cite{MR3777414} \cite{MR4173156} and surveys \cite{MR1898210} \cite{MR2657116} \cite{MR3204854} about Fact \ref{fact:Beckner} and the Brascamp--Lieb inequality.
\end{remark}
\subsection{Comparison with Corollary \ref{cor:Young-non-compact-dimension}}
\label{subsec:Young-known-result-non-compact-dimension}
In this section, we see some relations between known results and Corollary \ref{cor:Young-non-compact-dimension}.
Table \ref{tab:Young-non-compact-dimension-compare} lists the authors who proved Corollary \ref{cor:Young-non-compact-dimension} for some $G$.
If $G$ is compact, then $G$ is unimodular and Corollary \ref{cor:Young-non-compact-dimension} corresponds to \eqref{eq:Young-order-reversing-example-classic-state}.
Thus, Corollary \ref{cor:Young-non-compact-dimension} was essentially proved by Weil (Example \ref{ex:Young-order-reversing-example} \ref{item:Young-order-reversing-example-classic}).
The equivalent conditions \ref{item:Young-R-compare-open-compact}-\ref{item:Young-R-compare-not-1-G0} in Corollary \ref{cor:Young-R-compare} are also equivalent to $r ( G ) < \dim G$ for any connected Lie group $G$.
Thus, Corollary \ref{cor:Young-non-compact-dimension} gives a stronger upper bound on $Y ( p_1 , p_2 ; G )$ than Corollary \ref{cor:Young-R-compare} for any connected Lie group $G$ with $\# Z ( G / R ) < \infty$.
When $G$ is either a simply connected solvable Lie group or a connected nilpotent Lie group, Nielsen determined the value of $Y ( p_1 , p_2 ; G )$ as follows.
\begin{fact}[Nielsen {\cite[Corollaries (a) and (b)]{MR1304346}}]
\label{fact:Nielsen}
Suppose that $1 < p_1 , p_2 < \infty$ satisfy \eqref{eq:Young-R-compare-condition}.
If $G$ is either a simply connected solvable Lie group or a connected nilpotent Lie group, then
\begin{align}
Y ( p_1 , p_2 ; G )
= Y ( p_1 , p_2 ; \mathbb{R} )^{\dim G - \mathrm{rank} ( \ker ( \tilde{G} \to G ) ) }
\end{align}
holds, where $\tilde{G}$ is the universal cover of $G$.
\end{fact}
We have $r ( G ) = \mathrm{rank} ( \ker ( \tilde{G} \to G ) )$ for any connected solvable Lie group $G$ (Example \ref{ex:non-compact-dimension-bound-apply} \ref{item:non-compact-dimension-bound-apply-Nielsen}).
Thus, if $G$ is either a simply connected solvable Lie group or a connected nilpotent Lie group, then Corollary \ref{cor:Young-non-compact-dimension} follows from Fact \ref{fact:Nielsen}.
In addition, Bennett--Bez--Buschenhenke--Cowling--Flock gave a stronger bound than Corollary \ref{cor:Young-non-compact-dimension} in the case where each support of the functions is sufficiently small as follows.
\begin{table}
\centering
\caption{Authors who proved Corollary \ref{cor:Young-non-compact-dimension} for some $G$}
\label{tab:Young-non-compact-dimension-compare}
\vspace{8pt}
\begin{tabular}{c|c}
Connected Lie group $G$ & Author \\
\hline \hline
Compact group & Weil (Example \ref{ex:Young-order-reversing-example} \ref{item:Young-order-reversing-example-classic}) \\
\hline
$\mathbb{R}^n$ & Beckner (Fact \ref{fact:Beckner}) \\
\hline
Simply connected nilpotent Lie group & Klein--Russo \cite[Corollary 2.5']{MR499945} \\
\hline
Simply connected solvable Lie group, & \multirow{2}{*}{Nielsen (Fact \ref{fact:Nielsen})} \\
Nilpotent Lie group &
\end{tabular}
\end{table}
\begin{fact}[Bennett--Bez--Buschenhenke--Cowling--Flock {\cite[Corollary 2.4]{MR4173156}}]
\label{fact:Bennett-Bez-Buschenhenke-Cowling-Flock}
Let $1 \leq p \leq \infty$ be as in \eqref{eq:optimal-constant-p-definition} for $1 \leq p_1 , p_2 \leq \infty$ with \eqref{eq:optimal-constant-p1-p2-condition}.
Then for any Lie group $G$ and any $Y_0 > Y ( p_1 , p_2 ; \mathbb{R} )^{\dim G}$, there exists a non-empty open set $V \subset G$ such that
\begin{align}
\| \phi_1 * ( \phi_2 \Delta^{1 / p_1'} ) \|_p \leq Y_0 \| \phi_1 \|_{p_1} \| \phi_2 \|_{p_2} \label{eq:Bennett-Bez-Buschenhenke-Cowling-Flock-inequality}
\end{align}
for any measurable functions $\phi_1 , \phi_2 \colon G \to \mathbb{C}$ whose supports are contained in $V$.
\end{fact}
The connectedness is assumed in the original paper of Bennett--Bez--Buschenhenke--Cowling--Flock.
Nevertheless, Fact \ref{fact:Bennett-Bez-Buschenhenke-Cowling-Flock} holds without connectedness because $V$ can be chosen so that $V \subset G_0$.
The value $Y ( p_1 , p_2 ; \mathbb{R} )^{\dim G}$ in Fact \ref{fact:Bennett-Bez-Buschenhenke-Cowling-Flock} is the best possible by the result of Cowling--Martini--M\"{u}ller--Parcet \cite[Proposition 2.4 (i)]{MR4000236}.
Fact \ref{fact:Bennett-Bez-Buschenhenke-Cowling-Flock} gives a stronger bound than Theorem \ref{thm:Young-order-reversing} with the assumption that the supports are sufficiently small.
For example, when $G = SL_2 ( \mathbb{R} )$, we have \eqref{eq:SL2-bound} by $\dim G = 3$ and $r ( G ) = \dim SO ( 2 ) = 1$.
On the other hand, for any $Y_0 > Y ( p_1 , p_2 ; \mathbb{R} )^3$, there exists a non-empty open set $V \subset SL_2 ( \mathbb{R} )$ such that \eqref{eq:Bennett-Bez-Buschenhenke-Cowling-Flock-inequality} holds for any $\phi_1 , \phi_2 \colon SL_2 ( \mathbb{R} ) \to \mathbb{C}$ whose supports are contained in $V$.
\section{Optimal constant on the boundary}
\label{sec:boundary}
In this section, we show Theorem \ref{thm:Young-order-reversing} when $( p_1 , p_2 )$ is on the boundary of the range that satisfies the assumption.
In this case, the equality of the classical Young's inequality \eqref{eq:Young-order-reversing-example-classic-state} holds for any locally compact group $G$ as follows.
\begin{lemma}
\label{lem:Young-bound}
Suppose that $1 \leq p_1 , p_2 \leq \infty$ with \eqref{eq:optimal-constant-p1-p2-condition} satisfy at least one of the following cases \ref{item:Young-bound-p1}, \ref{item:Young-bound-p2}, and \ref{item:Young-bound-p}.
\begin{enumerate}
\item \label{item:Young-bound-p1}
One has $p_1 = 1$.
\item \label{item:Young-bound-p2}
One has $p_2 = 1$.
\item \label{item:Young-bound-p}
One has $1 / p_1 + 1 / p_2 = 1$.
\end{enumerate}
Then $Y ( p_1 , p_2 ; G ) = 1$ holds for any locally compact group $G$.
\end{lemma}
Theorem \ref{thm:Young-order-reversing} follows from Lemma \ref{lem:Young-bound} in the cases \ref{item:Young-bound-p1}, \ref{item:Young-bound-p2}, and \ref{item:Young-bound-p}.
We show the following lemma of the symmetry of the optimal constant to prove Lemma \ref{lem:Young-bound}.
\begin{lemma}
\label{lem:optimal-constant-transform}
Let $G$, $p_1$, $p_2$, $p$, $\Delta$, and $Y ( p_1 , p_2 ; G )$ be as in Definition \ref{def:optimal-constant}.
\begin{enumerate}
\item \label{item:optimal-constant-transform-convolution}
One has
\begin{align}
\phi_1 * ( \phi_2 \Delta^{1 / p_1'} ) ( g ' )
= \left( \frac{\phi_2 ( \cdot^{-1} )}{\Delta^{1 / p_2}} \right) * \left( \frac{\phi_1 ( \cdot^{-1} )}{\Delta^{1 / p}} \right) ( g'^{-1} ) \Delta ( g' )^{- 1 / p} \label{eq:optimal-constant-transform-convolution-state}
\end{align}
for any $g' \in G$ and any measurable functions $\phi_1 , \phi_2 \colon G \to \mathbb{C}$.
\item \label{item:optimal-constant-transform-constant}
One has $Y ( p_1 , p_2 ; G ) = Y ( p_2 , p_1 ; G )$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item
We have
\begin{align}
\phi_1 * ( \phi_2 \Delta^{1 / p_1'} ) ( g ' )
& = \int_{G}^{} \phi_1 ( g ) \phi_2 ( g^{-1} g' ) \Delta ( g^{-1} g' )^{1 / p_1'} dg \\
& = \int_{G}^{} \phi_1 ( g' g ) \phi_2 ( g^{-1} ) \Delta ( g^{-1} )^{1 / p_1'} dg \label{eq:convolution-transform}
\end{align}
by the left invariance of $dg$.
Since
\begin{align}
\frac{1}{p} + \frac{1}{p_1'}
= \frac{1}{p_1} + \frac{1}{p_2} - 1 + \frac{1}{p_1'}
= \frac{1}{p_2} \label{eq:p-definition-transform}
\end{align}
by \eqref{eq:optimal-constant-p-definition}, we have
\begin{align}
\phi_1 ( g' g ) \phi_2 ( g^{-1} ) \Delta ( g^{-1} )^{1 / p_1'}
= \frac{\phi_2 ( g^{-1} ) \phi_1 ( g' g )}{\Delta ( g )^{1 / p_2} \Delta ( g^{-1} g'^{-1} )^{1 / p} \Delta ( g' )^{1 / p}}.
\end{align}
Thus, it follows that
\begin{align}
\int_{G}^{} \phi_1 ( g' g ) \phi_2 ( g^{-1} ) \Delta ( g^{-1} )^{1 / p_1'} dg
& = \int_{G}^{} \frac{\phi_2 ( g^{-1} ) \phi_1 ( g' g )}{\Delta ( g )^{1 / p_2} \Delta ( g^{-1} g'^{-1} )^{1 / p} } dg \Delta ( g' )^{- 1 / p} \\
& = \left( \frac{\phi_2 ( \cdot^{-1} )}{\Delta^{1 / p_2}} \right) * \left( \frac{\phi_1 ( \cdot^{-1} )}{\Delta^{1 / p}} \right) ( g'^{-1} ) \Delta ( g' )^{- 1 / p}
\end{align}
and hence we obtain \eqref{eq:optimal-constant-transform-convolution-state} by \eqref{eq:convolution-transform}.
\item
It suffices to show
\begin{align}
\| \phi_1 * ( \phi_2 \Delta_G^{1 / p_1'} ) \|_p
\leq Y ( p_2 , p_1 ; G) \| \phi_1 \|_{p_1} \| \phi_2 \|_{p_2} \label{eq:transform-conclusion}
\end{align}
for any measurable functions $\phi_1 , \phi_2 \colon G \to \mathbb{R}_{\geq 0}$.
We have
\begin{align}
\| \phi_1 * ( \phi_2 \Delta_G^{1 / p_1'} ) \|_p^p
& = \int_{G}^{} \left( \frac{\phi_2 ( \cdot^{-1} )}{\Delta^{1 / p_2}} \right) * \left( \frac{\phi_1 ( \cdot^{-1} )}{\Delta^{1 / p}} \right) ( g'^{-1} )^p \Delta ( g' )^{- 1} dg' \\
& = \left\| \left( \frac{\phi_2 ( \cdot^{-1} )}{\Delta^{1 / p_2}} \right) * \left( \frac{\phi_1 ( \cdot^{-1} )}{\Delta^{1 / p}} \right) \right\|_p^p \label{eq:convolution-Lp-transform}
\end{align}
by \ref{item:optimal-constant-transform-convolution} and \eqref{eq:modular-function}.
Since
\begin{align}
\frac{1}{p} + \frac{1}{p_2'}
= \frac{1}{p_1} \label{eq:p-definition-transform-similar}
\end{align}
by an argument similar to \eqref{eq:p-definition-transform}, we have
\begin{align}
\left\| \left( \frac{\phi_2 ( \cdot^{-1} )}{\Delta^{1 / p_2}} \right) * \left( \frac{\phi_1 ( \cdot^{-1} )}{\Delta^{1 / p}} \right) \right\|_p
\leq Y ( p_2 , p_1 ; G ) \left\| \frac{\phi_2 ( \cdot^{-1} )}{\Delta^{1 / p_2}} \right\|_{p_2} \left\| \frac{\phi_1 ( \cdot^{-1} )}{\Delta^{1 / p_1}} \right\|_{p_1}. \label{eq:optimal-apply}
\end{align}
The equality
\begin{align}
\left\| \frac{\phi_1 ( \cdot^{-1} )}{\Delta^{1 / p_1}} \right\|_{p_1}^{p_1}
= \int_{G}^{} \frac{\phi_1 ( g^{-1} )^{p_1}}{\Delta ( g )} dg
= \| \phi_1 \|_{p_1}^{p_1}
\end{align}
holds by \eqref{eq:modular-function} and hence we have
\begin{align}
\left\| \frac{\phi_1 ( \cdot^{-1} )}{\Delta^{1 / p_1}} \right\|_{p_1}
& = \| \phi_1 \|_{p_1}, &
\left\| \frac{\phi_2 ( \cdot^{-1} )}{\Delta^{1 / p_2}} \right\|_{p_2}
& = \| \phi_2 \|_{p_2}.
\end{align}
Thus, we obtain \eqref{eq:transform-conclusion} by \eqref{eq:convolution-Lp-transform} and \eqref{eq:optimal-apply}.
\qedhere
\end{enumerate}
\end{proof}
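In the unimodular discrete case $G = \mathbb{Z}$ (so $\Delta \equiv 1$), identity \eqref{eq:optimal-constant-transform-convolution-state} reduces to $( \phi_1 * \phi_2 ) ( n ) = ( \check{\phi}_2 * \check{\phi}_1 ) ( - n )$ with $\check{\phi} ( n ) := \phi ( - n )$, which can be checked numerically. The sketch below is our own; reversing a NumPy array realizes $n \mapsto -n$ up to the shift of the support.

```python
import numpy as np

rng = np.random.default_rng(1)
phi1 = rng.random(15)     # supported on {0, ..., 14}
phi2 = rng.random(25)     # supported on {0, ..., 24}

lhs = np.convolve(phi1, phi2)                     # (phi1 * phi2)(n), n = 0..38
# reversing realizes n -> -n (up to the support offset), so this array
# lists (phi2^ * phi1^)(-n) for n = 0..38
rhs = np.convolve(phi2[::-1], phi1[::-1])[::-1]
assert np.allclose(lhs, rhs)
```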
\begin{proof}[Proof of Lemma \ref{lem:Young-bound}]
It suffices to show
\begin{align}
Y ( p_1 , p_2 ; G) \geq 1 \label{eq:Young-classic-reverse}
\end{align}
by Example \ref{ex:Young-order-reversing-example} \ref{item:Young-order-reversing-example-classic}.
First, we show \eqref{eq:Young-classic-reverse} in the case \ref{item:Young-bound-p}.
Let $\phi \colon G \to \mathbb{R}_{\geq 0}$ be an integrable function with $\| \phi \|_1 = 1$, and define
\begin{align}
\phi_1 ( g )
& := \left\{
\begin{aligned}
& \phi ( g )^{1 / p_1} & & \text{if $p_1 < \infty$} \\
& 1 & & \text{if $p_1 = \infty$}
\end{aligned}
\right. , &
\phi_2 ( g )
& := \left\{
\begin{aligned}
& \left( \frac{\phi ( g^{-1} )}{\Delta ( g )} \right)^{1 / p_2} & & \text{if $p_2 < \infty$} \\
& 1 & & \text{if $p_2 = \infty$}
\end{aligned}
\right. .
\end{align}
Since
\begin{align}
\| \phi_1 \|_{p_1}
& = \| \phi \|_1
= 1, &
\| \phi_2 \|_{p_2}
& = \int_{G}^{} \frac{\phi ( g^{-1} )}{\Delta ( g )} dg
= \| \phi \|_1
= 1
\end{align}
hold by $1 / p_1 + 1 / p_2 = 1$ and \eqref{eq:modular-function}, the convolution $\phi_1 * ( \phi_2 \Delta^{1 / p_1'} )$ is continuous.
Thus, one has $\| \phi_1 * ( \phi_2 \Delta^{1 / p_1'} ) \|_\infty = 1$ by
\begin{align}
\phi_1 * ( \phi_2 \Delta^{1 / p_1'} ) ( e )
= \| \phi \|_1
= 1
\end{align}
and hence we obtain \eqref{eq:Young-classic-reverse}.
Second, we show \eqref{eq:Young-classic-reverse} in the case \ref{item:Young-bound-p1}.
One may assume $p_2 < \infty$ by the case \ref{item:Young-bound-p}.
When $\phi_2 \in L^{p_2} ( G )$ is fixed, there exists a sequence of integrable functions $\phi_{1 , m} \colon G \to \mathbb{R}_{\geq 0}$ with $\| \phi_{1 , m} \|_1 = 1$ such that $\phi_{1 , m} * \phi_2$ converges to $\phi_2$ in $L^{p_2} ( G )$ \cite[Theorem 20.15]{MR551496}.
Thus, we obtain \eqref{eq:Young-classic-reverse} because $\| \phi_{1 , m} * \phi_2 \|_{p_2}$ converges to $\| \phi_2 \|_{p_2} = \| \phi_{1 , m} \|_1 \| \phi_2 \|_{p_2}$.
Finally, \eqref{eq:Young-classic-reverse} in the case \ref{item:Young-bound-p2} follows from the case \ref{item:Young-bound-p1} by Lemma \ref{lem:optimal-constant-transform} \ref{item:optimal-constant-transform-constant}.
\end{proof}
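The construction in the case \ref{item:Young-bound-p} can be made concrete on $G = \mathbb{Z}$: with $\phi = \delta_0$, the functions $\phi_1 , \phi_2$ built in the proof are again delta functions, and the sup-norm of their convolution attains $\| \phi_1 \|_{p_1} \| \phi_2 \|_{p_2}$, exhibiting $Y ( p_1 , p_2 ; \mathbb{Z} ) = 1$ on this boundary. A numerical sketch (our own):

```python
import numpy as np

p1, p2 = 3.0, 1.5                        # 1/p1 + 1/p2 = 1, so p = infinity
delta = np.zeros(11)
delta[5] = 1.0                           # phi = delta_0 with ||phi||_1 = 1

phi1 = delta ** (1 / p1)                 # phi(g)^{1/p1}; still delta_0
phi2 = delta ** (1 / p2)                 # phi(g^{-1})^{1/p2}; delta_0 is symmetric

conv = np.convolve(phi1, phi2)
# sup norm of the convolution attains ||phi1||_{p1} * ||phi2||_{p2} = 1
assert np.max(np.abs(conv)) == 1.0
```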
\section{Proof of Theorem \ref{thm:Young-order-reversing}}
\label{sec:Young-order-proof}
In this section, we prove Theorem \ref{thm:Young-order-reversing} in the general case.
In Subsection \ref{subsec:Young-order-proof-Haar-subgroup}, we prepare a lemma (Lemma \ref{lem:subgroup-measure}) which represents the left Haar measure on $G$ by that on a closed subgroup $H \subset G$.
In Subsection \ref{subsec:Young-order-proof-Holder}, we give some inequalities (Example \ref{ex:Holder-apply}) to prove Theorem \ref{thm:Young-order-reversing} by applying H\"{o}lder's inequality (Fact \ref{fact:Holder}).
In Subsection \ref{subsec:Young-order-proof-Young-order-proof}, we complete the proof of Theorem \ref{thm:Young-order-reversing} by using Subsection \ref{subsec:Young-order-proof-Haar-subgroup} and Subsection \ref{subsec:Young-order-proof-Holder}.
\subsection{Representing the Haar measure by using a closed subgroup}
\label{subsec:Young-order-proof-Haar-subgroup}
In this subsection, we prove a lemma (Lemma \ref{lem:subgroup-measure}) which represents the left Haar measure on $G$ by that on a closed subgroup $H \subset G$, and we give some examples (Example \ref{ex:subgroup-represent}) which are used in the proof of Theorem \ref{thm:Young-order-reversing}.
We write $X := H \backslash G$ and $\overline{g} := H g \in X$ for the right coset of $g \in G$.
\begin{lemma}
\label{lem:subgroup-measure}
Let $H \subset G$ be a closed subgroup of a locally compact group $G$.
\begin{enumerate}
\item \label{item:subgroup-measure-modular-extend}
There exists a continuous function $\delta \colon G \to \mathbb{R}_{> 0}$ such that $\delta |_H$ is the modular function of $H$ and
\begin{align}
\int_{H}^{} \phi ( h g ) dh \delta ( g ) \label{eq:subgroup-measure-modular-extend-invariant}
\end{align}
is left $H$-invariant for any measurable function $\phi \colon G \to \mathbb{C}$.
\item \label{item:subgroup-measure-quotient}
We fix $\delta$ in \ref{item:subgroup-measure-modular-extend}.
Then there exists a Borel measure $d \overline{g}$ on $X$ such that
\begin{align}
\int_{X}^{} \int_{H}^{} \phi ( h g ) dh \delta ( g ) d \overline{g}
= \int_{G}^{} \phi ( g ) dg \label{eq:subgroup-measure-quotient-condition}
\end{align}
for any integrable function $\phi \colon G \to \mathbb{C}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item
There exists a continuous function $\rho \colon G \to \mathbb{R}_{> 0}$ such that
\begin{align}
\rho ( g h ) = \frac{\Delta_H ( h ) \rho ( g )}{\Delta ( h )} \label{eq:rho-function}
\end{align}
for any $g \in G$ and $h \in H$ \cite[Proposition 2.56]{MR3444405}, where $\Delta_H$ is the modular function of $H$.
By scaling, we may assume $\rho ( e ) = 1$.
When the continuous function $\delta \colon G \to \mathbb{R}_{> 0}$ is defined as
\begin{align}
\delta ( g )
:= \frac{1}{\Delta ( g^{-1} ) \rho ( g^{-1})}, \label{eq:delta-definition}
\end{align}
we have
\begin{align}
\delta ( h g )
& = \frac{1}{\Delta ( g^{-1} h^{-1} ) \rho ( g^{-1} h^{-1} )} \\
& = \frac{\Delta ( h^{-1} )}{\Delta ( g^{-1} h^{-1} ) \Delta_H ( h^{-1} ) \rho ( g^{-1})} \\
& = \frac{\Delta_H ( h )}{\Delta ( g^{-1} ) \rho ( g^{-1})} \\
& = \Delta_H ( h ) \delta ( g ) \label{eq:delta-distribute}
\end{align}
by \eqref{eq:rho-function}.
In particular, we have
\begin{align}
\delta ( h )
= \Delta_H ( h ) \delta ( e )
= \Delta_H ( h ).
\end{align}
Thus, it follows from \eqref{eq:delta-distribute} that
\begin{align}
\int_{H}^{} \phi ( h h' g ) dh \delta ( h' g )
= \int_{H}^{} \phi ( h h' g ) dh \delta ( h' ) \delta ( g )
= \int_{H}^{} \phi ( h g ) dh \delta ( g )
\end{align}
for any $h' \in H$ and any measurable function $\phi \colon G \to \mathbb{C}$, and hence we obtain \ref{item:subgroup-measure-modular-extend}.
\item
For any $g \in G$ and $h' \in H$, we have
\begin{align}
\delta ( h' g ) \int_{H}^{} \phi ( h g ) dh
= \delta ( g ) \int_{H}^{} \phi ( h h'^{-1} g ) dh
= \delta ( g ) \delta ( h' ) \int_{H}^{} \phi ( h g ) dh
\end{align}
for any measurable function $\phi \colon G \to \mathbb{C}$ by the left invariance of \eqref{eq:subgroup-measure-modular-extend-invariant} and hence
\begin{align}
\delta ( h' g ) = \delta ( h' ) \delta ( g ). \label{eq:delta-distribute-obtain}
\end{align}
Then
\begin{align}
\rho ( g ) := \frac{1}{\Delta ( g ) \delta ( g^{-1} )}
\end{align}
is a continuous function on $G$.
Since $\delta |_H$ is the modular function of $H$, we have
\begin{align}
\rho ( g h )
= \frac{1}{\Delta ( g h ) \delta ( h^{-1} g^{-1} )}
= \frac{\delta ( h )}{\Delta ( h ) \Delta ( g ) \delta ( g^{-1} )}
= \frac{\delta ( h ) \rho ( g )}{\Delta ( h )}
\end{align}
for any $g \in G$ and $h \in H$ by \eqref{eq:delta-distribute-obtain}.
Thus, there exists a Borel measure $dgH$ on $G / H$ such that
\begin{align}
\int_{G / H}^{} \int_{H}^{} \omega ( g h ) dh dgH
= \int_{G}^{} \omega ( g ) \rho ( g ) dg \label{eq:dgH-condition}
\end{align}
for any integrable function $\omega \colon G \to \mathbb{C}$ \cite[Theorem 2.58]{MR3444405}.
We set the Borel measure $d \overline{g}$ on $X := H \backslash G$ as
\begin{align}
\int_{X}^{} \alpha ( \overline{g} ) d\overline{g}
= \int_{G / H}^{} \alpha ( H g^{-1} ) dgH.
\end{align}
Then
\begin{align}
\int_{X}^{} \int_{H}^{} \phi ( h g ) dh \delta ( g ) d\overline{g}
= \int_{G / H}^{} \int_{H}^{} \phi ( h g^{-1} ) dh \delta ( g^{-1} ) dgH \label{eq:X-represent}
\end{align}
holds for any measurable function $\phi \colon G \to \mathbb{C}$.
Since $\delta |_H$ is the modular function of $H$, we have
\begin{align}
\int_{H}^{} \phi ( h g^{-1} ) dh
= \int_{H}^{} \frac{\phi ( h^{-1} g^{-1} )}{\delta ( h )} dh
\end{align}
by \eqref{eq:modular-function}.
Thus, one has
\begin{align}
\int_{G / H}^{} \int_{H}^{} \phi ( h g^{-1} ) dh \delta ( g^{-1} ) dgH
& = \int_{G / H}^{} \int_{H}^{} \frac{\phi ( h^{-1} g^{-1} )}{\delta ( h )} dh \delta ( g^{-1} ) dgH \\
& = \int_{G / H}^{} \int_{H}^{} \phi ( h^{-1} g^{-1} ) \delta ( h^{-1} g^{-1} ) dh dgH \qquad \label{eq:X-represent-transform}
\end{align}
by \eqref{eq:delta-distribute-obtain}.
In addition, it follows from \eqref{eq:dgH-condition} that
\begin{align}
\int_{G / H}^{} \int_{H}^{} \phi ( h^{-1} g^{-1} ) \delta ( h^{-1} g^{-1} ) dh dgH
& = \int_{G}^{} \phi ( g^{-1} ) \delta ( g^{-1} ) \rho ( g ) dg \\
& = \int_{G}^{} \frac{\phi ( g^{-1} )}{\Delta ( g )} dg. \label{eq:dgH-condition-apply}
\end{align}
Therefore, we obtain \eqref{eq:subgroup-measure-quotient-condition} by \eqref{eq:modular-function}, \eqref{eq:X-represent}, \eqref{eq:X-represent-transform}, and \eqref{eq:dgH-condition-apply}.
\qedhere
\end{enumerate}
\end{proof}
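For illustration (this example is not used in the proofs below, and the conventions for the Haar measure and the modular function follow \cite{MR3444405}), consider the $a x + b$ group $G := \{ ( a , b ) \mid a \in \mathbb{R}_{> 0} , b \in \mathbb{R} \}$ with multiplication $( a , b ) ( a' , b' ) := ( a a' , a b' + b )$, left Haar measure $dg = a^{-2} da \, db$, and modular function $\Delta ( a , b ) = 1 / a$, and take the closed subgroup $H := \{ ( 1 , b ) \mid b \in \mathbb{R} \} \cong \mathbb{R}$.
Since $\Delta |_H \equiv 1$ and $\Delta_H \equiv 1$, the constant function $\rho \equiv 1$ satisfies \eqref{eq:rho-function} and $\rho ( e ) = 1$, and \eqref{eq:delta-definition} yields
\begin{align}
\delta ( a , b )
= \frac{1}{\Delta ( ( a , b )^{-1} ) \rho ( ( a , b )^{-1} )}
= \frac{1}{\Delta ( a^{-1} , - a^{-1} b )}
= \frac{1}{a}.
\end{align}
One can check directly that $\delta ( h g ) = \Delta_H ( h ) \delta ( g )$ for any $h \in H$ and $g \in G$, as required in Lemma \ref{lem:subgroup-measure} \ref{item:subgroup-measure-modular-extend}.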
Now we give some examples obtained from Lemma \ref{lem:subgroup-measure}, which are used to prove Theorem \ref{thm:Young-order-reversing}.
\begin{example}
\label{ex:subgroup-represent}
Let $\phi_1 , \phi_2 \colon G \to \mathbb{R}_{\geq 0}$ be measurable functions.
We suppose $1 < p_1 , p_2 , p < \infty$ satisfy \eqref{eq:optimal-constant-p-definition}.
\begin{enumerate}
\item \label{item:subgroup-represent-phi1}
Let $s ( h , g ) := \phi_1 ( h g ) \delta ( g )^{1 / p_1}$ for $g \in G$ and $h \in H$.
Then
\begin{align}
S ( \overline{g} )
:= \int_{H}^{} s ( h , g )^{p_1} dh
= \int_{H}^{} \phi_1 ( h g )^{p_1} dh \delta ( g )
\end{align}
is well-defined by Lemma \ref{lem:subgroup-measure} \ref{item:subgroup-measure-modular-extend}, and we have
\begin{align}
\int_{X}^{} S ( \overline{g} ) d\overline{g}
= \| \phi_1 \|_{p_1}^{p_1} \label{eq:subgroup-represent-phi1-integral}
\end{align}
by Lemma \ref{lem:subgroup-measure} \ref{item:subgroup-measure-quotient}.
\item \label{item:subgroup-represent-phi2-g'}
Let $t ( g , g' ) := ( \phi_2 ( g^{-1} g' )^{p_2} \delta ( g' ) )^{1 / p}$ for $g , g' \in G$.
Since
\begin{align}
\int_{H}^{} t ( h^{-1} h' g , g' )^p dh
= \int_{H}^{} t ( h^{-1} g , g' )^p dh
\end{align}
for any $h' \in H$, the function
\begin{align}
T ( \overline{g} , \overline{g'} )
:= \int_{H}^{} t ( h^{-1} g , g' )^p dh
= \int_{H}^{} \phi_2 ( g^{-1} h g' )^{p_2} dh \delta ( g' )
\end{align}
is well-defined by Lemma \ref{lem:subgroup-measure} \ref{item:subgroup-measure-modular-extend}.
Thus, we have
\begin{align}
\int_{X}^{} T ( \overline{g} , \overline{g'} ) d\overline{g'}
= \int_{X}^{} \int_{H}^{} \phi_2 ( g^{-1} h g' )^{p_2} dh \delta ( g' ) d\overline{g'}
= \int_{G}^{} \phi_2 ( g^{-1} g' )^{p_2} dg'
= \| \phi_2 \|_{p_2}^{p_2}
\end{align}
by Lemma \ref{lem:subgroup-measure} \ref{item:subgroup-measure-quotient}.
\item \label{item:subgroup-represent-phi2-g}
Let
\begin{align}
u ( g , h , g' )
:= \left( \frac{\phi_2 ( g^{-1} h g' )^{p_2} \Delta ( g^{-1} h g' ) \delta ( g )}{\delta ( h )} \right)^{1 / p_1'}
\end{align}
for $g , g' \in G$ and $h \in H$.
Since $\delta |_H$ is the modular function of $H$ by Lemma \ref{lem:subgroup-measure} \ref{item:subgroup-measure-modular-extend},
\begin{align}
\int_{H}^{} u ( g , h , h' g' )^{p_1'} dh
& = \int_{H}^{} \frac{\phi_2 ( g^{-1} h h' g' )^{p_2} \Delta ( g^{-1} h h' g' ) \delta ( g )}{\delta ( h )} dh \\
& = \int_{H}^{} \phi_2 ( g^{-1} h^{-1} h' g' )^{p_2} \Delta ( g^{-1} h^{-1} h' g' ) dh \delta ( g ) \\
& = \int_{H}^{} \phi_2 ( g^{-1} h^{-1} g' )^{p_2} \Delta ( g^{-1} h^{-1} g' ) dh \delta ( g ) \label{eq:subgroup-represent-phi2-g-invariant}
\end{align}
is independent of $h' \in H$ by \eqref{eq:modular-function}.
Thus, the function
\begin{align}
U ( \overline{g} , \overline{g'} ) := \int_{H}^{} u ( g , h , g' )^{p_1'} dh
\end{align}
is well-defined by Lemma \ref{lem:subgroup-measure} \ref{item:subgroup-measure-modular-extend}.
Now we prove
\begin{align}
\int_{X}^{} U ( \overline{g} , \overline{g'} ) d\overline{g}
= \| \phi_2 \|_{p_2}^{p_2} \label{eq:subgroup-represent-phi2-g-integral}
\end{align}
for any $g' \in G$.
We have
\begin{align}
\int_{X}^{} U ( \overline{g} , \overline{g'} ) d\overline{g}
= \int_{X}^{} \int_{H}^{} \phi_2 ( g^{-1} h^{-1} g' )^{p_2} \Delta ( g^{-1} h^{-1} g' ) dh \delta ( g ) d\overline{g}
\end{align}
by \eqref{eq:subgroup-represent-phi2-g-invariant} and
\begin{align}
\int_{X}^{} \int_{H}^{} \phi_2 ( g^{-1} h^{-1} g' )^{p_2} \Delta ( g^{-1} h^{-1} g' ) dh \delta ( g ) d\overline{g}
& = \int_{G}^{} \phi_2 ( g^{-1} g' )^{p_2} \Delta ( g^{-1} g' ) dg \\
& = \int_{G}^{} \phi_2 ( g^{-1} )^{p_2} \Delta ( g^{-1} ) dg
\end{align}
by Lemma \ref{lem:subgroup-measure} \ref{item:subgroup-measure-quotient}.
Since
\begin{align}
\int_{G}^{} \phi_2 ( g^{-1} )^{p_2} \Delta ( g^{-1} ) dg
= \int_{G}^{} \phi_2 ( g )^{p_2} dg
= \| \phi_2 \|_{p_2}^{p_2}
\end{align}
by \eqref{eq:modular-function}, we obtain \eqref{eq:subgroup-represent-phi2-g-integral}.
\item \label{item:subgroup-represent-convolution}
Let $s$, $t$, and $u$ be as in \ref{item:subgroup-represent-phi1}, \ref{item:subgroup-represent-phi2-g'}, and \ref{item:subgroup-represent-phi2-g}, respectively.
Now we show
\begin{align}
& \quad \phi_1 * ( \phi_2 \Delta^{1/p_1'} ) ( h' g' ) \\
& = \int_{X}^{} \int_{H}^{} \frac{s ( h , g ) t ( h'^{-1} h g , g') u ( g , h^{-1} h' , g' ) \delta ( h^{-1} h' )^{1 / p_1'}}{\delta ( g' )^{1 / p}} dh d\overline{g} \label{eq:subgroup-represent-convolution-claim}
\end{align}
for any $h' \in H$ and $g' \in G$.
Since
\begin{align}
& \quad \phi_1 * ( \phi_2 \Delta^{1/p_1'} ) ( h' g' ) \\
& = \int_{G}^{} \phi_1 ( g ) \phi_2 ( g^{-1} h' g' ) \Delta ( g^{-1} h' g' )^{1 / p_1'} dg \\
& = \int_{X}^{} \int_{H}^{} \phi_1 ( h g ) \phi_2 ( g^{-1} h^{-1} h' g' ) \Delta ( g^{-1} h^{-1} h' g' )^{1 / p_1'} dh \delta ( g ) d\overline{g}
\end{align}
by Lemma \ref{lem:subgroup-measure}, we obtain \eqref{eq:subgroup-represent-convolution-claim} by
\begin{align}
& \quad \phi_1 ( h g ) \phi_2 ( g^{-1} h^{-1} h' g' ) \Delta ( g^{-1} h^{-1} h' g' )^{1 / p_1'} \delta ( g ) \\
& = \frac{s ( h , g ) t ( h'^{-1} h g , g') u ( g , h^{-1} h' , g' ) \delta ( h^{-1} h' )^{1 / p_1'}}{\delta ( g' )^{1 / p}}.
\end{align}
\item \label{item:subgroup-represent-Lp-norm}
Since
\begin{align}
\| \phi_1 * ( \phi_2 \Delta^{1/p_1'} ) \|_p^p
= \int_{X}^{} \int_{H}^{} \phi_1 * ( \phi_2 \Delta^{1/p_1'} ) ( h' g' )^p dh' \delta ( g' ) d\overline{g'}
\end{align}
by Lemma \ref{lem:subgroup-measure}, we have
\begin{align}
& \quad \| \phi_1 * ( \phi_2 \Delta^{1/p_1'} ) \|_p^p \\
& = \int_{X}^{} \int_{H}^{} \int_{X}^{} \int_{H}^{} s ( h , g ) t ( h'^{-1} h g , g') u ( g , h^{-1} h' , g' ) \delta ( h^{-1} h' )^{1 / p_1'} dh d\overline{g}^p dh' d\overline{g'}
\end{align}
by \ref{item:subgroup-represent-convolution}.
\end{enumerate}
\end{example}
\subsection{H\"{o}lder's inequality}
\label{subsec:Young-order-proof-Holder}
In this subsection, we obtain some inequalities (Example \ref{ex:Holder-apply}) by using H\"{o}lder's inequality (Fact \ref{fact:Holder}) to prove Theorem \ref{thm:Young-order-reversing}.
\begin{fact}[H\"{o}lder's inequality]
\label{fact:Holder}
Let $k , l \in \mathbb{Z}_{\geq 1}$ and $p_{i,j} , c_i > 0$ for $i = 1 , \cdots , k$ and $j = 1 , \cdots , l$.
Then
\begin{align}
& \quad \int_{G}^{} \phi_1 (g)^{p_1} \cdots \phi_l (g)^{p_l} dg^c \\
& \leq \int_{G}^{} \phi_1 (g)^{p_{1,1}} \cdots \phi_l (g)^{p_{1,l}} dg^{c_1} \cdots \int_{G}^{} \phi_1 (g)^{p_{k,1}} \cdots \phi_l (g)^{p_{k,l}} dg^{c_k}
\end{align}
holds for any measurable functions $\phi_1 , \cdots , \phi_l \colon G \to \mathbb{R}_{\geq 0}$ on a measure space $G$, where
\begin{align}
c := c_1 + \cdots + c_k, & &
p_j := \frac{p_{1,j} c_1 + \cdots + p_{k,j} c_k}{c}
\end{align}
for $j = 1 , \cdots , l$.
\end{fact}
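For instance (this special case is only an illustration and is not used below), taking $k = 2$, $l = 1$, and $c_1 = c_2 = 1 / 2$ in Fact \ref{fact:Holder} gives
\begin{align}
\int_{G}^{} \phi_1 ( g )^{\frac{p_{1,1} + p_{2,1}}{2}} dg
\leq \int_{G}^{} \phi_1 ( g )^{p_{1,1}} dg^{1/2} \int_{G}^{} \phi_1 ( g )^{p_{2,1}} dg^{1/2}
\end{align}
for any $p_{1,1} , p_{2,1} > 0$, which is the Cauchy--Schwarz inequality applied to $\phi_1^{p_{1,1} / 2}$ and $\phi_1^{p_{2,1} / 2}$ and expresses the log-convexity of $p \mapsto \int_{G}^{} \phi_1 ( g )^p dg$.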
\begin{example}
\label{ex:Holder-apply}
Let $\phi_1$, $\phi_2$, $S$, $t$, $T$, $u$, and $U$ be as in Example \ref{ex:subgroup-represent}.
Suppose that $1 < p_1 , p_2 , p < \infty$ satisfy \eqref{eq:optimal-constant-p-definition}.
\begin{enumerate}
\item \label{item:Holder-apply-H}
We have
\begin{align}
\int_{H}^{} ( t ( h^{-1} g , g' ) u ( g , h , g' ) )^{p_2} dh^{1 / p_2}
\leq T ( \overline{g} , \overline{g'} )^{1 / p} U ( \overline{g} , \overline{g'} )^{1 / p_1'}
\end{align}
by \eqref{eq:p-definition-transform} and Fact \ref{fact:Holder}.
\item \label{item:Holder-apply-X}
We show
\begin{align}
& \quad \int_{X}^{} S ( \overline{g} )^{1 / p_1} T ( \overline{g} , \overline{g'} )^{1 / p} U ( \overline{g} , \overline{g'} )^{1 / p_1'} d\overline{g}^p \\
& \leq \left( \| \phi_1 \|_{p_1}^{p_1 / p_2'} \| \phi_2 \|_{p_2}^{p_2 / p_1'} \right)^p \int_{X}^{} S ( \overline{g} ) T ( \overline{g} , \overline{g'} ) d\overline{g}. \label{eq:Holder-apply-X-claim}
\end{align}
One has
\begin{align}
p \left( \frac{1}{p_1'} + \frac{1}{p_2'} \right) + 1
= p \left( 2 - \frac{1}{p_1} - \frac{1}{p_2} + \frac{1}{p}\right)
= p
\end{align}
by \eqref{eq:optimal-constant-p-definition}.
Thus, it follows from \eqref{eq:p-definition-transform-similar} and Fact \ref{fact:Holder} that
\begin{align}
& \quad \int_{X}^{} S ( \overline{g} )^{1 / p_1} T ( \overline{g} , \overline{g'} )^{1 / p} U ( \overline{g} , \overline{g'} )^{1 / p_1'} d\overline{g}^p \\
& \leq \left( \int_{X}^{} S ( \overline{g} ) d\overline{g}^{1 / p_2'} \int_{X}^{} U ( \overline{g} , \overline{g'} ) d\overline{g}^{1 / p_1'} \right)^p \int_{X}^{} S ( \overline{g} ) T ( \overline{g} , \overline{g'} ) d\overline{g}
\end{align}
and hence we obtain \eqref{eq:Holder-apply-X-claim} by Example \ref{ex:subgroup-represent} \ref{item:subgroup-represent-phi1} and \ref{item:subgroup-represent-phi2-g}.
\end{enumerate}
\end{example}
\subsection{Completion of the proof}
\label{subsec:Young-order-proof-Young-order-proof}
In this subsection, we complete the proof of Theorem \ref{thm:Young-order-reversing} by using Example \ref{ex:subgroup-represent} and Example \ref{ex:Holder-apply}.
\begin{proof}[Proof of Theorem \ref{thm:Young-order-reversing}]
It suffices to consider the case of $1 < p_1 , p_2 , p < \infty$ by Lemma \ref{lem:Young-bound}.
Let $s$, $S$, $t$, $T$, $u$, and $U$ be as in Example \ref{ex:subgroup-represent} for $\phi_1, \phi_2 \colon G \to \mathbb{R}_{\geq 0}$ with $\| \phi_1 \|_{p_1} = \| \phi_2 \|_{p_2} = 1$.
Then it suffices to show
\begin{align}
& \quad \int_{X}^{} \int_{H}^{} \int_{X}^{} \int_{H}^{} s ( h , g ) t ( h'^{-1} h g , g') u ( g , h^{-1} h' , g' ) \delta ( h^{-1} h' )^{1 / p_1'} dh d\overline{g}^p dh' d\overline{g'} \\
& \leq Y ( p_1 , p_2 ; H )^p \label{eq:conclusion}
\end{align}
by Example \ref{ex:subgroup-represent} \ref{item:subgroup-represent-Lp-norm}.
We have
\begin{align}
& \quad \int_{H}^{} \int_{X}^{} \int_{H}^{} s ( h , g ) t ( h'^{-1} h g , g') u ( g , h^{-1} h' , g' ) \delta ( h^{-1} h' )^{1 / p_1'} dh d\overline{g}^p dh' \\
& \leq \int_{X}^{} \int_{H}^{} \int_{H}^{} s ( h , g ) t ( h'^{-1} h g , g') u ( g , h^{-1} h' , g' ) \delta ( h^{-1} h' )^{1 / p_1'} dh^p dh'^{1 / p} d\overline{g}^p \label{eq:Minkowski-apply}
\end{align}
by the Minkowski integral inequality.
Since $\delta |_H$ is the modular function of $H$ by Lemma \ref{lem:subgroup-measure} \ref{item:subgroup-measure-modular-extend}, we have
\begin{align}
& \quad \int_{H}^{} \int_{H}^{} s ( h , g ) t ( h'^{-1} h g , g') u ( g , h^{-1} h' , g' ) \delta ( h^{-1} h' )^{1 / p_1'} dh^p dh'^{1 / p} \\
& \leq Y ( p_1 , p_2 ; H ) S ( \overline{g} )^{1 / p_1} \int_{H}^{} ( t ( h^{-1} g , g' ) u ( g , h , g' ) )^{p_2} dh^{1 / p_2} \\
& \leq Y ( p_1 , p_2 ; H ) S ( \overline{g} )^{1 / p_1} T ( \overline{g} , \overline{g'} )^{1 / p} U ( \overline{g} , \overline{g'} )^{1 / p_1'}
\end{align}
by the definition of $Y ( p_1 , p_2 ; H )$ and Example \ref{ex:Holder-apply} \ref{item:Holder-apply-H}.
Thus, it follows from \eqref{eq:Minkowski-apply} that
\begin{align}
& \quad \int_{H}^{} \int_{X}^{} \int_{H}^{} s ( h , g ) t ( h'^{-1} h g , g') u ( g , h^{-1} h' , g' ) \delta ( h^{-1} h' )^{1 / p_1'} dh d\overline{g}^p dh' \\
& \leq \left( Y ( p_1 , p_2 ; H ) \int_{X}^{} S ( \overline{g} )^{1 / p_1} T ( \overline{g} , \overline{g'} )^{1 / p} U ( \overline{g} , \overline{g'} )^{1 / p_1'} d\overline{g} \right)^p.
\end{align}
We have
\begin{align}
\int_{X}^{} S ( \overline{g} )^{1 / p_1} T ( \overline{g} , \overline{g'} )^{1 / p} U ( \overline{g} , \overline{g'} )^{1 / p_1'} d\overline{g}^p
\leq \int_{X}^{} S ( \overline{g} ) T ( \overline{g} , \overline{g'} ) d\overline{g}
\end{align}
by $\| \phi_1 \|_{p_1} = \| \phi_2 \|_{p_2} = 1$ and Example \ref{ex:Holder-apply} \ref{item:Holder-apply-X} and hence
\begin{align}
& \int_{H}^{} \int_{X}^{} \int_{H}^{} s ( h , g ) t ( h'^{-1} h g , g') u ( g , h^{-1} h' , g' ) \delta ( h^{-1} h' )^{1 / p_1'} dh d\overline{g}^p dh' \\
& \leq Y ( p_1 , p_2 ; H )^p \int_{X}^{} S ( \overline{g} ) T ( \overline{g} , \overline{g'} ) d\overline{g}.
\end{align}
Thus, it follows that
\begin{align}
& \quad \int_{X}^{} \int_{H}^{} \int_{X}^{} \int_{H}^{} s ( h , g ) t ( h'^{-1} h g , g') u ( g , h^{-1} h' , g' ) \delta ( h^{-1} h' )^{1 / p_1'} dh d\overline{g}^p dh' d\overline{g'} \\
& \leq Y ( p_1 , p_2 ; H )^p \int_{X}^{} \int_{X}^{} S ( \overline{g} ) T ( \overline{g} , \overline{g'} ) d\overline{g} d\overline{g'}.
\end{align}
Since
\begin{align}
\int_{X}^{} \int_{X}^{} S ( \overline{g} ) T ( \overline{g} , \overline{g'} ) d\overline{g} d\overline{g'}
= \int_{X}^{} \int_{X}^{} T ( \overline{g} , \overline{g'} ) d\overline{g'} S ( \overline{g} ) d\overline{g}
= \int_{X}^{} S ( \overline{g} ) d\overline{g}
= 1
\end{align}
by $\| \phi_1 \|_{p_1} = \| \phi_2 \|_{p_2} = 1$ and Example \ref{ex:subgroup-represent} \ref{item:subgroup-represent-phi1} and \ref{item:subgroup-represent-phi2-g'}, we obtain \eqref{eq:conclusion}.
\end{proof}
\section{Proof of Corollary \ref{cor:Young-non-compact-dimension}}
\label{sec:non-compact-dimension}
In this section, we show Corollary \ref{cor:Young-non-compact-dimension} by using Theorem \ref{thm:Young-order-reversing} and the argument of Jing--Tran--Zhang \cite{jing2021nonabelian}.
We write $\mathcal{A}$ for the set of connected Lie groups $G$ satisfying the assumption in Corollary \ref{cor:Young-non-compact-dimension} (i.e. the center of the semisimple part of $G$ is a finite group).
We note that any connected solvable Lie group is an element of $\mathcal{A}$.
Let $r ( G )$ be the dimension of a maximal compact subgroup of a connected Lie group $G$.
We show the following lemma to prove Corollary \ref{cor:Young-non-compact-dimension}.
\begin{lemma}
\label{lem:non-compact-dimension-bound}
Suppose that a real number $d ( G )$ is defined for each $G \in \mathcal{A}$, and
\begin{align}
d ( G )
\geq d ( H ) + d ( G / H ) \label{eq:non-compact-dimension-bound-normal}
\end{align}
holds for any connected closed normal subgroup $H \in \mathcal{A}$ of any $G \in \mathcal{A}$ with $G / H \in \mathcal{A}$.
For $G \in \mathcal{A}$, we denote by $I ( G )$ the inequality
\begin{align}
d ( G )
\geq d ( \mathbb{R} ) \dim G + ( d ( \mathbb{R} / \mathbb{Z} ) - d ( \mathbb{R} ) ) r ( G ).
\end{align}
\begin{enumerate}
\item \label{item:non-compact-dimension-bound-exact}
Suppose that a connected closed normal subgroup $H \in \mathcal{A}$ of $G \in \mathcal{A}$ satisfies $G / H \in \mathcal{A}$.
If $I ( H )$ and $I ( G / H )$ hold, then $I ( G )$ also holds.
\item \label{item:non-compact-dimension-bound-solvable}
Every non-trivial connected solvable Lie group $G$ satisfies $I ( G )$.
\item \label{item:non-compact-dimension-bound-order}
Furthermore, we assume that
\begin{align}
d ( \mathbb{R} / \mathbb{Z} ) = 0 \leq d ( H ) \leq d ( G ) \label{eq:non-compact-dimension-bound-order-assume}
\end{align}
for any connected closed subgroup $H \in \mathcal{A}$ of any $G \in \mathcal{A}$.
Then every $G \in \mathcal{A}$ satisfies $I ( G )$, that is,
\begin{align}
d ( G )
\geq d ( \mathbb{R} ) ( \dim G - r ( G ) ). \label{eq:non-compact-dimension-bound-order-simple}
\end{align}
\end{enumerate}
\end{lemma}
Jing--Tran--Zhang generalized the Brunn--Minkowski inequality to any Lie group by essentially using Lemma \ref{lem:non-compact-dimension-bound} \cite[Theorem 1.1]{jing2021nonabelian}.
Now we give some examples of Lemma \ref{lem:non-compact-dimension-bound} \ref{item:non-compact-dimension-bound-solvable}.
\begin{example}
\label{ex:non-compact-dimension-bound-apply}
\begin{enumerate}
\item \label{item:non-compact-dimension-bound-apply-equal}
If the equality of \eqref{eq:non-compact-dimension-bound-normal} holds for any connected closed normal subgroup $H \in \mathcal{A}$ of any $G \in \mathcal{A}$, then the equality of $I ( G )$ also holds for any connected solvable Lie group $G$.
Actually, if $G$ is not trivial, then this claim can be shown by replacing $d ( G )$ with $- d ( G )$ and applying Lemma \ref{lem:non-compact-dimension-bound} \ref{item:non-compact-dimension-bound-solvable}.
If $G$ is trivial, we have $d ( G ) = d ( G ) + d ( G )$ because the equality of \eqref{eq:non-compact-dimension-bound-normal} holds.
Thus, we have $d ( G ) = r ( G ) = 0$ and hence the equality of $I ( G )$ holds.
\item \label{item:non-compact-dimension-bound-apply-Nielsen}
Let $d ( G ) := \mathrm{rank} ( \ker ( \tilde{G} \to G ) )$ for a connected Lie group $G$, where $\tilde{G}$ is the universal covering of $G$.
Now we show $d ( G ) = r ( G )$ for any connected solvable Lie group $G$.
The kernel $\ker ( \tilde{G} \to G )$ is isomorphic to the fundamental group $\pi_1 ( G )$ of $G$ \cite[Theorem 9.5.4]{MR3025417}.
Since $\pi_1 ( H ) \to \pi_1 ( G )$ is injective and $\pi_1 ( G ) / \pi_1 ( H )$ is isomorphic to $\pi_1 ( G / H )$ for any $G \in \mathcal{A}$ and any connected closed normal subgroup $H \in \mathcal{A}$ of $G$ \cite[Remark 11.1.17]{MR3025417}, we have
\begin{align}
d ( G )
= \mathrm{rank} ( \pi_1 ( G ) )
= \mathrm{rank} ( \pi_1 ( H ) ) + \mathrm{rank} ( \pi_1 ( G / H ) )
= d ( H ) + d ( G / H ).
\end{align}
Thus, the equality of $I ( G )$ holds for any connected solvable Lie group by \ref{item:non-compact-dimension-bound-apply-equal}.
In this case, we obtain
\begin{align}
d ( G )
= d ( \mathbb{R} ) \dim G + ( d ( \mathbb{R} / \mathbb{Z} ) - d ( \mathbb{R} ) ) r ( G )
= r ( G )
\end{align}
by $d ( \mathbb{R} ) = 0$ and $d ( \mathbb{R} / \mathbb{Z} ) = 1$.
\end{enumerate}
\end{example}
Now we show the following lemma to prove Lemma \ref{lem:non-compact-dimension-bound}.
\begin{lemma}
\label{lem:non-compact-dimension-deduce}
Let $G$ be a connected Lie group.
\begin{enumerate}
\item \label{item:non-compact-dimension-deduce-normal}
One has $r ( G ) = r ( H ) + r ( G / H )$ for any closed normal subgroup $H \lhd G$.
\item \label{item:non-compact-dimension-deduce-exist-solvable}
Every connected solvable Lie group $G$ with $\dim G \geq 2$ satisfies the following condition \ref{item:non-compact-dimension-deduce-exist-solvable-condition}.
\begin{enumerate}
\item \label{item:non-compact-dimension-deduce-exist-solvable-condition}
There exists a closed normal subgroup $H \in \mathcal{A}$ of $G$ such that $G / H \in \mathcal{A}$ and $1 \leq \dim H < \dim G$.
\end{enumerate}
\item \label{item:non-compact-dimension-deduce-exist}
Suppose that $G \in \mathcal{A}$ and $\dim G \geq 2$.
If $G$ does not satisfy the condition \ref{item:non-compact-dimension-deduce-exist-solvable-condition}, then there exists a closed subgroup $H \in \mathcal{A}$ of $G$ with $\dim H < \dim G$ such that
\begin{align}
\dim H - r ( H ) = \dim G - r ( G ). \label{eq:non-compact-dimension-deduce-exist-state}
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item
Let $K \subset G$ be a maximal compact subgroup of $G$.
Then $K \cap H$ and $K / (K \cap H)$ are maximal compact subgroups of $H$ and $G / H$, respectively \cite[Theorem 14.3.13 (i) (a)]{MR3025417}.
Thus, we obtain
\begin{align}
r ( G )
= \dim K
= \dim ( K \cap H ) + \dim ( K / (K \cap H) )
= r ( H ) + r ( G / H ).
\end{align}
\item
Since $G$ is a connected solvable Lie group, there exists a connected closed solvable normal subgroup $H \lhd G$ such that $\dim ( G / H ) = 1$.
Then $1 \leq \dim H < \dim G$ holds by $\dim G \geq 2$, and we have
\begin{align}
\dim G
= \dim H + \dim ( G / H ). \label{eq:dimension-sum}
\end{align}
Since $G / H$ is abelian by $\dim ( G / H ) = 1$, we have $G / H \in \mathcal{A}$.
Thus, $G$ satisfies the condition \ref{item:non-compact-dimension-deduce-exist-solvable-condition}.
\item
Let $R \lhd G$ be the radical (the largest connected solvable closed normal subgroup) of $G$.
Since $\dim G \geq 2$ and $G$ does not satisfy the condition \ref{item:non-compact-dimension-deduce-exist-solvable-condition}, $G$ is not a solvable Lie group by \ref{item:non-compact-dimension-deduce-exist-solvable}.
Thus, we have $\dim R < \dim G$.
Since $G / R \in \mathcal{A}$ by $G \in \mathcal{A}$, the connected Lie group $G$ is semisimple (i.e. $\dim R = 0$) by the assumption that $G$ does not satisfy the condition \ref{item:non-compact-dimension-deduce-exist-solvable-condition}.
Let $G = K A N$ be the Iwasawa decomposition.
Then the closed subgroup $H := A N \subset G$ is a simply connected solvable Lie group \cite[Theorem 6.46]{MR1920389}.
Thus, we have $\dim H < \dim G$, $H \in \mathcal{A}$, and $r ( H ) = 0$.
Since $K \subset G$ is a maximal compact subgroup of $G$ by $G \in \mathcal{A}$ \cite[Theorem 6.31 (g)]{MR1920389}, the equality
\begin{align}
\dim H - r ( H )
= \dim G - \dim K
= \dim G - r ( G )
\end{align}
is obtained.
\qedhere
\end{enumerate}
\end{proof}
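For example (as an illustration of \ref{item:non-compact-dimension-deduce-exist}), the simple Lie group $G := \mathrm{SL} ( 2 , \mathbb{R} )$ has the finite center $\{ \pm I \}$, so $G \in \mathcal{A}$, and $G$ does not satisfy the condition \ref{item:non-compact-dimension-deduce-exist-solvable-condition}. Its Iwasawa decomposition $G = K A N$ with $K = \mathrm{SO} ( 2 )$ gives the closed subgroup $H := A N$ of upper triangular matrices with positive diagonal entries and determinant $1$, and we have
\begin{align}
\dim H - r ( H )
= 2 - 0
= 3 - 1
= \dim G - r ( G ).
\end{align}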
\begin{proof}[Proof of Lemma \ref{lem:non-compact-dimension-bound}]
\begin{enumerate}
\item
Since
\begin{align}
d ( G )
\geq d ( \mathbb{R} ) ( \dim H + \dim ( G / H ) ) + ( d ( \mathbb{R} / \mathbb{Z} ) - d ( \mathbb{R} ) ) ( r ( H ) + r ( G / H ) )
\end{align}
by $I ( H )$, $I ( G / H )$, and \eqref{eq:non-compact-dimension-bound-normal}, we obtain $I ( G )$ by \eqref{eq:dimension-sum} and Lemma \ref{lem:non-compact-dimension-deduce} \ref{item:non-compact-dimension-deduce-normal}.
\item
We prove it by induction on $\dim G$.
If $\dim G = 1$, then either $G = \mathbb{R}$ or $G = \mathbb{R} / \mathbb{Z}$ holds.
We obtain $I ( \mathbb{R} )$ by $r ( \mathbb{R} ) = 0$, and $I ( \mathbb{R} / \mathbb{Z} )$ by $r ( \mathbb{R} / \mathbb{Z} ) = 1$.
Now we show $I ( G )$ when $\dim G \geq 2$.
Then $G$ is a solvable Lie group and hence $G$ satisfies the condition \ref{item:non-compact-dimension-deduce-exist-solvable-condition} by Lemma \ref{lem:non-compact-dimension-deduce} \ref{item:non-compact-dimension-deduce-exist-solvable}.
Then the subgroup $H$ given by the condition \ref{item:non-compact-dimension-deduce-exist-solvable-condition} and the quotient $G / H$ are connected solvable Lie groups of smaller dimension, and hence we have $I ( H )$ and $I ( G / H )$ by the induction hypothesis.
Thus, we also have $I ( G )$ by \ref{item:non-compact-dimension-bound-exact}.
\item
We prove it by induction on $\dim G$.
If $\dim G = 0$, then one has $r ( G ) = 0$ and hence
\begin{align}
d ( G ) \geq 0 = d ( \mathbb{R} ) ( \dim G - r ( G ) ).
\end{align}
If $\dim G = 1$, then $G$ is a solvable Lie group and hence $I ( G )$ follows from \ref{item:non-compact-dimension-bound-solvable}.
Now we show $I ( G )$ when $\dim G \geq 2$.
If $G$ satisfies the condition \ref{item:non-compact-dimension-deduce-exist-solvable-condition}, then $I ( H )$ and $I ( G / H )$ hold for the subgroup $H$ therein by the induction hypothesis, and hence $I ( G )$ follows from \ref{item:non-compact-dimension-bound-exact}.
Thus, it suffices to show $I ( G )$ when $G$ does not satisfy the condition \ref{item:non-compact-dimension-deduce-exist-solvable-condition}.
By Lemma \ref{lem:non-compact-dimension-deduce} \ref{item:non-compact-dimension-deduce-exist}, there exists a closed subgroup $H \in \mathcal{A}$ of $G$ such that $\dim H < \dim G$ and \eqref{eq:non-compact-dimension-deduce-exist-state}.
We have $I ( H )$ by the induction hypothesis and hence
\begin{align}
d ( G )
\geq d ( H )
\geq d ( \mathbb{R} ) ( \dim H - r ( H ) )
\end{align}
by \eqref{eq:non-compact-dimension-bound-order-assume} and \eqref{eq:non-compact-dimension-bound-order-simple}.
Thus, we obtain $I ( G )$ by \eqref{eq:non-compact-dimension-deduce-exist-state}.
\qedhere
\end{enumerate}
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:Young-non-compact-dimension}]
Let $d ( G ) := - \ln ( Y ( p_1 , p_2 ; G ) )$ for a locally compact group $G$.
Then
\begin{align}
d ( G )
& := - \ln ( Y ( p_1 , p_2 ; G ) ) \\
& \geq - \ln ( Y ( p_1 , p_2 ; H ) Y ( p_1 , p_2 ; G / H ) ) \\
& = - \ln ( Y ( p_1 , p_2 ; H ) ) - \ln ( Y ( p_1 , p_2 ; G / H ) ) \\
& = d ( H ) + d ( G / H )
\end{align}
holds for any connected closed normal subgroup $H \lhd G$ and hence we have \eqref{eq:non-compact-dimension-bound-normal}.
In addition,
\begin{align}
d ( G )
:= - \ln ( Y ( p_1 , p_2 ; G ) )
\geq - \ln ( Y ( p_1 , p_2 ; H ) )
= d ( H ) \label{eq:d-order}
\end{align}
holds for any closed subgroup $H \subset G$ by Theorem \ref{thm:Young-order-reversing}.
We have
\begin{align}
d ( H )
:= - \ln ( Y ( p_1 , p_2 ; H ) )
\geq 0 \label{eq:d-positive}
\end{align}
by Example \ref{ex:Young-order-reversing-example} \ref{item:Young-order-reversing-example-classic}, and the equality holds for $H := \mathbb{R} / \mathbb{Z}$ by Corollary \ref{cor:Young-R-compare}.
Since \eqref{eq:non-compact-dimension-bound-order-assume} follows from \eqref{eq:d-order} and \eqref{eq:d-positive}, we have \eqref{eq:non-compact-dimension-bound-order-simple} for $G \in \mathcal{A}$ by Lemma \ref{lem:non-compact-dimension-bound} \ref{item:non-compact-dimension-bound-order}.
Thus, the inequality
\begin{align}
Y ( p_1 , p_2 ; G )
= e^{- d ( G )}
\leq e^{- d ( \mathbb{R} ) ( \dim G - r ( G ) )}
= Y ( p_1 , p_2 ; \mathbb{R} )^{\dim G - r ( G )}
\end{align}
is obtained.
\end{proof}
\section*{Acknowledgement}
This work was supported by JSPS KAKENHI Grant Number JP19J22628 and Leading Graduate Course for Frontiers of Mathematical Sciences and Physics (FMSP).
The author would like to thank his advisor Toshiyuki Kobayashi for his support.
The author is also grateful to Yuichiro Tanaka and Toshihisa Kubo for their careful comments.
\printbibliography
\noindent
Takashi Satomi: Graduate School of Mathematical Sciences, The University of Tokyo, 3-8-1 Komaba Meguro-ku Tokyo 153-8914, Japan.
\noindent
E-mail: tsatomi@ms.u-tokyo.ac.jp
\end{document}
\section{Introduction}
\label{sec:1}
Since the introduction of the Dirac monopole by P.A.M. Dirac \cite{kn:1}, the magnetic monopole has been a subject of great interest, both theoretically and experimentally. The Dirac monopole has since been generalized to non-Abelian monopoles, most notably the Wu-Yang monopole in SU(2) Yang-Mills theory \cite{kn:2} and the 't Hooft-Polyakov monopole in SU(2) Yang-Mills-Higgs (YMH) theory \cite{kn:3}. While the Dirac monopole and the Wu-Yang monopole possess infinite energy due to the presence of a point singularity in the solutions, the 't Hooft-Polyakov monopole possesses finite energy with no singularity anywhere. The mass of the 't Hooft-Polyakov monopole was estimated to be of order 137 $M_{\scalebox{.6}{\mbox{W}}}$, where $M_{\scalebox{.6}{\mbox{W}}}$ is the mass of the intermediate vector boson.
The coupling of gravity to the SU(2) YMH theory, known as the SU(2) Einstein-Yang-Mills-Higgs (EYMH) theory, has been shown to possess important solutions \cite{kn:4}. These include globally regular gravitating monopole solutions, their radial excitations, and magnetically charged black hole solutions. For small gravitational coupling, the gravitating monopole solution emerges smoothly from the flat space 't Hooft-Polyakov monopole. The (normalized) mass of the gravitating monopole solution decreases with increasing gravitational coupling, and the solution ceases to exist beyond a maximal value of the gravitational coupling. Besides the fundamental gravitating monopole there exist radially excited monopole solutions, where the gauge field function of the $n$-th excited monopole possesses $n$ nodes; this is different from the gauge field function of the fundamental monopole solution, which decreases monotonically to zero. Having no flat space counterparts, these excited solutions are related to the globally regular Bartnik-McKinnon solutions in SU(2) Einstein-Yang-Mills (EYM) theory \cite{kn:5}. There also exist magnetically charged EYMH black hole solutions which represent counterexamples to the `no-hair' conjecture. Distinct from the embedded Reissner-Nordstr\"{o}m (RN) black holes with unit magnetic charge, these black hole solutions emerge from the regular magnetic monopole solutions when a finite regular event horizon is imposed. Consequently, they have been characterized as `black holes within magnetic monopoles'.
The SU(2) $\times$ U(1) Weinberg-Salam theory has been shown to possess an important topological magnetic monopole solution, known as the electroweak monopole or simply the Cho-Maison monopole \cite{kn:6}. As a hybrid between the Dirac monopole and the 't Hooft-Polyakov monopole, the Cho-Maison monopole describes a real monopole dressed by the physical W-boson and Higgs field. Although the Cho-Maison monopole has a singularity at the origin which makes the energy divergent, it has been shown that there are ways to regularize the energy and estimate the mass at 4 to 10 TeV \cite{kn:7,kn:8,kn:9}. Recently, there have also been reports on a more natural way to regularize the energy, suggesting that the new BPS bound for the Cho-Maison monopole may not be smaller than 2.98 TeV, and is more probably 3.75 TeV \cite{kn:10}. As mentioned in Refs. \cite{kn:7,kn:8,kn:9,kn:10}, the non-triviality of the electromagnetic U(1) ensures that the electroweak monopole must exist, and this makes the experimental detection of the electroweak monopole an urgent issue after the discovery of the Higgs boson. For this reason experimental detectors around the globe are actively searching for magnetic monopoles \cite{kn:11,kn:12,kn:13,kn:14}.
Recently, gravitationally coupled electroweak monopole solutions in the Einstein-Weinberg-Salam (EWS) theory have also been reported by Cho et al. \cite{kn:15}. Their results confirm the existence of globally regular gravitating electroweak monopole solutions, which change into magnetically charged black holes as the Higgs vacuum value approaches the Planck scale.
In this paper, we study in more detail the gravitating electroweak monopole in EWS theory, and report additional radially excited electroweak monopole solutions, as well as the corresponding `black hole within electroweak monopole' solutions of the EWS theory. Our results therefore confirm that all solutions found in the SU(2) EYMH theory \cite{kn:4} have their corresponding counterparts in the EWS theory, but with distinctive functional behaviour. From the physical point of view, these solutions are very important since the Weinberg-Salam theory itself is a realistic theory.
\section{Einstein-Weinberg-Salam Theory}
\label{sec:2}
We consider the SU(2)$\times$ U(1) EWS action as
\begin{equation}
S = S_G + S_M = \int L_G \sqrt{-g}~d^4x + \int L_M \sqrt{-g}~d^4x,
\label{eq.1}
\end{equation}
with
\begin{equation}
L_G = \frac{R}{16 \pi G} ,
\label{eq.2}
\end{equation}
and
\begin{eqnarray}
L_M &=& - \frac{1}{4}F^a_{\mu\nu} F^{a\mu\nu} - \frac{\epsilon \left( \phi \right)}{4} f_{\mu\nu} f^{\mu\nu} \nonumber\\
&-& \left( \hat{D}_\mu \phi \right)^{\dagger} \left( \hat{D}^\mu \phi \right) - \frac{\lambda}{2} \left( \phi^{\dagger} \phi- \frac{\mu^2}{\lambda} \right)^2,
\label{eq.3}
\end{eqnarray}
where
\begin{equation}
\hat{D}_{\mu} \phi = \left( \partial_{\mu} - \frac{ig}{2} \sigma^a A^a_{\mu} - \frac{ig'}{2} B_{\mu} \right) \phi,
\label{eq.4}
\end{equation}
in which $\hat{D}_{\mu}$ is the covariant derivative of the SU(2) $\times$ U(1) group. The function $\epsilon \left( \phi \right)$ in Eq.(\ref{eq.3}) is a positive dimensionless function of the Higgs doublet which tends to unity asymptotically. In general, $\epsilon \left( \phi \right)$ modifies the permeability of the hypercharge U(1) gauge field while retaining the SU(2) $\times$ U(1) gauge symmetry \cite{kn:15}.
To construct globally regular gravitating monopoles and magnetically charged black holes, we consider the spherically symmetric Schwarzschild-like metric
\begin{equation}
ds^2 = - N^2 A ~dt^2 + \frac{1}{A} dr^2 + r^2 d\theta^2 + r^2 \sin^2\theta d\phi^2,
\label{eq.5}
\end{equation}
with
\begin{equation}
A = 1- \frac{2G m}{r},
\label{eq.6}
\end{equation}
and the following electrically neutral ansatz for the matter functions,
\begin{eqnarray}
&& \phi = \frac{H}{\sqrt{2}} ~\xi,~~~\xi = i \begin{bmatrix}
\sin\frac{\theta}{2}\, e^{- i \phi} \\
-\cos\frac{\theta}{2}
\end{bmatrix}, \nonumber\\
&& A^a_0 = 0,~~ A^a_{i} = - \frac{\left( 1 - K \right) }{g r} \hat{\phi}^a \hat{\theta}_i + \frac{\left( 1 - K \right) }{g r} \hat{\theta}^a \hat{\phi}_i, \nonumber\\
&& B_0 = 0,~~~ B_i = - \frac{1}{g'} \frac{\left( 1 - \cos\theta \right)}{r \sin\theta} \hat{\phi}_i.
\label{eq.7}
\end{eqnarray}
Here $N$, $A$, $m$, $K$ and $H$ are all functions of $r$.
The $tt$ and $rr$ components of the Einstein equations then yield the equations for the metric functions
\begin{equation}
\frac{N'}{N} = 4 \pi G r \left( \frac{2 K'^2 }{g^2 r^2}+ H'^2 \right),
\label{eq.8}
\end{equation}
and
\begin{eqnarray}
m' &=& 4 \pi r^2 \left\{ A \left( \frac{K'^2}{g^2 r^2} + \frac{H'^2}{2} \right) + \frac{\left( K^2 - 1 \right)^2}{2 g^2 r^4} \right. \nonumber\\
&+& \left. \frac{\lambda}{8} \left( H^2 - \frac{2 \mu^2}{\lambda} \right)^2 + \frac{\epsilon}{2 g'^2 r^4} + \frac{1}{4} \frac{H^2 K^2}{r^2} \right\}.
\label{eq.9}
\end{eqnarray}
The equations for the matter functions read
\begin{equation}
A K'' + \left( A' + A \frac{N'}{N} \right) K' +\frac{ \left( 1-K^2 \right) K }{r^2} -\frac{1}{4} g^2 H^2 K = 0,
\label{eq.10}
\end{equation}
and
\begin{eqnarray}
&& A H'' + \left( A' + \frac{2 A}{r} + A \frac{N'}{N} \right) H' - \frac{H K^2}{2 r^2} \nonumber\\
&& - \frac{\lambda}{2} \left( H^2 - \frac{2 \mu^2}{\lambda} \right) H - \frac{1}{2 g'^2 r^4 } \frac{d \epsilon \left( H \right)}{dH} = 0.
\label{eq.11}
\end{eqnarray}
Prime denotes derivative with respect to $r$, and $H_0 = \sqrt{2} \mu/\sqrt{\lambda}$ is the Higgs vacuum expectation value.
To facilitate numerical calculation, we consider the following dimensionless coordinate $x$ and dimensionless mass function $\widetilde{m}$,
\begin{equation}
x = M_{\scalebox{.5}{\mbox{W}}} r,~~~ \widetilde{m} = G M_{\scalebox{.5}{\mbox{W}}} m,
\label{eq.12}
\end{equation}
with $M_{\scalebox{.5}{\mbox{W}}} = \frac{1}{2} g H_0$. The Higgs field is also rescaled as $H \rightarrow H_0 H$, and the solutions then depend on the coupling constants $\alpha$ and $\beta$, where
\begin{equation}
\alpha^2 = 4 \pi G H_0^2,~~~\beta^2 = \frac{\lambda}{g^2},
\label{eq.13}
\end{equation}
as well as the Weinberg angle $\theta_{\scalebox{.5}{\mbox{W}}}$.
With Eqs.(\ref{eq.12})-(\ref{eq.13}), the full set of Eqs.(\ref{eq.8})-(\ref{eq.11}) transform into
\begin{eqnarray}
&& \frac{1}{N} \frac{dN}{dx} = \alpha^2 x \left[ \frac{1}{2 x^2} \left( \frac{d K}{dx} \right)^2 + \left( \frac{d H}{dx} \right)^2 \right], \nonumber
\end{eqnarray}
\begin{eqnarray}
&& \frac{d \widetilde{m}}{dx}= \alpha^2 x^2 \left\{ \frac{A}{2} \left[ \frac{1}{2 x^2} \left( \frac{d K}{dx} \right)^2 + \left( \frac{d H}{dx} \right)^2 \right] \right. \nonumber\\
&& \left. + \frac{\left( K^2 - 1 \right)^2}{8 x^4} + \frac{\beta^2}{2} \left( H^2 - 1 \right)^2 + \frac{\epsilon}{8 \omega^2 x^4} + \frac{H^2 K^2}{4 x^2} \right\}, \nonumber
\end{eqnarray}
\begin{eqnarray}
&& A \frac{d^2 K}{dx^2} + \left( \frac{dA}{dx} + \frac{A}{N} \frac{dN}{dx} \right) \frac{dK}{dx} \nonumber\\
&& + \frac{ \left( 1-K^2 \right) K }{x^2} - H^2 K = 0, \nonumber
\end{eqnarray}
\begin{eqnarray}
&& A \frac{d^2 H}{dx^2} + \left( \frac{dA}{dx} + \frac{2 A}{x} + \frac{A}{N} \frac{dN}{dx} \right) \frac{dH}{dx} - \frac{H K^2}{2 x^2} \nonumber\\
&& - 2 \beta^2 \left( H^2 - 1 \right) H - \frac{1}{2 \omega^2 x^4} \frac{d \epsilon}{d H}= 0,
\label{eq.14}
\end{eqnarray}
where $\omega = g'/g = \tan \theta_{\scalebox{.5}{\mbox{W}}}$. Here we adopt the physical value $\omega = 0.53574546$, corresponding to $\sin^2\theta_{\scalebox{.5}{\mbox{W}}} = 0.22301323$ \cite{kn:16}. Since $ M_{\scalebox{.5}{\mbox{H}}} = \sqrt{2} \mu$ and $M_{\scalebox{.5}{\mbox{W}}} = \frac{1}{2} g H_0$, we may also put Eq. (\ref{eq.13}) in the form
\begin{eqnarray}
\alpha = \sqrt{ 4 \pi G} H_0 = \sqrt{4 \pi} \frac{H_0}{M_{\scalebox{.5}{\mbox{P}}} },~~~\beta = \frac{1}{2} \frac{M_{\scalebox{.5}{\mbox{H}}} }{M_{\scalebox{.5}{\mbox{W}}} },
\label{eq.15}
\end{eqnarray}
where, adopting the physical values $M_{\scalebox{.5}{\mbox{H}}} = 125.10$ GeV and $M_{\scalebox{.5}{\mbox{W}}} = 80.379$ GeV, the physical value of $\beta$ used here is 0.77818833.
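As a quick standalone sanity check (not part of the derivation), the quoted values of $\beta$ and $\omega$ follow directly from the physical inputs above:

```python
import math

# Physical inputs quoted in the text
M_H = 125.10              # Higgs mass, GeV
M_W = 80.379              # W-boson mass, GeV
sin2_thetaW = 0.22301323  # sin^2(theta_W)

# beta = M_H / (2 M_W), from Eq. (15)
beta = 0.5 * M_H / M_W

# omega = g'/g = tan(theta_W), from sin^2(theta_W)
omega = math.sqrt(sin2_thetaW / (1.0 - sin2_thetaW))

print(f"beta  = {beta:.8f}")   # ~0.77818833
print(f"omega = {omega:.8f}")  # ~0.53574546
```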
Obviously, solutions to Eqs.(\ref{eq.14}) depend on the permeability function $\epsilon$. In the paper by Cho et al. \cite{kn:15}, the form $\epsilon = \left( H / H_0 \right)^8$ is considered. In a recent paper \cite{kn:10}, it was shown that $\epsilon = \left( H / H_0 \right)^n$ is also possible. For the sake of simplicity, we consider $n = 8$ in this paper. However, we would like to point out that all values of $n = 1, 2, \ldots, 8$ seem to produce convergent numerical results.
As our solution is electrically neutral, following Ref. \cite{kn:17}, we consider a special solution of Eqs.(\ref{eq.14}), namely the embedded RN solution with mass $\widetilde{m}_{\infty}$ and magnetic charge near unity,
\begin{eqnarray}
&& \widetilde{m}(x) = \widetilde{m}_{\infty} - \frac{\alpha^2 }{8 x} \left( 1 + \frac{\epsilon}{\omega^2} \right) ,~~ N(x) = 1, \nonumber\\
&& K(x) = 0,~~ H(x) = 1,
\label{eq.16}
\end{eqnarray}
where we set $\epsilon = 1$ since $H = 1$. The corresponding extremal RN solutions then possess horizon radius $x_{\scalebox{.5}{\mbox{H}}}$, where
\begin{eqnarray}
x_{\scalebox{.5}{\mbox{H}}} = \widetilde{m}_{\infty} = \frac{\alpha}{2} \sqrt{1+\frac{1}{\omega^2}}.
\label{eq.17}
\end{eqnarray}
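As a consistency check of the dimensionless equations, one can verify numerically that the embedded RN configuration of Eq.(\ref{eq.16}) satisfies the mass equation of Eqs.(\ref{eq.14}): with $K = 0$, $H = 1$ and $\epsilon = 1$, the right-hand side of the $\widetilde{m}$ equation must reproduce the exact derivative of Eq.(\ref{eq.16}). A minimal sketch (the value $\alpha = 1$ is an arbitrary illustrative choice):

```python
import math

alpha, omega, beta = 1.0, 0.53574546, 0.77818833  # alpha = 1 for illustration

def dm_dx(x, A, K, dK, H, dH, eps):
    """Right-hand side of the dimensionless mass equation in Eqs. (14)."""
    return alpha**2 * x**2 * (
        0.5 * A * (dK**2 / (2.0 * x**2) + dH**2)
        + (K**2 - 1.0)**2 / (8.0 * x**4)
        + 0.5 * beta**2 * (H**2 - 1.0)**2
        + eps / (8.0 * omega**2 * x**4)
        + H**2 * K**2 / (4.0 * x**2)
    )

# Embedded RN configuration, Eq. (16): K = 0, H = 1, eps = 1 (the A-dependent
# terms drop out since dK = dH = 0).  Its mass function
#   m(x) = m_inf - (alpha^2 / 8x)(1 + 1/omega^2)
# has the exact derivative alpha^2 (1 + 1/omega^2) / (8 x^2).
for x in (0.5, 1.0, 2.0):
    rhs = dm_dx(x, A=1.0, K=0.0, dK=0.0, H=1.0, dH=0.0, eps=1.0)
    exact = alpha**2 * (1.0 + 1.0 / omega**2) / (8.0 * x**2)
    print(f"x = {x}: residual = {rhs - exact:.2e}")
```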
From Eq.(\ref{eq.12}) and Eq.(\ref{eq.16}), the ADM mass can be defined as
\begin{eqnarray}
m_{\scalebox{.5}{\mbox{ADM}}} = \frac{4 \pi H^2_0 }{M_{\scalebox{.5}{\mbox{W}}}} \frac{\widetilde{m}_{\infty}}{\alpha^2},
\label{eq.18}
\end{eqnarray}
so that one can readily read off the ADM mass from a plot of $\widetilde{m}_{\infty}/\alpha^2$ versus $\alpha$.
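To illustrate Eq.(\ref{eq.18}) numerically: assuming the standard Higgs vacuum expectation value $H_0 \approx 246.22$ GeV (a value not quoted in the text), the prefactor $4 \pi H_0^2 / M_{\scalebox{.5}{\mbox{W}}}$ is about 9.48 TeV, so the flat-space value $\widetilde{m}_{\infty}/\alpha^2 = 0.7197$ found below translates into an ADM mass of about 6.82 TeV:

```python
import math

# Assumed standard electroweak values (H0 is not quoted in the text):
H0 = 246.22    # Higgs vacuum expectation value, GeV
M_W = 80.379   # W-boson mass, GeV

prefactor = 4.0 * math.pi * H0**2 / M_W   # prefactor of Eq. (18), in GeV

# Flat-space limit of the fundamental monopole branch: m_inf/alpha^2 -> 0.7197
m_ADM = prefactor * 0.7197 / 1000.0       # convert GeV -> TeV
print(f"ADM mass ~ {m_ADM:.3f} TeV")      # ~6.82 TeV, consistent with Table 2
```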
\section{Gravitating Monopole}
\label{sec:3}
\begin{figure}[!b]
\centering
\hskip0in
\includegraphics[width=3.3in]{Fig_1_A_K_H_vs_alpha_CM_GM}
\caption{Functions of $A(x)$, $K(x)$ and $H(x)$ versus $x$ of the fundamental gravitating electroweak monopole with physical $\beta$ and Weinberg angle for $\alpha =0.6 $ (blue), $1.0, 1.3, 1.5, 1.7$ and $1.814$ (red). Dashed line indicates non-gravitating monopole. }
\label{Fig.1}
\end{figure}
We first consider globally regular gravitating monopole solutions. Asymptotic flatness requires that the metric functions $N$ and $A$ both approach a constant at spatial infinity. We here adopt
\begin{eqnarray}
N(\infty) = 1, ~~~\widetilde{m}(\infty) = \widetilde{m}_{\infty}.
\label{eq.19}
\end{eqnarray}
This means that $\widetilde{m}(\infty)$, which determines the total mass of the monopole, is not constrained. The matter functions likewise approach constants asymptotically,
\begin{eqnarray}
K(\infty) = 0, ~~~H(\infty) = 1.
\label{eq.20}
\end{eqnarray}
On the other hand, regularity at the origin requires
\begin{eqnarray}
K(0) = 1, ~~~H(0) = 0,~~~\widetilde{m}(0) = 0.
\label{eq.21}
\end{eqnarray}
In SU(2) EYMH theory \cite{kn:4}, the gravitating monopole solution emerges smoothly from the flat-space 't Hooft-Polyakov monopole ($\alpha = 0$) before becoming a limiting solution at some critical value of the gravitational coupling $\alpha_{\scalebox{.6}{\mbox{c}}}$, and it ceases to exist beyond $\alpha_{\scalebox{.6}{\mbox{c}}}$. One generally expects $\alpha_{\scalebox{.6}{\mbox{c}}}$ to be the maximal value of the gravitational coupling, $\alpha_{\scalebox{.6}{\mbox{max}}}$. However, the results show that $\alpha_{\scalebox{.6}{\mbox{max}}}$ does not correspond to the zero of the metric function (in their notation $\mu$) when $\beta = 0$. The tabulated value $\alpha_{\scalebox{.6}{\mbox{max}}} = 1.403$ corresponds to $\mu_{\scalebox{.6}{\mbox{min}}} = 0.03$. From $\alpha_{\scalebox{.6}{\mbox{max}}}$ the branch of solutions instead bends backward, down to the critical coupling constant $\alpha_{\scalebox{.6}{\mbox{c}}}$, where the zero of $\mu_{\scalebox{.6}{\mbox{min}}}$ is formed and the solution becomes a limiting solution ($\alpha_{\scalebox{.6}{\mbox{c}}} = 1.386$, $\mu_{\scalebox{.6}{\mbox{min}}} = 8.07 \times 10^{-9}$). In general, the (normalized) mass of the gravitating monopole solution decreases with increasing gravitational coupling until the maximal gravitational coupling $\alpha_{\scalebox{.6}{\mbox{max}}} = 1.403$.
Our results in EWS theory are plotted in Fig. \ref{Fig.1}. The metric function $A(x)$ starts to develop a pronounced minimal value ($A_{\scalebox{.6}{\mbox{min}}}$) with increasing $\alpha$ from $\alpha = 0$ up to a maximal value $\alpha_{\scalebox{.6}{\mbox{max}}}$, indicating that the gravitating Cho-Maison monopole emerges smoothly from the flat-space Cho-Maison monopole. The value of $A_{\scalebox{.6}{\mbox{min}}}$ decreases from one to zero at $\alpha_{\scalebox{.6}{\mbox{max}}}$, where the branch of solutions becomes a black hole and ceases to exist. In our solutions, $\alpha_{\scalebox{.6}{\mbox{max}}}$ always corresponds to the lowest value of $A_{\scalebox{.6}{\mbox{min}}}$ ($\alpha_{\scalebox{.6}{\mbox{max}}} = 1.814$ for $A_{\scalebox{.6}{\mbox{min}}} = 3.2992 \times 10^{-6}$). In other words, $\alpha_{\scalebox{.6}{\mbox{max}}} = \alpha_{\scalebox{.6}{\mbox{c}}}$ in EWS theory. We have tried to search for a possible lower value of $A_{\scalebox{.6}{\mbox{min}}}$ by considering the coupling constant $\alpha$ bending backwards, but the results for $A_{\scalebox{.6}{\mbox{min}}}$ are always higher than the lowest $A_{\scalebox{.6}{\mbox{min}}}$ at $\alpha_{\scalebox{.6}{\mbox{max}}} = 1.814$. These results are tabulated in Table \ref{table.1} (the numerical method used in this paper is different from that of \cite{kn:4}).
Of course, the existence of $\alpha_{\scalebox{.6}{\mbox{max}}} > \alpha_{\scalebox{.6}{\mbox{c}}}$ in EYMH theory has been observed for $\beta$ up to 0.7 \cite{kn:4}. It is most pronounced when $\beta = 0$, where $\alpha_{\scalebox{.6}{\mbox{max}}} = 1.403$ (for $\mu_{\scalebox{.6}{\mbox{min}}} = 0.035$) and $\alpha_{\scalebox{.6}{\mbox{c}}} = 1.386$ (for $\mu_{\scalebox{.6}{\mbox{min}}} = 8.07 \times 10^{-9} $). When $\beta = 0.7$, $\alpha_{\scalebox{.6}{\mbox{max}}} = 1.26027253$ (for $\mu_{\scalebox{.6}{\mbox{min}}} = 0.035$) and $\alpha_{\scalebox{.6}{\mbox{c}}} = 1.2602718$ (for $\mu_{\scalebox{.6}{\mbox{min}}} = 8.07 \times 10^{-9} $). Hence, if a large $\beta$ is considered in EYMH theory, one should get $\alpha_{\scalebox{.6}{\mbox{max}}} \approx \alpha_{\scalebox{.6}{\mbox{c}}}$. This has been further confirmed in Ref. \cite{kn:18}. The question then arises whether the non-existence of $\alpha_{\scalebox{.6}{\mbox{max}}} > \alpha_{\scalebox{.6}{\mbox{c}}}$ in EWS theory might be due to the higher value of $\beta$ considered ($\beta = 0.77818833$).
For the above reason, we first compute the gravitating monopole solutions for $\beta = 0.77818833$ in EYMH theory. Our numerical results show that even for $\beta = 0.77818833$, the solutions still exhibit $\alpha_{\scalebox{.6}{\mbox{max}}} > \alpha_{\scalebox{.6}{\mbox{c}}}$, where $\alpha_{\scalebox{.6}{\mbox{max}}} = 1.2203$ for $A_{\scalebox{.6}{\mbox{min}}} = 1.6388 \times 10^{-5}$ and $\alpha_{\scalebox{.6}{\mbox{c}}} = 1.2123 $ for $ A_{\scalebox{.6}{\mbox{min}}} = 1.2211 \times 10^{-5}$. For the sake of comparison, we also compute the gravitating Cho-Maison monopole solution when $\beta \rightarrow 0$. The results again show that in EWS theory $\alpha_{\scalebox{.6}{\mbox{max}}}$ always corresponds to the lowest value of the metric function $A(x)$. This confirms that the non-existence of $\alpha_{\scalebox{.6}{\mbox{max}}} > \alpha_{\scalebox{.6}{\mbox{c}}}$ is a generic feature of EWS theory. However, we are not interested in $\beta \rightarrow 0$ in EWS theory since it is not physical.
In general, our results for the gravitating Cho-Maison monopole closely parallel those for the gravitating 't Hooft-Polyakov monopole in SU(2) EYMH theory, except for the non-existence of `backward bending' in $\alpha$ (or $\alpha_{\scalebox{.6}{\mbox{max}}} > \alpha_{\scalebox{.6}{\mbox{c}}}$). Hence our results can be viewed as having distinctive characteristics compared to those in EYMH theory, though both approach their respective limiting solutions at some specific value of the gravitational coupling. Moreover, contrary to the gravitating monopole in EYMH theory, the results in EWS theory describe a genuine gravitating Cho-Maison monopole turning into a black hole.
\begin{table}
\caption{Table of $A_{\scalebox{.5}{\mbox{min}}}$ for selected values of $\alpha$ near $\alpha_{\scalebox{.5}{\mbox{max}}}$ for radial excitation (r.e.) and gravitating monopole (g.m.).}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
$\alpha$ & $A_{\scalebox{.5}{\mbox{min}}}$(r.e.) & $A_{\scalebox{.5}{\mbox{min}}}$(g.m.) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
1.53 & $2.5783 \times 10^{-4}$ & 0.1849 \\
1.54 & $1.8504 \times 10^{-4}$ & 0.1771 \\
1.55 & $1.2296 \times 10^{-4}$ & 0.1694 \\
1.56 & $7.2356 \times 10^{-5}$ & 0.1617 \\
1.57 & $1.2173 \times 10^{-5}$& 0.1540 \\
1.58 & $6.7480 \times 10^{-6}$ & 0.1464 \\
1.584 & $5.4718 \times 10^{-6}$ & 0.1434 \\
1.76 & - & 0.0243 \\
1.77 & - & 0.0188\\
1.78 & - & 0.0137 \\
1.79 & - & 0.0089 \\
1.80 & - & 0.0045 \\
1.81 & - & $8.2190 \times 10^{-4}$ \\
1.814 & - & $3.2992 \times 10^{-6}$ \\
\noalign{\smallskip}\hline
\end{tabular}
\label{table.1}
\end{table}
\section{Radially Excited Monopole}
\label{sec:4}
\begin{figure}[!b]
\centering
\hskip0in
\includegraphics[width=3.3in]{Fig_2_A_K_H_vs_alpha_CM_RE}
\caption{Functions of $A(x)$, $K(x)$ and $H(x)$ versus $x$ of the radial excitation for $\alpha =0.2 $ (blue), $0.6, 1.0, 1.25$ and $1.5$ (red), for physical $\beta$ and Weinberg angle.}
\label{Fig.2}
\end{figure}
In EYMH theory, besides the branch of fundamental gravitating monopole solutions, there exist branches of radially excited monopole solutions. While the gauge field function of the fundamental monopole solution decreases monotonically to zero, this is not the case for the radially excited solutions. In general, the gauge field function of the $n$-th excited monopole solution develops $n$ nodes before tending to zero at spatial infinity. Similar to the fundamental monopole solution, the radially excited monopole solutions also exist only below some maximal value of the gravitational coupling $\alpha$. However, they have no flat-space counterpart as $\alpha \rightarrow 0$, but tend to the Bartnik-McKinnon solutions of EYM theory.
In EWS theory, we observe similar radially excited monopole solutions, as shown in Fig. \ref{Fig.2}. The gauge field function $K(x)$ of the (first) radially excited solution does not decrease monotonically to zero; it develops a node before approaching zero at spatial infinity. These radial excitations only exist below some maximal value of the gravitational coupling, $\alpha_{\scalebox{.6}{\mbox{max}}} = 1.584$. They similarly have no flat-space counterpart as $\alpha \rightarrow 0$. As for the gravitating monopole in the previous section, we also tabulate the value of $A_{\scalebox{.6}{\mbox{min}}}$ for values of $\alpha$ near $\alpha_{\scalebox{.5}{\mbox{max}}}$ in Table \ref{table.1}. We again observe no `backward bending' of the solution branch as reported in Ref. \cite{kn:4}.
\begin{figure}[!b]
\centering
\hskip0in
\includegraphics[width=3.3in]{Fig_3_m_infinity_over_alpha_square_vs_alpha_EWS_EYMH_GM_RE}
\caption{Top: Plot of $\widetilde{m}_{\infty}/\alpha^2$ versus $\alpha$ for gravitating monopole (blue), radial excitation (red) and the extremal RN solution (dashed line) of the EWS theory. Bottom: Plot of $\widetilde{m}_{\infty}/\alpha^2$ versus $\alpha$ for gravitating monopole (blue), radial excitation (red) and the extremal RN solution (dashed line) of the EYMH theory when $\beta = 0$. Green line represents the gravitating monopole when $\beta = 0.77818833$.}
\label{Fig.3}
\end{figure}
Following Ref. \cite{kn:17}, we plot the normalized mass $\widetilde{m}_{\infty}/\alpha^2$ versus $\alpha$ for the radial excitation in Fig. \ref{Fig.3} (included in the same graph is the corresponding plot for the fundamental gravitating monopole). As expected, the radial excitation branch possesses higher normalized mass than the fundamental gravitating monopole. In the limit $\alpha \rightarrow 0$, the normalized mass of the fundamental gravitating monopole converges to a finite value (0.7197), whereas that of the radial excitation diverges to infinity. This is consistent with the statement that the radial excitation has no flat-space counterpart. The masses of both the fundamental monopole and the radial excitation decrease with $\alpha$, but the mass of the gravitating monopole decreases monotonically, while the radial excitation shows an inverse-$\alpha$ fall-off.
At $\alpha_{\scalebox{.6}{\mbox{max}}}$, the radial excitation (as well as the fundamental monopole solution) reaches its limiting solution but does not bifurcate with the branch of extremal RN solutions. This is different from the results reported in Ref. \cite{kn:17}, where at $\alpha_{\scalebox{.6}{\mbox{max}}}$ the fundamental monopole approaches its limiting solution and bifurcates with the branch of extremal RN solutions (bottom plot of Fig. \ref{Fig.3}). The non-bifurcation originates from Eq. (\ref{eq.16}). Recall that in EYMH theory, the metric function of the embedded RN solution with magnetic charge $P$ reads
\begin{eqnarray}
A = 1 - \frac{\widetilde{m}_{\infty}}{x} + \frac{\alpha^2}{x^2} P^2.
\label{eq.22}
\end{eqnarray}
Eq. (\ref{eq.16}) gives the metric function of the embedded RN solution of EWS theory as
\begin{eqnarray}
A = 1 - \frac{\widetilde{m}_{\infty}}{x} + \frac{\alpha^2}{x^2} \frac{1}{4} \left( 1 + \frac{\epsilon}{\omega^2} \right).
\label{eq.23}
\end{eqnarray}
From Eq. (\ref{eq.23}), evaluating the third term with $\epsilon = 1$ and $\omega = 0.53574546$, we find that the magnetic charge of the embedded RN solution is 1.0588, which is slightly higher than unity, and this contributes to the non-bifurcation (note that the gravitating monopole or radially excited monopole has unit magnetic charge).
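The effective magnetic charge quoted above can be reproduced in a few lines (a simple numerical check of the coefficient of the $1/x^2$ term in Eq.(\ref{eq.23})):

```python
import math

omega = 0.53574546   # g'/g = tan(theta_W)

# The 1/x^2 coefficient of Eq. (23) is alpha^2 P^2 with
# P^2 = (1 + eps/omega^2)/4, evaluated at eps = 1:
P = 0.5 * math.sqrt(1.0 + 1.0 / omega**2)
print(f"P = {P:.4f}")   # ~1.0588, slightly above the monopole's unit charge
```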
Hence, in general, our results for the radially excited Cho-Maison monopole closely parallel those for the radially excited monopole in SU(2) EYMH theory. There are, of course, some key differences. First, the radial excitation (as well as the gravitating monopole) does not bifurcate with the branch of extremal RN solutions at $\alpha_{\scalebox{.6}{\mbox{max}}}$. We expect that a different (more realistic) form of $\epsilon$ will contribute to the bifurcation, but that remains to be answered in a future investigation. Second, for a given $\alpha$, the mass of the radial excitation (or gravitating monopole) in EWS theory is always lower than the mass of its counterpart in EYMH theory. This is evident from Fig. \ref{Fig.3}.
\section{Black Hole Solutions}
\label{sec:5}
\begin{figure}[!b]
\centering
\hskip0in
\includegraphics[width=3.3in]{Fig_4_EYMH_All_Plot_m_infinity_over_alpha_square_vs_xH_final}
\caption{The (normalized) mass of the EYMH black hole solutions $\widetilde{m}_{\infty} / \alpha^2$ as a function of horizon radius $x_{\scalebox{.6}{\mbox{H}}}$ for $\alpha = 0.6, 0.7, 0.86$ and $1.0$, together with the corresponding RN solutions with unit magnetic charge (dashed lines).}
\label{Fig.4}
\end{figure}
In SU(2) EYMH theory, there exists a special kind of non-Abelian black hole solution which is different from the embedded RN black hole solutions. These black hole solutions emerge from the globally regular monopole solutions when a finite regular event horizon is imposed. Characterized as `black holes within magnetic monopoles', these solutions provide counterexamples to the `no-hair' conjecture. With increasing horizon radius $x_{\scalebox{.6}{\mbox{H}}}$, depending on the value of the gravitational coupling $\alpha$, these non-Abelian black holes either merge with the Abelian RN black holes at some critical horizon radius (for $0 < \alpha < \frac{1}{2} \sqrt{3}$), or cease to exist at some maximal value of $x_{\scalebox{.6}{\mbox{H}}}$ when a second zero of the metric function is formed (for $\frac{1}{2} \sqrt{3} < \alpha < \alpha_{\scalebox{.6}{\mbox{max}}}$, where $\alpha_{\scalebox{.6}{\mbox{max}}} = 1.403$). These behaviours of the SU(2) EYMH black hole solutions are shown in Fig. \ref{Fig.4}.
\begin{figure}[!b]
\centering
\hskip0in
\includegraphics[width=3.3in]{Fig_5_A_K_H_vs_x_Blackhole_in_CM_monopole}
\caption{Black hole in electroweak monopole solutions with $\alpha = 1.6$, physical $\beta$ and Weinberg angle, plotted as a function of $x$ for $x_{\scalebox{.6}{\mbox{H}}} = 0.1111$ (blue), $0.2195, 0.3333, 0.4286, 0.5152, 0.6129, 0.7241$ and $0.8519$ (red). Dashed lines indicate regular gravitating solutions.}
\label{Fig.5}
\end{figure}
We now consider such non-Abelian black hole solutions in EWS theory. Requiring again asymptotic flatness, they satisfy the same boundary conditions at spatial infinity as the globally regular solutions, Eqs. (\ref{eq.19}) and (\ref{eq.20}). The existence of a regular event horizon requires
\begin{eqnarray}
\widetilde{m} \left( x_{\scalebox{.6}{\mbox{H}}} \right) = \frac{x_{\scalebox{.6}{\mbox{H}}}}{2},~~~N \left( x_{\scalebox{.6}{\mbox{H}}} \right) < \infty.
\label{eq.24}
\end{eqnarray}
The matter functions must also satisfy
\begin{eqnarray}
\left. \frac{dA}{dx} \frac{dK}{dx} \right|_{x_{\scalebox{.6}{\mbox{H}}}} = \left. K \left( H^2 - \frac{1 - K^2}{x^2} \right) \right|_{x_{\scalebox{.6}{\mbox{H}}}},
\label{eq.25}
\end{eqnarray}
and
\begin{eqnarray}
&& \left. \frac{dA}{dx} \frac{dH}{dx} \right|_{x_{\scalebox{.6}{\mbox{H}}}}= \nonumber\\
&& \left. H \left( \frac{K^2}{2 x^2} + 2 \beta^2 \left( H^2 - 1 \right) + \frac{1}{2 \omega^2 x^4 H} \frac{d \epsilon}{d H} \right) \right|_{x_{\scalebox{.6}{\mbox{H}}}}.
\label{eq.26}
\end{eqnarray}
In particular, for a given coupling constant $\alpha$, black hole solutions corresponding to the fundamental monopole branch emerge from the globally regular solution in the limit $x_{\scalebox{.6}{\mbox{H}}} \rightarrow 0$ and persist with increasing horizon radius. We first consider the case of relatively large $\alpha$ ($\alpha = 1.6$). With increasing horizon radius, a limiting solution is reached at a maximal value of the horizon radius ($x_{\scalebox{.6}{\mbox{H}}} = 0.8519$), where a second zero of $A(x)$ is formed, Fig. \ref{Fig.5}. For smaller values of $\alpha$, the black hole solutions do not reach limiting solutions but merge with the corresponding non-extremal RN solutions. This behaviour, reminiscent of SU(2) EYMH theory, can be understood more clearly from Fig. \ref{Fig.6}, which shows that the black hole solutions emerge from the globally regular monopole solutions in the limit $x_{\scalebox{.6}{\mbox{H}}} \rightarrow 0$ and persist with increasing $x_{\scalebox{.6}{\mbox{H}}}$. For $0 < \alpha < 1.576$, the black hole solutions slowly converge to the corresponding non-extremal RN solutions at large horizon radius. For $1.576 < \alpha < \alpha_{\scalebox{.6}{\mbox{max}}}$, where $\alpha_{\scalebox{.6}{\mbox{max}}} = 1.814$, they instead become limiting solutions at a maximal value of the horizon radius.
\begin{figure}[!b]
\centering
\hskip0in
\includegraphics[width=3.3in]{Fig_6_m_infty_over_alpha_square_vs_xH_BIM}
\caption{The (normalized) mass of the EWS black hole solutions $\widetilde{m}_{\infty} / \alpha^2$ as a function of the horizon radius $x_{\scalebox{.6}{\mbox{H}}}$ for the values of coupling constant $\alpha = 1.0, 1.2, 1.4$ and $1.6$, together with the corresponding RN solutions with unit magnetic charge (dashed lines).}
\label{Fig.6}
\end{figure}
To better illustrate the solutions, following Ref. \cite{kn:4}, we also present the `phase diagram' of black hole solutions in EWS theory for physical $\beta$ and Weinberg angle in Fig. \ref{Fig.7}. Non-Abelian black holes exist in the regions of the ($\alpha, x_{\scalebox{.6}{\mbox{H}}}$) plane denoted by I and II. RN black holes, as given by Eq. (\ref{eq.16}), exist in regions II and III, whereas in region IV there are no non-Abelian black hole solutions. The boundary $x_{\scalebox{.6}{\mbox{H}}} = 0$ of region I corresponds to the regular gravitating solutions. The cut-off point $\alpha = 1.576$ mentioned above separates region I into Ia and Ib. Approaching the curve AB in region Ia, the solutions develop a double zero in $A(x)$ and become limiting solutions. In region Ib, the non-Abelian solutions extend into region II and slowly merge into the RN solutions with increasing $x_{\scalebox{.6}{\mbox{H}}}$.
Hence the `black hole in Cho-Maison monopole' of EWS theory again has features very similar to the `black hole in monopole' of SU(2) EYMH theory \cite{kn:4}, \cite{kn:16}. There are, however, some key differences:\\
\noindent 1. First (for small $\alpha$), the EWS black hole solutions do not merge with the corresponding RN solutions at a critical horizon radius, but only converge towards them slowly with increasing horizon radius. The EYMH black hole solutions, in contrast, merge with the non-extremal RN solutions at a critical value of the horizon radius \cite{kn:17}. \\
\noindent 2. Second, in the region of the ($\alpha, x_{\scalebox{.6}{\mbox{H}}}$) plane where non-Abelian and RN black holes coexist, the mass of the non-Abelian black hole solution is always lower than the mass of the RN solution ($ m_{\scalebox{.6}{\mbox{n.a.}}} / m_{\scalebox{.6}{\mbox{RN}}} \leq 1$). The case in EYMH theory is, however, more complicated. The region of coexistence of non-Abelian black holes and RN solutions exists for $ 0 < \alpha < 0.77$. Here the mass of the non-Abelian black hole is mostly smaller than the mass of the RN solution ($m_{\scalebox{.6}{\mbox{n.a.}}} / m_{\scalebox{.6}{\mbox{RN}}} < 1$), but there also exists a small region where $m_{\scalebox{.6}{\mbox{n.a.}}} / m_{\scalebox{.6}{\mbox{RN}}} > 1$. Then, for $ 0.77 < \alpha < 0.866$, the non-Abelian black hole solution joins smoothly with the RN solution at a critical value of the horizon radius. For $0.866 < \alpha < 1.403$, the non-Abelian black hole solution reaches a limiting solution at a maximal value of the horizon radius, and does not bifurcate with the RN solution. \\
\noindent 3. Third, for a given horizon radius $x_{\scalebox{.6}{\mbox{H}}}$, the mass of the black hole in EWS theory always has a lower value than that of its counterpart in EYMH theory.
\begin{figure}[!b]
\centering
\hskip0in
\includegraphics[width=3.3in]{Fig_7_Phase_Diagram}
\caption{`Phase diagram' of black hole solution in Einstein-Weinberg-Salam theory for physical $\beta$ and Weinberg angle.}
\label{Fig.7}
\end{figure}
\section{Conclusions}
\label{sec:6}
We have studied numerical solutions of the Einstein-Weinberg-Salam theory corresponding to: (1) fundamental gravitating electroweak monopole; (2) radially excited electroweak monopole; and (3) non-Abelian magnetically charged black hole.
The fundamental monopole solution emerges from the corresponding flat-space monopole solution and extends smoothly up to a maximal value of the gravitational coupling constant $\alpha_{\scalebox{.5}{\mbox{max}}}$, before collapsing into a black hole. Besides the fundamental monopole branch, there exist branches of radially excited monopole solutions, which also exist only up to a maximal value of $\alpha$. However, the radially excited solutions have no flat-space counterpart in the limit $\alpha \rightarrow 0$.
The normalized masses of both the fundamental monopole and the radial excitation decrease with $\alpha$, with the radial excitation branch possessing higher mass than the fundamental gravitating monopole. In the limit $\alpha \rightarrow 0$, the normalized mass of the fundamental monopole branch converges to a finite value (0.7197), corresponding to an ADM mass of approximately 6.821 TeV. On the other hand, the normalized mass of the radial excitation diverges to infinity as $\alpha \rightarrow 0$, indicating that the radial excitation has no flat-space counterpart. We summarize the numerical estimates of the ADM mass for the gravitating monopole and the radial excitation for selected values of $\alpha$ in Table \ref{table.2}.
\begin{table}
\caption{The numerical estimate of the ADM mass for the gravitating monopole (g.m.) and the radial excitation (r.e.) with $\epsilon = \left( H/H_0\right)^8$ and the physical value of $\beta$.}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
$\alpha$ & ADM mass (r.e.) & ADM mass (g.m.) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
0 & $\infty$ & 6.821 TeV \\
0.20 & 25.555 TeV & 6.802 TeV \\
0.40 & 15.058 TeV & 6.747 TeV \\
0.60 & 11.320 TeV & 6.656 TeV \\
0.80 & 9.344 TeV & 6.529 TeV \\
1.00 & 8.089 TeV & 6.367 TeV \\
1.20 & 7.198 TeV & 6.171 TeV \\
1.40 & 6.506 TeV & 5.942 TeV \\
1.584 & black hole & 5.702 TeV \\
1.80 & - & 5.369 TeV \\
1.814 & - & black hole \\
\noalign{\smallskip}\hline
\end{tabular}
\label{table.2}
\end{table}
For the `black hole in electroweak monopole', black hole solutions corresponding to the fundamental monopole branch emerge from the globally regular solution in the limit $x_{\scalebox{.5}{\mbox{H}}} \rightarrow 0$ and persist differently (depending on the coupling constant $\alpha$) with increasing horizon radius. For relatively large $\alpha$ ($1.576 < \alpha < 1.814$), a limiting solution is reached at a maximal value of the horizon radius, e.g. $x_{\scalebox{.5}{\mbox{H}}}(\mbox{max})= 0.8519$ for $\alpha = 1.6$. However, for smaller values of $\alpha$ ($0 < \alpha < 1.576$), the black hole solutions do not reach limiting solutions but slowly merge into the corresponding non-extremal RN solutions at large horizon radius.
Although mostly similar, our results in Einstein-Weinberg-Salam (EWS) theory show some key differences from those of EYMH theory: (1) Our gravitating monopole and radial excitation solutions do not exhibit the `backward-bending' phenomenon in the coupling constant $\alpha$ observed in Refs. \cite{kn:4,kn:17}. The maximal value $\alpha_{\scalebox{.6}{\mbox{max}}}$ always corresponds to the lowest value of the metric function $A$, and the solutions become limiting solutions at $\alpha_{\scalebox{.6}{\mbox{max}}}$. (2) At $\alpha_{\scalebox{.6}{\mbox{max}}}$, the (normalized) masses of the gravitating monopole and the radial excitation each reach their minimum value but do not bifurcate with the branch of extremal RN solutions. (3) The non-Abelian black hole solutions converge slowly to the corresponding non-extremal RN solutions at large horizon radius. At a given horizon radius where the non-Abelian black hole and the RN solution coexist, the mass of the non-Abelian black hole is always lower than that of the RN solution. (4) At a given $\alpha$, the mass of the gravitating monopole (or radial excitation) in EWS theory is always lower than that of its counterpart in EYMH theory. Similarly, at a given horizon radius $x_{\scalebox{.6}{\mbox{H}}}$, the mass of a black hole in EWS theory is lower than the mass of its counterpart in EYMH theory. This suggests that the configurations in EWS theory are more stable.
In SU(2) EYMH theory, gravitating monopoles and magnetically charged black holes can be generalized to gravitating dyons and dyonic black holes \cite{kn:17}. Hence our results admit a dyonic generalization, obtained simply by switching on the time component of the gauge potential. We will discuss these findings in a separate paper.
\section{Introduction}
Any point-to-point communication link can be modeled as a quantum
channel $\mathcal{N}$ from a sender to a receiver. Of fundamental interest
are the {\em capacities} of $\mathcal{N}$ to transmit data of various types such as quantum, private, or classical data.
Informally, the
capacity of $\mathcal{N}$ to transmit a certain type of data is the optimal
rate at which that data can be transmitted with high fidelity given an
asymptotically large number of uses of $\mathcal{N}$. Capacities of a channel
quantify its value as a communication resource.
In the classical setting, the capacity of a classical channel $\mathcal{N}$ to
transmit classical data is given by Shannon's noisy coding theorem
\cite{Shannon48}. While operationally, the capacity-achieving error
correcting codes may have increasingly large block lengths, the
capacity can be expressed as a {\em single letter formula}: it is the
maximum correlation between input and output that can be generated
with a {\em single} channel use, where correlation is measured by the mutual
information.
In the quantum setting, the capacity of a quantum channel $\mathcal{N}$ to
transmit quantum data, denoted $Q(\mathcal{N})$, is given by the LSD theorem
\cite{Lloyd97,Shor02,Devetak05}. A capacity expression is found, but
it involves a quantity optimized over an {\em unbounded} number of
uses of the channel. This quantity, when optimized over $n$ channel
uses, is called the $n$-shot coherent information. Dividing the
$n$-shot coherent information by $n$ and taking the limit $n \to
\infty$ gives the capacity.
For special channels called {\em degradable} channels, the coherent
information is {\em weakly additive}, meaning that the $n$-shot
coherent information {\em is} $n$ times the $1$-shot coherent
information \cite{Shor-Devetak-05}, hence the capacity is the $1$-shot
coherent information and can be evaluated in principle.
In general, the coherent information can be {\em superadditive},
meaning that the $n$-shot coherent information can be more than $n$
times the $1$-shot coherent information, thus the optimization over $n$
is necessary \cite{DiVincenzoSS98}. Consequently, there is no general
algorithm to compute the capacity of a given channel.
Furthermore, the $n$-shot coherent information can be positive for
some small $n$ while the $1$-shot coherent information is zero
\cite{DiVincenzoSS98}.
Moreover, given any $n$, there is a channel whose $n$-shot coherent
information is zero but whose quantum capacity is positive
\cite{Cubitt2015unbounded}.
Thus we do not have a general method to determine if a given
channel has positive quantum capacity.
Even for the qubit depolarizing channel, which acts as
$\mathcal{D}_p(\rho) = (1-\frac{4p}{3})\,\rho + \frac{4p}{3} \frac{I}{2}$,
our understanding of the quantum capacity is limited. For $p=0$ the channel is perfect, so we have $Q(\mathcal{D}_0) = 1$, while for $p \geq 1/4$, we know that $Q(\mathcal{D}_{p}) = 0$ \cite{BDEFMS98}. However, for $0<p<1/4$ the quantum capacity of $\mathcal{D}_p$ is unknown despite substantial effort (see e.g.~\cite{SS07,Fern08,LW15}).
For $p\approx 0.2$, communication rates higher than the
$1$-shot coherent information are achievable \cite{DiVincenzoSS98,SS07,Fern08}, but even the threshold value of $p$ where the capacity goes to zero is unknown.
For $p$ close to zero, the best lower bound for $Q(\mathcal{D}_p)$ is the one-shot coherent information.
In this regime, the continuity bound developed in \cite{Leung-Smith-08} is
insufficient to constrain the quantum capacity of $\mathcal{D}_p$ to
the leading order in $p$, and
while various other upper bounds exist, they all differ from the one-shot coherent information by $O(p)$.
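As a concrete reference point, the $1$-shot coherent information of $\mathcal{D}_p$ is attained at the maximally mixed input and equals the hashing bound $1 - h(p) - p\log 3$, where $h$ is the binary entropy. A minimal numerical sketch (the function names are ours):

```python
import numpy as np

def h2(x):
    # binary entropy in bits, with h2(0) = h2(1) = 0
    return 0.0 if x <= 0.0 or x >= 1.0 else -x*np.log2(x) - (1 - x)*np.log2(1 - x)

def coherent_info_depolarizing(p):
    # 1-shot coherent information of D_p, attained at the maximally
    # mixed input: I_c(D_p) = 1 - h(p) - p*log2(3)  (the hashing bound)
    return 1.0 - h2(p) - p*np.log2(3.0)
```

This quantity equals $1$ at $p=0$ and crosses zero near $p \approx 0.1893$, below the known zero-capacity threshold $p = 1/4$.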
Recently, a numerical upper bound on the capacity of the
low-noise depolarizing channel \cite{SSWR15} was found to be very close
to the $1$-shot coherent information. Meanwhile, the complementary
channel for the depolarizing channel for any $p>0$ is found to always
have positive capacity \cite{LW15}, which renders several techniques
inapplicable, including those described in \cite{Watanabe2012} or a
generalization of degradability to ``information degradability'' \cite{CLS17}.
In this paper, we consider the quantum capacity of ``low-noise quantum
channels'' that are close to the identity channel, and investigate how
close the capacity is to the $1$-shot coherent information. It has been unclear whether we should
expect substantial nonadditivity of coherent information for such channels.
On the one hand, all known degenerate codes that provide a boost to quantum capacity first encode one logical qubit into a small number of physical qubits, which
incurs a significant penalty in rate. This would seem to preclude any benefit in the regime where
the $1$-shot coherent information is already quite high. On the other hand, we have no effective methods for
evaluating the $n$-letter coherent information for large $n$, and there may well exist new types of coding strategies that incur no such penalty in the large $n$ regime.
We prove in this paper that to linear order in the noise parameter,
the quantum capacity of any low-noise channel {\em is} its $1$-shot
coherent information (see Theorem \ref{lem:qbound2}). Consequently, degenerate
codes cannot improve the rates of these channels up to the same order.
For the special cases of the qubit depolarizing channel, the mixed
Pauli channel and their qudit generalizations, we show that the
quantum capacity and the $1$-shot coherent information agree to even
higher order (see Theorem \ref{cor:qpcapdepol}).
Our findings extend to the private capacity $P(\mathcal{N})$ of a quantum channel $\mathcal{N}$,
which displays similar complexities to the quantum capacity.
The private capacity is equal to the regularized private information
\cite{Devetak05}, but the private information is not additive
(\cite{SRS08,Li09,SS09}).
In \cite{HHHO05}, the private capacity, which is never smaller than
the quantum capacity, is found to be positive for some channels that
have no quantum capacity. The authors also characterize the type of
noise that hurts quantum transmission and that can be ``shielded''
from corrupting private data.
In \cite{LLSS14}, channels are found with almost no quantum capacity
but maximum private capacity.
Meanwhile, for degradable channels, the private capacity is
again equal to the $1$-shot coherent information \cite{Smith08}.
This coincidence of $P$ and $Q$ for degradable channels means that our
findings for the quantum capacity can be carried over to private
capacity fairly easily.
In the low-noise regime the private capacity is also equal to the
$1$-shot coherent information to linear order in the noise parameter,
and is equal to the quantum capacity in the same order (see Theorems \ref{lem:qbound2} and \ref{cor:qpcapdepol}).
Consequently, shielding provides little benefit.
Our results follow closely the approach in \cite{SSWR15}. Consider a
channel $\mathcal{N}$ and its complementary channel $\mathcal{N}^c$. The channel $\mathcal{N}$ is
degradable if there is another channel $\mathcal{M}$ (called a degrading map)
such that $\mathcal{M} \circ \mathcal{N} = \mathcal{N}^c$. Instead of measuring how close
$\mathcal{N}$ is to some degradable channel, \cite{SSWR15} considers how close
$\mathcal{M} \circ \mathcal{N}$ can be to $\mathcal{N}^c$ when optimizing over all channels $\mathcal{M}$, a measure we call the degradability
parameter of $\mathcal{N}$. Furthermore, this distance between $\mathcal{N}^c$ and
$\mathcal{M}\circ\mathcal{N}$ as well as the best approximate degrading map $\mathcal{M}$ can be
obtained via semidefinite programming. Continuity results, relative
to the case as if $\mathcal{N}$ is degradable, can then be obtained similarly
to \cite{Leung-Smith-08}.
This new bound in \cite{SSWR15} limits the difference between the
$1$-shot coherent information and the quantum capacity to $O(\eta \log
\eta)$ where $\eta$ is the degradability parameter. Note that
$\eta \log \eta$ does not have a finite slope at $\eta = 0$ but it
goes to zero faster than $\eta^b$ for any $b<1$. While this method
does not yield explicit upper bounds, once a channel of interest is
fixed, it is fairly easy to evaluate the degradability parameter (via
semidefinite programming) and the resulting capacity bounds
numerically.
The primary contribution in this paper is an analytic proof of a
surprising fact that, for low-noise channels whose diamond-norm
distance to being noiseless is $\varepsilon$, the degradability parameter
$\eta$ grows at most as fast as $O(\varepsilon^{1.5})$, rendering the
gap $O(\eta \log \eta)$ between the $1$-shot coherent
information and the quantum or private capacity only sublinear in
$\varepsilon$ (see Theorem \ref{thm:complementary-degrading}).
For the qubit depolarizing channel and its various generalizations,
we improve the analytic bound of $\eta$ to $O(\varepsilon^{2})$ (see Theorem \ref{thm:depol-degradability-parameter}).
Furthermore, for both results, we provide constructive approximate degrading maps and
explain why they work well.
The rest of the paper is structured as follows.
In Section \ref{sec:background}, we explain both our notations and prior
results relevant to our discussion. We present our results for a
general low-noise channel in Section \ref{sec:general} and for the
depolarizing channel in Section \ref{sec:depol-channel} (with the various
generalizations in \Cref{sec:gen-pauli-channels}). We
conclude with some extensions and implications of our results in Section \ref{sec:conclude}.
\section{Background}
\label{sec:background}
In this paper we only consider finite-dimensional Hilbert spaces.
For a Hilbert space $\mathcal{H}$, we denote by $\mathcal{B}(\mathcal{H})$ the set of linear operators on $\mathcal{H}$.
We write $M_A$ for operators defined on a Hilbert space $\mathcal{H}_A$ associated with a quantum system $A$.
We denote by $I_A$ the identity operator on $\mathcal{H}_A$, and by $\id_A$ the identity map on $\mathcal{B}(\mathcal{H}_A)$.
We denote the dimension of $A$ by $|A| = \dim\mathcal{H}_A$.
A (quantum) state $\rho_A$ on a quantum system $A$ is a positive semidefinite operator with unit trace, that is, $\rho_A\geq 0$ and $\tr\rho_A=1$.
\subsection{Quantum channels}
Any point-to-point communication link that transmits an input to
an output state can be modeled as a \emph{quantum channel}.
Mathematically, a quantum channel is a linear, completely positive,
and trace-preserving map $\mathcal{N}$ from $\mathcal{B}(\mathcal{H}_A)$ to $\mathcal{B}(\mathcal{H}_B)$.
We often use the shorthand $\mathcal{N}\colon A\to B$.
If a channel $\mathcal{N}' \colon A \to B'$ acts as $\mathcal{N}$ followed by an isometry from $B$ to $B'$, then we call $\mathcal{N}'$ \emph{equivalent} to $\mathcal{N}$.
This is an equivalence relation on the set of quantum channels, and for a given channel $\mathcal{N}$ all analysis of interest in this paper applies to any channel in the equivalence class of $\mathcal{N}$.
We summarize two useful representations for quantum channels in the following
\cite{Nielsen-Chuang,Wat16}.
For every quantum channel $\mathcal{N}\colon A\to B$ there is an \emph{isometric extension} $V\colon \mathcal{H}_A\to \mathcal{H}_B\otimes\mathcal{H}_E$ for some auxiliary Hilbert space $\mathcal{H}_E$ associated with some \emph{environment} system $E$, such that $\mathcal{N}(\rho_A) = \tr_E(V\rho_A V^\dagger)$ \cite{Stinespring}.
The isometric extension is also called the Stinespring dilation, and this representation is sometimes called the unitary representation.
Note that the isometric extension is unique up to {\em left} multiplication by some unitary on $E$, and this degree of freedom does not affect most analysis of interest.
Physically, the isometric extension distributes the input to $B$ and $E$ jointly, and the channel $\mathcal{N}$ is noisy because the information in $E$ is no longer accessible.
The \emph{complementary channel} $\mathcal{N}^c\colon A\to E$ of $\mathcal{N}$ with respect to a specific isometric extension $V$ is defined by $\mathcal{N}^c(\rho_A) \coloneqq \tr_B(V\rho_A V^\dagger)$.
Note that all channels complementary to $\mathcal{N}$ are equivalent.
The second representation we use is the \emph{Choi-Jamio\l kowski isomorphism}. It is a bijection $\mathcal{J}$ from the set of quantum channels $\mathcal{N} \colon A \to B$ to the set of positive operators $\tau_{A'B} \in \mathcal{B}(\mathcal{H}_{A'}\otimes\mathcal{H}_B)$ satisfying $\tr_B \tau_{A'B} = I_{A'}$, given by:
\begin{align}
\mathcal{J}(\mathcal{N}) \coloneqq (\id_{A'}\otimes \mathcal{N})(\gamma_{A'A}),
\end{align}
where $|\gamma\rangle_{A'A}\coloneqq \sum_i |i\rangle_{A'}\otimes|i\rangle_{A}$ is proportional to the maximally entangled state on $A'A$, the system $A'$ is isomorphic to $A$, and $\gamma_{A'A} =
|\gamma\>\<\gamma|_{A'A}$.
The inverse of $\mathcal{J}$ applied to $\tau_{A'B}$ yields a channel whose action on an operator $\rho_A$ defined on a system $A$ is given by
\begin{align}
\left( \mathcal{J}^{-1}(\tau_{A'B}) \right) (\rho_A) = \tr_{A'} \left(\tau_{A'B}\left( (\rho^T)_{A'} \otimes\mathds{1}_B\right)\right),
\end{align}
where $T$ denotes transposition with respect to the basis $\lbrace |i\rangle\rbrace$ that defines $|\gamma\rangle$. The operator $\mathcal{J}(\mathcal{N})$ is called the Choi matrix of $\mathcal{N}$. It is uniquely determined by $\mathcal{N}$ and it is basis-dependent.
The rank of $\mathcal{J}(\mathcal{N})$ is called the Choi rank of $\mathcal{N}$. It is the
minimum dimension of the environment $E$ for an isometric extension of
$\mathcal{N}$. It is basis-independent, and independent of the choice of the
isometric extension.
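As an illustration of the isomorphism, the sketch below (helper names are ours) builds $\mathcal{J}(\mathcal{D}_p)$ for the qubit depolarizing channel and checks the trace condition $\tr_B \mathcal{J}(\mathcal{N}) = I_{A'}$ as well as the Choi rank, which is $4$ for any $p>0$:

```python
import numpy as np

def depolarizing(M, p):
    # linear extension of D_p(rho) = (1 - 4p/3) rho + (4p/3) tr(rho) I/2
    d = M.shape[0]
    return (1 - 4*p/3)*M + (4*p/3)*np.trace(M)*np.eye(d)/d

def choi(channel, d):
    # J(N) = (id ⊗ N)(|γ><γ|) with the unnormalized |γ> = Σ_i |i>|i>
    J = np.zeros((d*d, d*d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d)); E[i, j] = 1.0   # |i><j| on A'
            J += np.kron(E, channel(E))           # the channel acts on the second factor
    return J

def partial_trace_B(J, dA, dB):
    # tr_B of an operator on A' ⊗ B, with kron ordering (A', B)
    return np.einsum('ikjk->ij', J.reshape(dA, dB, dA, dB))
```

For $\mathcal{D}_p$ the eigenvalues of the Choi matrix are $2-2p$ (on $|\gamma\rangle$) and $2p/3$ (three-fold), so the environment of an isometric extension needs dimension $4$ whenever $p>0$.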
\subsection{The diamond norm and the continuity of the isometric extension}
\label{sec:diamond}
For an operator $M\in\mathcal{B}(\mathcal{H})$ we define the \emph{trace norm} $\|M\|_1$, the \emph{operator norm} $\|M\|_\infty$, and the max norm $\|M\|_{\rm \max}$ as follows:
\begin{align}
\|M\|_1 &\coloneqq \tr\sqrt{M^\dagger M}\, , \\
\|M\|_\infty &\coloneqq \max \left\lbrace \sqrt{\langle\psi|M^\dagger M|\psi\rangle} \colon |\psi\rangle\in\mathcal{H}, \langle\psi|\psi\rangle = 1 \right\rbrace \, , \\
\|M\|_{\rm \max} &\coloneqq \max_{i,j} |M_{i,j}|\, .
\end{align}
Note that the max norm is basis-dependent, unlike the trace and the operator norms.
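These norms are straightforward to compute numerically; the sketch below (helper names ours) also illustrates the basis dependence of the max norm: a unitary change of basis leaves the trace and operator norms invariant but can change the max norm.

```python
import numpy as np

def trace_norm(M):
    # sum of singular values
    return float(np.sum(np.linalg.svd(M, compute_uv=False)))

def op_norm(M):
    # largest singular value
    return float(np.max(np.linalg.svd(M, compute_uv=False)))

def max_norm(M):
    # largest entry in absolute value (basis-dependent)
    return float(np.max(np.abs(M)))

M = np.diag([1.0, 0.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard change of basis
M2 = H @ M @ H
```

Here $M$ and $M_2 = HMH$ share the same trace and operator norms, while the max norm drops from $1$ to $1/2$.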
We now discuss the distance measure we use for channels.
For a linear map $\Phi\colon \mathcal{B}(\mathcal{H})\to\mathcal{B}(\mathcal{K})$ between Hilbert spaces $\mathcal{H}$ and $\mathcal{K}$, we define its \emph{diamond norm} by
\begin{align}
\|\Phi\|_\diamond \coloneqq \max \lbrace \|(\id_{\mathbb{C}^n} \otimes \Phi)(M) \|_1 \colon M\in\mathcal{B}(\mathbb{C}^n\otimes\mathcal{H}), \|M\|_1 = 1, n\in\mathbb{N} \rbrace,\label{eq:diamond-norm}
\end{align}
where $\id_{\mathbb{C}^n}$ denotes the identity map on $\mathbb{C}^n$.
It suffices to take $n$ as large as $\dim(\mathcal{H})$ in the above optimization
so that the maximum can be attained.
When applied to the difference of two channels $\mathcal{N}_1 - \mathcal{N}_2$, the diamond
norm has a simple operational meaning: it is twice the maximum of the trace
distance of the two output states $(\id_{\mathbb{C}^n} \otimes \mathcal{N}_i )(M)$
(for $i=1,2$) created by the two channels on the best common input $M$, in the
presence of an ancillary space $\mathbb{C}^n$. The trace distance
of two states in turn signifies their distinguishability \cite{Helstrom}.
We summarize two characterizations of the diamond norm, a method to
upper bound it, and a continuity result for the isometric extension in
the rest of this subsection.
First, the diamond norm $\|\mathcal{N}_1 - \mathcal{N}_2\|_\diamond$ of the difference of two quantum channels $\mathcal{N}_1\colon A\to B$ and $\mathcal{N}_2\colon A\to B$ can be computed by solving the following semidefinite
program (SDP) \cite{Wat09}:
\begin{align}
\begin{aligned}
\text{\normalfont minimize: } & 2\mu\\
\text{\normalfont subject to: } & \tr_BZ_{AB} \leq \mu I_A\\
& Z_{AB} \geq \mathcal{J}(\mathcal{N}_1) - \mathcal{J}(\mathcal{N}_2)\\
& Z_{AB} \geq 0.
\end{aligned}
\label{eq:diamond-norm-sdp}
\end{align}
Second, the diamond norm of a linear CP map $\Theta$ (which is not necessarily trace-preserving) can be rewritten as \cite[Thm.~6]{Wat12}:
\begin{align}
\|\Theta\|_\diamond = \max_{\rho_1,\,\rho_2} \left\lbrace \left\|(\sqrt{\rho_1}\otimes\mathds{1}_B) \mathcal{J}(\Theta) (\sqrt{\rho_2}\otimes\mathds{1}_B) \right\|_1 \right\rbrace,
\label{eq:diamond-norm-rewrite}
\end{align}
where the maximum is over density matrices $\rho_1$ and $\rho_2$ on the input system $A$ of the map $\Theta$.
We prove the following technical lemma that upper bounds the diamond norm of an arbitrary linear CP map (which need not be trace preserving) using the max norm of its Choi matrix:
\begin{lemma}\label{lem:coefficients-bound}
For a linear CP map $\Theta\colon A\to B$, we have $\|\Theta\|_\diamond \leq |A| \, |B|^2
\; \|\mathcal{J}(\Theta)\|_{\max}.$
\end{lemma}
\begin{proof}
We start with the second characterization of the diamond norm in \eqref{eq:diamond-norm-rewrite} above, and
let $\sigma_1$ and $\sigma_2$ be states on $A$ achieving the maximum in \eqref{eq:diamond-norm-rewrite}, such that
\begin{align}
\|\Theta\|_\diamond &= \left\|(\sqrt{\sigma_1}\otimes\mathds{1}_B) \, \mathcal{J}(\Theta) \, (\sqrt{\sigma_2}\otimes\mathds{1}_B) \right\|_1.\label{eq:dn-trace-distance}
\end{align}
Furthermore, we employ the fact that the trace distance can be expressed as \cite{Nielsen-Chuang}
\begin{align}
\|M\|_1 = \max_{U} |\tr(UM)|,
\end{align}
where the maximum is over unitary operators $U$.
Let $V$ be a unitary operator achieving the above maximum in
\eqref{eq:dn-trace-distance}.
We then have
\begin{align}
\|\Theta\|_\diamond &= \left|\tr \left( V (\sqrt{\sigma_1}\otimes\mathds{1}_B) \, \mathcal{J}(\Theta) \, (\sqrt{\sigma_2}\otimes\mathds{1}_B) \right) \right|\\
&= \left| \tr\left( \sqrt{V} (\sqrt{\sigma_1}\otimes\mathds{1}_B) \sqrt{\mathcal{J}(\Theta)} \sqrt{\mathcal{J}(\Theta)} (\sqrt{\sigma_2}\otimes\mathds{1}_B) \sqrt{V} \right) \right|\\
&\leq \sqrt{\tr\left(V^{1/2}(\sqrt{\sigma_1}\otimes\mathds{1}_B) \, \mathcal{J}(\Theta) \, (\sqrt{\sigma_1}\otimes\mathds{1}_B) V^{-1/2} \right)} \nonumber \\
&\qquad\qquad {} \times \sqrt{\tr\left(V^{-1/2}(\sqrt{\sigma_2}\otimes\mathds{1}_B) \, \mathcal{J}(\Theta) \, (\sqrt{\sigma_2}\otimes\mathds{1}_B) V^{1/2} \right)}\\
&= \sqrt{\tr((\sigma_1\otimes \mathds{1}_B)\mathcal{J}(\Theta))} \sqrt{\tr((\sigma_2\otimes \mathds{1}_B)\mathcal{J}(\Theta))} \\
&= |B| \sqrt{\tr\left((\sigma_1\otimes \pi_B) \, \mathcal{J}(\Theta) \, \right)} \sqrt{\tr\left( (\sigma_2\otimes \pi_B) \, \mathcal{J}(\Theta)\, \right)}\\
&\leq |B| \; \|\mathcal{J}(\Theta)\|_\infty \,,\label{eq:dn-estimate}
\end{align}
where $\pi_B = I_B/|B|$, and we have used the Cauchy-Schwarz inequality for the Hilbert-Schmidt inner product $\langle X,Y\rangle \coloneqq \tr(X^\dagger Y)$ in the first inequality, and the identity
\begin{align}
\|Z\|_\infty = \max\lbrace |\tr(\rho Z)| \colon \rho\geq 0, \tr\rho = 1\rbrace
\end{align}
in the second inequality.
To bound the operator norm of $J\equiv \mathcal{J}(\Theta)$ in \eqref{eq:dn-estimate}, let $n=|A||B|$ and $\lambda_1\geq \dots \geq \lambda_n$ be the eigenvalues of $J$ in descending order, and observe that we have
\begin{align}
\|J\|_\infty^2 = \lambda_1^2 \leq \sum_{k=1}^n \lambda_k^2 = \tr(J^2) = \sum_{i,\,j=1}^{n} J_{ij} J_{ji} = \sum_{i,\,j=1}^{n} |J_{ij}|^2 \leq n^2 \|J\|^2_{\max}.
\end{align}
Hence, $\|\mathcal{J}(\Theta)\|_\infty \leq |A| \, |B| \; \|\mathcal{J}(\Theta)\|_{\max}$, which together with \eqref{eq:dn-estimate} proves the claim.
\end{proof}
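The final estimate in the proof, $\|J\|_\infty \leq \|J\|_F \leq n\|J\|_{\max}$ for an $n\times n$ matrix $J$, is easy to sanity-check numerically. A sketch (helper names ours) on random PSD "Choi-like" matrices with $n = |A||B| = 6$:

```python
import numpy as np

rng = np.random.default_rng(0)

def bound_holds(J, tol=1e-9):
    # checks ||J||_inf <= n * ||J||_max, the last step of the lemma's proof
    n = J.shape[0]
    op = np.max(np.linalg.svd(J, compute_uv=False))
    return op <= n*np.max(np.abs(J)) + tol

checks = []
for _ in range(50):
    X = rng.normal(size=(6, 6)) + 1j*rng.normal(size=(6, 6))
    checks.append(bound_holds(X @ X.conj().T))  # X X^† is PSD
```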
\begin{corollary}\label{cor:choi-coefficients}
If $\Theta$ is a CP map whose Choi matrix has coefficients $O(p^2)$ for $p$ in a neighborhood of $0$, then also $\|\Theta\|_\diamond = O(p^2)$.
\end{corollary}
We also use the following continuity result for isometric extensions of channels \cite{KSW08}:
\begin{theorem}[\cite{KSW08}]\label{thm:stinespring-continuity}
For quantum channels $\mathcal{N}_1$ and $\mathcal{N}_2$,
\begin{align}
\inf_{V_1,V_2} \|V_1 - V_2\|_{\infty}^2 \leq \|\mathcal{N}_1 - \mathcal{N}_2 \|_{\diamond} \leq 2\inf_{V_1,V_2} \|V_1 - V_2\|_\infty,
\end{align}
where the infimum is over isometric extensions $V_i$ of $\mathcal{N}_i$ for $i=1,2$, respectively.
\end{theorem}
A simple consequence of \Cref{thm:stinespring-continuity} is that for two quantum channels that are close in the diamond norm, their complementary channels can be made similarly close to one another:
\begin{corollary}\label{cor:complementary-channel-norm}
Let $\mathcal{N}_1, \mathcal{N}_2$ be quantum channels with $\|\mathcal{N}_1 - \mathcal{N}_2\|_\diamond \leq \varepsilon$ for some $\varepsilon\in[0,2]$.
Then for any complementary channel $\mathcal{N}_1^c$ of $\mathcal{N}_1$, there exists a complementary channel $\mathcal{N}_2^c$ of $\mathcal{N}_2$ such that
\begin{align}
\|\mathcal{N}_1^c - \mathcal{N}_2^c\|_\diamond \leq 2\sqrt{\varepsilon}.
\end{align}
\end{corollary}
\subsection{Quantum and private capacities and approximate degradability}
\label{sec:appdeg}
The \emph{coherent information} of a state $\rho_A$ through a channel
$\mathcal{N} \colon A \to B$ is defined as
\begin{equation}
I_c(\rho_A;\mathcal{N})
\coloneqq S(\mathcal{N}(\rho_A)) - S(\mathcal{N}^c(\rho_A)) \,,
\end{equation}
where $S(\sigma) \coloneqq {-}\tr\sigma \log \sigma$ denotes the von~Neumann
entropy of $\sigma$, and $\log$ is base $2$ throughout this paper.
Note that the coherent information is independent of the choice of the
complementary channel.
The coherent information can be interpreted as follows. Let
$|\psi\>_{A'A}$ be a purification of $\rho_A$ (that is,
$\tr_{A'} |\psi\>\<\psi|_{A'A} = \rho_A$).
Then, $I_c(\rho_A;\mathcal{N}) = \frac{1}{2} \left( I(A':B) - I(A':E) \right)$
where $I(A':B) = S(A') + S(B) - S(A'B)$ is the quantum mutual information
between $A'$ and $B$, and similarly for $I(A':E)$, and the mutual
information is evaluated on the state $\id_{A'} \otimes \mathcal{N} (|\psi\>\<\psi|_{A'A})$.
The coherent information of $\mathcal{N}$, also called the $1$-shot coherent information,
is the maximum over all input states,
\begin{align}
I_c(\mathcal{N})
\coloneqq \max_{\rho_A} I_c(\rho_A;\mathcal{N}) \,.
\end{align}
The $n$-shot coherent information of $\mathcal{N}$ is
defined as $I_c^{(n)}(\mathcal{N}) \coloneqq I_c(\mathcal{N}^{\otimes n})$, and satisfies $I_c^{(n+m)}(\mathcal{N}) \geq I_c^{(n)}(\mathcal{N}) + I_c^{(m)}(\mathcal{N})$ for $n,m\in\mathbb{N}$.
The \emph{quantum capacity theorem} \cite{Lloyd97,Shor02,Devetak05}
establishes that the quantum capacity of $\mathcal{N}$ is given by the following regularized formula:
\begin{align}
Q(\mathcal{N}) = \lim_{n\rightarrow\infty} \frac{1}{n} \, I_c^{(n)}(\mathcal{N}) = \sup_{n\in\mathbb{N}}\frac{1}{n}\,I_c^{(n)}(\mathcal{N})\,,
\label{eq:qcap}
\end{align}
where the second equality follows from Fekete's lemma \cite{Fek23}.
In general, the regularization in \eqref{eq:qcap} is necessary, and renders the quantum capacity intractable to compute for most channels \cite{DiVincenzoSS98,Cubitt2015unbounded}.
However, for the class of degradable channels \cite{Shor-Devetak-05}, the formula \eqref{eq:qcap} reduces to a single-letter formula.
A channel $\mathcal{N}$ with complementary channel $\mathcal{N}^c$ is {\em degradable} if there is another channel $\mathcal{M}$ (called a degrading
map) such that $\mathcal{M} \circ \mathcal{N} = \mathcal{N}^c$.
For degradable channels,
the coherent information is {\em weakly additive}, $I_c^{(n)}(\mathcal{N}) = n I_c(\mathcal{N})$ \cite{Shor-Devetak-05}.
As a result, the limit in \eqref{eq:qcap} is unnecessary, and we have
\begin{align}
Q(\mathcal{N}) = I_c(\mathcal{N}).
\end{align}
Moreover, for a degradable channel $\mathcal{N}$ the coherent information $I_c(\rho_A; \mathcal{N})$ is concave in the input state $\rho_A$ \cite{YHD08}, and thus $I_c(\mathcal{N})$ can be efficiently computed using standard optimization techniques.
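For instance, the qubit amplitude damping channel with damping parameter $\gamma < 1/2$ is degradable, and its coherent information is known to be maximized by a diagonal input $\mathrm{diag}(1-q, q)$, giving $Q = \max_q \, h((1-\gamma)q) - h(\gamma q)$. A one-parameter numerical optimization then suffices (sketch, with our own helper names):

```python
import numpy as np

def h2(x):
    # binary entropy in bits
    return 0.0 if x <= 0.0 or x >= 1.0 else -x*np.log2(x) - (1 - x)*np.log2(1 - x)

def q_amp_damp(gamma, grid=20001):
    # Q = I_c for the degradable amplitude damping channel (gamma < 1/2);
    # the optimum can be taken over diagonal inputs diag(1 - q, q)
    qs = np.linspace(0.0, 1.0, grid)
    return max(h2((1 - gamma)*q) - h2(gamma*q) for q in qs)
```

At $\gamma = 0$ the channel is noiseless and the maximum is $1$; at $\gamma = 1/2$ the expression vanishes identically, consistent with zero capacity at and beyond that point.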
Since degradable channels have nice properties that allow us to determine their quantum capacity, we might ask if some of these properties are approximately satisfied by channels that are ``almost'' degradable.
Reference \cite{SSWR15} formalized this idea by considering how close $\mathcal{M} \circ \mathcal{N}$ can be made to $\mathcal{N}^c$ when optimizing over the channel $\mathcal{M}$.
\begin{definition}[\cite{SSWR15}; Approximate degradability]\label{def:approximate-degradability}
A quantum channel $\mathcal{N}\colon A\to B$ with environment $E$ is called \emph{$\eta$-degradable} for an $\eta\in[0,2]$ if there exists a quantum channel $\mathcal{M}\colon B\to E$ such that
\begin{align}
\|\mathcal{N}^c - \mathcal{M}\circ \mathcal{N} \|_\diamond \leq \eta.
\label{eq:approx-degradability}
\end{align}
The \emph{degradability parameter} $\dg(\mathcal{N})$ of $\mathcal{N}$ is defined as the minimal $\eta$ such that \eqref{eq:approx-degradability} holds for some quantum channel $\mathcal{M}\colon B\to E$, and $\mathcal{N}$ is degradable if $\dg(\mathcal{N})=0$.
\end{definition}
Note that every quantum channel is $2$-degradable, since $\|\mathcal{N}_1-\mathcal{N}_2\|_\diamond\leq 2$ for any two quantum channels $\mathcal{N}_1$ and $\mathcal{N}_2$.
The SDP \eqref{eq:diamond-norm-sdp} for the diamond norm can be used to express the degradability parameter $\dg(\mathcal{N})$ of \Cref{def:approximate-degradability} as the solution of the following SDP \cite{SSWR15}:
\begin{align}
\begin{aligned}
\text{\normalfont minimize: } & 2\mu\\
\text{\normalfont subject to: } & \tr_EZ_{AE} \leq \mu I_A\\
& \tr_E Y_{BE} = \mathds{1}_B\\
& Z_{AE} \geq \mathcal{J}(\mathcal{N}^c) - \mathcal{J}(\mathcal{J}^{-1}(Y_{BE})\circ\mathcal{N})\\
& Z_{AE} \geq 0, Y_{BE} \geq 0.
\end{aligned}
\label{eq:approx-deg-sdp}
\end{align}
The bipartite operator $Y_{BE}$ above corresponds to the Choi matrix of the approximate degrading map $\mathcal{M}\colon B\to E$.
While a degradable channel has capacity equal to the $1$-shot coherent
information, an $\eta$-degradable channel $\mathcal{N}$ has capacity
differing from the $1$-shot coherent information at most by a vanishing
function of $\eta$:
\begin{theorem}[\cite{SSWR15}, Thm.~3.3$\,$(i); Continuity bound]\label{thm:deg-cty}
If $\mathcal{N}$ is a channel with degradability parameter $\dg(\mathcal{N}) = \eta$, then,
\begin{align}
I_c(\mathcal{N}) \leq Q(\mathcal{N}) \leq
I_c(\mathcal{N}) + \frac{\eta}{2} \log(|E|{-}1) + \eta \log|E|
+ h \! \left(\frac{\eta}{2}\right)
+ \left(1+\frac{\eta}{2}\right) h\left( \frac{\eta}{2+\eta} \right),
\label{eq:cty-bdd}
\end{align}
where $h(x)\,{:}{=}-x\log x-(1{-}x)\log(1{-}x)$ is the binary
entropy function, and $|E|$ is the Choi rank of $\mathcal{N}$.
\end{theorem}
The private capacity of $\mathcal{N}$, denoted by $P(\mathcal{N})$, is the capacity of
$\mathcal{N}$ to transmit classical data with vanishing probability of error
such that the state in the joint environment of all channel uses has
vanishing dependence on the input. The capacity expression is found
to be the regularized private information for $\mathcal{N}$ in
\cite{Devetak05}, but the private information is not additive
(\cite{SRS08,Li09, SS09}). For degradable channels, $P(\mathcal{N}) =
I_c(\mathcal{N})$, and there is a continuity result similar to Theorem
\ref{thm:deg-cty}:
\begin{theorem}[\cite{SSWR15}, Thm.~3.3$\,$(iii) and (v) combined]\label{thm:deg-cty-p}
If $\mathcal{N}$ is a channel with degradability parameter $\dg(\mathcal{N}) = \eta$, then,
\begin{align}
I_c(\mathcal{N}) \leq P(\mathcal{N}) \leq
I_c(\mathcal{N}) + \eta \log(|E|{-}1) + 4 \eta \log|E|
+ 2 \, h \! \left(\frac{\eta}{2}\right)
+ 4 \left(1+\frac{\eta}{2}\right) h\left( \frac{\eta}{2+\eta} \right),
\label{eq:cty-bdd-private}
\end{align}
where $|E|$ and $h$ are as defined in Theorem \ref{thm:deg-cty}.
\end{theorem}
\section{General low-noise channel}\label{sec:general}
Throughout this paper we focus on {\em low-noise channels}, by which we mean a channel $\mathcal{N}$ that has isomorphic
input and output spaces $A$ and $B$ and approximates the noiseless (or identity)
channel in the diamond norm,
$\| \mathcal{N} - \id_A \|_\diamond \leq \varepsilon$, where $\varepsilon>0$ is a small
positive parameter.
\subsection{Deviation of quantum capacity from 1-shot coherent information}
We start from \Cref{thm:deg-cty} in Section \ref{sec:appdeg}, which
gives a ``continuity bound'' on the difference between the quantum
capacity and the 1-shot coherent information for an arbitrary channel
$\mathcal{N}$ with degradability parameter $\dg(\mathcal{N}) = \eta$:
\begin{align}
|Q(\mathcal{N}) - I_c(\mathcal{N})| \leq f_1(\eta),
\end{align}
where
\begin{align}
f_1(\eta) \coloneqq \frac{\eta}{2} \log(|E|-1) + \eta \log|E| + h \! \left(\frac{\eta}{2}\right) + \left(1+\frac{\eta}{2}\right) h\left( \frac{\eta}{2+\eta} \right),
\label{eq:error-term}
\end{align}
satisfying $\lim_{\eta\to 0}f_1(\eta) = 0$.
If $\mathcal{N}$ is ``almost'' degradable, then $\eta \approx 0$.
To investigate the behavior of $f_1(\eta)$ in this regime, we keep the $\eta\log\eta$ terms in $f_1(\eta)$ (knowing that these will dominate for small $\eta$), and develop the rest in a Taylor series around $0$.
For example, the first binary entropy term is expanded as
\begin{align}
h \! \left(\frac{\eta}{2}\right) = \frac{1}{2}\left(1+(\ln 2)^{-1}-\log \eta\right) \eta - \frac{1}{8} (\ln 2)^{-1} \eta^2 + O(\eta^3).
\end{align}
The second binary entropy term (including the prefactor) is expanded as
\begin{align}
\left(1+\frac{\eta}{2}\right) h\left( \frac{\eta}{2+\eta} \right) = \frac{1}{2}\left(1+(\ln 2)^{-1}-\log \eta\right) \eta + \frac{1}{8} (\ln 2)^{-1} \eta^2 + O(\eta^3).
\end{align}
The quadratic contributions cancel out in an expansion of $f_1(\eta)$, and including the two linear terms in \eqref{eq:error-term} gives
\begin{align}
f_1(\eta) = -\eta\log\eta + \left( 1 + (\ln 2)^{-1} + \frac{1}{2}\log(|E|-1) + \log |E| \right)\eta + O(\eta^3).\label{eq:error-term-expanded}
\end{align}
It follows that for small $\eta$ the function $f_1(\eta)$ is dominated by $g(\eta) \,{:}{=}\, -\eta \log \eta$
which has infinite slope at $\eta=0$:
\begin{align}
g'(\eta) = -\log(\eta) - \frac{1}{\ln 2}\xrightarrow{\eta \, \to \, 0} +\infty.
\label{dgdeta}
\end{align}
Hence, without further information on $\eta$, \Cref{thm:deg-cty} does not give
a tight approximation of the quantum capacity relative to the 1-shot coherent
information.
Instead, we consider a scenario in which the channel $\mathcal{N}(p)$ depends
on some underlying noise parameter $p\in[0,1]$ and
$\eta = \dg(\mathcal{N}(p)) \leq c p^r$ where $r>1$ and $c$ is a constant.
(Note that $f_1(\eta)$ increases with $\eta$ for small $\eta$,
so it suffices to consider $\eta = c p^r$.)
In this case, the approximation implied by \Cref{thm:deg-cty} becomes
extremely useful -- we will show that the upper bound
$I_c(\mathcal{N}(p)) + f_1(cp^r)$ on $Q(\mathcal{N}(p))$ is now \emph{tangent} to the
$1$-shot coherent information $I_c(\mathcal{N}(p))$.
\begin{lemma} \label{lem:fderivative}
If $r>1$ and $c$ is a constant, then $\frac{d}{dp}f_1(cp^r)\Big|_{p=0} = 0$.
\end{lemma}
\begin{proof}
From \eqref{dgdeta} and the chain rule, we obtain
\begin{align}
\frac{d}{dp}g(cp^r) = \left(-\log(cp^r) - \frac{1}{\ln 2} \right) c rp^{r-1}\,.
\end{align}
Hence, $\lim_{p\to 0} \frac{d}{dp}g(cp^r) = 0$ for any $r>1$.
Finally,
\begin{align}
\frac{d}{dp}f_1(cp^r) = \frac{d}{dp}g(cp^r) +
\left( 1+\left(\ln 2\right)^{-1} + \log|E| + \frac{1}{2}\log(|E|-1) \right) c rp^{r-1} + O(p^{3r{-}1}).
\end{align}
So, $\frac{d}{dp}f_1(cp^r)\Big|_{p=0} = 0$ as claimed.
\end{proof}
\Cref{fig:g-derivative} and \Cref{fig:g-function} plot $\frac{d}{dp}g(cp^r)$ and $g(cp^r)$, respectively, for the values $c=1$ and $r\in\lbrace 1, 1.5, 2\rbrace$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\begin{axis}[
scale = 1.2,
domain=0.00001:0.1,
axis lines = left,
scaled ticks=false,
tick label style={/pgf/number format/fixed},
legend cell align = left,
grid = major,
samples = 150,
ymax = 6,
xmin = 0,
xlabel = $p$,
every axis x label/.style={at={(current axis.right of origin)},anchor=west,right = 0.3cm},
ylabel = $\frac{d}{dp}g(p^r)$,
ylabel style={rotate=-90},
every axis y label/.style={at={(current axis.north west)},above=0mm, left = 0.3cm},
]
\addplot[smooth,mark=none, color = gray, thick] {-ln(x)/ln(2)-1.4427};
\addplot[smooth,mark=none, color = blue, thick] {(-ln(x^1.5)/ln(2)-1.4427)*1.5*x^0.5};
\addplot[smooth,mark=none, color = red, thick] {(-ln(x^2)/ln(2)-1.4427)*2*x};
\legend{$r=1$,$r=1.5$,$r=2$};
\end{axis}
\end{tikzpicture}
\caption{Plot of the derivative of $g(p^r)$ with respect to $p$ for $r\in \lbrace 1,1.5,2\rbrace$, where $g(\eta) =-\eta\log\eta$.}
\label{fig:g-derivative}
\end{figure}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\begin{axis}[
scale = 1.2,
domain=0.00001:0.1,
axis lines = left,
scaled ticks=false,
tick label style={/pgf/number format/fixed},
legend cell align = left,
legend style = {at = {(0.1,1)},anchor = north west},
grid = major,
samples = 150,
xlabel = $p$,
every axis x label/.style={at={(current axis.right of origin)},anchor=west,right = 0.3cm},
ylabel = $g(p^r)$,
ylabel style={rotate=-90},
every axis y label/.style={at={(current axis.north west)},above=0mm, left = 0.3cm},
ytick = {0,0.05,0.1,0.15,0.2,0.25,0.3},
]
\addplot[smooth,mark=none, color = gray, thick] {-x*ln(x)/ln(2)};
\addplot[smooth,mark=none, color = blue, thick] {-x^1.5*ln(x^1.5)/ln(2)};
\addplot[smooth,mark=none, color = red, thick] {-x^2*ln(x^2)/ln(2)};
\legend{$r=1$,$r=1.5$,$r=2$};
\end{axis}
\end{tikzpicture}
\caption{Plot of the function $g(p^r)$ for $r\in \lbrace 1,1.5,2\rbrace$, where $g(\eta) =-\eta\log\eta$.}
\label{fig:g-function}
\end{figure}
Substituting $\eta = cp^r$ into \eqref{eq:error-term-expanded} yields the following bound:
\begin{lemma}
Let $\mathcal{N} = \mathcal{N}(p)$ be a channel defined in terms of a noise
parameter $p\in [0,1]$, and $\dg(\mathcal{N}) \leq c p^r$ for some
$r>1$ and some constant $c$. Then, we have
\begin{multline}
| Q(\mathcal{N}) - I_c(\mathcal{N}) | \leq
crp^{r{-}1} (-p\log p) \\
+ cp^r\left( -\log c + 1+\left(\ln 2\right)^{-1}
+ \log|E| + \frac{1}{2}\log(|E|{-}1) \right)
+ O(p^{3r}).
\end{multline}
\label{lem:qbound1}
\end{lemma}
For the private capacity, we have the following from \Cref{thm:deg-cty-p}:
\begin{align}
|P(\mathcal{N})-I_c(\mathcal{N})| \leq f_2(\eta),
\end{align}
where
\begin{align}
f_2(\eta) & \coloneqq \eta \log(|E|{-}1) + 4 \eta \log|E|
+ 2 \, h \! \left(\frac{\eta}{2}\right)
+ 4 \left(1+\frac{\eta}{2}\right) h\left( \frac{\eta}{2+\eta} \right) \\
& = -3 \eta\log\eta + \left( 3 + 3(\ln 2)^{-1} + \log(|E|-1) + 4 \log |E| \right)\eta + O(\eta^2).
\end{align}
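The second line follows from the binary entropy expansions computed earlier: the entropy terms contribute
\begin{align}
2\, h\!\left(\frac{\eta}{2}\right) + 4\left(1+\frac{\eta}{2}\right) h\!\left(\frac{\eta}{2+\eta}\right) = 3\left(1+(\ln 2)^{-1}-\log\eta\right)\eta + \frac{1}{4}(\ln 2)^{-1}\eta^2 + O(\eta^3),
\end{align}
so, in contrast to $f_1(\eta)$, the quadratic contributions do not cancel here, which accounts for the $O(\eta^2)$ remainder.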
In a similar way as above, we can derive the following:
\begin{lemma}
Let $\mathcal{N} = \mathcal{N}(p)$ be a channel defined in terms of a noise
parameter $p\in [0,1]$, and $\dg(\mathcal{N}) \leq c p^r$ for some
$r>1$ and some constant $c$. Then,
\begin{multline}
| P(\mathcal{N}) - I_c(\mathcal{N}) | \leq 3cr p^{r-1}(-p\log p)\\ + c p^r (-3\log c+3+3(\ln 2)^{-1} + \log(|E|-1)+4\log|E|) + O(p^{2r}).
\end{multline}
\label{lem:pbound1}
\end{lemma}
Our main contribution in this paper is to note that many interesting channels $\mathcal{N}$
satisfying $\|\mathcal{N}-\id\|_\diamond = O(p)$ have $\dg(\mathcal{N}) = O(cp^r)$ for some $r>1$ and a constant $c$,
such that \Cref{lem:qbound1} and \Cref{lem:pbound1} apply. In the following
subsection, we prove that any channel $\mathcal{N}$ which is $\varepsilon$-close to
the identity in the diamond norm has $\dg(\mathcal{N}) \leq 2 \varepsilon^{1.5}$.
Furthermore, in \Cref{sec:paulietal} we improve the exponent to $r=2$ for
Pauli channels.
\subsection{Degrading a low-noise channel with its complementary channel}\label{sec:low-noise-degrading}
Low-noise channels as defined at the beginning of this section are natural examples of ``almost'' degradable channels, since the identity channel is trivially degradable: its degrading map is given by its complementary channel $\id^c(\cdot) = \tr(\cdot)|0\rangle \langle 0|_E$ that outputs a fixed state to the environment.
This suggests that the same holds approximately for low-noise channels, i.e., a channel $\mathcal{N}$ that is $\varepsilon$-close to the identity should be approximately degraded by its complementary channel $\mathcal{N}^c$.
Indeed, one of the main results of this paper, \Cref{thm:complementary-degrading} below, shows that a channel $\mathcal{N}$ with $\|\mathcal{N}-\id\|_\diamond \leq \varepsilon$ is $2\varepsilon^{3/2}$-approximately degradable with respect to its complementary channel.
We prove later (\Cref{thm:gen-pauli-channels} in \Cref{sec:paulietal}) that the dependence on $\varepsilon$ can be improved to quadratic order for the class of Pauli channels.
\begin{theorem}\label{thm:complementary-degrading}
Let $\mathcal{N}$ be a low-noise quantum channel, i.e., $\|\mathcal{N}-\id\|_\diamond \leq \varepsilon$ for some $\varepsilon\in[0,2]$.
Then $\mathcal{N}$ is $2\varepsilon^{3/2}$-approximately degradable:
\begin{align}
\| \mathcal{N}^c - \mathcal{N}^c\circ\mathcal{N} \|_\diamond \leq 2\varepsilon^{3/2}.
\end{align}
\end{theorem}
\begin{proof}
Let $\rho$ be a quantum state achieving the maximum in the definition \eqref{eq:diamond-norm} of the diamond norm of $\mathcal{N}^c - \mathcal{N}^c\circ\mathcal{N}$, so that
\begin{align}
\| \mathcal{N}^c - \mathcal{N}^c\circ\mathcal{N} \|_\diamond &= \| \id\otimes \mathcal{N}^c (\rho) - \id\otimes (\mathcal{N}^c\circ \mathcal{N})(\rho)\|_1\\
& = \| \id\otimes \mathcal{N}^c (\delta)\|_1,
\label{eq:diamond-norm-achieved}
\end{align}
where $\delta\coloneqq \rho - \id\otimes\mathcal{N}(\rho)$.
This is a traceless, Hermitian operator that can be expressed as
\begin{align}
\delta = \sum\nolimits_i \lambda_i |\psi_i\rangle\langle \psi_i|,
\end{align}
where $\sum_i\lambda_i = 0$ since $\delta$ is traceless, and $\sum_i |\lambda_i| \leq \varepsilon$ since $\|\mathcal{N}-\id\|_\diamond \leq \varepsilon$.
Furthermore, we have $\tr_2\delta = \tr_2\rho - \tr_2(\id\otimes\mathcal{N}(\rho)) = 0$ due to $\mathcal{N}$ being trace-preserving, and hence,
\begin{align}
\tr_2\delta = \sum\nolimits_i \lambda_i \tr_2|\psi_i\rangle\langle \psi_i| = 0.
\label{eq:delta-partial-trace}
\end{align}
Using the triangle inequality for the trace distance, we bound
\begin{align}
\| \mathcal{N}^c - \mathcal{N}^c\circ\mathcal{N} \|_\diamond &= \| \id\otimes\mathcal{N}^c(\delta) \|_1\\
&\leq \| \id\otimes \mathcal{N}^c(\delta) - \id \otimes \id^c(\delta)\|_1 + \|\id\otimes \id^c(\delta) \|_1,\label{eq:triangle-ineq}
\end{align}
where the complementary channel $\id^c$ of the identity map is the completely depolarizing channel
\begin{align}
\id^c(X) = \tr(X) |0\rangle\langle 0|,
\end{align}
defined in terms of some pure state $|0\rangle$ of the environment of $\mathcal{N}$.
For the first term in \eqref{eq:triangle-ineq}, we have by another application of the triangle inequality that
\begin{align}
\| \id\otimes \mathcal{N}^c(\delta) - \id \otimes \id^c(\delta)\|_1 & \leq \sum\nolimits_i |\lambda_i| \| \id\otimes\mathcal{N}^c(|\psi_i\rangle\langle \psi_i|) - \id\otimes\id^c(|\psi_i\rangle\langle \psi_i|)\|_1 \\
&\leq 2\sqrt{\varepsilon}\sum\nolimits_i |\lambda_i| \\
&\leq 2\varepsilon^{3/2},
\end{align}
where in the second inequality we used the following bound that holds for all $i$ and follows from \Cref{cor:complementary-channel-norm}:
\begin{align}
\| \id\otimes\mathcal{N}^c(|\psi_i\rangle\langle \psi_i|) - \id\otimes\id^c(|\psi_i\rangle\langle \psi_i|)\|_1 \leq \| \mathcal{N}^c - \id^c \|_\diamond \leq 2\sqrt{\varepsilon}.
\end{align}
For the second term in \eqref{eq:triangle-ineq}, we have
\begin{align}
\id\otimes\id^c(\delta) = \sum\nolimits_i \lambda_i \tr_2(|\psi_i\rangle\langle \psi_i|) \otimes |0\rangle\langle 0| = 0
\end{align}
by \eqref{eq:delta-partial-trace}, and hence, $\|\id\otimes \id^c(\delta) \|_1 = 0$.
This concludes the proof.
\end{proof}
Combining \Cref{thm:complementary-degrading} with \Cref{lem:qbound1}
and \Cref{lem:pbound1}, we obtain the following main result:
\begin{theorem}\label{lem:qbound2}
If $\|\mathcal{N}-\id\|_\diamond \leq \varepsilon$ for some $\varepsilon\in[0,2]$,
then
\begin{align}
| Q(\mathcal{N}) - I_c(\mathcal{N}) | &\leq
- 3 \varepsilon^{1.5} \log \varepsilon
+ \left( \left(\ln 2\right)^{-1}
+ \log|E| + \frac{1}{2} \log(|E|{-}1) \right) \, 2 \varepsilon^{1.5}
+ O(\varepsilon^{4.5}) \\
| P(\mathcal{N}) - I_c(\mathcal{N}) | &\leq - 9 \varepsilon^{1.5} \log \varepsilon
+ \left( 3\left(\ln 2\right)^{-1}
+ 4\log|E| + \log(|E|{-}1) \right) \, 2 \varepsilon^{1.5}
+ O(\varepsilon^{3}).
\end{align}
\end{theorem}
Recall that $\varepsilon^{1.5} \log \varepsilon$ goes to zero faster than $\varepsilon^b$ for any
$b<1.5$. \Cref{lem:qbound2} thus narrows the uncertainty of both
capacities to $\varepsilon^b$ for $b \approx 1.5$. Furthermore, since the
channel $\mathcal{N}$ can already communicate quantum data at the rate
$I_c(\mathcal{N})$ using a non-degenerate quantum error correcting code
\cite{Lloyd97,Shor02,Devetak05}, \Cref{lem:qbound2} shows that
degenerate codes only improve quantum communication rates in
$O(\varepsilon^b)$ for $b \approx 1.5$. Such a code also transmits
private data, and shielding cannot improve the private capacity by the
same order.
\section{Generalized Pauli channels}
\label{sec:paulietal}
In this section, we apply our results from \Cref{sec:general} to the
\emph{generalized Pauli channels} on finite dimension $d$. This class of
channels includes
the depolarizing channel and the $XZ$-channel acting on qubits. The
quantum and private capacities of these channels have remained
unknown for more than 20 years
(except for very special extreme values of the noise parameters).
Our main result is that for generalized Pauli channels
that are $\varepsilon$-close to the identity channel, the upper bound for the
degradability parameter in \Cref{thm:complementary-degrading} can be
improved to $O(\varepsilon^2)$. We show how a complementary channel with
suitably modified noise parameter can be used to achieve such
improved approximate degrading.
We start by introducing the \emph{generalized Pauli channels} on
finite dimension $d$. Define the generalized Pauli $X$ and $Z$
operators on $\mathbb{C}^d$:
$$ X|i\> = |i{+}1\>, ~~Z|i\> = \omega^i |i\>\,,$$ where $\{|i\>\}$ is the
computational basis, addition is modulo $d$, and $\omega$ is a primitive
$d^{\rm th}$ root of unity. The generalized Pauli basis is
given by $G = \{X^k Z^l: 0 \leq k,l \leq d{-}1\}$, and the generalized
Pauli channel has the form
$$ \mathcal{N}(\rho) = \sum_{U \in G} p_U U \rho U^\dagger \,,$$
where $\{p_U\}_{U \in G}$ is a probability distribution.
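For concreteness, in dimension $d=3$ we have $\omega = e^{2\pi i/3}$ and
$$ X|0\> = |1\>, \quad X|1\> = |2\>, \quad X|2\> = |0\>, \qquad Z|j\> = \omega^j |j\>\,,$$
so $G$ consists of the nine unitaries $X^kZ^l$ with $0\leq k,l\leq 2$, which are pairwise orthogonal with respect to the Hilbert--Schmidt inner product.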
The above reduces to the \emph{Pauli channel} in the special case $d=2$,
\begin{align}
\mathcal{N}_\mathbf{p}(\rho) = p_0 \rho + p_1 X\rho X + p_2 Y\rho Y + p_3 Z\rho Z,
\label{eq:gen-pauli-channel}
\end{align}
where $X, Y, Z$ are the usual Pauli matrices in $\mathcal{B}(\mathbb{C}^2)$,
and $\mathbf{p}=(p_0,p_1,p_2,p_3)$ is a probability distribution.
We first illustrate the main ideas on the simpler depolarizing channel, and then state the general result for the Pauli
channel which we prove in \Cref{sec:proof-gen-pauli-channels}. Generalization to
higher dimensions can be done by expressing the channel input in the
basis $G$, and noting that the generalized Pauli channel acts diagonally
in this basis. The derivation is straightforward and left as an
exercise for the interested reader.
\subsection{Depolarizing channel}\label{sec:depol-channel}
We first consider the \emph{qubit depolarizing channel} with error $p$:
\begin{align}
\mathcal{D}_p (\rho) \coloneqq (1-p)\rho + \frac{p}{3}(X\rho X + Y\rho Y + Z\rho Z)\quad \text{for }p\in[0,1],\label{eq:depolarizing-channel}
\end{align}
which corresponds to setting $p_1=p_2=p_3=\frac{p}{3}$ in \eqref{eq:gen-pauli-channel}.
Note that $\|\id - \mathcal{D}_p\|_\diamond = 2p$, which can be seen as follows:
The diamond norm distance in \eqref{eq:diamond-norm}
is at least $2p$ by choosing $M$ to be the
maximally entangled state, and at most $2p$ by
the feasible solution $Z_{AB} = p \mathcal{J}(\id)$
in \eqref{eq:diamond-norm-sdp}.
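In more detail, the lower bound follows by evaluating the trace distance on the maximally entangled state $\Phi = |\Phi\rangle\langle\Phi|$ with $|\Phi\rangle = \frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$:
\begin{align}
\big\| \Phi - \id\otimes\mathcal{D}_p(\Phi) \big\|_1 = \Big\| p\,\Phi - \frac{p}{3}\sum_{P\in\{X,Y,Z\}} (\id\otimes P)\,\Phi\,(\id\otimes P) \Big\|_1 = 2p,
\end{align}
since $\Phi$ and the three states $(\id\otimes P)\Phi(\id\otimes P)$ are the four mutually orthogonal Bell states, so the trace norm is the sum $p + 3\cdot\frac{p}{3} = 2p$ of the absolute values of the weights.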
\Cref{thm:complementary-degrading} then implies that $\mathcal{D}_p$ is $2^{2.5} p^{1.5}$-degradable when choosing the complementary channel $\mathcal{D}_p^c$ of the depolarizing channel with error $p$ as the degrading map.
However, solving the SDP \eqref{eq:approx-deg-sdp} numerically shows that $\dg(\mathcal{D}_p)
\approx O(p^2)$, which is better than the bound promised by \Cref{thm:complementary-degrading} by half an order.
Here, we give an analytic proof of the above numerical observation. We will
prove that $\dg(\mathcal{D}_p)$ is indeed $O(p^2)$. This is achieved by
choosing an approximate
degrading map to be the complementary channel $\mathcal{D}_s^c$, where $s =
p+ap^2$ for a suitable choice of the parameter $a>0$. To see the
intuition, suppose we want $\mathcal{D}_s^c \circ \mathcal{D}_p$ to be close to
$\mathcal{D}_p^c$. Then, choosing $s$ to be slightly larger than $p$
transmits slightly more information to Bob in the output of $\mathcal{D}_s^c$,
which compensates for the slightly worse input to $\mathcal{D}_s^c$ that is
corrupted by $\mathcal{D}_p$.
\begin{theorem} \label{thm:depol-degradability-parameter}
For $p \approx 0$, we have
\begin{align}
\dg(\mathcal{D}_p) \leq \frac{8}{9}\left(6 + \sqrt{2}\right) p^2 + O(p^{3}).
\end{align}
\end{theorem}
\begin{proof}
We first show that, setting $s = p + ap^2$, there is a value of $a$ for which $\|
\mathcal{D}_s^c \circ \mathcal{D}_p - \mathcal{D}_p^c \|_\diamond = O(p^2)$. Then, for this
$s$, we derive two upper bounds to $\| \mathcal{D}_s^c \circ \mathcal{D}_p - \mathcal{D}_p^c
\|_\diamond$: a rough bound with a simpler derivation showing the idea, and an improved bound with a more complex
derivation yielding a better leading constant.
The complementary channel of $\mathcal{D}_p$, which we refer to as the \emph{epolarizing channel} (cf.~\cite{LW15}), can be chosen to be
\begin{align}
\mathcal{D}_p^c(\rho) =
\begin{pmatrix}
(1- p) \tr (\rho) &
\sqrt{\frac{p(1-p)}{3}} \ip{X}{\rho} &
\sqrt{\frac{p(1-p)}{3}} \ip{Y}{\rho} &
\sqrt{\frac{p(1-p)}{3}} \ip{Z}{\rho} \\[2mm]
\sqrt{\frac{p(1-p)}{3}} \ip{X}{\rho} &
\frac{p}{3} \tr (\rho) &
- \frac{i p}{3} \ip{Z}{\rho} &
\frac{i p}{3} \ip{Y}{\rho} \\[2mm]
\sqrt{\frac{p(1-p)}{3}} \ip{Y}{\rho} &
\frac{i p}{3} \ip{Z}{\rho} &
\frac{p}{3} \tr (\rho) &
- \frac{i p}{3} \ip{X}{\rho} \\[2mm]
\sqrt{\frac{p(1-p)}{3}} \ip{Z}{\rho} &
- \frac{i p}{3} \ip{Y}{\rho} &
\frac{i p}{3} \ip{X}{\rho} &
\frac{p}{3} \tr (\rho)
\end{pmatrix}\!.
\label{eq:epolarizing-channel}
\end{align}
Using \eqref{eq:depolarizing-channel} and \eqref{eq:epolarizing-channel}, we further
obtain $\mathcal{D}_s^c \circ \mathcal{D}_p(\rho) = $
\begin{align}
\begin{pmatrix}
(1- s) \tr(\rho) &
\!\!\! \sqrt{\frac{s(1-s)}{3}} \left( 1{-}\frac{4p}{3}\right) \ip{X}{\rho} &
\!\!\! \sqrt{\frac{s(1-s)}{3}} \left( 1{-}\frac{4p}{3}\right) \ip{Y}{\rho} &
\!\!\! \sqrt{\frac{s(1-s)}{3}} \left( 1{-}\frac{4p}{3}\right) \ip{Z}{\rho} \\[2mm]
\sqrt{\frac{s(1-s)}{3}} \left( 1{-}\frac{4p}{3}\right) \ip{X}{\rho} &
\frac{s}{3} \tr(\rho) &
- \frac{i s}{3}\left( 1{-}\frac{4p}{3}\right)\ip{Z}{\rho} &
\frac{i s}{3}\left( 1{-}\frac{4p}{3}\right) \ip{Y}{\rho} \\[2mm]
\sqrt{\frac{s(1-s)}{3}} \left( 1{-}\frac{4p}{3}\right) \ip{Y}{\rho} &
\frac{i s}{3}\left( 1{-}\frac{4p}{3}\right) \ip{Z}{\rho} &
\frac{s}{3} \tr(\rho) &
- \frac{i s}{3} \left( 1{-}\frac{4p}{3}\right) \ip{X}{\rho} \\[2mm]
\sqrt{\frac{s(1-s)}{3}} \left( 1{-}\frac{4p}{3}\right) \ip{Z}{\rho} &
- \frac{i s}{3}\left( 1{-}\frac{4p}{3}\right) \ip{Y}{\rho} &
\frac{i s}{3} \left( 1{-}\frac{4p}{3}\right)\ip{X}{\rho} &
\frac{s}{3} \tr(\rho)
\end{pmatrix}\!.
\label{eq:epolarizing-channel-degraded}
\end{align}
We set $s=p+ap^2$ and $\Phi = \mathcal{D}_p^c - \mathcal{D}_s^c \circ \mathcal{D}_p = \mathcal{D}_p^c - \mathcal{D}_{p+ap^2}^c\circ \mathcal{D}_p$, which is given by the difference between \eqref{eq:epolarizing-channel} and \eqref{eq:epolarizing-channel-degraded}.
We begin with the rough bound: we show that for a suitable choice of $a$,
$\| \mathcal{D}_s^c \circ \mathcal{D}_p - \mathcal{D}_p^c \|_\diamond = O(p^2)$, which yields the following
upper bound on the degradability parameter $\dg(\mathcal{D}_p)$:
\begin{align}
\dg(\mathcal{D}_p) \leq \frac{64}{3} p^2 + O(p^{5/2}) \,. \label{eq:depol-deg-weaker-bound}
\end{align}
To upper bound $\|\Phi\|_\diamond$, we apply Lemma
\ref{lem:coefficients-bound}, which states that
\begin{align}
\|\Phi\|_\diamond
\leq |A| \, |B|^2 \, \| \mathcal{J}(\Phi) \|_{\max}
\leq 8 \; \| \mathcal{J}(\Phi) \|_{\max}.
\end{align}
Hence, we need to evaluate $\|\mathcal{J}(\Phi)\|_{\max}$, where the Choi matrix is given
by $\mathcal{J}(\Phi) = \sum_{i,j=0}^{1} |i\>\<j| \otimes \Phi(|i\>\<j|)$.
Due to the block structure of the Choi matrix,
$\| \mathcal{J}(\Phi) \|_{\max} = \max_{i,j} \| \Phi(|i\>\<j|) \|_{\max}$.
To find this maximum,
first note that for any $i$ and $j$, the quantities
$\tr (|i\>\<j|)$,
$|\ip{X}{|i\>\<j|}|$,
$|\ip{Y}{|i\>\<j|}|$, and $|\ip{Z}{|i\>\<j|}|$ can
only be $0$ or $1$.
So, from inspection of the difference between \eqref{eq:epolarizing-channel}
and \eqref{eq:epolarizing-channel-degraded},
$\max_{i,j} \| \Phi(|i\>\<j|) \|_{\max}$ is
either $s-p=ap^2$, or $\frac{1}{3}|s-p-\frac{4}{3}ps| = \frac{1}{3}|a-\frac{4}{3}|p^2+O(p^3)$, or
\begin{align}
c(p) \coloneqq
\sqrt{\frac{p(1-p)}{3}} - \left(1-\frac{4p}{3}\right)\sqrt{\frac{(p+ap^2)(1-p-ap^2)}{3}} \,.
\label{eq:c-function}
\end{align}
Expanding $c(p)$ in a Taylor series around $p=0$ yields
\begin{align}
c(p)
= \left(\frac{4}{3\sqrt{3}}-\frac{a}{2\sqrt{3}}\right) p^{3/2} + O(p^{5/2}) \,,
\end{align}
which is $O(p^{5/2})$ if $a=\frac{8}{3}$.
With this choice, $\max_{i,j} \| \Phi(|i\>\<j|) \|_{\max} = ap^2 = \frac{8}{3}p^2$ for
sufficiently small $p$.
Altogether, $\| \mathcal{J}(\Phi) \|_{\max} \leq \frac{8}{3}p^2
+ O(p^{5/2})$, and
$\| \Phi \|_\diamond \leq \frac{64}{3}p^2 + O(p^{5/2})$,
which completes the proof of \eqref{eq:depol-deg-weaker-bound}.
Finally, to prove the stronger assertion of the theorem,
\begin{align}
\dg(\mathcal{D}_p) \leq \frac{8}{9}\left(6 + \sqrt{2}\right) p^2 + O(p^{3}),\label{eq:depol-deg-stronger-bound}
\end{align}
we keep the choice $a=\frac{8}{3}$ to enforce that all coefficients of $\Phi = \mathcal{D}_p^c - \mathcal{D}_{p+ap^2}^c\circ \mathcal{D}_p$ are $O(p^2)$ by Corollary \ref{cor:choi-coefficients}.
However, we upper bound $\|\Phi\|_\diamond$ with a different technique.
Since $\Phi$ is a Hermiticity-preserving map, its diamond norm is maximized by a rank-$1$ state
(see for example \cite{Wat16}).
Furthermore, since $\mathcal{D}_p^c$ and $\mathcal{D}_{p+ap^2}^c\circ \mathcal{D}_p$ are jointly covariant under the full unitary group, the diamond norm $\|\Phi\|_\diamond$ is maximized by the (normalized) maximally entangled state $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ \cite[Cor.~2.5]{LKDW17}, and hence,
\begin{align}
\|\Phi\|_\diamond = \frac{1}{2} \|\mathcal{J}(\Phi)\|_1 = \frac{1}{2} \left\|\mathcal{J}(\mathcal{D}_p^c) - \mathcal{J}(\mathcal{D}_{p+ap^2}^c\circ \mathcal{D}_p) \right\|_1.
\end{align}
It follows from \eqref{eq:epolarizing-channel} and \eqref{eq:epolarizing-channel-degraded} that $\frac{1}{2}\mathcal{J}(\Phi) = \begin{pmatrix}
J_{00} & J_{01} \\ J_{10} & J_{11}
\end{pmatrix},$
where
\begin{align}
J_{00} &= \begin{pmatrix}
\frac{4 p^2}{3} & 0 & 0 & \frac{1}{2} c(p) \\[0.2cm]
0 & -\frac{4 p^2}{9} & -\frac{2}{27} i p^2 (8 p-3) & 0 \\[0.2cm]
0 & \frac{2}{27} i p^2 (8 p-3) & -\frac{4 p^2}{9} & 0 \\[0.2cm]
\frac{1}{2} c(p) & 0 & 0 & -\frac{4 p^2}{9}
\end{pmatrix}\\[1ex]
J_{01} &=
\begin{pmatrix}
0 & \frac{1}{2} c(p) & \frac{i}{2} c(p) & 0 \\[0.2cm]
\frac{1}{2} c(p) & 0 & 0 & -\frac{2}{27} p^2 (8 p-3) \\[0.2cm]
\frac{i}{2} c(p) & 0 & 0 & -\frac{2}{27} i p^2 (8 p-3) \\[0.2cm]
0 & \frac{2}{27} p^2 (8 p-3) & \frac{2}{27} i p^2 (8 p-3) & 0
\end{pmatrix}\\[1ex]
J_{10} &= \begin{pmatrix}
0 & \frac{1}{2} c(p) & - \frac{i}{2} c(p) & 0 \\[0.2cm]
\frac{1}{2} c(p) & 0 & 0 & \frac{2}{27} p^2 (8 p-3) \\[0.2cm]
- \frac{i}{2} c(p) & 0 & 0 & -\frac{2}{27} i p^2 (8 p-3) \\[0.2cm]
0 & -\frac{2}{27} p^2 (8 p-3) & \frac{2}{27} i p^2 (8 p-3) & 0
\end{pmatrix}\\[1ex]
J_{11} &= \begin{pmatrix}
\frac{4}{3} p^2 & 0 & 0 & -\frac{1}{2} c(p) \\[0.2cm]
0 & -\frac{4}{9} p^2 & \frac{2}{27} i p^2 (8 p-3) & 0 \\[0.2cm]
0 & -\frac{2}{27} i p^2 (8 p-3) & -\frac{4}{9} p^2 & 0 \\[0.2cm]
-\frac{1}{2} c(p) & 0 & 0 & -\frac{4}{9} p^2
\end{pmatrix}\!,
\end{align}
with $c(p)$ as defined in \eqref{eq:c-function}.
Using the triangle inequality for the trace norm, we get
\begin{align}
\frac{1}{2}\|\mathcal{J}(\Phi)\|_1 \leq \|J_{00}\|_1 + \|J_{01}\|_1 + \|J_{10}\|_1 + \|J_{11}\|_1 \eqqcolon F(p).
\end{align}
A Taylor expansion shows that $F(p) = \frac{8}{9}\left(6 + \sqrt{2}\right) p^2 + O(p^{3})$, from which the bound \eqref{eq:depol-deg-stronger-bound} follows.\footnote{See the Mathematica notebook \texttt{depol-deg-bound.nb} included as an ancillary file on \url{https://arxiv.org/abs/1705.04335}.}
\end{proof}
In \Cref{fig:depolarizing-channel} we compare the optimal degradability parameter $\dg(\mathcal{D}_p)$ with the quantity $\|\mathcal{D}_p^c - \mathcal{D}_{p+ap^2}^c\circ \mathcal{D}_p \|_{\diamond}$ for $a=\frac{8}{3}$, and the analytical upper bound $\frac{8}{9}(6+\sqrt{2})p^2$ obtained from Theorem \ref{thm:depol-degradability-parameter}.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
scale = 1.3,
axis lines=left,
scaled ticks=false,
tick label style={/pgf/number format/fixed, /pgf/number format/precision=3},
legend style = {at = {(0.05,0.95)},anchor = north west},
legend cell align = left,
grid = major,
xlabel = $p$,
xtick = {0,0.02,0.04,0.06,0.08,0.1},
]
\addplot[color=blue,thick] table[x=p,y=d] {depol.dat};
\addplot[color=red,thick] table[x=p, y=s] {depol.dat};
\addplot[draw=none, color=white] table[x=p, y=s] {depol.dat};
\addplot[smooth,mark=none, color = green, thick,domain=0:0.1] {8/9*(6+sqrt(2))*x^2};
\legend{$\dg(\mathcal{D}_p)$, $\|\mathcal{D}_p^c - \mathcal{D}_{s}^c\circ \mathcal{D}_p \|_{\diamond}$, where $s = p + \frac{8}{3} p^2$,$\frac{8}{9}(6+\sqrt{2})p^2$};
\end{axis}
\end{tikzpicture}
\caption{Plot of the optimal degradability parameter $\dg(\mathcal{D}_p)$ (blue) of the qubit depolarizing channel $\mathcal{D}_p$ computed using the SDP \eqref{eq:approx-deg-sdp}, together with the degradability parameter $\|\mathcal{D}_p^c - \mathcal{D}_{s}^c\circ \mathcal{D}_p \|_{\diamond}$ (red) obtained using the degrading map $\mathcal{D}_{s}^c$ with $s = p + \frac{8}{3}p^2$, and the analytical upper bound $\frac{8}{9}(6+\sqrt{2})p^2$ (green) obtained from Theorem \ref{thm:depol-degradability-parameter}.}
\label{fig:depolarizing-channel}
\end{figure}
Combining \Cref{thm:depol-degradability-parameter} with \Cref{lem:qbound1} and \Cref{lem:pbound1}, and using the fact that
\begin{align}
I_c(\mathcal{D}_p) = 1-h(p)-p\log 3,
\end{align}
we obtain the following:
\begin{theorem}\label{cor:qpcapdepol}
For small $p$, the quantum and private capacities of the qubit depolarizing channel $\mathcal{D}_p$ satisfy
\begin{align}
1 -h(p) - p\log 3 ~ \leq ~ & Q(\mathcal{D}_p) ~ \leq ~ 1-h(p) - p\log 3 - \frac{16}{9}(6+\sqrt{2}) \, p^2 \log p + O(p^2)\\
1-h(p) - p\log 3 ~ \leq ~ & P(\mathcal{D}_p) ~ \leq ~ 1-h(p) - p\log 3 - \frac{16}{3}(6+\sqrt{2}) \, p^2 \log p + O(p^2).
\end{align}
\end{theorem}
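The expression for $I_c(\mathcal{D}_p)$ used above corresponds to evaluating the coherent information at the maximally mixed input $\pi = I/2$, for which the channel output is again maximally mixed, and the environment entropy equals the Shannon entropy of the Pauli error distribution:
\begin{align}
I_c(\pi;\mathcal{D}_p) = S(B) - S(E) = 1 - H\!\left(1-p,\, \tfrac{p}{3},\, \tfrac{p}{3},\, \tfrac{p}{3}\right) = 1 - h(p) - p\log 3,
\end{align}
where the last equality uses the grouping property of the Shannon entropy: the error distribution is a $\mathrm{Bernoulli}(p)$ decision of whether an error occurs, followed by a uniform choice among $X$, $Y$, and $Z$.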
\subsection{Pauli channels and the $XZ$-channel} \label{sec:gen-pauli-channels}
The above discussion can be extended to all Pauli channels of the form in \eqref{eq:gen-pauli-channel}. Note that $\| \mathcal{N}_\mathbf{p} - \id \|_\diamond = 2 (p_1 + p_2 + p_3)$ by an argument similar to the one given for the depolarizing channel.
To state our result, we consider a Pauli channel $\mathcal{N}_\mathbf{p}$, where the probabilities $p_i$ for $i=1,2,3$ are polynomials $p_i(p) = c_i p + d_i p^2 + \cdots$ in a single parameter $p\in[0,1]$ without constant terms, and $p_0 = 1 - p_1 - p_2 - p_3$.
(Note that all Pauli channels can be modeled this way, and the polynomials are not unique.)
We now define
\begin{align}
\mathbf{s}(a_1,a_2,a_3)\coloneqq (p_0', p_1(p+a_1p^2), p_2(p+a_2p^2), p_3(p+a_3 p^2)),\label{eq:s-polynomial}
\end{align}
where again $p_0' = 1 - p_1(p+a_1p^2) - p_2(p+a_2p^2) - p_3(p+a_3p^2)$.\footnote{Note that
$p_i(p+a_ip^2)$ denote the polynomial $p_i$ with argument $p+a_ip^2$, not the product
of $p_i$ and $p+a_ip^2$. }
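As an illustration, take $p_i(p) = \frac{p}{3}$ for $i=1,2,3$, corresponding to the depolarizing channel of \Cref{sec:depol-channel}. With a common shift $a_1=a_2=a_3=a$,
\begin{align}
\mathbf{s}(a,a,a) = \left(1 - (p+ap^2),\; \tfrac{p+ap^2}{3},\; \tfrac{p+ap^2}{3},\; \tfrac{p+ap^2}{3}\right),
\end{align}
i.e., $\mathcal{N}_{\mathbf{s}}$ is again a depolarizing channel with shifted error parameter $s = p+ap^2$, recovering the degrading map used in \Cref{sec:depol-channel}.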
We then have the following result, whose proof we give in \Cref{sec:proof-gen-pauli-channels}:
\begin{theorem}\label{thm:gen-pauli-channels}
Let $\mathcal{N}_{\mathbf{p}}$ be a Pauli channel as in \eqref{eq:gen-pauli-channel} with $\mathbf{p} = (p_0,p_1(p),p_2(p),p_3(p))$, where $p_0 = 1-p_1(p)-p_2(p)-p_3(p)$, and the $p_i(p)$ are polynomials in $p$ with $p_i(0)=0$ for $i=1,2,3$.
Denote by $c_i$ the coefficient of $p$ in $p_i(p)$ for $i=1,2,3$.
If $c_i\neq 0$ then the choices $a_i \coloneqq 4\sum_{j\neq i}c_j$ in \eqref{eq:s-polynomial} ensure that
\begin{align}
\| \mathcal{N}_\mathbf{p}^c - \mathcal{N}_{\mathbf{s}(a_1,a_2,a_3)}^c \circ \mathcal{N}_\mathbf{p} \|_\diamond = O(p^2).\label{eq:quadratic-behavior}
\end{align}
If $c_i=0$ for some $i$, then any choice of $a_i$ for that $i$ yields \eqref{eq:quadratic-behavior}.
Furthermore, we have
\begin{align}
\dg(\mathcal{N}_{\mathbf{p}}) \leq \, 64 \, |c_1c_2+c_1c_3+c_2c_3| \, p^2 + O(p^3) \,.\label{eq:deg-pauli-channels}
\end{align}
\end{theorem}
\Cref{lem:qbound1} and \Cref{lem:pbound1} now yield the following:
\begin{corollary}\label{cor:pauli-channel-capacities}
Let $\mathcal{N}_{\mathbf{p}}$ be a Pauli channel as defined in \Cref{thm:gen-pauli-channels}, and define $
C_\mathbf{p} \coloneqq |c_1c_2 + c_1c_3+c_2c_3|.$
The quantum and private capacities of $\mathcal{N}_\mathbf{p}$ satisfy
\begin{align}
I_c(\mathcal{N}_\mathbf{p}) ~\leq~ & Q(\mathcal{N}_\mathbf{p}) ~\leq~ I_c(\mathcal{N}_\mathbf{p}) - \, 128 \; C_\mathbf{p} \, p^2\log p + O(p^2)\\
I_c(\mathcal{N}_\mathbf{p}) ~\leq~ & P(\mathcal{N}_\mathbf{p}) ~\leq~ I_c(\mathcal{N}_\mathbf{p}) - \,384 \; C_\mathbf{p} \, p^2\log p + O(p^2).
\end{align}
\end{corollary}
\Cref{thm:gen-pauli-channels} and \Cref{cor:pauli-channel-capacities} include the (weaker) result from \Cref{sec:depol-channel} about the depolarizing channel, for which we have $c_i = \frac{1}{3}$ for $i=1,2,3$, and hence $a_i = \frac{8}{3}$ and $C_\mathbf{p} = \frac{1}{3}$.
Another interesting example in the class of generalized Pauli channels is the $XZ$-channel
\begin{align}
\mathcal{N}_{p,\,q}^{XZ} (\rho) \coloneqq (1-p)(1-q) \rho + p(1-q) X\rho X + pq Y\rho Y + (1-p)q Z\rho Z ,\label{eq:XZ-channel}
\end{align}
that implements independent $X$ and $Z$ errors by applying an $X$-dephasing with probability $p$, and a $Z$-dephasing with probability $q$.
For our discussion, we set $p=q$ and denote the resulting $XZ$-channel by $\mathcal{C}_p$,
\begin{align}
\mathcal{C}_p (\rho) = (1-p)^2 \rho + (p-p^2)X\rho X + p^2 Y\rho Y + (p-p^2)Z\rho Z. \label{eq:XZ-channel-p}
\end{align}
Thus, we have $c_1 = 1$, $c_2 = 0$, $c_3 = 1$, $d_1 = -1$, $d_2 = 1$, and $d_3 = -1$.
Hence, the choices $a_1 = a_2 = a_3 = 4$ ensure \eqref{eq:quadratic-behavior} by \Cref{thm:gen-pauli-channels}, and from \eqref{eq:deg-pauli-channels} we obtain the analytic bound $\dg(\mathcal{C}_p) \leq 64 p^2 + O(p^3)$.
However, similar to Theorem \ref{thm:depol-degradability-parameter}, we can further improve the bound on $\dg(\mathcal{C}_p)$:
\begin{theorem}\label{thm:XZ-improved-coefficient}
For $p\approx 0$, we have
\begin{align}
\dg(\mathcal{C}_p) \leq 16 p^2 + 32 p^{5/2} + O(p^3).
\end{align}
\end{theorem}
\begin{proof}
For the $XZ$-channel $\mathcal{C}_p$, we have $p_0 = (1-p)^2$, $p_1=p_3=p-p^2$, and $p_2=p^2$ by \eqref{eq:XZ-channel-p}.
Furthermore, as in the discussion above we choose $s=p+4p^2$, and set $q_0 = (1-s)^2$, $q_1 = q_3 = s-s^2$, and $q_2 = s^2$, such that the map $\Phi=\mathcal{C}^c_p-\mathcal{C}^c_{s}\circ\mathcal{C}_p$ as given in \eqref{eq:phi-matrix} has coefficients that are $O(p^2)$ by Corollary \ref{cor:choi-coefficients}.
Since $\Phi$ is Hermiticity-preserving, its diamond norm is maximized by a pure state \cite{Wat16}, and since both $\mathcal{C}^c_p$ and $\mathcal{C}^c_s\circ\mathcal{C}_p$ are covariant with respect to the Pauli group, a 1-design, this pure state can be chosen to be the maximally entangled state $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ \cite[Cor.~2.5]{LKDW17}.
Hence,
\begin{align}
\|\mathcal{C}^c_p - \mathcal{C}^c_s\circ\mathcal{C}_p\|_\diamond = \|\Phi\|_\diamond = \frac{1}{2} \|\mathcal{J}(\Phi)\|_1.
\end{align}
Exploiting the block structure $\frac{1}{2}\mathcal{J}(\Phi) = \begin{pmatrix}
G_{00} & G_{01} \\ G_{10} & G_{11}
\end{pmatrix}$
that follows from \eqref{eq:phi-matrix} together with the triangle inequality for the trace norm, we obtain the upper bound
\begin{align}
\dg(\mathcal{C}_p) \leq \|\mathcal{C}^c_p - \mathcal{C}^c_s\circ\mathcal{C}_p\|_\diamond = \frac{1}{2} \|\mathcal{J}(\Phi)\|_1 \leq \|G_{00}\|_1 + \|G_{11}\|_1 + \|G_{01}\|_1 + \|G_{10}\|_1, \label{eq:dg-cC-upper-bound}
\end{align}
and a Taylor expansion of the right-hand side of \eqref{eq:dg-cC-upper-bound} shows that $\dg(\mathcal{C}_p) \leq 16 p^2 + 32 p^{5/2} + O(p^3)$, which proves the claim.\footnote{We refer to the Mathematica notebook \texttt{XZ-deg-bound.nb} included in the ancillary files of \url{https://arxiv.org/abs/1705.04335} for details.}
\end{proof}
In Figure \ref{fig:XZ-channel} we compare the optimal degradability parameter $\dg(\mathcal{C}_p)$ with the quantity $\|\mathcal{C}_p^c - \mathcal{C}_s^c\circ \mathcal{C}_p \|_{\diamond}$ and the analytical upper bound $16p^2 + 32p^{5/2}$ obtained from Theorem \ref{thm:XZ-improved-coefficient}.
Numerics suggest that the coherent information $I_c(\mathcal{C}_p)$ is achieved by the completely mixed state $\pi = I/2$, giving
\begin{align}
I_c(\mathcal{C}_p) = I_c(\pi; \mathcal{C}_p) = 1-2 h(p).
\label{eq:XZ-coh-info}
\end{align}
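Evaluating the right-hand side is straightforward: for the maximally mixed input the channel output is again maximally mixed, so $S(B)=1$, while the environment output is diagonal with entries given by the error probabilities in \eqref{eq:XZ-channel-p}, a distribution that factorizes into two independent $\mathrm{Bernoulli}(p)$ variables:
\begin{align}
I_c(\pi;\mathcal{C}_p) = 1 - H\big((1-p)^2,\, p(1-p),\, p^2,\, p(1-p)\big) = 1 - 2h(p).
\end{align}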
Putting Corollary \ref{cor:pauli-channel-capacities} and Theorem \ref{thm:XZ-improved-coefficient} together, and
using the above coherent information expression, we obtain
\begin{corollary}
For small $p$, the quantum and private capacities of the $XZ$-channel $\mathcal{C}_p = \mathcal{N}_{p,p}^{XZ}$ with equal $X$- and $Z$-dephasing probability satisfy
\begin{align}
1 - 2 h(p) ~\leq~ & Q(\mathcal{C}_p) ~\leq~ 1 - 2 h(p) - 32 \, p^2 \log p + O(p^2)\\
1 - 2 h(p) ~\leq~ & P(\mathcal{C}_p) \, ~\leq~ 1 - 2 h(p) - 96 \, p^2 \log p + O(p^2).
\end{align}
\end{corollary}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
scale = 1.3,
axis lines=left,
scaled ticks=false,
tick label style={/pgf/number format/fixed, /pgf/number format/precision=3},
legend style = {at = {(0.05,0.95)},anchor = north west},
legend cell align = left,
grid = major,
xlabel = $p$,
xtick = {0,0.02,0.04,0.06,0.08,0.1},
]
\addplot[color=blue,thick] table[x=p,y=d] {XZ.dat};
\addplot[color=red,thick] table[x=p, y=s] {XZ.dat};
\addplot[draw=none, color=white] table[x=p, y=s] {XZ.dat};
\addplot[smooth,mark=none, color = green, thick,domain=0:0.1] {16*x^2+32*x^(2.5)};
\legend{$\dg(\mathcal{C}_p)$, $\|\mathcal{C}_p^c - \mathcal{C}_{s}^c\circ \mathcal{C}_p \|_{\diamond}$, where $s = p + 4 p^2$,$16p^2 + 32p^{5/2}$};
\end{axis}
\end{tikzpicture}
\caption{Plot of the optimal degradability parameter $\dg(\mathcal{C}_p)$ (blue) of the $XZ$-channel $\mathcal{C}_p$ computed using the SDP \eqref{eq:approx-deg-sdp}, together with the degradability parameter $\|\mathcal{C}_p^c - \mathcal{C}_{s}^c\circ \mathcal{C}_p \|_{\diamond}$ (red) obtained using the degrading map $\mathcal{C}_{s}^c$ with $s = p + 4p^2$, and the analytical upper bound $16p^2 + 32p^{5/2}$ (green) obtained from Theorem \ref{thm:XZ-improved-coefficient}.}
\label{fig:XZ-channel}
\end{figure}
\section{Conclusion}\label{sec:conclude}
Our results can be extended to cover \emph{generalized low-noise} channels $\mathcal{N}$, for which there exists another channel $\mathcal{M}$ such that $\| \mathcal{M}\circ\mathcal{N} - I \|_{\diamond} \leq \epsilon.$
For example, this includes all channels that are close to isometric channels.
For a generalized low-noise channel, we have by Theorem \ref{thm:complementary-degrading} that
\begin{align} \label{Eq:MN-approx-deg}
\| (\mathcal{M}\circ\mathcal{N})^{c} - (\mathcal{M}\circ\mathcal{N})^{c}\circ (\mathcal{M}\circ\mathcal{N}) \|_{\diamond} \leq 2 \epsilon^{3/2}.
\end{align}
Furthermore, up to an isometry,
\begin{align}
(\mathcal{M}\circ \mathcal{N})^c(\rho) = (\mathcal{M}^c\otimes I_{E_1})(U_\mathcal{N} \rho U_{\mathcal{N}}^\dagger),
\end{align}
where $U_\mathcal{N}: A \rightarrow BE_1$ is an isometric extension of $\mathcal{N}$ and $\mathcal{M}^c : B \rightarrow E_2$, so that $\tr_{E_2}(\mathcal{M}\circ \mathcal{N})^c(\rho) = \mathcal{N}^{c}(\rho)$.
Equation~\eqref{Eq:MN-approx-deg} therefore implies
\begin{align}
\| \mathcal{N}^{c} - \tr_{E_2}(\mathcal{M}\circ\mathcal{N})^{c}\circ (\mathcal{M}\circ\mathcal{N}) \|_{\diamond} \leq 2 \epsilon^{3/2},
\end{align}
so that letting $\mathcal{D} = \tr_{E_2}(\mathcal{M}\circ\mathcal{N})^{c}\circ \mathcal{M}$ we have $\| \mathcal{N}^{c} - \mathcal{D}\circ\mathcal{N} \|_{\diamond} \leq 2 \epsilon^{3/2}$ and $\mathcal{N}$ has degradability parameter no bigger than $ 2 \epsilon^{3/2}$.
We thus find that the same bounds as in Theorem \ref{lem:qbound2} apply in the case of a generalized low-noise channel $\mathcal{N}$.
We conclude with some implications of our results.
The quantum and private capacities of a quantum channel are intractable to calculate in general. This is because nonadditivity effects require us to regularize the coherent information and private information to obtain the quantum and private capacity, respectively.
We have shown that for low-noise channels, for which $\|\mathcal{N} - \id\|_\diamond \leq \varepsilon$, such nonadditivity effects are negligible. In particular, we find that both the private and quantum capacities of these channels are given by the one-shot coherent information $I_c(\mathcal{N})$, up to corrections of order $\varepsilon^{1.5}\log \varepsilon$.
Furthermore, for the qubit depolarizing channel $\mathcal{D}_p$ we obtain even tighter bounds for both the quantum and private capacities:
\begin{align}
1 -h(p) - p\log 3 ~ \leq ~ & Q(\mathcal{D}_p) ~ \leq ~ 1-h(p) - p\log 3 - \frac{16}{9}(6+\sqrt{2}) \, p^2 \log p + O(p^2)\\
1-h(p) - p\log 3 ~ \leq ~ & P(\mathcal{D}_p) ~ \leq ~ 1-h(p) - p\log 3 - \frac{16}{3}(6+\sqrt{2}) \, p^2 \log p + O(p^2),
\end{align}
and similar results hold for all generalized Pauli channels in dimension $d$.
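The coinciding lower bounds above are the hashing bound, i.e., the coherent information of $\mathcal{D}_p$ at the maximally mixed input, where the environment entropy equals the entropy of the Pauli error distribution $\{1-p, p/3, p/3, p/3\}$. The underlying identity $H(\{1-p, p/3, p/3, p/3\}) = h(p) + p\log 3$ is elementary and easy to verify numerically (a sketch; function names are illustrative):

```python
import math

def H(probs):
    """Shannon entropy in bits."""
    return -sum(q * math.log2(q) for q in probs if q > 0)

def depolarizing_coh_info(p):
    # Coherent information of D_p at the maximally mixed input:
    # output entropy 1 minus the entropy of the Pauli error distribution.
    return 1 - H([1 - p, p/3, p/3, p/3])

def hashing_bound(p):
    # The closed form 1 - h(p) - p log 3 quoted in the bounds above.
    return 1 - H([p, 1 - p]) - p * math.log2(3)
```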
Our key new finding is that channels within $\varepsilon$ of perfect are also exceptionally close to degradable, with degradability parameter of $O(\varepsilon^{1.5})$ in general and $O(p^2)$ for generalized Pauli channels.
The nonadditivity of coherent information for a general channel implies
that degenerate codes are sometimes needed to achieve the quantum
capacity \cite{DiVincenzoSS98,SS07,Fern08,SY08,SS09,Cubitt2015unbounded},
but little is known about these codes despite 20 years of research.
Having shown that the coherent information is essentially the quantum
capacity for low-noise channels, we have also arrived at a refreshing
result that using random block codes on the typical subspace of the
optimal input (for the 1-shot coherent information) essentially
achieves the capacity.
Likewise, our findings have implications in quantum cryptography. In quantum
key distribution, quantum states are transmitted through
well-characterized noisy quantum channels that are subject to further
adversarial attacks. Parameter estimation is used to determine the
effective channel (see for example \cite{RGK05}) and the optimal key
rate of one-way quantum key distribution is the private capacity of
the effective channel.
These effective channels typically have low noise (e.g., $1-2\%$ in
\cite{QBER09}), and our results imply near-optimality of the simple
classical error correction and privacy amplification procedures that
approach the one-shot coherent information of the effective channel.
In particular, for the XZ-channel with bit-flip probability $p$, the
optimal key rate is $1-2h(p) + O(p^2\log p)$.
\section{Acknowledgements}
We thank Charles Bennett, Ke Li, John Smolin, and John Watrous for
inspiring discussions, and Mark M.~Wilde for helpful feedback. DL is
further supported by NSERC and CIFAR, and FL and GS by the National Science Foundation under Grant Number 1125844.
\section{Appendix: Approximation algorithm}
\label{app:approximation}
\approximationalgorithm*
\begin{proof}
We start by artificially enlarging the weight $w_j$ of every agent $a_j$ to $\max_i w_i$. In doing so, we increase the energy cost contribution of each agent $a_j$ by a factor of $\tfrac{\max_i w_i}{w_j}$.
Thus any $\rho$-approximation to this weight-enhanced problem gives us a $\rho \cdot \max_{i,j} \tfrac{w_i}{w_j}$-approximation for the original problem.
From now on assume without loss of generality that all agents have a uniform weight $w := \max w_i$.
Let $\textsc{Opt}$ be an optimal schedule for an instance of \textsc{WeightedDelivery}\xspace with uniform agent weights $w$ and capacities $\kappa = 1$.
We call a feasible schedule $S$ \emph{restricted} if in $S$ every message $m_i$ is transported by a single agent from $s_i$ to $t_i$ without any intermediate drop-offs.
By Theorem~\ref{theo-boc-nointermediate} there exists a restricted schedule $S^R$ with $\textsc{cost}(S^R) \leq 2\cdot \textsc{cost}(\textsc{Opt})$.
Let $\textsc{Opt}^R$ be any optimal restricted schedule with total energy consumption $\textsc{cost}(\textsc{Opt}^R) \leq \textsc{cost}(S^R) \leq 2\cdot \textsc{cost}(\textsc{Opt})$.
We define a complete undirected auxiliary graph $G' = (V',E')$ on all agent starting positions as well as all message sources and destinations,
$V' = \left\{ p_1, \ldots, p_k \right\} \cup \left\{ s_1, \ldots, s_m \right\} \cup \left\{ t_1, \ldots, t_m \right\}$.
Each edge $e=(u,v) \in E'$ has length $l_e := d_G(u,v)$.
$\textsc{Opt}^R$ has a natural correspondence to a vertex-disjoint path cover $PC(\textsc{Opt}^R)$ of $G'$ with exactly $k$ simple paths $P_1, \ldots, P_k$ that has the following properties:
\begin{itemize}
\item Each path $P_j$ contains exactly one agent starting position, namely $p_j$;\\
and $p_j$ is an endpoint of $P_j$.
\item Each destination node $t_i$ is adjacent to its source node $s_i$;\\
and $s_i$ lies between $t_i$ and the endpoint $p_j$.
\end{itemize}
Each path (possibly of length $0$) with endpoint at a starting position $p_j$ corresponds to the (possibly empty) schedule $\textsc{Opt}^R|_{a_j}$ and $\textsc{cost}(\textsc{Opt}^R) = \sum_{e \in PC(\textsc{Opt}^R)} l_e$.
%
Now let $TC^*$ denote a tree cover of minimum total length $\sum_{e \in TC^*} l_e$ among all vertex-disjoint tree covers $TC$ of $G'$ with exactly $k$ trees $T_1,\ldots,T_k$ that satisfy the following properties:
\begin{itemize}
\item Each tree $T_j$ contains exactly one agent starting position, namely $p_j$.
\item Each destination node $t_i$ is adjacent to its source node $s_i$.
\end{itemize}
Since $PC(\textsc{Opt}^R)$ itself is a tree cover satisfying the two mentioned properties, we immediately get $\sum_{e \in TC^*} l_e \leq \sum_{e \in PC(\textsc{Opt}^R)} l_e = \textsc{cost}(\textsc{Opt}^R)$.
By Theorem~\ref{thm:planning-restricted} we can use DFS-traversals in each component of the optimum tree cover $TC^*$ to construct in polynomial-time a schedule $S^*$
of total energy consumption $\textsc{cost}(S^*) \leq 2\cdot \sum_{e \in TC^*} l_e \leq 2\cdot \textsc{cost}(\textsc{Opt}^R) \leq 4\cdot \textsc{cost}(\textsc{Opt})$.
It remains to show that the tree cover $TC^*$ can be found in polynomial time:
Analogously to Theorem~\ref{thm:planning-restricted} we start with an empty graph on $V'$ to which we add all edges $\left\{ (s_i, t_i)\ | \ i \in \left\{ 1, \ldots, m \right\} \right\}$.
Then we add all other edges of $E'$ in increasing order of their lengths, disregarding any edges which would result \emph{either} in the creation of a cycle
\emph{or} in a join of two starting positions $p_i, p_j$ into the same tree.
\end{proof}
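The constrained Kruskal procedure in the last paragraph can be sketched in code (an illustration with hypothetical names, not the paper's implementation): a union-find structure records for each component whether it already contains an agent starting position, and an edge is discarded if it would close a cycle or merge two components that both contain a starting position.

```python
class DSU:
    def __init__(self, n, starts):
        self.parent = list(range(n))
        self.has_start = [i in starts for i in range(n)]

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # would create a cycle
        if self.has_start[ra] and self.has_start[rb]:
            return False  # would join two starting positions
        self.parent[ra] = rb
        self.has_start[rb] = self.has_start[rb] or self.has_start[ra]
        return True

def tree_cover(n, starts, st_pairs, edges):
    """n nodes of G'; `starts` = indices of agent positions p_j;
    `st_pairs` = mandatory (s_i, t_i) pairs; `edges` = (length, u, v)
    for the remaining pairs.  Returns the chosen tree-cover edges."""
    dsu = DSU(n, set(starts))
    # First add all (s_i, t_i) edges, then the rest in increasing length.
    chosen = [(u, v) for u, v in st_pairs if dsu.union(u, v)]
    for length, u, v in sorted(edges):
        if dsu.union(u, v):
            chosen.append((u, v))
    return chosen
```

On a toy instance with one agent and two messages, the procedure keeps both $(s_i,t_i)$ edges and connects the components without ever merging two starting positions.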
\section{Appendix: Collaboration}
\label{app:BoC}
\subsection*{An algorithm for WeightedDelivery of a single message}
\nonincreasing*
\begin{proof}
%
If agent $a_i$ hands the message over to agent $a_j$ with $w_{i}<w_{j}$ in any solution, we can construct a better solution by replacing
$a_j$'s trajectory carrying the message with the same trajectory using agent $a_i$. By the same argument we may also assume without loss of
generality that the weights of the agents carrying the message in an optimum schedule are strictly decreasing, since we can merge trajectories
of equal weight.
\end{proof}
\begin{example}
This example shows that Lemma~\ref{lemma:non-increasing} is not true for more than one message.
In the graph shown in Figure~\ref{fig:example_increasing_weights}, we let one agent $a_1$ with weight $w_1=1$ start in vertex $s_2$ and a second agent $a_2$ with weight $w_2=1.5$ start in vertex $v_1$. In the optimal schedule, message $2$ with starting location $s_2$ is first transported by $a_1$ to $v_1$ and from there by $a_2$ to its destination $t_2$. Thus, the weights of the agents transporting message $2$ are increasing in this case.
\begin{figure}[h!]
\tikzstyle{graphnode}=[circle,draw,minimum size=2.5em,scale=0.7]
\tikzstyle{dnode}=[scale=0.7]
\newcommand*{3}{1.7}
\newcommand*{1.2}{1.2}
\begin{tikzpicture}
\node (s2) at (0*3,0*1.2) [graphnode] {$s_{2}$};
\node (v1) at (1*3,0*1.2) [graphnode] {$v_{1}$};
\node (t2) at (2*3,0*1.2) [graphnode] {$t_{2}$};
\node (s1) at (1*3,1*1.2) [graphnode] {$s_{1}$};
\node (t1) at (2*3,1*1.2) [graphnode] {$t_{1}$};
\draw(s2) to node [dnode,above] () {1}(v1);
\draw(v1) to node [dnode,above] () {1}(t2);
\draw(v1) to node [dnode,left] () {1}(s1);
\draw(s1) to node [dnode,above] () {1}(t1);
\end{tikzpicture}
\caption{Example in which the weights of the agents, in the order in which they transport the message, are increasing.}\label{fig:example_increasing_weights}
\end{figure}
\end{example}
\subsection*{Upper bound on the benefit of collaboration}
\BoCUB*
\begin{proof}
We can assume without loss of generality that in the optimal schedule $\textsc{Opt}$ every message $i$ is transported on a simple path from its starting point $s_i$ to its destination $t_i$. This can easily be achieved by letting agents drop the messages at intermediate vertices if they would otherwise transport them in a cycle. We define the directed multigraph $G_S=(V, E \dotcup \overline{E})$ as follows:
\begin{itemize}
\item $V$ is the set of vertices of the original graph $G$.
\item For every time in the optimal schedule that an agent traverses an edge $\{u,v\}$ from $u$ to $v$ while carrying a set of messages $M'$, we add the arc $e=(u,v)$ to $E$ and $\bar{e} =(v,u)$ to $\overline{E}$. We further label both edges with the set of messages $M'$ and write~$M_e = M_{\bar{e}} = M'$ to denote these labels. We call the edges in $E$ \emph{original} edges and the edges in $\overline{E}$ \emph{reverse} edges.
\end{itemize}
We say that the tour of an agent $A$ \emph{satisfies the edge labels}, if every original edge $e \in E$ is traversed at most once by~$A$ and only while carrying the \emph{exact} set of messages $M_e$, and every reverse edge $\bar{e} \in \overline{E}$ is traversed by $A$ at most once and without carrying any message.
We will show that there exists a Eulerian tour satisfying the edge labels of every connected component of $G_S$. We then let the cheapest agent in each connected component follow the respective Eulerian tour. This agent traverses every edge exactly twice as often as the edge is traversed in the optimal schedule~$\textsc{Opt}$ by all agents. As we choose the cheapest agent in each connected component, we obtain a schedule $S$ with $\textsc{cost}(S)\leq 2 \cdot \textsc{cost}(\textsc{Opt})$.
By only considering a subset of the messages and a subschedule of~$\textsc{Opt}$, we may from now on assume that~$G_S$ is strongly connected (by construction, every connected component of $G_S$ is strongly connected). Further, let $a_{\text{min}}$ be an agent with minimum cost among the agents that move in~$\textsc{Opt}$, let $M(v)$ be the set of messages currently placed on vertex $v$, and let $M(a_{\text{min}})$ be the set of messages currently transported by agent~$a_{\text{min}}$. We first show that the procedure \textproc{computeTour} computes a closed tour for~$a_{\text{min}}$ that satisfies the edge labels, and afterwards we explain how we can iterate the procedure to obtain a Eulerian tour satisfying the edge labels.
\begin{algorithm}[t]
\caption{computeTour}
\begin{algorithmic}
\Function{computeTour}{}
\State drop all messages
\If {$\exists$ edge $e \in E$ incident to current vertex}
\If{$M($current vertex$) \supseteq M_e$}
\State pickup messages $M_e$, traverse $e$ and delete it
\Else
\State let $j \in M_e \setminus M($current vertex$)$
\State \textproc{fetchMessage}($j$, currentVertex)
\EndIf
\ElsIf{$\exists$ edge $\overline{e}\in \overline{E}$}
\State traverse $\overline{e}$ and delete it
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\caption{fetchMessage}
\begin{algorithmic}
\Function{fetchMessage}{$i, v$}
\State drop all messages
\While{$i \notin M($current vertex$)$}
\If{there is an edge $\bar{e} \in \overline{E}$ with $i \in M_e$ leaving the current vertex}
\State traverse $\bar{e}$ and delete it
\Else
\State {\bf give up}
\EndIf
\EndWhile
\While{$v \neq$ current vertex}
\State let $e\in E$ be edge incident to current vertex with $i \in M_e$
\If {$M($current vertex$) \cup M(a_{\text{min}}) \supseteq M_e$}
\State pickup messages $M_e$, drop all other messages, traverse $e$ and delete it
\Else
\State let $j \in M_e \setminus (M($current vertex$) \cup M(a_{\text{min}}))$
\State \textproc{fetchMessage}($j$, currentVertex)
\EndIf
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
\medskip
\underline{Claim 1:} If the agent $a_{\text{min}}$ starts in a vertex $v_0$ and follows the tour computed by \textproc{computeTour}, it satisfies the edge labels in every step and returns to its starting location.
\medskip
Both procedures \textproc{computeTour} and \textproc{fetchMessage} make sure that the agent traverses every edge $e \in E$ with label $M_e$ while carrying the exact set of messages $M_e$ and every edge $\bar{e} \in \overline{E}$ while carrying no messages. So the first part of the claim is clear. We only need to show that~$a_{\text{min}}$ cannot get stuck at some vertex $v^*$ before returning to $v_0$. As $G_S$ is Eulerian (ignoring the labels) and edges are deleted once they are traversed, this can only happen if some call \textproc{fetchMessage}$(i, v)$ gives up at vertex~$v^*$.
This means that the current vertex $v^*$ does not contain message $i$ and has no edge $\bar{e} \in \overline{E}$ with a label containing message~$i$.
Note that when $a_{\text{min}}$ is currently proceeding according to the call \textproc{fetchMessage}$(i, v)$, it will be on a vertex of the path that message~$i$ takes from its start $s_{i}$ to its destination $t_{i}$ in the optimal schedule $\textsc{Opt}$, and this path is simple by our initial assumption.
Also note that, since edge labels are obeyed, message~$i$ only ever moves forward along its path in~$\textsc{Opt}$.
This means that if~$a_{\text{min}}$ is stuck at vertex~$v^*$, there must initially have been an edge $\bar{e}=(v^*,w) \in \overline{E}$ incident to~$v^*$ with $i \in M_e$ that was taken by the agent earlier and then deleted. The agent traverses edges in $\overline{E}$ only in the procedure \textproc{fetchMessage}. So there must have been a call \textproc{fetchMessage}$(j_1, v_1)$ before, where the agent traversed the edge $\bar{e}$. This call cannot have been completed, as otherwise the original edge $e=(w,v^*) \in E$ corresponding to $\bar{e}$ would have been used by~$a_{\text{min}}$ and the message $i$ would have already reached $v^*$, since~$i\in M_e$. This contradicts that the message path of $i$ is simple and that the agent is currently proceeding according to the call \textproc{fetchMessage}$(i, v)$ at vertex $v^*$.
As the call \textproc{fetchMessage}$(j_1, v_1)$ is not complete, there must be a vertex $v_2$ and a message $j_2$ missing at this vertex that is needed to carry $j_1$ further on its path to the destination, and a call \textproc{fetchMessage}$(j_2, v_2)$ which is also
incomplete. By iterating this argument, we obtain that the current stack of functions is \textproc{fetchMessage}$(j_s, v_s), \ldots,$ \textproc{fetchMessage}$(j_1, v_1)$ for some $s \in \mathbb{N}$, where $j_s=i$ and $v_s=v$. In the optimal schedule $\textsc{Opt}$ the message $j_2$ needs to be transported to $v_2$ before $j_1$ can be further transported from
$v_2$ together with $j_2$. Similarly, message $j_r$ needs to be transported to $v_r$ before message $j_{r-1}$ can be transported further together with message $j_r$ from $v_r$ for $r=2,\ldots, s$. Moreover, on the edge $e=(w,v^*)$ the messages $j_1$ and
$j_s=i$ need to be transported together; in particular, message $j_1$ needs to be
transported together with $j_s$ before $j_s$ can be transported further. But in the optimal schedule the messages must be transported in a certain sequence, and it cannot be that message $i$ needs to be transported to $v$ before message $j_1$ is transported to $v_1$ and vice versa. Thus, \textproc{computeTour} must terminate with $a_{\text{min}}$ returning to the starting location~$v_0$.
\bigskip
\underline{Claim 2:} After completing a tour given by \textproc{computeTour} the following holds: Every message $i$ has either been transported to its destination or it is on a vertex $v_i$ such that there is a path from $v_i$ to $t_i$ with edges in $E$ containing~$i$ in their labels, and a path in the reverse direction with edges in $\overline{E}$ containing~$i$ in their labels.
\bigskip
Every edge $e \in E$ with label $M_e$ is only traversed if the agent~$a_{\text{min}}$ carries the set of messages $M_e$. Thus at any time there is a path from the current location~$v_i$ of message~$i$ to its destination~$t_i$ with edges containing~$i$ in the label. This shows the first part of the claim.
Observe that a completed call \textproc{fetchMessage}$(i,v)$ yields a closed walk, as the agent starts and ends in $v$. Moreover, it first traverses exactly all edges in $\overline{E}$ on the path from~$v$ to the current position~$v_i$ of message~$i$ and then all edges in~$E$ on the path from~$v_i$ to~$v$. Inductively, this also holds for all levels of recursive calls of \textproc{fetchMessage}. Hence, for every original edge $e \in E$ also the corresponding reverse edge $\bar{e} \in \overline{E}$ is traversed in a call of \textproc{fetchMessage}.
This fact also implies that for any edge~$E \dotcup \overline{E}$ traversed in the procedure \textproc{computeTour} (and not in \textproc{fetchMessage}), the corresponding original/reverse edge cannot be traversed in a call of \textproc{fetchMessage}. Inductively, we can therefore argue that if~$e_1,e_2,\ldots, e_s \in E \dotcup \overline{E}$ are the edges traversed in the procedure \textproc{computeTour} in this order, such that the corresponding original/reverse edges were not traversed, then~$e_s,\ldots, e_1$ is a path from the current location of~$a_{\text{min}}$ to its starting vertex. This shows that at termination for every original edge $e \in E$ also the corresponding reverse edge $\bar{e} \in \overline{E}$ is traversed.
\medskip
\underline{Claim 3:} Combining the tours returned by multiple calls of \textproc{computeTour} yields a Eulerian tour that satisfies the edge labels in every step.
\medskip
Assume that the tour~$T$ resulting from a call of \textproc{computeTour} does not traverse all edges of~$G_S$. Let~$v_0$ be the starting vertex of the tour, $v$ be the last vertex on the tour that is incident to an edge which is not visited, and~$v_i$ be the position of message~$i$ after the tour~$T$ is finished. Further, let~$G'_S$ be the graph~$G_S$ after the call of \textproc{computeTour}, i.e., without all edges in~$T$ and with message~$i$ at position~$v_i$ instead of~$s_i$. We want to show that we can run \textproc{computeTour} on~$G'_S$ with~$a_{\text{min}}$ starting at~$v$ and then add the resulting tour~$T'$ to~$T$ as follows: First~$a_{\text{min}}$ follows~$T$ until the last time it visits~$v$, then it follows~$T'$, and finally the remaining part of~$T$.
The graph~$G_S'$ is a feasible input to \textproc{computeTour} by Claim~2. It corresponds to the instance of \textsc{WeightedDelivery}\xspace in which all message deliveries of the schedule $\textsc{Opt}$ that are carried out in the tour~$T$ by~$a_{\text{min}}$ are completed, and the agent positions are adapted accordingly. By Claim~1, \textproc{computeTour} will produce a tour that satisfies the edge labels. The only problem that can occur when combining the tours~$T$ and~$T'$ therefore is that
while following the tour~$T'$, \textproc{fetchMessage}$(i,v)$ is called, but some message~$i$ has not yet been transported to~$v_i$ because the tour~$T$ has not been completed. This means that vertex~$v_i$ is visited after the last time~$v$ is visited by the tour~$T$. By the choice of~$v$, all edges incident to~$v_i$ must be visited by the tour $T$; in particular, we must have~$v_i=t_i$, and message~$i$ is delivered to its destination by the tour $T$. But then~$G'_S$ does not contain any edge with label~$i$ by Claim~2.
By iteratively applying the above argument, we obtain a Eulerian tour that satisfies the edge labels in every step.
\end{proof}
\subsection*{Upper bound on the benefit of collaboration for a single message}
\BoConemessageUB*
\begin{proof}
By using Dijkstra's algorithm, we can determine the agent that can transport the message from $s$ to $t$ alone with lowest cost. We need to show that this costs at most $1/\ln(2)$ times the cost of an optimum solution using all agents.
Fix an optimum solution and let the agents $a_1,a_2,\dots,a_r$ be labeled in the order in which they transport the message in this optimum solution (ignoring unused agents).
We can assume by Lemma~\ref{lemma:non-increasing} that $w_1 >w_2 > \ldots >w_r$.
By scaling, we can further assume without loss of generality that $w_r=1$ and that the total distance traveled by the message is 1.
Now, for each point $x \in [0,1]$ along the message path there is an agent $a_j$ with cost $w_j$ carrying the message at this point in the optimum schedule and we can define a function $f$ with $f(x)=w_j$.
The function $f$ is a step function that is monotonically decreasing by Lemma~\ref{lemma:non-increasing} with $f(0)=w_1$ and $f(1)=w_r=1$.
We now choose the largest $b$ such that $f(x) \geq \tfrac{b}{x+1}$ for all $x \in [0,1]$, see Figure~\ref{fig:benefit-of-collaboration}.
\begin{figure}[t!]
\vspace{-4ex}
\centering
\usetikzlibrary{arrows,intersections}
\tikzstyle{gLine}=[thick]
\tikzstyle{kreis}=[circle,inner sep=0pt, minimum size=3pt,draw,thick,color=black,fill=white]
\tikzstyle{niceArrow}=[->,>=stealth',
dot/.style = {draw,
fill = white,
circle,
inner sep = 0pt,
minimum size = 4pt
}]
\begin{tikzpicture}[scale=0.75]
\coordinate (O) at (0,0);
\draw[gLine,niceArrow] (-0.3,0) -- (8,0) coordinate[label = {below:$x$}] (xmax);
\draw[gLine,niceArrow] (0,-0.3) -- (0,3) coordinate[label = {}] (ymax);
\draw[gLine] (0,2.5)--(1.5,2.5);
\node (k1) at (1.5,2.5) [kreis]{};
\draw[gLine] (1.5,2.1)--(2.5,2.1);
\node (k2) at (2.5,2.1) [kreis]{};
\draw[gLine] (2.5,1.65)--(3,1.65);
\node (k3) at (3,1.65) [kreis]{};
\draw[gLine] (3,1.4)--(5.5,1.4);
\node (k3) at (5.5,1.4) [kreis]{};
\draw[gLine] (5.5,1.1)--(7,1.1);
\draw[gLine] (7,0.1)--(7,-0.1);
\node (l1) at (7,-0.4) [scale=1]{$1$};
\draw[dashed] (3,1.4)--(3,-0.1);
\draw[gLine] (3,0.1)--(3,-0.1);
\node (l2) at (3,-0.4) [scale=1]{$x^*$};
\draw[dashed] (3,1.4)--(0,1.4);
\draw[gLine] (-0.1,1.4)--(0.1,1.4);
\node (l3) at (-0.5,1.4) [scale=1]{$w_{j^*}$};
\node (l3) at (5,0.7) [color=blue, scale=1]{$\frac{b}{x+1}$};
\node (l3) at (3,2.5) [, scale=1]{$f(x)$};
\node (i4) at (3,1.4) [circle,inner sep=0pt, minimum size=4pt, fill,color=blue]{};
\draw[thick,scale=4,domain=-0:2,smooth,variable=\x,blue] plot ({\x},{0.61/(\x+1)});
\path[name path=x] (0.3,0.5) -- (6.7,4.7);
\path[name path=y] plot[smooth] coordinates {(-0.3,2) (2,1.5) (4,2.8) (6,5)};
\end{tikzpicture}
\caption{Choosing the largest $b$ such that $\tfrac{b}{x+1}$ is a lower bound on the step-function $f$ representing the weight of the agent currently transporting the message.}
\label{fig:benefit-of-collaboration}
\end{figure}
Note that $b \geq 1$ as $f(x) \geq 1 \geq \frac{b}{x+1}$ for $b=1$ and all $x \in [0,1]$.
Further, let $g_{j}$ be the distance traveled by agent $a_j$ without the message and $g:=\sum_{j=1}^r g_j w_j$ the total cost for the distances traveled by all agents without the message.
We obtain the following lower bound for an optimum solution
\begin{linenomath}
\[ \textsc{cost}(\textsc{Opt}) = \int_0^1 f(x)\,\mathrm{d}x + g \geq \int_0^1 \frac{b}{x+1}\,\mathrm{d}x +g = b \ln(2) + g. \]
\end{linenomath}
By the choice of $b$, the functions $f(x)$ and $\frac{b}{x+1}$ coincide in at least one point in the interval $[0,1]$.
%
Let this point be $x^*$ and $a_{j^*}$ be the agent carrying the message at this point. This means that $f(x^*)= \frac{b}{x^*+1} = w_{j^*}$.
We will show that it costs at most $(1/ \ln(2))\textsc{cost}(\textsc{Opt})$ for agent $a_{j^*}$ to transport the message alone from $s$ to $t$.
The cost for agent $a_{j^*}$ to reach $s$ is bounded by $g_{j^*} w_{j^*} + x^* \cdot w_{j^*}$ and the cost for transporting the message from $s$ to $t$ is bounded by $w_{j^*}$.
Thus, the cost of the algorithm using only one agent can be bounded by
\begin{linenomath}
\[ \textsc{cost}(\textsc{ALG})\leq g_{j^*} w_{j^*} + x^* \cdot w_{j^*} + w_{j^*} = g_{j^*}w_{j^*} + (x^* +1) \cdot \frac{b}{x^*+1} = b + g_{j^*} w_{j^*}. \]
\end{linenomath}
By using that $g_{j^*} w_{j^*} \leq g$, we finally obtain
$
\tfrac{\textsc{cost}(\textsc{Alg})}{\textsc{cost}(\textsc{Opt})}\leq \tfrac{b + g_{j^*} w_{j^*}}{b \ln(2) +g} \leq
\tfrac{b}{ b \ln(2)} = \tfrac{1}{\ln(2)}.
$
\end{proof}
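The choice of $b$ in this proof and the resulting bound can be checked numerically for arbitrary decreasing step functions (a sketch with illustrative names; here we take $g = 0$). Since $f$ is constant on each step while $b/(x+1)$ decreases, the binding constraint on each step is at its left endpoint, so $b = \min_k w_k (x_k + 1)$:

```python
import math
import random

def largest_b(steps):
    """steps: list of (x_left, width, weight) describing a monotonically
    decreasing step function f on [0, 1] with f(1) = 1.  Returns the
    largest b with f(x) >= b/(x+1) on [0, 1]."""
    return min(w * (x + 1.0) for x, width, w in steps)

def opt_cost(steps):
    # Integral of f over [0, 1]: the carrying cost of Opt (with g = 0).
    return sum(w * width for x, width, w in steps)

def random_step_function(rng, k=5):
    # k steps with decreasing weights, normalized so the last weight is 1.
    cuts = [0.0] + sorted(rng.random() for _ in range(k - 1)) + [1.0]
    weights = sorted((1.0 + 3.0 * rng.random() for _ in range(k)), reverse=True)
    weights[-1] = 1.0
    return [(cuts[i], cuts[i + 1] - cuts[i], weights[i]) for i in range(k)]
```

Every random instance satisfies $b \geq 1$ and $b \ln 2 \leq \textsc{cost}(\textsc{Opt})$, so the single-agent cost $b$ is at most $\textsc{cost}(\textsc{Opt})/\ln 2$, matching the bound in the proof.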
\subsection*{Upper bound on the benefit of collaboration without intermediate dropoffs}
\nointermediate*
\begin{proof}
For $\underline{\kappa = \infty}$ this is a corollary of Theorem~\ref{theo-ub-boc}: an agent $a$ with infinite capacity can just as well keep a message $m_i$ once it has been picked up,
i.e., we can simply remove all the actions for this message between the first pick-up $(a,s_i,m_i,+)$ and the last drop-off $(a,t_i,m_i,-)$.
For $\underline{\kappa = 1}$ we need a different analysis. Given an optimum schedule $\textsc{Opt}$, we look at how the trajectories of the messages are connected by the agents.
More precisely, we construct an auxiliary multigraph $G' = (V', E')$ as follows:
The vertex set $V'$ consists of all messages. Then, for each agent $a$ we look at its subsequence of the optimum schedule, $\textsc{Opt}|_{a}$.
Since $a$ has capacity 1, its subsequence consists of alternating drop-offs $(a,p,m_i,-)$ and pick-ups $(a,q,m_j,+)$.
For each drop-off followed by a pick-up action we add an edge $(m_i,m_j)$ of length $d_G(p,q)$ to $E'$ (Figure~\ref{fig:boc-nointermediate}).
These edges correspond to the portions of the optimum schedule where the agent travels without carrying a message.
%
\begin{figure}[ht!]
\centering
\includegraphics[width=\linewidth]{boc-nointermediate}
\caption{(left) An optimal schedule, (center) the auxiliary graph, (right) the 2-approximation.}
\label{fig:boc-nointermediate}
\end{figure}
We assume without loss of generality that $G'$ is connected and denote by $a_{\min}$ the agent of minimum weight involved in $\textsc{Opt}$
(otherwise we can look at each connected component and its agent of minimum weight separately).
Assume that the first action of~$a_{\min}$ in~$\textsc{Opt}$ is to move from its starting position~$p_{\min}$ to a node~$p$ where it picks up message~$m_1$
(note that $p$ can potentially lie anywhere on the trajectory between $s_1$ and $t_1$).
We model this by adding a node~$p_{\min}$ to~$V'$ and connecting it to~$m_1$ by an edge of corresponding length~$d_G(p_{\min}, p)$.
Now we can take a minimum spanning tree of $G'$ and remove all redundant edges.
Note that the total length of the minimum spanning tree is a lower bound on the sum of the distances traveled by all the agents in $\textsc{Opt}$ \emph{without} carrying a message.
Thus any schedule~$S$ in which~$a_{\min}$ moves exactly twice along the trajectory of each message \emph{and} twice along the path corresponding to each edge of the minimum spanning tree of $G'$ has a cost of at most $\textsc{cost}(S) \leq 2\cdot \textsc{cost}(\textsc{Opt})$. The following tour satisfies this property and delivers each message to its destination immediately after it is picked up:
We first let~$a_{\min}$ walk to~$p$, from where~$a_{\min}$ proceeds towards~$s_1$.
When~$a_{\min}$ reaches~$s_1$, it picks up~$m_1$ and delivers it to~$t_1$ along its trajectory in~\textsc{Opt}.
Once $a_{\min}$ reaches $t_1$, we let it return to~$p$ and from there back to its original position~$p_{\min}$.
If, however, along its way from $p$ to $s_1$ or from $t_1$ to $p$ the agent visits an endpoint of a path corresponding to an edge of the minimum spanning tree,
we first let $a_{\min}$ serve the adjacent subtree recursively, see Figure~\ref{fig:boc-nointermediate}.
It is easy to see that in the resulting schedule~$S$ every message is directly transported from its source to its destination, and that the capacity is respected at all times.
\end{proof}
\section{Appendix: Coordination}
\label{app:coordination}
\begin{restatable}[Energy cost of a 3SAT-solution]{lem}{solutioncostlemma}
Given a satisfying assignment (a solution) for the variables of a 3CNF formula $F$, there is a feasible schedule $\textsc{Sol}$ of the agents in $G(F)$ which
has a total (energy) cost of $\textsc{cost}(\textsc{Sol}) = 4xy + 2y + x(y^2+y+1)\varepsilon$.
\label{lem:optimum-cost}
\end{restatable}
\begin{proof} We are given $x$ variables, $y$ clauses and a satisfying assignment of the variables. We construct a schedule from
the assignment as follows: Assume that variable $v$ is set to true and consider the corresponding variable box.
%
\begin{itemize}
%
\item The variable agent (which has weight 1) delivers all messages on the full $\mathit{true}$-path (which has length
$2y+\varepsilon$). The energy needed to do so is $2y+\varepsilon$.
%
\item Each literal agent placed on a node $v_{\mathit{false},i}$ transports two messages: the message with source
$v_{\mathit{false},i}$ and the message with source $v_{\mathit{false},i+1}$. Summing over all messages on the
$\mathit{false}$-path we need an energy of $2\cdot \left( (1+y\varepsilon) +\ldots + (1+ \varepsilon) \right)$.
%
\end{itemize}
Hence for the messages in each of the $x$ variable boxes we have an energy consumption of
$2y+\varepsilon+2y+2\sum_{i=1}^y{i\varepsilon} = 4y+(y^2+y+1)\varepsilon$. Furthermore, since we start from a satisfying assignment, the
source of each clause message is connected to at least one agent of weight $1+i\varepsilon$ that has not been used yet. Such an agent is adjacent to the
source of that clause message only and can move to the source (distance $\tfrac{1-i\varepsilon}{1+i\varepsilon}$), pick it up and deliver it
(distance $1$), hence for each clause we get an energy consumption of exactly $(1+i\varepsilon)( \tfrac{1-i\varepsilon}{1+i\varepsilon} +1 ) =
2$.
%
Summed over all variables and all clauses we get a total energy consumption of $x \cdot \left( 4y+(y^2+y+1)\varepsilon \right) + 2y = 4xy + 2y
+ x(y^2+y+1)\varepsilon$.
\end{proof}
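As a sanity check on the arithmetic above, the contributions can be re-summed mechanically. The sketch below (our own illustration in Python with exact rationals; the helper names are not part of the reduction) compares the per-box and per-clause sums against the closed form $4xy + 2y + x(y^2+y+1)\varepsilon$:

```python
from fractions import Fraction

def sol_cost(x: int, y: int) -> Fraction:
    """Energy of Sol: per variable box, the variable agent pays 2y + eps
    on its chosen path, the literal agents on the other path pay
    2 * sum_{i=1}^{y} (1 + i*eps), and each of the y clauses costs 2."""
    eps = Fraction(1, (8 * x * y) ** 2)             # eps = (8xy)^(-2)
    per_box = (2 * y + eps) + 2 * sum(1 + i * eps for i in range(1, y + 1))
    return x * per_box + 2 * y

def closed_form(x: int, y: int) -> Fraction:
    eps = Fraction(1, (8 * x * y) ** 2)
    return 4 * x * y + 2 * y + x * (y**2 + y + 1) * eps

for x, y in [(1, 1), (2, 3), (5, 7)]:
    assert sol_cost(x, y) == closed_form(x, y)
```

The exact rational arithmetic rules out floating-point artifacts in the comparison.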
\subparagraph{Fixed sequence (schedule without agent assignment).} We now fix a sequence $S^-$ that describes the schedule constructed in Lemma~\ref{lem:optimum-cost} but which does not allow us to infer a satisfying assignment.
In our sequence $S^-$ every pick-up action of one of the $4xy+y$ many messages $m_i$ at its location $s_i$ is immediately followed by its drop-off at its destination $t_i$.
Hence we are restricted to a schedule
$(\text{\textunderscore}, s_{\pi(1)}, m_{\pi(1)}, +), (\text{\textunderscore}, t_{\pi(1)}, m_{\pi(1)}, -), (\text{\textunderscore}, s_{\pi(2)}, m_{\pi(2)}, +), \ldots,$ $ (\text{\textunderscore},t_{\pi(4xy+y)}, m_{\pi(4xy+y)}, -)$,
where $\pi$ can be any permutation on $1,\ldots, 4xy+y$ that satisfies the following property:
If two messages $m_i,m_j$ lie on the same $\mathit{true}$- or $\mathit{false}$-path originating at a variable node $v$, then $d_{G(F)}(v, s_i) < d_{G(F)}(v, s_j) \Rightarrow \pi^{-1}(i) < \pi^{-1}(j)$
(meaning if $m_i$ lies to the left of $m_j$, it should precede $m_j$ in the schedule).
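The restriction on $\pi$ is a per-path precedence constraint and can be stated as a short validity check. A minimal sketch (hypothetical data layout of our own: each message is tagged with the path containing it and the distance of its source from the variable node):

```python
def respects_order(pi, path_of, dist):
    """pi: list of message ids in schedule order; path_of[i]: id of the
    true-/false-path containing message i (None for clause messages);
    dist[i]: distance from the path's variable node to the source s_i.
    Valid iff, on every path, sources appear by increasing distance."""
    last = {}  # path id -> largest source distance scheduled so far
    for m in pi:
        p = path_of.get(m)
        if p is None:
            continue                      # clause messages are unconstrained
        if last.get(p, -1) > dist[m]:
            return False                  # m scheduled after a farther source
        last[p] = dist[m]
    return True

# two messages on one path, the nearer source first: valid
assert respects_order([1, 2], {1: "vT", 2: "vT"}, {1: 1, 2: 3})
# reversed order violates the left-to-right property
assert not respects_order([2, 1], {1: "vT", 2: "vT"}, {1: 1, 2: 3})
```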
\subparagraph{Energy Consumption of Optimal Schedules.} In the next three Lemmata~\ref{lem:variablebox-independence},~\ref{lem:agent-movement}
and~\ref{lem:optimum-schedule-cost} we show that (i) the total energy consumption of any optimum schedule \textsc{Opt}\ is $\textsc{cost}(\textsc{Opt}) \geq \textsc{cost}(\textsc{Sol})$,
and (ii) in every optimum schedule with $\textsc{cost}(\textsc{Opt}) = \textsc{cost}(\textsc{Sol})$, each variable agent delivers exactly all messages on either the $\mathit{true}$- or the
$\mathit{false}$-path.
We remark that (i) is true independent of whether $\textsc{Opt}$ adheres to the fixed schedule without assignments $S^-$ or not, and holds for any capacity $\kappa$.
In the case of (ii), $\textsc{Opt}$ has exactly the properties as described in the proof of Lemma~\ref{lem:optimum-cost}.
Since in this case for each agent $a_j$ the subsequence $\textsc{Opt}|_{a_j}$ is uniquely determined, and since each message is transported by a single agent (without intermediate drop-offs),
these subsequences can be merged to match the prescribed fixed order of the actions in $S^-$.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{weight-hardness.pdf}
\caption{Agent positions ($\square$) and weights (in black); Messages ($\rightarrow$) and edge lengths (in color).}
\label{fig:weight-hardness2}
\end{figure}
\subparagraph{Lower Bound.} To this end we first give an upper bound $\mathit{UB}$ on $\textsc{cost}(\textsc{Sol})$ and a lower bound $\mathit{LB}$ on the total energy
consumption of any feasible schedule $S$. First, since $\varepsilon = (8xy)^{-2}$, we define $\mathit{UB} := 4xy + 2y + 0.25 > 4xy + 2y +
x(y^2+y+1)\varepsilon = \textsc{cost}(\textsc{Sol})$. For the lower bound, note that every agent has weight at least 1. We double count the distance traveled by the
agents via the distance covered by the messages, hence we have to be careful to take into account that an agent might carry two or more messages at
the same time (we do not want to count the energy used during that time twice or more). Hence for each of the $4xy + y$ messages we count the last
edge over which it is transported \emph{towards} its target. All message targets are distinct (and thus the counted distances are disjoint), and all edges adjacent to a target have
length at least $\tfrac{1-y\varepsilon}{1+y\varepsilon} > 1-2y\varepsilon > 1-\tfrac{1}{32xy}$. Additionally, before an agent
can deliver a \emph{clause message}, it needs first to travel towards its source. Such edge crossings are not counted yet, hence we can add an
additional distance of $\tfrac{1-y\varepsilon}{1+y\varepsilon}$ for each clause. Overall we get a lower bound on the total energy consumption of
$\mathit{LB} := 4xy + 2y - 0.25 < (4xy + y + y)\cdot ( 1 -\tfrac{1}{32xy} )$.
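The sandwich $\mathit{LB} < \textsc{cost}(\textsc{Sol}) < \mathit{UB}$ and the edge-counting bound behind $\mathit{LB}$ are easy to confirm numerically; the following sketch (exact rationals, our own helper names) does so for a few instance sizes:

```python
from fractions import Fraction

def bounds(x: int, y: int):
    """Return LB, cost(Sol), UB and the counting bound (4xy+2y)(1-1/(32xy))."""
    eps = Fraction(1, (8 * x * y) ** 2)
    sol = 4 * x * y + 2 * y + x * (y**2 + y + 1) * eps
    ub = 4 * x * y + 2 * y + Fraction(1, 4)
    lb = 4 * x * y + 2 * y - Fraction(1, 4)
    counted = (4 * x * y + 2 * y) * (1 - Fraction(1, 32 * x * y))
    return lb, sol, ub, counted

for x, y in [(1, 1), (3, 4), (10, 10)]:
    lb, sol, ub, counted = bounds(x, y)
    assert lb < counted            # LB really lies below the counted distance
    assert lb < sol < ub           # and Sol is sandwiched between LB and UB
```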
\begin{restatable}[Independence of variable boxes]{lem}{boxindependencelemma}
%
Any optimum schedule \textsc{Opt}, in which an agent placed in a variable box $u$ moves to a clause node and back into a variable box of some
(not necessarily different) variable $v$, has $\textsc{cost}(\textsc{Opt}) > \textsc{cost}(\textsc{Sol})$.
\label{lem:variablebox-independence}
\end{restatable}
\begin{proof}
Assume for the sake of contradiction that an agent leaves its variable box, walks to a clause node and later on back to a variable box. If
this agent delivers the clause message, it has to return and thus walks the corresponding clause message's distance of $1$ twice. If it doesn't,
then we haven't included the travel \emph{towards} the clause node yet. In both cases, we add at least another $1 -\tfrac{1}{32xy} > 0.5$ to
the given lower bound $\mathit{LB}$, yielding $\textsc{cost}(\textsc{Opt}) > \mathit{LB} + 0.5 = \mathit{UB} > \textsc{cost}(\textsc{Sol})$.
\end{proof}
From now on we can restrict ourselves to feasible schedules in which each agent either stays inside its variable box, or walks to deliver a
clause message and stays at the target of that clause message. Next we show that we can also assume that agents walk only from left to right inside a
$\mathit{true}$- or $\mathit{false}$-path:
\begin{restatable}[Agents move from left to right]{lem}{lefttorightlemma}
%
Any optimum schedule \textsc{Opt}, in which there is an agent $a$ that moves at some point in the schedule from right to left along a $\mathit{true}$- or
$\mathit{false}$-path, has $\textsc{cost}(\textsc{Opt}) > \textsc{cost}(\textsc{Sol})$.
\label{lem:agent-movement}
\end{restatable}
\begin{proof}
%
By Lemma~\ref{lem:variablebox-independence} we can restrict ourselves to schedules \textsc{Opt}\ where each message $i$ has to be transported over the
edge connecting $s_i$ and $t_i$. Without loss of generality we assume that no message $i$ ever leaves the interval $\left[ s_i, t_i \right]$
and that it is transported from $s_i$ to $t_i$ monotonically from left to right (otherwise we could adjust the schedule accordingly by keeping
the trajectories of the agents but adapting pick-up and drop-off locations).
%
First assume for the sake of contradiction that in \textsc{Opt}\ there is an agent $a$ whose trajectory contains moves from right to left of total
length at least $0.5$. The energy needed to do this is at least $0.5$, and these moves are not yet included in the lower bound $\mathit{LB}$. As
before, by adding $0.5$, we get $\textsc{cost}(\textsc{Opt}) > \mathit{LB} + 0.5 = \mathit{UB} > \textsc{cost}(\textsc{Sol})$. Hence in the following we assume that each agent moves strictly
less than $0.5$ from right to left.
%
We are going to show that any such schedule \textsc{Opt}\ can be transformed into a schedule of smaller cost, contradicting the optimality of \textsc{Opt}.
Consider the longest consecutive right-to-left move of agent $a$:
Since $a$ moves by less than $0.5$ to the left, it must come from a handover point $h_i$ lying inside an edge $(s_i, t_i)$ or go to a handover
point $h_j$ lying inside an edge $(s_j, t_j)$.
%
In the first case, $h_i$ is closer to $s_i$ than to $t_i$ and the previous action of $a$ in the schedule must have been the drop-off $(a, h_i, i, -)$. The
agent $b$ picking up message $i$ at $h_i$ must have its starting position on or to the left of $s_i$. Hence we could replace $a$'s pick-up and
drop-off of message $i$ with pick-up and drop-off by agent $b$, thus strictly decreasing the total distance $d_a$ traveled by $a$,
contradicting the optimality of $\textsc{Opt}$.
%
In the second case, $h_j$ is closer to $t_j$ than to $s_j$ and agent $a$ moves to the right after picking up message $j$. Let $b$ denote the
agent that dropped off $j$ at $h_j$. By the previous remarks we know that $b$ will not move to the left in its next action (if any),
$b$ can't reach $s_j$ and no other message is inside $\left[ s_j, t_j \right]$.
Furthermore by the weights given in our hardness reduction we know $w_b < 2w_a$. Therefore we can move $h_j$ by a small $\delta > 0$ to the right,
thus strictly decreasing $w_a \cdot d_a + w_b \cdot d_b$, contradicting the optimality of $\textsc{Opt}$.
\end{proof}
From now on we assume that agents do not move from right to left at all. Now we are ready to prove the key relation between optimum schedules and
\textsc{Sol}\ schedules:
\begin{lemma}[Energy cost of an optimum schedule]
Any optimum schedule \textsc{Opt}\ either has total energy consumption $\textsc{cost}(\textsc{Opt}) > \textsc{cost}(\textsc{Sol})$ or $\textsc{cost}(\textsc{Opt}) = \textsc{cost}(\textsc{Sol})$. In the latter case, each
variable agent either delivers exactly all messages on its $\mathit{true}$-path or exactly all messages on its $\mathit{false}$-path. Furthermore the literal
agents on the respective other path deliver exactly two literal messages each. Finally, clause messages are only delivered by freed up literal
agents on the paths chosen by the variable agents.
\label{lem:optimum-schedule-cost}
\end{lemma}
\begin{proof}
%
By Lemmata~\ref{lem:variablebox-independence} and~\ref{lem:agent-movement} we may assume that no agent travels into another variable box and
that agents only move from left to right on any $\mathit{true}$- or $\mathit{false}$-path. Furthermore we have seen in the proof of Lemma~\ref{lem:agent-movement}
that this implies that every literal message is carried from its source to its target by a single agent in a continuous left-to-right motion.
We now show that if there was a variable agent $a$ which does not deliver either all messages on its adjacent $\mathit{true}$-path or its adjacent $\mathit{false}$-path,
then we would get a contradiction to optimality:
Assume first for the sake of contradiction that $a$ stays at its starting location. Then we can move $a$ to the first internal node of its
$\mathit{true}$-path, which contributes an additional $\varepsilon$-distance to the total energy consumption. Let $b$ be the agent carrying the first
literal message; we know that $b$ must have weight $1+y\varepsilon$. We replace $b$ in the schedule by $a$, saving at least $y\varepsilon$
energy already on the first literal message, contradicting the optimality of the schedule.
%
Now assume that $a$ either deviated at some internal node $v_{\mathit{true},2(y-i)}$ of the path it entered (to deliver a clause message) or that it
stopped at such an internal node ($v_{\mathit{true},2(y-i)}$ or $v_{\mathit{true},2(y-i)+1}$).
Let $b$ denote the agent carrying the message which was placed on the specified node. The edge to the adjacent clause (if any) has length
$\tfrac{1-i\varepsilon}{1+i\varepsilon}$ and $b$ has weight $1+j\varepsilon$, with $j \geq i$. Now we can switch $a$ and $b$ in the remainder
of the schedule. The potential increase of energy cost (on the clause message delivery) amounts to at most $((1+j\varepsilon)-1)\cdot
(\tfrac{1-i\varepsilon}{1+i\varepsilon}+1) < 2j\varepsilon$, while the gained energy on the next two literal messages is at least
$j\varepsilon \cdot 2$, again contradicting the optimality of the schedule.
%
Hence each variable agent delivers either all messages on its $\mathit{true}$-path or all messages on its $\mathit{false}$-path. This allows us to give a new
lower bound $\mathit{LB}'$ on the energy consumption $\textsc{cost}(\textsc{Opt})$: Each variable agent contributes an energy consumption of $2y+\varepsilon$ to the total.
Delivery of each message on the respective other path needs an agent with starting location coinciding with or to the left of the message
source, yielding an energy contribution of at least $2\cdot ((1+y\varepsilon)+ \ldots + (1+\varepsilon)) = 2y + y(y+1)\varepsilon$, with
equality if and only if each literal agent placed on $v_{\mathit{true},i}$ delivers the message placed on
$v_{\mathit{true},i}$ \emph{and} the consecutive message with source on $v_{\mathit{true},i+1}$.
Finally the source of each clause message is reached by an agent of weight $1+j\varepsilon$ over an edge of length
$\tfrac{1-i\varepsilon}{1+i\varepsilon}$, $j \geq i$, hence the delivery of each clause message needs an energy of at least
$(1+j\varepsilon)(\tfrac{1-i\varepsilon}{1+i\varepsilon}+1) \geq 2$, with equality if and only if $j = i$. Summing over the clauses and variable
boxes we get $\mathit{LB}' = 4xy + 2y + x(y^2+y+1)\varepsilon = \textsc{cost}(\textsc{Sol})$.
\end{proof}
It remains to note that the first literal agent (of weight $1+y\varepsilon$) on the path chosen by the variable agent can walk over to the other path
and from there deliver a clause message. However, such a schedule has higher energy consumption than $\textsc{cost}(\textsc{Sol})$: the literal agent has to
cross two edges of length $\varepsilon$, and the energy required to do so is not accounted for in the lower bound $\mathit{LB}' = \textsc{cost}(\textsc{Sol})$.
Hence we conclude:
\messageorderhardnessthm*
\section{Appendix: Planning}
\label{app:planning}
\singleagenthardness*
\begin{proof}
We proceed by a reduction from Hamiltonian cycles on a grid graph, a problem shown to be $\mathrm{NP}$-hard by Itai et al.~\cite{NPonGrid}.
A similar reduction was used for a sorting problem by Graf~\cite{ESA15}.
Given an unweighted grid graph $H = (V_H, E_H)$ with $V_H = \left\{ v_1, v_2, \ldots, v_n \right\}$, we add to every vertex $v$ a new vertex $v'$ with an edge $e= (v,v')$ and a
message with source $v$ and target $v'$. Denote the new graph by $G = (V, E)$, where $V = V_H \cup V_H'$, $V_H' = \{v' \mid v \in V_H\}$ and $E =
E_H \cup E_H'$, where $E_H' = \left\{ (v,v')\ | \ v \in V_H \right\}$.
\begin{figure}[b!]
\centering
\includegraphics[width=\linewidth]{hamilton-cycle-reduction}
\caption{Finding a Hamiltonian cycle via \textsc{WeightedDelivery}\xspace with a single agent.}
\label{fig:hamilton-cycle-reduction2}
\end{figure}
We build an instance of \textsc{WeightedDelivery}\xspace by taking $G$ and placing a single agent on an arbitrary vertex $p_1 \in V_H$. We let $w_1 = 1$ and set unit edge length $l_e = 1$ for all edges in $E_H$ and edge length $l_e=0$ for all edges in $E_H'$ except $(p_1,p_1')$, see Figure~\ref{fig:hamilton-cycle-reduction2}. The edge $(p_1,p_1')$ gets length $x = l_{(p_1,p_1')}=|V_H|$ instead.
Now let $d_1$ be the length of the shortest walk in $G$ starting in $p_1$ on
which the agent can deliver all messages.
We now argue that there is a Hamiltonian cycle in $H$ if and only if $d_1 = 2|V_H|$.
To see that $2|V_H|$ is a lower bound for $d_1$, let us distinguish whether the agent ends at $p_1'$ or not.
If the agent ends at $p_1'$, it has to reach every $v \in V_H\setminus \{p_1\}$ at least once and also return to $p_1$ before using $(p_1,p_1')$. This sums up to at least $(|V_H|-1)+1+|V_H|= 2|V_H|$. If the agent ends anywhere else, it has to use $(p_1,p_1')$ twice, hence $d_1 \geq 2|V_H|$. So we get a schedule of cost $2|V_H|$ if and only if the agent reaches every vertex of $V_H$ exactly once and ends at $p_1'$. After removing all $E_H'$-steps, such a schedule corresponds directly to a Hamiltonian cycle, as illustrated in Figure~\ref{fig:hamilton-cycle-reduction2}.
\end{proof}
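The instance construction itself is purely mechanical. A minimal sketch (our own encoding: vertices as strings, pendant copies as tuples; the long edge is scaled to the number of grid vertices, which suffices for the argument):

```python
def build_instance(H_edges, vertices, p1):
    """Attach a pendant vertex v' to every grid vertex v, with a message
    v -> v'. H-edges get length 1, pendant edges length 0, except the
    pendant edge at the agent's start p1, which gets the long length."""
    edges = {(u, v): 1 for (u, v) in H_edges}   # unit lengths on H
    messages = []
    for v in vertices:
        vp = (v, "prime")                       # the new pendant vertex v'
        edges[(v, vp)] = len(vertices) if v == p1 else 0
        messages.append((v, vp))                # message from v to v'
    return edges, messages

# 4-cycle grid graph: Hamiltonian, so an optimal delivery walk exists that
# crosses each H-edge exactly once and ends at p1'
V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
edges, msgs = build_instance(E, V, "a")
assert edges[("a", ("a", "prime"))] == 4       # the long edge at p1
assert edges[("b", ("b", "prime"))] == 0 and len(msgs) == 4
```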
\approximationhardness*
\begin{proof}
We build on top of a result by Karpinski, Lampis and Schmied~\cite[Theorem 4]{karpinski2015new} which shows that the symmetric, metric traveling salesperson problem is hard to approximate with ratio better than $\frac{123}{122}$. For a reduction, we take any metric, undirected graph $H$, duplicate the vertices and put a zero-length edge and a single message between each of them, just like in Theorem~\ref{thm:singleagenthardness} / Figure~\ref{fig:hamilton-cycle-reduction2}.
To find a suitable length $x$ for the extra edge $(p_1,p_1')$ at the (arbitrary) starting vertex, we have to consider the following: in a traveling salesperson tour in $H$, we want the agent to return to $p_1$ at the end.
Hence in \textsc{WeightedDelivery}\xspace on $G$ we want the agent to end at $p_1'$. We achieve this by setting $x$ large enough to avoid traveling $(p_1,p_1')$ twice.
Let $M$ be the cost of a minimum spanning tree in $H$.
Clearly, both the optimum traveling salesperson path and the optimum traveling salesperson tour have cost at least $M$ but also at most $2M$.
Hence, setting the length $x$ of the extra edge $(p_1,p_1')$ to $2M$ ensures that any schedule for \textsc{WeightedDelivery}\xspace on $G$ which doesn't end in $p_1'$ (and thus uses $(p_1,p_1')$ twice) has cost at least $2\cdot 2M+M=5M$,
while an optimum schedule has delivery cost at most $2M+2M=4M$.
It remains to look at schedules which end in $p_1'$:
As the extra edge contributes at most two thirds of the cost of the final schedule, at least one third of the approximation gap is conserved, giving $1 + \frac{1}{3\cdot122} = \frac{367}{366}$.
\end{proof}
\restrictedplanning*
\begin{proof}(\mathversion{bold}$\kappa=\infty$\mathversion{normal}).
By the given restriction, planning each $S^R|_{a_j}$ independently maintains the feasibility of $S^R$. We denote by $m_{j1}, m_{j2}, \ldots, m_{jx}$ the messages appearing in $S^R|_{a_j}$.
We define a complete undirected auxiliary graph $G'=(V',E')$ on the node set
$V' = \left\{ p_j \right\} \cup \left\{ s_{j1}, s_{j2}, \ldots, s_{jx} \right\} \cup \left\{ t_{j1}, \ldots, t_{jx} \right\}$ with edges $(u,v)$ having weight $d_G(u,v)$.
For $\underline{\kappa=\infty}$, the schedule $\textsc{Opt}(S^R)|_{a_j}$ corresponds to a Hamiltonian path $H$ in $G'$ of minimum length, starting in $p_j$,
subject to the condition that for each message $m_{ji}$ its source $s_{ji}$ is visited before its destination $t_{ji}$.
We approximate $\textsc{Opt}(S^R)|_{a_j}$ with a schedule $\textsc{Alg}(S^R)|_{a_j}$ that first collects all messages $m_{j1}, \ldots, m_{jx}$ before delivering all of them.
We start by computing a minimum spanning tree $MST'$ of the subgraph of $G'$ consisting of all nodes $\left\{ p_j \right\} \cup \left\{ s_{j1}, \ldots, s_{jx} \right\}$.
A DFS-traversal on $MST'$ starting from $p_j$ collects all messages, returns to $p_j$ and has cost $2\cdot \sum_{e \in E(MST')} l_e \leq 2\cdot \textsc{cost}(\textsc{Opt}(S^R)|_{a_j})$.
Next we consider the subgraph of $G'$ consisting of all nodes $\left\{ p_j \right\} \cup \left\{ t_{j1}, \ldots, t_{jx} \right\}$.
Using Christofides' heuristic for the metric TSP path version with fixed starting point $p_j$ and arbitrary endpoint, due to Hoogeveen~\cite{Christofides76, Hoogeveen91},
we can deliver all messages by additionally paying at most $\tfrac{3}{2}\cdot \textsc{cost}(\textsc{Opt}(S^R)|_{a_j})$. In total, this gives a $3.5$-approximation.
\end{proof}
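The collect-then-deliver strategy can be sketched as follows. Note that this sketch replaces the Christofides/Hoogeveen path step with a simple greedy nearest-neighbor delivery leg, which forfeits the stated $3/2$ guarantee but keeps the structure of the algorithm visible; all names and the point-based metric are our own illustration:

```python
import math

def mst_prim(nodes, d):
    """Prim's MST over `nodes` under metric d; returns an adjacency dict."""
    intree, rest = {nodes[0]}, set(nodes[1:])
    adj = {v: [] for v in nodes}
    while rest:
        u, v = min(((a, b) for a in intree for b in rest),
                   key=lambda e: d(*e))
        adj[u].append(v); adj[v].append(u)
        intree.add(v); rest.remove(v)
    return adj

def dfs_tour(adj, root):
    """Doubled-MST traversal: visits every node and returns to root."""
    tour, seen = [root], {root}
    def go(u):
        for v in adj[u]:
            if v not in seen:
                seen.add(v); tour.append(v)
                go(v); tour.append(u)
    go(root)
    return tour

def collect_then_deliver(p, sources, targets, d):
    """Collect all messages on a doubled-MST tour through the sources, then
    deliver along a greedy nearest-neighbor path through the targets (our
    stand-in for the Christofides/Hoogeveen step in the proof)."""
    collect = dfs_tour(mst_prim([p] + sources, d), p)
    deliver, left = [p], set(targets)
    while left:
        nxt = min(left, key=lambda t: d(deliver[-1], t))
        deliver.append(nxt); left.remove(nxt)
    route = collect + deliver[1:]
    return route, sum(d(u, v) for u, v in zip(route, route[1:]))

d = lambda a, b: math.dist(a, b)                # Euclidean metric on points
route, cost = collect_then_deliver((0, 0), [(1, 0), (2, 0)], [(3, 0)], d)
assert abs(cost - 7.0) < 1e-9                   # 4 to collect, 3 to deliver
```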
\subsection{Lower Bound on the Benefit of Collaboration}
\begin{theorem}\label{theo-lb-boc}
On instances of \textsc{WeightedDelivery}\xspace with agent capacity $\kappa$ and $m$ messages, an algorithm using one agent for delivering every message cannot achieve an approximation ratio better than $1/ \ln \left( \left( 1 + 1/(2 r)\right)^r \left( 1 + 1/(2r+1) \right) \right)$, where $r:=\min\{\kappa,m\}$.
\end{theorem}
\begin{figure}[b!]
\tikzstyle{graphnode}=[circle,draw,minimum size=3em,scale=0.53]
\tikzstyle{dnode}=[scale=0.5]
\newcommand*{3}{3}
\newcommand*{1.2}{1.2}
\begin{tikzpicture}
\foreach \j in {0,1} {
\pgfmathsetmacro\koordX{-2+0.5*\j}
\pgfmathsetmacro\koordY{0.6-0.15*\j}
\node (a\j) at (\koordX*3,0.75*1.2) [graphnode] {$v_{1,\j}$};
\node (b\j) at (\koordX*3,0.25*0.75*1.2) [graphnode] {$v_{2,\j}$};
\node (c\j) at (\koordX*3,-0.75*1.2) [graphnode] {$v_{r,\j}$};
}
\node (a3) at (-0.7*3,0.75*1.2) [dnode] {$v_{1,n-1}$};
\node (b3) at (-0.7*3,0.25*0.75*1.2) [dnode] {$v_{2,n-1}$};
\node (c3) at (-0.7*3,-0.75*1.2) [dnode] {$v_{r,n-1}$};
\node (a3) at (-0.7*3,0.75*1.2) [graphnode] {};
\node (b3) at (-0.7*3,0.25*0.75*1.2) [graphnode] {};
\node (c3) at (-0.7*3,-0.75*1.2) [graphnode] {};
\node (d0) at(0*3,0) [dnode] {$v_{0,n}$};
\node (d1) at(0.75*3,0) [dnode] {$v_{0,n+1}$};
\node (d3) at (2*3,0) [dnode] {$v_{0,2n}$};
\node (d0) at(0*3,0) [graphnode] {};
\node (d1) at(0.75*3,0) [graphnode] {};
\node (d3) at (2*3,0) [graphnode] {};
\foreach \i in {a,b,c,d} {
\foreach \j in {0} {
\pgfmathtruncatemacro{\k}{\j+1}
\draw (\i\j) -- (\i\k);
}
}
\draw (a3) -- (d0);
\draw (b3) -- (d0);
\draw (c3) -- (d0);
\draw[dashed] (a1) -- (a3);
\draw[dashed] (b1) -- (b3);
\draw[dashed] (c1) -- (c3);
\draw[dashed] (d1) -- (d3);
\draw[loosely dotted](-2*3,-0.2*1.2)--(-2*3,-.4*1.2);
\draw[loosely dotted](-1.5*3,-0.2*1.2)--(-1.5*3,-.4*1.2);
\draw[loosely dotted](-0.7*3,-0.2*1.2)--(-0.7*3,-.4*1.2);
\end{tikzpicture}
\caption{Lower bound construction for the benefit of collaboration.}\label{fig:boc-lower-bound}
\end{figure}
\begin{proof}
Consider the graph $G=(V,E)$ given in Figure~\ref{fig:boc-lower-bound}, where the length $l_e$ of every edge $e$ is $1/n$.
This means that $G$ is a star graph with center~$v_{0,n}$ and $r+1$ paths, each of total length 1. We have $r$ messages, and message $i$ needs to be transported from $v_{i,0}$ to $v_{0,2n}$ for $i=1,\ldots, r$. Furthermore, there is an agent $a_{i,j}$ with weight
$w_{i,j} = \tfrac{2r}{2r + j/n}$ starting at every vertex $v_{i,j}$ for $(i,j) \in \{1,\ldots, r\} \times \{0,\ldots, n-1\} \cup \{0\} \times \{n,\ldots, 2n\}$.
We first show the following: If any agent transports $s$ messages $i_1,\ldots, i_s$ from $v_{i_j,0}$ to $v_{0,2n}$, then this costs at least $2s$. Note that this implies that any schedule $S$ for delivering all messages by the agents such that every message is only carried by one agent satisfies $\textsc{cost}(S) \geq 2 r.$
So let an agent $a_{i,j}$ transport $s$ messages from the source to the destination $v_{0,2n}$. Without loss of generality let these messages be $1,\ldots, s$, which are picked up in this order.
By construction, agent $a_{i,j}$ needs to travel a distance of at least $\tfrac{j}{n}$ to reach message~$1$, then distance 1 to move back to $v_{0,n}$, then distance 2 for picking up message $i$ and going back to $v_{0,n}$ for $i=2,\ldots,s$, and finally it needs to move distance 1 from $v_{0,n}$ to $v_{0,2n}$. Overall, agent $a_{i,j}$ therefore travels a distance of at least $2s+\tfrac{j}{n}$.
The overall cost for agent $a_{i,j}$ to deliver the $s$ messages therefore is at least
\begin{linenomath}
\begin{align*}
(2s+\tfrac{j}{n}) \cdot w_{i,j} = (2s+\tfrac{j}{n}) \cdot \tfrac{2r}{2r + j/n} \geq
(2s+\tfrac{j}{n}) \cdot \tfrac{2s}{2s + j/n} = 2s.
\end{align*}
\end{linenomath}
Now, consider a schedule $S_\text{col}$, where the agents collaborate, i.e., agent $a_{i,j}$ transports message $i$ from $v_{i,j}$ to $v_{i,j+1}$ for $i=1,\ldots, r$, $j=0,\ldots,n-1$, where we identify $v_{i,n}$ with $v_{0,n}$. Then agent $a_{0,j}$ transports all $r$ messages from $v_{0,j}$ to $v_{0,j+1}$ for $j=n,\ldots, 2n-1$. This is possible because $r\leq \kappa$ by the choice of $r$. The total cost of this schedule is given by
\begin{linenomath}
\begin{align*}
\textsc{cost}(S_\text{col})=r \cdot \int_0^1 f_\text{step} (x) dx + \int_1^2 f_\text{step} (x) dx,
\end{align*}
\end{linenomath}
where $f_\text{step} (x)$ is a step function on $[0,2]$ giving the per-distance cost (the weight of the carrying agent) at position $x$ along the transport, i.e., $f_\text{step} (x)=\tfrac{2 r}{2r+j/n}$ on the interval $[j/n,(j+1)/n)$ for $j=0,\ldots, 2n-1$. The first integral corresponds to the first part of the schedule, where the $r$ messages are transported separately and
therefore the cost of transporting message $i$ from $v_{i,j}$ to $v_{i,j+1}$ is exactly $\int_{j/n}^{(j+1)/n} f_\text{step} (x) dx = \tfrac{1}{n} \cdot \tfrac{2 r}{2r+j/n}$. The second integral corresponds to the part of the schedule where all $r$ messages are transported together, by one agent at a time.
Observe that the function $f(x)=2 r \cdot \tfrac{1}{2r-1/n+x}$ satisfies $f(x) \geq f_\text{step} (x)$ on $[0,2]$, hence
\begin{linenomath}
\begin{align*}
\textsc{cost}(S_\text{col}) & \leq r \int_0^1 f(x) dx + \int_1^2 f (x) dx
= 2r\left(r \ln (2r - \tfrac{1}{n} +x)\Big|_0^1 + \ln (2r - \tfrac{1}{n} +x)\Big|_1^2 \right) \\
&= 2 r \ln \left( \left( \tfrac{2r - 1/n +1 }{2r-1/n}\right)^r \left( \tfrac{2r - 1/n +2 }{2r-1/n+1} \right) \right)
\stackrel{\smash{n\to\infty}}{\rightarrow}
2 r \ln \left( \left(1 + \tfrac{1}{2r}\right)^r \left( 1 + \tfrac{1}{2r+1} \right) \right).
\end{align*}
\end{linenomath}
The best approximation ratio of an algorithm that transports every message by only one agent compared to an algorithm that uses an arbitrary number of agents for every message is therefore bounded from below by
\begin{linenomath}
\begin{align*}
BoC \geq \min_S \textsc{cost}(S)/ \textsc{cost}(S_\text{col}) \geq 1 / \ln \left( \left(1 + \tfrac{1}{2r}\right)^r \left( 1 + \tfrac{1}{2r+1} \right) \right). & && \qedhere
\end{align*}
\end{linenomath}
\end{proof}
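The integral estimate in the proof can be validated numerically: the exact step-function cost of $S_\text{col}$ stays below the closed-form bound, which in turn approaches the stated limit as $n$ grows. A short sketch (our own function names):

```python
import math

def schedule_cost(r: int, n: int) -> float:
    """Exact cost of S_col: r messages moved separately over [0,1], then
    together over [1,2], in 1/n-steps; the carrying agent at offset j/n
    has weight 2r/(2r + j/n)."""
    w = lambda j: 2 * r / (2 * r + j / n)
    first = r * sum(w(j) / n for j in range(n))        # r parallel branches
    second = sum(w(j) / n for j in range(n, 2 * n))    # bundled final leg
    return first + second

def integral_bound(r: int, n: int) -> float:
    """Closed form 2r*ln(...) obtained by integrating f(x)=2r/(2r-1/n+x)."""
    a = 2 * r - 1 / n
    return 2 * r * math.log(((a + 1) / a) ** r * ((a + 2) / (a + 1)))

for r in (1, 2, 5):
    n = 1000
    assert schedule_cost(r, n) <= integral_bound(r, n)
    limit = 2 * r * math.log((1 + 1 / (2 * r)) ** r * (1 + 1 / (2 * r + 1)))
    assert abs(integral_bound(r, n) - limit) < 1e-2
```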
By observing that $\lim_{r\to\infty} 1/ \ln \left( \left( 1 + 1/(2 r)\right)^r \left( 1 + 1/(2r+1) \right) \right) =1/ \ln \left( e^{1/2} \right) = 2$, we obtain the following corollary.
\begin{corollary}\label{cor-lb-boc}
A schedule for \textsc{WeightedDelivery}\xspace where every message is delivered by a single agent cannot achieve an approximation ratio better than $2$ in general, and better than~$1/\ln 2 \approx 1.44$ for a single message.
\end{corollary}
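Evaluating the bound confirms both statements of the corollary; a quick numeric sketch:

```python
import math

def boc_lower_bound(r: int) -> float:
    """1 / ln((1 + 1/(2r))^r * (1 + 1/(2r+1))), as in the theorem above."""
    return 1 / math.log((1 + 1 / (2 * r)) ** r * (1 + 1 / (2 * r + 1)))

# single message (r = 1): the argument of the log is (3/2)*(4/3) = 2,
# so the bound is exactly 1/ln 2, roughly 1.44
assert abs(boc_lower_bound(1) - 1 / math.log(2)) < 1e-12
# the bound increases towards 2 as r grows
assert boc_lower_bound(10) < boc_lower_bound(100) < 2
assert abs(boc_lower_bound(10**6) - 2) < 1e-5
```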
\subsection{Upper Bounds on the Benefit of Collaboration}
We now give upper bounds matching the lower bounds of Corollary~\ref{cor-lb-boc}.
The following theorem shows that the benefit of collaboration is~2 in general.
We remark that finding an optimal schedule in which every message is transported from its source to its destination by one agent is already $\mathrm{NP}$-hard, as shown in Theorem~\ref{thm:singleagenthardness}.
\begin{restatable}{thm2}{BoCUB}
\label{theo-ub-boc}
Let $\textsc{Opt}$ be an optimal schedule for a given instance of \textsc{WeightedDelivery}\xspace. Then there exists a schedule $S$ such that every message is only transported by one agent and $\textsc{cost}(S)\leq 2 \cdot \textsc{cost}(\textsc{Opt})$.
\end{restatable}
\ifProofsAppendix
\begin{proof}[Proof Sketch]
We may assume (without loss of generality) that the optimal schedule~\textsc{Opt}{} transports each message along a simple path, and we construct a directed multigraph~$G_S$ with the same set of vertices and an arc for every move of an agent in~\textsc{Opt}{}.
We label every arc by the exact set of messages that were carried during the corresponding move in~\textsc{Opt}{}.
For every arc in~$G_S$ we add a backwards arc with the same label.
Obviously, every connected component of~$G_S$ is Eulerian, and we claim that in each component any agent can follow some Eulerian tour that allows it to deliver all messages.
In particular, the agent needs exactly twice as many moves as the total number of moves of all agents in the component in~\textsc{Opt}{}.
If we choose the cheapest agent (in terms of weight) in each component, we obtain a tour with at most twice the cost of~\textsc{Opt}{}.
We compute the Eulerian tour for the cheapest agent of a component as a combination of multiple tours, respecting arc labels in the following sense:
During every move along a forward arc, the agent carries the exact set of messages prescribed by the arc label, and during every move along a backward arc, the agent does not carry any messages.
This ensures that all messages travel along the same path as in~\textsc{Opt}{}.
Whenever the agent is at a vertex~$v$ and is missing message~$i$ in order to proceed along some path, this means the current vertex must lie on $i$'s path in~\textsc{Opt}{}, and thus there must be a path of backwards edges to the current location of~$i$.
The agent follows this path and recursively brings the message back to~$v$.
In the process, more recursive calls may be necessary, but we can prove that there cannot be a circular dependence between messages.
Therefore, the procedure eventually terminates after computing a closed tour.
Note that, so far, the tour is still ``virtual'' in the sense that the agent didn't actually move but merely computed the tour.
We remove the tour from~$G_S$, update all message positions, and recursively apply the procedure starting from the last vertex along the tour that is still adjacent to untraversed edges.
By combining all (virtual) tours that we obtain in the recursion, we eventually get an Eulerian tour for the agent that obeys all arc labels.
This means that the agent can successfully simulate all moves in~\textsc{Opt}{} while ensuring that it is carrying the required messages before each move.
\end{proof}
\else
\ExecuteMetaData[appendix-collaboration.tex]{BoCUB}
\fi
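The core graph-theoretic step of the proof sketch, namely that adding a backwards arc for every arc makes each component Eulerian and thus traversable by a single agent, can be illustrated with Hierholzer's algorithm. The sketch below ignores the arc labels and message bookkeeping of the full proof and only demonstrates the traversal (our own minimal encoding):

```python
from collections import defaultdict

def euler_tour(arcs, start):
    """Hierholzer's algorithm on a directed multigraph given as a list of
    (u, v) arcs; assumes every vertex has in-degree = out-degree, which
    holds after adding a reversed copy of every arc as in the proof."""
    out = defaultdict(list)
    for u, v in arcs:
        out[u].append(v)
    stack, tour = [start], []
    while stack:
        u = stack[-1]
        if out[u]:
            stack.append(out[u].pop())     # extend the current trail
        else:
            tour.append(stack.pop())       # backtrack, recording the tour
    return tour[::-1]

# double every arc of a small schedule graph: the component becomes Eulerian
fwd = [("a", "b"), ("b", "c"), ("c", "a")]
arcs = fwd + [(v, u) for (u, v) in fwd]    # add all backwards arcs
tour = euler_tour(arcs, "a")
assert tour[0] == tour[-1] == "a"          # closed tour from the start vertex
assert len(tour) == len(arcs) + 1          # every arc is used exactly once
```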
\subparagraph{Single Message.} For the case of a single message, we can improve the upper bound of~2 on the benefit of collaboration from Theorem~\ref{theo-ub-boc}, to a tight bound of $1/\ln 2 \approx 1.44$.
\ifProofsAppendix
The idea of the proof is to use the fact that the weights are non-increasing by Lemma~\ref{lemma:non-increasing}. After scaling appropriately, we assume the message path to be the interval $[0,1]$ and then choose a $b$ such that the function $\tfrac{b}{x+1}$ is a lower bound on the weight of the agent transporting the message at point $x$ on the message path. The intersection of $\tfrac{b}{x+1}$ and the step function $f$ representing the weight of the agent currently transporting the message then gives an agent that can transport the message with at most $(1/ \ln 2)$-times the cost of an optimal schedule.
\fi
\begin{restatable}{thm2}{BoConemessageUB}
\label{th:BoC}
There is a $(1/ \ln 2)$-approximation algorithm using a single agent for \textsc{WeightedDelivery}\xspace with~$m=1$.
\end{restatable}
\ifProofsAppendix \else
\ExecuteMetaData[appendix-collaboration.tex]{BoConemessageUB}
\fi
\subparagraph{No Intermediate Drop-offs.} In the following we show that for $\kappa \in \left\{ 1, \infty \right\}$ the upper bound of~$2$ on the benefit of collaboration still holds,
with the additional property that each message is carried by its single agent without any intermediate drop-offs.
We will make use of this result later in the approximation algorithm for \textsc{WeightedDelivery}\xspace with $\kappa=1$ (Section~\ref{sec:approx}).
\begin{restatable}{thm2}{nointermediate}
\label{theo-boc-nointermediate}
Let $\textsc{Opt}$ be an optimal schedule for a given instance of \textsc{WeightedDelivery}\xspace with $\kappa \in \left\{ 1,\infty \right\}$.
Then there exists a schedule $S$ such that (i) every message is only transported by a single agent, with exactly one pick-up and one drop-off,
(ii) $\textsc{cost}(S)\leq 2 \cdot \textsc{cost}(\textsc{Opt})$, and (iii) every agent $a_j$ returns to its starting location $p_j$.
\end{restatable}
\ifProofsAppendix \else
\ExecuteMetaData[appendix-collaboration.tex]{nointermediate}
\fi
\subsection{An Algorithm for WeightedDelivery of a Single Message}
\begin{restatable}{lem}{nonincreasing}
\label{lemma:non-increasing}
In any optimal solution to \textsc{WeightedDelivery}\xspace for a single message, if the message is delivered by agents with weights $w_1,w_2,\dots w_k$, in this order, then
(i) $w_{i} \geq w_{j}$ whenever $i<j$, and (ii) without loss of generality, $w_{i} \neq w_{j}$ for $i \neq j$.
Hence there is an optimal schedule $S$ in which no agent $a_{j}$ has more than one pair of pick-up/drop-off actions.
\end{restatable}
\ifProofsAppendix \else
\ExecuteMetaData[appendix-collaboration.tex]{nonincreasing}
\fi
\drop{
\begin{corollary}
There is an optimal schedule to \textsc{WeightedDelivery}\xspace for a single message $m_1$, in which no agent $a_{1j}$ has more than one pair of pick-up/drop-off actions in $S|_{a_{1j}}$.
\end{corollary}
}
\begin{theorem}
An optimal solution of \textsc{WeightedDelivery}\xspace of a single message in a graph $G=(V,E)$ with $k \leq |V|$ agents can be found in $O(|V|^3)$ time.
\label{th:alg_single_msg}
\end{theorem}
\begin{proof}
We use the properties of Lemma~\ref{lemma:non-increasing} to create an auxiliary graph on which we run Dijkstra's algorithm for computing a shortest path from $s$ to $t$.
Given an original instance of single-message \textsc{WeightedDelivery}\xspace consisting of the graph $G = (V,E)$, with $s,t \in V$, we obtain the auxiliary, \emph{directed} graph $G' = (V',E')$ as follows:
\begin{itemize}
\item For each node $v \in V$ and each agent $a_i$, there is a node $v_{a_i}$ in $G'$.\\
Furthermore $G'$ contains two additional vertices $s$ and $t$.
\item For $1\leq i \leq k$, there is an arc $(s,s_{a_i})$ of cost $w_i \cdot d_{G}(p_i,s)$ and an arc $(t_{a_i},t)$ of cost $0$.
\item For $(u,v) \in E$ and $1\leq i\leq k$, there are two arcs $(u_{a_i},v_{a_i})$ and $(v_{a_i},u_{a_i})$ of cost $w_i \cdot l_{(u,v)}$.
\item For $u \in V$ and agents $a_i, a_j$ with $w_i>w_j$, there is an arc $(u_{a_i},u_{a_j})$ of cost $w_j \cdot d_{G}(p_j,u)$.
\end{itemize}
Note that any solution to \textsc{WeightedDelivery}\xspace that satisfies the properties of Lemma~\ref{lemma:non-increasing} corresponds to some $s$-$t$-path in $G'$
such that the cost of the solution is equal to the length of this path in $G'$, and vice versa.
This implies that the length of the shortest $s$-$t$ path in $G'$ is the cost of the optimal solution for \textsc{WeightedDelivery}\xspace in $G$.
Assuming that $k \leq |V|$, the graph $G'$ has $|V|\cdot k +2 \in O(|V|^2)$ vertices and at most $2k + (k^2|V| + |V|^2k)/2 -|V|\cdot k \in O(|V|^3)$ arcs.
The graph $G'$ can be constructed in $O(|V|^3)$ time if we use the \emph{Floyd Warshall} all pair shortest paths algorithm \cite{floyd1962algorithm, warshall1962theorem} in $G$.
Finally, we compute the shortest path from $s$ to $t$ in $G'$ in time $O(|V|^3)$, using Dijkstra's algorithm with Fibonacci heaps.
\end{proof}
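To make the construction concrete, the following is a minimal Python sketch of the auxiliary-graph algorithm from the proof. The data layout and all names are our own, not from the paper; the labels \texttt{'S'} and \texttt{'T'} are reserved for the super-source and super-sink of $G'$ and must not collide with graph node names.

```python
import heapq

def floyd_warshall(nodes, edges):
    # All-pairs shortest paths; edges = {(u, v): length} of an undirected graph.
    d = {(u, v): (0.0 if u == v else float('inf')) for u in nodes for v in nodes}
    for (u, v), l in edges.items():
        d[u, v] = min(d[u, v], l)
        d[v, u] = min(d[v, u], l)
    for w in nodes:
        for u in nodes:
            for v in nodes:
                if d[u, w] + d[w, v] < d[u, v]:
                    d[u, v] = d[u, w] + d[w, v]
    return d

def single_message_delivery(nodes, edges, agents, s, t):
    """agents: list of (start node p_i, weight w_i).
    Returns the optimal energy cost of delivering one message from s to t."""
    d = floyd_warshall(nodes, edges)
    adj = {}
    def arc(u, v, c):
        adj.setdefault(u, []).append((v, c))
    for i, (p, w) in enumerate(agents):
        arc('S', ('v', s, i), w * d[p, s])       # agent i first walks to the source
        arc(('v', t, i), 'T', 0.0)               # message delivered at t
        for (u, v), l in edges.items():          # agent i carries along an edge
            arc(('v', u, i), ('v', v, i), w * l)
            arc(('v', v, i), ('v', u, i), w * l)
    for u in nodes:                              # hand-over to a strictly lighter agent
        for i, (_, wi) in enumerate(agents):
            for j, (pj, wj) in enumerate(agents):
                if wi > wj:
                    arc(('v', u, i), ('v', u, j), wj * d[pj, u])
    # Dijkstra from 'S' to 'T' on the auxiliary directed graph G'.
    dist, pq, tick = {'S': 0.0}, [(0.0, 0, 'S')], 1
    while pq:
        c, _, u = heapq.heappop(pq)
        if u == 'T':
            return c
        if c > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, []):
            if c + w < dist.get(v, float('inf')):
                dist[v] = c + w
                heapq.heappush(pq, (c + w, tick, v))
                tick += 1
    return float('inf')
```

For instance, on a path $a$-$b$-$c$ with unit edges, one agent of weight $3$ at $a$ and one of weight $1$ at $c$, delivering from $a$ to $c$ costs $4$ (the light agent walks to $a$ and carries the message all the way, or takes over at $a$).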
\subsection{NP-Hardness for Planar Graphs}
We give a reduction from planar 3SAT: From a given planar 3SAT formula $F$ we construct an instance of \textsc{WeightedDelivery}\xspace
that allows a schedule $S$ with ``good'' energy \textsc{cost}($S$) if and only if $F$ is satisfiable.
\subparagraph{Planar 3SAT.} Let $F$ be a three-conjunctive normal form (3CNF) with $x$ boolean variables $V(F)=\left\{ v_1, \dots, v_x \right\}$ and $y$
clauses $C(F)=\left\{ c_1, \dots, c_y \right\}$. Each clause is given by a subset of at most three literals of the form $l(v_i) \in \left\{ v_i,
\overline{v_i} \right\}$. We define a corresponding graph $H(F) = (N,A)$ with a node set consisting of all clauses and all variables ($N = V(F) \cup C(F)$).
We add an edge between a clause $c$ and a variable $v$, if $v$ or $\overline{v}$ is contained in $c$. Furthermore we add a cycle consisting of edges
between all pairs of consecutive variables, i.e., $A = A_1 \cup A_2$, where
$\ A_1 = \left\{ \left\{ c_i, v_j \right\} \ | \ v_j \in c_i \text{ or } \overline{v_j} \in c_i \right\},
\ A_2 = \left\{ \left\{ v_j, v_{(j \bmod x)+1} \right\} \ | \ 1\leq j \leq x \right\}.$
We call $F$ \emph{planar} if there is a plane embedding of $H(F)$. The \emph{planar 3SAT} problem of deciding whether a given planar 3CNF $F$ is
satisfiable is known to be $\mathrm{NP}$-complete. Furthermore the problem remains $\mathrm{NP}$-complete \emph{if at each variable node} the plane
embedding is required to have all arcs representing positive literals on one side of the cycle $A_2$ and all arcs representing negative literals on
the other side of $A_2$~\cite{planar3sat82}. We will use this \emph{restricted version} in our reduction and assume without loss of generality that
the graph $H(F)\setminus A_2$ is connected and that $H(F)$ is a simple graph (i.e. each variable appears at most once in every clause).
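As a concrete illustration, here is a minimal Python sketch that builds the graph $H(F)=(N, A_1 \cup A_2)$ from a clause list. The DIMACS-style literal encoding (positive integer $j$ for $v_j$, negative for $\overline{v_j}$) is our own assumption; the sketch does not check planarity of any embedding.

```python
def formula_graph(num_vars, clauses):
    """Build H(F) = (N, A1 | A2) for a 3CNF.
    clauses: list of clauses, each a list of non-zero ints;
    +j encodes v_j and -j encodes its negation (hypothetical encoding)."""
    N = [('v', j) for j in range(1, num_vars + 1)] + \
        [('c', i) for i in range(1, len(clauses) + 1)]
    # A1: clause-variable incidences.
    A1 = {frozenset({('c', i), ('v', abs(l))})
          for i, clause in enumerate(clauses, 1) for l in clause}
    # A2: the cycle through consecutive variables, {v_j, v_{(j mod x)+1}}.
    A2 = {frozenset({('v', j), ('v', j % num_vars + 1)})
          for j in range(1, num_vars + 1)}
    return N, A1 | A2
```

For $F = (v_1 \vee \overline{v_2} \vee v_3)$ over three variables, $H(F)$ has four nodes and six edges (three incidences plus the variable cycle).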
\subparagraph{Building the Delivery Graph.} We first describe a way to transform any planar 3CNF graph $H(F)$ into a planar delivery graph $G
= G(F)$, see Figure~\ref{fig:planar3sat}.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{planar3sat}
\caption{(left) A restricted plane embedding of a 3CNF $F$ which is satisfied by $(v_1,v_2,v_3,v_4) = (true, false, false, true)$.
(right) Its transformation to the corresponding delivery graph.}
\label{fig:planar3sat}
\end{figure}
We transform the graph in five steps: First we delete all edges of the cycle $A_2$, but we keep in mind that at each variable node all positive
literal edges lie on one side and all negative literal edges on the other side. Secondly, let $\deg_{H(F),A_1}(v)$ denote the remaining degree of a
variable node $v$ in $H(F)$, and surround each variable node by a \emph{variable box}. A variable box contains two paths adjacent to $v$ on which
internally we place $\deg_{H(F),A_1}(v)$ copies of $v$: One path (called henceforth the \emph{$\mathit{true}$-path}) contains all nodes having an
adjacent positive literal edge, the other path (the \emph{$\mathit{false}$-path}) contains all nodes having an adjacent negative literal edge. In a
next step, we add a single node between any pair of node copies of the previous step. As a fourth step, we want all paths to contain the same number
of nodes, hence we fill in nodes at the end of each path such that every path contains exactly $2y \geq 2\deg_{H(F),A_1}(v)$ internal nodes. Thus each
variable box contains a variable node $v$, an adjacent $\mathit{true}$-path (with internal nodes $v_{\mathit{true},1}, \ldots, v_{\mathit{true},2y-1}$
and a final node $v_{\mathit{true},2y}$) and a respective $\mathit{false}$-path.
Finally for each clause node $c$ we add a new node $c'$ which we connect to $c$.
The new graph $G(F)$ has polynomial size and all the steps can be implemented in such a way that $G(F)$ is planar.
\subparagraph{Messages, Agents and Weights.} We are going to place one \emph{clause message} on each of the $y$ clause nodes and a \emph{literal
message} on each of the $2x$ paths in the variable boxes for a total of $4xy$ messages. More precisely, on each original clause node $c$ we place
exactly one clause message which has to be delivered to the newly created node $c'$. Furthermore we place a literal message on every internal node
$v_{\mathit{true},i}$ of a $\mathit{true}$-path and set its target to $v_{\mathit{true},i+1}$ (same for the $\mathit{false}$-path).
We set the length of all edges connecting a source to its target to 1.
Next we describe the locations of the agents in each variable box. We place one \emph{variable agent} of weight 1 on the variable node $v$. The
length of the two adjacent edges are set to $\varepsilon$, where $\varepsilon:= (8xy)^{-2}$. Furthermore we place $y$ \emph{literal agents} on each
path: The $i$-th agent is placed on $v_{\mathit{true},2(y-i)}$ (respectively $v_{\mathit{false},2(y-i)}$) and gets weight $1+i\varepsilon$. It
remains to set the length of edges between clause nodes and internal nodes of a path. By construction the latter is the starting position of an agent
of uniquely defined weight $1+i\varepsilon$; we set the length of the edge to $\tfrac{1-i\varepsilon}{1+i\varepsilon}$. For an illustration see
Figure~\ref{fig:weight-hardness}, where each agent's starting location is depicted by a square and each message is depicted by a colored arrow.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{weight-hardness.pdf}
\caption{Agent positions ($\square$) and weights (in black); Messages ($\rightarrow$) and edge lengths (in color).}
\label{fig:weight-hardness}
\end{figure}
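For concreteness, a tiny Python sketch of the numerical parameters of a variable box (the function name and layout are ours; it only reproduces the formulas above for $\varepsilon$, the literal-agent weights $1+i\varepsilon$, and the clause-edge lengths $\tfrac{1-i\varepsilon}{1+i\varepsilon}$):

```python
def gadget_parameters(x, y):
    """Parameters of the reduction for a 3CNF with x variables and y clauses."""
    eps = 1.0 / (8 * x * y) ** 2                 # epsilon = (8xy)^{-2}
    weights = [1 + i * eps for i in range(1, y + 1)]           # literal agents
    clause_edges = [(1 - i * eps) / (1 + i * eps) for i in range(1, y + 1)]
    return eps, weights, clause_edges
```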
\subparagraph{Reduction.} The key idea of the reduction is that for each variable $u$, the corresponding variable box contains a variable agent who can
\emph{either} deliver all messages on the $\mathit{true}$-path (thus setting the variable to true), \emph{or} deliver all messages on the
$\mathit{false}$-path (thus setting the variable to false). Assume $u$ is set to true. If $u$ is contained in a clause $c$, then on the adjacent node
$v_{\mathit{true},i}$ there is a (not yet used) literal agent. Intuitively, this agent was \emph{freed by the variable agent} and can thus be sent to
deliver the clause-message. If $\overline{u}$ is contained in $c$, the corresponding literal agent on the $\mathit{false}$-path cannot be sent to deliver the
clause message, since it needs to transport messages along the $\mathit{false}$-path.
\ifProofsAppendix
Such a feasible schedule $\textsc{Sol}$ of the agents in $G(F)$ exists if and only if there is a satisfying assignment (a solution) for the variables of the 3CNF $F$.
Its total (energy) cost is $\textsc{cost}(\textsc{Sol}) := 4xy + 2y + x(y^2+y+1)\varepsilon$ (Appendix~\ref{app:coordination}: Lemma~\ref{lem:optimum-cost}).
Furthermore, we can show that \emph{any} schedule $S$ which does not correspond to a satisfying variable assignment has cost $\textsc{cost}(S) > \textsc{cost}(\textsc{Sol})$
(Appendix~\ref{app:coordination}: Lemmata~\ref{lem:variablebox-independence},~\ref{lem:agent-movement} and~\ref{lem:optimum-schedule-cost}).
This holds independently of whether $S$ adheres to the schedule without agents $S^-$, and for any capacity $\kappa$.
\subparagraph{Fixed Sequence (Schedule without Agent Assignment).} It remains to fix a sequence $S^-$ that captures the schedule $\textsc{Sol}$ from the reduction idea, but does not allow us to infer a satisfying assignment:
This is the case for any $S^-$ consisting of consecutive pairs $(\text{\textunderscore}, s_i, m_i, +), (\text{\textunderscore}, t_i, m_i, -)$
such that if $m_i$ lies to the left of $m_j$ on some $\mathit{true}$- or $\mathit{false}$-path, it precedes $m_j$ in the schedule.
\drop{
\subparagraph{Energy Consumption of Optimal Schedules.} It is possible to show (Lemmata~\ref{lem:variablebox-independence},~\ref{lem:agent-movement} and~\ref{lem:optimum-schedule-cost})
that \emph{any} schedule $S$ has cost $\textsc{cost}(S) \geq \textsc{cost}(\textsc{Sol})$. This is true independent of whether $S$ adheres to the fixed schedule without assignments $S^-$ or not and holds for any capacity $\kappa$.
In case of equality, $S$ has exactly the properties as described in the proof of Lemma~\ref{lem:optimum-cost}.
Since thus for each agent $a_j$ the subsequence $S|_{a_j}$ is uniquely determined, and since each message is transported by a single agent (without intermediate drop-offs),
these subsequences can be merged to match the prescribed fixed order of the actions $S^-$.
\subparagraph{Optimum schedules.} If an optimum schedule $\textsc{Opt}$ (restricted to the prescribed schedule without assignment) has cost $\textsc{cost}(\textsc{Opt}) = \textsc{cost}(\textsc{Sol})$, we get a satisfiable assignment of the variables.
If, however, an optimum schedule $\textsc{Opt}$ has cost $\textsc{cost}(S) > \textsc{cost}(\textsc{Sol})$, then there is no satisfiable assignment of the variables. We conclude:
}
\else
\ExecuteMetaData[appendix-coordination.tex]{messageorderhardnessthm1}
\ExecuteMetaData[appendix-coordination.tex]{messageorderhardnessthm2}
\fi
\begin{restatable}{thm2}{messageorderhardnessthm}
\emph{Coordination} of \textsc{WeightedDelivery}\xspace is $\mathrm{NP}$-hard on planar graphs for all capacities $\kappa$, even if we are given prescribed collaboration and planning.
\label{thm:message-order-hardness}
\end{restatable}
\subsection{Polynomial-time Algorithm for Uniform Weights and Unit Capacity}
\label{sec:mcmf}
Note that Coordination is $\mathrm{NP}$-hard even for capacity $\kappa=1$. Next we show that this setting becomes tractable once we restrict ourselves to uniform weights.
\begin{theorem}
Given collaboration and planning in the form of a complete schedule with missing agent assignment,
Coordination of \textsc{WeightedDelivery}\xspace with capacity $\kappa=1$ and agents having uniform weights can be solved in polynomial time.
\label{thm:uniform-matching}
\end{theorem}
\begin{proof}
As before, denote by $S^- = (\text{\textunderscore}, s_i, m_i, +),\dots, (\text{\textunderscore}, h, m_i,-),$ $\ldots,(\text{\textunderscore}, t_j, m_j, -)$ the prescribed schedule without agent assignments.
Since all agents have the same uniform weight $w$, the cost $\textsc{cost}(S)$ of any feasible schedule $S$ is determined by $\textsc{cost}(S) = w\cdot \sum_{j=1}^k d_j$.
Hence, at a pick-up action $(\text{\textunderscore}, q, m_i, +)$, what matters is not so much \emph{which} agent picks up the message, but \emph{where / how far} it comes from.
Because we have capacity $\kappa = 1$, we know that the agent has to come from either its starting position or from a preceding drop-off action $(\text{\textunderscore}, p, m_j, -) \in S^-$.
%
This allows us to model the problem as a weighted bipartite matching, see Figure~\ref{fig:MCMF-example} (center). We build an auxiliary graph $G' = (A \cup B, E_1' \cup E_2')$.
A maximum matching in this bipartite graph will tell us for every pick-up action in $B$, where the agent that performs the pick-up action comes from in $A$.
Let $A := \{p_1, \dots, p_k\} \cup \{ (\text{\textunderscore},*,*,-) \}$ and $B := \{ (\text{\textunderscore},*,*,+) \}$.
%
We add edges between all agent starting positions and all pick-ups, $E_1' := \{p_1, \dots, p_k\} \times \left\{ (\text{\textunderscore},q,m,+) \ \mid \ (\text{\textunderscore},q,m,+) \in B \right\}$
of weight $d_G(p_i, q)$. Furthermore we also add edges between drop-offs and all subsequent pick-ups $E_2' := \left\{ \left( (\text{\textunderscore}, p, m_j, -), (\text{\textunderscore}, q, m_i, +) \right) \ \mid \
(\text{\textunderscore}, p, m_j, -) < (\text{\textunderscore}, q, m_i, +) \text{ in }S^- \right\}$
of weight $d_G(p,q)$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\linewidth]{MCMF-example}
\caption{Illustration of the coordination of the following schedule $S = (\text{\textunderscore}, s_1, m_1, +)$, $(\text{\textunderscore}, s_2, m_2, +), (\text{\textunderscore}, s_3, m_1, -), (\text{\textunderscore}, s_3, m_3, +), (\text{\textunderscore}, t_2, m_2, -), (\text{\textunderscore}, t_3, m_3, -), (\text{\textunderscore}, s_3, m_1, +), (\text{\textunderscore}, t_1, m_1, -)$ (left) Instance with 3 messages and 2 agents of uniform weight. (center) Equivalent weighted bipartite matching problem $G'$. (right) The resulting trajectories of the agents.}
\label{fig:MCMF-example}
\end{figure}
A maximum matching of minimum cost in $G'$ captures the optimal assignment of agents to messages and can be found by solving the classic \emph{assignment problem},
a special case of the \emph{minimum cost maximum flow problem}.
Both of these problems can be solved in polynomial time for instance using the \emph{Hungarian method}~\cite{kuhn1955hungarian} or the \emph{successive shortest path algorithm}~\cite{edmonds1972theoretical}, respectively.
%
The cost of this optimum matching corresponds to the cost of the agents moving around without messages.
The cost of the agents while carrying the messages can easily be added: Consider the schedule $S^-$ restricted to a message $m_i$.
This subsequence $S^-|_{m_i}$ is a sequence of pairs of pick-up/drop-off actions $\left( (\text{\textunderscore}, q, m_i, +), (\text{\textunderscore}, p, m_i, -) \right)$,
and in every pair the message is brought from $q$ to $p$ on the shortest path, so we add $\sum d_G(q,p)$.
Concatenating these piecewise shortest paths gives the trajectory of each agent in the optimum solution, as illustrated in Figure~\ref{fig:MCMF-example}~(right).
\end{proof}
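The matching argument can be illustrated with a small brute-force sketch. This is our own simplification: instead of the Hungarian method, it enumerates all injective assignments of sources to pick-ups, which is only viable for tiny instances; all names are hypothetical.

```python
from itertools import permutations

def coordinate_uniform(dist, starts, schedule):
    """Coordination for uniform weights and capacity 1 (brute force).
    schedule: S^- as a list of (node, message, sign) with sign '+'/'-'.
    dist: {(u, v): shortest-path distance}. Returns the minimum total
    distance the agents travel *without* carrying a message."""
    pickups = [(idx, node) for idx, (node, _, sg) in enumerate(schedule) if sg == '+']
    # The side A of the bipartite graph: agent starts plus all drop-off actions.
    sources = [('start', p, -1) for p in starts] + \
              [('drop', node, idx) for idx, (node, _, sg) in enumerate(schedule) if sg == '-']
    best = float('inf')
    for assign in permutations(range(len(sources)), len(pickups)):
        cost, ok = 0.0, True
        for (p_idx, p_node), a in zip(pickups, assign):
            kind, node, d_idx = sources[a]
            if kind == 'drop' and d_idx > p_idx:  # drop-off must precede pick-up in S^-
                ok = False
                break
            cost += dist[node, p_node]
        if ok and cost < best:
            best = cost
    return best
```

On a path $0$-$1$-$2$-$3$-$4$ with unit edges, one agent at node $0$, and $S^-$ delivering $m_1$ from $1$ to $2$ and then $m_2$ from $3$ to $4$, the empty travel is $2$ (walk $0\to 1$, then $2 \to 3$); the carrying cost $w\cdot\sum d_G(q,p)$ is added separately as in the proof.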
\drop{
We let the messages be numbered in the order that they have to be delivered. To be precise, we assume that message $i$ has to get delivered at $t_i$ before the next message $i+1$ can be picked up at $s_{i+1}$. Therefore, at any point in time at most one message is in transit and so no agent has to carry more than one message at a time.
We can now argue that a message never has to be handed over from one agent to another.
Any optimum schedule with handovers can be transformed into an optimum schedule without handovers as follows.
Whenever two agents meet and exchange messages, we know that only a single message can be involved in this exchange as by assumption at most one message is in transit at any time.
Thus one agent carries one message, the other agent carries no message both before and after the exchange.
This implies that we can simply swap the trajectories of the two exchanging agents from the meeting point onwards to get rid of this exchange.
We can eliminate all exchanges one by one without changing the total travel distance.
This allows us to model the problem as a weighted bipartite matching. We build an auxiliary graph $G' = (A \cup B, E')$. A maximum matching in this bipartite graph will tell us for every message where the agent that serves this message comes from. Before starting to work on message $i$ at $s_i$, the agent might have been at any $p_j$, if it is the first message for agent $j$, or at some $t_{i'}$ with $i'<i$, if $i'$ is the message that the agent delivered right before $i$. Note that we do not care which agent delivers the message as they all have the same weight and differ only in their starting position. All we need to enforce is that the agents comply with the fixed message order.
We let $A := \{p_1, \dots, p_k\} \cup \{t_1, \dots, t_{m-1}\}$ and $B := \{s_1, \dots, s_m\}$. We set $E' := \{(p_j,s_i) \mid j \in [k], i \in [m]\} \cup \{(t_{i'},s_i) \mid i' < i; i',i \in [m]\}$ to enforce the message order. We set the weight of the edge $(a,b) \in E'$ to $d_G(a,b)$, the length of the shortest path from $a$ to $b$ in $G$. We refer to Figure~\ref{fig:MCMF-example} for an example.
A maximum matching of minimum cost in $G'$ captures the optimal assignment of agents to messages and can be found by solving the classic \emph{assignment problem}, a special case of the \emph{minimum cost maximum flow problem}. Both of these problems can be solved in polynomial time for instance using the \emph{Hungarian method}~\cite{kuhn1955hungarian} or the \emph{successive shortest path algorithm}~\cite{edmonds1972theoretical}, respectively.
The cost of this optimum matching corresponds to the cost of the agents moving around without messages. The cost of the agents while carrying the messages can easily be added: every message is brought to its target on the shortest path, so we add $\sum_{i=1}^m d_G(s_i,t_i)$. Concatenating these piecewise shortest paths gives the trajectory of each agent in the optimum solution, as illustrated in Figure~\ref{fig:MCMF-example}.
}
Our algorithm is remotely inspired by a simpler problem at the ACM ICPC world finals 2015~\cite{ICPC}.
The official solution is pseudo-polynomial~\cite{ICPCofficial}; Austrin and Wojtaszczyk~\cite{ICPCunofficial} later sketched a min-cost bipartite matching solution.
\subsection{Overview of the Problem}
Recent technological progress in robotics allows the mass production of inexpensive mobile robots which can be used to perform a variety of tasks autonomously without the need for human intervention. This gives rise to a variety of algorithmic problems for teams of autonomous robots, hereafter called \emph{mobile agents}. We consider here the delivery problem of moving some objects or messages between various locations. A mobile agent corresponds to an automated vehicle that can pick up a message at its source and deliver it to the intended destination. In doing so, the agent consumes energy proportional to the distance it travels. We are interested in energy-efficient operations by the team of agents such that the total energy consumed is minimized.
In general, the agents may not all be identical; some may be more energy efficient than others if they use different technologies or different sources of power. We assume each agent has a given \emph{weight} which is the rate of energy consumption per unit distance traveled by this agent. Moreover, the agents may start from distinct locations. Thus it may sometimes be efficient for an agent to carry the message to some intermediate location and hand it over to another agent which carries it further towards the destination. On the other hand, an agent may carry several messages at the same time. Finding an optimal solution that minimizes the total energy cost involves scheduling the moves of the agents and the points where they pick up or hand over the messages.
We study this problem (called \textsc{WeightedDelivery}\xspace) for an edge-weighted graph $G$ which connects all sources and destinations. The objective is to deliver $m$ messages between specific source-target pairs using $k$ agents located at arbitrary nodes of $G$.
Note that this problem is distinct from connectivity problems on graphs or network flow problems, since the initial locations of the agents are in general different from the sources where the messages are located, which means we need to consider the cost of moving the agents to the sources in addition to the cost of moving the messages. Furthermore, there is no one-to-one correspondence between the agents and the messages in our problem.
Previous approaches to energy-efficient delivery of messages by agents have focused on settings where the agents have limited energy (battery power), which restricts their movements~\cite{AnayaCCLPV16, DDalgosensors13}. The decision problem of whether a single message can be delivered without any agent exceeding its available energy is known as the DataDelivery problem~\cite{DDicalp14} or the BudgetedDelivery problem~\cite{sirocco16}; it was shown to be weakly $\mathrm{NP}$-hard on paths~\cite{DDicalp14} and strongly $\mathrm{NP}$-hard on planar graphs~\cite{sirocco16}.
\subparagraph{Our Model.}
We consider an undirected edge-weighted graph $G=(V,E)$.
Each edge $e \in E$ has a \emph{cost} (or \emph{length}) denoted by $l_e$.
The length of a simple path is the sum of the lengths of its edges. The distance between nodes $u$ and $v$ is denoted by $d_{G}(u,v)$ and is equal to the length of the shortest path from $u$ to $v$ in $G$.
There are $k$ mobile agents denoted by $a_1, \dots, a_k$ and having weights $w_1, \dots, w_k$. These agents are initially located on arbitrary nodes $p_1, \ldots, p_k$
of $G$. We denote by $d(a_i,v)$ the distance from the initial location of~$a_i$ to node~$v$.
Each agent can move along the edges of the graph. Each time an agent $a_i$ traverses an edge $e$ it incurs an energy cost of $w_i \cdot l_e$.
Furthermore there are $m$ pairs of (source, target) nodes in $G$ such that for $1\leq i \leq m$, a message has to be delivered from source node $s_i$ to a target node $t_i$. A message can be picked up by an agent from any node that it visits and it can be carried to any other node of $G$, and dropped there.
The agents are given a \emph{capacity}~$\kappa$ which limits the number of messages an agent may carry simultaneously.
There are no restrictions on how much an agent may travel. We denote by $d_j$ the total distance traveled by the $j$-th agent.
\textsc{WeightedDelivery}\xspace is the optimization problem of minimizing the total energy $\sum_{j=1}^k w_j d_j$ needed to deliver all messages.
A \emph{schedule} $S$ describes the actions of all agents as a sequence (ordered list) of pick-up actions $(a_j,p,m_i,+)$ and drop-off actions $(a_j,q,m_i,-)$, where each such tuple denotes the action of agent $a_j$ moving from its current location to node $p$ (node $q$) where it picks up message $m_i$ (drops message $m_i$, respectively).
A schedule~$S$ implicitly encodes all the pick-up and drop-off times, and its total energy cost is easily computed as $\textsc{cost}(S) := \sum_{j=1}^k w_j d_j$.
We denote by $S|_{a_j}$ the subsequence of all actions carried out by agent $a_j$ and by $S|_{m_i}$ the subsequence of all actions involving pick-ups or drop-offs of message $m_i$.
We call a schedule \emph{feasible} if every pick-up action $(\text{\textunderscore},p,m_i,+),\ p\neq s_i$, is directly preceded by a drop-off action $(\text{\textunderscore},p,m_i,-)$ in $S|_{m_i}$ and if all the messages get delivered, see Figure~\ref{fig:schedule-example}.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{schedule-example.pdf}
\caption{Example of an optimal, feasible schedule for two messages and two agents.}
\label{fig:schedule-example}
\end{figure}
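To make these definitions concrete, here is a small Python sketch of the cost and feasibility of a schedule. The data layout is our own; agents are assumed to move between consecutive action locations along shortest paths, and the capacity bound $\kappa$ is not checked in this sketch.

```python
def schedule_cost(dist, weights, starts, schedule):
    """cost(S) = sum_j w_j * d_j, where each agent moves between consecutive
    action locations along shortest paths.
    schedule: list of actions (agent index, node, message, sign)."""
    pos, cost = list(starts), 0.0
    for a, node, _, _ in schedule:
        cost += weights[a] * dist[pos[a], node]
        pos[a] = node
    return cost

def is_feasible(schedule, sources, targets):
    """Checks that every pick-up happens where the message currently lies,
    drop-offs only involve carried messages, and all messages end at their targets."""
    loc = dict(sources)          # current location of each message; None = in transit
    for _, node, m, sign in schedule:
        if sign == '+':
            if loc[m] != node:   # pick-up must happen at the message's location
                return False
            loc[m] = None
        else:
            if loc[m] is not None:
                return False     # can only drop a carried message
            loc[m] = node
    return all(loc[m] == targets[m] for m in targets)
```

For example, on a path $a$-$b$-$c$ with unit edges, an agent of weight $2$ starting at $a$ that carries a message from $b$ to $c$ gives a feasible schedule of cost $2\cdot d(a,b) + 2\cdot d(b,c) = 4$.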
\subparagraph{Our Contribution.}
Solving \textsc{WeightedDelivery}\xspace naturally involves simultaneously solving three subtasks, \emph{collaboration}, \emph{individual planning}, and \emph{coordination}:
First of all, if multiple agents work on the same message, they need to collaborate, i.e., we have to find all intermediate drop-off and pick-up locations of the message.
Secondly, if an agent works on more than one message, we have to plan in which order it wants to approach its subset of messages.
Finally, we have to coordinate which agent works on which subset of all messages (if they do this without collaboration, the subsets form a partition, otherwise the subsets are not necessarily pairwise disjoint).
Even though these three subtasks are interleaved, we investigate collaboration, planning and coordination separately in the next three sections.
This leads us to a polynomial-time approximation algorithm for \textsc{WeightedDelivery}\xspace, given in Section~\ref{sec:approx}.
In Section~\ref{sec:singlemessage} we consider the \emph{Collaboration} aspect of \textsc{WeightedDelivery}\xspace.
We first present a polynomial time solution for \textsc{WeightedDelivery}\xspace when there is only a single message ($m=1$). The algorithm has complexity $O(|V|^3)$ irrespective of the number of agents $k$.
In general, we show that any algorithm that only uses one agent for delivering every message cannot achieve an approximation ratio better than what we call the \emph{benefit of collaboration} $(\textsc{BoC})$
which is at least $1/ \ln \left( \left( 1 + 1/(2m)\right)^m \left( 1 + 1/(2m+1) \right) \right)$. We show this to be tight for $m=1$ (where $\textsc{BoC} \geq 1 / \ln 2$) and $m\rightarrow \infty$ (where $\textsc{BoC} \rightarrow 2$).
In Section~\ref{sec:planning} we look at the \emph{Planning} aspect of \textsc{WeightedDelivery}\xspace.
Individual planning by itself turns out to be $\mathrm{NP}$-hard on planar graphs and $\mathrm{NP}$-hard to approximate within a factor of less than $\tfrac{367}{366}$. On the positive side, we give approximation guarantees for restricted versions of \textsc{WeightedDelivery}\xspace which turn out to be useful for the analysis in Section~\ref{sec:approx}.
In Section~\ref{sec:message-order-hardness} we study the \emph{Coordination} aspect of \textsc{WeightedDelivery}\xspace. Even if collaboration and planning are taken care of (i.e., a schedule is fixed except for the assignment of agents to messages),
Coordination turns out to be $\mathrm{NP}$-hard, even on planar graphs.
The result holds for any capacity, including $\kappa=1$. This setting, however, becomes tractable if restricted to uniform weights of the agents.
In Section~\ref{sec:approx} we give a polynomial-time approximation algorithm for \textsc{WeightedDelivery}\xspace with an approximation ratio of $4\cdot \max \frac{w_i}{w_j}$ for $\kappa =1$.
Due to the limited space, some proofs are deferred to the appendix.
\subparagraph{Related Work.}
The problem of communicating or transporting goods between sources and destinations in a graph has been well studied in a variety of models with different optimization criteria. The problem of finding the smallest subgraph or tree that connects multiple sources and targets in a graph is called the \emph{point-to-point connection problem} and is known to be $\mathrm{NP}$-hard~\cite{Mccormick92Point-to-point}. The problem is related to the more well-known generalized Steiner tree problem~\cite{Winter87Steiner} which is also $\mathrm{NP}$-hard.
Unlike these problems, the maximum flow problem in a network~\cite{edmonds1972theoretical} puts a limit on the number of messages that can be transported over an edge, which makes the problem easier and allows for polynomial-time solutions. In all these problems, however, there are no agents carrying the messages as in our problem.
For the case of a single agent moving in a graph, the task of optimally visiting all nodes, called the \emph{traveling salesman problem}, or of visiting all edges, called the \emph{Chinese postman problem}, has been studied before. The former is known to be $\mathrm{NP}$-hard~\cite{ApplegateTSP}, while the latter can be solved in $O(|V|^{2}|E|)$ time~\cite{Edmonds73}. For metric graphs, the traveling salesman problem has a polynomial-time $\tfrac{3}{2}$-approximation for tours~\cite{Christofides76} and for paths with one fixed endpoint~\cite{Hoogeveen91}.
For multiple identical agents in a graph,
Demaine et al.~\cite{Demaine2009} studied the problem of moving the agents to form desired configurations (e.g. connected or independent configurations) and provided approximation algorithms and inapproximability results. Bilo et al.~\cite{Bilo2013} studied similar problems on visibility graphs of simple polygons and showed many motion planning problems to be hard to approximate.
Another optimization criterion is to minimize the maximum energy consumption of any agent, which requires partitioning the given task among the agents.
Frederickson et al.~\cite{FredericksonHechtKim/76} studied this for uniform weights, called it the \emph{$k$-stacker-crane problem}, and gave approximation algorithms for a single agent and for multiple agents.
Also in this minmax context, the problem of visiting all the nodes of a tree using $k$ agents starting from a single location is known to be $\mathrm{NP}$-hard~\cite{FraGKP04}. Anaya et al.~\cite{AnayaCCLPV16} studied the model of
agents having limited energy budgets. They presented hardness results (on trees) and approximation algorithms (on arbitrary graphs) for the problem of transferring information from one agent to all others (\emph{Broadcast}) and from all agents to one agent (\emph{Convergecast}). For the same model, message delivery between a single $s$-$t$ node pair was studied by Chalopin et al.~\cite{DDalgosensors13, DDicalp14, sirocco16}\nocite{sirocco16arxiv}
as mentioned above.
A recent paper~\cite{EnergyExchange15} shows that these three problems remain $\mathrm{NP}$-hard for general graphs even if the agents are allowed to exchange energy when they meet.
\subsection{Polynomial-time Approximation for Planning in Restricted Settings}
Motivated by Theorem~\ref{theo-boc-nointermediate}, we now look at the restricted setting of planning for a feasible schedule $S^R$
of which we know that each message is completely transported by some agent $a_j$ without intermediate drop-offs,
i.e., for every message~$m_i$ there must be an agent~$j$ with $S^R|_{m_i} = (a_j, s_i, m_i, +), (a_j,t_i, m_i, -)$.
This allows us to give polynomial-time approximations for planning with capacity $\kappa \in \left\{ 1,\infty \right\}$:
\begin{restatable}{thm2}{restrictedplanning}
\label{thm:planning-restricted}
Let $S^R$ be a feasible schedule for a given instance of \textsc{WeightedDelivery}\xspace with the restriction that $\forall i\ \exists j:\ S^R|_{m_i} = (a_j, s_i, m_i, +), (a_j,t_i, m_i, -)$.
Denote by $\textsc{Opt}(S^R)$ a reordering of $S^R$ with optimal cost.
There is a polynomial-time planning algorithm $\textsc{Alg}$ which gives a reordering $\textsc{Alg}(S^R)$ such that
$\textsc{cost}(\textsc{Alg}(S^R)) \leq 2\cdot \textsc{cost}(\textsc{Opt}(S^R))$ if $\kappa=1$ and
$\textsc{cost}(\textsc{Alg}(S^R)) \leq 3.5\cdot \textsc{cost}(\textsc{Opt}(S^R))$ if $\kappa=\infty$.
\end{restatable}
\begin{proof}
By the given restriction, planning each $S^R|_{a_j}$ independently maintains feasibility of $S^R$. We denote by $m_{j1}, m_{j2}, \ldots, m_{jx}$ the messages appearing in $S^R|_{a_j}$.
We define a complete undirected auxiliary graph $G'=(V',E')$ on the node set
$V' = \left\{ p_j \right\} \cup \left\{ s_{j1}, s_{j2}, \ldots, s_{jx} \right\} \cup \left\{ t_{j1}, \ldots, t_{jx} \right\}$ with edges $(u,v)$ having weight $d_G(u,v)$.
For $\underline{\kappa = 1}$, the schedule $\textsc{Opt}(S^R)|_{a_j}$ corresponds to a Hamiltonian path $H$ in $G'$ of minimum length, starting in $p_j$,
subject to the condition that for each message $m_{ji}$ the visit of its source $s_{ji}$ is directly followed by a visit of its destination $t_{ji}$.
We can lower bound the length of $H$ by the total length of a spanning tree $T'=(V', E(T')) \subseteq G'$ constructed as follows:
Starting with an empty graph on $V'$ we first add all edges $(s_{ji},t_{ji})$.
Following the idea of Kruskal~\cite{Kruskal56}, we add edges from
$\left\{ p_j \right\} \times \left\{ s_{j1}, \ldots, s_{jx} \right\} \cup \left\{ t_{j1}, \ldots, t_{jx} \right\} \times \left\{ s_{j1}, \ldots, s_{jx} \right\}$
in increasing order of their lengths, disregarding any edges which would result in the creation of a cycle.
Now a DFS-traversal of $T'$ starting from $p_j$ visits any edge $(s_{ji},t_{ji})$ in both directions. Whenever we cross such an edge from $s_{ji}$ to $t_{ji}$,
we add $(a_j,s_{ji},m_{ji},+),(a_j,t_{ji},m_{ji},-)$ as a suffix to the current schedule $\textsc{Alg}(S^R)|_{a_j}$.
We get an overall cost of $\textsc{cost}(\textsc{Alg}(S^R)|_{a_j}) \leq 2\cdot \sum_{e \in E(T')} l_e \leq 2 \cdot \sum_{e \in H} l_e = 2\cdot \textsc{cost}(\textsc{Opt}(S^R)|_{a_j})$.
\ifProofsAppendix
For $\underline{\kappa = \infty}$, the idea is to first collect all messages by traversing a spanning tree (with cost $\leq 2\cdot \textsc{cost}(\textsc{Opt}(S^R)|_{a_j})$)
and then delivering all of them in a metric TSP path fashion (with cost $\leq \tfrac{3}{2}\cdot \textsc{cost}(\textsc{Opt}(S^R)|_{a_j})$), see Appendix~\ref{app:planning} for details.
\else
\ExecuteMetaData[appendix-planning.tex]{restrictedplanning}
\fi
\end{proof}
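To make the $\kappa=1$ construction above concrete, the following is a minimal Python sketch of the tree-based $2$-approximation for a single agent: it forces every $(s_i,t_i)$ edge into a spanning tree, completes the tree Kruskal-style over the allowed edge set $\{p\}\times\{\text{sources}\}\cup\{\text{targets}\}\times\{\text{sources}\}$, and emits a pickup/drop schedule along a DFS traversal. All names are illustrative; the metric $d$ is supplied by the caller, and for simplicity we assume the forced source--target edges contain no cycle and each source carries a single message.

```python
def plan_kappa1(p, pairs, d):
    """Sketch of the restricted-planning 2-approximation for capacity 1.

    p      -- starting position of the agent
    pairs  -- list of (s_i, t_i) source/target nodes, one per message
    d      -- metric distance function on nodes
    Assumes the forced (s_i, t_i) edges contain no cycle and each source
    carries at most one message (illustrative simplification)."""
    sources = {s for s, t in pairs}
    target_nodes = {t for s, t in pairs}
    nodes = {p} | sources | target_nodes
    parent = {v: v for v in nodes}

    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
        return True

    adj = {v: [] for v in nodes}
    for s, t in pairs:                # force every source->target edge
        if union(s, t):
            adj[s].append(t)
            adj[t].append(s)
    # Kruskal-style completion over the allowed edges
    # ({p} u targets) x sources, in increasing order of length.
    allowed = sorted((d(u, v), u, v)
                     for u in {p} | target_nodes for v in sources if u != v)
    for _, u, v in allowed:
        if union(u, v):
            adj[u].append(v)
            adj[v].append(u)

    schedule, cost = [], 0.0
    target_of = {s: t for s, t in pairs}

    def dfs(u, prev):                 # every tree edge is walked twice
        nonlocal cost
        for v in adj[u]:
            if v == prev:
                continue
            cost += d(u, v)
            if target_of.get(u) == v:        # crossing s_i -> t_i forward
                schedule.append(('pickup', u))
                schedule.append(('drop', v))
            dfs(v, u)
            cost += d(v, u)
            if target_of.get(v) == u:        # crossing s_i -> t_i on return
                schedule.append(('pickup', v))
                schedule.append(('drop', u))

    dfs(p, None)
    return schedule, cost
```

On a line with $p=0$ and messages $1\to2$ and $3\to4$ (with $d(u,v)=|u-v|$), the walk costs $8$, while the optimal tour $0\to1\to2\to3\to4$ costs $4$, matching the factor-$2$ bound.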
\begin{remark}
If we assume as an additional property that the agent returns to its starting position $p_j$ (as for example in the result of Theorem~\ref{theo-boc-nointermediate}), we can get a better approximation for the case $\kappa=1$.
Instead of traversing a spanning tree twice, we can model this as the \emph{stacker-crane problem}, for which a polynomial-time $1.8$-approximation is known~\cite{FredericksonHechtKim/76}.
\end{remark}
\section{Introduction}
\label{sec:introduction}
\input{introduction.tex}
\section{Collaboration}
\label{sec:singlemessage}
\input{collaboration-singlemessage.tex}
\input{collaboration-boclowerbound.tex}
\input{collaboration-bocupperbound.tex}
\section{Planning}
\label{sec:planning}
\input{planning.tex}
\section{Coordination}
\label{sec:message-order-hardness}
\input{coordination-hardness.tex}
\input{coordination-mcmf.tex}
\section{Approximation Algorithm}
\label{sec:approx}
\input{approximation}
\newpage
\section{Introduction}
In the analysis of data on decays of the $\Upsilon$-meson family --$\Upsilon(2S)\to\Upsilon(1S)\pi\pi$, $\Upsilon(3S)\to\Upsilon(1S)\pi\pi$ and $\Upsilon(3S)\to\Upsilon(2S)\pi\pi$ -- the contribution of multi-channel $\pi\pi$ scattering in the final-state interactions is considered.
The analysis, which aims at studying the scalar mesons, is performed by jointly considering the isoscalar S-wave processes $\pi\pi\!\to\!\pi\pi,K\overline{K},\eta\eta$, which are described in our model-independent approach based on analyticity and unitarity and using a uniformization procedure, and the charmonium decay processes
$J/\psi\to\phi(\pi\pi, K\overline{K})$, $\psi(2S)\to J/\psi(\pi\pi)$.
\vspace*{0.5mm}
The importance of studying the properties of scalar mesons is related to the obvious fact that a comprehension of these states is necessary in principle for the most profound topics concerning the QCD vacuum, because these sectors affect each other especially strongly due to possible ``direct'' transitions between them. However, the problem of interpretation of the scalar mesons is far from being solved completely \cite{PDG-12}.
E.g., applying our model-independent method in the 3-channel analyses of processes $\pi\pi\to\pi\pi,K\overline{K},\eta\eta,\eta\eta^\prime$ \cite{SBKN-PRD10,SBL-prd12} we have obtained parameters of the $f_0(500)$ and $f_0(1500)$ which differ considerably from results of analyses which utilize other methods (mainly those based on dispersion relations and Breit--Wigner approaches).
To make our approach more convincing, to confirm obtained results and to diminish inherent arbitrariness, we have utilized the derived model-independent amplitudes for multi-channel $\pi\pi$ scattering calculating the contribution of final-state interactions in decays $J/\psi\to\phi(\pi\pi, K\overline{K})$, $\psi(2S)\to J/\psi(\pi\pi)$ and $\Upsilon(2S)\to\Upsilon(1S)\pi\pi$ \cite{SBGLKN-npbps13,SBLKN-prd14}.
Here we add to the analysis the data on the decays $\Upsilon(3S)\to\Upsilon(1S)\pi\pi$ and $\Upsilon(3S)\to\Upsilon(2S)\pi\pi$ from the CLEO(94) Collaboration. The $\Upsilon(3S)$ decays differ from the above ones in that here the phase space, as it were, cuts off possible contributions which might interfere destructively with the $\pi\pi$-scattering contribution, giving the characteristic 2-humped shape of the energy dependence of the di-pion spectrum in the decay $\Upsilon(3S)\to\Upsilon(1S)\pi\pi$.
After the 2-humped shape of the di-pion spectrum was established, Lipkin and Tuan \cite{Lipkin-Tuan} suggested that the decay $\Upsilon(3S)\to\Upsilon(1S)\pi\pi$ proceeds as follows:
$~~~\Upsilon(3S)\to B^*\overline{B}^*\to B^*\overline{B}\pi\to B\overline{B}\pi\pi\to\Upsilon(1S)\pi\pi$.\\ Then, in the heavy-quarkonium limit, neglecting the recoil of the final quarkonium state, they obtained that the amplitude contains a term proportional to ${{\bf p}_1\!\cdot{\bf p}_2}\propto\cos\theta_{12}$ ($\theta_{12}$ is the angle between the pion three-momenta ${\bf p}_1$ and ${\bf p}_2$) multiplied by some function of the kinematic invariants. If the latter were a constant, then the distribution
$d\Gamma/d\cos\theta_{12}\propto\cos^2\theta_{12}$ (and $d\Gamma/d M_{\pi\pi}$) would have the 2-humped shape. However, this scenario was not tested numerically by fitting to data. It is possible that this effect is negligible due to the small coupling of the $\Upsilon$ to the $b$-flavored sector.
In \cite{Moxhay}, Moxhay suggested that the 2-humped shape is a result of interference
between two parts of the decay amplitude.
One part, in which the $\pi\pi$ final-state interaction is allowed for, is related to a mechanism
which works well in the decays $\psi(2S)\to J/\psi(\pi\pi)$ and $\Upsilon(2S)\to\Upsilon(1S)\pi\pi$
and which, obviously, should also operate in the process $\Upsilon(3S)\to\Upsilon(1S)\pi\pi$.
The other part is attributed to the Lipkin--Tuan mechanism, though little remains of
the latter: the author states that the term containing ${\bf p}_1\!\cdot{\bf p}_2$ does not dominate
this part of the amplitude and that ``the other tensor structures conspire to give a distribution in
$M_{\pi\pi}$ that is more or less flat'' -- indeed, constant.
The approach of \cite{Komada-Ishida-Ishida} seems to resemble the above one. The authors
have simply supposed that a pion pair is formed in the $\Upsilon(3S)$ decay both as a result of
re-scattering and ``directly''. One can, however, argue that the latter is not reasonable, because
the pions inevitably interact strongly.
We show that the indicated effect of destructive interference can be achieved by taking into account our previous conclusions on the wide resonances \cite{SBLKN-prd14,SBLKN-jpgnpp14}, without such doubtful assumptions.
\section{The model-independent amplitudes for multi-channel $\pi\pi$ scattering}
Considering the multi-channel $\pi\pi$ scattering, we shall deal with the 3-channel case (namely with $\pi\pi\!\to\!\pi\pi,K\overline{K},\eta\eta$) because it was shown \cite{SBLKN-jpgnpp14,SBKLN-PRD12} that this is the minimal number of coupled channels needed for obtaining correct values of the scalar-isoscalar resonance parameters.
\vspace*{0.5mm}
\begin{itemize}
\item{\underline{Resonance representations on the 8-sheeted Riemann surface}}
\end{itemize}
\vspace*{0.6mm}
The 3-channel $S$-matrix is determined on the 8-sheeted Riemann surface. The matrix elements $S_{ij}$, where $i,j=1,2,3$ denote channels, have the right-hand cuts along the real axis of the $s$ complex plane ($s$ is the invariant total energy squared), starting with the channel thresholds $s_i$ ($i=1,2,3$), and the left-hand cuts related to the crossed channels.
The Riemann-surface sheets are numbered according to the signs of analytic continuations of the square roots $\sqrt{s-s_i}~~(i=1,2,3)$ as follows:\\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
{} & ~I~ & ~II~ & ~III~ & ~IV~ & ~V~ & ~VI~ & ~VII~ & ~VIII~ \\ \hline
{~$\mbox{Im}\sqrt{s-s_1}$~} & $+$ & $-$ & $-$ & $+$ & $+$ & $-$ & $-$ & $+$ \\
{~$\mbox{Im}\sqrt{s-s_2}$~} & $+$ & $+$ & $-$ & $-$ & $-$ & $-$ & $+$ & $+$\\
{~$\mbox{Im}\sqrt{s-s_3}$~} & $+$ & $+$ & $+$ & $+$ & $-$ & $-$ & $-$ & $-$\\
\hline
\end{tabular}
\end{center}
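As a cross-check of the table, the sheet numbering can be encoded as a small lookup; this is a hypothetical helper for the reader, not part of the analysis itself:

```python
# Lookup reproducing the sheet-numbering table: given the signs of
# Im sqrt(s - s_i) for i = 1, 2, 3, return the Riemann-sheet number I..VIII.
SHEETS = {
    ('+', '+', '+'): 'I',    ('-', '+', '+'): 'II',
    ('-', '-', '+'): 'III',  ('+', '-', '+'): 'IV',
    ('+', '-', '-'): 'V',    ('-', '-', '-'): 'VI',
    ('-', '+', '-'): 'VII',  ('+', '+', '-'): 'VIII',
}

def sheet(im_signs):
    """im_signs: '+'/'-' signs of Im sqrt(s-s_1), Im sqrt(s-s_2), Im sqrt(s-s_3)."""
    return SHEETS[tuple(im_signs)]
```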
\vspace*{0.1mm}
An adequate allowance for the Riemann-surface structure is achieved by using the following uniformizing variable
\cite{SBL-prd12}, in which we have neglected the $\pi\pi$-threshold branch-point and taken into account the $K\overline{K}$- and $\eta\eta$-threshold branch-points and the left-hand branch-point at $s=0$ related to the crossed channels:
\begin{equation}\label{w}
w=\frac{\sqrt{(s-s_2)s_3} + \sqrt{(s-s_3)s_2}}{\sqrt{s(s_3-s_2)}}~~~~(s_2=4m_K^2 ~ {\rm and}~ s_3=4m_\eta^2)
\end{equation}
(reasons and substantiation for neglecting the $\pi\pi$-threshold branch-point can be found in works \cite{SBL-prd12,KMS-96}).
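For illustration, the uniformizing variable of Eq.~(\ref{w}) is easy to evaluate numerically. The sketch below uses assumed values $m_K\approx0.4957$~GeV and $m_\eta\approx0.5479$~GeV and checks two mapping properties quoted in the text: the second ($K\overline{K}$) threshold maps to $w=i$, the third ($\eta\eta$) threshold to $w=1$, and $s=\infty$ to $b=(\sqrt{s_2}+\sqrt{s_3})/\sqrt{s_3-s_2}$.

```python
import cmath

# Assumed threshold inputs (GeV): s_2 = 4 m_K^2, s_3 = 4 m_eta^2.
M_K, M_ETA = 0.4957, 0.5479
S2, S3 = 4 * M_K**2, 4 * M_ETA**2

def w(s):
    """Uniformizing variable of Eq. (1), mapping the model of the
    8-sheeted Riemann surface onto the w-plane; s is the invariant
    total energy squared."""
    return ((cmath.sqrt((s - S2) * S3) + cmath.sqrt((s - S3) * S2))
            / cmath.sqrt(s * (S3 - S2)))

# Image of s = infinity on the real w-axis.
b = (S2**0.5 + S3**0.5) / (S3 - S2)**0.5
```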
Resonance representations on the Riemann surface are obtained using formulas from \cite{SBL-prd12,KMS-96}, which express the analytic continuations of the $S$-matrix elements to all sheets in terms of those on the physical (I) sheet, which have only the resonance zeros (beyond the real axis), at least around the physical region.
In the 3-channel case, there are {\it 7 types} of resonances corresponding to 7 possible situations when there are resonance zeros on sheet I only in $S_{11}$ -- ({\bf a}); ~~$S_{22}$ -- ({\bf b}); ~~$S_{33}$ -- ({\bf c}); ~~$S_{11}$ and $S_{22}$ -- ({\bf d}); ~~$S_{22}$ and $S_{33}$ -- ({\bf e}); ~~$S_{11}$ and $S_{33}$ -- ({\bf f}); ~~$S_{11}$, $S_{22}$ and $S_{33}$ -- ({\bf g}). The resonance of every type is represented by the pair of complex-conjugate \underline{clusters} (of poles and zeros on the Riemann surface).
The variable $w$ eq.(\ref{w}) maps a model of the 8-sheeted Riemann surface, allowing for the neglect
of the $\pi\pi$-threshold branch-point, onto the $w$-plane divided into two parts by
a unit circle centered at the origin (Fig. 1).
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.36\textwidth,angle=-90]{fig_3ch_a3.eps}
\hspace*{-0.1cm}
\includegraphics[width=0.36\textwidth,angle=-90]{fig_3ch_b3.eps}\\
\vspace*{0.5cm}
\includegraphics[width=0.36\textwidth,angle=-90]{fig_3ch_c3.eps}
\hspace*{-0.1cm}
\includegraphics[width=0.36\textwidth,angle=-90]{fig_3ch_g3.eps}
\vspace*{0.3cm}
\caption{Uniformization $w$-plane: Representation of resonances of
types ({\bf a}), ({\bf b}), ({\bf c}) and ({\bf g}) in the
3-channel $\pi\pi$-scattering $S$-matrix element.}
\end{center}\label{fig:lw_plane}
\end{figure}
The semi-sheets I (III), II (IV), V (VII) and VI (VIII) are mapped onto the exterior (interior) of the unit disk in the 1st, 2nd, 3rd and 4th quadrants, respectively. The physical region, shown by the thick line, extends from the point $\pi\pi$ on the imaginary axis (the first $\pi\pi$ threshold corresponding to $s_1$) along this axis down to the point {\it i} on the unit circle (the second threshold corresponding to $s_2$). Then it extends further along the unit circle
clockwise in the 1st quadrant to point 1 on the real axis (the third threshold
corresponding to $s_3$) and then along the real axis to the point
$b=(\sqrt{s_2}+\sqrt{s_3})/\sqrt{s_3-s_2}$ into which $s=\infty$ is mapped on the $w$-plane. The intervals $(-\infty,-b]$, $[-b^{-1},b^{-1}]$, $[b,\infty)$ on the real axis are the images of the corresponding edges of the left-hand cut of the $\pi\pi$-scattering amplitude. In Fig.~1, the 3-channel resonances of types ({\bf a}), ({\bf b}), ({\bf c}) and ({\bf g}) encountered in the analysis are represented in $S_{11}(w)$ by poles ($*$) and by zeros ($\circ$) symmetric to these poles with respect to the imaginary axis, giving the corresponding pole clusters. The ``pole--zero'' symmetry guarantees the elastic unitarity of $\pi\pi$ scattering in the ($\pi\pi$, $i$) interval.
\newpage
\begin{itemize}
\item{\underline{The $S$-matrix parametrization}}
\end{itemize}
\vspace*{0.1cm}
The $S$-matrix elements $S_{ij}$ are parameterized using the Le Couteur-Newton relations \cite{LeCou}. On the $w$-plane, we have derived for them:
\begin{equation}\label{w:LeCouteur-Newton}
S_{11}=\frac{d^* (-w^*)}{d(w)},~~~~~~~~
S_{22}=\frac{d(-w^{-1})}{d(w)},~~~~~~~~
S_{33}=\frac{d(w^{-1})}{d(w)},
\end{equation}
\begin{equation}
S_{11}S_{22}-S_{12}^2=\frac{d^*({w^*}^{-1})}{d(w)},~~~~~~~~
S_{11}S_{33}-S_{13}^2=\frac{d^*(-{w^*}^{-1})}{d(w)}.~~
\end{equation}
Here $d(w)$ is the Jost matrix determinant.
The 3-channel unitarity requires the following relations to hold for physical $w$-values:
\begin{equation}
|d(-w^*)|\leq |d(w)|,\quad |d(-w^{-1})|\leq |d(w)|,\quad |d(w^{-1})|\leq
|d(w)|,
\end{equation}
\begin{equation}
|d({w^*}^{-1})|=|d(-{w^*}^{-1})|=|d(-w)|=|d(w)|.
\end{equation}
The $S$-matrix elements in Le Couteur--Newton relations (\ref{w:LeCouteur-Newton}) are taken as the products~~$S=S_B S_{res}$; the main (\underline{model-independent}) contribution of resonances, given by the pole clusters, is included in the resonance part $S_{res}$; possible remaining small (\underline{model-dependent}) contributions of resonances and influence of channels which are not taken explicitly into account in the uniformizing variable are included in the background part $S_B$. The d-function is:\\ for the resonance part
\begin{equation}
d_{res}(w)=w^{-\frac{M}{2}}\prod_{r=1}^{M}(w+w_{r}^*)~~~(M~ \mbox{is the number of resonance zeros}),
\end{equation}
for the background part
\begin{equation}
d_B=\mbox{exp}[-i\sum_{n=1}^{3}\frac{\sqrt{s-s_n}}{2m_n}(\alpha_n+i\beta_n)]
\end{equation}
where
$$\alpha_n=a_{n1}+a_{n\sigma}\frac{s-s_\sigma}{s_\sigma}\theta(s-s_\sigma)+
a_{nv}\frac{s-s_v}{s_v}\theta(s-s_v)
$$
$$\beta_n=b_{n1}+b_{n\sigma}\frac{s-s_\sigma}{s_\sigma}\theta(s-s_\sigma)+
b_{nv}\frac{s-s_v}{s_v}\theta(s-s_v)$$
with $s_\sigma$ the $\sigma\sigma$ threshold, $s_v$ the combined threshold of the
$\eta\eta^{\prime},~\rho\rho,~\omega\omega$ channels.
The resonance zeros $w_{r}$ and the background parameters were fixed by fitting to the data on the processes $\pi\pi\to\pi\pi,K\overline{K},\eta\eta$, together with the data on the decays $J/\psi\to\phi(\pi\pi, K\overline{K})$ and $\psi(2S)\to J/\psi(\pi\pi)$ from the Crystal Ball, DM2, Mark~II, Mark~III, and BES~II Collaborations.
For the data on multi-channel $\pi\pi$ scattering we used the results of phase
analyses which are given for phase shifts of the amplitudes $\delta_{\alpha\beta}$
and for the modules of the $S$-matrix elements
$\eta_{\alpha\beta}=|S_{\alpha\beta}|$ ($\alpha,\beta=1,~2,~3$):
\begin{equation}
S_{\alpha\alpha}=\eta_{\alpha\alpha}e^{2i\delta_{\alpha\alpha}},~~~~~
S_{\alpha\beta}=i\eta_{\alpha\beta}e^{i\phi_{\alpha\beta}}.
\end{equation}
Below the third threshold, where 2-channel unitarity holds, the relations
\begin{equation}
\eta_{11}=\eta_{22}, ~~ \eta_{12}=(1-{\eta_{11}}^2)^{1/2},~~
\phi_{12}=\delta_{11}+\delta_{22}
\end{equation}
are fulfilled in this energy region.
For the $\pi\pi$ scattering, the data from the threshold to 1.89~GeV are taken from many works
\cite{pipi-data}. For $\pi\pi\to K\overline{K}$, practically all the accessible data are used
\cite{pipiKK-data}. For $\pi\pi\to\eta\eta$, we have taken the data for $|S_{13}|^2$ from the
threshold to 1.72~GeV \cite{pipi_eta_eta-data}.
We have found the following scenario to be preferable: the $f_0(500)$ is described by a cluster of type ({\bf a}); the $f_0(1370)$ and $f_0(1500)$ by type ({\bf c}) and the $f_0^\prime(1500)$ by type ({\bf g}); the $f_0(980)$ is represented only by the pole on sheet~II and a shifted pole on sheet~III. However, the $f_0(1710)$ can be described by clusters of either type ({\bf b}) or ({\bf c}). For definiteness, we have taken type~({\bf c}).
Analyzing these data, we obtained two solutions which differ mainly in the width of the $f_0(500)$. Below we show the solution which survived after adding to the analysis the data on the decays $J/\psi\to\phi(\pi\pi, K\overline{K})$ from the Mark~III, DM2 and BES~II Collaborations.
A comparison of the description with the experimental data on multi-channel $\pi\pi$ scattering is shown in Fig.~2.
\begin{figure}[!thb]
\begin{center}
\includegraphics[width=0.46\textwidth,angle=0]{ppf.eps}
\includegraphics[width=0.46\textwidth,angle=0]{ppm.eps}\\
\includegraphics[width=0.46\textwidth,angle=0]{pKf.eps}
\includegraphics[width=0.46\textwidth,angle=0]{pKm.eps}\\
\vspace*{-0.0cm} \includegraphics[width=0.48\textwidth,angle=0]{petam.eps}
\vskip -.3cm
\caption{The phase shifts and modules of the $S$-matrix element in the S-wave $\pi\pi$-scattering (upper panel), in $\pi\pi\to K\overline{K}$ (middle panel), and the squared module of the $\pi\pi\to\eta\eta$ $S$-matrix element (lower figure).}
\end{center}\label{fig:fitting}
\end{figure}
In Table~\ref{tab:clusters} we show the obtained pole clusters for the resonances on the complex energy plane $\sqrt{s}$. The poles corresponding to the $f_0^\prime(1500)$ are of second order on sheets III, V and VII, and of third order on sheet VI (this is an approximation).
\begin{table}[!htb]
\caption{The pole clusters for $f_0$ resonances on the $\sqrt{s}$-plane.
~$\sqrt{s_r}\!=\!{\rm E}_r\!-\!i\Gamma_r/2$ in MeV.}
\label{tab:clusters}
\vspace*{0.05cm}
\def1.5{1.5}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline ${\rm Sheet}$ & {} & $f_0(500)$ & $f_0(980)$ & $f_0(1370)$ & $f_0(1500)$ & $f_0^\prime(1500)$ & $f_0(1710)$ \\ \hline
II & {${\rm E}_r$} & $514.5\pm12.4$ & $1008.1\pm3.1$\! & {} & {} & $1512.7\pm4.9$ & {} \\
{} & {$\Gamma_r/2$} & $465.6\pm5.9$ & $32.0\pm1.5$ & {} & {} & $285.8\pm12.9$ & {} \\
\hline III & {${\rm E}_r$} & $544.8\pm17.7$ & $976.2\pm5.8$ & $1387.6\pm24.4$ & {} & $1506.2\pm9.0$ & {} \\{} & {$\Gamma_r/2$} & $465.6\pm5.9$ & $53.0\pm2.6$ & $166.9\pm41.8$ & {} & \!\!\!$127.9\pm10.6$ & {} \\
\hline IV & {${\rm E}_r$} & {} & {} & 1387.6$\pm$24.4 & {} & 1512.7$\pm$4.9 & {} \\
{} & {$\Gamma_r/2$} & {} & {} & $178.5\pm37.2$ & {} & $216.0\pm17.6$ & {} \\
\hline V & {${\rm E}_r$} & {} & {} & 1387.6$\pm$24.4 & $1493.9\pm3.1$ & $1498.9\pm7.2$ & $1732.8\pm43.2$ \\
{} & {$\Gamma_r/2$} & {} & {} & $260.9\pm73.7$ & $72.8\pm3.9$ & $142.2\pm6.0$ & $114.8\pm61.5$ \\
\hline VI & {${\rm E}_r$} & $566.5\pm29.1$ & {} & 1387.6$\pm$24.4 & $1493.9\pm5.6$
& $1511.4\pm4.3$ & 1732.8$\pm$43.2 \\
{} & {$\Gamma_r/2$} & $465.6\pm5.9$ & {} & $249.3\pm83.1$ & $58.4\pm2.8$ & $179.1\pm4.0$ & $111.2\pm8.8$ \\
\hline VII & {${\rm E}_r$} & $536.2\pm25.5$ & {} & {} & $1493.9\pm5.0$ & $1500.5\pm9.3$ & 1732.8$\pm$43.2 \\
{} & {$\Gamma_r/2$} & $465.6\pm5.9$ & {} & {} & $47.8\pm9.3$ & $99.7\pm18.0$ & $55.2\pm38.0$ \\
\hline VIII & {${\rm E}_r$} & {} & {} & {} & $1493.9\pm3.2$ & 1512.7$\pm$4.9 & 1732.8$\pm$43.2 \\
{} & {$\Gamma_r/2$} & {} & {} & {} & $62.2\pm9.2$ & $299.6\pm14.5$ & $58.8\pm16.4$ \\
\hline
\end{tabular}
\end{table}
The obtained background parameters are:
\underline{$a_{11}=0.0$, $a_{1\sigma}=0.0199$, $a_{1v}=0.0$,} \underline{$b_{11}=b_{1\sigma}=0.0$, $b_{1v}=0.0338$,} $a_{21}=-2.4649$, $a_{2\sigma}=-2.3222$, $a_{2v}=-6.611$, $b_{21}=b_{2\sigma}=0.0$, $b_{2v}=7.073$, $b_{31}=0.6421$, $b_{3\sigma}=0.4851$, $b_{3v}=0$; $s_\sigma=1.6338~{\rm GeV}^2$, $s_v=2.0857~{\rm GeV}^2$.
The very simple description of the $\pi\pi$-scattering background (underlined numbers) confirms well our assumption \underline{$S=S_B S_{res}$} and also that representation of multi-channel resonances by the pole clusters on the uniformization plane is good and quite sufficient.
\underline{It is important that we have obtained practically zero background of the}
\underline{$\pi\pi$ scattering in the scalar-isoscalar channel}, because a reasonable and simple description of the background should be a criterion for the correctness of the approach. Furthermore, this shows that taking the left-hand branch-point at $s=0$ into account in the uniformizing variable partly solves a problem of some approaches (see, e.g., \cite{Achasov94}), namely that the wide-resonance parameters are strongly controlled by the non-resonant background.
Generally, {\it wide multi-channel states are most adequately represented by pole clusters}, because the pole clusters give the main model-independent effect of resonances. The pole positions are rather stable characteristics for various models, whereas masses and widths are very model-dependent for wide resonances.
However, mass values are needed in some cases, e.g., in mass relations for multiplets. Therefore, we stress that such parameters of the wide multi-channel states, as {\it masses, total widths and coupling constants with channels, should be calculated using the poles on sheets II, IV and VIII}, because only on these sheets the analytic continuations have the forms: $$\propto 1/S_{11}^{\rm I},~~
\propto 1/S_{22}^{\rm I}~~{\rm and}~~\propto 1/S_{33}^{\rm I},$$
respectively, i.e., the pole positions of resonances are at the same points of the complex-energy plane as the resonance zeros on the physical sheet, and are not shifted due to the coupling of channels. E.g., if the resonance part of the amplitude is taken as
\begin{equation}
T^{res}=\sqrt{s}~\Gamma_{el}/(m_{res}^2-s-i\sqrt{s}~\Gamma_{tot}),
\end{equation}
for the mass and total width, one obtains
\begin{equation}
m_{res}=\sqrt{{\rm E}_r^2+\left(\Gamma_r/2\right)^2}~~~
{\rm and}~~~\Gamma_{tot}=\Gamma_r,
\end{equation}
where the pole position $\sqrt{s_r}\!=\!{\rm E}_r\!-\!i\Gamma_r/2$ must be taken
on sheets II, IV, VIII, depending on the resonance classification.
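As a numerical illustration of the formulas above (a sketch, with pole values taken from Table~\ref{tab:clusters}):

```python
import math

# Recover the resonance mass and total width from a pole position
# sqrt(s_r) = E_r - i*Gamma_r/2 taken on sheet II, IV or VIII.
def mass_and_width(E_r, half_width):
    m_res = math.hypot(E_r, half_width)   # sqrt(E_r^2 + (Gamma_r/2)^2)
    return m_res, 2.0 * half_width

# For the f0(500) pole on sheet II (Table 1: 514.5 - i*465.6 MeV) this
# reproduces the Table 2 entries m_res ~ 693.9 MeV, Gamma_tot ~ 931.2 MeV.
m, g = mass_and_width(514.5, 465.6)
```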
In Table~\ref{tab:mass-width} we show the obtained values of masses and total widths of the $f_0$ resonances.
\begin{table}[htb!]
\caption{\large The masses and total widths of the $f_0$ resonances.}
\vspace*{0.05cm}
\label{tab:mass-width}
\def1.5{1.5}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline {} & $f_0(500)$ & $f_0(980)$ & $f_0(1370)$ & $f_0(1500)$ & $f_0^\prime(1500)$ & $f_0(1710)$\\ \hline
$m_{res}$[MeV] & 693.9$\pm$10.0 & 1008.1$\pm$3.1 & 1399.0$\pm$24.7 & 1495.2$\pm$3.2 & 1539.5$\pm$5.4 & 1733.8$\pm$43.2 \\ \hline
$\Gamma_{tot}$[MeV] & 931.2$\pm$11.8 & 64.0$\pm$3.0
& 357.0$\pm$74.4 & 124.4$\pm$18.4 & 571.6$\pm$25.8 & 117.6$\pm$32.8 \\
\hline
\end{tabular}
\end{table}
\section{The contribution of multi-channel $\pi\pi$ scattering in the final states of decays of $\psi$- and $\Upsilon$-meson families}
For decays $J/\psi\to\phi\pi\pi,\phi K\overline{K}$ we have taken data from Mark III \cite{MarkIII}, DM2 \cite{DM2} and BES II \cite{BES} Collaborations;
for $\psi(2S)\to J/\psi(\pi^+\pi^-)$ from Mark~II \cite{Mark_II}; for $\psi(2S)\to J/\psi(\pi^0\pi^0)$ from Crystal Ball(80) \cite{Crystal_Ball(80)}; for $\Upsilon(2S)\to\Upsilon(1S)(\pi^+\pi^-,\pi^0\pi^0)$ from Argus \cite{Argus}, CLEO \cite{CLEO}, CUSB \cite{CUSB}, and Crystal Ball(85) \cite{Crystal_Ball(85)} Collaborations; finally for $\Upsilon(3S)\to\Upsilon(1S)(\pi^+\pi^-,\pi^0\pi^0)$ and $\Upsilon(3S)\to\Upsilon(2S)(\pi^+\pi^-,\pi^0\pi^0)$ from CLEO(94) Collaboration \cite{CLEO(94)}.
The formalism for calculating the di-meson mass distributions of the decays $J/\psi\to\phi(\pi\pi, K\overline{K})$ and $V^{\prime}\to V\pi\pi$ ($V=\psi,\Upsilon$) can be found in Ref.~\cite{MP-prd93}. It is assumed that the pairs of pseudoscalar mesons in the final states have $I=J=0$ and that only they undergo strong interactions, whereas the final vector meson ($\phi$, $V$) acts as a spectator. The decay amplitudes are related to the scattering amplitudes $T_{ij}$ $(i,j=1-\pi\pi,2-K\overline{K})$ as follows
\begin{equation}F(J/\psi\to\phi\pi\pi)=\sqrt{2/3}~[c_1(s)T_{11}+c_2(s)T_{21}],\end{equation}
\begin{equation}F(J/\psi\to\phi K\overline{K})=\sqrt{1/2}~[c_1(s)T_{12}+c_2(s)T_{22}],\end{equation}
\begin{equation}
F(V(2S)\to V(1S)\pi\pi~(V=\psi,\Upsilon))=[(d_1,e_1)T_{11}+(d_2,e_2)T_{21}],\end{equation}
\begin{equation}
F(\Upsilon(3S)\to \Upsilon(1S,2S)\pi\pi)=[(f_1,g_1)T_{11}+(f_2,g_2)T_{21}]\end{equation}
where ~~$c_1=\gamma_{10}+\gamma_{11}s$, ~~$c_2=\alpha_2/(s-\beta_2)+\gamma_{20}+\gamma_{21}s$, ~~$(d_i,e_i)=(\delta_{i0},\rho_{i0})+(\delta_{i1},\rho_{i1})s$ and ~~$(f_i,g_i)=(\omega_{i0},\tau_{i0})+(\omega_{i1},\tau_{i1})s$~~ are functions of couplings of the $J/\psi$, $\psi(2S)$, $\Upsilon(2S)$ and $\Upsilon(3S)$ to channel~$i$; ~$\alpha_2$, $\beta_2$, $\gamma_{i0}$, $\gamma_{i1}$, $\delta_{i0}$, $\rho_{i0}$, $\delta_{i1}$, $\rho_{i1}$, $\omega_{i0}$, $\omega_{i1}$, $\tau_{i0}$ and $\tau_{i1}$ are free parameters. The pole term in $c_2$ is an approximation of possible $\phi K$ states, not forbidden by OZI rules when considering quark diagrams of these processes. Obviously this pole should be situated on the real $s$-axis below the $\pi\pi$ threshold.
The expressions for decays $J/\psi\to\phi(\pi\pi, K\overline{K})$
\begin{equation}
N|F|^{2}\sqrt{(s-s_i)[m_\psi^{2}-(\sqrt{s}-m_\phi)^{2}][m_\psi^2-(\sqrt{s}+m_\phi)^2]}
\end{equation}
and the analogous relations for $V(2S)\to V(1S)\pi\pi~(V=\psi,\Upsilon)$ and $\Upsilon(3S)\to \Upsilon(1S,2S)\pi\pi$ give the di-meson mass distributions. $N$ (the normalization to experiment) is 0.7512 for Mark~III, 0.3705 for DM2, 5.699 for BES~II, 1.015 for Mark~II, 0.98 for Crystal Ball(80), 4.3439 for Argus, 2.1776 for CLEO, 1.2011 for CUSB, 0.0788 for Crystal Ball(85), and, finally, for CLEO(94): 0.5096 and 0.2235 for $\Upsilon(3S)\to\Upsilon(1S)(\pi^+\pi^-~{\rm and}~\pi^0\pi^0)$, 11.6092 and 5.7875 for $\Upsilon(3S)\to\Upsilon(2S)(\pi^+\pi^-$ ${\rm and}~\pi^0\pi^0)$, respectively. Parameters of the coupling functions of the decay particles ($J/\psi$, $\psi(2S)$, $\Upsilon(2S)$ and $\Upsilon(3S)$) to channel~$i$, obtained in the analysis, are: $(\alpha_2,\beta_2)=(0.0843,0.0385)$,
$(\gamma_{10},\gamma_{11},\gamma_{20},\gamma_{21})=(1.1826,1.2798,-1.9393,-0.9808)$,
$(\delta_{10},\delta_{11},\delta_{20},\delta_{21})=$($-0.1270$,~16.621,~5.983,~$-57.653$),
$(\rho_{10},\rho_{11},\rho_{20},\rho_{21})=$(0.4050, 47.0963, 1.3352,$-21.4343)$,
$(\omega_{10},\omega_{11},\omega_{20},\omega_{21})=$($1.1619,-2.915$,$0.7841$,~1.0179),
$(\tau_{10},\tau_{11},\tau_{20},\tau_{21})=(7.2842,-2.5599,0.0,0.0)$.
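As an aside, the kinematic (phase-space) factor in the di-meson mass distribution above is straightforward to evaluate numerically. The sketch below treats the $J/\psi\to\phi\pi\pi$ case; the mass values $m_{J/\psi}=3.0969$~GeV, $m_\phi=1.0195$~GeV, $m_\pi=0.1396$~GeV are illustrative PDG-like inputs, not fit results.

```python
import math

# Illustrative masses in GeV (assumptions for this example).
M_PSI, M_PHI, M_PI = 3.0969, 1.0195, 0.1396

def phase_space(s):
    """Kinematic factor sqrt((s - s_1)[m_psi^2 - (sqrt(s) - m_phi)^2]
    [m_psi^2 - (sqrt(s) + m_phi)^2]) with s_1 = 4 m_pi^2; the di-pion
    mass distribution is N |F|^2 times this factor."""
    s1 = 4 * M_PI**2
    rs = math.sqrt(s)
    val = (s - s1) * (M_PSI**2 - (rs - M_PHI)**2) * (M_PSI**2 - (rs + M_PHI)**2)
    return math.sqrt(val) if val > 0 else 0.0
```

The factor vanishes at the $\pi\pi$ threshold $s=4m_\pi^2$ and at the upper kinematic limit $\sqrt{s}=m_{J/\psi}-m_\phi$, and is positive in between.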
A satisfactory combined description of all considered processes is obtained with the total $\chi^2/\mbox{ndf}=596.706/(527-78)\approx1.33$;\\ for the $\pi\pi$ scattering, $\chi^2/\mbox{ndf}\approx1.15$;\\ for $\pi\pi\to K\overline{K}$, $\chi^2/\mbox{ndf}\approx1.65$;\\ for $\pi\pi\to\eta\eta$, $\chi^2/\mbox{ndp}\approx0.87$;\\ for decays $J/\psi\to\phi(\pi^+\pi^-, K^+K^-)$, $\chi^2/\mbox{ndp}\approx1.36$;\\ for $\psi(2S)\to J/\psi(\pi^+\pi^-,\pi^0\pi^0)$, $\chi^2/\mbox{ndp}\approx2.43$;\\ for $\Upsilon(2S)\to\Upsilon(1S)(\pi^+\pi^-,\pi^0\pi^0)$, $\chi^2/\mbox{ndp}\approx1.01$;\\
for $\Upsilon(3S)\to\Upsilon(1S)(\pi^+\pi^-,\pi^0\pi^0)$, $\chi^2/\mbox{ndp}\approx0.67$,\\
for $\Upsilon(3S)\to\Upsilon(2S)(\pi^+\pi^-,\pi^0\pi^0)$, $\chi^2/\mbox{ndp}\approx0.61$.
In Figs.~3--7 we show our fits to the experimental data on the above-indicated
decays of the $\psi$- and $\Upsilon$-meson families in the combined analysis with the processes
$\pi\pi\!\to\!\pi\pi,K\overline{K},\eta\eta$. The dips in the energy dependence of the di-pion spectra
(Fig.~7, upper panel) are the result of destructive interference between the $\pi\pi$-scattering and
$K\overline{K}\to\pi\pi$ contributions to the final states of the decays $\Upsilon(3S)\to\Upsilon(1S)(\pi^+\pi^-,\pi^0\pi^0)$.
\begin{figure}[!thb]
\begin{center}
\includegraphics[width=0.44\textwidth,angle=0]{Jpsi1.eps}
\includegraphics[width=0.44\textwidth,angle=0]{Jpsi2.eps}\\
\vspace*{0.1cm}
\includegraphics[width=0.44\textwidth,angle=0]{Jpsi3.eps}
\includegraphics[width=0.44\textwidth,angle=0]{Jpsi4.eps}
\vspace*{-0.1cm}\caption{The $J/\psi\to\phi\pi\pi$ and $J/\psi\to\phi
K\overline{K}$ decays. }
\end{center}\label{fig:Jpsi}
\end{figure}
\begin{figure}[!thb]
\begin{center}
\includegraphics[width=0.6\textwidth,angle=0]{JpsiBES.eps}
\vspace*{-0.2cm}\caption{The $J/\psi\to\phi\pi\pi$ decay;
the data of BES~II Collaboration. }
\end{center}\label{fig:BESII}
\end{figure}
Note the important role of the BES~II data:
precisely this di-pion mass distribution rejects the solution with
the narrower $f_0(500)$: the corresponding curve lies considerably below the data from the threshold up to about 850~MeV.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{psi21-1.eps}
\includegraphics[width=0.45\textwidth]{psi21-2.eps}
\end{center}
\vspace*{-.1cm}\caption{The $\psi(2S)\to J/\psi\pi\pi$ decay. }
\label{fig:psi21}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.44\textwidth]{Ups21-1.eps}
\includegraphics[width=0.44\textwidth]{Ups21-2.eps}\\
\includegraphics[width=0.44\textwidth]{Ups21-3.eps}
\includegraphics[width=0.44\textwidth]{Ups21-4.eps}
\vspace*{-0.1cm}\caption{{\small The $\Upsilon(2S)\to\Upsilon(1S)\pi\pi$ decay.
}}
\end{center}\label{fig:Ups21}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.44\textwidth]{Ups31-1.eps}
\includegraphics[width=0.44\textwidth]{Ups31-2.eps}\\
\includegraphics[width=0.44\textwidth]{Ups32-1.eps}
\includegraphics[width=0.44\textwidth]{Ups32-2.eps}
\vspace*{-0.1cm}\caption{{\small The decays $\Upsilon(3S)\to\Upsilon(1S)\pi\pi$ and $\Upsilon(3S)\to\Upsilon(2S)\pi\pi$.}}
\end{center}\label{fig:Ups31_32}
\end{figure}
\section{Conclusions}
\begin{itemize}
\item
We have performed the combined analysis of data on isoscalar S-wave processes $\pi\pi\to\pi\pi,K\overline{K},\eta\eta$ and on decays
$J/\psi\to\phi(\pi\pi, K\overline{K})$, $\psi(2S)\to J/\psi(\pi\pi)$,
$\Upsilon(2S)\to\Upsilon(1S)\pi\pi$, $\Upsilon(3S)\to\Upsilon(1S)\pi\pi$ and $\Upsilon(3S)\to\Upsilon(2S)\pi\pi$ from the Argus, Crystal Ball, CLEO, CUSB, DM2, Mark~II, Mark~III, and BES~II Collaborations.
\item
It was shown that in the final states of the $\Upsilon$-meson family decays (except the $\pi\pi$ scattering) the contribution of the coupled processes, e.g., $K\overline{K}\to\pi\pi$, is important even if these processes are energetically forbidden. This is in accordance with our previous conclusions on the wide resonances \cite{SBLKN-prd14,SBLKN-jpgnpp14,SBKLN-PRD12}: If a wide resonance cannot decay into a channel which opens above its mass but the resonance is strongly connected with this channel (e.g.\ the $f_0(500)$ and the $K\overline{K}$ channel), one should consider this resonance as a multi-channel state, allowing for the indicated channel by taking into account the Riemann-surface sheets related to its threshold branch-point and by performing a combined analysis of the considered and coupled channels. E.g., on the basis of this consideration a new and natural mechanism of destructive interference in the decay $\Upsilon(3S)\to\Upsilon(1S)\pi\pi$ is indicated, which produces the two-humped shape of the di-pion mass distribution (Fig.~7).
\item
Results of the analysis confirm all of our earlier conclusions on the scalar mesons, the main ones being: \\1) Confirmation of the $f_0(500)$ with a mass of about 700~MeV and a width of 930~MeV. This mass value is in line with the prediction ($m_{\sigma}\approx m_\rho$) on the basis of mended symmetry by S.~Weinberg \cite{Weinberg90} and with an analysis using the large-$N_c$ consistency conditions between unitarization and resonance saturation, suggesting $m_\rho-m_\sigma=O(N_c^{-1})$ \cite{Nieves-Arriola}. Also the prediction of a soft-wall AdS/QCD approach \cite{GLSV_13} for the mass of the lowest $f_0$ meson -- 721~MeV -- practically coincides with the value obtained in our work.\\
2) Indication for the $f_0(980)$ (the pole on sheet~II is $1008.1\pm3.1-i(32.0\pm1.5)$~MeV) to be a non-$q{\bar q}$ state, e.g., a bound $\eta\eta$ state. Note that for an earlier popular interpretation of the $f_0(980)$ as a $K\overline{K}$ molecule, it is important whether the mass of this state lies below the $K\overline{K}$ threshold or not.
In the PDG tables of 2010 its mass is 980$\pm$10~MeV. In all our combined analyses of multi-channel $\pi\pi$ scattering we found the $f_0(980)$ slightly above 1~GeV, as in the dispersion-relation analysis of $\pi\pi$ scattering alone \cite{GarciaMKPRE-11}. In the PDG tables of 2012 an important alteration appeared for the mass of the $f_0(980)$: the estimate is now 990$\pm$20~MeV.\\
3) Indication for the $f_0(1370)$ and $f_0(1710)$ to have a dominant $s{\bar s}$ component. This is in agreement with a number of experiments \cite{Amsler,Braccini,Barate}.\\
4) Indication for two states in the 1500-MeV region: the $f_0(1500)$ ($m_{res}\approx1495$~MeV, $\Gamma_{tot}\approx124$~MeV) and the $f_0^\prime(1500)$ ($m_{res}\approx1539$~MeV,
$\Gamma_{tot}\approx574$~MeV). The $f_0^\prime(1500)$ is interpreted as a glueball, taking into account that its width is the largest among the surrounding states \cite{Anis97}.
\end{itemize}
\section{Acknowledgments}
This work was supported in part by the Heisenberg-Landau Program, the Votruba-Blokhintsev Program for Cooperation of Czech Republic with JINR, the Grant Agency of the Czech Republic (grant No. P203/12/2126), the Bogoliubov-Infeld Program for Cooperation of Poland with JINR, the DFG under Contract No. LY 114/2-1, the Tomsk State University Competitiveness Improvement Program, and by the Polish National Science Center (NCN) grant DEC-2013/09/B/ST2/04382.
1410.3548
\section{INTRODUCTION}
Spontaneous collapse and consequent quantum revival~[\onlinecite{parker, Perelman}] occur in the long
term dynamics of an injected wave-packet in systems with non-equidistant energy levels, due to quantum
interference. They have been investigated in a wide class of systems [\onlinecite{Robinett}], and
wave-packet collapse, revivals, and fractional revivals have been observed experimentally in a number
of atomic and molecular systems [\onlinecite{stroud,stroud1,ewart,wals}]. However, this phenomenon,
purely quantum mechanical in origin, is relatively less explored in condensed matter systems with
discrete Landau energy levels, despite the fact that, unlike {\it zitterbewegung} in solid state
systems [\onlinecite{zbS1,spinzb1,Zawadzki-review}], these oscillations are large and slow enough
for an experimental probe.
{\it Zitterbewegung} and wave-packet dynamics in several 2D condensed matter systems with
Landau-levels have been explored
earlier [\onlinecite{green, zbgrph2, Zawadzki_PRD_2010, Singh2014, Zawadzki-review, Schliemann, tutul}],
but the phenomena of spontaneous collapse and revival have largely gone
unaddressed [\onlinecite{Romera_PRB, Kramer, Romera2}]. Motivated by the
unified description of {\it zitterbewegung} in solid state systems
in Refs.~[\onlinecite{zbgen1,zbgen2}], in this article we present an exact
and unified description of the quantum wave-packet dynamics
and the phenomena of collapse and quantum revival in various two
dimensional (2D) solid state systems. Initially when a well localized
wave packet is injected into a 2D system with
non-equidistant Landau energy levels, it undergoes cyclotron
motion and evolves quasi-classically
with periodicity $\tau_{\rm cl}$, for a number of cycles, with its
probability density spreading around the quasi-classical
trajectory. The non-equidistant nature of the discrete energy spectrum
then leads to destructive quantum interference and, consequently, to the
collapse of the wave-packet. The (almost) collapsed wave-packet regains
its initial waveform and oscillates again with the quasi-classical
periodicity on a much longer time scale known as the revival time ($\tau_{\rm rev} \gg
\tau_{\rm cl}$).
In addition, there is also the possibility of fractional revivals, which
occur at rational fractions of the revival time $\tau_{\rm rev}$,
when the initial wave-packet evolves into a collection of mini wave-packets
resembling the waveform of the injected wave-packet [\onlinecite{Robinett, Perelman}].
In this article we consider several 2D systems, which can be classified into two categories
based on the dynamics of the charge carriers: Schr\"odinger-like (non-relativistic
dynamics described by the Schr\"odinger equation) and Dirac-like
fermionic systems [\onlinecite{DM}] (relativistic dynamics described by the
Dirac equation). The so-called `Schr\"odinger-like' materials include the
2D electron/hole gas (2DEG/2DHG) trapped at the interface of III-V semiconductor
hetero-structures, like AlGaAs-GaAs; these
typically have a linear [\onlinecite{Gossard, Nitta}] or cubic spin-orbit interaction
(SOI) term [\onlinecite{hole1,Winkler,hole3,hole4, Sr1, Sr3}] in addition to the parabolic
dispersion relation. The so-called `Dirac-like' materials have a relativistic
dispersion relation;
typical examples include graphene [\onlinecite{grphn1, graphene2, graphene3}] and
other crystals like silicene
[\onlinecite{sili1,sili2, gap1, gap2, gap3}], germanene [\onlinecite{STB, germanene}], and monolayer
transition metal group-VI dichalcogenides {\mbox MX}$_2$
(\mbox{M=Mo, W} and \mbox{X=S, Se}) [\onlinecite{gap4, gap5, TMDC, TMDC2}], which
generally have a honeycomb lattice structure. Dirac materials also have suppressed
electron scattering and tunable electronic properties which make them very
interesting from an application point of view.
For the present study, we consider both classes of systems on an equal footing
and present a unified and exact description of
quantum wave packet dynamics whose long term behavior displays the universal
phenomena of spontaneous collapse and
revival. For this purpose we choose the initial localized wave-packet to be a
coherent state, which is also a minimum uncertainty wave packet, whose
cyclotron dynamics resembles the dynamics of a classical charged particle
in a perpendicular magnetic field [\onlinecite{green}].
Our article is organized as follows:
In Sec. \ref{Sec2} we study the wave-packet dynamics in an exact
and unified manner for various 2D systems
with Landau levels. In addition we motivate and discuss the timescales
associated with the phenomena of
spontaneous collapse and revival in systems with discrete and non-equidistant
energy levels. In Sec. \ref{Sec3}, we discuss the collapse
and revival phenomenon in Schr\"odinger-like materials with parabolic
energy spectrum and $k$-linear and $k$-cubic Rashba SOI. In Sec. \ref{Sec4} we discuss
wave-packet dynamics in Dirac-like materials with a relativistic dispersion, and finally, in Sec. \ref{Sec5} we
summarize our results.
\section{Unified description of wave-packet dynamics in various 2D systems}
\label{Sec2}
In this section we present an exact and unified formalism describing the
temporal evolution of a wave-packet in various 2D systems in the presence of
a transverse magnetic field. In particular, we focus on both Schr\"odinger-like systems
as well as Dirac-like materials, whose low energy properties are described by a
two band model. In Sec.~\ref{exact} we calculate the exact expectation value of the
position and velocity operator (or alternatively electric current), for an injected
coherent state minimum uncertainty wave-packet in generic 2D systems. Next in Sec.~\ref{TS},
we briefly review the phenomenon of wave-packet revival in various 2D systems with Landau level spectrum.
\subsection{Exact quantum evolution of a wave packet in various 2D systems}
\label{exact}
We begin by presenting an exact unified description of the temporal evolution of the
center of an injected wave-packet in various two dimensional systems with non-equidistant Landau energy-levels.
The Landau level eigen-spectrum describing different 2D systems can be
written in the following generic form:
\begin{eqnarray}\label{eign_spect}
\varepsilon_n = \hbar\omega_c\Big[f(n) + \lambda\sqrt{c^2 + g(n)}\Big],
\end{eqnarray}
where $\lambda=\pm 1$ denotes two chiral energy branches, $f(n)$ and $g(n)$ are system
dependent functions of the Landau level indexed by $n$, and $c$ is a system dependent constant term.
Note that $f(n)$ is generally a linear function of $n$ in Schr\"odinger-like systems
arising from the parabolic part of the dispersion relation, and it is generally
absent in Dirac-like systems. The functions $f(n)$, $g(n)$ and $c$ for various
systems are tabulated in Table \ref{T11a}. The magnetic length, for both classes
of systems, is given by $l_c=\sqrt{\hbar/(eB)}$. The cyclotron frequency for Schr\"odinger-like systems is given by
$\omega_c=eB/m^\ast$ with $B$ being the applied transverse magnetic field
and $m^\ast$ is the effective mass. For Dirac-like systems $\omega_c=\sqrt{2}v_F/l_c$ where $v_F$ is the Fermi velocity.
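As a numerical illustration of Eq.~(\ref{eign_spect}), the sketch below evaluates the spectrum for the massive-Dirac entries of Table~\ref{T11a}, i.e. $f(n)=0$, $g(n)=n$ and $c=\Delta/(\hbar\omega_c)$; the parameter value $\Delta=0.5\,\hbar\omega_c$ is purely illustrative. The shrinking level spacing with increasing $n$ is the non-equidistance that drives collapse and revival.

```python
import numpy as np

# Generic Landau spectrum, Eq. (1): eps_n = hbar*w_c * [f(n) + lam*sqrt(c^2 + g(n))].
# Massive-Dirac entries of Table I: f(n) = 0, g(n) = n, c = Delta/(hbar*w_c).
# Energies are measured in units of hbar*w_c; Delta = 0.5 is illustrative.
def landau_energy(n, lam, f, g, c):
    return f(n) + lam * np.sqrt(c**2 + g(n))

delta = 0.5
f = lambda n: 0.0 * n
g = lambda n: 1.0 * n
n = np.arange(1, 50)

eps_plus = landau_energy(n, +1, f, g, delta)   # lam = +1 branch
eps_minus = landau_energy(n, -1, f, g, delta)  # lam = -1 branch
spacing = np.diff(eps_plus)  # shrinks with n: the spectrum is non-equidistant
```

The same two-line change of `f`, `g` and `c` reproduces the other rows of Table~\ref{T11a}.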
To specify the eigen-vectors, we assume the 2D system to lie in the $x-y$ plane
and work in the Landau gauge, {\it i.e.~}the vector potential is
specified by ${\bf A} = (-By, 0, 0)$, where $B$ is the strength of
the magnetic field in the $z$-direction. The eigen-vectors corresponding
to the eigen-energies given by Eq.~(\ref{eign_spect}) with different chiralities are now given by,
\begin{equation} \label{wavfnP}
\psi_{n,q_x}^+(x,y) = \frac{e^{iq_xx}}{\sqrt{2\pi a_n}}\begin{pmatrix}
-z_n\phi_{n-m}(y-y_c)\\ \phi_n(y-y_c)\end{pmatrix}~,
\end{equation}
and
\begin{equation} \label{wavfnM}
\psi_{n,q_x}^-(x,y) = \frac{e^{iq_xx}}{\sqrt{2\pi a_n}}\begin{pmatrix} \phi_{n-m}(y-y_c)
\\z_n^{*}\phi_n(y-y_c)\end{pmatrix}~,
\end{equation}
where $m$ is an integer which depends on the related system, $|z_n|=\sqrt{g(n)}/(c+\sqrt{c^2 + g(n)})$,
$a_n=1+|z_n|^2$ and
$\phi_n(y-y_c) = N_ne^{-(y-y_c)^2/2l_c^2}H_n((y-y_c)/l_c)$ is the harmonic oscillator
wave function. Other constants are given
by $N_n=\sqrt{1/(\sqrt{\pi}2^nn!l_c)}$, $y_c=q_xl_c^2$ and $H_n(x)$ denotes
the Hermite polynomial of order $n$, and
$z_n = |z_n|$ for Dirac-like materials and $z_n = i |z_n|$
for Schr\"odinger-like systems (within the chosen gauge).
Note that the eigen-system described in Eqs.~(\ref{eign_spect})-(\ref{wavfnM}) is
applicable only for $n\ge m$. For $n<m$ there are $m$ Landau levels of `$+$' chirality with eigen-energy
$\epsilon_n^\prime=[f(n)-c]\hbar\omega_c$, and the corresponding two component eigen-vector is given by
\begin{equation}
\psi_n({\bf r})=\frac{e^{iq_x x}}{\sqrt{2\pi}}
\begin{pmatrix} 0
\\ \phi_n(y-y_c)\end{pmatrix}~.
\end{equation}
\begin{table}[t]
\begin{center}
\caption{Landau levels of various 2D systems are given by Eq.~\eqref{eign_spect}, with the following functions and constants: \label{T11a}}
\begin{tabular}{l c c c} \hline \hline
System (Dispersion)& $f(n)$ & $c$ & $g(n)$ \\ \hline
2DEG with linear Rashba & $n$ & $\frac{g^\ast m^\ast}{4m_e}-\frac{1}{2}$ & $\frac{2\alpha_1^2 n}{\hbar^2\omega_c^2 l_c^2}$ \\
2DHG with cubic Rashba & $n-1$& $ \frac{3g^\ast m^\ast}{4m_e}-\frac{3}{2}$ &
$ \frac{8 \alpha_3^2 n (n-1)(n-2)}{l_c^6\hbar^2\omega_c^2} $ \\
Massive Dirac spectrum &$0$ &$\Delta/(\hbar \omega_c)$ & $n$
\\ \hline
\end{tabular}
\end{center}
\end{table}
Having described the generic form of the Landau-levels and the associated eigen-vectors
in various two-band 2D systems, we now turn our attention to the dynamics of an
injected wave-packet. For this purpose, we choose an initial wave-packet to be a
coherent state in a magnetic field, {\it i.e.}, a Gaussian wave packet of the following form,
\begin{equation}\label{ini_mag}
\Psi({\bf r},0)=\exp\Big({-\frac{r^2}{2l_c^2}+i q_{0}x} \Big)\frac{1}{\sqrt{\pi}l_c \sqrt{|c_1|^2 + |c_2|^2}}
\begin{pmatrix}
c_1\\c_2
\end{pmatrix},
\end{equation}
where $\hbar q_{0}$ is the initial momentum along the $x$ direction and the width of the
Gaussian wave packet is considered to be equal to the magnetic length $l_c$, and the
coefficients $c_1$ and $c_2$ determine the initial spin/pseudospin polarization of the injected wave-packet.
The idea here is to choose a minimum uncertainty wave packet, whose cyclotron dynamics
should resemble the dynamics of a classical particle in a perpendicular magnetic field.
In addition such a wave-packet, when expressed in terms of Landau level eigen-states,
is peaked around the Landau-level $n_0 \approx l_c^2 q_0^2/2$ (see for example,
the Appendix A of Ref.~[\onlinecite{Kramer}]), and has a spread given by $\delta n = \sqrt{n_0}$.
We note here that such a simplistic choice of the initially injected wave-packet
in Eq.~\eqref{ini_mag}, is widely used in the literature [\onlinecite{green}], is amenable to
analytical treatment, and gives valuable insight into the relevant timescales of the problem.
Another realistic experimental possibility is to create wave-packets
by illuminating samples with short laser pulses [\onlinecite{laser1}].
The spinor wave packet at a later time $t$ can be written as
\begin{equation} \label{wavp_t}
\Psi_{\mu} = \int G_{\mu\nu}({\bf{r}},{\bf{r'}},t)\Psi_{\nu}({\bf{r'}},0)d{\bf{r'}}~,
\end{equation}
where $G({\bf r},{\bf r^\prime},t)$ is the $2\times2$ Green's function matrix.
The matrix elements of the Green's functions are defined as [\onlinecite{green}]
\begin{equation}
G_{\mu\,\nu}({\bf{r}},{\bf{r^\prime}},t) = \sum_{n,\lambda}
\int d{q_x}\psi_{n,q_x,\mu}^{\lambda}({\bf{r}},t)
{\psi_{n,q_x,\nu}^{\lambda^\ast}}({\bf{r^\prime}},0),
\end{equation}
where $\psi_{n,q_x}^\lambda({\bf r}, 0)$ are the two-component spinor
eigen-functions at $t=0$, given by Eqs.~(\ref{wavfnP})-(\ref{wavfnM}).
At finite time, $\psi_{n,q_x}^\lambda({\bf r},t)=\psi_{n,q_x}^\lambda({\bf r},0)e^{-i\epsilon_n^\lambda t/\hbar}$,
with $\epsilon_n^\lambda$ being the Landau-level energy eigen-value given in Eq. (\ref{eign_spect}).
Slightly lengthy but straightforward algebra gives the components of the Green's function matrix in the following form,
\begin{equation} \label{Grn}
G_{\mu\nu}({\bf r},{\bf r}^\prime,t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}dq_xe^{iq_x(x-x^\prime)} \eta_{\mu \nu}(q_x)~,
\end{equation}
where $\eta_{\mu \nu}$ is given by
\begin{eqnarray}
\eta_{11} &=& \sum_{n=0}^{\infty}P_{n+m}\phi_n(y-y_c)\phi_n(y^\prime-y_c)~, \\
\eta_{21} & = & \sum_{n=0}^{\infty}Q_{n+m}\phi_{n+m}(y-y_c)\phi_n(y^\prime-y_c)~, \\
\eta _{12} & = & \sum \limits_{n=0}^{\infty} R_{n+m}\phi_n(y-y_c)\phi_{n+m}(y'-y_c)~, \\
\eta_{22} & = & \sum\limits_{n=0}^{\infty} S_n\phi_n(y-y_c)\phi_{n}(y'-y_c)~,
\end{eqnarray}
and we have defined $\gamma_n \equiv \omega_c \sqrt{c^2 + g(n)}$, along with
\begin{eqnarray}
P_n &=& e^{-if(n)\omega_c t}\left[e^{- i \gamma_n t} + 2 i \sin({\gamma_nt})/a_n\right]~, \\
Q_n &=& 2iz_n^{*}e^{-if(n)\omega_c t}\sin({\gamma_nt})/a_n~, \\
R_n &= & 2 i z_n e^{-i f(n) \omega_c t}\sin{(\gamma_nt)}/a_n~, \\
S_n & =& \begin{cases} e^{-i\varepsilon'_n t/\hbar} & \mbox{for } n < m \nonumber \\
e^{-if(n)\omega_c t}\left[e^{ i \gamma_n t} - 2 i \sin({\gamma_nt})/a_n\right] & \mbox{for } n \ge m.
\end{cases} \label{S_n} \nonumber \\
\end{eqnarray}
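As a consistency check, for $n\ge m$ the coefficients above satisfy $|P_n(t)|^2+|Q_n(t)|^2=1$ at all times, as required by probability conservation for the initial spinor $(c_1,c_2)=(1,0)$; this follows from $|z_n|^2=a_n-1$. A small numerical sketch (the values chosen for $f(n)\omega_c$, $\gamma_n$ and $z_n$ are illustrative, not tied to a specific system):

```python
import numpy as np

# Unitarity sketch: |P_n|^2 + |Q_n|^2 = 1 at all times for the initial spinor
# (c1, c2) = (1, 0). Illustrative f(n)*w_c, gamma_n and z_n, consistent with
# |z_n|^2 = a_n - 1 by construction.
rng = np.random.default_rng(0)
f_n_wc = 7.0                         # arbitrary; enters only as a global phase
zn = 1j * rng.uniform(0.1, 2.0)      # Schrodinger-like convention z_n = i|z_n|
a_n = 1.0 + abs(zn)**2
gamma = 0.9

def P(t):
    return np.exp(-1j * f_n_wc * t) * (np.exp(-1j * gamma * t)
                                       + 2j * np.sin(gamma * t) / a_n)

def Q(t):
    return 2j * np.conj(zn) * np.exp(-1j * f_n_wc * t) * np.sin(gamma * t) / a_n

t = np.linspace(0.0, 20.0, 101)
norm = np.abs(P(t))**2 + np.abs(Q(t))**2   # should equal 1 for every t
```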
Substituting Eqs.~(\ref{Grn})-(\ref{S_n}) in Eq.~(\ref{wavp_t}), we obtain the time evolved
two component injected wave packet at a later time $t$,
\begin{eqnarray}\label{wav_t}
\begin{pmatrix}
\Psi_1({\bf r},t)\\ \Psi_2({\bf r},t)
\end{pmatrix}
&=&\frac{1}{\sqrt{2}\pi l_c} \int du\, e^{F(x,u)} \sum_{n=0}^\infty (-u)^n \frac{1}{\sqrt{|c_1|^2 + |c_2|^2}}\\
&\times& \begin{pmatrix}
\left(\frac{c_1 P_{n+m}}{2^nn!N_n} + \frac{c_2 (-u)^mR_{n+m}}{2^{n+m}(n+m)!N_{n+m}}\right) \phi_n(y-y_c)\\
\frac{c_1 Q_{n+m} \phi_{n+m}(y-y_c) + c_2 S_n \phi_{n}(y-y_c)}{2^nn!N_n}
\end{pmatrix}, \nonumber
\end{eqnarray}
where $F(x,u)=iux/l_c-(l_c q_{0}-u)^2/2-u^2/4$ with
$u=q_x l_c$ and $\hbar q_{0}$ is the momentum of the injected wave-packet.
We emphasize that Eq.~(\ref{wav_t}) gives the exact temporal evolution of the
coherent state injected wave function with arbitrary spin/pseudo-spin polarization,
for a wide-class of 2D materials.
In the rest of the paper we will focus on the specific case when the lower component
of initial wave-packet is equal to zero, {\it i.e.}, the parameters $c_1 = 1$ and $c_2 = 0$ in Eq.~\eqref{ini_mag}.
Now, the expectation value of the position operator $\hat{\bf r}$ (or any other operator), at time $t$,
is simply given by
\begin{equation} \label{pos_def}
\langle {\bf r}(t)\rangle=\sum_{i=1}^2 \int \Psi_i^\ast({\bf r},t) \hat{\bf r}
\Psi_i({\bf r},t) d{\bf r}~.
\end{equation}
A tedious but straightforward calculation, using Eq.~(\ref{wav_t}) in Eq.~(\ref{pos_def}),
gives the exact time-dependent expectation values of the position $(x,y)$ of the centre of the injected wave-packet:
\begin{eqnarray}\label{expX_gn}
\langle x(t)\rangle&=&l_c\sum_{n=0}^\infty\xi_n\Bigg{(}\Im\left[P_{n+m+1}(t)P_{n+m}^\ast(t)\right] \\
&+&\Im\left[ Q_{n+m+1}(t)Q_{n+m}^\ast(t) \right]\sqrt{\frac{n+m+1}{n+1}}\Bigg{)}~, \nonumber
\end{eqnarray}
and
\begin{eqnarray}\label{expY_gn}
\langle y(t)\rangle&=&l_c\sum_{n=0}^\infty\xi_n\Bigg{(}\Re[P_{n+m+1}(t)P_{n+m}^\ast(t)]
\\
&+&\Re[Q_{n+m+1}(t)Q_{n+m}^\ast(t)]
\sqrt{\frac{n+m+1}{n+1}}-1\Bigg{)}~,\nonumber
\end{eqnarray}
where we have defined
\begin{equation} \label{xi_n}
\xi_n \equiv \frac{i}{3}e^{-\frac{\tilde{q}_0^2}{3}}
\Big(\frac{-1}{12}\Big)^n\frac{1}{n!}H_{2n+1}\Big(i\sqrt{\frac{2}{3}}\tilde{q}_0\Big) ~,
\end{equation}
with $\tilde{q}_0 = q_0 l_c$. Note that $\xi_n$ is always real.
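This reality can be checked directly: $H_{2n+1}$ is an odd polynomial with real coefficients, so $H_{2n+1}(iy)$ is purely imaginary and the prefactor $i$ makes the product real. A numerical sketch (the value $\tilde{q}_0=10$ is illustrative, matching the choice used in Fig.~1):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

# Numerical check that xi_n is real. The Hermite polynomial H_{2n+1} is
# selected by a coefficient array with a single 1 in the last slot.
# q0_tilde = q0 * l_c = 10 is an illustrative value.
q0t = 10.0

def xi(n):
    coeffs = np.zeros(2 * n + 2)
    coeffs[-1] = 1.0                                    # picks out H_{2n+1}
    h = hermval(1j * np.sqrt(2.0 / 3.0) * q0t, coeffs)  # H_{2n+1}(i*sqrt(2/3)*q0t)
    return (1j / 3.0) * np.exp(-q0t**2 / 3.0) * (-1.0 / 12.0)**n / factorial(n) * h

vals = np.array([xi(n) for n in range(20)])  # imaginary parts should vanish
```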
The temporal evolution of an incident wave-packet which is centered around
some high energy level, in systems with discrete but non equidistant energy
levels, typically displays the phenomena of spontaneous collapse and revival [\onlinecite{Robinett}]. However, it is
generally difficult to infer the relevant timescales associated with these
phenomena from the exact expressions for the position operators
given in Eqs.~(\ref{expX_gn})-(\ref{expY_gn}). Thus we briefly discuss the
phenomena of spontaneous collapse and revival, based on general arguments in the next subsection.
Later, in Sec.~\ref{Sec3} and Sec.~\ref{Sec4}, we will show the emergence
of these timescales from the exact solution in various 2D systems.
\subsection{Oscillation, cyclotron and revival timescales}
\label{TS}
Discretized Landau energy-levels are formed whenever a 2D electronic system
is subjected to a strong perpendicular magnetic field. The spatio-temporal evolution of wave-packets in such quantum
systems with a discrete but non-equidistant
energy spectrum is generally quite complex and exhibits both classical and
quantum behavior. The quantum behavior is manifested in the form of
spontaneous collapse and long-term quantum revival of the wave-packet,
arising due to quantum interference. However, well defined periodicities for
quasi-classical behavior, spontaneous collapse, and revival
emerge [\onlinecite{Nauenberg, Robinett, Romera_PRB, Demikhovskii_PRA}],
if the initial wave packet has a substantial overlap with some large Landau level denoted by $n_0$.
Various timescales during the wave packet dynamics in a discrete system with
non-equidistant energy spectrum, can be inferred from the analytic form of
the autocorrelation function [\onlinecite{Nauenberg}] of the wave packet, which is defined
as $A(t) = \langle \Phi({\bf{r}},t) |\Phi({\bf{r}},0) \rangle$. Expanding $\Phi({\bm r}, t)$ in terms of the orthonormal
eigenstates of the system under consideration, $\{\phi_n\}$, we get
$ \Phi({\bf{r}},t) = \sum_n c_n \phi_n({\bf r}) e^{- i \epsilon_n t/\hbar}$, where $\epsilon_n$ are the discrete
energy eigenvalues of the system and $c_n = \langle \phi_n ({\bf{r}})| \Phi({\bf{r}}, 0)\rangle$. The autocorrelation
function is now given by
\begin{equation}
A(t) = \sum_n |c_n|^2 e^{i \epsilon_n t/\hbar}~.
\end{equation}
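A minimal numerical sketch of this autocorrelation function (an illustrative Dirac-like spectrum $\epsilon_n\propto\sqrt{n}$ and a Gaussian weight of width $\delta n=\sqrt{n_0}$ around $n_0=50$; all values are assumptions for illustration only):

```python
import numpy as np

# Sketch of the autocorrelation A(t) = sum_n |c_n|^2 exp(i*eps_n*t/hbar),
# with Gaussian weights |c_n|^2 centered at n0 = 50 (spread delta_n = sqrt(n0))
# and an illustrative Dirac-like spectrum eps_n = sqrt(n), in units of hbar*w_c.
n = np.arange(0, 400)
n0 = 50
dn = np.sqrt(n0)
w = np.exp(-((n - n0)**2) / (2 * dn**2))
w /= w.sum()                      # normalized |c_n|^2
eps = np.sqrt(n)

def autocorrelation(t):
    return np.sum(w * np.exp(1j * eps * t))
```

Since $\sum_n |c_n|^2 = 1$, one has $|A(0)|=1$ and $|A(t)|\le 1$ at all later times.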
For studying the dynamics of a localized injected wave packets, which can be
expressed as a superposition of the eigen-states of the system centered around
some large Landau-level $n=n_0$, such that $n_0 \gg \delta n \gg 1$,
we can assume the form of the expansion coefficients to a Gaussian centered around $n_0$ with spread of $\delta n$.
Doing a Taylor series expansion of the
energy, $\epsilon_n = \epsilon_{n_0} + (n-n_0) \epsilon_{n_0}' + (n-n_0)^2 \epsilon_{n_0}''/2 + \dots $,
where $\epsilon_n' = (d\epsilon_n/dn)_{n=n_0} $ and so forth, the autocorrelation function can be rewritten as,
\begin{equation} \label{eq:auto}
A(t) = \sum_{n = - \infty}^{\infty} |c_n|^2 e^{i t/\hbar \left( \epsilon_{n_0} + (n-n_0) \epsilon_{n_0}'
+ (n-n_0)^2 \epsilon_{n_0}''/2 + \dots \right) }~.
\end{equation}
The coefficients of the Taylor expansion of the energy $\epsilon_n$ in the
exponential of Eq.~\eqref{eq:auto} define the characteristic timescales
\begin{equation} \label{timescale1}
\tau_{\rm osc} = \frac{2 \pi \hbar}{\epsilon_{n_0}} ~,~~~ \tau_{\rm cl} = \frac{2 \pi \hbar}{|\epsilon_{n_0}'|}~,
~{\rm and}~~~ \tau_{\rm rev} = \frac{4 \pi \hbar}{|\epsilon_{n_0}''|}~.
\end{equation}
The timescale $\tau_{\rm osc}$ is an intrinsic quantum oscillation time scale,
which does not lead to any quantum interference between the wave packet components
of different Landau-levels and thus does not affect the long term
dynamics of the system. At the `classical' cyclotron time-scale $\tau_{\rm cl}$,
the wave-packet evolves quasi-classically: the center of the wave-packet completes
one cyclotron orbit and returns to its initial
position, and the autocorrelation function approximately reaches its initial value.
At larger timescales, quantum interference between the wave function components of
different Landau levels in the incident wave-packet leads to spontaneous collapse
of the wave function and then to quantum revival.
The quantum revival of the wave packet, over the timescale $\tau_{\rm rev}$, occurs due
to constructive interference, when the terms proportional to the second derivative of
the energy are integer multiples of $2 \pi$. This revival hierarchy persists even in the higher order terms.
In addition, the time at which the spreading of the wave packet leads to quantum
self-interference, and hence to spontaneous collapse, is
given by $\tau_{\rm coll} = \tau_{\rm rev}/({\delta n})^2$. See Ref.~[\onlinecite{Robinett}] for a detailed review.
For this article, where the generic Landau-level spectrum for various 2D systems
is given by Eq.~\eqref{eign_spect}, the corresponding derivatives of the energy
with respect to $n$, which appear in Eq.~(\ref{timescale1}), are explicitly given by
\begin{equation}
\frac{\varepsilon_n'}{\hbar\omega_c} = f'(n) +\lambda \frac{g'(n)}{2 \sqrt{c^2 + g(n)}}~,
\end{equation}
and
\begin{equation}
\frac{\varepsilon_n''}{\hbar\omega_c} = f''(n)+
\frac{\lambda}{2 \sqrt{c^2 + g(n)}}\Big{[}g''(n)-\frac{g'(n)^2}{2(c^2 + g(n))}\Big{]},
\end{equation}
where $f(n)$ is typically a linear function of $n$, arising from the parabolic
part of the dispersion relation, which exists only for Schr\"odinger-like systems,
and thus $f'(n)$ is a constant and $f''(n) = 0$.
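These derivative formulas can be cross-checked numerically against finite differences of the generic spectrum of Eq.~(\ref{eign_spect}), treating $n$ as a continuous variable. The sketch below uses linear-Rashba-like choices $f(n)=n$, $g(n)=an$, with illustrative values of $a$ and $c$ (not material parameters):

```python
import numpy as np

# Cross-check of the analytic eps'(n) and eps''(n) (in units of hbar*w_c)
# against central finite differences of Eq. (1), with n treated as continuous.
# f(n) = n and g(n) = a*n mimic the linear-Rashba row of Table I; a and c
# are illustrative numbers.
a, c, lam = 0.8, -0.3, +1

def eps(n):
    return n + lam * np.sqrt(c**2 + a * n)

def eps_prime(n):          # f'(n) + lam*g'(n) / (2*sqrt(c^2 + g(n)))
    return 1.0 + lam * a / (2.0 * np.sqrt(c**2 + a * n))

def eps_double_prime(n):   # lam/(2*sqrt(..)) * [g'' - g'^2/(2*(c^2+g))], g'' = 0
    s = c**2 + a * n
    return -lam * a**2 / (4.0 * s**1.5)

n0, h = 50.0, 1e-3
num1 = (eps(n0 + h) - eps(n0 - h)) / (2 * h)              # ~ eps'(n0)
num2 = (eps(n0 + h) - 2 * eps(n0) + eps(n0 - h)) / h**2   # ~ eps''(n0)
```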
The classical, spontaneous collapse, and revival timescales for the various 2D systems
studied in this paper, along with the relevant material parameters,
are tabulated in Table~\ref{T2} and Table~\ref{T3}. We now proceed to
study the phenomena of revival and collapse in various systems, like the $k$-linear Rashba 2DEG and the $k$-cubic Rashba 2DHG.
\section{Schr\"odinger-like systems}
\label{Sec3}
In this section we discuss spontaneous
decay and long-term quantum revival of a quantum wave packet
in two-dimensional Schr\"odinger-like fermionic systems
described by Rashba spin-orbit interaction (SOI).
Let us first consider the case of a 2DEG with $k$-linear Rashba SOI.
\subsection {2DEG with $k$-linear Rashba SOI}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0 \linewidth]{fig1.pdf}
\end{center}
\caption{Panels a), b), and c) show the position of the center of the wave
packet vs time, while panels d), e) and f) display the velocity expectation
value, over different timescales, {\it i.e.~}$\tau_{\rm cl}$, $\tau_{\rm coll}$ and $\tau_{\rm rev}$ respectively.
Note that in panels a), b) and c) $\langle x(t)\rangle$
is centered around 0, whereas $\langle y(t)\rangle$ is centered around $y_c=q_0l_c^2 = 10$.
Here we have chosen $B=2$T and other material parameters for AlGaAs/GaAs quantum well:
$m^\ast=0.067m_e$, $g^\ast=-0.44$, $\alpha_1=10^{-13}$eV-m.}
\label{fig1}
\end{figure}
We now examine the wave-packet dynamics in a 2DEG formed at the interface of
an {\it inversion asymmetric} III-V semiconductor quantum well, subjected to a Zeeman field
perpendicular to the interface. The inversion asymmetry of the quantum well
gives rise to Rashba SOI [\onlinecite{rashba}], which has a linear dependence on momentum.
To obtain the Landau level spectrum of a Rashba 2DEG in a transverse magnetic field, we work in the
Landau gauge ${\bf A}=(-By,0,0)$ for the vector potential. Making the Landau-Peierls substitution
${\bf p} \to {\bf \Pi} = {\bf p}+e{\bf A}/\hbar$, the Hamiltonian [\onlinecite{ras_mg}] describing Rashba 2DEG is given by
\begin{equation} \label{Ham_lin}
H=\frac{{\bf \Pi}^2}{2m^\ast}+\frac{\alpha_1}{\hbar}
\Big(\sigma_{x}\Pi_{y}-\sigma_{y}\Pi_{x}\Big)
+\frac{1}{2}g^\ast \mu_B \sigma_z B~,
\end{equation}
where $m^\ast$ is the effective mass of the electron,
$\alpha_1$ is the Rashba SOI coupling coefficient, $g^\ast$ is the effective
Land\'e $g$-factor, $\mu_B$ is the Bohr magneton and $\sigma_i$ denote the
Pauli spin matrices.
The Landau-level eigen-spectrum corresponding to Eq.~(\ref{Ham_lin}) is
given [\onlinecite{ras_mg}] by the generic form considered in Eq.~(\ref{eign_spect})
with the following substitutions:
\begin{equation}
f(n) = n~,~~ c = \frac{g^\ast m^\ast}{4m_e}-\frac{1}{2}~, ~~{\rm and}~~ g(n) = \frac{2\alpha_1^2 n}{\hbar^2\omega_c^2 l_c^2}~,
\end{equation}
where $m_e$ is the free electron mass.
The corresponding eigen-vectors
for the two spin-split branches ($\lambda = \pm 1$) are now simply obtained by putting
$m=1$ in Eqs.~(\ref{wavfnP})-(\ref{wavfnM}).
Now the temporal evolution of the expectation value of the coordinates $(x,y)$ of
the centre of the injected Gaussian wave-packet are given by
Eq.~(\ref{expX_gn}) and Eq.~(\ref{expY_gn}) with $m \to 1$. As a check of
our calculation, we note that the expectation values of the position operator
are consistent with that derived in Eqs.~(36a)-(36b) of Ref.~[\onlinecite{green}].
Let us now calculate the time-dependent expectation values of the velocity operator, which also gives the charge current.
Using the Heisenberg equation of motion for the position operator,
$\hat{\bf v}= -i \hbar^{-1} [\hat{\bf r}, H]$, the components of the
velocity operator are given by
\begin{equation}
\hat{v}_x=\frac{\hat{\Pi}_x}{m^\ast}-\frac{\alpha_1}{\hbar}\sigma_y~,
~~\quad ~{\rm and~} \quad \hat{v}_y=\frac{\hat{\Pi}_y}{m^\ast}+\frac{\alpha_1}{\hbar}\sigma_x~.
\end{equation}
Following the same procedure as described in Sec. \ref{exact},
we finally obtain the following expressions for the
components of the velocity expectation values,
\begin{equation}
\begin{pmatrix} \langle v_x(t)\rangle \\ \langle v_y(t)\rangle \end{pmatrix}
= \frac{\hbar}{l_c m^\ast}
\sum_{n=0}^\infty \xi_n \begin{pmatrix} \Re v_n \\ \Im v_n \end{pmatrix}~,
\end{equation}
where we have defined
\begin{equation}
v_n \equiv P_{n+2}^\ast P_{n+1} + Q_{n+2}^\ast Q_{n+1} \sqrt{\frac{n+2}{n+1}}
+ \frac{i\tilde{\alpha}_1}{\sqrt{n+1}} P_{n+2}^\ast Q_{n+1}~,
\end{equation}
using the dimensionless SOI strength: $\tilde{\alpha}_1 = \sqrt{2} \alpha_1/(l_c \hbar \omega_c)$
and $\xi_n$ is defined in Eq.~(\ref{xi_n}).
To discuss wave-packet dynamics which shows the phenomena of spontaneous collapse and revival,
we compute the relevant time scales described in Sec.~\ref{TS} for 2DEG with $k$-linear Rashba SOI.
The oscillation, classical and quantum revival timescales for this system are respectively given by
\begin{equation}
\tau_{\rm osc}^\lambda=\tau_c\frac{1}{\lvert n_0+\lambda \sqrt{c^2+\tilde{\alpha}_1^2 n_0} \rvert}~,
\end{equation}
\begin{eqnarray}
\tau_{\rm cl}^\lambda=\tau_c\frac{1}{\lvert 1+\lambda\frac{\tilde{\alpha}_1^2}{2\sqrt{c^2+\tilde{\alpha}_1^2 n_0}}\rvert},
\end{eqnarray}
and finally,
\begin{eqnarray}
\tau_{\rm rev}=\tau_c
\frac{8[{c^2+\tilde{\alpha}_1^2 n_0}]^{3/2}}{\tilde{\alpha}_1^4},
\end{eqnarray}
where $\tau_c=2\pi/\omega_c$.
Note that there are two contributions in $\tau_{\rm osc}$ and $\tau_{\rm cl}$ coming from the
upper and lower branches of the energy spectrum. In the rest of the paper, we will be using the average timescales
defined as $\tau_{\rm osc}=(\tau_{\rm osc}^++\tau_{\rm osc}^-)/2$ and $\tau_{\rm cl}=(\tau_{\rm cl}^++\tau_{\rm cl}^-)/2$.
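As a numerical sketch, the three timescales above can be evaluated with the AlGaAs/GaAs parameters quoted in the text ($m^\ast=0.067m_e$, $g^\ast=-0.44$, $\alpha_1=10^{-13}$~eV-m, $B=2$~T, $n_0=50$); the resulting hierarchy $\tau_{\rm osc}\ll\tau_{\rm cl}\ll\tau_{\rm coll}\ll\tau_{\rm rev}$ is consistent with the orders of magnitude listed in Table~\ref{T2}.

```python
import numpy as np

# Timescales for the k-linear Rashba 2DEG, using the AlGaAs/GaAs numbers
# quoted in the text (m* = 0.067 m_e, g* = -0.44, alpha_1 = 1e-13 eV m,
# B = 2 T, n0 = 50). CODATA values for e, hbar, m_e; SI units throughout.
e, hbar, m_e = 1.602176634e-19, 1.054571817e-34, 9.1093837015e-31
mstar, gstar, B, n0 = 0.067 * m_e, -0.44, 2.0, 50
alpha1 = 1e-13 * e                               # eV m -> J m

w_c = e * B / mstar                              # cyclotron frequency
l_c = np.sqrt(hbar / (e * B))                    # magnetic length
tau_c = 2 * np.pi / w_c
a_t = np.sqrt(2) * alpha1 / (l_c * hbar * w_c)   # dimensionless alpha_1-tilde
c = gstar * mstar / (4 * m_e) - 0.5
root = np.sqrt(c**2 + a_t**2 * n0)

# branch-averaged tau_osc and tau_cl, plus tau_rev and tau_coll
tau_osc = 0.5 * sum(tau_c / abs(n0 + lam * root) for lam in (+1, -1))
tau_cl = 0.5 * sum(tau_c / abs(1 + lam * a_t**2 / (2 * root)) for lam in (+1, -1))
tau_rev = tau_c * 8 * root**3 / a_t**4
tau_coll = tau_rev / n0                          # tau_rev/(delta n)^2, delta n = sqrt(n0)
```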
Experimentally, $k$-linear Rashba SOI is present in an AlGaAs/GaAs
quantum well [\onlinecite{Gossard}] or in an InGaAs/InAlAs quantum well [\onlinecite{Nitta}]
among other materials.
Various parameters and the corresponding timescales for these are given in Table~\ref{T2}.
We study the dynamics of the center of the wave-packet and its velocity
(or equivalently current), in Fig.~\ref{fig1}, using the material parameters of an AlGaAs/GaAs
quantum well. Both the position and velocity of an initial minimum-uncertainty wave-packet display
the phenomena of spontaneous collapse and consequent full quantum revival at long times.
\subsection {2D systems with $k$-cubic Rashba SOI}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0 \linewidth]{fig2.pdf}
\end{center}
\caption{As in Fig.~\ref{fig1}, panels a), b), and c) show the position, and panels d), e), and f)
show the velocity expectation vs time, for different timescales.
Here we have chosen $B=2$T and other material parameters for 2D hole gas formed at $p$-type
GaAs/AlGaAs heterostructure which are:
$m^\ast=0.45m_e$, $g^\ast=7.2$, $\alpha_3=10^{-29}$eV-m$^3$.}
\label{fig2}
\end{figure}
\begin{table*}[t]
\begin{center}
\caption{The parameters and associated timescales for various
2D Schr\"odinger-like systems. Here $n_0=50$ and $B=2$T.
\label{T2}}
\begin{tabular}{ l c c c c c c r} \hline \hline
System & material & $m^\ast/m_e$ & $g^\ast$ & SOI: $\alpha_1$(eV-m) & $\tau_{\rm osc}$ & $\tau_{\rm cl}$ & $\tau_{\rm rev}$\\
\ &(quantum well)& & & or \ $\alpha_3$ (eV-m$^3$) & (fs)& (ps) & (ns) \\ \hline
\vspace{0.1cm}
Linear Rashba SOI & AlGaAs/GaAs \cite{Gossard} & 0.067 & -0.44 &
$\alpha_1 = 10^{-13}$& 23.94 & 1.19 & 48.3$\times10^6$\\
& InGaAs/InAlAs \cite{Nitta} & 0.052 & 4.0 & $\alpha_1 =10^{-11}$ &18.59 & 0.93 & 18.04 \\
\vspace{0.1cm}
Cubic Rashba SOI & GaAs/AlGaAs (2DHG)\cite{hole1, Winkler, hole5} & 0.45 & 7.2&
$\alpha_3 = 10^{-29}$& 165 & 8.11 & 13.70\\
& SrTiO$_3$ (2DEG)\cite{Sr1,Sr3} & 1.45 & 2.0 & $\alpha_3 = 10^{-30}$ & 529 & 25.92 & 102.02 \\
\\ \hline
\end{tabular}
\end{center}
\end{table*}
In this section we investigate the wave-packet dynamics in a 2DEG and a 2DHG with a parabolic
dispersion relation along with Rashba SOI which is cubic in momentum. Generally, $k$-cubic
Rashba SOI occurs in two different types of systems. One such system is the
2D heavy-hole gas [\onlinecite{hole1,Winkler,hole3,hole4, hole5}] formed at the interface of $p$-doped III-V semiconductors,
namely the {\mbox GaAs}/{\mbox AlGaAs} heterostructure. In addition, $k$-cubic Rashba SOI can also be
found in the 2D electron gas formed at the interface of perovskite oxide structures [\onlinecite{Sr1,Sr2, Sr3}], such as
the {\mbox LaAlO}${}_3$/{\mbox SrTiO}${}_3$ interface and the {\mbox SrTiO}${}_3$
surface.
The single particle Hamiltonian [\onlinecite{tianx,zarea}]
of a 2D system with cubic Rashba SOI, in a transverse magnetic field
is given by
\begin{eqnarray}\label{Ham_cub}
H=\frac{{\bf \Pi}^2}{2m^\ast}+\frac{i \alpha_3}{2\hbar^3}
\Big(\Pi_{-}^3\sigma_{+}-\Pi_{+}^3\sigma_{-}\Big)
+ \frac{3}{2}g_s \mu_B {\boldsymbol \sigma} \cdot {\bf B},
\end{eqnarray}
where ${\bf \Pi}$ is the conjugate momentum defined in the previous section, and
$\alpha_3$ is the Rashba coupling coefficient. Additionally, we have defined
$\Pi_{\pm} \equiv \Pi_x\pm i\Pi_y$, and $\sigma_\pm \equiv \sigma_x\pm i\sigma_y$.
Note that for the case of heavy holes, the Pauli matrices represent an effective pseudo-spin with
spin projection $\pm3/2$ along the growth direction of the quantum well.
The Landau level spectrum is again given by Eq.~\eqref{eign_spect}, with
\begin{equation}
f(n)=n-1~, ~~~ g(n) = \frac{8 \alpha_3^2 n (n-1)(n-2)}{l_c^6\hbar^2\omega_c^2}~,
\end{equation}
and finally
\begin{equation}
c= \frac{3g^\ast m^\ast}{4m_e}-\frac{3}{2}~.
\end{equation}
Similarly, the temporal evolution of the expectation values of the position operators
for the centre of the wave-packet are given by substituting
$m=3$ in Eqs.~(\ref{expX_gn})-(\ref{expY_gn}).
The components of the velocity operator are given by
\begin{equation}
\hat{v}_x=\frac{\Pi_x}{m^\ast}\sigma_0+\frac{3i\alpha_3}{2\hbar^3}
(\sigma_{+}\Pi_{-}^2-\sigma_{-}\Pi_{+}^2)~,
\end{equation}
and
\begin{equation}
\hat{v}_y=\frac{\Pi_y}{m^\ast}\sigma_0+\frac{3\alpha_3}{2\hbar^3}
(\sigma_{+}\Pi_{-}^2+\sigma_{-}\Pi_{+}^2)~.
\end{equation}
Their expectation values are given by
\begin{equation}
\begin{pmatrix} \langle v_x(t)\rangle \\ \langle v_y(t)\rangle \end{pmatrix}
= \frac{\hbar}{l_c m^\ast}
\sum_{n=0}^\infty \xi_n \begin{pmatrix} \Re h_n \\ \Im h_n \end{pmatrix}~,
\end{equation}
where we have defined
\begin{eqnarray}
h_n &\equiv& P_{n+4}^\ast P_{n+3} + \sqrt{\frac{n+4}{n+1}} Q_{n+4}^\ast Q_{n+3} \nonumber\\
&+ &3 i \tilde{\alpha}_3 \sqrt{\frac{(n+2)(n+3)}{n+1}} P_{n+4}^\ast Q_{n+3},
\end{eqnarray}
using the dimensionless SOI strength:
$\tilde{\alpha}_3 = 2\sqrt{2} \alpha_3/(l_c^3 \hbar \omega_c)$ and $\xi_n$ is defined in Eq.~(\ref{xi_n}).
Note that the expectation values of the position and velocity operators for systems
with $k$-cubic Rashba SOI have recently been reported in Ref.~[\onlinecite{tutul}],
and are presented here for completeness, and to emphasize that they can be derived from our unified description.
The oscillation, classical, and revival timescales are now given by,
\begin{equation}
\tau_{\rm osc}^\lambda=\tau_c\frac{1}{\lvert n_0-1+\lambda \sqrt{c^2+g(n_0)}\rvert}~,
\end{equation}
\begin{equation}
\tau_{\rm cl}^\lambda=\tau_c\frac{1}{\lvert 1+\lambda\frac{\tilde{\alpha}_3^2(3n_0^2-6n_0+2)}{2\sqrt{c^2+g(n_0)}}\rvert}~,
\end{equation}
and
\begin{equation}
\tau_{\rm rev}=\tau_c
\frac{4\sqrt{c^2+g(n_0)}}{\tilde{\alpha}_3^2\Big\{6(n_0-1)-\frac{\tilde{\alpha}_3^2}{2[c^2+g(n_0)]}(3n_0^2-6n_0+2)^2\Big\}}~. \\
\end{equation}
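These closed-form timescales can be checked numerically against Table~\ref{T2}. The sketch below (standard Python, SI units) evaluates them for the 2DHG parameters of Fig.~\ref{fig2}, taking $\tau_c=2\pi/\omega_c$ for the cyclotron period; only the $\lambda=+1$ branch is evaluated here, and the two branches bracket the tabulated $\tau_{\rm osc}$ and $\tau_{\rm cl}$, while $\tau_{\rm rev}$ reproduces the 13.70~ns entry of Table~\ref{T2}.

```python
import math

# Physical constants (SI)
hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg

def cubic_rashba_timescales(m_ratio, g_star, alpha3_eVm3, B, n0, lam=+1):
    """Timescales for a k-cubic Rashba system in a transverse field B."""
    m = m_ratio * m_e
    omega_c = e * B / m                  # cyclotron frequency
    tau_c = 2 * math.pi / omega_c       # cyclotron period (assumed definition of tau_c)
    l_c = math.sqrt(hbar / (e * B))     # magnetic length
    alpha3 = alpha3_eVm3 * e            # eV m^3 -> J m^3
    # dimensionless SOI strength: alpha3_tilde = 2*sqrt(2)*alpha3/(l_c^3 hbar omega_c)
    at = 2 * math.sqrt(2) * alpha3 / (l_c**3 * hbar * omega_c)
    c = 3 * g_star * m_ratio / 4 - 1.5
    # g(n0) = 8 alpha3^2 n(n-1)(n-2)/(l_c^6 hbar^2 omega_c^2) = at^2 n(n-1)(n-2)
    g_n0 = at**2 * n0 * (n0 - 1) * (n0 - 2)
    root = math.sqrt(c**2 + g_n0)
    tau_osc = tau_c / abs(n0 - 1 + lam * root)
    tau_cl = tau_c / abs(1 + lam * at**2 * (3*n0**2 - 6*n0 + 2) / (2 * root))
    tau_rev = tau_c * 4 * root / (
        at**2 * (6*(n0 - 1) - at**2 / (2*(c**2 + g_n0)) * (3*n0**2 - 6*n0 + 2)**2))
    return tau_osc, tau_cl, tau_rev

# 2D hole gas at a p-type GaAs/AlGaAs interface (Table II row): B = 2 T, n0 = 50
tosc, tcl, trev = cubic_rashba_timescales(0.45, 7.2, 1e-29, 2.0, 50)
print(f"tau_osc = {tosc*1e15:.0f} fs, tau_cl = {tcl*1e12:.2f} ps, tau_rev = {trev*1e9:.2f} ns")
```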
The phenomenon of spontaneous collapse is evident in Fig.~\ref{fig2}b and Fig.~\ref{fig2}e for
the position and the velocity of the centre of the
injected wave-packet, respectively. The phenomena of partial and full quantum revival in
the 2DHG are evident in Fig.~\ref{fig2}c and Fig.~\ref{fig2}f.
\section{Dirac materials}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1.0 \linewidth]{fig3.pdf}
\end{center}
\caption{As in Fig.~\ref{fig1}, panels a), b), and c) show the position, and panels d), e), and f) show
the velocity expectation vs time, for different timescales.
Other parameters are $\Delta=0.4$~eV, $v_F=5.32\times10^{5}$~m/s, $B = 2$~T,
and $q_0l_c=10$.}
\label{fig3}
\end{figure}
\label{Sec4}
\begin{table*}[t]
\caption{The parameters for various 2D Dirac-like materials and the associated time-scales. Here $n_0=50$ and $B=2$T.
\label{T3}}
\vspace{0.3cm}
\begin{tabular}{l c c c c c r } \hline\hline
Material & Fermi velocity & Band Gap & $\tau_{\rm osc}$ & $\tau_{\rm cl}$& $\tau_{\rm rev}$ \\
& ( $10^6$ ms$^{-1})$ & (eV) & (fs) & (ps) & (ns) \\ \hline
\vspace{0.1cm}
Graphene\cite{grphn1, graphene2, graphene3} & 1 & Gapless & 11.39 & 1.13 & 0.22 \\
\vspace{0.1cm}
Silicene\cite{gap1,gap2,gap3} & 0.532 & 0 - 0.2 (tunable) & 21.4-19.02 & 2.14-2.41 & 0.42-0.61\\
\vspace{0.1cm}
Germanene \cite{STB} & 0.517 & 0 - 0.2 (tunable) & 22.0-19.4 & 2.2-2.5 & 0.44-0.64\\
\vspace{0.1cm}
MoS$_2$ \cite{gap4,gap5}& 0.085 & 1.6 & 5.16 & 348.12 & 4.69$\times10^4$\\ \hline
\end{tabular}
\end{table*}
In this section we focus on systems whose energy dispersion is similar to that given by the
relativistic Dirac equation,
such as graphene [\onlinecite{grphn1, graphene2, graphene3}], silicene [\onlinecite{sili1,sili2, gap1, gap2, gap3}],
monolayer group-VI dichalcogenides {\mbox MX}$_2$
(\mbox{M=Mo, W} and \mbox{X=S, Se}) etc [\onlinecite{gap4, gap5, TMDC, TMDC2}].
The low-energy Hamiltonian describing these systems is simply given by,
\begin{equation} \label{Hdirac}
H = v_F(p_x\sigma_x+p_y\sigma_y) + \Delta\sigma_z,
\end{equation}
where $v_F$ is the Fermi velocity, and $2 \Delta$ is the band
gap, both of which differ from system to system (see Table~\ref{T3}). If the
band gap $\Delta \to 0$, then Eq.~\eqref{Hdirac} describes
massless Dirac fermions as in graphene. Typically, the band gap varies
from $0-1$ eV in these systems, and in silicene it can even be tuned
experimentally by means of an externally applied electric field [\onlinecite{STB}].
The effect of a transverse magnetic field is included by the usual Landau-Peierls substitution
in Eq.~\eqref{Hdirac}.
The Landau-Level eigen-spectrum is again given by the generic Eq.~\eqref{eign_spect}, after substituting
\begin{equation}
f(n)=0~,~~~ g(n)= n,~~~ {\rm and}~~~ c= \Delta/(\hbar \omega_c)~.
\end{equation}
The corresponding Landau level wave-functions are given by Eqs.~\eqref{wavfnP}-\eqref{wavfnM}, with the substitution $m \to 1$.
The time-evolved injected wave-packet at a later time $t$ can be obtained from Eq.~\eqref{wav_t}, with $m=1$.
The coordinates of the center of the wave packet at later times are now given
by the generic Eqs.~\eqref{expX_gn}-\eqref{expY_gn} after the substitution $m \to 1$.
The velocity operator is given by the Heisenberg equation of motion: $\hbar \hat{v}_j = i [\hat{H}, {\hat r}_j]$, and
straightforward calculations yield the following expressions for the expectation value
for the velocity of the center of the injected wave-packet,
\begin{equation} \label{velB1}
\langle v_x(t)\rangle = \Re [\langle v(t) \rangle]~,~~{\rm and},~~\langle v_y(t)\rangle = \Im [\langle v (t)\rangle]~,
\end{equation}
where
\begin{equation} \label{velB2}
\langle v(t) \rangle = \sqrt{2} v_F \sum_{n=0}^\infty \xi_n \frac{1}{\sqrt{1+n}} ~P_{n+2}^\ast Q_{n+1}~.
\end{equation}
The oscillation, classical and revival timescales in this case are given by,
\begin{equation}\label{timescale2}
\tau_{\rm osc} = \frac{2\pi\hbar}{\varepsilon_{n_0}}~,~~~ \tau_{\rm cl} = \frac{4\pi\hbar\varepsilon_{n_0}}{\epsilon^2}~,
~~{\rm and} ~~\tau_{\rm rev} = \frac{16\pi\hbar\varepsilon_{n_0}^3}{\epsilon^4}~,
\end{equation}
where $\varepsilon_{n_0}=\sqrt{\Delta^2+\epsilon^2n_0}$ and $\epsilon=\sqrt{2}\hbar v_F/l_c$.\vspace{0.5cm}
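The entries of Table~\ref{T3} follow directly from Eq.~\eqref{timescale2}; a minimal numerical sketch (standard Python, SI units; recall that $\Delta$ is half the band gap):

```python
import math

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C

def dirac_timescales(v_F, Delta_eV, B, n0):
    """tau_osc, tau_cl, tau_rev from Eq. (timescale2) for a (gapped) Dirac material."""
    l_c = math.sqrt(hbar / (e * B))            # magnetic length
    eps = math.sqrt(2) * hbar * v_F / l_c      # Landau-level energy scale epsilon
    Delta = Delta_eV * e                       # half band gap, eV -> J
    E = math.sqrt(Delta**2 + eps**2 * n0)      # epsilon_{n0}
    tau_osc = 2 * math.pi * hbar / E
    tau_cl = 4 * math.pi * hbar * E / eps**2
    tau_rev = 16 * math.pi * hbar * E**3 / eps**4
    return tau_osc, tau_cl, tau_rev

# Graphene (gapless, v_F = 1e6 m/s) and MoS2 (2*Delta = 1.6 eV), B = 2 T, n0 = 50
graphene = dirac_timescales(1.0e6, 0.0, 2.0, 50)
mos2 = dirac_timescales(0.085e6, 0.8, 2.0, 50)
```

For graphene this yields $\tau_{\rm osc}\approx11.4$~fs, $\tau_{\rm cl}\approx1.14$~ps, and $\tau_{\rm rev}\approx0.23$~ns, while for MoS$_2$ $\tau_{\rm rev}\approx4.7\times10^{4}$~ns, consistent with the table.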
We tabulate different materials, whose low energy properties are described by Eq.~\eqref{Hdirac} in
Table~\ref{T3}, along with their material properties, and the
relevant oscillation, classical and revival timescales. We plot the expectation value
of the position and velocity of the center of the wave-packet in Fig.~\ref{fig3} over the
classical, collapse and revival timescales to highlight the phenomena of spontaneous collapse and quantum revival.
Note that the value of the band gap plays an important role in determining the relevant timescales. In particular, the revival time
for MoS$_2$ is quite large compared to other Dirac-like materials due to its large band gap. However, the band gap is not a fixed
quantity, and in some materials it can be varied by the application of a transverse electric field (as in silicene) or by doping
(as in transition metal dichalcogenides).
\section{Summary and conclusions}
\label{Sec5}
We present a unified treatment of wave-packet dynamics in various 2D systems
in the presence of a transverse magnetic field, and of the associated phenomena of
spontaneous collapse and long-term quantum revival of the wave-packet.
In particular we focus on a minimum uncertainty Gaussian wave packet
and obtain exact expressions for the expectation values of the position
and velocity operators for a variety of 2D materials, in addition to the
various timescales associated with the phenomena of spontaneous collapse and quantum revival.
For any system with a discrete and non-equidistant Landau-level spectrum,
injecting an initial electron wave packet peaked (centered) around some Landau level, we
find that the wave packet initially evolves quasi-classically and oscillates with a period of $\tau_{\rm cl}$
(the cyclotron time-period). However, at larger times the wave packet
eventually spreads, and quantum interference between the different Landau-level
components of the wave-function leads to its `collapse'. At even longer times,
which are multiples (or rational fractions) of $\tau_{\rm rev}$, quantum
interference results in a long-term revival of the wave packet, and the electron
position and velocity regain their initial amplitude --- again undergoing quasiclassical oscillatory motion.
To summarize, we have presented a unified and exact analytical treatment of
wave-packet dynamics in various 2D spin-orbit coupled Schr\"odinger systems
and Dirac-like materials. As expected on general grounds, any system with a
discrete and non-equidistant Landau-level spectrum exhibits the
phenomena of spontaneous collapse and quantum revival.
\section*{Acknowledgements}
A. A. gratefully acknowledges funding from the INSPIRE Faculty Award by DST
(Govt. of India), and from the Faculty Initiation Grant by IIT Kanpur, India.
\section{Introduction}
\label{sec:intro}
Active systems in general, and active fluids in particular, have recently
become a topical research area in soft matter physics~\cite{Ramaswamy,Toner,kruse1,nedelec,Voituriez,Aranson,goldstein,Marenduzzo07,Cates08,Giomi08,Giomi10,Baskaran09,EPL,Liverpool,Liverpool06,ramaswamy3,llopis,ignacio,SoftMatterReview}.
An ``active particle'' is a particle which consumes energy from the
surrounding environment to do work. In many cases this work is used for
self-propulsion. An active fluid is then a suspension of such
active particles in an underlying Newtonian fluid. The distinguishing feature
of activity in this context is the ability of the particle to exert forces on
the surrounding fluid. In the absence of body forces applied externally, the
simplest perturbation which a particle can impart on the fluid is a force
dipole. According to the direction of the force pair making up the dipole, a
single particle can be either ``extensile'' -- if the forces are exerted
from the centre of mass to the fluid -- or ``contractile'' -- if they are
exerted from the fluid to the centre of mass of the particle. (For a rodlike
particle, extensile forcing ejects fluid in both directions along the axis and
draws it in around the equator; the reverse is true for contractile motion.)
Also, the dipole can be applied at the centre of drag of the object or
elsewhere -- only in the latter case can this active forcing lead to
motility (creating a ``mover'' as opposed to a ``shaker'' \cite{Ramaswamy}).
The presence of a force dipole in active fluids defines a director at the
particle level even in cases where the organism is not rodlike. Thus one can
introduce a natural order parameter which characterises
the magnitude of local orientational order in the fluid. As a result, in
concentrated active fluids one generically expects an isotropic-nematic
transition, and this is why such systems are commonly described by means of
equations of motion that closely resemble those of liquid crystalline
materials.
A suspension of bacterial swimmers, such as {\it B. subtilis}, is
an example of an extensile active fluid, whereas a suspension of algae like {\it Chlamydomonas} is contractile. At a quite different length scale, the same equations can be used to describe an actomyosin solution containing filaments and motors. In this case the active motion of the motors, which resemble moving cross-links on the network of filaments, creates a contractile effect. Such actomyosin networks represent a simplified picture of the cytoskeletal structures underlying the motility of eukaryotic cells \cite{nedelec,kruse1}. In this work we prefer the term
``active fluids'' to ``active gels'' for all the systems under study, although the latter term is widely used. (In fact not all these materials
have a markedly viscoelastic rheological response, as the term ``gel'' suggests; we shall discuss this later on.) The very definition of active particles implies that they are far from equilibrium; unsurprisingly therefore, nonequilibrium phenomena are a dominant theme in the physics of active fluids.
For instance, it has been predicted theoretically~\cite{Voituriez}
that active fluids should undergo a nonequilibrium phase transition
between a ``passive'' quiescent phase, where the motion of each of the
particles is basically uncorrelated and the coarse-grained mean velocity
field is uniform and zero, as in conventional passive unforced fluids, and
an ``active'' phase, in which long-range correlations lead to a
non-zero ``spontaneous flow'' in steady state. This transition has also
been seen experimentally: there the spontaneous flow patterns have been
mapped out by following the dynamics of individual bacteria in a
thin {\it B. subtilis} film, where the local concentration of bacteria
was very high~\cite{Aranson}. What triggers
the transition between the quiescent and the ``active'' phase in these
bacterial fluids is simply the increase in concentration -- which controls, among other things, the magnitude of an ``activity'' parameter which appears in the equations of motion that we will
consider later on. The bacterial flow patterns were invariably seen to
involve vortices and swirls of bacteria, similar to those which
are observed in aerobic bacteria which move around to get into contact with
oxygen~\cite{goldstein}. These large-scale motions are sometimes referred to
as ``bacterial turbulence'' because the flow field resembles that
of a Newtonian fluid at high Reynolds number, in the turbulent regime.
However, it should be kept in mind that, unlike the turbulent flow
of Newtonian fluids, that of active fluids occurs at essentially
zero Reynolds number, so that inertia plays no role.
Although less well characterised experimentally as yet, the rheology of active
fluids is also expected to display a very intriguing
phenomenology~\cite{Liverpool06,Marenduzzo07,Cates08,Giomi08}.
Extensile and contractile fluids lead to very different rheological
responses. It was predicted and later on confirmed by simulations in 1D that
contractile active gels should show an almost solid-like behaviour when
sheared, and that if left free to reorient at the boundary they should
possess a non-zero yield stress~\cite{Liverpool06,Cates08}.
Extensile fluids on the other hand should have in 1D a ``negative yield stress'' causing a window of apparently superfluid rheology (in which a nonzero macroscopic shear rate can be accommodated at zero stress)
close to the transition between the isotropic and the nematic phase.
An outstanding problem in the {continuum} theory of active fluids is that most of
the calculations done so far assume a 1D (or rather a quasi-1D) geometry, in which the
variations of the orientational order parameter and the flow field
are limited to just one dimension. (The orientational order itself lives in a 2D or 3D space.) While this is enough to
describe the spontaneous flow transition and to map out rheology curves,
it is legitimate to ask how much the 1D predictions are confirmed by
fully 2D or 3D simulations.
(The same would hold in most fluid dynamics problems involving the occurrence of
hydrodynamic instabilities; spontaneous flow is just one of these.)
A small number of simulations in 2D for
extensile spontaneously flowing fluids, within an active but isotropic phase, have already shown that flow
patterns in 2D can indeed be significantly different from those in 1D. For instance, while
in 1D a spontaneous net flow is the first state found on entering the
active phase, in 2D rolls and turbulent flow were instead observed, which are
more closely related to the experiments in Ref.~\cite{goldstein,Aranson}.
{Although, as stated above, there are few works in 2D simulating active fluids using the hydrodynamic continuum equations of \cite{Ramaswamy,kruse1} as we pursue here, several other simulation avenues have been pursued in 2D. These include a two-phase model, developed by Wolgemuth \cite{Wolgemuth} in a context specific to bacteria; this focuses on the coupling between activity and number density, which we neglect in this paper. Other contributions have involved simulation of discrete swimmers in two or three dimensions \cite{Ishikawa,Santillan,Graham,llopis,Rupert}; however, such approaches become increasingly difficult as one approaches the case of dense swarms for which the hydrodynamic description (without coupling to number density) was primarily developed. Several of these methods have shown onset of bacterial turbulence or development of coherent structures of various kinds, but it is not yet clear, for instance, which of these require the coupling of activity to number density.}
Our aim in this paper is therefore to study in more detail {the simplest continuum models for} 2D flowing
states of active fluids, focusing on the extensile case,
close to or within the nematic phase, both in the absence of
any forcing and when subjected to a small shear rate. In addition, we evaluate
the role of boundary conditions by comparing simulations in which the
nematic order parameter field is fixed on the surface (to ensure planar
alignment of the associated director field) to others in which the
director is free to rotate at the boundary planes. We confirm that
2D simulations of spontaneous flow patterns in the active phase
markedly differ from 1D patterns. We also show that the rheology curves
may differ when the 1D approximation is lifted. In essence, this paper
extends to 2D the work presented for 1D systems in our earlier paper with
Orlandini and Yeomans \cite{Cates08}. Naturally one can ask what would happen if one generalizes further to the fully 3D case. A comprehensive study of this case would be computationally demanding, and we defer it to future work. However, some initial progress has recently been made using stability analyses to guide a selective exploration of parameter space \cite{Morozov}.
The rest of this paper is structured as follows. In Sec.~\ref{sec:equations}
we review the equations of motion of apolar active fluids, to which we
restrict ourselves here. In Sec.~\ref{sec:geometry} we specify for our simulations the
geometry and boundary conditions; the
parameter values and units used; and (briefly) the algorithmic methods involved. In
Sec.~\ref{sec:1D} we summarise and extend slightly our earlier results~\cite{Cates08}
for the 1D hydrodynamics and rheology of these active suspensions.
The results of our new 2D simulation study are then presented in
Sec.~\ref{sec:extensile} -- note that we focus on active extensile
materials which lead to spontaneous flow in a quasi-1D geometry,
for rod-like active particles~\cite{Cates08,Voituriez}.
Sec.~\ref{sec:conclusion} contains
conclusions and perspectives for future study.
\section{Model equations}
\label{sec:equations}
Following standard procedures \cite{SoftMatterReview} we first employ a Landau--de Gennes free energy ${\cal F} = \int f\,dV$ to describe an equilibrium isotropic-to-nematic transition in a material without activity. The free energy density can be written as
a sum of two terms, $f=f_1+f_2$. The first is a bulk contribution
\begin{eqnarray}
{f}_1=\frac{A_0}{2}(1 - \frac {\gamma} {3}) Q_{\alpha \beta}^2 -
\frac {A_0 \gamma}{3} Q_{\alpha \beta}Q_{\beta
\gamma}Q_{\gamma \alpha}
+ \frac {A_0 \gamma}{4} (Q_{\alpha \beta}^2)^2
\label{eqBulkFree}
\end{eqnarray}
while the second is a distortion term, which we take in a (standard) one-constant approximation as \cite{degennes}
\begin{equation}\label{distorsion}
{f}_2=\frac{K}{2} \left(\partial_\gamma Q_{\alpha \beta}\right)^2.
\end{equation}
In the equations above, $A_0$ is a constant measuring the relative
contributions of the bulk and the distortion term, $\gamma$ controls the
magnitude of order (it may be viewed as an effective temperature or
concentration for thermotropic and lyotropic liquid crystals
respectively), while $K$ is an elastic constant. The resulting free energy density $f$ is standard to describe passive nematic liquid
crystals \cite{degennes}. Here and in what follows Greek indices
denote cartesian components and summation over repeated indices is
implied.
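To make the conventions in Eq. (\ref{eqBulkFree}) concrete, the sketch below (assuming NumPy) evaluates $f_1$ for a uniaxial $Q_{\alpha\beta}=q(n_\alpha n_\beta-\delta_{\alpha\beta}/3)$ and locates the minimizing $q$; for the value $\gamma=3$ used in our runs this gives $q=1/2$, in agreement with the closed form $q=\frac{1}{4}+\frac{3}{4}\sqrt{1-8/(3\gamma)}$ quoted below for the fixed boundary condition.

```python
import numpy as np

def f1_bulk(Q, A0=1.0, gamma=3.0):
    """Bulk Landau-de Gennes free energy density, Eq. (eqBulkFree)."""
    QQ = np.trace(Q @ Q)        # Q_{ab} Q_{ab} for symmetric Q
    QQQ = np.trace(Q @ Q @ Q)   # Q_{ab} Q_{bc} Q_{ca}
    return (A0/2)*(1 - gamma/3)*QQ - (A0*gamma/3)*QQQ + (A0*gamma/4)*QQ**2

def uniaxial_Q(q, n=np.array([1.0, 0.0, 0.0])):
    """Uniaxial order parameter tensor with director n and magnitude q."""
    return q * (np.outer(n, n) - np.eye(3)/3)

# Scan q, locate the minimum, and compare with the analytic minimizer.
gamma = 3.0
qs = np.linspace(0.0, 1.0, 10001)
fs = [f1_bulk(uniaxial_Q(q), gamma=gamma) for q in qs]
q_min = qs[int(np.argmin(fs))]
q_analytic = 0.25 + 0.75*np.sqrt(1 - 8/(3*gamma))
print(q_min, q_analytic)   # both ~0.5 for gamma = 3
```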
The equation of motion for {\bf Q} is taken to be \cite{beris,O92,O99}
\begin{equation}
(\partial_t+{\bf u}\cdot{\bf \nabla}){\bf Q}-{\bf S}({\bf \nabla u},{\bf
Q})= \Gamma {\bf H}+\lambda {\bf Q}
\label{Qevolution}
\end{equation}
where $\Gamma$ is a collective rotational diffusion constant. We have added a term proportional to
$\lambda$, which is one of two ``activity parameters'' that become nonzero when activity is switched on. However, this term is easily absorbed into a redefinition of the coefficient $\gamma$ in Eq. (\ref{eqBulkFree}) and from now on we suppress it: $\lambda = 0$.
{The form of Eq. (\ref{Qevolution}) was suggested on the basis of
symmetry in Refs. \cite{Ramaswamy,kruse1} and derived starting from
an underlying microscopic model in Ref. \cite{Liverpool}.} The
first term on the left-hand side of Eq. (\ref{Qevolution}) is the
material derivative describing the usual time dependence of a quantity
advected by a fluid with velocity ${\bf u}$. This is generalized for
rod-like molecules by a second term
\begin{eqnarray}\label{S_definition}
{\bf S}({\bf \nabla u},{\bf Q})
& = &(\xi{\bf D}+{\bf \omega})({\bf Q}+{\bf I}/3)
\\ \nonumber
& + & ({\bf Q}+
{\bf I}/3)(\xi{\bf D}-{\bf \omega}) \\ \nonumber
& - & 2\xi({\bf Q}+{\bf I}/3){\mbox{Tr}}({\bf Q}{\bf \nabla u})
\end{eqnarray}
where ${\mbox{Tr}}$ denotes the tensorial trace, while ${\bf D}=({\bf \nabla u}+{\bf
\nabla u}^T)/2$ and ${\bf \omega}=({\bf \nabla u}-{\bf \nabla u}^T)/2$ are the symmetric
part and the anti-symmetric part respectively of the velocity gradient
tensor $\nabla u_{\alpha\beta}=\partial_\beta u_\alpha$. The constant $\xi$
depends on the molecular details of a given particle and controls whether the passive material is flow-aligning or flow-tumbling. To isolate the nonlinear effects of activity we choose flow-aligning materials in this paper (bearing in mind that flow-tumbling ones often show unsteady flow behaviour even without activity). The first
term on the right-hand side of Eq. (\ref{Qevolution}) describes the
relaxation of the order parameter towards the minimum of the free
energy. The molecular field ${\bf H}$ which provides the force for
this motion is given by
\begin{equation}
{\bf H}= -{\delta {\cal F} \over \delta {\bf Q}}+({\bf
I}/3) \mbox{Tr}{\delta {\cal F} \over \delta {\bf Q}}.
\label{molecularfield}
\end{equation}
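The tensor algebra in Eq. (\ref{S_definition}) is compact enough to transcribe directly; a minimal sketch (assuming NumPy, with the convention $\nabla u_{\alpha\beta}=\partial_\beta u_\alpha$ of the text):

```python
import numpy as np

def S_tensor(grad_u, Q, xi=0.7):
    """Generalized advection term S(grad u, Q) of Eq. (S_definition).
    Convention (as in the text): grad_u[a, b] = d_b u_a."""
    D = 0.5 * (grad_u + grad_u.T)   # symmetric part of the velocity gradient
    W = 0.5 * (grad_u - grad_u.T)   # antisymmetric part (the vorticity tensor omega)
    QI = Q + np.eye(3) / 3
    return ((xi * D + W) @ QI + QI @ (xi * D - W)
            - 2 * xi * QI * np.trace(Q @ grad_u))

# Consistency check: for a traceless symmetric Q and an incompressible flow
# (traceless grad_u), S is traceless, so Tr Q = 0 is preserved by Eq. (Qevolution).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
grad_u = A - np.trace(A) / 3 * np.eye(3)
Q = rng.standard_normal((3, 3))
Q = 0.5 * (Q + Q.T)
Q -= np.trace(Q) / 3 * np.eye(3)
S = S_tensor(grad_u, Q)
print(np.trace(S))   # vanishes up to round-off
```

The vanishing trace confirms that the slip term cannot generate a spurious isotropic component of ${\bf Q}$ during the evolution.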
The fluid velocity, ${\bf u}$, obeys the continuity equation for an effectively incompressible fluid, $\nabla\cdot{\bf u}= 0$, and also
the corresponding Navier-Stokes equation,
\begin{equation}\label{navierstokes}
\rho(\partial_t+ u_\beta \partial_\beta)
u_\alpha = \partial_\beta (\Pi_{\alpha\beta})+
\eta \partial_\beta(\partial_\alpha
u_\beta + \partial_\beta u_\alpha)
\end{equation}
Here $\rho$ is the fluid density, $\eta$ is a viscosity and
$\Pi_{\alpha\beta}=\Pi^{\rm passive}_{\alpha\beta}+ \Pi^{\rm
active}_{\alpha\beta}$. {Note that the viscosity $\eta$ in Eq.~\ref{navierstokes} is isotropic and is that of the solvent in which our active particles are suspended. This does not preclude the emergence of an anisotropic macroscopic viscosity as a result of the coupling to those particles.}
The stress tensor
$\Pi^{\rm passive}_{\alpha\beta}$ necessary to describe ordinary LC
hydrodynamics without activity is:
\begin{eqnarray}\label{BEstress}
\Pi^{\rm passive}_{\alpha\beta}= &-&P_0 \delta_{\alpha \beta} +2\xi
(Q_{\alpha\beta}+{1\over 3}\delta_{\alpha\beta})Q_{\gamma\epsilon}
H_{\gamma\epsilon}\\\nonumber &-&\xi
H_{\alpha\gamma}(Q_{\gamma\beta}+{1\over 3}\delta_{\gamma\beta})-\xi
(Q_{\alpha\gamma}+{1\over
3}\delta_{\alpha\gamma})H_{\gamma\beta}\\ \nonumber
&-&\partial_\alpha Q_{\gamma\nu} {\delta {\cal F}\over
\delta\partial_\beta Q_{\gamma\nu}} +Q_{\alpha \gamma} H_{\gamma
\beta} -H_{\alpha \gamma}Q_{\gamma \beta}.
\end{eqnarray}
The nematic free energy gives an effectively constant contribution to the fluid pressure $P_0$ (whose final spatial variation is determined by the fluid incompressibility condition). The active stress, which unlike the passive one cannot be derived from any underlying free energy functional, is given to leading order by
\begin{equation}
\Pi^{\rm active}_{\alpha\beta}=-\zeta Q_{\alpha\beta} \label{activestress}
\end{equation}
where $\zeta$ is a second activity constant \cite{Ramaswamy,EPL}.
Note that with the sign convention chosen here $\zeta>0$ corresponds
to extensile rods and $\zeta<0$ to contractile ones \cite{Ramaswamy}; in either case this term does not merely renormalize the equations for a passive liquid crystal but fundamentally alters their form. Accordingly it is a key control parameter in the continuum description of active nematics.
As in Eq. \ref{Qevolution}, the explicit form of the active contribution to the stress tensor entering Eq. \ref{navierstokes} was proposed on the basis of a symmetry analysis of a fluid of contractile or extensile dipolar objects in \cite{Ramaswamy}. It was also derived by coarse graining a more microscopic model for a solution of actin fibers and myosins in Ref. \cite{Liverpool}.
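To see why this term does more than renormalize passive coefficients: a spatially uniform ${\bf Q}$ gives a constant active stress and hence zero force density, whereas any bend of the director yields $f_\alpha=\partial_\beta\Pi^{\rm active}_{\alpha\beta}\neq0$, which is the seed of spontaneous flow. A minimal sketch (assuming NumPy; the profile $\theta(y)$ and the values of $\zeta$ and $q$ are illustrative only):

```python
import numpy as np

zeta, q = 0.05, 0.5          # illustrative activity and order-parameter magnitude
Ny = 256
y = np.linspace(0.0, 1.0, Ny)

def force_density(theta):
    """f_x(y) = d_y Pi^active_{xy}, with Pi^active = -zeta*Q and a director
    n = (cos theta(y), sin theta(y), 0) varying only along y."""
    Pi_xy = -zeta * q * np.cos(theta) * np.sin(theta)   # Q_xy = q n_x n_y
    return np.gradient(Pi_xy, y)

f_uniform = force_density(np.zeros(Ny))          # uniform alignment: no force
f_bent = force_density(0.1 * np.sin(np.pi * y))  # small bend: nonzero force
print(np.max(np.abs(f_uniform)), np.max(np.abs(f_bent)))
```

The uniform state exerts no force on the fluid, while the bent state does; whether that distortion grows or decays is what distinguishes the quiescent from the spontaneously flowing regime studied below.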
The equations of motion chosen above address the case of active nematics, by
which we mean particles whose local ordering is apolar. This means that
locally one has a preferred orientation of the force dipole but not of any
vector field such as the mean swimming direction of motile particles.
The equations of motion in the latter case are yet more complicated
(see e.g. \cite{SoftMatterReview}) but because the active stress takes the
same form as above, for rheological purposes are expected to yield broadly
similar results. In this work, for simplicity, we study solely the apolar case.
\section{Simulation details}
{
In this Section we describe the geometry and boundary conditions used for our numerical work and also discuss the units and parameter values chosen, and the issue of numerical convergence. Note that (primarily for historical reasons) we used finite difference (FD) for free boundaries and lattice Boltzmann (LB) for fixed ones; this is considered further in Appendix A.}
\subsection{Geometry and boundary conditions}
\label{sec:geometry}
We consider a slab of active fluid sandwiched between flat parallel
plates located at $y=0,L_y$. The fluid velocity and order parameter tensor can vary in both $x$ and $y$, while in the third
dimension $z$ we assume translational invariance.
Each plate has length $L_x$ and periodic boundary conditions are imposed in the $x$ direction. In general the
upper plate is taken to move in the positive $x$ direction at a
constant speed $\gdotbar L_y$, although many of the results presented below are for the case with no externally applied shear,
$\gdotbar=0$. At the plates we impose boundary conditions of no slip and no permeation for the fluid velocity. For the molecular order parameter, we will
study two different boundary conditions:
\begin{itemize}
\item So-called ``free'' boundary conditions, in which the order parameter tensor at the wall can take any value but must satisfy a zero-gradient condition in the direction normal to the wall:
\be
\partial_y Q_{\alpha\beta}=0,\;\;\; {\rm at}\;\;\; y=\{0,L_y\}\;\;\;\forall\;{\alpha,\beta}\label{eqn:free}
\ee
\end{itemize}
Free boundaries are believed to give the fastest convergence to bulk behavior by minimizing the effects of the confining walls on the order parameter dynamics.
\begin{itemize}
\item So-called ``fixed'' boundary conditions, in which {the
director field is anchored along the $x$ direction parallel to the wall}, and the order parameter tensor $Q_{\alpha\beta}$ has the equilibrium form for a uniaxial state with this director:
\be
Q_{\alpha\beta} = q\left(n_{\alpha}n_{\beta}-\frac{\delta_{\alpha\beta}}{3}
\right)
\ee
where
\be
q= \frac{1}{4} + \frac{3}{4}\sqrt{1- \frac{8}{3\gamma}}
\ee
is the magnitude of order in the surface, which we take to be equal to
the one in the bulk.
\end{itemize}
The choice of fixed boundary conditions is motivated by the behavior of
non-active liquid crystals which frequently have specific anchoring
interactions that lock in the director field at the boundary.
Their appropriateness for active nematics is less well established
(particularly for bacterial suspensions, although some sort of anchoring
remains plausible for actomyosin gels), which is why a comparison of the two
types of boundary condition is appropriate here. In fact, as we will see
below, our main conclusions concerning the flow patterns and rheology are
relatively robust to the choice of boundary conditions (although these
certainly influence some of the details).
\subsection{Units and parameter values}
\label{sec:parameters}
Throughout our study we use units of length in which the gap between
the plates $L_y=1$; units of time in which the model's underlying microscopic timescale $\tau\equiv
1/(A_0\Gamma)=1$; and units of mass such that the free energy parameter $A_0$, which has the dimensions of a stress, likewise obeys $A_0 =1$.
All runs use a value of the I/N control parameter
$\gamma=3.0$ which lies within the nematic phase for a system without
activity; as stated previously this value effectively absorbs the activity
parameter $\lambda$, which we have set to zero. Throughout we set $\xi=0.7$,
which corresponds to a typical value for flow-aligning molecular nematics {\cite{degennes}; such nematics generally comprise rod-like molecules of modest aspect ratio. In any case}
we expect that, so long as one does not leave the flow-aligning regime, our
results should be relatively insensitive to this choice. In the system of
units just defined, we choose for the Newtonian viscosity a value
$\eta=0.567$, which is much larger than the viscoelastic one without activity,
hence leading to a conventional Newtonian rheology in the passive phase.
With the above choices, and denoting by $\gdotbar$ the mean shear rate imposed
by the relative motion of the two plates, the parameter values that remain to
be varied between runs are the activity level $\zeta$, and a microscopic
lengthscale on which elastic distortions compete
with bulk free energies. This obeys $l\equiv \sqrt{K/A_0} = \sqrt{K}$ (where
the latter equality holds in our system of units only). Accordingly, we
will quote values for the parameters $(\zeta,l)$ in each figure caption
of our results sections below. A third parameter, the Reynolds number (Re)
which controls the relative strength of inertial to viscous terms in the
Navier-Stokes equation, is always small or zero (see below).
{
\subsection{Numerical convergence}
\label{sec:convergence}
}
In a class of systems whose flow frequently is (or appears to be) chaotic, one cannot expect genuine convergence of numerically calculated velocity patterns with respect to the
time-step $\Delta t$ and mesh scale $(\Delta y, \Delta x)$.
We strive to ensure our results are converged, in the sense that further refinement gives no significant change to the type of behavior observed (except very close to parameter values at which there is transition from one regime to another), nor to macroscopic quantities like the time-averaged stress. In the units chosen above, we found this to typically require values of $(\Delta y, \Delta x, \Delta t)=(1/100,\,1/100,\,0.34)$
in our LB simulations and $(\Delta y, \Delta x, \Delta
t)=(1/128,\,4/512,\,0.05)$ -- or sometimes $(\Delta y, \Delta x, \Delta
t)=(1/256,\,4/1024,\,0.05)$ -- in our FD simulations. For sufficiently small $l$, one expects the boundary conditions to be unimportant so that (modulo the minor technical differences discussed above) the results from FD and LB should approach one another in this limit.
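The convergence criterion just described can be illustrated as a simple refinement check on a macroscopic observable (a sketch only: `time_averaged_stress` is a hypothetical stand-in for a full LB/FD run, and its error model and the tolerance are pure assumptions):

```python
import numpy as np

def time_averaged_stress(dy, dx, dt, seed=0):
    # Hypothetical stand-in for one LB/FD run: returns the
    # time-averaged shear stress at the given discretisation.
    # The "true" value and error model are illustrative only.
    rng = np.random.default_rng(seed)
    true_stress = 0.123
    discretisation_error = 0.5 * (dy + dx) + 0.01 * dt
    return true_stress + discretisation_error + 1e-6 * rng.standard_normal()

def is_converged(dy, dx, dt, tol=1e-2):
    # Compare a run against one with refined mesh and time-step;
    # accept if the macroscopic observable changes insignificantly.
    coarse = time_averaged_stress(dy, dx, dt)
    fine = time_averaged_stress(dy / 2, dx / 2, dt / 2)
    return abs(coarse - fine) < tol

# The resolution quoted above for the LB runs.
ok = is_converged(1 / 100, 1 / 100, 0.34)
```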
\section{Results in zero and one dimension}
\label{sec:1D}
\begin{figure}[tbp]
\includegraphics[scale=0.6]{fig1.eps}
\caption{Homogeneous constitutive curves.}
\label{fig:fc}
\end{figure}
\begin{figure}[tb]
\includegraphics*[scale=0.6]{fig2a.eps}
\includegraphics*[scale=0.6]{fig2b.eps}
\caption{{\bf Top}: shear banded (or quiescent) profiles for
$\zeta=0.01$. Values of $l=$0.0005, 0.00071, 0.001, 0.00113,
0.0014, 0.002, 0.0028, 0.004, 0.00476, 0.0057, 0.00673, 0.0073,
0.008, 0.00872, 0.00951, 0.016 increasing for decreasing
$\gdot(y=0)$. Inset: shear rate $\gdot(y=0)$ at the left edge of
the cell extracted from such profiles, for various values of
$\zeta,l$, shown in the master scaling representation discussed in
the main text. \newline {\bf Bottom:} Phase diagram for 1D runs
showing the region of spontaneous shear banded flow (closed
circles) and the region in which an initially heterogeneous state
decays back to a quiescent state of zero flow (open circles). The
dashed line is the power law $l^* = a \zeta^{1/2}$ using the value
of the intercept $a$ extracted from the master scaling plot in the
inset to the top figure.}
\label{fig:profiles1D}
\end{figure}
In this section we briefly recap some results of Ref.~\cite{Cates08} for the
hydrodynamics and rheology of an extensile fluid in less than two dimensions.
We discuss first the case of a homogeneous (0D) imposed
shear flow, and then recall the results of calculations that allow spatial
variations in one spatial dimension (1D), $y$, assuming translational
invariance in both $z$ and $x$.
The homogeneous constitutive curves are shown in Fig.~\ref{fig:fc} for
three different values of positive (extensile) activity, $\zeta > 0$.
As discussed in Ref.~\cite{Cates08}, the vertical drop at the origin arises
from the alignment by weak flow of the nematic director at the Leslie angle
(this occurs, for a passive system, throughout the nematic phase). This
alignment produces an active stress tensor whose $xy$ component for extensile
particles is of opposite sign with respect to shear rate.
This negative stress contribution is proportional to
$\zeta$ and depends on the sign of the imposed flow $\dot\gamma$ but not its
magnitude (when this is small). This discontinuous variation creates a
downward step in the flow curve, which is expected
to allow the spontaneous formation of shear bands, even when no
external shear is applied at the system's boundaries,
$\gdotbar=0$~\cite{Cates08}.
Accordingly, we perform a series of unsheared 1D runs in which spatial
variations are allowed in $y$ (only). Each is initialised with the
tensor $Q_{\alpha\beta}(y)$ having a uniform ($y$-independent) degree
of uniaxial order, and its long axis in the $xy$ plane at an angle
$\theta=22^\circ \tanh( (0.5-y)/l)$ to the $x$ direction. This favours
the formation of just two shear bands. (For random initial conditions,
multiple bands can in principle form in this planar flow geometry.)
The code was then evolved to steady state.
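For concreteness, this initialisation can be sketched as follows (a minimal illustration; the grid size and order parameter value $S$ are assumptions, and the uniaxial form $Q = \tfrac{3}{2}S(nn - I/3)$ matches the parametrisation quoted later for the sheared runs):

```python
import numpy as np

def initial_Q_profile(ny=128, l=0.002, S=0.5):
    # Uniform uniaxial order S throughout the cell; director in the
    # xy plane at theta(y) = 22 deg * tanh((0.5 - y)/l) to the x axis,
    # favouring the formation of exactly two shear bands.
    y = (np.arange(ny) + 0.5) / ny  # cell centres in [0, 1]
    theta = np.deg2rad(22.0) * np.tanh((0.5 - y) / l)
    n = np.stack([np.cos(theta), np.sin(theta), np.zeros(ny)], axis=1)
    # Q_ab(y) = (3/2) S (n_a n_b - delta_ab / 3): traceless, symmetric.
    Q = 1.5 * S * (n[:, :, None] * n[:, None, :] - np.eye(3) / 3.0)
    return y, theta, Q

y, theta, Q = initial_Q_profile()
# theta changes sign across the cell midplane, seeding the two bands.
```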
In the limit $l\to 0$ for any value of $\zeta$ we find the steady
state to comprise two coexisting bands of equal and opposite shear
rates $\pm\gdot^*(\zeta)$, these being the values at which the two
positively sloping branches of the homogeneous constitutive curve
intercept the $\gdot$ axis (see Fig.~\ref{fig:profiles1D}). The bands
are separated from each other by a slightly diffuse interface of
thickness $l$. Increasing $l$ in a series of runs at any fixed value
of $\zeta$ eventually eliminates this state of spontaneous flow, with
quiescence being restored at a critical value $l^*=a\zeta^{1/2}$ where
$a\approx 0.885$. This is seen in the master scaling curve of
$\gdot(y=0,t\to\infty)/\gdot^*$ versus $l/\zeta^{1/2}$ in the inset of
Fig.~\ref{fig:profiles1D}, top, and by the dashed line separating
quiescence (open circles) from spontaneous banded states (filled
circles) in Fig.~\ref{fig:profiles1D}, bottom.
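The resulting 1D phase boundary amounts to a one-line criterion (sketch; only the intercept $a \approx 0.885$ comes from the master scaling fit above):

```python
import math

A_INTERCEPT = 0.885  # from the master scaling plot (inset, Fig. 2 top)

def state_1d(zeta, l, a=A_INTERCEPT):
    # Quiescent above l* = a * sqrt(zeta); spontaneously banded below.
    return "quiescent" if l > a * math.sqrt(zeta) else "banded"

# Example: at zeta = 0.01 the threshold is l* = 0.0885, so even the
# largest l listed in Fig. 2 (top), l = 0.016, bands spontaneously.
state = state_1d(0.01, 0.016)  # "banded"
```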
The results just presented were found with free boundary conditions.
Simulations of the fixed boundary case give a more complex behavior,
particularly at the larger values of $l$, where the anchoring condition at the
wall can compete with the elastic director distortions required to maintain the
shear-banded state. For a fuller discussion of such effects in 1D
see \cite{Cates08}.
\section{2D Results: extensile systems}
\label{sec:extensile}
The remainder of the paper concerns 2D simulations that allow spatial
variations parallel to the plates (along $x$) as well as perpendicular
to them (along $y$). Furthermore we focus on extensile
fluids, which lead to spontaneous flow in 1D (contractile systems
will be treated elsewhere).
\subsection{Unsheared systems}
\label{sec:unsheared}
We discuss first systems that are not subject to any externally
applied shear flow, treating the free and fixed boundary condition cases in
turn. (Systems with applied shear will be addressed in
Sec.~\ref{sec:sheared} below.)
\subsubsection{Free boundary conditions}
\begin{figure}[tb]
\includegraphics[scale=0.53]{fig3.eps}
\caption{Phase diagram for 2D runs with free boundary conditions, each denoted by a square. Empty
  squares: quiescence. Elsewhere the initial shear-banded profile gives
  way to a state of 2D domains that is: steady (crosses);
  oscillatory (shaded squares); aperiodic (filled squares).}
\label{fig:phaseDiagram2D}
\end{figure}
\begin{figure}[tb]
\includegraphics[scale=0.325]{fig5.eps}
\caption{Throughput versus time for $l=0.002$ and $\zeta=0.004$
(left), $\zeta=0.04$ (right).}
\label{fig:throughput}
\end{figure}
\begin{figure*}[tb]
\includegraphics[width=8.5cm]{fig6a.eps}
\includegraphics[scale=0.38,angle=90]{fig6b.eps}
\includegraphics[width=8.5cm]{fig6c.eps}
\includegraphics[scale=0.38,angle=90]{fig6d.eps}
\includegraphics[width=8.5cm]{fig6e.eps}
\includegraphics[scale=0.38,angle=90]{fig6f.eps}
\includegraphics[width=8.5cm]{fig6g.eps}
\includegraphics[scale=0.38, angle=90]{fig6h.eps}
\includegraphics[width=8.5cm]{fig6i.eps}
\includegraphics[scale=0.38,angle=90]{fig6j.eps}
\includegraphics[width=8.5cm]{fig6k.eps}
\includegraphics[scale=0.38,angle=90]{fig6l.eps}
\caption{Snapshot at long time of 2D runs for
$\zeta=0.002,0.004,0.01,0.02,0.04,0.1$ and
$l=0.002$
(free boundary conditions, no external shear throughout). Greyscale on left shows $Q_{xx}$; while arrows on right show the fluid velocity. {The $x$ direction is horizontal, $y$ vertical.}}
\label{fig:states1}
\end{figure*}
\begin{figure*}[tb]
\includegraphics[width=8.5cm]{fig7a.eps}
\includegraphics[scale=0.38,angle=90]{fig7b.eps}
\includegraphics[width=8.5cm]{fig7c.eps}
\includegraphics[scale=0.38,angle=90]{fig7d.eps}
\includegraphics[width=8.5cm]{fig7e.eps}
\includegraphics[scale=0.38,angle=90]{fig7f.eps}
\includegraphics[width=8.5cm]{fig7g.eps}
\includegraphics[scale=0.38,angle=90]{fig7h.eps}
\caption{Snapshot at long time of 2D runs for $\zeta=0.1$ and,
from top to bottom, $l=0.016,0.008,0.004,0.002$ (free boundary conditions, no external shear throughout). Greyscale on left shows $Q_{xx}$; while arrows on right show the fluid velocity. {The $x$ direction is horizontal, $y$ vertical.}}
\label{fig:states2}
\end{figure*}
Using the free boundary condition, Eqn.~\ref{eqn:free}
above, we performed a series of 2D runs at the values of
$\zeta,l$ shown by squares in Fig.~\ref{fig:phaseDiagram2D}. In each
run we input as an initial condition the final state of the
corresponding 1D run of Sec.~\ref{sec:1D} above. (All these 1D runs
had themselves used free boundary conditions.) The dashed line in
Fig.~\ref{fig:phaseDiagram2D} is copied directly from
Fig.~\ref{fig:profiles1D}, bottom. Accordingly, all runs above
this line had a quiescent initial condition. All those below it
started with coexisting shear bands of equal and opposite shear rate.
To this 1D initial condition, we also added a 2D random component
(the noise distribution was uniform, and could be either positive
or negative for each component of the order parameter tensor) of tiny
amplitude. This initialisation procedure allowed a study of the linear
regime kinetics for the initial growth (or decay) of 2D perturbations
at early times, as well as the ultimate attractor that is attained at
long times.
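A minimal sketch of this initialisation step (the noise amplitude and grid shape are our assumptions; the projection at the end keeps $Q$ symmetric and traceless):

```python
import numpy as np

def add_2d_perturbation(Q_1d, nx=64, amplitude=1e-8, seed=1):
    # Broadcast the 1D base state Q_1d (shape (ny, 3, 3)) along x and
    # add a tiny uniform random component, positive or negative, to
    # each tensor entry, as described in the text. The noise is then
    # symmetrised and made traceless so Q remains a valid order
    # parameter field.
    rng = np.random.default_rng(seed)
    ny = Q_1d.shape[0]
    Q = np.broadcast_to(Q_1d[:, None, :, :], (ny, nx, 3, 3)).copy()
    noise = amplitude * rng.uniform(-1.0, 1.0, size=Q.shape)
    noise = 0.5 * (noise + noise.transpose(0, 1, 3, 2))
    tr = np.trace(noise, axis1=2, axis2=3)
    noise -= tr[..., None, None] * np.eye(3) / 3.0
    return Q + noise
```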
For all runs located above the dashed line in
Fig.~\ref{fig:phaseDiagram2D}, the initial 2D perturbation decayed to
zero as a function of time, showing the homogeneous quiescent state to
be linearly stable. In all runs below the line, for which the 1D base
state was shear banded, the initial 2D perturbation grew in time.
In each case, this destabilisation of the 1D initial state resulted at
long times in a more complicated state with 2D structure. A
macroscopic signature of this evolution is the decay of the
gap-averaged throughput, which was non-zero in the initial V-shaped 1D
velocity profile, to zero (on average) in the final 2D state; see
Fig.~\ref{fig:throughput}. (Note that by convention we choose the V-shaped velocity profile in the initial state
to correspond to a positive throughput.)
At long times, the signal of $y$-integrated amplitude versus time (for
all $q_x$ modes in any given run) settled either to a steady state, to an
oscillatory attractor, or to an aperiodic, apparently chaotic, attractor. The word ``apparently'' is used here because we do not measure Lyapunov exponents and therefore cannot distinguish true chaos from quasiperiodic motion. (From now on, though, we neglect this distinction and use the terms `chaotic' and `aperiodic' interchangeably.) We distinguish these three
different dynamical regimes by the type of filling of the squares in
Fig.~\ref{fig:phaseDiagram2D} (crosses=steady; shaded=oscillatory;
solid-filling=chaotic.) Representative snapshots of the full 2D state
at a long time after the start of each run, once the system has
attained this ultimate attractor, are shown in Figs.~\ref{fig:states1}
and~\ref{fig:states2} respectively for runs performed along the
horizontal and vertical thin dotted lines in
Fig.~\ref{fig:phaseDiagram2D}. As can be seen, steady roll-like states
tend to dominate towards the upper left of the unstable regime in the
$\zeta,l$ plane, giving way to chaotic, turbulent-like states at the
lower right.
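The assignment of each run to one of the three regimes can be illustrated by a simple spectral heuristic applied to the amplitude time series (a sketch only: the thresholds are our assumptions, not values used in the paper, and as noted above this cannot separate true chaos from quasiperiodicity either):

```python
import numpy as np

def classify_attractor(signal, steady_tol=1e-6, peak_frac=0.9):
    # Negligible fluctuations -> steady; a fluctuation spectrum
    # dominated by a single frequency -> oscillatory; anything
    # else is labelled aperiodic.
    s = np.asarray(signal, dtype=float)
    fluct = s - s.mean()
    if np.std(fluct) < steady_tol:
        return "steady"
    power = np.abs(np.fft.rfft(fluct)) ** 2
    if power[1:].max() / power[1:].sum() > peak_frac:
        return "oscillatory"
    return "aperiodic"

t = np.linspace(0.0, 100.0, 4096, endpoint=False)
kind_a = classify_attractor(np.ones_like(t))        # "steady"
kind_b = classify_attractor(np.sin(2 * np.pi * t))  # "oscillatory"
```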
It is remarkable that the 1D shear bands are immediately unstable towards a
static ``roll-like'' flow pattern with variation in both $x$ and $y$.
Given this,
however, the general progression from steady rolls via oscillations to chaotic
flow on increasing activity at fixed $l$ (that is, fixed ratio of sample
dimension to microscopic length, $L/l$) is fairly natural -- similar behavior
was reported anecdotally for one parameter set in \cite{active_pre}. Notably,
the same progression is seen at fixed $\zeta$ by instead decreasing $l$ --
equivalent to increasing $L/l$, the ratio of the sample size to the molecular
length scale. Thus not only is the instability of the quiescent state delayed
in small systems (as it is in 1D, \cite{Cates08}), but also the subsequent
transitions to oscillation and chaos are likewise delayed. This reflects the
high energy cost of creating inhomogeneous director fields in systems that are
not many times larger than the elasticity length $l$; this cost stabilizes the
quiescent state but can be overcome, for all the $l$ values studied here, by
increasing the activity sufficiently.
\subsubsection{Fixed boundary conditions}
We now turn to the case of fixed boundary conditions.
Fig.~\ref{fig:phaseDiagram2Dfixed} shows the phase diagram in this case,
plotted in the same way as Fig.~\ref{fig:phaseDiagram2D} for free BCs. Just as
we found there, the simple 1D banded state (creating a V-shaped velocity
profile) is never seen.
As expected, in the regime of small $l$ (large $L/l$) the BCs become relatively
unimportant and the behavior as a function of $\zeta$ is very similar to that
with free BCs. However, at larger $l$ (i.e., smaller system sizes in units of
the microscopic length), the choice of BCs plays a larger role.
\begin{figure}[tb]
\includegraphics*[width=7.5cm]{fig8a.eps}
\includegraphics*[width=7.5cm]{fig8b.eps}
\caption{Top: Phase diagram for 1D runs with fixed
boundary conditions, each denoted by a circle. Empty circles denote
quiescence, filled ones spontaneously flowing states.
Bottom: Phase diagram for 2D runs with fixed boundary conditions,
each denoted by a square. Empty
squares: quiescence. Elsewhere the stable 1D profile with a small
inhomogeneity gives
way to a state of 2D domains that is: steady (crosses);
oscillatory (shaded squares); chaotic (filled squares).}
\label{fig:phaseDiagram2Dfixed}
\end{figure}
In 1D the fixed BCs, by constraining the director to lie in the flow plane,
somewhat suppress the instability towards the banded state relative to free
BCs \cite{Cates08}. However, the 1D banded state is again unstable; moreover,
the presence of additional instability modes pushes the boundary of the
passive phase back towards lower activity.
Specifically, to numerical accuracy we discern the following progression at
fixed $l=0.002$. If the activity is smaller than, but close to, the critical threshold in 1D,
the system forms steady convection rolls (seen previously in \cite{active_pre}). Fig. \ref{unsheared_fixedBC}a gives a steady-state snapshot of $Q_{xx}$ and of
the fluid flow in this regime. This steady state is achieved in about 200,000 LB timesteps which corresponds to time $6.755 \times 10^4$ in our units
(i.e., $t=6.755 \times 10^{4} \tau$).
On the other hand, if the activity is raised to just beyond the level needed to create spontaneous flow
in 1D, then the rolls break and form what looks like one or more tilted
bands (Fig. \ref{unsheared_fixedBC}b). These tilted bands are again stationary.
Additional simulations in a box with a 4:1 aspect ratio (not shown) suggest that these
may be stabilised by the reduced geometry and box size used here --
consistent with the fact that we do not
observe these states with free boundary conditions in a 4:1 box (Section VA1).
If the activity increases further (Fig. \ref{unsheared_fixedBC}c, $\zeta = 0.008$) a roll-like state that forms initially breaks into an apparently undulating band which then oscillates.
This looks quite similar to Fig. \ref{fig:states2} (second panel from top) which is however already chaotic (perhaps because of the larger aspect ratio).
A further increase to $\zeta = 0.01$ appears to stabilise multiple bands
with small vortices in between (Fig. \ref{unsheared_fixedBC}d). These states are quasi-1D in
nature, in the sense that the main variation occurs along the velocity gradient
direction, yet were not seen in our strictly 1D simulations.
For yet larger activity, we have an apparently chaotic flowing state, similar to that reported above for
free BCs (Fig.\ref{unsheared_fixedBC}e), also seen previously in Ref. \cite{active_pre}.
It should be noted that with fixed boundary conditions the thresholds for spontaneous flow in 1D and 2D differ. This contrasts with the free boundary condition results discussed before, and is due to boundary stabilisation of the passive phase. For relatively large values of $l/L$, the boundaries provide layers within which the ordering is fixed by the anchoring, and this provides the observed stabilisation. As expected, we find that as the value of
$l/L$ decreases, so does the difference between the thresholds in 1D and 2D (see Fig.~\ref{fig:phaseDiagram2Dfixed}).
\begin{figure*}
\begin{center}
\centerline{\includegraphics[scale=0.5]{fig9_new.eps}}
\end{center}
\caption{Snapshots corresponding to
simulations with extensile fluids and fixed boundary conditions,
for a system with $L_x=L_y=1$ and periodic boundary conditions along the
flow direction.
The snapshots show grayscale plots of $Q_{xx}$ (left column), and
the velocity profile (right column).
The value of the activity is (from top to bottom): 0.002 (a), 0.004 (b), 0.008 (c), 0.01 (d), 0.08 (e); throughout, $l=0.002$.}
\label{unsheared_fixedBC}
\end{figure*}
\subsection{Sheared systems}
\label{sec:sheared}
Here we address systems that are subject to a shear flow applied
externally at the cell boundaries. As before we consider in turn the cases
of free and fixed boundary conditions.
\subsubsection{Free boundary conditions}
For very small $\gdotbar$, one expects the effect of flow to be perturbative
on the underlying zero-shear states found above. In a strictly 1D flow showing
shear bands, the effect is merely to shift the band interface, creating a
nonzero mean shear rate without altering any structure within the bands. This
degeneracy allows the system to accommodate a small macroscopic shear flow
without developing a stress (called ``superfluidity'' in \cite{Cates08}).
In 1D we also found that increasing the shear rate beyond the superfluidity
window caused elimination of one of the two shear bands, thereby restoring a
homogeneous shear flow.
In 2D, with a more complicated inhomogeneous flow in the absence of
bulk shear, it is far from clear whether a similar degeneracy ought to
exist. If it does not, one expects a finite shear rate to be
accompanied by a finite stress, so that the superfluid region is
replaced by one of finite (zero-shear) viscosity. Nonetheless one
might expect restoration of homogeneous flow at high enough
$\gdotbar$, either by a sharp transition or only gradually as
$\gdotbar\to\infty$.
To address the behavior in 2D we use for reference the unsheared phase
diagram in the $(\zeta,l)$ plane of Fig.~\ref{fig:phaseDiagram2D},
focusing on three locations in it: $l=0.002$ and
$\zeta=0.001,0.005,0.04$, for which the unsheared state comprised
simple rolls, wavy rolls and turbulence respectively. (Recall
Fig.~\ref{fig:states1}.) At each of these locations we perform a
series of runs for increasing values of the applied shear rate
$\gdotbar$. At each shear rate separately we perform shear startup
from a state of no flow, with an initial condition in which the order
parameter tensor is taken to be everywhere uniaxial for simplicity,
$Q_{\alpha\beta}=\tfrac{3}{2} S_1 (n_\alpha n_\beta -
\delta_{\alpha\beta}/3)$, and with its principal axis assumed to
reside in the $xy$ plane. To avoid biasing the system into any
particular final state, at each individual grid point separately we
assign a random value for the order parameter $S_1$ (drawn from a flat
top-hat distribution); and for the angle of the principal axis (from a
flat distribution between $0$ and $2\pi$).
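A sketch of this random initialisation (the upper bound on $S_1$ is an assumption; the text does not quote the limits of the top-hat distribution):

```python
import numpy as np

def random_uniaxial_Q(ny, nx, S_max=0.6, seed=0):
    # At every grid point: Q = (3/2) S1 (n n - I/3), with the order
    # parameter S1 drawn from a flat top-hat (upper bound S_max is our
    # assumption) and the in-plane director angle drawn uniformly on
    # [0, 2*pi), as described above.
    rng = np.random.default_rng(seed)
    S1 = rng.uniform(0.0, S_max, size=(ny, nx))
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(ny, nx))
    n = np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=-1)
    nn = n[..., :, None] * n[..., None, :]
    return 1.5 * S1[..., None, None] * (nn - np.eye(3) / 3.0)

Q0 = random_uniaxial_Q(64, 256)  # shape (64, 256, 3, 3)
```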
Our results are shown in Figs.~\ref{fig:zeta0.001_l0.002}
to~\ref{fig:zeta0.04_l0.002}. For low activity $\zeta = 0.001$, the
static roll pattern observed without shear is destroyed even at the
smallest shear rates studied, whereas at high enough shear rates a
homogeneous flow is recovered and the shear stress reverts to the one
calculated in 1D. The flow pattern at very low shear rates is unsteady
and is spatially nonuniform with no evident periodicity in either
space or time. The shear stress takes both positive and negative
values. The unsteadiness makes it very difficult to determine a
reliable value for the average shear stress at small $\gdotbar$ (error
bars in Fig.~\ref{fig:zeta0.001_l0.002} are standard deviations of the mean
from the time series).
At $\zeta = 0.005$, the `wavy roll' state observed in the case of zero
bulk shear (closely resembling that of Fig.~\ref{fig:states1}) is again
broken up at the smallest shear rates studied. This is not surprising
since this state has some form of (defect-ridden) layerwise order
along the flow direction which is inconsistent with the imposed shear
geometry. In contrast to the preceding case, the system now shows
clear evidence of a finite viscosity at low shear rates (with much
smaller stress fluctuations at low $\gdotbar$). Again, the system
approaches a homogeneous laminar flow at high enough flow rates where
the stress similarly attains the value predicted from the 1D
calculation in this regime.
Finally, for $\zeta= 0.04$, the chaotic state present initially is
perturbed only marginally at small strain rates. The data is
consistent with a finite effective viscosity at low strain rates, but
unlike the previous two cases the flow curve has noticeable upward
curvature here (so $\sigma\sim\gdotbar^p$ with $p>1$ cannot be ruled
out). At high strain rates the flow becomes progressively more
homogeneous and the shear stress intersects the 1D curve, signifying a
return to a laminar 1D flow.
\begin{figure}[tbp]
\includegraphics[width=7.5cm]{fig10a.eps}
\includegraphics*[width=6.5cm]{fig10b.eps}
\includegraphics*[width=6.5cm]{fig10c.eps}
\includegraphics*[width=6.5cm]{fig10d.eps}
\includegraphics*[width=6.5cm]{fig10e.eps}
\includegraphics*[width=6.5cm]{fig10f.eps}
\includegraphics*[width=6.5cm]{fig10g.eps}
\includegraphics*[width=6.5cm]{fig10h.eps}
\caption{Top: Flow curve for $\zeta=0.001, l=0.002$. {Lower:} representative state snapshot at a late time
(on the final attractor) for each of the lowest seven shear
rates. (Shear rate increasing in snapshots downwards. {The $x$ direction is horizontal, $y$ vertical.})
The solid line shows the 0D homogeneous constitutive curve.}
\label{fig:zeta0.001_l0.002}
\end{figure}
\begin{figure}[tbp]
\includegraphics*[width=7.5cm]{fig11a.eps}
\includegraphics*[width=6.5cm]{fig11b.eps}
\includegraphics*[width=6.5cm]{fig11c.eps}
\includegraphics*[width=6.5cm]{fig11d.eps}
\includegraphics*[width=6.5cm]{fig11e.eps}
\includegraphics*[width=6.5cm]{fig11f.eps}
\includegraphics*[width=6.5cm]{fig11g.eps}
\includegraphics*[width=6.5cm]{fig11h.eps}
\caption{Top: Flow curve for $\zeta=0.005, l=0.002$. {Lower:}
representative state snapshot at a late time (on the final
attractor) for each of the lowest seven shear rates. (Shear rate
increasing in snapshots downwards. {The $x$ direction is horizontal, $y$ vertical.})
The solid line shows the 0D homogeneous constitutive curve.}
\label{fig:zeta0.005_l0.002}
\end{figure}
\begin{figure}[tbp]
\includegraphics*[width=7.5cm]{fig12a.eps}
\includegraphics*[width=6.5cm]{fig12b.eps}
\includegraphics*[width=6.5cm]{fig12c.eps}
\includegraphics*[width=6.5cm]{fig12d.eps}
\includegraphics*[width=6.5cm]{fig12e.eps}
\includegraphics*[width=6.5cm]{fig12f.eps}
\includegraphics*[width=6.5cm]{fig12g.eps}
\includegraphics*[width=6.5cm]{fig12h.eps}
\caption{Top: Flow curve for $\zeta=0.04, l=0.002$. {Lower:}
representative state snapshot at a late time (on the final
attractor) for each of the lowest seven shear rates. (Shear rate
increasing snapshots downwards. {The $x$ direction is horizontal, $y$ vertical.})
The solid line shows the 0D homogeneous constitutive curve.}
\label{fig:zeta0.04_l0.002}
\end{figure}
\subsubsection{Fixed boundary conditions}
We now turn to the case of fixed boundary conditions. In 1D the results are very similar to the free boundary case \cite{Cates08}. The main difference between fixed and free boundary conditions in 1D is that the superfluid window is somewhat narrowed by the constraint on the order parameter (which inhibits the formation of a band interface close to a wall) and can disappear altogether for large enough value of $l$~\cite{Cates08}.
As in the preceding section on free boundary conditions, we here report flow curves and typical snapshots for different values of the activity, chosen as representative of the steady-roll and turbulent regimes of the unsheared phase diagram.
The flow curve observed when shearing the static roll pattern is shown in Fig.~\ref{rolls-fixed}. One important qualitative difference with respect to the free boundary case is that there appears to be a linear regime in this case, in which the viscosity of the rolls is well defined, and larger than $\eta$ -- see Fig.~\ref{viscosity-rolls-fixed}. The extent of this linear regime is very small, and quite likely it shrinks with increasing $L_x$ -- which may explain this qualitative difference between the free and fixed boundary observations. On the other hand, in good agreement with the free boundary case, at high enough shear rates a 1D flow curve is recovered. At low $\dot{\gamma}$, again as with free boundaries, the shear stress takes both positive and negative values.
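For reference, the normalised effective viscosity plotted in Fig.~\ref{viscosity-rolls-fixed} is simply (a trivial sketch in the paper's units):

```python
def effective_viscosity(sigma_xy, gammadot, eta=0.567):
    # Apparent viscosity of the sheared state normalised by the
    # Newtonian viscosity eta = 0.567 chosen above. A linear regime
    # corresponds to this ratio being (roughly) independent of the
    # imposed shear rate, as found for the rolls at the two rates
    # compared in Fig. 14.
    return (sigma_xy / gammadot) / eta
```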
The behaviour shown in Fig.~\ref{turbulent-fixed} for $\zeta=0.04$ is selected as typical of flow curves starting from the unsheared turbulent regime. The flow curve is in good quantitative agreement with its counterpart for free boundary conditions shown before. The data at small shear rates are consistent with a small yield stress in this case; this is reasonable, as turbulent flows may be expected to dissipate more than rolls and hence have a larger viscosity. The linear regime also disappears in the turbulent state.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=7.5cm]{fig13.eps}}
\end{center}
\caption{
Top: Flow curve for $\zeta=0.001, l=0.002$. The dashed curve is
part of the homogeneous constitutive curve.
Lower two panels: representative state snapshot at a late time
for $\dot{\gamma}=10^{-6}$ (second row) and $\dot{\gamma}=10^{-4}$
(third row).}
\label{rolls-fixed}
\end{figure}
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=7.5cm]{fig14.eps}}
\end{center}
\caption{Effective viscosity (normalised by $\eta$)
versus time for an active gel with
$l=0.002$, $\zeta=0.001$ with
fixed boundary conditions. The solid line refers to
$\dot{\gamma}=0.0002$, the dashed one to $\dot{\gamma}=0.001$.
The steady-state effective viscosities are very similar in both cases, demonstrating
that rolls possess a linear rheological regime.
\label{viscosity-rolls-fixed}
\end{figure}
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=7.5cm]{fig15.eps}}
\end{center}
\caption{
Top: Flow curve for $\zeta=0.04, l=0.002$.
Lower three panels: representative snapshots at a late time
for $\dot{\gamma}=0.00005$ (second row), $\dot{\gamma}=0.001$
(third row) and $\dot{\gamma}=0.002$ (bottom row).}
\label{turbulent-fixed}
\end{figure}
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=7.5cm]{fig16.eps}}
\end{center}
\caption{Effective viscosity (normalised by $\eta$)
versus time for an active gel with
$l=0.002$, $\zeta=0.0005$,
and fixed boundary conditions. The solid line refers to
$\dot{\gamma}=0.0002$, the dashed one to $\dot{\gamma}=0.001$.
The effective viscosities are smaller than $\eta$ in both cases, and nearly equal (demonstrating the existence of a linear regime).
Note that in order
to get an out-of-plane velocity field in steady state one needs
to start e.g. with an out-of-plane director profile.}\label{out-of-plane-viscosity-fixed}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
To conclude, we have presented numerical calculations, by means of both finite difference and Lattice Boltzmann simulations, of the dynamics and flow behaviour of active extensile fluids, such as, for instance, concentrated bacterial suspensions. After checking consistency between the two methods we have used them to explore the different active flows associated with free and fixed boundary conditions.
In the unsheared case, we found that the transition to spontaneous flow differs significantly from its 1D counterpart, consistent with previously reported 2D simulations performed for selected parameters~\cite{SoftMatterReview}. Instead of active bands, we find that the spontaneously flowing phase typically consists of rolls or turbulent flow. In some runs we were nevertheless able to stabilise tilted active bands, which however differ in shape from the purely 1D ones.
The 2D flow curves are also rather different from the 1D ones. Our finite difference simulations with free boundary conditions were performed with a 4:1 aspect ratio between the flow and the flow gradient directions, and this leads to a vanishing or very small linear regime. With fixed boundary conditions, on the other hand, we find that rolls have a well-defined viscosity in the linear regime, and this is larger than the nominal Newtonian viscosity of the fluid. This is in agreement with the expectation that 2D vortices lead to enhanced dissipation with respect to 1D bands.
Fully 3D simulations of this problem would undoubtedly be worthwhile, but present serious computational challenges \cite{SoftMatterReview}. An intermediate step towards a full 3D simulation is to maintain a two-dimensional simulation domain while allowing the velocity and director fields to be fully 3D. This is what we do in our LB studies (see Appendix), although in all the states discussed so far the flow and director field remain two-dimensional, provided that the initial configuration is purely two-dimensional. If this is not the case, a 3D flow may result, in which both the director and the velocity field acquire an out-of-plane component (an example is shown in Fig.~\ref{out-of-plane-viscosity-fixed}). Intriguingly, this state has a linear viscosity which is {\em smaller} than the Newtonian one, in line with the 1D results and dissimilar to the main trends observed in 2D. The details of this out-of-plane flow mode will be pursued elsewhere.
In summary, the dynamics and rheology of active extensile fluids are extremely nonlinear, and predictions based on purely 1D flows, as typically considered in analytical calculations, should be treated with care, since 2D results are often different and more complicated. Nevertheless, our results offer firm predictions for macrorheological experiments on thin 2D samples, which could be performed by, e.g., shearing a {\it B. subtilis} monolayer, and it would be very interesting to compare experimental rheological curves to our predictions for the rheology of active rolls and in-plane turbulent states.
{\bf Acknowledgements} We acknowledge illuminating discussions with A. Morozov. Work funded by EPSRC Grants EP/030173/1 and EP/E05336X/2. MEC holds a Royal Society Research Professorship.
\section{Introduction}
Strongly correlated electron systems have challenged traditional condensed matter paradigms based on
weakly interacting quasiparticles \cite{Anderson:1984}. Meanwhile, theory tools originating
from high-energy physics have been useful in addressing the physical properties of these materials,
(for a review see \cite{Sachdev:2010ch}). For example, the anti de-Sitter / Conformal Field Theory (AdS/CFT)
correspondence has proved successful in the investigation of strong-coupling
gauge theories \cite{Maldacena:1997re} with its first application focusing on conformally invariant theories.
Other non-relativistic scaling symmetries have been proposed in the context of holography involving Schr\"odinger
symmetry \cite{Son:2008ye, Balasubramanian:2008dm} or Lifshitz symmetry \cite{klm}.
The progress in geometric realizations of Schr\"odinger symmetry, with a general dynamical exponent z,
aimed for condensed matter applications has paved the way to finite temperature generalizations
\cite{Herzog:2008wg, Maldacena:2008wh, Adams:2008wt, Yamada:2008if}
using the null Melvin twist \cite{Alishahiha:2003ru, Gimon:2003xk}.
AdS space in the light-cone frame (ALCF) with z=2 has also been put forward as such a holographic background \cite{Goldberger:2008vg, Barbon:2008bg},
and a corresponding Schwarzschild black hole solution has been considered
\cite{Maldacena:2008wh, Kim:2010tf}.
Notably, while Schr\"odinger space and ALCF yield the same thermodynamic properties \cite{Herzog:2008wg, Maldacena:2008wh, Yamada:2008if, Kim:2010tf}
and transport coefficients (when the latter are independent of an embedding scalar)
\cite{Ammon:2010eq, Kim:2010tf}, ALCF is simpler and has a well-defined holographic renormalization.
Here we analyze and report on the transport properties of the ALCF, matching several universal experimental results on
the normal state of cuprate superconductors at very low temperatures, which have been a subject of
intensive research over the past two decades and yet remain largely unexplained.
While other types of experimental data are available, such as spectroscopy and thermodynamic data,
we choose to analyze transport data because an understanding of the normal state transport properties of high
$T_c$ cuprates is widely regarded as a key step towards the elucidation of the pairing mechanism for high-temperature
superconductivity \cite{Hussey:2008review}.
The holographic model we present provides a novel paradigm for the normal state of strange metals,
in particular high-temperature superconducting (high $T_c$) cuprates in the overdoped region,
where the charge carriers added to the parent insulator exceed the value necessary for optimal superconductivity.
Further to describing the puzzling normal state properties of these materials,
our approach leads to new falsifiable predictions for experiment.
In particular, we successfully describe the $T+T^2$ behavior of the resistivity
in \cite{Cooper2009} and the $T+T^2$ behavior of the inverse Hall angle observed in \cite{mackenzie}
at {\em very low temperatures} $T<30K$, where a single scattering rate is present.
This newly emerging very-low-temperature scaling behavior of the magnetotransport properties
is in accord with the distinct origin of the criticality at very low temperatures advertised in \cite{Hussey2009}.
At higher temperatures, $T>100$K, the scaling differs between
the linear-in-temperature resistivity and the quadratic-in-temperature inverse Hall angle, signaling two
scattering rates \cite{Tyler1997}. In searching for quantum criticality at zero temperature and
its possible connection to the origin of superconductivity, we concentrate on the lower
temperature regime with a single scattering process. We also comment on how two scattering processes emerge
when other mechanisms present in our model are incorporated.
In addition to the resistivity and inverse Hall angle, very good agreement is also found with experimental
results for the Hall coefficient, magnetoresistance and Kohler's rule on various high $T_c$ cuprates
\cite{Cooper2009,mackenzie,Hussey2009,Tyler1997,Ong1991,Takagi1992,Kendziora1992,hwang,harris,Hussey1996,tyler,Naqib2003,Nakajima2004,E1,E2,AndoBoebinger,Daou2009}.
To the best of our knowledge, no other model describes all of these observables successfully.
Our model provides a change of paradigm from the notion of a quantum critical point,
as it is quantum critical at $T\to 0$ on the entire overdoped region.
In this sense our work departs from other holographic approaches \cite{Cubrovic:2009ye, Liu:2009dm, Faulkner:2009wj},
where the measured transport is due to loop fermion effects. As such, it is applicable to a more general class of materials
{\it e.g.,} $d$ and $f$-electron systems, where the low temperature resistivity varies as $T + T^2$ \cite{stewart} and
exhibit a quantum critical line \cite{Cooper2009,zaum}.
There have been several works that use holographic approaches in order to model strange metal behavior.
The fermionic structure of such systems in the IR has been analyzed in \cite{Cubrovic:2009ye, Liu:2009dm, Faulkner:2009wj}
and modifications due to dipole couplings in \cite{dipole}.
In particular, it was found that there is an IR scaling symmetry that could allow the realization of a marginal Fermi liquid.
The IR exponent would need, however, to be tuned for this to take place.
The linear temperature dependence of the Ohmic resistivity was realized in spaces with AdS or Lifshitz scaling
\cite{Faulkner:2010zz, Hartnoll:2009ns, cgkkm, Lee:2010ii}
and in Schr\"odinger space \cite{Ammon:2010eq,Kim:2010tf}.
A linear resistivity and a crossover to quadratic behavior was found in a larger class of scaling geometries in
\cite{cgkkm}. In the same reference, the full set of possible holographic non-trivial
low temperature behavior was classified and, as shown in \cite{qc}, comprises all possible classes of quantum critical behavior in theories with a single scalar IR relevant operator dominating the dynamics.
Finally, the temperature behavior of the Hall angle was addressed
using Lifshitz type metric with broken rotational symmetry \cite{Pal:2010sx}.
In section \ref{sec:SchrGeometry}, we provide the basic information on the gravity background, including how to
interpret it in comparison with the extensively studied AdS case. In section \ref{sec:transport}, we provide detailed properties and calculations of the
transport using the probe DBI technique. Magnetotransport coefficients are calculated
and analyzed in section \ref{sec:HallTransport}, where we also include the analysis of higher-temperature transport properties.
In section \ref{exp}, our results are compared to the experimental data available in the literature, focusing on the universal features.
\section{Holography and AdS/CFT for strongly correlated electrons}
Strong interactions of realistic finite-density systems have provided an arena for a wealth of techniques, geared
to assess in most cases the qualitative physics. A wide range of unsolved problems remain to be addressed,
especially in the realm of strange metals including condensed matter systems on the border with magnetism.
There is, therefore, an inviting opportunity for new techniques and approaches to contribute in these challenging
problems in modern condensed matter.
An interdisciplinary approach towards this aim is the utilization of the gauge-gravity correspondence,
abstracted from the correspondence between non-abelian gauge theories and string theories.
So far it has been explored in several directions, providing a novel perspective on both the modeling and the
solution of some strongly coupled QFTs.
The hope behind potential applications to condensed matter physics is that IR strong interactions of the Kondo type in materials,
where spins can interact with electrons, may provide bound states that behave in a range of energies as non-abelian
gauge degrees of freedom that may also be coupled to other fields. The gauge interactions are characterized
by a number of charges $N_c$ that are conventionally called ``colors". Their actual number depends on the problem
at hand but it is typically small.
If this is the case, then in terms of the electrons and spins, the YM fields are composite. In the regime where the effective
YM interaction is strong, the physical degrees of freedom are expected to be colorless bound states.
Their residual interactions, analogous to nuclear forces in high-energy physics, are still strong.
On the other hand, the effective interaction between colorless bound states can be made arbitrarily weak
in the limit of a large number of colors, $N_c\to \infty$, as it is controlled by $1/N_c\to 0$,
although the original interaction of colored sources is strong.
In this limit, the theory is simplified and may be calculable.
Of course, typically, the original problem has a finite and sometimes small number of effective colors.
The question then is: how reliable are the large $N_c$ estimates for the real physics of the system?
The answer to this varies, and we know many examples in both classes of answers.
A good example on one side is the fundamental theory of strong interactions,
Quantum Chromodynamics based on the gauge group SU(3), indicating $N_c=3$ colors.
It is by now established that for many aspects of this theory $3\simeq \infty$, with the accuracy varying in the range of $3$--$10\%$.
It is also known that the analogous theory with two colors, SU(2), has some significant differences from its $N_c\geq 3$ counterparts.
There are other theories where the behavior at finite $N_c$ is separated from the $1/N_c$ expansion
by phase transitions making large $N_c$ techniques essentially inapplicable.
Notably, large $N_c$ techniques have been applied to strongly coupled systems for several decades,
and it is therefore natural to ask what new contribution is delivered by the present effort.
In adjoint theories in more than two dimensions, it is well known that until recently
even the leading order in $1/N_c$ could not be computed.
Although some qualitative statements could be made in this limit, the number of quantitative results was rather scarce.
On the other hand, 't Hooft observed that the leading order in $1/N_c$ is captured by the classical limit of a quantum string theory \cite{hooft}.
Finding and solving this classical string theory was therefore equivalent to calculating the leading order result in $1/N_c$ in the gauge theory.
Unfortunately, such string theories, dual to gauge theories, remained elusive until 1997, when Maldacena \cite{maldacena} made a rather
radical proposal:
(a) This string theory lives in more dimensions than the gauge theory\footnote{This unexpected (see however \cite{polyakov}) fact can be
intuitively understood in analogy with simpler adjoint theories in 0 or 1 dimensions.
There it turns out that the eigenvalues of the adjoint matrix in the relevant saddle point become continuous in the large $N_c$ limit,
and appear as an extra dimension. In general how many new dimensions may emerge in a given QFT in the large $N_c$ limit
is not a straightforward question to answer, although exceptions exist.};
(b) At strong coupling, it can be approximated by supergravity, a tractable problem.
The concrete example proposed contained on one hand a very symmetric, scale-invariant, four-dimensional gauge theory
(${\cal N}=4$ super Yang-Mills), and on the other a ten-dimensional IIB string theory
compactified on the highly symmetric constant-curvature space AdS$_5\times \mathbf S^5$.
This correspondence has come to be known as the AdS/CFT, or holographic, correspondence.
Although this claim is a conjecture, it has amassed sufficient evidence to spark considerable theoretical work exploring
the ramifications of the correspondence, for the dynamics on one hand of strongly coupled gauge theories and
on the other hand of strongly curved string theories.
An important evolution of the holographic correspondence is the advent of the concept of Effective Holographic Theories
(EHTs) \cite{cgkkm}, in analogy with the concept of Effective Field Theories (EFTs) in the context of QFT\footnote{
There are several works that contain a version or elements of the idea of the EHT \cite{rg}, although they vary in focus or philosophy.}.
The rules more or less follow those of EFTs with some obvious differences and most importantly with less intuition.
In standard EFTs, there are several issues that are relevant:
(a) Derivation of the low energy EFT from a higher energy theory;
(b) Parametrization of the interactions of an EFT, and their ordering in terms of IR relevance;
(c) Physical Constraints that an EFT must satisfy.
Although the Wilsonian approach has allowed a good understanding of EFTs, there are still general questions
which cannot be answered with our current tools, for instance whether a given EFT can arise as the IR limit of a UV-complete QFT.
In the context of holographically dual string theories, many issues are still not fully understood.
First and foremost is that the classical string theories dual to gauge theories cannot yet be solved.
The only approximation making these tractable is the (bulk\footnote{We refer to as the ``bulk", the spacetime in which strings propagate.
This is always a spacetime with a single boundary. The boundary is isomorphic to the space on which the dual quantum field theory
(gauge theory) lives.}) derivative expansion.
This reflects the effect of the string oscillations on the dynamics of the low-lying string modes.
It is known in many cases and widely expected that such an expansion is controlled by the strength of the QFT interactions.
In the limit of infinite strength, the string becomes stiff and the effects of string modes may be completely neglected.
The theory then collapses to a gravitational theory coupled to a finite set of fields.
Since we are working to leading order in $1/N_c$, the treatment of this theory is purely classical. Observables (typically boundary observables
corresponding to correlators of the dual CFT) are computed by solving second-order non-linear differential equations.
The effects of finite but large coupling are then captured by adding higher-derivative interactions in the gravitational action.
Note that this derivative expansion is not directly related to the IR expansion of the dual QFT.
The bulk theory, as mentioned earlier, has usually more dimensions compared to those of the dual QFT.
One of them is however special: it is known as the ``holographic" or ``radial" dimension, and controls the approach to the boundary of the bulk spacetime. Moreover, it can be interpreted as an ``energy" or renormalization scale in the dual QFT.
The second order equations of motion of the bulk gravitational theory, viewed as evolution equations in the radial direction,
can be thought of as Wilsonian RG evolution equations \cite{deboer}. The boundary of the bulk spacetime corresponds to
the UV limit of the QFT. Although the equations are second order they need only one boundary condition in order to be solved,
as the second condition is supplied by the ``regularity" requirement of the solution at the interior of spacetime.
Here gravitational physics proves particularly helpful: a gravitational evolution equation with arbitrary boundary data
leads to a singularity. Demanding regular solutions gives a unique or a small number of options.
The notion of ``regularity" can however vary, and may include runaway behavior as in the case of holographic
open string tachyon condensation relevant for chiral symmetry breaking \cite{chi}.
The holographic model and associated saddle point we will explore here is rather simple and does not require a very sophisticated machinery.
It has, however, a non-relativistic Schr\"odinger symmetry, and this is a realm that has not been fully explored so far.
\section{ Schr\"odinger geometry} \label{sec:SchrGeometry}
The model we present is comprised of two sectors. The first is gravitational and contains the metric as a single field. It controls the dynamics of energy in the theory, and we will analyze it in this section. The second contains the dynamics of the charge carriers and will be given by a Dirac-Born-Infeld (DBI) action of a
gauge field dual to the conserved current of the carriers. We will analyze this part in a later section where we will calculate the conductivities.
The gravitational action is the Einstein action with a negative cosmological constant
\begin{align}\label{eq:OriginalAct}
I = \frac{1}{16\pi G_5} \int d^5x\sqrt{-g}
\bigg( \mathcal{R} + \frac{12}{\ell^2} \bigg) \;,
\end{align}
where the symbols $g$, $\mathcal{R}$ and $\ell$ are the determinant of the metric,
the scalar curvature and the length scale of the theory related to
the cosmological constant, respectively. We suppress the boundary terms needed for
proper boundary conditions and renormalization, and consider the AdS-Schwarzschild
black hole solution in light-cone coordinates
\cite{Maldacena:2008wh, Kim:2010tf}
\begin{align}
ds^2 =& g_{++} dx^{+2} + 2 g_{+-} dx^+ dx^- + g_{--} dx^{-2} + g_{yy} d y^2 + g_{zz} d z^2 + g_{rr} dr^2 \;,
\label{AdSinlightcone}
\end{align}
where
\begin{gather}
g_{++} = \frac{(1-h) r^2}{4b^2 \ell^2} \,, \quad
g_{+-} = - \frac{(1+h)r^2}{2 \ell^2} \,, \quad g_{--} = \frac{(1-h)b^2 r^2}{\ell^2} \,,\; \quad
g_{yy} = g_{zz} = \frac{r^2}{\ell^2} \,, \nonumber\\
g_{rr} = \frac{\ell^2}{h r^2} \,, \quad
h = 1 - \frac{r_H^4}{r^4}\;, \quad x^+ =b(t+x) \;,\quad
x^- = \frac{1}{2b}(t-x)
\;.
\label{MetricComponents}
\end{gather}
To ensure z=2, we assign $[b]$ (the scaling dimension of $b$ in units of mass) to be $-1$, and thus
$[x^+] = -2$ and $[x^-]= 0$.
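As an independent cross-check (not part of the original derivation), one can verify that the background (\ref{AdSinlightcone})-(\ref{MetricComponents}) solves the equations of motion of (\ref{eq:OriginalAct}): tracing the Einstein equations in five dimensions requires the constant scalar curvature $\mathcal{R}=-20/\ell^2$. A brute-force symbolic sketch in Python/sympy:

```python
import sympy as sp

r, l, b, rH = sp.symbols('r ell b r_H', positive=True)
h = 1 - rH**4 / r**4
xp, xm, y, z = sp.symbols('x_p x_m y z')
X = [xp, xm, y, z, r]

# metric components of eq. (MetricComponents); everything depends on r only
g = sp.zeros(5, 5)
g[0, 0] = (1 - h) * r**2 / (4 * b**2 * l**2)
g[0, 1] = g[1, 0] = -(1 + h) * r**2 / (2 * l**2)
g[1, 1] = (1 - h) * b**2 * r**2 / l**2
g[2, 2] = g[3, 3] = r**2 / l**2
g[4, 4] = l**2 / (h * r**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gam = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, c], X[bb])
                                       + sp.diff(g[d, bb], X[c])
                                       - sp.diff(g[bb, c], X[d]))
                         for d in range(5)) / 2)
         for c in range(5)] for bb in range(5)] for a in range(5)]

# Ricci tensor R_{bc} = d_a Gam^a_{bc} - d_c Gam^a_{ab}
#                       + Gam^a_{ad} Gam^d_{bc} - Gam^a_{cd} Gam^d_{ab}
def ricci(bb, c):
    out = 0
    for a in range(5):
        out += sp.diff(Gam[a][bb][c], X[a]) - sp.diff(Gam[a][a][bb], X[c])
        for d in range(5):
            out += Gam[a][a][d] * Gam[d][bb][c] - Gam[a][c][d] * Gam[d][a][bb]
    return out

# scalar curvature R = g^{bc} R_{bc}
R = sp.simplify(sum(ginv[bb, c] * ricci(bb, c) for bb in range(5) for c in range(5)))
assert sp.simplify(R + 20 / l**2) == 0
```

The check is coordinate-independent, so it confirms that the light-cone rewriting has not altered the AdS-Schwarzschild solution.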
The full 10-dimensional space, AdS$_5 \times S^5$, in light-cone coordinates was written down in {\it e.g.,}
\cite{Ammon:2010eq}\cite{Kim:2010tf}.
We drop the $S^5$ part for the rest of our discussion,
except for the embedding scalar discussed below,
because it is decoupled and becomes an overall factor
in the probe brane DBI action \cite{footnote1}.
To match the non-relativistic isometry group, one of the light-cone directions, $x^+$ with scaling dimension $-2$,
is identified as time, and we fix the momentum of the other light-cone coordinate, $x^-$ \cite{Son:2008ye, Barbon:2008bg}.
The thermodynamic properties of the ALCF are identical to those of Schr\"odinger space \cite{Herzog:2008wg, Maldacena:2008wh,
Yamada:2008if, Kim:2010tf}, as explained below in section \ref{sec:roleofb}. This coordinate system corresponds to
the infinite momentum frame along a single spatial direction, which we take here to be $x$.
In this frame, the nontrivial physics occurs in the two transverse spatial dimensions $y,z$.
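As a quick symbolic check (a sketch of ours, not from the original text), the components (\ref{MetricComponents}) follow from substituting $t = x^+/(2b) + b\,x^-$ and $x = x^+/(2b) - b\,x^-$ into the $(t,x)$ part of the AdS-Schwarzschild metric, $(r^2/\ell^2)(-h\,dt^2+dx^2)$:

```python
import sympy as sp

r, l, b, rH = sp.symbols('r ell b r_H', positive=True)
h = 1 - rH**4 / r**4

# infinitesimal displacements along x^+ and x^-
dxp, dxm = sp.symbols('dx_p dx_m')
# from x^+ = b(t+x), x^- = (t-x)/(2b):
dt = dxp / (2 * b) + b * dxm
dx = dxp / (2 * b) - b * dxm

# (t,x) part of AdS-Schwarzschild: (r^2/ell^2)(-h dt^2 + dx^2)
expr = sp.expand((r**2 / l**2) * (-h * dt**2 + dx**2))

g_pp = expr.coeff(dxp, 2)
g_pm = expr.coeff(dxp, 1).coeff(dxm, 1) / 2  # cross term is 2 g_{+-} dx^+ dx^-
g_mm = expr.coeff(dxm, 2)

# compare with eq. (MetricComponents)
assert sp.simplify(g_pp - (1 - h) * r**2 / (4 * b**2 * l**2)) == 0
assert sp.simplify(g_pm + (1 + h) * r**2 / (2 * l**2)) == 0
assert sp.simplify(g_mm - (1 - h) * b**2 * r**2 / l**2) == 0
```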
\subsection{Schr\"odinger geometry and its interpretation}
The initial geometry is the AdS-Schwarzschild black hole, which is known to describe a strongly coupled Conformal Field Theory (CFT) at finite temperature.
However, here it is described in the light-cone coordinate system and since $x^{+}$ will be taken as time, the symmetry is
broken to a z=2 Schr\"odinger symmetry. In this sense, the bulk background, equations (\ref{eq:OriginalAct})-(\ref{MetricComponents}),
describes the strongly coupled ``glue'' that interpolates between conformal symmetry at high temperatures and a z=2 Lifshitz-like
non-relativistic scaling symmetry near $T=0$.
A qualitative way to understand this is to appreciate that in these coordinates the ``speed of propagation" of signals
in the bulk spacetime asymptotes to zero as we approach the black-hole horizon.
This is a well-known effect in black-hole spacetimes \cite{k} in this coordinate system, also known as the Carrollian limit.
This transition, from AdS critical (z=1) to Lifshitz critical (z=2), is a key ingredient of the gravitational black hole background.
It is important to identify where the transition occurs. In the bulk background, this is controlled by the parameter $b$: a length scale that parametrizes precisely this transition,
in a way that preserves scale covariance.
In brief, the bulk geometry is an interpolation between a z=1 geometry in the UV and a z=2 geometry in the IR.
The associated dual theory should likewise interpolate between two energy regimes, one where it has the usual relativistic scale symmetry and another where it has the Lifshitz symmetry.
It should be noted that the gravitational background is 5 dimensional. Apart from the holographic direction, there is a time direction and 3 regular space directions $x,y,z$.
In light-cone coordinates, $x^{\pm}\propto t\pm x$, one of the spatial coordinates, namely $x$, plays a special role.
The Schr\"odinger frame can be considered as an infinite boost in the $x$ direction (this is the infinite momentum frame in QFT),
as we discuss below. In this limit, all dependence on the $x$ spatial direction is redundant, hence the physics depends only on the two spatial directions $y,z$.
Therefore, it is on these two spatial directions that the theory depends, and the dual quantum field theory is 2+1 dimensional.
\subsection{The role and interpretation of the parameter $b$}\label{sec:roleofb}
There are two control parameters in this model, $b$ and $E_b$; the latter will be introduced in the following section.
Both are dimensionful, but they can form dimensionless combinations, either with each other or with the temperature.
The significance of the parameter $b$ can be appreciated physically from the thermodynamics of
the same system described in \cite{Maldacena:2008wh}\cite{Kim:2010tf}. These are as follows:
\begin{align}
E = \frac{\pi^3 \ell^3 b^4 T^4 V_3}{16 G_5 }
\;,\quad
J = - \frac{\pi^3 \ell^3 b^6 T^4 V_3}{4 G_5}
\;,\quad
S = \frac{\pi^3 \ell^3 b^4 T^3 V_3}{4G_5}
\;, \quad
\Omega_H = \frac{1}{2b^2}
\;,
\label{sc}\end{align}
where we have defined $V_3 := \int dx^-dydz$ and used $r_H = \pi \ell^2 b T$.
$\ell$ is the AdS length, while $G_5$ is the five-dimensional Newton's constant. $J$ is the charge associated with the translational symmetry in $x^-$, that is conserved in the Schr\"odinger geometry, while $\Omega_H$
is the associated chemical potential.
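As a consistency check of ours (using only the expressions in (\ref{sc}) together with $\Omega_H=1/2b^2$), these quantities satisfy the first law $dE = T\,dS + \Omega_H\,dJ$ when $T$ is varied at fixed $b$; a short symbolic sketch:

```python
import sympy as sp

T, b, l, G5, V3 = sp.symbols('T b ell G_5 V_3', positive=True)
c = sp.pi**3 * l**3 * V3 / (16 * G5)  # common prefactor in eq. (sc)

E  = c * b**4 * T**4           # energy
J  = -4 * c * b**6 * T**4      # charge of translations in x^-
S  = 4 * c * b**4 * T**3       # entropy
Om = 1 / (2 * b**2)            # chemical potential Omega_H

# first law dE = T dS + Omega_H dJ, varying T at fixed b
assert sp.simplify(sp.diff(E, T) - (T * sp.diff(S, T) + Om * sp.diff(J, T))) == 0
```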
To understand the non-relativistic z=2 scaling, the mass dimensions of various parameters are $[b]=-1$, $[x^+] =-2$,
$[x^-]=0$, $[y]=[z]=-1$ and $[V_3] =-2$.
From $[G_5] =-3$, we obtain $[J]=0$, $[\Omega_H] =2$, $[M]=2$, $[S]=0$, $[\beta]= -2$ and $[T]=2$.
These are consistent with the dimensions of the non-relativistic systems with the dynamical exponent z=2, as described in appendix F in \cite{Maldacena:2008wh}.
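The dimension counting above can be tallied mechanically; a minimal sketch, assuming in addition $[\ell]=-1$ (an ordinary length, an assumption not stated explicitly in the text):

```python
# z=2 scaling dimensions in units of mass, following the assignments in the text;
# [ell] = -1 is our assumption for the AdS length
dim = {'ell': -1, 'b': -1, 'T': 2, 'V3': -2, 'G5': -3}

def d(**powers):
    # total mass dimension of a monomial prod(q**powers[q])
    return sum(dim[k] * n for k, n in powers.items())

dim_E     = d(ell=3, b=4, T=4, V3=1, G5=-1)   # E in eq. (sc)
dim_J     = d(ell=3, b=6, T=4, V3=1, G5=-1)   # J
dim_S     = d(ell=3, b=4, T=3, V3=1, G5=-1)   # S
dim_Omega = d(b=-2)                           # Omega_H

# matches [M]=2, [J]=0, [S]=0, [Omega_H]=2 quoted in the text
assert (dim_E, dim_J, dim_S, dim_Omega) == (2, 0, 0, 2)
```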
Therefore, the parameter $b$ can be associated with the chemical potential for the conserved particle number of the Schr\"odinger symmetry.
The dimensionless quantity $b^2T$ is associated with the crossover behavior between the z=1 and z=2 regimes of the black hole solutions.
It can be seen from (\ref{sc}) that $b$ also controls the system's response to external pressure. Therefore, different values of $b$ correspond to different external pressures for the ``glue'' ensemble. External pressure is a widely used quantum tuning parameter to study the evolution of the ground-state electronic properties in a range of strange metals, including organic superconductors, heavy fermion systems and other strongly correlated electron systems.
\section{Holographic DBI transport}\label{sec:transport}
We will now add charge carriers to the system using D-branes.
To calculate the transport properties, we follow the standard DBI probe approach
\cite{Karch:2007pd}\cite{Ammon:2010eq}\cite{Kim:2010tf}.
We introduce $N_f$ D7 branes on the background and work in the probe limit,
$N_f\ll N_c$.
The D7 branes wrap three angular directions ($S^3$) of the $S^5$ in addition to filling the background (eq. (\ref{AdSinlightcone})).
This embedding leaves two world-volume scalars on the branes. One scalar is chosen to be
trivially constant and the other a function of the radial coordinate, $\theta (r)$. Hence, the D7 brane has
the same metric as eq. (\ref{AdSinlightcone}) with the simple modification $g_{rr} \rightarrow
g_{rr}^{D7} = g_{rr} + \theta' (r)^2$.
We consider the $U(1)$ world-volume gauge field $A_{\mu}$, which is dual to the conserved current $J^{\mu}$ of the charge carriers.
To have an electric field $E_b$ only along the $x^+$ direction, we choose the gauge fields as
\begin{gather}
A_+ = {E_b\over 2 \pi \ell_s^2 } y + h_+ (r) \;, \quad A_- = {b^2 E_b\over \pi \ell_s^2} y + h_- (r) \;, \quad
A_y = {E_b b ^2\over \pi \ell_s^2} x^- + {h_y} (r) \;.
\end{gather}
The light-cone electric field is a vector. We turn it on in one direction only (the $y$ direction above).
The system, however, is rotationally invariant despite appearances, for reasons that are explained
in appendix \ref{sec:rotationInvariance}, along with a more detailed calculation of the transport.
The resulting probe DBI action has the form
\begin{equation}
S_{D7} = - N_f T_{D7} \int d^8 \xi \sqrt{-\det (g_{D7} + 2\pi \ell_s^2 F)} \ , \label{ProbeAction}
\end{equation}
where $T_{D7}, \xi$ and $F$ are the D-brane's tension,
the world-volume coordinate and the $U(1)$ field strength, respectively.
There are three constants of motion, which we identify as three currents
$\langle J^\mu \rangle = \frac{\delta {\cal L}}{\delta h_\mu'}$, where $\mu = +, -$ and $y$.
We solve the equations of motion in terms of these currents, and obtain the on-shell action along the lines of
\cite{Karch:2007pd}. Furthermore, we demand that the square root in the action be real all the way from the horizon, located at $r=r_H$,
to the boundary at infinity. As shown in appendix \ref{sec:rotationInvariance}, this delivers two important relations:
\begin{equation}
\langle J^- \rangle = -\frac{g_{+-}(r_*) }{g_{--}(r_*) } \langle J^+ \rangle\;,
\end{equation}
and Ohm's law, $\langle J^y \rangle = \sigma E_b$, with
\begin{align}
&\sigma=\sigma_0 \sqrt{{J^2\over t^2 A(t)}+{t^3\over \sqrt{A(t)}}}\;, \qquad \; A(t)=t^2+\sqrt{1+t^4} \;.
\label{conductivity}
\end{align}
where $\sigma_0={ {\cal N} b\cos^3 \theta\sqrt{2b E_b}}$, and we use the dimensionless scaling variables
\begin{equation}
t={\pi\ell Tb\over \sqrt{2b E_b}}\;\;,\;\;J^2={64\sqrt{2} \langle J^+ \rangle ^2\over
( {\cal N} b\cos^3 \theta)^2 (2b E_b)^3}.
\label{cond}\end{equation}
Equation (\ref{conductivity}) is particularly interesting in the regime
$t\ll 1$, $J\gg 1$:
\begin{equation}
\rho=\frac{1}{\sigma} \approx {t\over J\sigma_0}=
\frac{ \pi \ell b \sqrt{E_b b }}{ \langle J^+ \rangle }~ T \;.
\end{equation}
Therefore, the Ohmic resistivity is linear in temperature in the low-$T$ regime of the model.
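This low-temperature limit is easy to verify numerically from (\ref{conductivity}); a sketch in arbitrary units, with $\sigma_0$ set to $1$:

```python
import math

def sigma(t, J, sigma0=1.0):
    # eq. (conductivity): sigma = sigma0 * sqrt(J^2/(t^2 A) + t^3/sqrt(A)),
    # with A(t) = t^2 + sqrt(1 + t^4)
    A = t * t + math.sqrt(1 + t**4)
    return sigma0 * math.sqrt(J * J / (t * t * A) + t**3 / math.sqrt(A))

# in the regime t << 1, J >> 1 the resistivity is linear: rho ~ t/(J sigma0)
t, J = 1e-3, 1e3
rho = 1 / sigma(t, J)
assert abs(rho / (t / J) - 1) < 1e-3
```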
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.55\textwidth]{res.jpg}
\caption{The exponent of $\frac{d \ln \rho (T)}{d \ln T}$
as a function of a tuning parameter $\frac{1}{\sqrt{E_b}}$ and temperature $T$ at low temperatures.
Note that the linear temperature dependence of the resistivity extends over the low temperature range, with
$\rho \sim T + T^2$.
Compare this plot to Fig. 3 of \cite{Cooper2009}.
}
\label{fig:LCResT}
\end{center}
\end{figure}
We now focus on the first term of eq. (\ref{conductivity}).
At low temperatures, this term dominates over the second one, namely when $t\ll J^{1\over 3}$, $J\gg 1$.
Notably, the first term is due to the drag force exerted by the medium on heavier charge carriers (drag limit)
\cite{Karch:2007pd}.
In this limit, the resistivity reads
\begin{align}
\rho \approx {t\over J\sigma_0}\sqrt{t^2+\sqrt{1+t^4}} \;.
\label{DragLimit}
\end{align}
The drag mechanism here is purely stringy and is explained below in subsection \ref{sec:SCTM}.
By increasing the scaling variable $t$, the temperature
dependence of the resistivity crosses over from linear,
$\rho \approx {t\over J\sigma_0}$, to quadratic, $\rho \approx \frac{\sqrt{2}~t^2}{J\sigma_0}$.
This crossover is governed by the bulk parameter $b$, setting the scale of the Lifshitz symmetry.
$E_b$, on the other hand, is a more interesting parameter. Its direct physical interpretation is not
straightforward, as it is the light-cone component of an electric field in the boost direction $x$.
In section \ref{sec:RoleOfEb}, we explain the interpretation of
$E_b$ and discuss why we expect $E_b\to 0$ to correspond to the heavily overdoped region and $E_b\to\infty$ to optimal doping.
The crossover behavior observed is due to the fact that the gravitational background (\ref{AdSinlightcone})
effectively interpolates between z=1 (AdS) symmetry in the UV and z=2 Lifshitz symmetry in the IR.
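The linear-to-quadratic crossover of the drag-limit resistivity (\ref{DragLimit}) can be exhibited numerically through the logarithmic derivative $d\ln\rho/d\ln t$, the same exponent plotted in Fig.~\ref{fig:LCResT}; a sketch, with the constant prefactor $1/(J\sigma_0)$ dropped since it cancels in the log-derivative:

```python
import math

def rho_drag(t):
    # drag-limit resistivity, eq. (DragLimit), up to the constant 1/(J sigma0)
    return t * math.sqrt(t * t + math.sqrt(1 + t**4))

def exponent(t, eps=1e-6):
    # finite-difference logarithmic derivative d ln(rho)/d ln(t)
    return (math.log(rho_drag(t * (1 + eps))) - math.log(rho_drag(t))) / math.log(1 + eps)

assert abs(exponent(1e-4) - 1) < 1e-3   # z=1 (AdS) regime: rho ~ t
assert abs(exponent(1e4) - 2) < 1e-3    # z=2 (Lifshitz) regime: rho ~ sqrt(2) t^2
```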
\subsection{Strong-coupling transport mechanisms}\label{sec:SCTM}
We will now comment on the resistivity results discussed in the previous section.
In strongly coupled systems described holographically, the conductivity of charge carriers has typically two contributions
(that add quadratically), the ``drag" term and the ``pair-creation" term \cite{Karch:2007pd}.
The physical picture corresponding to the drag contribution is that a charged ``quark''
moves through the strongly coupled (glue) plasma, dragging behind it its flux, represented
by the (fundamental) string. The ``string'' should be considered as the glue field attached to the ``quark''.
There is a world-volume horizon on that string, which has been interpreted as separating the part of the
tail that has thermalized via interactions with the plasma from the part that is closer to, and follows, the ``quark''.
The quark loses energy because of the strong interactions with the plasma.
A Drude-like formula relates this energy loss and the terminal velocity to the conductivity (drag conductivity).
Although the Drude formula is classical and its physics is well understood, the energy loss at strong
coupling is poorly understood.
The same mechanism for QCD is more or less experimentally tested in heavy-ion collisions. However,
there is no alternative theoretical understanding of the dependence of the energy loss on terminal velocity, etc.,
apart from general symmetry considerations.
A clear picture exists in the gravitational description: the resistance is due to the energy
loss of a string moving in the appropriate gravitational background.
The other contribution is expected to be due to light charged pairs created from the vacuum contributing to the conductivity.
This contribution is Boltzmann suppressed and controlled, in our model (and in \cite{Karch:2007pd}), by the coefficient
${\cal N}$ given in equations (\ref{conductivity}) and (\ref{cond}) above.
In full-blown holographic models, this depends explicitly on the
UV mass of the charge carriers. Notably, this contribution comes from strong
coupling, and no alternative calculations exist in the same regime for comparison.
This term picks up at higher temperatures and is not relevant to the regimes discussed below.
Here, we are interested in the very low temperature regime to study the
possible presence of quantum criticality and the associated
superconducting mechanism.
Therefore, the model includes a bulk geometry representing critical ``glue" that crosses over from z=1 to z=2
behavior in the IR and massive charge carriers (as probes) moving in this background, losing energy via
the ``drag" strong coupling mechanism.
\subsection{The role and interpretation of the parameter $E_b$}\label{sec:RoleOfEb}
The parameter $E_b$ controls the physics of charge transport in analogy to experimental tuning parameters
such as charge carrier doping, pressure, electric field or in-plane magnetic field.
A priori, $E_b$ is a light-cone electric field component, $E_b=F_{{+y}}$.
More precisely, as detailed in appendix \ref{sec:rotationInvariance}, it is a vector with two components,
$E^y_b=F_{{+y}}$ and $E^z_b=F_{{+z}}$.
However, as shown there, we may set $E_b = \sqrt{(E_b^y)^2+(E^z_b)^2}$
and describe the transport properties in terms of $E_b$ without loss of generality.
In the same appendix we also show that, despite the fact that the vector light-cone electric field is non-zero,
transport is in fact rotationally invariant.
Since $E_b$ is the only non-zero electric field component and, in particular, does not break rotational invariance
in the transverse $y,z$ directions, its presence demands an interpretation. Such an electric field can be obtained
by an infinite boost along the $x$ direction from a standard electric field $E_y$ in the $y$ direction.
Under a boost along the $x$ direction with boost parameter $\lambda$, which diverges as $v\to c$,
$$
F_{+y}'={\lambda\over 2}E_y\,\,\,,\,\,\, F_{-y}'={1\over 2\lambda}E_y \;.
$$
Therefore, to arrive at our set-up we need to send $\lambda\to \infty$ and $E_y\to 0$ so that the product is finite
$$
E_b=\lim_{\lambda\to\infty\atop E_y\to 0} {\lambda\over 2}E_y \;.
$$
Therefore, a non-zero $E_b$ reflects an infinite boost of the system in the $x$ direction and an infinitesimal
electric field in the $y$ direction. This limiting procedure explains why we should not expect rotational invariance in the $y$-$z$ plane
to be broken as demonstrated explicitly in appendix \ref{sec:rotationInvariance}.
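This limiting procedure can be checked with a few lines of arithmetic (a sketch; the target value `Eb` below is arbitrary, not a physical input):

```python
# Numerical check of the infinite-boost limit: keep the product
# E_b = (lambda/2) * E_y fixed while lambda -> infinity, E_y -> 0.
Eb = 1.0  # arbitrary target value of the light-cone field (assumption)

for lam in (1e2, 1e4, 1e6):
    Ey = 2.0 * Eb / lam           # infinitesimal rest-frame electric field
    F_plus_y = 0.5 * lam * Ey     # boosted component F'_{+y} = (lambda/2) E_y
    F_minus_y = Ey / (2.0 * lam)  # boosted component F'_{-y} = E_y/(2 lambda)
    print(lam, F_plus_y, F_minus_y)
# F'_{+y} stays pinned at E_b while F'_{-y} ~ E_b/lambda^2 -> 0
```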
To interpret the effect of varying $E_b$, we have to follow it through the passage to the infinite momentum frame.
This translates into varying the ``speed of light'' $c$ that enters the boost.
Therefore, fixing the same infinitesimal $E_y$ in the rest frame and varying the ``speed of light'' is equivalent to varying $E_b$
in the infinite momentum frame; in particular, as $c\to 0$, $E_b\to\infty$.
In our metric, this variation is implemented by varying the IR scale $b$ that controls the passage between $z=1$ and $z=2$ scaling
in the bulk geometry. This is also visible in all our expressions for the conductivity in terms of the scaling variables,
where $E_b$ always appears in the combination $bE_b$.
Therefore, $E_b$ should not be thought of as an external field but as an internal variable parameter of the system.
By the relativity principle, we conclude that the infinite momentum frame captures the physics of charge carriers in two regimes: \\
(a) The $z=1$ CFT regime when $t={\pi\ell Tb\over \sqrt{2b E_b}}\gg 1$. \\
(b) The $z=2$ Lifshitz-like regime when $t={\pi\ell Tb\over \sqrt{2b E_b}}\ll 1$.
The transition temperature is controlled by $E_b$. $E_b\to 0$ maps to the large ``doping'' region where the resistivity is quadratic
at all scales. This is the quadratic resistivity of a CFT and, as is now well known, is not necessarily associated with fermions or bosons
(in the ${\cal N}=4$ example, it is both). $E_b\to \infty$ maps to optimal doping, where the resistivity is linear at all scales.
There are several side arguments that support this map.
1. In parametrizing the resistivity as $\rho=a_1 T+a_2 T^2$ at low temperature, experiments indicate $a_2$
to be constant and $a_1$ to decrease rapidly with doping \cite{Cooper2009}.
In our model, $a_2$ is indeed independent of $E_b$, while $a_1\sim \sqrt{E_b}$ and vanishes across the
``overdoped regime'' ($E_b\to 0$).
2. The scaling variable for the magnetic field is ${\cal B}\sim {B_b \over E_b}$
and the conductivities depend on ${\cal B}$ alone.
This is in accordance with experimental observations, where the effects of the magnetic field become stronger as one moves to the
overdoped region \cite{tyler}.
This is discussed in more detail below. Notably, in the families of
strange metals one may also vary the chemical potential using external
magnetic and electric fields, and not only chemical doping.
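The drag resistivity of point 1 can be illustrated with a toy parametrization (all dimensionful constants set to one; only the $E_b$ scaling of the coefficients is taken from the model, the normalization is an assumption):

```python
import math

# Toy form of the model's low-temperature resistivity:
#   rho(T) = a1*T + a2*T^2  with  a1 ~ sqrt(E_b) and a2 independent of E_b.
def rho(T, Eb, a2=1.0):
    a1 = math.sqrt(Eb)
    return a1 * T + a2 * T**2

# The linear-to-quadratic crossover sits where a1*T ~ a2*T^2, i.e. at
# T* ~ sqrt(E_b): the linear regime extends to ever higher temperatures
# as E_b -> infinity ("optimal doping") and disappears as E_b -> 0
# (the "overdoped", Fermi-liquid-like limit).
for Eb in (0.01, 1.0, 100.0):
    T_star = math.sqrt(Eb)   # crossover temperature of the toy model
    print(Eb, T_star, rho(T_star, Eb))
```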
It is not entirely clear at the moment how parameters such as the ``internal light velocity''
(as defined by the holographic metric) are related to standard physical properties
of the material, such as the charge density and the velocity of quasiparticles. To ascertain this,
a more detailed analysis is necessary, in which several new ingredients should be
considered, for instance the calculation of correlation functions of currents,
couplings to fermions, and potentially others. Such an analysis may be necessary to
provide further features for this class of ideas. However, it is beyond the
focus of the present effort.
\section{Holographic Hall transport} \label{sec:HallTransport}
In this section, we analyze charge transport in the presence of a magnetic field, following \cite{O'Bannon:2007in}.
The detailed calculation is carried out in appendix \ref{sec:HallCal}. The analysis of the behavior of the conductivity in different regimes can be found
in appendix \ref{apb}.
The gauge fields now are
\begin{align}
A_+ = {E_b\over 2 \pi \ell_s^2 } y + h_+ (r) \;, ~~ A_- = {b^2 E_b\over \pi \ell_s^2} y + h_- (r) \;, ~~
A_y = {E_b b ^2\over \pi \ell_s^2} x^- + {h_y} (r) \;, ~~ \; A_z = {B_b y\over 2 \pi \ell_s^2} + h_z (r) \;.
\label{GaugeHall}
\end{align}
This configuration includes a light-cone electric field, $E_b$,
along the $y$ direction and a magnetic field, $B_b$, perpendicular to the $y, z$ directions.
The DBI probe action eq. (\ref{ProbeAction}) has four conserved currents,
$\langle J^\mu \rangle$, related to the variation of $h_\mu' (r)$
with $\mu = +, -, y$ and $z$.
The exact Ohmic conductivity in the presence of a magnetic field is
\begin{align}
\sigma^{yy}
= &\sigma_0\frac{\sqrt{{\cal F}_+ J^2
+ t^4\sqrt{{\cal F}_+} {\cal F}_- }} {{\cal F}_-} \;, \quad \;\sigma^{yz}
=\bar\sigma_0\frac{{\cal B}}{{\cal F}_-} \;,
\label{electricConductivitywithB}
\end{align}
where $\bar\sigma_0=\frac{\langle J^+ \rangle }{b E_b}$, $\sigma_0$ was defined earlier (eq. (\ref{conductivity})), and $t, J$ were defined in eq. (\ref{cond}).
Here
\begin{equation}
{\cal F}_\pm= \sqrt{\left({\cal B}^2 + t^4 \right)^2
+ t^4 } \mp {\cal B}^2 + t^4\;, \quad \;\;{\cal B}={B_b\over 2b E_b} \;.
\label{bcond}\end{equation}
Note that eqs. (\ref{electricConductivitywithB}) and (\ref{bcond}) reduce to eq. (\ref{conductivity}) for ${\cal B}=0$.
For a rotationally symmetric system in the $y, z$ plane, the resistivity matrix is defined as the inverse of the conductivity matrix.
The inverse Hall angle is defined as the ratio of the Ohmic conductivity to the Hall conductivity,
$ \cot \Theta_H =\frac{\sigma^{yy}}{\sigma^{yz}} $. We also define the Hall coefficient $R_H$ and the magnetoresistance ${\Delta\rho\over \rho}$ as
\begin{align}
R_H={\rho_{yz}\over B} \;, \quad \;
{\Delta\rho\over \rho}={\rho_{yy}(B)-\rho_{yy}(0)\over \rho_{yy}(0)} \;.
\end{align}
For $J$ sufficiently large, the resistivities are given by drag contributions. There are three relevant regimes:\\
(a) ${\cal B}\ll t\ll 1$ with
\begin{equation}
R_H\simeq {\bar\sigma_0 \over \sigma_0^2 J^2 }\;, \qquad \;
\cot\Theta _H\simeq {\sigma_0 J\over \bar\sigma_{0} {\cal B}}t
\;, \qquad \; {\Delta \rho\over \rho}\simeq {3\over 2}{{\cal B}^2\over t^2} \;,
\label{InverseAngle1}\end{equation}
(b) ${\cal B}\ll t^2$ and $t\gg 1$ with
\begin{equation}
R_H\simeq {\bar\sigma_0 \over \sigma_0^2 J^2 }\;,\qquad\;
\cot\Theta _H\simeq {\sqrt{2}\sigma_0 J\over \bar\sigma_{0} {\cal B}}t^2\;, \qquad \;
{\Delta \rho\over \rho}\simeq {{\cal B}^2\over t^4} \;,
\label{InverseAngle3}\end{equation}
(c) ${\cal B}\gg t$ and ${\cal B}\gg t^2$ with
\begin{equation}
R_H\simeq {2\over \bar\sigma_0 }\;, \qquad
\cot\Theta _H\simeq {\sigma_0 J\sqrt{1+4{\cal B}^2}\over \bar\sigma_{0} \sqrt{2}{\cal B}^2}~t^2\;, \qquad
{\Delta \rho\over \rho}\simeq {2\sqrt{2}\sigma_0^2 J^2 t^2 \over \bar\sigma_0^2 t {\cal A} } \;.
\label{InverseAngle2}\end{equation}
For a summary of these properties we refer the reader to Fig. \ref{BT}.
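Regime (a) can be verified numerically against the exact conductivities of eqs. (\ref{electricConductivitywithB}) and (\ref{bcond}) (a sketch; $\sigma_0=\bar\sigma_0=1$, and the values of $t$, ${\cal B}$, $J$ below are illustrative, not physical):

```python
import math

# Check of regime (a), B << t << 1 with J large, against the exact
# conductivities; B stands for the scaling variable calB = B_b/(2 b E_b).
sigma0 = sigma0bar = 1.0
t, B, J = 1e-2, 1e-4, 1e3     # scaling variables: B << t << 1, J >> 1

S = math.sqrt((B**2 + t**4)**2 + t**4)
Fp = S - B**2 + t**4          # F_+
Fm = S + B**2 + t**4          # F_-

sigma_yy = sigma0 * math.sqrt(Fp * J**2 + t**4 * math.sqrt(Fp) * Fm) / Fm
sigma_yz = sigma0bar * B / Fm

cot_exact = sigma_yy / sigma_yz
cot_approx = sigma0 * J * t / (sigma0bar * B)   # asymptotic form of regime (a)
print(cot_exact, cot_approx)  # the two agree in this corner of parameter space
```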
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=0.3\textwidth]{cot.jpg} ~~
\includegraphics[width=0.3\textwidth]{Resistivity-DragPair1.jpg} ~~
\includegraphics[width=0.3\textwidth]{LC-CotH-DragPair1.jpg}
\end{tabular}
\caption{Left: Temperature ($T$) and magnetic field ($B$) dependence of the exponent of $\cot\Theta_H$ in the low-$T$, low-$B$ region. Middle: The effective power $n$ of the resistivity $\rho \sim T^n$ at zero magnetic field, as a function of temperature ($T$) and the effective doping parameter $1/\sqrt{E_b}$. For $T\lessapprox 8$, the resistivity is dominated by the drag mechanism, while at $T \gtrapprox 8$ it is dominated by the pair-creation term. Right: The effective power of $\cot\Theta_H$ at small magnetic field, as a function of temperature and $1/\sqrt{E_b}$; the drag and pair-creation terms dominate in the same temperature ranges as in the middle panel. Note that here the power ranges from 1 to 3. }
\label{BT}
\end{center}
\end{figure}
The above-mentioned transport properties can be compared successfully to those of strange metals as described in the section below.
\section{Comparison to experiment\label{exp}}
Since the discovery of the high-$T_c$ cuprate superconductors 25 years ago, there have been significant experimental efforts to identify the physical mechanism governing their unconventional superconducting and normal state properties. Magnetotransport has been at the heart of studying the emerging properties of superconductors. Here, we focus our discussion on characteristic quantities which have puzzled the condensed matter community and remain largely unexplained. We especially discuss the region where the charge carrier concentration is sufficiently high to span the phase diagram from optimal superconductivity (optimal doping) towards its suppression due to excessive carrier concentration (overdoping). For this we chose to address two prototypical copper oxide superconductors, namely $La_{2-x}Sr_xCuO_4$ (LSCO) and $Tl_2Ba_2CuO_{6+\delta}$ (TBCO), for which it is possible to span the above-mentioned doping range. The normal state of these superconductors may be accessed by suppressing superconductivity, for example through the substitution of $Zn$ for $Cu$ (see {\it e.g.}, \cite{Ong1991,Takagi1992,Naqib2003}) or by a sufficiently high applied magnetic field \cite{Daou2009, Cooper2009}. For a review concerning the anomalous transport properties of cuprates, see {\it e.g.,} \cite{Hussey:2008review}.
It has been generally accepted that at optimal doping, i.e., where the absolute value of the superfluid density is highest, the resistivity in the normal state of cuprate superconductors varies linearly with $T$. This unconventional behavior has often been discussed in terms of quantum criticality. As charge carrier doping increases, the linearity gives way to higher power laws and eventually a more or less Fermi liquid regime emerges. However, recent low temperature transport data \cite{Cooper2009} on LSCO have challenged earlier works \cite{boebinger2009}. In \cite{Cooper2009}, the authors reported that the suppressed superconducting region is replaced by a ``2D strange metal'', with the Ohmic resistivity at low temperature behaving as $\rho \sim T + T^2$. In particular, the doping region where the resistivity varies linearly with $T$ is broader than expected and survives into the heavily overdoped side of the phase diagram. This result suggests a line of critical points and therefore a significant departure from our earlier understanding of the possible role of the above-mentioned linearity and of a well-defined, singular quantum critical point coinciding with optimal superconductivity.
This result is in fact consistent with an earlier observation on TBCO at very low temperatures, $T<30$ K; see Figs. 5 and 6 in \cite{mackenzie}. Notably, a line of critical points has recently been argued to exist in another group of unconventional superconductors on the border of magnetism, namely the $f$-electron systems \cite{zaum}.
The inverse Hall angle has been shown to vary as $\cot \Theta_H \sim T+ T^2$ at very low temperatures in TBCO \cite{mackenzie}, which is surprising on the basis of the conventional wisdom of two scattering rates in the cuprate superconductors. In particular, the inverse Hall angle and the resistivity behave in a similar manner, namely as $\cot \Theta_H \sim \rho \sim T + T^2$ at very low temperature, $T<30$ K. This is clearly depicted in Fig. 9 of \cite{mackenzie}. It has been argued that the two scattering rates observed in the overdoped region of TBCO collapse onto a single scattering rate as $T \rightarrow 0$, in the temperature range $T<30$ K \cite{mackenzie}.
The similar behavior of the resistivity and the inverse Hall angle might be considered the realm of a Fermi liquid, yet their strong linear temperature dependence over a broad range of doping is a challenge
\cite{mackenzie, Cooper2009, boebinger2009}.
For LSCO, similar behavior was observed for the inverse Hall angle \cite{hwang}.
Here, we compare the results of our model with the experimental results. We focus our attention on several key and outstanding features of the normal state of cuprate superconductors, namely the in-plane resistivity, in-plane Hall coefficient, inverse Hall angle,
in-plane magnetoresistance, and the modified K\"ohler rule. We start by summarizing the main features of the transport properties described by our model.
\begin{enumerate}
\item In the absence of an applied magnetic field, there is a linear resistivity near $T=0$, which changes to quadratic at higher temperatures.
The coefficient of the quadratic term is independent of $E_b$, whereas that of the linear term is proportional to $\sqrt{E_b}$,
which is directly related to the inverse of the doping.
\item In the presence of a magnetic field, $\cot\Theta_H$ is linear when the resistivity is linear, and quadratic when the resistivity is quadratic.
This is the behavior seen in strange metals at very low temperatures (for example below 25 K in overdoped $Tl_2Ba_2CuO_{6+\delta}$).
At higher temperatures however, the quadratic behavior in real materials dominates the overdoped side.
\item The magnetoresistance calculated is in agreement with experimental data at low temperatures. The model predicts that near $T=0$ the magnetoresistance dives sharply towards zero.
\item The universal scaling behavior of Hall coefficient, available in the experimental literature, is qualitatively very similar to the scaling function
$\frac{1}{t \sqrt{A}}$ of our model.
\item The ``modified K\"ohler'' rule is known to be valid for cuprates and other related materials. K\"ohler's rule has been shown experimentally to fail at relatively
high temperatures in the overdoped region. It has been argued that this is due to a superconducting instability \cite{kimura}. We show that the modified K\"ohler rule is also compatible with the correlation between $\cot\Theta_H$ and the resistivity
observed experimentally at low temperatures. The model therefore predicts that at sufficiently low temperatures both the K\"ohler and modified K\"ohler rules are valid in the overdoped region.
\end{enumerate}
\subsection{Resistivity}
Let us concentrate on the recent experimental observations on LSCO and TBCO at very low temperature. In overdoped TBCO the resistivity in the millikelvin regime follows $\rho\sim T+T^2$ \cite{mackenzie}. Recently, very low temperature resistivity data on LSCO over a wide range of doping, namely from slightly underdoped $p=0.15$ to heavily overdoped $p=0.33$, indicate that the suppressed superconducting region in the overdoped regime has an unexpected $\rho=a_0+a_1T+a_2T^2$ behavior, with a particularly interesting linear temperature dependence of the resistivity at very low temperatures \cite{Cooper2009}. Furthermore, $a_2$ was found to be doping independent, while $a_1$ decreased rapidly with overdoping. Earlier works in the overdoped region (above $p\sim 0.2$) of LSCO reported a novel power law $\rho = \rho_0 + A T^n$ with $n\sim 1.5$ dominating the resistivity over a wide temperature range (see, for example, Fig. 1 in \cite{Takagi1992}). Here we compare our results to the above-mentioned reports.
We focus on the drag limit mentioned above. The drag term, proportional to $J^2$ in eq. (6), dominates in the low temperature limit. Here, the resistivity has two contributions, one linear and one quadratic in $T$,
\begin{align}
\rho \approx a_1 T = \left( \frac{ \sqrt{ E_b / b}}{\ell \pi } \right)
\frac{\ell^2 \pi^2 b^2}{ \langle J^+ \rangle } T \;, \qquad
\rho \approx a_2 T^2 = \frac{\ell^2 \pi^2 b^2 }{ \langle J^+ \rangle } T^2 \;.
\end{align}
$a_2$ is doping independent whereas $a_1$ decreases rapidly with doping, in agreement with our model. We may therefore map $\frac{1}{\sqrt{E_b}}$ to the doping parameter, as depicted above in Fig. \ref{fig:LCResT}.
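The two coefficients above can be instantiated directly (a sketch; $\ell$, $b$ and $\langle J^+\rangle$ are set to one for illustration, not as physical values):

```python
import math

# Instantiating the drag-limit coefficients of the displayed equation,
# with ell = b = <J^+> = 1 (placeholder values).
ell = b = Jplus = 1.0

def a1(Eb):
    # linear coefficient: (sqrt(E_b/b)/(ell*pi)) * (ell^2 pi^2 b^2 / <J^+>)
    return (math.sqrt(Eb / b) / (ell * math.pi)) * (ell**2 * math.pi**2 * b**2 / Jplus)

def a2():
    # quadratic coefficient: ell^2 pi^2 b^2 / <J^+>, with no E_b dependence
    return ell**2 * math.pi**2 * b**2 / Jplus

# a2 is E_b-independent, while a1 scales as sqrt(E_b):
print(a1(1.0), a1(4.0), a2())   # a1 doubles when E_b is quadrupled
```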
\subsection{Inverse Hall angle}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.65\textwidth]{LC-Resistance-HallResistance3.jpg}
\caption{Plot of the resistivity and inverse Hall angle, in our model, for the low-temperature regime
with small magnetic field. Note that
the inverse Hall angle has been scaled by a constant factor
$a= B_b / (32 \sqrt{2} \langle J^+ \rangle)$. This plot is to be compared with Fig. 9
of \cite{mackenzie}.
}
\label{fig:ResHallAng}
\end{center}
\end{figure}
The inverse Hall angle is defined as $\cot\Theta _H=\sigma_{yy} / \sigma_{yz}$. At optimal doping and relatively high temperatures ($T \geq 100$ K for YBCO \cite{Ong1991}, LSCO \cite{hwang} and TBCO \cite{Tyler1997}),
$\cot\Theta _H$ varies universally as $T^2$, while the corresponding Hall coefficient is highly irregular. To the best of our knowledge there is no corresponding systematic data available for optimal doping at very low temperatures.
The first observation of $\cot\Theta_H \sim T + T^2$ in overdoped samples at low temperatures is depicted in Fig. 8 of \cite{mackenzie}. Notably, the resistivity and the inverse Hall angle for TBCO behave in a similar manner at low temperature (Fig. 9 of \cite{mackenzie}). There is also indirect evidence for universality from works on LSCO; see {\it e.g.,} Fig. 3 of \cite{Ando1997} and Fig. 3 (c) of \cite{Ando2004PRL92}. Further support may be obtained from earlier studies on overdoped LSCO. For instance, in \cite{hwang} the authors suggest that $\cot\Theta_H$ cannot be fitted by $A+BT^2$ in the range $T= 4$ K to $T= 500$ K (Fig. 4 in \cite{hwang}). A thorough investigation at very low temperature, however, has yet to be performed.
Our results demonstrate that the resistivity and the inverse Hall angle behave in a similar manner when the system is at low temperature and small magnetic field, indicating that we are working in a linear (weak-field) regime, as defined by the experimental results for the magnetoresistance
$ {\Delta \rho\over \rho} \sim B_b^2$ and the Hall coefficient
$R_H \sim B_b^0 \sim \mathrm{const}$ \cite{tyler}. This is depicted in Fig. \ref{fig:ResHallAng}.
\subsection{Magnetoresistance}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.65\textwidth]{Dr.jpg}
\caption{The magnetoresistance for a heavily overdoped sample at low temperature,
to be contrasted with Fig. 1 of \cite{Hussey1996}.}
\label{fig:MagnetoPlots}
\end{center}
\end{figure}
The magnetoresistance is defined as follows:
\begin{align}
{\Delta \rho\over \rho}\equiv {\rho_{yy}(B)-\rho_{yy}(0)\over \rho_{yy}(0)} \;.
\end{align}
Unlike in overdoped TBCO ($T_c\sim 30$ K), in optimally doped TBCO ($T_c \sim 80$ K) the weak magnetic field regime extends up to 60 T.
This has implications for the doping dependence of ${\cal B}$.
The scaling dependence of the resistivity on the magnetic field, via the scaling in equation (\ref{bcond}), is in qualitative agreement
with experimental results \cite{tyler}. Hence, magnetic fields that are in the linear regime at optimal doping are in fact
in the non-linear regime in the overdoped region (optimal doping here is $E_b\to\infty$).
The magnetoresistance in heavily overdoped TBCO increases gradually with decreasing $T$, approaching a finite value at the lowest temperatures measured, around $30$ K, in the low temperature and weak field regime
(the magnetoresistance being proportional to the square of the magnetic field); see Fig. 1 of \cite{Hussey1996}. This behavior is captured by our results, as depicted in Fig. \ref{fig:MagnetoPlots}. We expect the strong dip as $T\to 0$ to also be visible if experiments at lower temperatures are performed.
\subsection{Hall coefficient}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.65\textwidth]{LC-HallResistanceScaling-new.jpg}
\caption{Temperature dependence of the normalized Hall coefficient.
This corresponds to the function $\frac{1}{t \sqrt{A}}$ of our model. Compare this to the plot
of the quantity, ${R_H(T/T_*)-R_H(\infty)\over R^*_H}$, Fig. 2 of \cite{hwang}.
}
\label{fig:HallCoeffPlot}
\end{center}
\end{figure}
Attempts to identify a universal scaling behavior for the Hall coefficient in cuprate superconductors have not been very successful \cite{Ong1991}.
On the other hand, the inverse Hall angle depicts a universal behavior \cite{Ong1991}. It has been argued however, that the central anomaly of the Hall effect resides in direct measurements of the Hall coefficient \cite{Ong1997}.
To the best of our knowledge, there is only one report in which the Hall coefficient, in the form
${R_H(T/T_*)-R_H(\infty)\over R^*_H}$, was argued to show a universal scaling behavior \cite{hwang}.
Here, $R_H(\infty)$ is the high temperature limit of $R_H$, $R^*_H$ rescales
the magnitude, and $T^*$ is a temperature scale. The scaling behavior is shown in Fig. 2 of \cite{hwang}. We compare this result to $\frac{1}{t \sqrt{A}}$ of our model,
which can be shown to be the Hall coefficient at a vanishingly small $B$ with only temperature scaling.
This is presented in Fig. \ref{fig:HallCoeffPlot}.
\subsection{K\"ohler rule}
K\"ohler's rule for metals states that $K=\rho^2{\Delta \rho\over \rho}$
should be independent of temperature. This was claimed to fail for YBCO and LSCO \cite{harris}. The authors of \cite{harris}
suggested, however, that a modified K\"ohler rule is valid and
$(\cot\Theta_H)^2 {\Delta \rho\over \rho}$ is approximately constant with temperature.
It has been argued that for LSCO superconducting fluctuations play an important role in accounting for the difference between K\"ohler's rule and the modified K\"ohler rule \cite{kimura}.
While in principle our model can be shown to exhibit a superconducting transition by coupling to gauge and scalar fields, in the current setup our system does not include superconducting fluctuations.
Furthermore, at very low temperatures in the overdoped regime we do not expect two such scales to exist, as suggested by the very low temperature measurements of the magnetoresistance.
Our results for the K\"ohler ratio and the modified K\"ohler ratio are in general temperature dependent, except in the small and large temperature limits.
Indeed, the fact that the resistivity and the inverse Hall angle are proportional at low temperatures, together with the constancy of the modified K\"ohler ratio, implies that the K\"ohler ratio is also constant at very low temperatures.
Although this seems to contradict claims in the literature, we believe it should be valid at very low temperatures, in view of the proportionality of $\cot\Theta_H$ to $\rho$ \cite{mackenzie}.
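The last implication is elementary algebra: writing the observed low-temperature proportionality as $\cot\Theta_H = c\,\rho$ with $c$ temperature independent,

```latex
\cot\Theta_H = c\,\rho
\quad\Longrightarrow\quad
K \equiv \rho^2\,\frac{\Delta\rho}{\rho}
= \frac{1}{c^2}\,(\cot\Theta_H)^2\,\frac{\Delta\rho}{\rho}\;,
```

so the K\"ohler ratio and the modified K\"ohler ratio differ only by the constant factor $c^2$, and constancy of one implies constancy of the other.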
\section{Outlook}
A simple holographic system, namely the AdS-Schwarzschild black hole in light-cone coordinates, provides a solvable quantum critical model of magnetotransport with a wide range of properties.
The results obtained are in good agreement with those of strange metals, in particular the high-$T_c$ cuprates at very low temperatures with charge carrier concentration ranging from the optimal to the overdoped regime.
An intriguing novel property emerging from our work is the scaling of the carrier-doping dependence. Hence, the model at $T = 0$ should be considered as a quantum critical line, albeit with a Lifshitz scaling of $z=2$, which presents a radical departure from the paradigm of the isolated critical point.
This controls the linear resistivity in this regime, as suggested in \cite{Hartnoll:2009ns}. Recent experimental results also point in this direction. This regime crosses over to a quadratic one, controlled by a standard CFT liquid. The crossover temperature diverges at optimal doping ($E_b\to \infty$), explaining the high temperature reach of the linear resistivity regime.
Moreover, our findings provide several novel and experimentally testable signatures of the low-temperature behavior of strange metals.
\begin{itemize}
\item The magnetoresistance vanishes abruptly near $T=0$.
\item At sufficiently low temperatures, the transport data scale as a function of $B/B_*$, where $B_*$ is doping dependent.
\item The K\"ohler rule and modified K\"ohler rule are both valid at low temperatures.
\end{itemize}
An extension to this work would be to clarify the underlying dynamics and how it matches the expected interactions of electrons in real materials.
The precise relation of the holographic model presented here with microscopic dynamics has yet to be clarified. Ideas in this direction have already been discussed \cite{sachdev} and connected to critical points and phases of the Hubbard model in \cite{Sachdev:2010uz}.
They are based on expectations of emergent strong non-abelian interactions at low energies and the ensuing holographic description. However, the non-standard holographic realization of the non-relativistic scaling symmetries remains a generic puzzle.
The emergence of superconductivity in this context is another important direction to be explored. For instance, using a probe scalar and gauge fields one would be able to study the onset of superconductivity and its dependence on quantum tuning parameters including charge carrier doping and magnetic field.
\addcontentsline{toc}{section}{Acknowledgments}
\section*{Acknowledgments}
We would like to thank T. Hu, M. Lippert, A. O'Bannon and D. Yamada for discussions.
This work has been partially supported by grants MEXT-CT-2006-039047, EURYI, FP7-REGPOT-2008-1-CreteHEPCosmo-228644,
PERG07-GA-2010-268246 and the National Research Foundation, Singapore.
\section{Introduction}
Page segmentation is an important prerequisite step of document image analysis and understanding.
The goal is to split a document image into regions of interest.
Compared to segmentation of machine printed document images,
page segmentation of historical document images is more challenging due to many variations such as layout structure,
decoration, writing style, and degradation.
Our goal is to develop a generic segmentation method for handwritten historical document images.
In this method, we consider the segmentation problem as a pixel-labeling problem, i.e., for
a given document image, each pixel is labeled as one of the predefined classes.
Some page segmentation methods have been developed recently.
These methods rely on hand-crafted features~\cite{grana2011automatic, bukhari2012layout, chen2014robust, chen2014page}
or prior knowledge~\cite{bulacu2007layout, van2011development, panichkriangkrai2013character, gatos2014segmentation},
or models that combine hand-crafted features with domain knowledge~\cite{cohen2013robust, asi2014coarse}.
In contrast, in this paper, our goal is to develop a more general method which automatically learns
features from the pixels of document images.
Elements such as strokes in words, words in sentences, and sentences in paragraphs
have a hierarchical structure from low to high levels,
and these patterns are repeated in different parts of the documents.
Based on these properties, feature learning algorithms can be applied to learn the layout information of document images.
A Convolutional Neural Network (CNN) is a kind of feed-forward artificial neural network which shares weights among neurons in the same layer.
By enforcing a local connectivity pattern between neurons of adjacent layers, a CNN can discover spatially local correlations~\cite{lecun1998gradient}.
With multiple convolutional layers and pooling layers, CNNs have achieved many successes in various fields, e.g.,
handwriting recognition~\cite{lecun1989backpropagation},
image classification~\cite{krizhevsky2012imagenet},
text recognition in natural images~\cite{wang2012end},
and sentence classification~\cite{kim2014convolutional}.
In our previous work~\cite{chen2015page}, an autoencoder was used to learn features automatically from the training images.
An autoencoder is a feed-forward neural network.
The main idea is that by training an autoencoder to reconstruct its input, features can be discovered on the hidden layers.
Then an off-the-shelf classifier can be trained with the learned features to predict pixel labels among the predefined classes.
By using superpixels as the units of labeling~\cite{chen2016page}, the speed of the method is increased.
In~\cite{chen2016pageb}, a Conditional Random Field (CRF)~\cite{lafferty2001conditional} is applied in order to jointly model local and contextual information
and refine the segmentation results obtained in~\cite{chen2016page}.
Following the same idea as~\cite{chen2016pageb}, we consider the segmentation problem as an image patch labeling problem.
The image patches are generated by using a superpixel algorithm.
In contrast to~\cite{chen2015page, chen2016page, chen2016pageb}, in this work, we focus on developing an end-to-end method.
We combine feature learning and classifier training into one step.
Image patches are used as input to train a CNN for the labeling task.
During training, the features used to predict labels of the image patches are learned on the convolution layers of the CNN.
While many researchers focus on developing very deep CNNs to solve
various problems~\cite{krizhevsky2012imagenet, zeiler2014visualizing, simonyan2014very, szegedy2015going, he2016deep},
in the proposed method we train a simple CNN with one convolution layer.
Experiments on public historical document image datasets show that,
despite the simple structure and little tuning of hyperparameters,
the proposed method achieves results comparable to those of other CNN architectures.
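For concreteness, the forward pass of a one-convolution-layer patch classifier of the kind described can be sketched in NumPy (the patch size, filter count, and class count below are illustrative placeholders, not the hyperparameters used in our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's actual hyperparameters):
PATCH, K, FILTERS, CLASSES = 8, 3, 4, 4

W = rng.standard_normal((FILTERS, K, K)) * 0.1   # conv filters
b = np.zeros(FILTERS)

def conv_relu(patch):
    """Valid 2-D convolution of one grayscale patch with each filter, then ReLU."""
    out_size = PATCH - K + 1
    out = np.empty((FILTERS, out_size, out_size))
    for f in range(FILTERS):
        for i in range(out_size):
            for j in range(out_size):
                out[f, i, j] = np.sum(patch[i:i+K, j:j+K] * W[f]) + b[f]
    return np.maximum(out, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling over each feature map."""
    f, h, w = x.shape
    h2, w2 = h // size, w // size
    return x[:, :h2*size, :w2*size].reshape(f, h2, size, w2, size).max(axis=(2, 4))

def forward(patch, Wfc, bfc):
    """Conv -> ReLU -> max-pool -> flatten -> fully connected -> softmax."""
    feat = max_pool(conv_relu(patch)).ravel()
    logits = feat @ Wfc + bfc
    e = np.exp(logits - logits.max())
    return e / e.sum()   # class probabilities for this superpixel patch

feat_dim = FILTERS * ((PATCH - K + 1) // 2) ** 2
Wfc = rng.standard_normal((feat_dim, CLASSES)) * 0.1
bfc = np.zeros(CLASSES)

probs = forward(rng.standard_normal((PATCH, PATCH)), Wfc, bfc)
print(probs)   # sums to 1; argmax gives the predicted label of the patch
```

In the actual method the filters and weights are learned by backpropagation on labeled superpixel patches; the sketch only illustrates the convolution--pooling--classification structure.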
The rest of the paper is organized as follows. Section~\ref{sec:related-work} gives an overview of some related work.
Section~\ref{sec:methodology} presents the proposed CNN for the segmentation task.
Section~\ref{sec:experiments} reports the experimental results and Section~\ref{sec:conclusion} presents the conclusion.
\section{Related Work}
\label{sec:related-work}
This section reviews some representative state-of-the-art methods for historical document image segmentation.
Unlike segmentation of contemporary machine printed documents,
segmentation of handwritten historical documents is more challenging due to the various writing styles,
the rich decoration, degradation, noise, and unrestricted layout.
Therefore, traditional page segmentation methods cannot be applied directly to handwritten historical documents.
Many methods have been proposed for segmentation of handwritten historical documents,
which can largely be divided into rule-based and machine-learning-based approaches.
Some methods rely on threshold values predefined based on prior knowledge of the document structure.
Van Phan et al.~\cite{van2011development} use the area of Voronoi
diagram to represent the neighborhood and boundary of connected components (CCs).
By applying predefined rules, characters are extracted by grouping adjacent Voronoi regions.
Panichkriangkrai et al.~\cite{panichkriangkrai2013character} propose a text line and character extraction system of Japanese historical woodblock printed books.
Text lines are separated by using vertical projection on binarized images.
To extract kanji characters, rule-based integration is applied to merge or split the CCs.
Gatos et al.~\cite{gatos2014segmentation} propose a text zone and text line segmentation method for handwritten historical documents.
Based on the prior knowledge of the structure of the documents,
vertical text zones are detected by analyzing vertical rule lines and vertical white runs of the document image.
On the detected text zones, a Hough transform based text line segmentation method is used to segment text lines.
All these methods have achieved good segmentation results on specific document datasets.
However, their common limitation is that
a set of rules has to be carefully defined and the document structure is assumed to be known in advance.
Benefiting from this prior knowledge, the threshold values are tuned in order to achieve good performance.
In other words, due to this lack of generality, rule-based methods cannot be applied directly to other kinds of historical document images.
In order to increase the generality and robustness of page segmentation methods,
machine learning techniques are employed.
In this case, the segmentation problem is usually considered as a pixel labeling problem.
Feature representation is the key to machine-learning-based methods.
Carefully hand-crafted features are designed in order to train an off-the-shelf classifier on the labeled training set.
Bukhari et al.~\cite{bukhari2012layout} propose a text segmentation method of Arabic historical document images.
They consider the normalized height, foreground area, relative distance, orientation,
and neighborhood information of the CCs as features.
Then the features are used to train a multilayer perceptron (MLP).
Finally, the trained MLP is used to classify CCs to relevant classes of text.
Cohen et al.~\cite{cohen2013robust} apply Laplacian of Gaussian on the multi-scale binarized image to extract CCs.
Based on prior knowledge, appropriate threshold values are chosen in order to remove noisy CCs.
With an energy minimization method and the features, such as bounding box size, area, stroke width, and estimated text lines distance,
each CC is labeled into text or non-text.
Asi et al.~\cite{asi2014coarse} propose a two-step segmentation method for Arabic historical document images.
They first extract the main text area with Gabor filters.
Then the segmentation is refined by minimizing an energy function.
Compared to the rule-based methods,
the advantage of the machine-learning-based methods is that less prior knowledge is needed.
However, the existing machine-learning-based methods rely on hand-crafted feature engineering,
and obtaining appropriate hand-crafted features for a specific task is cumbersome.
\section{Methodology}
\label{sec:methodology}
In order to create a general page segmentation method that does not use any prior knowledge of the layout structure of the documents,
we consider the page segmentation problem as a pixel labeling problem.
We propose to use a CNN for the pixel labeling task.
The main idea is to learn a set of feature detectors and train a nonlinear classifier on the features extracted by the feature detectors.
With the set of feature detectors and the classifier, pixels on the unseen document images can be classified into different classes.
\subsection{Preprocessing}
\label{sec:preprocessing}
In order to speed up the pixel labeling process, for a given document image,
we first apply a superpixel algorithm to generate superpixels.
A superpixel is an image patch containing pixels that belong to the same object.
Then, instead of labeling all the pixels, we only label the center pixel of each superpixel,
and the remaining pixels in that superpixel are assigned the same label.
The superiority of the superpixel labeling approach over the pixel labeling approach for the page segmentation task has been demonstrated in~\cite{chen2016page}.
Based on the previous work~\cite{chen2016page}, the simple linear iterative clustering (SLIC) algorithm is applied
as a preprocessing step to generate superpixels for given document images.
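The center-pixel labeling scheme above can be sketched in plain NumPy. This is an illustrative sketch only: the helper names are ours, the superpixel label map is assumed to come from SLIC, and the reflection padding at the image borders is our assumption so that every superpixel yields a full-size patch.

```python
import numpy as np

def superpixel_centers(labels):
    """Compute the (row, col) center of mass of each superpixel
    in an integer label map."""
    centers = {}
    for sp in np.unique(labels):
        rows, cols = np.nonzero(labels == sp)
        centers[sp] = (int(rows.mean()), int(cols.mean()))
    return centers

def extract_patch(image, center, size=28):
    """Extract a size x size patch centered on `center`, padding the
    image borders by reflection (our assumption) so that superpixels
    near the border also yield a patch."""
    half = size // 2
    padded = np.pad(image, half, mode="reflect")
    r, c = center  # coordinates in the original image
    return padded[r:r + size, c:c + size]
```

Only the center pixel of each superpixel is classified; its predicted label is then propagated to all pixels of that superpixel.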
\subsection{CNN Architecture}
\label{sec:cnn-architecture}
The architecture of our CNN is given in Figure~\ref{fig:cnn}.
The structure can be summarized as $28 \times 28 \times 1 - 26 \times 26 \times 4 - 100 - M$, where $M$ is the number of classes.
The input is a grayscale image patch. The size of the image patch is $28 \times 28$ pixels.
Our CNN architecture contains only one convolution layer which consists of $4$ kernels.
The size of each kernel is $3 \times 3$ pixels.
Unlike traditional CNN architectures, no pooling layer is used in our architecture.
Then one fully connected layer of $100$ neurons follows the convolution layer.
The last layer consists of a logistic regression with softmax which outputs the probability of each class, such that
\begin{equation}\label{eq:logistic-regression}
P(y=i | x, W_1, \cdots, W_M, b_1, \cdots, b_M ) = \frac{e^{W_{i}x + b_i}}{\sum_{j=1}^{M} e^{W_{j}x + b_j}},
\end{equation}
where $x$ is the output of the fully connected layer, $W_i$ and $b_i$ are the weights and biases of the $i^{th}$ neuron in this layer, and $M$ is the number of the classes.
The predicted class $\hat{y}$ is the class which has the max probability, such that
\begin{equation}\label{eq:prediction}
\hat{y} = \operatorname*{arg\,max}_i P(y=i | x, W_1, \cdots, W_M, b_1, \cdots, b_M).
\end{equation}
In the convolution and fully connected layers of the CNN, Rectified Linear Units (ReLUs)~\cite{nair2010rectified} are used as neurons.
A ReLU is given as:
\begin{equation}\label{eq:relu}
f(x) = \max(0, x),
\end{equation}
where $x$ is the input of the neuron.
The superiority of using ReLUs as neurons in CNN over traditional sigmoid neurons is demonstrated in~\cite{krizhevsky2012imagenet}.
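The forward pass of the $28 \times 28 \times 1 - 26 \times 26 \times 4 - 100 - M$ architecture can be sketched in NumPy as follows. The weights here are random placeholders rather than the trained network, and the naive convolution loop is written for clarity, not speed.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max())            # subtract max for numerical stability
    return e / e.sum()

def forward(patch, kernels, W_fc, b_fc, W_out, b_out):
    """Forward pass: 28x28 grayscale patch -> 4 valid 3x3 convolutions
    -> ReLU -> fully connected layer of 100 ReLUs -> softmax over M classes."""
    k, h, w = kernels.shape[0], patch.shape[0] - 2, patch.shape[1] - 2
    fmaps = np.empty((k, h, w))        # each kernel yields a 26x26 feature map
    for i in range(k):
        for r in range(h):
            for c in range(w):
                fmaps[i, r, c] = np.sum(patch[r:r+3, c:c+3] * kernels[i])
    x = relu(fmaps).ravel()            # 26*26*4 = 2704 features
    x = relu(W_fc @ x + b_fc)          # fully connected, 100 ReLUs
    return softmax(W_out @ x + b_out)  # class probabilities, Eq. (1)

rng = np.random.default_rng(0)
M = 4                                  # example number of classes
p = forward(rng.standard_normal((28, 28)),
            0.1 * rng.standard_normal((4, 3, 3)),
            0.01 * rng.standard_normal((100, 2704)), np.zeros(100),
            0.01 * rng.standard_normal((M, 100)), np.zeros(M))
```

The output `p` is a probability vector over the $M$ classes; the predicted class is its argmax, as in Eq. (2).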
\begin{figure}[!tb]
\vspace{-10pt}
\centering
\includegraphics[width=0.9\linewidth]{figures/cnn/cnn.png}
\caption{The architecture of the proposed CNN}
\label{fig:cnn}
\vspace{-20pt}
\end{figure}
\subsection{Training}
\label{sec:training}
To train the CNN, for each superpixel we generate a patch of $28 \times 28$ pixels centered on that superpixel.
The patch is used as the input of the network.
The label of each patch is the label of its center pixel.
The patches of the training images are used to train the network.
In the CNN, the cost function is defined as the cross-entropy loss, such that
\begin{equation}\label{eq:cross-entropy}
\mathcal{L}(X, Y) = -\frac{1}{n} \sum_{i=1}^{n} (y^{(i)} \ln a(x^{(i)}) + (1 - y^{(i)}) \ln (1 - a(x^{(i)}))),
\end{equation}
where $X = \{ x^{(1)}, \cdots, x^{(n)} \}$ is the training image patches
and $Y = \{ y^{(1)}, \cdots, y^{(n)} \}$ is the corresponding set of labels.
The number of training image patches is $n$. For each $x^{(i)}$, $a(x^{(i)})$ is the output of the CNN as defined in Eq.~\ref{eq:logistic-regression}.
The CNN is trained with Stochastic Gradient Descent (SGD) and the dropout~\cite{srivastava2014dropout} technique.
The goal of dropout is to avoid overfitting by introducing random noise into the training process:
during training, the outputs of the neurons are masked out with probability $0.5$.
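A minimal sketch of the loss and the dropout masking is given below. The multi-class form of the cross-entropy (mean negative log-likelihood of the correct class) is used here, and the rescaling by $1/(1-p)$ follows the common "inverted dropout" variant; both are our assumptions, since the text above only states that outputs are masked with probability $0.5$.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the correct class.
    probs: (n, M) softmax outputs; labels: (n,) integer class labels."""
    n = probs.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels]))

def dropout(activations, p=0.5):
    """Mask each activation with probability p during training;
    surviving activations are rescaled by 1/(1-p) (inverted dropout,
    our assumption) so the expected output is unchanged."""
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)
```

At test time no masking is applied; with the inverted-dropout scaling, the trained weights can be used directly.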
\section{Experiment}
\label{sec:experiments}
Experiments on six public handwritten historical document image datasets are conducted.
\subsection{Datasets}
\label{sec:dataset}
The datasets are of a very different nature.
The \emph{G. Washington} dataset consists of pages written in English with ink on paper, and the images are in gray levels.
The \emph{Parzival} and \emph{St. Gall} datasets consist of images of manuscripts written with ink on parchment,
and the images are in color.
The \emph{Parzival} dataset consists of pages written by three writers in the $13$th century.
The \emph{St. Gall} dataset consists of pages from a medieval manuscript written in Latin.
The details of the ground truth are presented in~\cite{chen2015ground}.
Three new datasets with more complex layout have been recently created~\cite{simistira16}.
The \emph{CB55} dataset consists of manuscripts from the $14$th century which are written in Italian and Latin languages by one writer.
The \emph{CSG18} and \emph{CSG863} datasets consist of manuscripts from the $11$th century which are written in Latin language.
The number of the writers of the two datasets is not specified.
The details of the three datasets are presented in~\cite{simistira16}.
In the experiments, all images are scaled down with a scaling factor $2^{-3}$.
Table~\ref{tab:dataset} gives the details of training, test, and validation sets of the six datasets.
\begin{table}[!tb]
\caption{Details of training, test, and validation sets. $TR$, $TE$, and $VA$ denotes the training, test, and validation sets respectively.}
\vspace{-5pt}
\centering
\begin{tabular}
{l c c c c}
\toprule
& image size (pixels) & $|TR|$ & $|TE|$ & $|VA|$ \\
\hline
\emph{G. Washington} & $2200 \times 3400$ & 10 & 5 & 4 \\
\emph{St. Gall} & $1664 \times 2496$ & 20 & 30 & 10 \\
\emph{Parzival} & $2000 \times 3008$ & 20 & 13 & 2 \\
\hline
\emph{CB55} & $4872 \times 6496$ & 20 & 10 & 10 \\
\emph{CSG18} & $3328 \times 4992$ & 20 & 10 & 10 \\
\emph{CSG863} & $3328 \times 4992$ & 20 & 10 & 10 \\
\bottomrule
\end{tabular}
\label{tab:dataset}
\vspace{-15pt}
\end{table}
\subsection{Metrics}
\label{sec:metrics}
To evaluate page segmentation methods for historical document images,
the most commonly used metrics are precision, recall, and pixel-level accuracy.
In addition to these standard metrics,
we adopt metrics that are well defined and widely used in common semantic segmentation and scene parsing evaluations.
The metrics are variations on pixel accuracy and region intersection over union (IU);
they have been proposed in~\cite{long2015fully}.
Consequently, the metrics used in the experiments are: pixel accuracy, mean pixel accuracy, mean IU, and frequency weighted IU (f.w. IU).
In order to obtain the metrics, we define the following variables:
\begin{itemize}
\item $n_{c}$: the number of classes.
\item $n_{ij}$: the number of pixels of class $i$ predicted to belong to class $j$. For class $i$:
\begin{itemize}
\item $n_{ii}$: the number of correctly classified pixels (true positives).
\item $n_{ij}$, $j \neq i$: the number of pixels of class $i$ wrongly classified as class $j$ (false negatives).
\item $n_{ji}$, $j \neq i$: the number of pixels of class $j$ wrongly classified as class $i$ (false positives).
\end{itemize}
\item $t_{i}$: the total number of pixels in class $i$, such that
\begin{equation}
t_i = \sum_j n_{ij}.
\end{equation}
\end{itemize}
With the defined variables, we can compute:
\begin{itemize}
\item pixel accuracy:
\begin{equation}
acc = \frac{\sum_i n_{ii}}{\sum_i t_i} .
\end{equation}
\item mean accuracy:
\begin{equation}
acc_{mean} = \frac{1}{n_{c}} \times \sum_i \frac{n_{ii}}{t_i} .
\end{equation}
\item mean IU:
\begin{equation}
iu_{mean} = \frac{1}{n_{c}} \times \sum_i \frac{n_{ii}}{t_i + \sum_j n_{ji} - n_{ii}} .
\end{equation}
\item f.w. IU:
\begin{equation}
iu_{weighted} = \frac{1}{\sum_k t_k} \times \sum_i \frac{t_i \times n_{ii}}{t_i + \sum_j n_{ji} - n_{ii}} .
\end{equation}
\end{itemize}
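Given a confusion matrix with entries $n_{ij}$, the four metrics above can be computed in a few lines. This is an illustrative sketch (the function name is ours), not the evaluation code used in the experiments.

```python
import numpy as np

def segmentation_metrics(conf):
    """Compute pixel accuracy, mean accuracy, mean IU, and f.w. IU
    from an (n_c, n_c) confusion matrix where conf[i, j] = n_ij,
    the number of pixels of class i predicted as class j."""
    n_ii = np.diag(conf).astype(float)
    t = conf.sum(axis=1)               # t_i = sum_j n_ij (pixels in class i)
    pred = conf.sum(axis=0)            # sum_j n_ji (pixels predicted as i)
    union = t + pred - n_ii            # denominator of the IU terms
    iu = n_ii / union
    return {
        "pixel_acc": n_ii.sum() / t.sum(),
        "mean_acc": np.mean(n_ii / t),
        "mean_iu": np.mean(iu),
        "fw_iu": (t * iu).sum() / t.sum(),
    }
```

Each expression maps term by term onto the formulas above, with `union[i]` playing the role of $t_i + \sum_j n_{ji} - n_{ii}$.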
\subsection{Evaluation}
\label{sec:evaluation}
We compare the proposed method to our previous methods~\cite{chen2016page, chen2016pageb}.
Similar to the proposed method, superpixels are considered as the basic unit of labeling.
In~\cite{chen2016page}, the features are learned on randomly selected grayscale image patches with a stacked convolutional autoencoder in an unsupervised manner.
Then the features and the labels of the superpixels are used to train a classifier.
With the trained classifier, superpixels are classified into different classes.
In~\cite{chen2016pageb}, a Conditional Random Field (CRF) is applied in order to model the local and contextual information jointly for the superpixel labeling task.
The trained classifier in~\cite{chen2016page} is considered as the local classifier in~\cite{chen2016pageb}.
Then the local classifier is used to train a contextual classifier, which takes the output of the local classifier as input and outputs the scores of the given labels.
With the local and contextual classifiers, a CRF is trained to label the superpixels of a given image.
In the experiments, we use a multilayer perceptron (MLP) as the local classifier in~\cite{chen2016page, chen2016pageb}
and another MLP as the contextual classifier in~\cite{chen2016pageb}.
The SLIC algorithm~\cite{achanta2012slic} is applied to generate the superpixels.
The superiority of SLIC over other superpixel algorithms is demonstrated in~\cite{chen2016page}.
In the experiments, for each image, $3000$ superpixels are generated.
Table~\ref{tab:performance-comparison-1} reports the pixel accuracy, mean pixel accuracy, mean IU, and f.w. IU of the three methods.
It is shown that the proposed CNN outperforms the previous methods.
Figure~\ref{fig:performance-comparison} gives the segmentation results of the three methods.
We can see that, visually, the CNN achieves more accurate segmentation results than the other methods.
\begin{table*}[!t]
\caption{Performance (in percentage) of superpixel labeling with only local MLP, CRF, and the proposed CNN.}
\centering
\begin{tabular}
{ l c c c c c c c c c c c c }
\toprule
& \multicolumn{4}{c}{\emph{G. Washington}} & \multicolumn{4}{c}{\emph{Parzival}} & \multicolumn{4}{c}{\emph{St.Gall}} \\
\hline
& {pixel} & {mean} & {mean} & {f.w.} & {pixel} & {mean} & {mean} & {f.w.} & {pixel} & {mean} & {mean} & {f.w.} \\
& {acc.} & {acc.} & {IU} & {IU} & {acc.} & {acc.} & {IU} & {IU} & {acc.} & {acc.} & {IU} & {IU}\\
{Local MLP~\cite{chen2016page}} & 87 & 89 & 75 & 83 & 91 & 64 & 58 & 86 & 95 & 89 & 84 & 92 \\
{CRF~\cite{chen2016pageb}} & \textbf{91} & 90 & 76 & 85 & 93 & 70 & 63 & 88 & 97 & 88 & 84 & 94 \\
{CNN} & \textbf{91} & \textbf{91} & \textbf{77} & \textbf{86} & \textbf{94} & \textbf{75} & \textbf{68} & \textbf{89} & \textbf{98} & \textbf{90} & \textbf{87} & \textbf{96} \\
{CNN (max pooling)} & \textbf{91} & 90 & \textbf{77} & \textbf{86} & \textbf{94} & \textbf{75} & \textbf{68} & \textbf{89} & \textbf{98} & \textbf{90} & \textbf{87} & \textbf{96} \\
\toprule
& \multicolumn{4}{c}{\emph{CB55}} & \multicolumn{4}{c}{\emph{CSG18}} & \multicolumn{4}{c}{\emph{CSG863}} \\
\hline
& {pixel} & {mean} & {mean} & {f.w.} & {pixel} & {mean} & {mean} & {f.w.} & {pixel} & {mean} & {mean} & {f.w.} \\
& {acc.} & {acc.} & {IU} & {IU} & {acc.} & {acc.} & {IU} & {IU} & {acc.} & {acc.} & {IU} & {IU}\\
{Local MLP~\cite{chen2016page}} & 83 & 53 & 42 & 72 & 83 & 49 & 39 & 73 & 84 & 54 & 42 & 74 \\
{CRF~\cite{chen2016pageb}} & 84 & 53 & 42 & 75 & 86 & 47 & 37 & 77 & 86 & 51 & 42 & 78 \\
{CNN} & \textbf{86} & 59 & 47 & \textbf{77} & \textbf{87} & \textbf{53} & 41 & 79 & \textbf{87} & \textbf{58} & \textbf{45} & \textbf{79} \\
{CNN (max pooling)} & \textbf{86} & \textbf{60} & \textbf{48} & \textbf{77} & \textbf{87} & \textbf{53} & \textbf{42} & \textbf{80} & \textbf{87} & 57 & \textbf{45} & \textbf{79} \\
\bottomrule
\end{tabular}
\label{tab:performance-comparison-1}
\vspace{-10pt}
\end{table*}
\begin{figure*}[!t]
\centering
\label{fig:d-144-input}
\frame{\includegraphics[width=0.16\textwidth]{figures/ground-truth/d-144.png}} \,
\label{fig:d-144-ground-truth}
\frame{\includegraphics[width=0.16\textwidth]{figures/ground-truth/d-144_200.png}} \,
\label{fig:d-144-pixel}
\frame{\includegraphics[width=0.16\textwidth]{figures/segmentation/local-mlp/d-144_250.png}} \,
\label{fig:d-144-local-mlp}
\frame{\includegraphics[width=0.16\textwidth]{figures/segmentation/crf/d-144_250.png}} \,
\label{fig:d-144-crf}
\frame{\includegraphics[width=0.16\textwidth]{figures/segmentation/cnn/d-144_250.png}} \\
\vspace{2pt}
\label{fig:cb55}
\frame{\includegraphics[width=0.16\textwidth]{figures/ground-truth/e-codices_fmb-cb-0055_0098v_max.jpg}} \,
\label{fig:ch-crf-cb55-ground-truth}
\frame{\includegraphics[width=0.16\textwidth]{figures/ground-truth/e-codices_fmb-cb-0055_0098v_max_gt.jpg}} \,
\label{fig:cb55-pixel}
\frame{\includegraphics[width=0.16\textwidth]{figures/segmentation/local-mlp/e-codices_fmb-cb-0055_0098v_max.jpg}} \,
\label{fig:cb55-local-mlp}
\frame{\includegraphics[width=0.16\textwidth]{figures/segmentation/crf/e-codices_fmb-cb-0055_0098v_max.jpg}} \,
\label{fig:cb55-crf}
\frame{\includegraphics[width=0.16\textwidth]{figures/segmentation/cnn/e-codices_fmb-cb-0055_0098v_max_250.jpg}} \\
\vspace{2pt}
\label{fig:csg863-input}
\frame{\includegraphics[width=0.16\textwidth]{figures/ground-truth/e-codices_csg-0863_050_max.jpg}} \,
\label{fig:csg863-ground-truth}
\frame{\includegraphics[width=0.16\textwidth]{figures/ground-truth/e-codices_csg-0863_050_max_gt.jpg}} \,
\label{fig:csg863-pixel}
\frame{\includegraphics[width=0.16\textwidth]{figures/segmentation/local-mlp/e-codices_csg-0863_050_max.jpg}} \,
\label{fig:csg863-local-mlp}
\frame{\includegraphics[width=0.16\textwidth]{figures/segmentation/crf/e-codices_csg-0863_050_max.jpg}} \,
\label{fig:csg863-crf}
\frame{\includegraphics[width=0.16\textwidth]{figures/segmentation/cnn/e-codices_csg-0863_050_max_250.jpg}} \\
\caption{
Segmentation results on the \emph{Parzival}, \emph{CB55}, and \emph{CSG863} datasets from top to bottom respectively.
The colors: black, white, blue, red, and pink are used to represent: \emph{periphery}, \emph{page}, \emph{text}, \emph{decoration}, and \emph{comment} respectively.
The columns from left to right are: input, ground truth, segmentation results of the local MLP, CRF, and CNN respectively.}
\label{fig:performance-comparison}
\end{figure*}
\subsection{Max Pooling}
\label{sec:max-pooling}
Pooling is a widely used technique in CNNs.
Max pooling is the most common type of pooling; it is applied in order to
reduce the spatial size of the representation and thereby the number of parameters of the network.
In order to show the impact of max pooling on the segmentation task,
we add a max pooling layer after the convolution layer.
The pooling size is $2 \times 2$ pixels.
Table~\ref{tab:performance-comparison-1} reports the performance of the CNN with a max pooling layer.
We can see that only on the \emph{CB55} dataset, the mean pixel accuracy and mean IU are slightly improved.
In general, adding a max pooling layer does not improve the performance of the segmentation task.
The reason is that for some computer vision problems, e.g., object recognition and text extraction in natural images,
the exact location of a feature is less important than its rough location relative to other features.
However, for a given document image,
to label the pixel at the center of a patch, it is not sufficient to know that there is text somewhere in that patch; the location of the text within the patch also matters.
Therefore, the exact location of a feature is helpful for the page segmentation task.
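A $2 \times 2$ non-overlapping max pooling layer like the one added in this experiment can be written in a few lines of NumPy. This is an illustrative sketch, not the Theano implementation used in the experiments.

```python
import numpy as np

def max_pool_2x2(fmap):
    """2x2 non-overlapping max pooling of a (H, W) feature map,
    halving the spatial resolution (H and W assumed even)."""
    h, w = fmap.shape
    # Split into 2x2 blocks and take the maximum of each block.
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

Pooling keeps only the strongest response in each block, which is exactly why the precise position of a feature within the patch is lost.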
\begin{figure}[!tb]
\centering
\includegraphics[width=0.85\linewidth]{figures/cnn_experiments_nb_features.png}
\caption{f.w. IU of the one convolution layer CNN on different number of filters.}
\label{fig:performance-nb-features}
\vspace{-10pt}
\end{figure}
\subsection{Number of Kernels}
\label{sec:nb-filters}
In order to show the impact of the number of kernels of the convolution layer on the segmentation task,
we vary the number of kernels, denoted $K$.
In the experiments, we set $K \in \{ 1, 2, 4, 6, 8, 10, 12, 14 \}$.
Figure~\ref{fig:performance-nb-features} reports the f.w. IU of the one convolution layer CNN with different number of kernels.
We can see that, except on the \emph{CSG18} dataset, the performance is not improved for $K \geq 4$.
\subsection{Number of Layers}
\label{sec:nb-layers}
In order to show the impact of the number of convolutional layers on the page segmentation task,
we incrementally add convolutional layers, such that each layer has two more kernels than the previous one.
Figure~\ref{fig:performance-nb-conv-layers} reports the f.w. IU of the CNN with different number of convolutional layers.
It is shown that the number of layers does not affect the performance of the segmentation task.
However, on the \emph{G. Washington} dataset, with more layers, the performance is degraded slightly.
The reason is that compared to other datasets, the \emph{G. Washington} dataset has fewer training images.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.85\linewidth]{figures/cnn_experiments_nb_conv_layers.png}
\caption{f.w. IU of the CNN on different number of convolutional layers.}
\label{fig:performance-nb-conv-layers}
\vspace{-10pt}
\end{figure}
\subsection{Number of Training Images}
\label{sec:training-images}
In order to show the performance under different amounts of training data,
for each dataset we choose $N$ images from the training set to train the CNN.
For each experiment, the number of batches is set to $5000$.
Figure~\ref{fig:performance-nb-train-images} reports the f.w. IU under different values of $N$, such that $N \in \{1, 2, 4, 8, 10, 12, 14, 16, 18, 20 \}$\footnote{The \emph{G. Washington} dataset has only $10$ training images; therefore, $N \in \{1, 2, 4, 8, 10 \}$.}.
We can see that in general, when $N > 2$, the performance is not improved.
However, on the \emph{G. Washington} dataset, with more training images, the performance is degraded slightly.
The reason is that, compared to the other datasets, the pages of the \emph{G. Washington} dataset are more varied
and the ground truth is less consistent.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.85\linewidth]{figures/cnn_experiments_nb_train_images.png}
\caption{f.w. IU of the CNN on different number of training images.}
\label{fig:performance-nb-train-images}
\vspace{-10pt}
\end{figure}
\subsection{Run Time}
\label{sec:run-time}
The proposed CNN is implemented with the python library Theano~\cite{bergstra2010theano}.
The experiments are performed on a PC with an Intel Core i7-3770 3.4 GHz processor and 16 GB RAM.
On average, for each image, the CNN takes about $1$ second processing time.
The superpixel labeling method~\cite{chen2016page} and CRF model~\cite{chen2016pageb} take about $2$ and $5$ seconds respectively.
\begin{comment}
\begin{figure*}[!t]
\centering
\label{fig:cb55}
\frame{\includegraphics[width=0.18\textwidth]{figures/ground-truth/e-codices_fmb-cb-0055_0098v_max.jpg}} \,
\label{fig:ch-crf-cb55-ground-truth}
\frame{\includegraphics[width=0.18\textwidth]{figures/ground-truth/e-codices_fmb-cb-0055_0098v_max_gt.jpg}} \,
\label{fig:cb55-pixel}
\frame{\includegraphics[width=0.18\textwidth]{figures/segmentation/local-mlp/e-codices_fmb-cb-0055_0098v_max.jpg}} \,
\label{fig:cb55-local-mlp}
\frame{\includegraphics[width=0.18\textwidth]{figures/segmentation/crf/e-codices_fmb-cb-0055_0098v_max.jpg}} \,
\label{fig:cb55-crf}
\frame{\includegraphics[width=0.18\textwidth]{figures/segmentation/cnn/e-codices_fmb-cb-0055_0098v_max_250.jpg}} \\
\vspace{5pt}
\label{fig:csg18-input}
\frame{\includegraphics[width=0.18\textwidth]{figures/ground-truth/e-codices_csg-0018_098_max.jpg}} \,
\label{fig:csg18-ground-truth}
\frame{\includegraphics[width=0.18\textwidth]{figures/ground-truth/e-codices_csg-0018_098_max_gt.jpg}} \,
\label{fig:csg18-pixel}
\frame{\includegraphics[width=0.18\textwidth]{figures/segmentation/local-mlp/e-codices_csg-0018_098_max.jpg}} \,
\label{fig:csg18-local-mlp}
\frame{\includegraphics[width=0.18\textwidth]{figures/segmentation/crf/e-codices_csg-0018_098_max.jpg}} \,
\label{fig:csg18-crf}
\frame{\includegraphics[width=0.18\textwidth]{figures/segmentation/cnn/e-codices_csg-0018_098_max_250.jpg}} \\
\vspace{5pt}
\label{fig:csg863-input}
\frame{\includegraphics[width=0.18\textwidth]{figures/ground-truth/e-codices_csg-0863_050_max.jpg}} \,
\label{fig:csg863-ground-truth}
\frame{\includegraphics[width=0.18\textwidth]{figures/ground-truth/e-codices_csg-0863_050_max_gt.jpg}} \,
\label{fig:csg863-pixel}
\frame{\includegraphics[width=0.18\textwidth]{figures/segmentation/local-mlp/e-codices_csg-0863_050_max.jpg}} \,
\label{fig:csg863-local-mlp}
\frame{\includegraphics[width=0.18\textwidth]{figures/segmentation/crf/e-codices_csg-0863_050_max.jpg}} \,
\label{fig:csg863-crf}
\frame{\includegraphics[width=0.18\textwidth]{figures/segmentation/cnn/e-codices_csg-0863_050_max_250.jpg}} \\
\caption{
Segmentation results on the \emph{CB55}(the first row), \emph{CSG18}(the second row), and \emph{CSG863}(the third row) datasets.
The colors: white, blue, red, and pink are used to represent: \emph{page}, \emph{text line}, \emph{decoration}, and \emph{comment} respectively.
The columns from left to right are: input, ground truth, segmentation results of the local MLP, CRF, and CNN respectively.}
\label{fig:performance2}
\end{figure*}
\end{comment}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have proposed a convolutional neural network (CNN) for page segmentation of handwritten historical document images.
In contrast to traditional page segmentation methods which rely on off-the-shelf classifiers trained with hand-crafted features,
the proposed method learns features directly from image patches.
Furthermore, feature learning and classifier training are combined into one step.
Experiments on public datasets show the superiority of the proposed method over previous methods.
While many researchers focus on applying very deep CNN architectures to different tasks,
we show that with a simple one-convolution-layer CNN, we achieve performance comparable to other network architectures.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{intro}
The consequences of geometric lattice frustration on ground and
excited state behavior of spin systems have been widely studied over
the past two decades \cite{Diep,Lacroix}. Even as many of the issues
are still being debated for the complex two- and three-dimensional (2D
and 3D) lattices, consensus on some of the simpler models have been
reached. One such model frustrated system is the spin-$\frac{1}{2}$
zigzag ladder, with nonzero antiferromagnetic exchanges $J_1$ and
$J_2$ between nearest and next-nearest neighbor spins. The ground
state here is a valence bond solid (VBS) for $J_2$ = 0.5$J_1$
\cite{Majumdar69a,Majumdar69b}, and is dimerized for $J_2 \geq
0.2411J_1$ \cite{Okamoto92a,Chitra95a}, where the excitations are spin
solitons \cite{Shastry81a,Sorensen98a}. Some theoretical results
exist also for $J_2 > J_1$ \cite{White96a,Itoi01a,Kumar10a}.
Spin Hamiltonians are restricted to a charge per site $\rho=1$
($\frac{1}{2}$-filled band) and are obtained in the limit of very
large onsite Hubbard repulsion $U$ between electrons or holes where
there are no charge degrees of freedom. The literature on frustrated
non-$\frac{1}{2}$-filled bands, where the charge degrees of freedom
are non-vanishing even for $U \to \infty$, remains relatively
sparse. Much of the literature on non-$\frac{1}{2}$-filled band
frustrated systems is for $\rho=\frac{1}{2}$, or the
$\frac{1}{4}$-filled band, which is of interest both because under
certain conditions it can be described within an effective
$\frac{1}{2}$-filled band picture \cite{Kino95a,Kanoda06a,Powell11a},
and because there exist many $\frac{1}{4}$-filled band frustrated
materials, both organic and inorganic, that exhibit novel behavior
including charge, spin and orbital-ordering, and superconductivity
(see below). We have recently shown that there is a strong tendency to
form local spin-singlets in the $\frac{1}{4}$-filled band with strong
electron-electron interactions, in both one dimension (1D) and 2D,
and especially in the presence of lattice
frustration, which enhances quantum effects \cite{Li10a,Dayal11a}.
The frustration-driven singlet formation does not occur if only the
frustrating Coulomb interactions are included \cite{Merino05a}; the
inclusion of the frustrating electron-hopping integral, with or
without the corresponding Coulomb interactions is essential. Singlet
formation in the $\frac{1}{4}$-filled band is accompanied by
charge-ordering (CO), leading to what we have termed as a {\it
paired-electron crystal} (PEC), which consists of pairs of
singly-occupied sites separated by pairs of vacant sites. The PEC is
the $\frac{1}{4}$-filled band equivalent of the VBS. The difference
from the standard VBS lies in the possibility that the PEC gives way
to a {\it paired-electron liquid} under appropriate conditions, which
might then lead to superconductivity. Clearly, it is desirable to
extend our current work on the ground state of the PEC to excited
states.
\begin{figure}
\centerline{\resizebox{3.2in}{!}{\includegraphics{fig1}}}
\caption{Zigzag ladder lattice and ground state configurations for
density $\rho=\frac{1}{2}$. (a) The ground state for
$t_2/t_1>1.707$ has uniform charges and bonds. (b) For
$t_2/t_1<1.707$, the ground state is a PEC with coexisting charge
and bond order. Filled (open) circles correspond to sites with large
(small) charge density. The heavy line indicates the strongest
(spin-singlet) bond. Double, single, and dotted lines indicate
strong, intermediate, and weak bonds, respectively.}
\label{lattice}
\end{figure}
In the present paper, we discuss theoretical results for the PEC that
occurs in the simplest $\frac{1}{4}$-filled frustrated lattice with
interacting electrons. The lattice geometry here is the same as in the
spin zigzag ladder (see Fig.~\ref{lattice}(a)), with nearest and next
nearest neighbor electron hopping matrix elements $t_1$ and $t_2$,
respectively. In our previous work on the ground state of the
$\frac{1}{4}$-filled band zigzag ladder \cite{Clay05a}, we had shown
that in the presence of electron-lattice coupling (either nearest
neighbor along the zigzag direction, or second neighbor, or both), and for
$t_2/t_1$ less than a critical value $(t_2/t_1)_c$ (see below) there
occurs a coupled bond-charge density wave (BCDW), as shown in
Fig.~\ref{lattice}(b). Alternate rungs of the zigzag ladder are
occupied by singlet spin-coupled charge-rich sites, and charge-poor
sites occupy the other set of rungs. The CO pattern here characterizes
the BCDW as a PEC. We discuss below the nature of the ground state in
this system for the complete parameter range $t_2/t_1 \leq
(t_2/t_1)_c$. We also present numerical results for spin excitations
and thermodynamics for the $\frac{1}{4}$-filled band zigzag ladder
within the interacting electron Hamiltonian and discuss applications
of the model to real systems.
Although our actual work is limited to the zigzag ladder, our analysis
leads to an empirical understanding of a perplexing observation in the
$\frac{1}{4}$-filled band systems with interacting electrons in
general. In many such systems, irrespective of dimensionality, there
often are two transitions, (i) a high temperature metal-insulator (MI)
transition at T$_{\rm{MI}}$ that is accompanied by CO without any
perceptible effect on the magnetic behavior, and (ii) a second
insulator-insulator transition at a lower temperature T$_{\rm{SG}}$
where a spin gap (SG), seen in the magnetic susceptibility, occurs.
In a second class of systems with nearly identical chemical
constituents there occurs, however, a single transition with
T$_{\rm{MI}}$ = T$_{\rm{SG}}$. We point out a well-defined correlation
between this ``two-versus-one'' transition and the pattern of the bond
distortion in the spin-gapped phase. Furthermore, many interacting
$\frac{1}{4}$-filled band materials are superconducting, often under
pressure. There appears to exist a correlation also between
superconductivity and the bond distortion pattern at the lowest
temperatures.
The organization of the paper is as follows. In Section \ref{model} we
introduce our Hamiltonian for the $\frac{1}{4}$-filled band zigzag
ladder. In Section \ref{summary} we present a brief summary of the
earlier results for the 1D limit of this model
\cite{Ung94a,Kuwabara03a,Clay03a,Clay07a,Yoshimi11a},
and for the ground state of the zigzag
ladder \cite{Clay05a}. Two different bond distortion patterns,
accompanied, however, by the same charge distortion pattern, are
possible in the 1D limit. We point out that the two bond distortion
patterns correspond to two different mappings of the
$\frac{1}{4}$-filled band to effective $\frac{1}{2}$-filled bands. In
the 1D limit both mappings are valid. In contrast, only one mapping is
applicable to the zigzag ladder, and there is a single unambiguous PEC
here. The results of our numerical calculations for the zigzag ladder
are presented in Section \ref{results}, where we discuss both ground
state and temperature-dependent behavior. In Section \ref{discussion}
we discuss the implications of our results for real
$\frac{1}{4}$-filled materials. Appendix \ref{appendixa} contains details of the
numerical method, based on a matrix-product state (MPS)
representation, used for our large-lattice calculations.
Appendix \ref{appendixb} contains details of the
finite-size scaling of our ground-state calculations.
\section{Theoretical model and parameter space}
\label{model}
The Hamiltonian for the zigzag ladder is
\begin{eqnarray}
H&=&-t_1\sum_{i}(1+\alpha_1\Delta_{i,i+1})B_{i,i+1}
+\frac{1}{2}K_1\sum_{i}\Delta^2_{i,i+1} \nonumber \\
&-& t_2\sum_{i}(1+\alpha_2\Delta_{i,i+2})B_{i,i+2} +\frac{1}{2}K_2\sum_i
\Delta^2_{i,i+2} \nonumber \\
&+& \sum_i(U n_{i\uparrow}n_{i\downarrow} + V_1 n_in_{i+1} +V_2 n_in_{i+2}).
\label{ham}
\end{eqnarray}
In Eq.~\ref{ham}
$B_{i,j}=\sum_\sigma(c^\dagger_{j,\sigma}c_{i,\sigma}+H.c.)$ is the
kinetic energy operator for the bond between sites $i$ and $j$, where
$c^\dagger_{i,\sigma}$ creates an electron of spin $\sigma$ on site
$i$. $n_{i\sigma}=c^\dagger_{i,\sigma}c_{i,\sigma}$ is the density
operator, and $n_i=n_{i\uparrow}+n_{i\downarrow}$. $t_1$ and $t_2$
are hopping integrals along the zigzag rung direction and the rail
direction, respectively, as shown in Fig.~\ref{lattice}. The lattice
may also be viewed as a 1D chain with nearest neighbor hopping $t_1$
and frustrating second-neighbor hopping $t_2$. In our reference to
the 1D limit below, however, we will mean $t_2=0$ and $V_2=0$. We
give energies in units of $t_1$ and fix $t_1$=1. $\Delta_{i,j}$ is
the deformation of the bond between sites $i$ and $j$; $\alpha_1$ and
$\alpha_2$ are the inter-site electron-phonon (e-p) coupling constants
with corresponding spring constants $K_1$ and $K_2$, which for
simplicity we choose to be identical in the $t_1$ and $t_2$ directions
($\alpha_1=\alpha_2\equiv\alpha$ and $K_1=K_2\equiv K$). For all
results below we fix $K_1=K_2=2$. We have omitted intra-site e-p
coupling terms in Eq.~\ref{ham}, as apart from changing the strength
of the charge order, they do not greatly change the thermodynamic
properties \cite{Clay07a}. $U$ is the onsite Coulomb interaction and
$V_1$ and $V_2$ are the nearest-neighbor Coulomb interactions for
$t_1$ and $t_2$ bonds. We will consider only the case $V_1=V_2=V$. In
the 1D limit, the ground state is a Wigner crystal (WC) for
$V_1>V_1^c$, where $V_1^c=2|t_1|$ for $U \to \infty$ and is larger for
finite $U$. Our discussions of the 1D limit are for $V_1<V_1^c$. All
calculations use periodic boundary conditions.
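In the noninteracting, rigid-lattice limit ($U=V=0$, $\alpha_1=\alpha_2=0$), Eq.~\ref{ham} reduces to a tight-binding chain with first- and second-neighbor hopping. As a quick consistency check (an illustrative sketch of ours, not part of the calculations reported in this paper), the following Python/NumPy fragment builds the one-electron hopping matrix with periodic boundary conditions and verifies that its spectrum coincides with the dispersion $E(q)=-2t_1\cos q-2t_2\cos 2q$ on the grid $q=2\pi m/N$:

```python
import numpy as np

def zigzag_spectrum(n=24, t1=1.0, t2=0.8):
    """Eigenvalues of the one-electron hopping matrix of the zigzag
    ladder (U = V = 0, alpha_1 = alpha_2 = 0), periodic boundaries."""
    H = np.zeros((n, n))
    for i in range(n):
        for d, t in ((1, t1), (2, t2)):  # t1: zigzag bonds, t2: rail bonds
            H[i, (i + d) % n] = -t
            H[(i + d) % n, i] = -t
    return np.sort(np.linalg.eigvalsh(H))

def analytic_spectrum(n=24, t1=1.0, t2=0.8):
    """Tight-binding dispersion E(q) sampled on the same momentum grid."""
    q = 2.0 * np.pi * np.arange(n) / n
    return np.sort(-2.0 * t1 * np.cos(q) - 2.0 * t2 * np.cos(2.0 * q))
```

The two spectra agree to machine precision for any $N\geq 5$ and any values of $t_1$ and $t_2$.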
\section{Summary of earlier results}
\label{summary}
Given the complexity of our results, it is useful to have a brief
summary of our earlier work. While the numerical results for the 1D
limit and the ground state of the zigzag ladder have been presented
before, the mappings of the ground states to the effective
$\frac{1}{2}$-filled band model that we discuss below give new
insight.
\subsection{1D limit}
\label{1dsummary}
For moderate to strong electron-electron (e-e) interactions but
$V_1<V_1^c$, the $\frac{1}{4}$-filled band is bond- or
charge-dimerized at T$<$T$_{\rm{MI}}$. In either case the ground state
enters a spin-Peierls (SP) phase \cite{Clay07a,Yoshimi11a} for T$<$T$_{\rm{SG}}$.
We show in Fig.~\ref{1d}(a) a schematic of the bond-dimerized phase,
with site charge densities 0.5, strong intradimer bonds and weak
interdimer bonds. The dimer unit cells (the boxes in the figure)
containing a single electron can be thought of as effective sites,
which leads to the effective $\frac{1}{2}$-filled band description in
this case \cite{Powell11a}. The description of the SP state is then
as shown in Fig.~\ref{1d}(b), with alternating {\it inter}-dimer
bonds. The overall pattern of bond strength here is
$\cdots$Strong-Weak-Strong-Weak$^\prime\cdots$ (SWSW$^\prime$)
\cite{Ung94a,Mazumdar00a,Clay03a}. Unlike the true
$\frac{1}{2}$-filled band, however, the charge degrees of freedom
internal to the dimer unit cell are relevant in the
$\frac{1}{4}$-filled band, and the charge distribution is as shown in
the Fig.~\ref{1d}(b). The coexisting charge order pattern is written
as $\cdots$1100$\cdots$, where `1' (`0') denotes site charge density
of 0.5+$\delta$ (0.5-$\delta$). The CO amplitude $\delta$ varies with
e-e and e-p interaction strengths. Nearest-neighbor spin-singlet
coupling between the electrons on charge-rich `1' sites that are
linked by the W$^\prime$ inter-dimer bond gives the spin gap of this
SP phase. This coexisting broken symmetry state has been termed a
BCDW \cite{Ung94a,Mazumdar00a,Clay03a}, and is the simplest example of
the PEC found more generally in $\frac{1}{4}$-filled systems beyond
the 1D limit \cite{Li10a,Dayal11a}. A finite-temperature phase
transition to the SWSW$^\prime$ phase, starting from the uniform phase
and passing through the dimer phase of Fig.~\ref{1d}(a), has been demonstrated
within an extended Hubbard model that incorporated interchain
interactions at a mean-field level \cite{Otsuka08a}.
The pattern of bond order modulation in the 1D PEC is not unique but
depends on the strength of e-e interactions \cite{Ung94a}. The
bond-order pattern that dominates for relatively weaker e-e
interactions, or for relatively strong e-p interaction has the form
$\cdots$Strong-Medium-Weak-Medium$\cdots$ (SMWM) \cite{Ung94a}.
Importantly, in this bond pattern the coexisting charge modulation
pattern is again $\cdots$1100$\cdots$ but the spin singlets now
coincide with the strongest `S' {\it intra}-dimer bonds
(Fig.~\ref{1d}(c)). As shown in Fig.~\ref{1d}(d), the corresponding
effective $\frac{1}{2}$-filled band is now different; pairs of sites
with large (small) charge densities are now mapped onto effective
sites with charge density 2 (0). The bonds between the effective sites
are now equivalent, and the effective state is now a
$\frac{1}{2}$-filled band {\it site}-diagonal CDW, which is known to
have a single transition where charge and magnetic gaps open
simultaneously. Further, as the strongest bond coincides with the
location of the spin-singlet, this state is also expected to have a
larger SG and transition temperature than T$_{\rm{SG}}$ in the
SWSW$^\prime$ case. A simple physical picture for the two-versus-one
transition is thus obtained from the 1D study: if the strongest bonds
are between a charge-rich and a charge-poor site, the spin-singlets
formed are inter-dimer, and there will occur two transitions; if,
however, the strongest bond is a 1-1 bond, the spin singlet is located
on an intra-dimer bond and there is a single transition.
\begin{figure}
\centerline{\resizebox{3.0in}{!}{\includegraphics{fig2}}}
\caption{
(a) Bond-dimerized phase with uniform charge density. Boxes denote
dimer units. (b) SP state evolving from (a), with bond pattern
SWSW$^\prime$. Strong `S' dimer bonds are represented by boxes
enclosing two sites. Zeroes indicate sites with charge density
0.5-$\delta$ while spins indicate sites with charge density
0.5+$\delta$. Double dotted bonds are stronger than single-dotted
bonds, but weaker than the dimer bonds. (c) Bond pattern SMWM found
in the case of relatively weaker e-e interactions. Here double
lines indicate the strongest `S' bonds, single lines medium strength
`M' bonds, and dotted lines weak `W' bonds. (d) Effective
$\frac{1}{2}$-filled band model for (c).}
\label{1d}
\end{figure}
\subsection {Zigzag ladder}
The energy dispersion relation in the zigzag ladder in the
noninteracting limit is a sum of two terms,
\begin{equation}
E(q)=-2t_1\cos(q)-2t_2\cos(2q).
\label{bands}
\end{equation}
Here $q$ refers to a wavenumber along the $t_1$ direction, viewing the
ladder as a 1D chain with second-neighbor interactions. The topology
of the bandstructure changes at
$t_2/t_1=(t_2/t_1)_c=(2+\sqrt{2})/2=1.707\cdots$; for
$t_2/t_1<(t_2/t_1)_c$ the Fermi surface consists of two points at
$q=\pm k_F$, with $k_F=\frac{\pi}{4}$, while for $t_2/t_1>(t_2/t_1)_c$ there are four
such points. In the presence of e-p coupling the ground state is then
unstable\cite{Clay05a} to a Peierls distorted state for
$t_2/t_1<(t_2/t_1)_c$. As in 1D, the ground state in the distorted
region again has $\cdots1100\cdots$ CO and a spin gap.
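The value of $(t_2/t_1)_c$ follows from a simple argument: at $\rho=\frac{1}{2}$ the two Fermi points lie at $q=\pm\pi/4$, where the $t_2$ term in Eq.~\ref{bands} vanishes, so $E_F=-\sqrt{2}t_1$. The band topology changes when the local maximum at $q=\pi$, $E(\pi)=2t_1-2t_2$, reaches $E_F$; setting $2t_1-2t_2=-\sqrt{2}t_1$ gives $(t_2/t_1)_c=(2+\sqrt{2})/2$. The short Python sketch below (ours, for illustration only) counts the Fermi-level crossings of a discretized band on either side of this value:

```python
import numpy as np

def n_fermi_points(t2, t1=1.0, n_k=4000):
    """Count Fermi-level crossings of E(q) = -2 t1 cos q - 2 t2 cos 2q
    at quarter band filling (charge density rho = 1/2)."""
    q = -np.pi + 2.0 * np.pi * (np.arange(n_k) + 0.5) / n_k
    e = -2.0 * t1 * np.cos(q) - 2.0 * t2 * np.cos(2.0 * q)
    # chemical potential placing the lowest quarter of levels below it
    mu = np.sort(e)[n_k // 4 - 1 : n_k // 4 + 1].mean()
    s = np.sign(e - mu)
    return int(np.sum(s != np.roll(s, 1)))  # sign changes around the BZ
```

For $t_2/t_1=1.68$ this yields 2 Fermi points, while $t_2/t_1=1.75$ yields 4, bracketing the critical ratio $1.707\cdots$.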
Importantly, for the $\rho=\frac{1}{2}$ zigzag ladder, there is no WC
state with a charge order {\it distinct} from the $\cdots1100\cdots$
charge order found in the PEC (for $V_1 \sim V_2$). In the zigzag
ladder, the WC charge-order pattern $\cdots1010\cdots$ can be placed
along the two $t_2$ directions in two different ways, both of which
lead to the same PEC state shown in Fig.~\ref{lattice}(b). Placing
the WC CO pattern $\cdots1010\cdots$ along the zigzag direction would
place all charge on a single $t_2$ chain and is stable only in the
limit $V_1\gg V_2$. Thus only the PEC of Fig.~\ref{lattice}(b) is
stable for realistic $V_1 \sim V_2$. As we show below, this has
important consequences for both the number of transitions expected in
the zigzag ladder and the nature of the spinon excitations.
\section{Numerical Results for the zigzag ladder}
\label{results}
We use two different numerical methods to solve Eq.~\ref{ham}. For
the ground state order parameters and the spin gaps in the interacting
case we use a new variational quantum Monte Carlo (QMC) method based on an
MPS basis \cite{Sandvik07a}. For quasi-1D systems, this MPS-QMC method
provides accuracy similar to that of the Density Matrix
Renormalization Group (DMRG)\cite{White92a,White93a} method. Similar
methods using MPS representations have predominantly been used to
study quantum spin systems. Our results here show, however, that they
may be successfully applied to electronic models as well. Details on
the application of this method to Hubbard-type models are discussed in
Appendix \ref{appendixa}. For finite temperatures our calculations are
for zero e-p interactions ($\alpha_1=\alpha_2=0$). This is because
information about the tendency to distortion exists in the
wavefunction even without inclusion of explicit e-p interactions
\cite{Hirsch84a}. We use here the standard determinantal QMC method
\cite{Loh92a}. While the lowest temperatures achievable by this method
are limited by the Fermion sign problem, in the present system for
density $\rho=\frac{1}{2}$, inverse temperatures of $\beta \approx
10-16$ are reachable for parameters $U\approx 4$ and $V=0$.
\subsection{Charge order, bond periodicity, and spin gap}
\begin{figure}
\centerline{\resizebox{3.2in}{!}{\includegraphics{fig3}}}
\caption{(color online) (a) $E_s$, the per site spring constant energy
term in Eq.~\ref{ham}, as a function of $t_2/t_1$ for a 256 site
ladder with $U=V=0$. Electron-phonon parameters are $\alpha_1=1.6$,
$\alpha_2=0$ (circles); $\alpha_1=0$, $\alpha_2=1.6$ (squares); and
$\alpha_1=1.6$, $\alpha_2=1.6$ (diamonds). (b) Cooperative
enhancement $\Delta E_s$ as a function of $t_2/t_1$ (see text). The
inset shows the small $t_2/t_1$ region in more detail.}
\label{nonint}
\end{figure}
We first consider the solution of Eq.~\ref{ham} in the limit of zero
e-e interactions ($U=V=0$). Because Eq.~\ref{ham} includes e-p
coupling in both the $t_1$ and $t_2$ directions, the variation of the
lattice distortion amplitude with the ratio $t_2/t_1$ is nontrivial. The
spring constant energy per site, $E_s$, the sum of the $K_1$ and $K_2$ terms in
Eq.~\ref{ham}, is a convenient measure of the strength of the lattice
distortion. Fig.~\ref{nonint}(a) shows $E_s$ as a function of $t_2/t_1$
for a 256 site ladder. Three different choices for e-p couplings are
shown: (i) $\alpha_1=\alpha$, $\alpha_2=0$; (ii) $\alpha_1=0$,
$\alpha_2=\alpha$; and (iii) $\alpha_1=\alpha_2=\alpha$. In the first
case with e-p coupling only along the zigzag direction, the lattice
distortion is strongest when $t_2=0$, and vanishes continuously at
$(t_2/t_1)_c$. In the second case with e-p coupling only along the
$t_2$ bonds, the strength of the distortion increases with
$t_2$. Putting these two effects together, the strength of the
lattice distortion in
the general case with $\alpha_1$ and $\alpha_2$ nonzero shows a
minimum at an intermediate value of $t_2/t_1$, as seen in Fig.~\ref{nonint}(a).
The precise $t_2/t_1$
value of this minimum will depend on the specific choices for
$\alpha_1$ and $\alpha_2$.
Fig.~\ref{nonint}(a) also shows that the $t_2$ and $t_1$ lattice
distortions act {\it cooperatively}--the total lattice distortion for
both $\alpha_1$ and $\alpha_2$ nonzero is considerably stronger than
the sum of the independent distortions. In Fig.~\ref{nonint}(b), we
plot a quantitative measure of the cooperative enhancement, $\Delta
E_s=E_s(\alpha_1=\alpha_2=\alpha)
-(E_s(\alpha_1=\alpha,\alpha_2=0)+E_s(\alpha_1=0,\alpha_2=\alpha))$,
as a function of $t_2/t_1$. $\Delta E_s$ increases continuously with
$t_2/t_1$, and as shown in the inset is nonzero even for small values
of $t_2/t_1$. As we will discuss further in Section \ref{discussion},
this cooperative effect also has potentially important consequences
for materials---if both $\alpha_1$ and $\alpha_2$ are nonzero, near
$(t_2/t_1)_c$ the lattice distortion changes abruptly.
\begin{figure}
\centerline{\resizebox{3.2in}{!}{\includegraphics{fig4}}}
\caption{Order parameters versus $t_2/t_1$ for the zigzag ladder with
$U=6$, $V=1$, and $\alpha=1.6$. Results are finite-size scaled from
20, 28, 36, and 60 site lattices. (a) 4k$_F$ component of the
lattice distortion, and (b) spring energy per site, $E_s$, (c)
charge disproportionation $\Delta$n, and (d) singlet-triplet gap
$\Delta_{\rm{ST}}$. Open symbols are for $V=V_1=V_2$, and filled
symbols at $t_2/t_1=0$ are for $V=V_1$, $V_2=0$ (see text).}
\label{data}
\end{figure}
A key difference between the $\rho=\frac{1}{2}$ zigzag ladder and the
single chain is that in the ladder the bond pattern of the PEC
remains SMWM regardless of the strength of e-e interactions.
In 1D the displacement of site $j$ from equilibrium, $u_j$
($\Delta_{j,j+1}=u_{j+1}-u_j$ in Eq.~\ref{ham}),
can be written as \cite{Ung94a}
\begin{equation}
u_j=u_0[r_2\cos(2k_{\rm{F}}j-\theta_2) + r_4\cos(4k_{\rm{F}}j-\theta_4)],
\label{r2r4}
\end{equation}
where $r_2$ and $r_4$ are relative components of the period-4
2k$_{\rm{F}}$ and period-2 4k$_{\rm{F}}$ lattice distortions and $u_0$
is an overall amplitude. $r_2$ and $r_4$ are normalized such that
$r_2+r_4=1$. The SMWM bond pattern corresponds to $r_4<0.41$ while
the SWSW$^\prime$ pattern\cite{Ung94a} has $r_4>0.41$. In both cases
$\theta_2=\frac{\pi}{4}$ and $\theta_4=0$. Note that $\Delta_{j,j+2}$
in Eq.~\ref{ham} is an independent lattice distortion of the $t_2$
bonds and has no counterpart in the 1D model.
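The distinction between the two patterns can be made explicit by evaluating the bond deformations implied by Eq.~\ref{r2r4} over one period. With $k_F=\pi/4$, $\theta_2=\pi/4$, and $\theta_4=0$, the four bonds $\Delta_{j,j+1}=u_{j+1}-u_j$ take the values $u_0\{-2r_4,\,2r_4-\sqrt{2}r_2,\,-2r_4,\,2r_4+\sqrt{2}r_2\}$: the $4k_F$ component produces a degenerate pair of bonds, which is intermediate in strength (the two `M' bonds of SMWM) for small $r_4$ and extremal (the degenerate pair of SWSW$^\prime$) for large $r_4$. The Python sketch below (ours, purely illustrative) classifies the two limiting cases; it uses only this geometric degeneracy criterion, not the full analysis of Ref.~\cite{Ung94a} that fixes the quoted boundary $r_4\approx0.41$:

```python
import numpy as np

def bond_values(r4, u0=1.0):
    """Bond deformations Delta_{j,j+1} over one period of the distortion
    u_j = u0 [r2 cos(2 kF j - pi/4) + r4 cos(4 kF j)], kF = pi/4, r2 + r4 = 1."""
    r2 = 1.0 - r4
    j = np.arange(5)
    u = u0 * (r2 * np.cos(0.5 * np.pi * j - 0.25 * np.pi)
              + r4 * np.cos(np.pi * j))
    return u[1:] - u[:-1]

def pattern_type(r4):
    """Classify by which pair of bonds is degenerate."""
    d = np.sort(bond_values(r4))
    # degenerate pair in the middle -> two equal 'M' bonds (SMWM-like);
    # degenerate pair at an extreme -> SWSW'-like
    return "SMWM" if np.isclose(d[1], d[2]) else "SWSW'"
```

For example, `pattern_type(0.1)` returns `SMWM` while `pattern_type(0.6)` returns `SWSW'`; the four deformations sum to zero over a period, as they must for a periodic lattice.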
While the PEC occurs in the thermodynamic limit for
$t_2/t_1<(t_2/t_1)_c$ and $\alpha=0^+$, in finite lattices a finite
e-p coupling is required to observe the broken-symmetry ground state.
We have chosen the e-p coupling strength $\alpha$ just larger
than the minimum required for the ground and triplet states to be
Peierls distorted. The strength of the lattice distortion depends on
the value of $\alpha$, and therefore results here should
not be directly compared to experimental values.
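In the simplest noninteracting ($U=V=0$) adiabatic limit, the self-consistency condition follows from $\partial\langle H\rangle/\partial\Delta_{i,i+1}=0$, giving $\Delta_{i,i+1}=(\alpha t_1/K)\langle B_{i,i+1}\rangle$ together with the usual zero-mean constraint on the deformations. The Python sketch below (ours, illustrative only: it keeps e-p coupling on the $t_1$ bonds alone, uses plain damped iteration, and is not the self-consistent MPS-QMC procedure used for the results in this paper) iterates this condition on a 36-site ladder and compares the distorted and undistorted energies:

```python
import numpy as np

def peierls_scf(n=36, t1=1.0, t2=0.5, alpha=1.6, K=2.0,
                n_iter=500, mix=0.5, seed_amp=0.05):
    """Self-consistent adiabatic Peierls loop at U = V = 0, with
    e-p coupling on the t1 (zigzag) bonds only (alpha_2 = 0)."""
    rng = np.random.default_rng(0)
    delta = seed_amp * rng.standard_normal(n)   # seed t1-bond distortion
    delta -= delta.mean()
    n_orb = n // 4                              # doubly occupied orbitals

    def solve(d):
        H = np.zeros((n, n))
        for i in range(n):
            H[i, (i + 1) % n] = H[(i + 1) % n, i] = -t1 * (1 + alpha * d[i])
            H[i, (i + 2) % n] = H[(i + 2) % n, i] = -t2
        return np.linalg.eigh(H)

    for _ in range(n_iter):
        w, v = solve(delta)
        occ = v[:, :n_orb]
        # <B_{i,i+1}> summed over both spins (real orbitals)
        B = 4.0 * np.array([(occ[i] * occ[(i + 1) % n]).sum()
                            for i in range(n)])
        new = (alpha * t1 / K) * B
        new -= new.mean()                       # zero-mean constraint
        if np.max(np.abs(new - delta)) < 1e-10:
            delta = new
            break
        delta = (1 - mix) * delta + mix * new   # damped mixing

    w, _ = solve(delta)
    e_dist = 2.0 * w[:n_orb].sum() + 0.5 * K * np.sum(delta**2)
    w0, _ = solve(np.zeros(n))
    e_undist = 2.0 * w0[:n_orb].sum()
    return e_dist, e_undist, delta
```

With these parameters the iteration settles into a distorted solution whose total energy, including the elastic cost, is at or below that of the undistorted lattice.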
\begin{figure}
\centerline{\resizebox{3.2in}{!}{\includegraphics{fig5}}}
\caption{Dependence of order parameters on $V$. Results are
finite-size scaled from 20, 28, 36, and 60 site lattices.
Quantities plotted in panels (a)-(d) are as defined in
Fig.~\ref{data}. Here $t_2/t_1=1.5$, $U=6$, and $\alpha=1.6$. }
\label{vdata}
\end{figure}
In Fig.~\ref{data} we plot the finite-size scaled values of several
order parameters determined self-consistently versus $t_2/t_1$ for
$\alpha=1.6$, $U=6$, and $V=1$. Exact and MPS-QMC calculations were
performed for 20, 28, 36, and 60 site lattices. Details of the
finite-size extrapolation are given in Appendix \ref{appendixb}.
Fig.~\ref{data}(a) shows the 4k$_F$ lattice
distortion strength and Fig.~\ref{data}(b) the spring constant
energy per site (including both $t_1$ and $t_2$ lattice distortions).
Fig.~\ref{data}(c) shows the charge disproportionation $\Delta n$,
defined as the difference between the charge density on charge-rich
and charge-poor sites. The singlet-triplet gap defined as
$\Delta_{\rm{ST}}=E(S=1)-E(S=0)$ is plotted in Fig.~\ref{data}(d).
Note that two different `1D' limits are shown in Fig.~\ref{data} at
$t_2/t_1=0$: either including the second neighbor Coulomb interaction
($V_2=V$, open symbols), or with only nearest-neighbor Coulomb
interactions ($V_2=0$, filled symbols).
Focusing first on the $t_2/t_1=0$ limit, the bond pattern is SMWM in
the case where $V_2$ is nonzero, as is found in the 1D weakly
correlated band\cite{Ung94a}. In the more traditional 1D case with no
second-neighbor $V_2$, the bond distortion pattern is SWSW$^\prime$
for $U=6$, $V=1$ with $r_4>0.41$ (filled circle). For equal e-p
coupling, the lattice distortion energy, $\Delta n$, and
$\Delta_{\rm{ST}}$ are all stronger when the bond pattern is SMWM.
Away from the 1D limit, $r_4$ increases with increasing $t_2/t_1$ but
is always less than the 0.41 that would be necessary to reach the
SWSW$^\prime$ bond pattern. The amplitude of the lattice distortion
measured by the spring constant energy in Fig.~\ref{data}(b) shows
a minimum at intermediate $t_2/t_1$, as in the non-interacting case.
Surprisingly, $\Delta n$ {\it decreases} as the lattice distortion
becomes stronger--this is one significant difference from the 1D chain
BCDW state where $\Delta n$ follows the strength of the bond
distortion. Reference \cite{Clay05a} showed that the spin gap
in the PEC state of the zigzag ladder is larger than the gap in the
single chain having the same $\Delta n$. As Fig.~\ref{data}(d)
shows, the gap in the ladder increases with $t_2/t_1$, and is largest
near $(t_2/t_1)_c$.
The $U$ interaction weakens the PEC in the zigzag ladder (not
shown). This decrease is, however, weaker than in the 1D
chain\cite{Clay05a}. In Fig.~\ref{vdata} we show the results of
varying $V$ while keeping other parameters ($t_2/t_1=1.5$, $U=6$, and
$\alpha=1.6$) fixed. Even at large $V$ ($V>U/2$), as expected, we did
not find any transition to a distinct WC state; in all cases the CO
pattern is $\cdots1100\cdots$ along the zigzag direction. Contrary to
what occurs in the 1D limit, the 4k$_F$ component of the bond
distortion actually {\it decreases} with increasing $V$. While in 1D
$V$ destabilizes the $\cdots1100\cdots$ CO, in the zigzag ladder this
effect along the $t_1$ direction is countered by $V_2$,
which prefers $\cdots1010\cdots$ order along the $t_2$ directions. A
similar effect is found in the $\frac{1}{4}$-filled PEC state in the
2D anisotropic triangular lattice---there also $V$ strengthens the
$\cdots1100\cdots$ CO provided $V$ is not too large \cite{Dayal11a}.
The most
\begin{figure*}[t]
\centerline{\resizebox{5.5in}{!}{\includegraphics{fig6}}}
\caption{(color online) Representative temperature-dependent QMC
results for bond ((a) and (d)), charge ((b) and (e)), and spin
susceptibilities ((c) and (f)) for a 36 site zigzag ladder with
$U=4$, $V=0$, and $\alpha=0$. $t_2/t_1$ is 0.4 in panels (a)-(c),
and 1.4 in panels (d)-(f).}
\label{qmcdata1}
\end{figure*}
striking result of Fig.~\ref{vdata} is the very large spin
gap that is obtained when both $t_2/t_1$ and $V$ are moderately
large. We will return to this point later in Section \ref{discussion}.
\subsection{Thermodynamics and spinon binding}
To understand the thermodynamics of the zigzag ladder we present
results from complementary calculations of (i)
wavenumber-dependent susceptibilities, and (ii) the nature of higher-spin
states. Finite-temperature calculations of susceptibilities here
are done within the static undistorted lattice.
\subsubsection{Temperature dependence of susceptibilities}
We calculate wavenumber dependent charge ($\chi_\rho(q)$), spin
($\chi_\sigma(q)$), and bond susceptibilities ($\chi_B(q)$), defined
as
\begin{equation}
\chi_x(q) = \frac{1}{N} \sum_{j,k} e^{iq(j-k)} \int_{0}^{\beta} d\tau
\langle O^x_j(\tau)O^x_k(0)\rangle.
\label{eqn:susceptibility}
\end{equation}
In Eq.~\ref{eqn:susceptibility}, $O^\rho_j = n_{j,\uparrow} +
n_{j,\downarrow}$, $O^\sigma_j = n_{j,\uparrow} - n_{j,\downarrow}$,
and $O^B_j = B_{j,j+1}$, for charge, spin, and bond order
susceptibilities, respectively. $\beta$ is the inverse temperature in
units of $t_1$. To facilitate comparison with 1D, the wavenumber $q$ is again
taken to be one-dimensional, as in Eq.~\ref{bands}. The presence and
periodicity of charge and bond order can be determined from
divergences of the charge or bond-order susceptibility as
$\beta\rightarrow\infty$. Within determinantal QMC methods, finite $V$
leads to a significantly worse Fermion sign problem and hence we
present results only for nonzero $U$ but $V=0$. However, as shown in
Fig.~\ref{vdata}(a), $V$ does not change the pattern of the bond
distortion. We therefore expect that our results here are
representative for arbitrary $U$ and $V$.
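In the $U=V=0$ limit, the low-temperature structure expected in these susceptibilities can be anticipated from the Lindhard function of the band of Eq.~\ref{bands}, $\chi_0(q)=N^{-1}\sum_k [f(\epsilon_k)-f(\epsilon_{k+q})]/(\epsilon_{k+q}-\epsilon_k)$, with $f$ the Fermi function. The Python sketch below (ours; a noninteracting guide to the $2k_F$ enhancement, not a substitute for the QMC data) evaluates $\chi_0(q)$ at quarter filling:

```python
import numpy as np

def lindhard_charge(q, t1=1.0, t2=0.4, beta=20.0, n_k=2000):
    """Noninteracting charge susceptibility chi_0(q) of the band
    E(k) = -2 t1 cos k - 2 t2 cos 2k at quarter band filling."""
    k = -np.pi + 2.0 * np.pi * (np.arange(n_k) + 0.5) / n_k

    def eps(k):
        return -2.0 * t1 * np.cos(k) - 2.0 * t2 * np.cos(2.0 * k)

    e, ekq = eps(k), eps(k + q)
    mu = np.sort(e)[n_k // 4 - 1 : n_k // 4 + 1].mean()  # quarter filling
    f = 1.0 / (np.exp(beta * (e - mu)) + 1.0)
    fkq = 1.0 / (np.exp(beta * (ekq - mu)) + 1.0)
    de = ekq - e
    with np.errstate(divide="ignore", invalid="ignore"):
        # degenerate levels: take the -df/dE limit, beta f (1 - f)
        chi = np.where(np.abs(de) > 1e-9,
                       (f - fkq) / de, beta * f * (1.0 - f))
    return float(chi.mean())
```

At $\beta=20$ and $t_2/t_1=0.4$, $\chi_0(q)$ is largest near $q=2k_F=\pi/2$, in line with the low-temperature $2k_F$ peaks seen in the QMC susceptibilities.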
Fig.~\ref{qmcdata1} shows the bond, charge, and spin susceptibilities
as a function of $q$ and temperature for a $N=36$ site ladder with
$U=4$, $V=0$, and two representative values of $t_2/t_1$, 0.4
(Fig.~\ref{qmcdata1}(a)-(c)) and 1.4 (Fig.~\ref{qmcdata1}(d)-(f)). At
low temperature, the bond, charge, and spin susceptibilities all peak
at 2k$_F$, consistent with period-four order of charges and bonds as
expected. The 2k$_F$ peak in the spin susceptibility corresponds to
the short-range AFM spin order found in the 1D $\frac{1}{4}$-filled
chain. As seen in Fig.~\ref{qmcdata1}(c) and (f), the spin
susceptibility converges to a finite value as $q\rightarrow 0$ at low
temperatures. This is consistent with the expectation that no SG
exists in the ladder in the limit of zero e-p coupling. Several
differences can be noted when comparing results for small and large
$t_2/t_1$. First, the 2$k_F$ bond and charge response as T$\rightarrow
0$ is stronger for small $t_2/t_1$. This reflects the larger amplitude
distortion ($\Delta n$ and bond order) found in the 1D
weakly-correlated limit (see Fig.~\ref{data}(b)-(c)). The 2$k_F$ spin
response becomes weaker with increasing $t_2/t_1$, and
$\chi_\sigma(q)$ is reduced for small $q$. These changes reflect the
increasing strength of the spin-singlet bond along the $t_1$ direction
with increasing $t_2/t_1$, which in turn leads to the increase in
$\Delta_{\rm{ST}}$. For larger $t_2/t_1$, a broad plateau appears at
$q\approx 3\pi/4$ in the charge and spin susceptibilities. This
plateau is however non-divergent---while it is significant at high and
intermediate T, at low temperatures the 2$k_F$ response becomes
stronger. We will show below that this plateau reflects the binding of
spinon excitations.
Summarizing, the above numerical results show that in the zigzag
ladder, no separate high-temperature ordering is expected; instead the
ladder is metallic at high temperature and as temperature decreases a
single transition to the PEC state with SG takes place. This is in
contrast to what happens in 1D \cite{Clay07a}.
\subsubsection{Spinon excitations}
The structure of excitations out of the ground state provides an
alternate way to understand the thermodynamics of a
strongly-correlated system. In the 1D limit, flipping one spin in the
PEC results in two spinons which are unbound and hence separated by a
distance of $N/2$ sites on the lattice. We have performed
self-consistent calculations within Eq.~\ref{ham} of excited spin
triplet ($S=1$) excitations to detect and study such spinon
excitations in the zigzag ladder. Our calculations are for moderately
large e-p interactions so that the widths of the charge-spin solitons
are relatively narrow. This is necessary in order to prevent overlaps
between the solitons. In Fig.~\ref{mps36site} we show the charge and
spin densities in the $S=1$ state of a 36 site zigzag ladder for small
and large $t_2/t_1$. Spinon excitations are identified as defects in
the $\cdots1100\cdots$ charge order, and also from their large local
spin densities \cite{Clay07a}.
For small $t_2/t_1$ (Fig.~\ref{mps36site}(a)), flipping a single spin
results in two separated spinons as in the 1D chain. The repulsive
interaction between spinons results in their separating to opposite
positions on the periodic lattice. As indicated in the figure, each
spinon occupies two lattice sites, and there occur both charge and
spin modulations. The charge density at each spinon site is 0.5; the
spin densities are also equal on the two sites. The PEC charge and
bond distortions on either side of a spinon are out of phase with each
other by two lattice sites. The charge densities on the two sites at
the center of each spinon are 0.5, 0.5 (see Fig.~\ref{mps36site}(a)). With
increasing $t_2/t_1$, the lattice separation between the spinons
decreases, indicating binding. For the e-p coupling of
Fig.~\ref{mps36site}, at $t_2/t_1\approx 1.0$ the bound
spinons form a single excitation that occupies 4 lattice sites with
approximately uniform charge density of 0.5 on each site
(Fig.~\ref{mps36site}(b)). For spin states higher than 1 (not shown
here), we found that spinons are bound in pairs; for example in the
$S=2$ state there are two of the defects shown in
Fig.~\ref{mps36site}(b), separated by the maximum possible lattice
spacing. Spinon binding is usually associated with an increase in the
singlet-triplet gap, and can thus explain the observation
(Fig.~\ref{data}(d)) that for fixed $V$, $\Delta_{\rm{ST}}$
increases with increasing $t_2/t_1$, even though $\Delta n$ decreases
at the same time.
The thermodynamic behavior is understood only when the complementary
calculations of Fig.~\ref{qmcdata1} and \ref{mps36site} are taken
together. Fig.~\ref{mps36site} shows spinon creation upon a single
spin excitation, while the thermodynamic properties at intermediate
and high temperatures are dominated by high-spin configurations
containing multiple spinons. Thus the signatures of spinons, including
spinon binding, are also to be found in the finite-temperature
QMC results of Fig.~\ref{qmcdata1}. In the 1D limit spinons have
local 4k$_F$ charge or bond order \cite{Clay07a}. As in the 1D chain
(reference \cite{Clay07a}), when $t_2/t_1$ is small we find that the
4k$_F$ bond and charge susceptibilities {\it increase} with
temperature (see Fig.~\ref{qmcdata1}(a)-(b) at $q=\pi$).
When $t_2/t_1$ is of order unity, several differences are seen in the
susceptibilities that can be correlated with the binding of spinons
shown in Fig.~\ref{mps36site}. First, at intermediate temperatures,
the charge and spin susceptibilities have a broad plateau at $q\approx
3\pi/4$. When spinons bind, the larger size of these defects moves the
charge and spin response away from 4k$_F$ and towards a smaller
wavenumber. Second, in this parameter region the 4k$_F$ bond
susceptibility remains small regardless of temperature and instead at
higher temperatures a broad response from $0<q<\pi/2$ appears
(Fig.~\ref{qmcdata1}(d)). This small-$q$ (long wavelength) bond order
response is a result of increasing numbers of bound spinons created in
high-spin configurations. Each high-spin configuration has multiple
bound spinons equally separated from each other, giving a
long-wavelength bond order distortion with $0<q<\pi/2$. Related to
this is the observation that $\chi_B(2k_F)$ varies only weakly with
temperature in the ladder limit. In the 1D limit, each spinon
separates regions where the PEC is out of phase by two lattice units
(Fig.~\ref{mps36site}(a)); due to this phase difference the
introduction of spinons will lead to a rapid decrease in the 2k$_F$
response. On the other hand in the ladder limit, the PEC remains ``in
phase'' on either side of bound spinons (Fig.~\ref{mps36site}(b)),
resulting in a weaker temperature dependence of the 2k$_F$ bond
susceptibility---bound spinons suppress the BCDW locally, but do not
disturb the overall phase of the density wave as do single spinons.
\begin{figure}
\centerline{\resizebox{3.4in}{!}{\includegraphics{fig7}}}
\caption{(color online) MPS-QMC charge and spin density in the triplet
state versus site index $j$ along the $t_1$ direction. Parameters
are $N=36$, $U=6$, $V=1$, and $\alpha=1.6$. In panels (a) and (b),
$t_2/t_1$=0.2 and 1.4, respectively. Circles (diamonds) are charge
(spin) density. Arrows indicate location of spinon defects (see
text). Statistical errors are smaller than symbol sizes.}
\label{mps36site}
\end{figure}
\section{Discussion}
\label{discussion}
We have shown that the $\frac{1}{4}$-filled correlated zigzag ladder
has the following properties: First, due to the geometry of this
lattice, the WC state driven by strong nearest-neighbor Coulomb
interactions in 1D is strongly suppressed. Instead, a single PEC state
with CO pattern $\cdots1100\cdots$ occurs over a wide range of
$t_2/t_1$ ($0 \leq t_2/t_1 \leq 1.707$). Second, unlike in the 1D
$\frac{1}{4}$-filled band, the bond order pattern in the zigzag ladder
is $SMWM$ for all Coulomb interactions, and the bond distortion
$SWSW^\prime$ never occurs. Third, with increasing $t_2/t_1$ from the
1D limit, the CO amplitude decreases for fixed $V$ until it vanishes
above a critical value. Surprisingly, the singlet-triplet gap
increases in magnitude at the same time. We have shown that this
increase is accompanied by the binding of spinon excitations. The
singlet-triplet gap can be very large when $t_2/t_1$ and $V$ are both
moderately large (see Fig.~\ref{vdata}(d)). Finally, our QMC
calculations show that in the zigzag ladder there is no distinct
intermediate state between the low temperature PEC and the high
temperature metallic state, and there is a single metal-insulator
transition that is accompanied by simultaneous bond and charge
distortions and spin gap. The existence of simultaneous transitions
is expected here, as the spin-singlet is of the ``intra-dimer''
type shown in Fig.~\ref{1d}(c)-(d).
In the isotropic $\frac{1}{4}$-filled band 2D square lattice, adding a
frustrating diagonal $t^\prime$ bond results in a transition from a
uniform state to a paired PEC state once $t^\prime$ exceeds a critical
value \cite{Li10a,Dayal11a}. As with spin systems, the zigzag ladder
provides a simple model for the study of the frustration-driven PEC
state in 2D, with the difference that the frustration does not create
the SG state in the ladder, but rather enhances the SG that is already
present in the unfrustrated model.
This enhancement can be large due to the cooperative interaction between
the two kinds of electron-phonon interactions that are possible in the
ladder, as shown in Fig.~\ref{nonint}.
We discuss below the implications
of our work for understanding experiments on $\frac{1}{4}$-filled
materials in general (including both 2D and 3D). We first consider
specifically those materials which have been suggested to be ladders
based on their crystal structures. We then also consider the broader
class of $\frac{1}{4}$-filled band materials. Ideally, this second
class of materials requires an understanding of the excitations and
thermodynamics of the higher dimensional PEC, which is a much more
formidable task than the ladder calculations. We nevertheless point
out that broad conclusions can be drawn for these systems based on our
current work. The majority of the materials we consider belong to the
large family of low dimensional organic charge transfer solids (CTS),
within which many examples of SG ground states are found with quasi-1D
and 2D lattices \cite{Ishiguro}. We also point out a few inorganic
$\frac{1}{4}$-filled materials that exhibit similar transitions.
\subsection{$\frac{1}{4}$-filled ladder candidates}
While most ladder materials found have been $\frac{1}{2}$-filled
\cite{Dagotto96a}, the family of CTS materials (DTTTF)$_2$M(mnt)$_2$,
where M is a metal ion, are likely candidates for $\frac{1}{4}$-filled
band ladders \cite{Rovira00a}. In this series of compounds, the M=Au
and M=Cu salts (in both of which the metal ion is diamagnetic) have
been studied in the most detail
\cite{Rovira00a,Ribera99a,Wesolowski03a,Ribas05a,Musfeldt08a}.
Structurally, these materials consist of pairs of DTTTF stacks, each
with $\frac{1}{4}$-filled band of holes, separated by stacks of
M(mnt)$_2$ which are Mott-Hubbard semiconductors with
$\frac{1}{2}$-filled electron bands. It is then likely that each
pair of DTTTF stacks behaves as a ladder. Note that the stack direction
corresponds to the $t_2$ direction within our model, and hence these
systems lie in the parameter regime $t_2>t_1$. The very
large\cite{Ribas05a} T$_{\rm{SG}}$ in (DTTTF)$_2$Au(mnt)$_2$ ($\sim$70
K) and (DTTTF)$_2$Cu(mnt)$_2$ ($\sim$90 K), which are nearly an order
of magnitude larger than the spin-Peierls transition temperatures in
the 1D $\frac{1}{4}$-filled systems \cite{Pouget88a}, supports the
conjecture that these systems are ladders.
The MI and SG transitions are, however, distinct in these compounds,
which would argue against the zigzag ladder picture, at least its
simplest version. For M=Au, a broad MI transition occurs at
T$_{\rm{MI}}\approx 220$ K, followed by a decrease in the magnetic
susceptibility at 70 K \cite{Ribera99a}. Below the MI transition,
diffuse X-ray scattering at $b^\star/2$ indicates dimerization along
$t_2$, but broad line widths suggest the dimerization order is not
long-ranged \cite{Ribera99a}. The M=Cu salt is isostructural to the
Au salt with slightly smaller lattice parameters due to the smaller
metal ion. The MI transition for M=Cu occurs at 235 K, and unlike the
Au salt is a sharp, second-order phase transition
that is accompanied by doubling of the unit cell in the ladder
direction \cite{Ribas05a}. Changes in the optical properties at T$_{\rm{MI}}$
and T$_{\rm{SG}}$ for the two salts are also different
\cite{Wesolowski03a,Musfeldt08a}. For M=Au, at T$_{\rm{MI}}$ symmetry
breaking occurs along the rung direction (perpendicular to the DTTTF
stacks), while at T$_{\rm{SG}}$, symmetry breaking is predominantly
along the stacks \cite{Wesolowski03a}. In contrast, optical response
indicates symmetry breaking in both rung and stack directions at
T$_{\rm{MI}}$ for M=Cu \cite{Musfeldt08a}.
We believe that the $\frac{1}{4}$-filled band zigzag ladder model is
nevertheless a valid description for both M = Au and Cu at low
temperatures. The only other competing model for these systems is the
rectangular ladder model \cite{Wesolowski03a}, wherein each DTTTF
molecule is coupled to a single other such molecule on the neighboring
stack. Such a description would be against the known crystal
structures \cite{Rovira00a}. Furthermore, within the rectangular
ladder model, there needs to occur a high temperature metal-insulator
transition accompanied by in-phase bond dimerization, such that each
dimer of DTTTF molecules has a single electron; the ladder after
dimerization would be akin to a rectangular spin ladder, which has a SG
for all interstack spin exchange. The two stacks need to be identical
within the model and hence there is no symmetry breaking within the
rectangular ladder scenario along the rung direction at any
temperature. Neither is there any CO within the model, in
contradiction to what is found in optical measurements
\cite{Musfeldt08a}.
There can be several different reasons why the high temperature MI
transition occurs within the zigzag ladder scenario. First, muon spin
rotation experiments suggest significant interladder coupling
\cite{Arcon99a}, that has been ignored in our isolated ladder
model. Second, our model does not take into account the
temperature-dependent lattice expansion that is common to CTS
crystals. Note that lattice expansion will affect the interstack
hopping $t_1$ much more strongly than the intrastack hopping $t_2$. It
is then conceivable that at high temperatures the interstack distance
is large enough (and $t_1$ is small enough) that $t_2/t_1$ is greater
than the critical value 1.707 and the systems behave as independent
chains. The lattice contracts at reduced temperatures, increasing
$t_1$ and reducing $t_2/t_1$, when the systems exhibit zigzag ladder
behavior. This would require $t_2/t_1$ close to 1.7 in the
experimental systems, which is indeed close to ratio of the calculated
hopping integrals for the M = Au system \cite{Ribera99a}. As seen in
Fig.~\ref{data}(d), large $t_2/t_1$ would be in agreement with the
unusually large SG seen in the (DTTTF)$_2$M(mnt)$_2$. There are
however additional complications. During the synthesis of the
(DTTTF)$_2$M(mnt)$_2$ salt, the 1:1 salt (DTTTF)M(mnt)$_2$ is also
produced and crystals of the 1:2 salt must therefore be separated from
this mixture for experiments \cite{Ribas05a}. Relative to other CTS,
available experimental data is thus more limited. Experimental
determinations of the pattern of the CO below T$_{\rm{SG}}$ (and
above, if any), and of the temperature-dependent lattice distortion
are necessary for resolution of the above issues.
\subsection{General classification of $\frac{1}{4}$-filled materials}
As discussed in Section \ref{1dsummary}, in 1D the SG and MI
transitions are distinct when the ground state broken-symmetry state
has the bond pattern SWSW$^\prime$, but are coupled together in a
single transition when the bond pattern is SMWM as occurs in the
zigzag ladder we have considered here. In 1D, the strength of e-e
interactions determines which bond pattern is favored. As SG
transitions are found in a number of $\frac{1}{4}$-filled materials
with 2D as well as 3D lattices, generalizations of these results to
higher lattice dimensionality are of great interest. Our results in
Section \ref{results} for the zigzag ladder show that in dimensions
greater than one, e-e interaction strength does {\it not necessarily}
determine the bond distortion pattern and thermodynamics---lattice
structure and frustration are also important.
We point out here an empirical criterion that rationalizes separate
versus coupled SG--MI transitions in $\frac{1}{4}$-filled band
materials at large, which we arrive at by simply extrapolating from the
1D and zigzag ladder results. Our observation is that if the low
temperature structure is such that the singlet bond is interdimer, and
the strongest bond is between intradimer charge-rich and charge-poor
sites (Fig.~\ref{1d}(b)), there occur distinct transitions involving charge
and spin degrees of freedom. Conversely, if the intradimer bond is
between a pair of charge-rich sites (Fig.~\ref{1d}(c)), and the SG is due to
intradimer spin-singlets, there is a single coupled SG-MI transition.
In this latter case in general T$_{SG}$ is high. The first of these
two observations was noted previously by Mori \cite{Mori99b}. We do
not have any microscopic calculation to justify these conclusions for
2D and 3D; they are based on the mappings of Fig.~\ref{1d}. In
Table~\ref{materialstab} we give a list of materials for which the
bond and/or charge ordering pattern below T$_{SG}$ is known. In all
cases our simple criterion appears to be valid (see below). Two of
the entries require additional explanation. (TMTTF)$_2$PF$_6$ is
already insulating at room temperature because of lattice
dimerization; the high temperature CO at 70 K here is of the WC type,
but there is strong redistribution of the charge below T$_{SG}$
\cite{Nakamura07a}, clearly placing this system into the category of
materials that undergo two transitions \cite{Clay07a}. The low
T$_{SG}$ is also a signature of this. The situation with
$\alpha^\prime$-NaV$_2$O$_5$ is exactly the opposite. This system is
also insulating already at high temperature, but now the
charge-ordering and spin-gap transitions occur at the same
temperature, indicating that these transitions are coupled. While
Table~\ref{materialstab} is meant to be representative and not
comprehensive, we are unaware of examples where our criterion
fails. We describe individual materials in detail below.
\begin{table}
\begin{tabular}{l|l|D{.}{.}{0}|D{.}{.}{0}|l}
\hline
Material & D-n & \multicolumn{1}{l|}{T$_{\rm{SG}}$ } &
\multicolumn{1}{l|}{T$_{\rm{CB}}$} & Ref.\\
& & (K) & (K) & \\
\hline
MEM(TCNQ)$_2$ & 1D-2 & 17 & 335 & 42,43 \\%\cite{Huizinga79a,Visser83a} \\
(TMTTF)$_2$PF$_6$ & 1D-2 & 19 & 70 & 38,44 \\% \cite{Pouget88a,Nad06a} \\
$\theta$-(ET)$_2$RbZn(SCN)$_4$ & 2D-2 & 20 & 195 & 45-48 \\%\cite{Mori98b,Miyagawa00a,Watanabe04a,Watanabe07a} \\
EtMe$_3$P[Pd(dmit)$_2$]$_2$ & 2D-2 & 25 & >300 & 49,50 \\%\cite{Kato06a,Tamura06a} \\
$\beta^{\prime\prime}$-(DODHT)$_2$PF$_6$ & 2D-2 & 40 & 255 & 51,52 \\% \cite{Nishikawa02a,Nishikawa05a} \\
(DMe-DCNQI)$_2$Ag & 1D-2 & 80 & 100 & 53 \\% \cite{Werner88a} \\
$\alpha^\prime$-NaV$_2$O$_5$ & 2D-1 & 34 & 34 & 54 \\%\cite{Isobe96a}\\
$\alpha$-(ET)$_2$I$_3$ & 2D-1 & 135 & 135 & 55,56 \\% \cite{Rothaemel86a,Kakiuchi07b} \\
(BDTFP)$_2$PF$_6$(PhCl)$_{0.5}$& 1D-1 &175 & 175 & 57-59 \\%\cite{Ise01a,Uruichi02a,Uruichi03a} \\
CuIr$_2$S$_4$ & 3D-1 & 230 & 230 & 60,61 \\%\cite{Radaelli02a,Khomskii05a} \\
(EDO-TTF)$_2$PF$_6$ & 1D-1 & 280 & 280 & 62,63 \\% \cite{Ota02a,Drozdova04a} \\
\hline
\end{tabular}
\caption{$\frac{1}{4}$-filled materials with spin gapped ground
states. For each the dimensionality and number of transitions D-n,
spin-gap transition temperature T$_{\rm{SG}}$, charge--bond ordering
temperature T$_{\rm{CB}}$, and references are listed.}
\label{materialstab}
\end{table}
\subsubsection{Two transitions: inter-dimer singlet formation}
We have previously discussed quasi-1D CTS where two transitions occur
\cite{Clay07a}. MEM(TCNQ)$_2$ is one example where the low
temperature bond pattern has been well characterized by neutron
scattering \cite{Visser83a}. In MEM(TCNQ)$_2$, the MI transition
occurs near room temperature at T$_{\rm{MI}}$=335 K, followed by a SG
transition at T$_{\rm{SG}}$=17.4 K \cite{Huizinga79a,Visser83a}. The
bond pattern at low temperature is SWSW$^\prime$ \cite{Visser83a}.
The (TMTTF)$_2$X series is another quasi-1D example where CO and SG
occur at different temperatures \cite{Chow00a,Foury-Leylekian04a,Nad06a,Fujiyama06a,Clay07a}.
In the 2D $\theta$-(BEDT-TTF)$_2$MM$^\prime$(SCN)$_4$ series, CO
occurs at the high temperature MI transition
\cite{Mori98b,Miyagawa00a} followed by the SG transition at
T$_{\rm{SG}}\sim$20 K \cite{Mori98b,Watanabe04a,Watanabe07a}. X-ray
studies in the temperature range T$_{\rm{SG}}<$T$<$T$_{\rm{CO}}$ show
that the strongest bond orders in the 2D BEDT-TTF layers are between
molecules with large and small charge density (see Fig.7 in reference
\cite{Watanabe04a}). These and lower temperature X-ray
results\cite{Watanabe07a} have indicated that the spin-singlets in the
SG phase are located on the inter-dimer bonds, as in the SWSW$^\prime$
bond pattern in 1D.
Yet another 2D material that very clearly shows two transitions and in
which the charge-bond distortion patterns are known is
EtMe$_3$P[Pd(dmit)$_2$]$_2$ \cite{Kato06a,Tamura06a}. The material is
semiconducting already at 300 K (T$_{MI} > 300$ K). The magnetic
susceptibility at high temperatures corresponds to that of a
Heisenberg antiferromagnet on a triangular lattice with $J=250$
K. Below a relatively low T$_{SG}$=25 K the system enters a distorted
phase, with the intermolecular bond distortion pattern clearly of the
dimerized dimer type, and the strongest bonds between charge-rich and
charge-poor sites \cite{Tamura06a}. This material undergoes a
superconducting transition under pressure \cite{Shimizu07a}.
\subsubsection{Coupled transitions: intra-dimer singlet formation}
In cases where T$_{\rm{MI}}$ and T$_{\rm{SG}}$ coincide, the spin gap
transition temperature tends to be quite high. Examples here include
(EDO-TTF)$_2$X, which shows a first order MI transition at high
temperature, 280 K for X=PF$_6$ and 268 K for X=AsF$_6$
\cite{Ota02a}. This transition coincides with T$_{\rm{SG}}$
\cite{Ota02a}. Optical experiments determined that the charge order
pattern in the low temperature phase is $\cdots$1100$\cdots$, with the
strongest bond between molecules with large charge density
\cite{Drozdova04a}. A coupled SG--MI transition occurs at
T$_{\rm{MI}}$=175 K in (BDTFP)$_2$PF$_6$(PhCl)$_{0.5}$
\cite{Ise01a}. While structurally (BDTFP)$_2$PF$_6$(PhCl)$_{0.5}$
appears to be ladder-like \cite{Ise01a,Clay05a}, X-ray
\cite{Uruichi02a} and optical measurements \cite{Uruichi03a} show that
in the low-temperature phase tetramerization takes place along the
stacks with the SMWM bond pattern \cite{Uruichi02a}. The coupled
SG--MI transitions found in these two materials are consistent with
our criterion above.
Beyond quasi-1D materials, in the 2D CTS examples can be found with
coupled transitions, which also typically take place at a relatively
high temperature. In $\alpha$-(BEDT-TTF)$_2$I$_3$, T$_{\rm{SG}}$=135 K
coincides with the MI transition \cite{Rothaemel86a}. Similar to
(EDO-TTF)$_2$X, the SG--MI transition is first order and coincides
with a large structural change \cite{Kakiuchi07b}. In the low
temperature phase, the strongest bond is again between the sites of
largest charge density \cite{Kakiuchi07b}.
Similar transitions are observed in inorganic $\frac{1}{4}$-filled
materials as well. The inorganic spinel CuIr$_2$S$_4$ is one example
in which the Ir-ions form the active sites with $\frac{3}{4}$-filled
electron band ($\frac{1}{4}$-filled hole band) \cite{Khomskii05a}. In
CuIr$_2$S$_4$ a coupled SG--MI transition occurs at 230 K, below which
the criss-cross chains of Ir-ions are charge-ordered as
Ir$^{4+}$-Ir$^{4+}$-Ir$^{3+}$-Ir$^{3+}$, with the strongest bonds
between the spin $\frac{1}{2}$ hole-rich Ir$^{4+}$ ions and weakest
bonds between the spin 0 hole-poor Ir$^{3+}$ ions
\cite{Radaelli02a,Khomskii05a}. A more complex coupled transition
occurs in $\alpha^\prime$-NaV$_2$O$_5$, where a coupled CO-SG
transition occurs at 34 K within an insulating phase. Structurally
$\alpha^\prime$-NaV$_2$O$_5$ consists of rectangular V-ion based
ladders linked by zigzag V-V bonds. Below the transition the V-ions
are charge disproportionated and there occurs a period-4
V$^{4+}$-V$^{4+}$-V$^{5+}$-V$^{5+}$ CO within the zigzag links between
the rectangular ladders. Once again, the strongest bonds are between
the spin $\frac{1}{2}$ electron-rich V$^{4+}$ ion pairs and the
weakest bonds between the spin 0 electron-poor V$^{5+}$ ion pairs
\cite{Mostovoy99a,Edegger06a}, in agreement with our criterion for
coupled CO-SG transition.
\subsection{Possible relationship with superconductivity}
We have recently suggested that superconductivity in
strongly-correlated $\frac{1}{4}$-filled systems is due to a
transition from an insulating PEC state to a paired-electron liquid
\cite{Mazumdar08a,Li10a,Dayal11a}. Within this model, the
spin-singlets of the PEC become mobile with further increase in
frustration. The fundamental theoretical picture is then analogous to
bipolaron theories of superconductivity \cite{Alexandrov94a}, with two
differences: (i) the pairing in our model is driven by
antiferromagnetic correlations (as opposed to very strong
e-p interactions that screen out the short-range Coulomb
repulsion), and (ii) nearly all the carriers are involved in the
pairing. The effective mass of the spin-bonded pairs is an important
parameter, and overly strong binding will reduce pair mobility. This
would suggest that superconductivity is {\it more likely} in materials
with inter-dimer singlets. As noted in the above, such systems tend to
have the dimerized dimer structure. In contrast, in those materials
with intra-dimer singlets that form at high temperature, the stronger
pair binding would lead to a pair mobility too low to achieve
superconductivity--in these cases the ground state would remain an
insulating spin-gapped PEC with charge and bond order. It is
interesting to note that such a correlation was suggested from
empirical observations alone by Mori \cite{Mori99b}.
\section{Acknowledgments}
This work was supported by the US Department of Energy grant
DE-FG02-06ER46315. RTC thanks A. Sandvik for helpful discussions
regarding the MPS-QMC method. RTC thanks the University of Arizona,
the Boston University Condensed Matter Theory Visitor's program, and
the Institute for Solid State Physics of the University of Tokyo for
hospitality while on sabbatical.
\section{Introduction} \label{sec:intro}
Logic optimization approaches can be divided into {\em algorithmic-based methods}, which are based on global transformations, and {\em rule-based methods}, which are based on local transformations~\cite{espr}.
Rule-based methods, also called {\em rewriting}, use a set of rules which are applied when certain patterns are found. A rule transforms a pattern for a local sub-expression, or a sub-circuit, into another equivalent one. Since rules need to be described, and hence the available types of operations/gates must be known, the rule-based approach usually requires that the description of the logic be confined to a limited number of operation/gate types such as AND, OR, XOR, NOT, etc. In addition, the transformations have limited optimization capability since they are local in nature. Examples of rule-based systems include LSS~\cite{lss} and SOCRATES~\cite{socrates}.
Algorithmic methods use global transformations such as decomposition or factorization, and therefore they are much more powerful than rule-based methods. However, general Boolean methods, including don't care optimization, do not scale well for large functions. Algebraic methods are fast and robust, but they are not complete and thus often give lower-quality results. For these reasons, industrial logic synthesis systems normally use algebraic restructuring methods in combination with rule-based methods.
In this paper,
we propose a new rewriting algorithm based on 5-input cuts. In the algorithm, the best circuits are pre-computed for a subset of NPN classes of 5-variable functions. The cut enumeration technique~\cite{Cong99} is used to find 5-input cuts for all nodes, some of which are replaced with a best circuit. The Boolean matcher~\cite{Chai06} is used to map a 5-input function to its canonical form.
The presented approach is expected to complement existing rewriting approaches
which are usually based on 4-input cuts.
Our experimental results show that, by adding the new rewriting algorithm to the ABC synthesis tool~\cite{abc},
we can further reduce the area of heavily optimized large circuits by 5.57\% on average.
The paper is organized as follows.
Section~\ref{sec:bg} describes the main notions and definitions used in the sequel.
Section~\ref{sec:prev} summarises previous work.
Section~\ref{sec:main} presents the proposed approach.
Section~\ref{sec:exp} shows experimental results. Section~\ref{sec:conc} concludes the paper and
discusses open problems.
\section{Background} \label{sec:bg}
A \emph{Boolean network} is a directed acyclic graph, of which the nodes represent logic gates, and the directed edges represent connections of the gates. A network is also referred to as a \emph{circuit}.
A node of the network has zero or more \emph{fanins}, and zero or more \emph{fanouts}. A \emph{fanin} of a node $n$ is a node $n_\text{in}$ such that there exists an edge from $n_\text{in}$ to $n$. Similarly, a \emph{fanout} of a node $n$ is a node $n_\text{out}$ such that there is an edge from $n$ to $n_\text{out}$. The \emph{primary inputs} (PIs) of a network are the zero-fanin nodes of the network. The \emph{primary outputs} of a network are a subset of all nodes. If a network contains flip-flops, the inputs/outputs of the flip-flops are treated as POs/PIs of the network.
An \emph{And-Inverter graph} (AIG) is a network in which each node is either a PI or a 2-input AND gate, and each edge may carry a complementation (inverter) attribute. An AIG is structurally hashed~\cite{Ganai00} to ensure uniqueness of the nodes. The \emph{area} of an AIG is measured by the number of nodes in the network.
A \emph{cut} of a node $n$ is a set $C$ of nodes such that any path from a PI to $n$ must pass through at least one node in $C$. Node $n$ itself forms a \emph{trivial cut}. The nodes in $C$ are called the \emph{leaves} of cut $C$. A cut $C$ is \emph{$K$-feasible} if $|C| \leq K$; additionally, $C$ is called a \emph{$K$-input cut} if $|C| = K$.
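As an illustrative sketch (ours, not the paper's code), $K$-feasible cuts can be enumerated bottom-up in the standard way~\cite{Cong99}: merge one cut from each fanin, keep the $K$-feasible unions, and add the trivial cut.

```python
from itertools import product

def enumerate_cuts(nodes, K=5):
    """Bottom-up K-feasible cut enumeration on a DAG.
    nodes: dict node -> tuple of fanins (empty tuple for PIs),
    listed in topological order. Returns node -> set of cuts,
    each cut a frozenset of leaves; always includes the trivial cut."""
    cuts = {}
    for n, fanins in nodes.items():
        if not fanins:                   # primary input: trivial cut only
            cuts[n] = {frozenset([n])}
            continue
        merged = set()
        # merge one cut from each fanin; keep K-feasible unions
        for combo in product(*(cuts[f] for f in fanins)):
            c = frozenset().union(*combo)
            if len(c) <= K:
                merged.add(c)
        merged.add(frozenset([n]))       # the trivial cut of n
        cuts[n] = merged
    return cuts
```

For example, a node $n_2$ with fanins $n_1$ and $c$, where $n_1$ has fanins $a$ and $b$, has the three cuts $\{n_2\}$, $\{n_1,c\}$ and $\{a,b,c\}$.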
The \emph{level} of a node $n$ is the number of edges of the longest path from any PI to $n$. The \emph{depth} of a network is the largest level among all internal nodes of the network.
Two Boolean functions, $F$ and $G$, are \emph{NPN-equivalent} and belong to the same \emph{NPN equivalence class}, if $F$ can be transformed into $G$ through negation of inputs (N), permutation of inputs (P), and negation of the output (N)~\cite{hurst2}.
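To make the NPN definition concrete, here is a brute-force canonicalizer (our sketch; the paper instead uses the Boolean matcher of~\cite{Chai06}). The canonical form is taken as the minimum truth table over all $2 \cdot 2^n \cdot n!$ transforms, which for $n=5$ is $7680$ transforms per function.

```python
from itertools import permutations

def apply_transform(tt, n, perm, neg_in, neg_out):
    """Apply one NPN transform to a truth table.
    tt: truth table as a 2**n-bit integer (bit m = value on minterm m);
    perm: tuple routing input i to position perm[i];
    neg_in: bitmask of inputs to negate; neg_out: negate the output."""
    out = 0
    for m in range(1 << n):
        src = 0
        for i in range(n):
            bit = (m >> i) & 1
            if (neg_in >> i) & 1:
                bit ^= 1
            src |= bit << perm[i]
        out |= ((tt >> src) & 1) << m
    if neg_out:
        out ^= (1 << (1 << n)) - 1
    return out

def npn_canonical(tt, n):
    """Smallest truth table over all NPN transforms of tt."""
    best = None
    for perm in permutations(range(n)):
        for neg_in in range(1 << n):
            for neg_out in (0, 1):
                t = apply_transform(tt, n, perm, neg_in, neg_out)
                if best is None or t < best:
                    best = t
    return best
```

Two functions are NPN-equivalent exactly when their canonical forms coincide; for instance, 2-input AND, OR, and NAND all canonicalize to the same truth table, while XOR does not.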
\section{Previous Work} \label{sec:prev}
Rewriting of networks was introduced in the early logic synthesis systems. SOCRATES~\cite{socrates} and the IBM system~\cite{lss}\cite{ibm} performed rewriting under a set of rewriting rules to replace a combination of library gates with another combination of gates which had a smaller area or delay. In SOCRATES, these rules were managed in an expert system deciding which ones to apply and when. The rules in SOCRATES were written by human designers, based on personal experience and observation of experimental results.
In the MIS system~\cite{mis}, which later developed into SIS~\cite{sis}, local transformations such as \emph{simplification} were used to locally optimize a multi-level network after global optimization. Two-level minimization methods such as ESPRESSO~\cite{espr} were used to minimize the functions associated with the nodes in the network. Similar methods~\cite{Brayton90} were also included in works of~\cite{bold}\cite{Malik88}\cite{Savoj89}.
A rule-based rewriting method was used to simplify AND-OR-XOR networks in the multi-level synthesis approach presented in~\cite{Sasao95}.
The AIG-based rewriting technique presented in~\cite{Bjesse04} is used as a way to compress circuits before formal verification. Rewriting is performed in two steps. In the first step, which happens only once when the program starts, all two-level AIG subgraphs are pre-computed and stored in a table by their Boolean functions. In the second step, the AIG is traversed in topological order. The two-level AIG subgraphs of each node are found, and the functionally equivalent pre-computed subgraphs are tried as implementations of the node, while logic sharing with existing nodes is considered. The subgraph leading to the smallest number of overall nodes is used as the replacement of the original subgraph.
An improved AIG rewriting technique for pre-mapping optimization is presented in~\cite{rwr}. It uses 4-input cuts instead of two-level subgraphs in rewriting, and preserves the number of logic levels so the area is reduced without increasing delay. Additionally, AIG balancing, which minimizes delay without increasing area, is used together with rewriting, to achieve better results. Iterating these two processes forms a new technology-independent optimization flow, which is implemented in the sequential logic synthesis and verification system, ABC~\cite{abc}. Experiments show that this implementation scales to very large designs and is much faster than SIS~\cite{sis} and MVSIS~\cite{mvsis}, while resulting in circuits with the same or better quality.
\section{AIG Rewriting Using 5-Input Cuts} \label{sec:main}
The presented algorithm can be divided into two parts:
\begin{enumerate}
\item Best circuit generation \label{part:cgen}
\item Cut enumeration and replacement \label{part:enum}
\end{enumerate}
Part \ref{part:cgen} of the algorithm tries to find the optimal circuits for a subset of ``practical'' 5-variable NPN classes, and stores these circuits. Part \ref{part:enum} of the algorithm enumerates all 5-input cuts in the target circuit, and replaces a cut with a suitable best circuit where beneficial.
In the implementation of rewriting using 4-input cuts in~\cite{rwr}, pre-computed tables of canonical forms and the corresponding transformations are kept for all $2^{16}$ 4-input functions~\cite{abc}\cite{rwr}. As we extend rewriting to 5-input cuts, the size of these tables grows to $2^{32}$ entries, i.e.,
too large to use in a program that runs on a regular computer. In our implementation, we use a Boolean matcher~\cite{Chai06} to dynamically calculate the canonical form of a truth table and the corresponding transformation from the original truth table.
\subsection{Best circuit generation}
Similarly to~\cite{rwr}, we pre-compute the candidate circuits for each NPN class so they can be directly used later. There are $616126$ NPN equivalence classes for 5-input functions, among which only $2749$ classes appear in all IWLS 2005 benchmarks~\cite{iwls2005} as 5-feasible cuts. We picked $1185$ of them with more than 20 occurrences, and generated best circuits for representative functions of these classes.
Due to the expanded complexity of the problem, we had to make some trade-offs between the quality of the circuits and the time and memory usage of our algorithm. Our implementation has the following differences compared to~\cite{rwr}:
\begin{itemize}
\item Use of Boolean matcher to calculate canonical form, instead of table look-up.
\item Use of a hash map to store the candidate best circuits, instead of using a full table.
\item When deciding whether to store a node in the node list, a node with the same cost as an existing node is discarded, instead of being stored in the list.
\item Nodes of both the canonical functions and the complements of the canonical functions are used as candidate circuits, while in~\cite{rwr} complemented functions are not used.
\item When the number of nodes reaches an upper limit, a reduction procedure is performed before the generation continues, leaving only the nodes used in the circuit table.
\end{itemize}
We use two structures to store the best circuits: the \emph{forest}, a list of all nodes, and the \emph{table}, storing only pointers to those nodes in the list that represent canonical functions or their complements. In the \emph{forest}, a node can either be an AND node or an XOR node, and the two incoming edges of a node have complementation attributes. The \emph{cost} of a node is the number of AND nodes plus twice the number of XOR nodes that are reachable from this node towards the inputs.
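The cost function can be sketched as follows (our illustration; the node encoding is hypothetical). Each reachable AND node counts once with weight 1, each XOR node with weight 2, and a shared sub-circuit is counted only once.

```python
def node_cost(node):
    """Cost = #AND nodes + 2 * #XOR nodes reachable toward the inputs.
    node encoding (ours): ('AND'|'XOR', fanin0, fanin1),
    ('VAR', name), or ('CONST0',). Shared sub-circuits (the same
    node object reached twice) are counted once."""
    seen = set()
    def visit(n):
        if n[0] in ('VAR', 'CONST0') or id(n) in seen:
            return 0
        seen.add(id(n))
        weight = 1 if n[0] == 'AND' else 2
        return weight + visit(n[1]) + visit(n[2])
    return visit(node)
```

For example, an XOR on top of an AND costs $2 + 1 = 3$, and reusing that AND through a second gate does not pay its cost twice.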
First, the constant zero node and five nodes for single variables are added into the \emph{forest}. The constant node and one of the variable nodes are added to the \emph{table}, since all variable nodes are NPN equivalent. Then, for each pair of nodes in the \emph{forest}, five types of 2-input gates are created, using the pair as inputs:
\begin{itemize}
\item AND gate
\item AND gate with first input complemented
\item AND gate with second input complemented
\item AND gate with both inputs complemented
\item XOR gate
\end{itemize}
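On 5-input truth tables stored as 32-bit masks (our own encoding, a sketch), the five gate types reduce to cheap bitwise operations. Only one XOR variant is needed because $\bar{x} \oplus \bar{y} = x \oplus y$ and $\bar{x} \oplus y = \overline{x \oplus y}$, and complements of canonical functions are kept as candidates anyway.

```python
MASK = (1 << 32) - 1  # a 5-input truth table fits in 32 bits

def gate_tts(t0, t1):
    """Truth tables of the five 2-input gates built on fanins t0, t1."""
    return [
        t0 & t1,                        # AND
        (~t0 & MASK) & t1,              # AND, first input complemented
        t0 & (~t1 & MASK),              # AND, second input complemented
        (~t0 & MASK) & (~t1 & MASK),    # AND, both inputs complemented
        t0 ^ t1,                        # XOR
    ]
```

With the elementary tables of variables 0 and 1 (0xAAAAAAAA and 0xCCCCCCCC), the AND entry evaluates to 0x88888888 and the XOR entry to 0x66666666.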
A newly created node is stored in the \emph{forest} if the following conditions are met, otherwise it is discarded:
\begin{itemize}
\item The cost of the node is lower than any other node with the same functionality.
\item The cost of the node is lower than or equal to any other node with NPN-equivalent functionality.
\end{itemize}
In addition, the pointer to this node is added to the \emph{table} if the following condition is also met:
\begin{itemize}
\item The function of the node is the canonical form representative, or its complement, in the NPN-equivalence class it belongs to.
\end{itemize}
When the number of nodes in the \emph{forest} reaches an upper limit, a node reduction procedure is performed, where only the reachable nodes from the nodes in the \emph{table} are left in the \emph{forest}.
The algorithm stops when the number of uncovered ``practical'' classes is smaller than a threshold value.
Finally, the generated best circuits are stored, so they can be used later when rewriting takes place.
The pseudo-code of the proposed best circuit generation algorithm is shown in Algorithm~\ref{alg:gen}. The \texttt{GenerateBestCircuits} procedure returns a node list $N$ and a table of nodes $C$ recording the candidate best circuits for a subset of NPN classes. It takes three parameters. Parameter $P$ is a set of truth tables of ``practical'' 5-variable functions. This set contains about $1200$ 5-input canonical NPN representatives with 20 or more occurrences in the IWLS 2005 benchmarks. Parameter $u$ is an integer indicating the acceptable number of uncovered practical NPN classes; $n_\text{max}$ is an integer indicating the node-count limit at which a node reduction is triggered. In our implementation, $u$ is set to $60$, and $n_\text{max}$ is set to $10000000$.
The pseudo-code for procedure \texttt{TryNode} is shown in Algorithm~\ref{alg:node}. \texttt{TryNode} creates a node, and determines whether to put it into the node list and the circuit table. Parameter $T \in \{\text{AND}, \text{XOR}\}$ indicates whether the new gate should be an AND gate or an XOR gate. Parameters $n_0$ and $n_1$ are the two fanins of the new gate.
Procedure \texttt{ReduceNodes} reduces the node list by removing the nodes that are not used in any circuit in the circuit table.
Procedure \texttt{Canonicalize} calculates the canonical form of the truth table of a given function.
In the algorithms, variables $N$, $C$ and $M$ are globally accessible. $N$ denotes the list of all nodes. $C$ is a hash map of the candidate circuits; each of its entries is a set of nodes storing the root nodes of the candidate circuits for the NPN class of that entry. $M$ is a temporary hash map storing the current minimum costs of all functions.
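For reference, the NPN canonical form computed by \texttt{Canonicalize} can be defined by brute force as the minimum over all input permutations, input negations and output negation. The following sketch only illustrates this definition; the actual implementation uses the much faster Boolean matcher of~\cite{Chai06}:

```python
from itertools import permutations, product

# Brute-force NPN canonicalization of a 5-variable truth table: minimize
# over all 2 * 2^5 * 5! = 7680 transforms. Illustrative only; a real
# matcher prunes this search heavily.
N = 5
MINTERMS = 1 << N                # 32
FULL = (1 << MINTERMS) - 1       # all-ones truth table

def transform(t, perm, neg, out_neg):
    """Truth table of t with inputs permuted/negated, output optionally negated."""
    r = 0
    for m in range(MINTERMS):
        src = 0
        for i in range(N):
            # variable i of the new function reads variable perm[i] of the
            # old one, complemented when neg[i] is set
            if ((m >> i) & 1) ^ neg[i]:
                src |= 1 << perm[i]
        if (t >> src) & 1:
            r |= 1 << m
    return r ^ (FULL if out_neg else 0)

def canonicalize(t):
    """Smallest truth table in the NPN class of t."""
    return min(transform(t, p, n, o)
               for p in permutations(range(N))
               for n in product((0, 1), repeat=N)
               for o in (0, 1))
```

All functions in the same NPN class map to the same canonical truth table, which is what makes the single \emph{table} entry per class possible.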
\begin{algorithm}[t]
\caption{
\texttt{GenerateBestCircuits}($P$, $u$, $n_\text{max}$):
Generate candidate best circuits for a subset of NPN classes of 5-input Boolean functions.
}
\label{alg:gen}
\begin{algorithmic}[1]
\STATE Add constant zero node to $N$ and $C$
\STATE Add variable nodes to $N$
\STATE Add node of variable 0 to $C$
\FOR{each $i$ from 2 \TO $|N|$}
\FOR{each $j$ from 1 \TO $i - 1$}
\STATE \texttt{TryNode}(AND, $N_i$, $N_j$)
\STATE \texttt{TryNode}(AND, \texttt{Not}($N_i$), $N_j$)
\STATE \texttt{TryNode}(AND, $N_i$, \texttt{Not}($N_j$))
\STATE \texttt{TryNode}(AND, \texttt{Not}($N_i$), \texttt{Not}($N_j$))
\STATE \texttt{TryNode}(XOR, $N_i$, $N_j$)
\IF{num. of uncovered practical NPN classes $\leq u$}
\RETURN
\ENDIF
\IF{$|N| > n_\text{max}$}
\STATE \texttt{ReduceNodes}()
\STATE $i \gets 1$
\STATE break
\ENDIF
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h]
\caption{
\texttt{TryNode}($T$, $n_0$, $n_1$):
Create a node of type $T$ with fanins $n_0$ and $n_1$, and determine whether to put it into $N$ or $C$.
}
\label{alg:node}
\begin{algorithmic}[1]
\STATE $n_\text{new} \gets$ \texttt{CreateNode}($T$, $n_0$, $n_1$)
\STATE $t \gets \texttt{GetTruth}(n_\text{new})$
\IF{$M_t$ does not exist \OR $M_t > \texttt{Cost}(n_\text{new})$}
\STATE $M_t \gets \texttt{Cost}(n_\text{new})$
\ELSE
\RETURN
\ENDIF
\STATE $t_\text{canon} \gets \texttt{Canonicalize}(t)$
\IF{$\exists n \in C_{t_\text{canon}}$ such that $\texttt{Cost}(n) < \texttt{Cost}(n_\text{new})$}
\RETURN
\ENDIF
\STATE add $n_\text{new}$ to the end of list $N$
\IF{$t \neq t_\text{canon}$ \AND $t \neq \texttt{Complement}(t_\text{canon})$}
\RETURN
\ENDIF
\IF{$\exists n \in C_{t_\text{canon}}$ such that $\texttt{Cost}(n) > \texttt{Cost}(n_\text{new})$}
\STATE $C_{t_\text{canon}} \gets \emptyset$
\ENDIF
\IF{$t = t_\text{canon}$}
\STATE $C_{t_\text{canon}} \gets C_{t_\text{canon}} \cup \{n_\text{new}\}$
\ELSE
\STATE $C_{t_\text{canon}} \gets C_{t_\text{canon}} \cup \{\texttt{Not}(n_\text{new})\}$
\ENDIF
\RETURN
\end{algorithmic}
\end{algorithm}
\subsection{Cut enumeration and replacement}
We use a cut enumeration and replacement technique similar to that of~\cite{rwr}. The main difference is that we use a Boolean matcher to calculate the canonical NPN representative, as well as the transformation from the original function to the canonical form, whereas~\cite{rwr} uses a faster table look-up.
The Boolean matcher proposed in~\cite{Chai06} calculates only the canonical form representation. We modified the program so it can simultaneously generate the NPN transformation, which is needed when connecting the replacement graph to the whole circuit.
Nodes are traversed in topological order. For each node, from the PIs to the POs, all of its 5-input cuts are enumerated~\cite{Cong99}. The canonical form truth table and the corresponding NPN transformation of each cut are calculated using the Boolean matcher~\cite{Chai06}. Each cut is then evaluated to determine whether there is a suitable replacement that does not increase the area of the network. Finally, the cut with the greatest gain is replaced by a best circuit. In the presented algorithm, zero-cost replacements are accepted, since they are a useful way of re-arranging the AIG structure to create more opportunities in subsequent rewriting~\cite{Bjesse04}.
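The bottom-up $K$-feasible cut enumeration in the style of~\cite{Cong99} can be sketched as follows (simplified: no cut pruning or dominance checks, and a hypothetical node encoding):

```python
# Sketch of K-feasible cut enumeration on an AIG. Nodes are given in
# topological order as ("pi",) or ("and", fanin0, fanin1), where fanins
# are indices into the list; a cut is a frozenset of leaf indices.
def enumerate_cuts(nodes, K=5):
    """Return, for each node, the set of its K-feasible cuts."""
    cuts = []
    for n in nodes:
        idx = len(cuts)
        cs = {frozenset([idx])}            # the trivial cut {node}
        if n[0] == "and":
            _, a, b = n
            # merge every pair of fanin cuts, keeping K-feasible unions
            for ca in cuts[a]:
                for cb in cuts[b]:
                    c = ca | cb
                    if len(c) <= K:
                        cs.add(c)
        cuts.append(cs)
    return cuts
```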
The pseudo-code of the rewriting procedure is shown in Algorithm~\ref{alg:rwr}. For each node in the network, $N_\text{best}$ denotes the largest number of nodes saved by replacing a cut of the node by a pre-computed candidate circuit; $c_\text{best}$ and $u_\text{best}$ denote the corresponding candidate circuit and the original cut, respectively. These three variables are updated together whenever a better replacement is found.
Procedure $\texttt{ConnectToLeaves}(N, c, u, {Trans})$ connects the fanins of candidate circuit $c$ to the leaves of cut $u$, following the NPN transformation ${Trans}$.
Procedure $\texttt{Reference}(N, c)$ increases the reference count of the nodes belonging to sub-circuit $c$ in network $N$, whereas $\texttt{Dereference}(N, c)$ decreases it. When the reference count of a node drops to zero, the node is no longer part of the network.
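The gain computation via \texttt{Reference}/\texttt{Dereference} can be sketched as a recursive reference-count walk (a simplified illustration, not the ABC implementation):

```python
# Sketch of gain evaluation by reference counting. refs maps node id to
# its fanout reference count; fanins maps node id to its fanin ids;
# leaves is the set of cut leaves, which are never removed.
def deref(refs, fanins, node, leaves):
    """Remove node: return the number of nodes that disappear with it."""
    count = 1
    for f in fanins.get(node, ()):
        if f in leaves:
            continue
        refs[f] -= 1
        if refs[f] == 0:          # last fanout gone: f disappears too
            count += deref(refs, fanins, f, leaves)
    return count

def reref(refs, fanins, node, leaves):
    """Inverse of deref: restore the reference counts, return the same size."""
    count = 1
    for f in fanins.get(node, ()):
        if f in leaves:
            continue
        if refs[f] == 0:          # f was removed by deref: restore it first
            count += reref(refs, fanins, f, leaves)
        refs[f] += 1
    return count
```

Calling \texttt{deref} on a cut root followed by \texttt{reref} leaves the counts unchanged, which is why the gain of every candidate can be probed without committing the replacement.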
\begin{algorithm}[h]
\caption{
\texttt{RewriteNetwork}($N$, $C$):
Rewrite a Boolean network $N$ using candidate circuits stored in hash map $C$.
}
\label{alg:rwr}
\begin{algorithmic}[1]
\FOR{each node $n$ in $N$, in topological order}
\STATE $N_\text{best} \gets -1$
\STATE $c_\text{best} \gets \text{NULL}$
\STATE $u_\text{best} \gets \text{NULL}$
\FOR{each 5-input cut $u$ of $n$}
\STATE $t \gets \texttt{GetTruth}(u)$
\STATE $(t_\text{canon},{Trans}) \gets \texttt{Canonicalize}(t)$
\FOR{each candidate circuit $c$ in $C_{t_\text{canon}}$}
\STATE $\texttt{ConnectToLeaves}(N, c, u, {Trans})$
\STATE $N_\text{saved} \gets \texttt{Dereference}(N, u)$
\STATE $N_\text{added} \gets \texttt{Reference}(N, c)$
\STATE $N_\text{gain} \gets N_\text{saved} - N_\text{added}$
\STATE $\texttt{Dereference}(N, c)$
\STATE $\texttt{Reference}(N, u)$
\IF{$N_\text{gain} \geq 0$ \AND $N_\text{best} < N_\text{gain}$}
\STATE $N_\text{best} \gets N_\text{gain}$
\STATE $c_\text{best} \gets c$
\STATE $u_\text{best} \gets u$
\ENDIF
\ENDFOR
\ENDFOR
\IF{$N_\text{best} = -1$}
\STATE continue
\ENDIF
\STATE $\texttt{Dereference}(N, u_\text{best})$
\STATE $\texttt{Reference}(N, c_\text{best})$
\ENDFOR
\end{algorithmic}
\end{algorithm}
In~\cite{rwr}, the authors proposed an optimization flow composed of \emph{balance}, \emph{rewrite} and \emph{refactor} passes, implemented in the tool ABC~\cite{abc} as the script \emph{resyn2}. Compared to~\cite{rwr}, rewriting with 5-input cuts exploits larger cuts and more replacement options, and thus has the potential to bring the \emph{resyn2} script out of local minima, providing better rewriting opportunities.
\section{Experimental Results} \label{sec:exp}
The presented algorithm is implemented using a structurally hashed AIG as the internal circuit representation and is integrated into the ABC synthesis tool as the command \emph{rewrite5}.
To evaluate its effectiveness, we performed a set of experiments using IWLS 2005 benchmarks~\cite{iwls2005} with more than 5000 AIG nodes after structural hashing. All experiments were carried out on a laptop with Intel Core i7 1.6GHz (2.8GHz maximum frequency) quad-core processor, 6 MB cache, and 4 GB RAM.
First, for each benchmark, we applied a sequence of commands \emph{resyn2; rewrite5; resyn2} in the modified ABC and compared the result to two consecutive runs of \emph{resyn2} without \emph{rewrite5} in between.
The results are summarized in Table~\ref{tab:exp1}. Columns labeled by $A$ give the area in terms of AIG nodes. Columns labeled by $t$ give the runtime. The improvement of area and the increase of runtime are then calculated and shown in the last two columns.
Table~\ref{tab:exp1} shows that the average improvement in area achieved by adding \emph{rewrite5} between two \emph{resyn2} runs is 3.50\%, at the cost of 33.18\% extra runtime. This result indicates that the proposed \emph{rewrite5} method is effective in bringing ABC's \emph{resyn2} optimization script out of local minima, leading to better optimization possibilities.
The second experiment is performed similarly, except that we used a longer optimization flow: \emph{resyn2; rewrite5; resyn2; rewrite5; resyn2}. The result is compared to three consecutive runs of the \emph{resyn2} script.
The result of the second experiment is shown in Table~\ref{tab:exp2}, which has the same structure as Table~\ref{tab:exp1}. The average improvement in area using the new optimization flow is 4.88\%, at the cost of 46.11\% extra runtime. This result shows that the \emph{resyn2} sequence can be further extended by inserting \emph{rewrite5} runs to achieve even better optimization.
Even longer optimization flows were also tested. The comparison of average results is summarized in Table~\ref{tab:sum}. The improvement in area converges after a certain number of \emph{resyn2}--\emph{rewrite5} iterations; the additional improvement is insignificant beyond four runs of \emph{resyn2}.
\begin{table*}[p]
\centering
\footnotesize
\begin{tabular}{cr|rr|rr|rr}
\hline
& & \multicolumn{ 2}{|c}{resyn2;resyn2} & \multicolumn{ 2}{|c|}{resyn2;rewrite5;resyn2} & & \\
\cline{3-6}
benchmark & nodes & $A_1$ & $t_1$, sec & $A_2$ & $t_2$, sec & $(A_1-A_2)/A_1$ & $(t_2-t_1)/t_1$ \\
\hline
ac97\_ctrl & 14244 & 10222 & 0.759 & 10212 & 0.921 & 0.10\% & 21.34\% \\
aes\_core & 21522 & 20153 & 3.125 & 19945 & 4.079 & 1.03\% & 30.53\% \\
b14\_1 & 9471 & 5902 & 1.299 & 4712 & 1.929 & 20.16\% & 48.50\% \\
b15\_1 & 17015 & 10215 & 2.067 & 10012 & 2.204 & 1.99\% & 6.63\% \\
b17\_1 & 51419 & 31447 & 5.364 & 30943 & 6.948 & 1.60\% & 29.53\% \\
b18\_1 & 130418 & 81185 & 18.947 & 78430 & 25.344 & 3.39\% & 33.76\% \\
b19\_1 & 254960 & 153796 & 37.618 & 149269 & 47.708 & 2.94\% & 26.82\% \\
b20\_1 & 21074 & 13635 & 2.666 & 12048 & 3.819 & 11.64\% & 43.25\% \\
b21\_1 & 20538 & 12845 & 2.618 & 10940 & 3.900 & 14.83\% & 48.97\% \\
b22\_1 & 31251 & 19698 & 4.109 & 16986 & 5.870 & 13.77\% & 42.86\% \\
des\_perf & 82650 & 73724 & 15.717 & 73224 & 23.228 & 0.68\% & 47.79\% \\
DMA & 24389 & 22306 & 2.524 & 20269 & 3.129 & 9.13\% & 23.97\% \\
DSP & 44759 & 37976 & 5.635 & 37728 & 7.734 & 0.65\% & 37.25\% \\
ethernet & 86650 & 55925 & 5.790 & 55838 & 7.879 & 0.16\% & 36.08\% \\
leon2 & 788737 & 774919 & 142.645 & 774065 & 187.660 & 0.11\% & 31.56\% \\
mem\_ctrl & 15325 & 8518 & 1.255 & 8449 & 1.511 & 0.81\% & 20.40\% \\
netcard & 803723 & 516124 & 93.952 & 516001 & 122.749 & 0.02\% & 30.65\% \\
pci\_bridge32 & 22790 & 16362 & 1.719 & 16271 & 2.288 & 0.56\% & 33.10\% \\
s35932 & 8371 & 7843 & 0.755 & 7843 & 1.003 & 0.00\% & 32.85\% \\
s38417 & 9062 & 7969 & 0.812 & 7936 & 1.149 & 0.41\% & 41.50\% \\
s38584 & 8477 & 7224 & 0.720 & 7188 & 0.921 & 0.50\% & 27.92\% \\
systemcaes & 12384 & 9614 & 1.705 & 9391 & 2.602 & 2.32\% & 52.61\% \\
tv80 & 9635 & 7084 & 1.169 & 6970 & 1.498 & 1.61\% & 28.14\% \\
usb\_funct & 15826 & 13082 & 1.439 & 12892 & 1.858 & 1.45\% & 29.12\% \\
vga\_lcd & 126696 & 88641 & 10.517 & 88659 & 14.268 & -0.02\% & 35.67\% \\
wb\_conmax & 47853 & 39163 & 4.748 & 38701 & 5.791 & 1.18\% & 21.97\% \\
\hline
Average & & & & & & 3.50\% & 33.18\% \\
\hline
\end{tabular}
\caption{Effectiveness of improving double \emph{resyn2} optimization flow using \emph{rewrite5}, on IWLS 2005 benchmarks.}
\label{tab:exp1}
\end{table*}
\begin{table*}[p]
\centering
\footnotesize
\begin{tabular}{cr|rr|rr|rr}
\hline
& & \multicolumn{ 2}{|c}{resyn2;resyn2;resyn2} & \multicolumn{ 2}{|p{25mm}|}{resyn2;rewrite5;resyn2; rewrite5;resyn2} & & \\
\cline{3-6}
benchmark & nodes & $A_1$ & $t_1$, sec & $A_2$ & $t_2$, sec & $(A_1-A_2)/A_1$ & $(t_2-t_1)/t_1$ \\
\hline
ac97\_ctrl & 14244 & 10202 & 1.084 & 10180 & 1.396 & 0.22\% & 28.78\% \\
aes\_core & 21522 & 20044 & 4.562 & 19554 & 6.646 & 2.44\% & 45.68\% \\
b14\_1 & 9471 & 5652 & 1.702 & 4350 & 2.526 & 23.04\% & 48.41\% \\
b15\_1 & 17015 & 10029 & 2.335 & 9796 & 3.231 & 2.32\% & 38.37\% \\
b17\_1 & 51419 & 30107 & 7.446 & 29248 & 10.530 & 2.85\% & 41.42\% \\
b18\_1 & 130418 & 79204 & 24.658 & 74827 & 38.047 & 5.53\% & 54.30\% \\
b19\_1 & 254960 & 149177 & 49.815 & 143633 & 70.876 & 3.72\% & 42.28\% \\
b20\_1 & 21074 & 13405 & 3.811 & 10732 & 5.878 & 19.94\% & 54.24\% \\
b21\_1 & 20538 & 12240 & 3.603 & 9379 & 5.437 & 23.37\% & 50.90\% \\
b22\_1 & 31251 & 18967 & 5.614 & 15186 & 8.595 & 19.93\% & 53.10\% \\
des\_perf & 82650 & 73248 & 23.235 & 72322 & 36.941 & 1.26\% & 58.99\% \\
DMA & 24389 & 22288 & 3.573 & 20214 & 4.874 & 9.31\% & 36.41\% \\
DSP & 44759 & 37634 & 8.055 & 37273 & 12.465 & 0.96\% & 54.75\% \\
ethernet & 86650 & 55803 & 8.287 & 55794 & 12.067 & 0.02\% & 45.61\% \\
leon2 & 788737 & 774560 & 213.921 & 773399 & 352.054 & 0.15\% & 64.57\% \\
mem\_ctrl & 15325 & 8408 & 1.726 & 8313 & 2.260 & 1.13\% & 30.94\% \\
netcard & 803723 & 515961 & 133.294 & 515771 & 181.877 & 0.04\% & 36.45\% \\
pci\_bridge32 & 22790 & 16313 & 2.385 & 16235 & 3.650 & 0.48\% & 53.04\% \\
s35932 & 8371 & 7843 & 1.034 & 7843 & 1.457 & 0.00\% & 40.91\% \\
s38417 & 9062 & 7947 & 1.158 & 7886 & 1.725 & 0.77\% & 48.96\% \\
s38584 & 8477 & 7217 & 1.021 & 7199 & 1.312 & 0.25\% & 28.50\% \\
systemcaes & 12384 & 9595 & 2.258 & 9248 & 4.043 & 3.62\% & 79.05\% \\
tv80 & 9635 & 7030 & 1.618 & 6879 & 2.308 & 2.15\% & 42.65\% \\
usb\_funct & 15826 & 13041 & 2.037 & 12784 & 2.880 & 1.97\% & 41.38\% \\
vga\_lcd & 126696 & 88621 & 15.258 & 88687 & 22.223 & -0.07\% & 45.65\% \\
wb\_conmax & 47853 & 38676 & 6.759 & 38095 & 9.032 & 1.50\% & 33.63\% \\
\hline
Average & & & & & & 4.88\% & 46.11\% \\
\hline
\end{tabular}
\caption{Effectiveness of improving triple \emph{resyn2} optimization flow using \emph{rewrite5}, on IWLS 2005 benchmarks.}
\label{tab:exp2}
\end{table*}
\begin{table}[h]
\centering
\footnotesize
\begin{tabular}{c|cc}
\hline
& improvement in area & extra runtime \\
\hline
SS $\rightarrow$ SWS & 3.50\% & 33.18\% \\
SSS $\rightarrow$ SWSWS & 4.88\% & 46.11\% \\
SSSS $\rightarrow$ SWSWSWS & 5.39\% & 47.48\% \\
SSSSS $\rightarrow$ SWSWSWSWS & 5.57\% & 51.21\% \\
\hline
\multicolumn{2}{l}{NOTE: S stands for \emph{resyn2}; W stands for \emph{rewrite5}.}
\end{tabular}
\caption{Summary of average results.}
\label{tab:sum}
\end{table}
\section{Conclusion} \label{sec:conc}
In this paper, we present an AIG-based rewriting technique that uses 5-input cuts. The technique extends the approach of AIG rewriting using 4-input cuts presented in~\cite{rwr}. Experimental results show that our algorithm is effective in driving other optimization techniques, such as \emph{resyn2} script in ABC, out of local minima. The proposed rewriting technique might be useful in a new optimization flow combining rewriting of both 4-input and 5-input cuts.
\balance
\bibliographystyle{IEEEtran}